Introduction
If you have managed Oracle E-Business Suite (EBS) in a production environment, you have almost certainly faced the dreaded scenario: users calling the helpdesk to report that pages are not loading, forms are hanging, or the application is completely unresponsive. More often than not, the culprit is a hung OACORE process.
But identifying a hung process in real time is only half the battle. The bigger challenge — and the one most DBAs overlook — is understanding the history. Why did OACORE hang? When did it start? How often has it happened? Was there a pattern? Without answers to these questions, you are destined to fight the same fire again and again.
This blog walks you through a structured approach to historical OACORE hung process analysis in Oracle EBS, complete with diagnostic queries, interpretation tips, and remediation guidance.
What Is OACORE and Why Does It Hang?
OACORE is the core J2EE application server component in Oracle EBS. It runs within an OC4J (Oracle Containers for Java) container in R12.0/12.1 or a WebLogic managed server in R12.2, depending on your EBS version, and serves the framework-based web requests: OA Framework / Self-Service pages, JSPs, and related servlets. (Oracle Forms sessions run in their own dedicated container, but a hung OACORE still makes the web entry points unusable.)
OACORE is essentially the heartbeat of Oracle EBS. When it hangs, the entire application becomes inaccessible to end users.
Common Causes of OACORE Hangs
| Cause | Description |
|---|---|
| Stuck Threads | JVM threads waiting indefinitely for a DB response or locked resource |
| JVM Out of Memory (OOM) | Heap exhaustion causing garbage collection loops |
| Database Lock / Deadlock | Long-running transactions blocking OACORE sessions |
| High CPU / Resource Contention | OS-level CPU or I/O saturation |
| Runaway SQL | A poorly performing query tying up OACORE DB sessions |
| Network Timeout | Loss of connectivity between app tier and DB tier |
| Concurrent Manager Overload | Excessive job submissions choking shared resources |
| Patch or Code Defect | A specific code path in a recently applied patch triggering a loop |
Why Historical Analysis Matters
Real-time monitoring tells you what is happening. Historical analysis tells you why it keeps happening.
A mature Oracle EBS operations team will always look back at past incidents to answer questions like:
- Did OACORE hang at the same time every day? (Batch job collision?)
- Were specific users or modules involved?
- Was there a database wait event common to all hang incidents?
- Did the problem start after a patch was applied?
- How long did each incident last before recovery?
This kind of root cause intelligence is what separates reactive firefighting from proactive EBS management.
Layer-by-Layer Historical Analysis
OACORE hangs leave traces across multiple layers. A thorough investigation should cover all of them.
Layer 1 – OS Process History
While the OS does not retain process history natively, you can correlate timestamps from system logs with hang incidents.
```shell
# Check system messages for OOM killer events (Linux)
grep -i "oom\|killed\|out of memory" /var/log/messages | grep -i java

# Check for core dumps generated by the OACORE JVM (R12.0/12.1 OC4J layout)
ls -lrt $ORACLE_HOME/j2ee/oacore/

# Review system crash/reboot history
last reboot | head -20
```
What to look for: OOM killer log entries, core dump timestamps, or unexpected reboots that align with reported hang incidents.
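To make that correlation easier, the OOM-killer grep above can be turned into a small script that emits one sortable timestamp per kill event for side-by-side comparison with hang tickets. A minimal sketch, assuming the classic syslog timestamp layout (`Mon DD HH:MM:SS`) in `/var/log/messages`; adjust the awk fields if your system uses rsyslog ISO timestamps:

```shell
#!/bin/bash
# Extract OOM-killer events involving java processes and print one
# "Mon DD HH:MM:SS" timestamp per event, deduplicated and sorted.
# Assumes classic syslog line format (timestamp in the first 3 fields).
LOGFILE="${1:-/var/log/messages}"

grep -i "out of memory\|oom-killer" "$LOGFILE" 2>/dev/null \
  | grep -i "java" \
  | awk '{ print $1, $2, $3 }' \
  | sort -u
```

Each output line is a candidate hang window; match it against the helpdesk incident log and the FND process restart history from Layer 3.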
Layer 2 – OACORE Application Log History
The OACORE application log is your first and most direct source of hung process evidence.
```shell
# Navigate to the OACORE managed server log directory (R12.2 example)
cd $EBS_DOMAIN_HOME/servers/oacore_server3/logs

# Search the rotated logs for stuck thread warnings with surrounding context
grep -i "stuck" -B 3 -A 3 oacore_server3.log*
```
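Beyond eyeballing individual matches, a quick per-day histogram of STUCK warnings exposes recurrence patterns at a glance. A minimal sketch, assuming WebLogic-style `####<Mon DD, YYYY ...` line prefixes; adapt the awk field extraction if your logs use a different timestamp layout:

```shell
#!/bin/bash
# Count "STUCK" warnings per day across OACORE server logs.
# Assumes WebLogic-style "####<Mon DD, YYYY ..." timestamp prefixes
# (an assumption; verify against your own log format).
LOG_DIR="${1:-.}"

grep -ih "stuck" "$LOG_DIR"/oacore_server*.log* 2>/dev/null \
  | awk '{ gsub(/[#<,]/, "", $1); gsub(/,/, "", $2); print $1, $2, $3 }' \
  | sort | uniq -c | sort -rn
```

The count column makes clusters obvious: a day with dozens of STUCK warnings is a hang incident worth cross-checking against the AWR timeline in Layer 4.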
Layer 3 – Oracle EBS FND Process History
Oracle EBS maintains its own metadata about OACORE service processes in the FND schema. This is invaluable for timeline reconstruction.
```sql
-- Full history of OACORE process lifecycle events
SELECT
    fcp.OS_PROCESS_ID,
    fcp.CONCURRENT_PROCESS_ID,
    fcq.CONCURRENT_QUEUE_NAME,
    fcqt.USER_CONCURRENT_QUEUE_NAME,
    DECODE(fcp.PROCESS_STATUS_CODE,
           'A', 'Active',
           'C', 'Complete',
           'D', 'Deactivating',
           'K', 'Terminated',
           'S', 'Stopped',
           'T', 'Terminating',
           fcp.PROCESS_STATUS_CODE) AS status_meaning,
    fcp.CREATION_DATE AS process_start_time,
    fcp.LAST_UPDATE_DATE AS last_activity,
    ROUND((fcp.LAST_UPDATE_DATE - fcp.CREATION_DATE) * 24 * 60, 2) AS duration_minutes,
    fcp.LOGFILE_NAME,
    fcp.NODE_NAME
FROM
    FND_CONCURRENT_PROCESSES fcp,
    FND_CONCURRENT_QUEUES fcq,
    FND_CONCURRENT_QUEUES_TL fcqt
WHERE
    fcp.CONCURRENT_QUEUE_ID = fcq.CONCURRENT_QUEUE_ID
    AND fcp.QUEUE_APPLICATION_ID = fcq.APPLICATION_ID
    AND fcq.CONCURRENT_QUEUE_ID = fcqt.CONCURRENT_QUEUE_ID
    AND fcq.APPLICATION_ID = fcqt.APPLICATION_ID
    AND fcqt.LANGUAGE = USERENV('LANG')
    AND UPPER(fcq.CONCURRENT_QUEUE_NAME) LIKE '%OACORE%'
ORDER BY
    fcp.CREATION_DATE DESC;
```
Interpretation tips:
- Frequent `K` (Terminated) or `T` (Terminating) statuses indicate forced kills, a classic sign of repeated hangs.
- Very short `duration_minutes` values followed by a restart suggest the process was killed and restarted automatically by the ICM (Internal Concurrent Manager).
- Look for clusters of restarts at similar times of day; this points to a scheduled job or batch window collision.
Layer 4 – AWR Historical Session Data
Oracle's Automatic Workload Repository (AWR) captures session activity snapshots every hour by default and retains them for eight days. This history is a goldmine for OACORE hang forensics.
License Note: AWR and ASH queries require the Oracle Diagnostics Pack license. Verify your licensing before querying the `DBA_HIST_*` views.
4a. What Was OACORE Doing When It Hung?
```sql
-- Historical snapshot of OACORE session activity and waits
SELECT
    ash.SAMPLE_TIME,
    ash.SESSION_ID,
    ash.SESSION_SERIAL#,
    ash.PROGRAM,
    ash.MODULE,
    ash.ACTION,
    ash.SQL_ID,
    ash.EVENT,
    ash.WAIT_CLASS,
    ash.SESSION_STATE,
    ROUND(ash.TIME_WAITED / 1000000, 2) AS time_waited_secs,
    ash.BLOCKING_SESSION,
    ash.MACHINE
FROM
    DBA_HIST_ACTIVE_SESS_HISTORY ash
WHERE
    (ash.PROGRAM LIKE '%OACORE%' OR ash.MODULE LIKE '%oracle.apps%')
    AND ash.SAMPLE_TIME >= SYSDATE - 7
ORDER BY
    ash.SAMPLE_TIME DESC;
```
4b. Most Common Wait Events Causing OACORE Hangs
```sql
-- Rank wait events that contributed most to OACORE hangs
SELECT
    ash.EVENT,
    ash.WAIT_CLASS,
    ash.MODULE,
    COUNT(*) AS wait_count,
    ROUND(SUM(ash.TIME_WAITED) / 1000000, 2) AS total_wait_secs,
    ROUND(AVG(ash.TIME_WAITED) / 1000000, 2) AS avg_wait_secs,
    ROUND(MAX(ash.TIME_WAITED) / 1000000, 2) AS max_wait_secs,
    MIN(ash.SAMPLE_TIME) AS first_seen,
    MAX(ash.SAMPLE_TIME) AS last_seen
FROM
    DBA_HIST_ACTIVE_SESS_HISTORY ash
WHERE
    (ash.PROGRAM LIKE '%OACORE%' OR ash.MODULE LIKE '%oracle.apps%')
    AND ash.SAMPLE_TIME >= SYSDATE - 30
    AND ash.EVENT IS NOT NULL
GROUP BY
    ash.EVENT, ash.WAIT_CLASS, ash.MODULE
ORDER BY
    wait_count DESC;
```
Common wait events that indicate a hung OACORE:

| Wait Event | What It Means |
|---|---|
| `enq: TX - row lock contention` | OACORE session blocked by a lock |
| `latch: shared pool` | Shared pool pressure, possible parse storm |
| `library cache lock` | DDL or heavy parse activity |
| `db file sequential read` | I/O bottleneck on indexed reads |
| `SQL*Net message from client` | Session idle, possibly stuck client-side |
| `gc buffer busy acquire` | RAC inter-node block contention |
4c. Hour-by-Hour OACORE Activity Timeline
```sql
-- When did OACORE experience the most stress? (Last 7 days)
SELECT
    TRUNC(ash.SAMPLE_TIME, 'HH24') AS sample_hour,
    COUNT(*) AS total_samples,
    COUNT(DISTINCT ash.SESSION_ID) AS distinct_sessions,
    SUM(CASE WHEN ash.BLOCKING_SESSION IS NOT NULL THEN 1 ELSE 0 END) AS blocked_samples,
    SUM(CASE WHEN ash.WAIT_CLASS = 'Concurrency' THEN 1 ELSE 0 END) AS concurrency_waits,
    SUM(CASE WHEN ash.SESSION_STATE = 'ON CPU' THEN 1 ELSE 0 END) AS on_cpu_samples
FROM
    DBA_HIST_ACTIVE_SESS_HISTORY ash
WHERE
    (ash.PROGRAM LIKE '%OACORE%' OR ash.MODULE LIKE '%oracle.apps%')
    AND ash.SAMPLE_TIME >= SYSDATE - 7
GROUP BY
    TRUNC(ash.SAMPLE_TIME, 'HH24')
ORDER BY
    sample_hour DESC;
```
This timeline query is extremely powerful. If you see blocked_samples spiking at 2:00 AM every night, you immediately know a batch job is locking rows that OACORE sessions need — a classic pattern in Oracle EBS environments running nightly GL, AP, or INV jobs.
4d. Historical Blocking Chain Analysis
```sql
-- OACORE sessions that were blocked: who was blocking them?
SELECT
    ash.SAMPLE_TIME,
    ash.SESSION_ID AS oacore_session,
    ash.EVENT AS oacore_wait_event,
    ash.BLOCKING_SESSION AS blocking_sid,
    ash.BLOCKING_SESSION_SERIAL# AS blocking_serial,
    b.PROGRAM AS blocker_program,
    b.MODULE AS blocker_module,
    b.SQL_ID AS blocker_sql_id,
    ROUND(ash.TIME_WAITED / 1000000, 2) AS wait_secs
FROM
    DBA_HIST_ACTIVE_SESS_HISTORY ash,
    DBA_HIST_ACTIVE_SESS_HISTORY b
WHERE
    (ash.PROGRAM LIKE '%OACORE%' OR ash.MODULE LIKE '%oracle.apps%')
    AND ash.BLOCKING_SESSION IS NOT NULL
    AND b.SESSION_ID = ash.BLOCKING_SESSION
    AND b.SESSION_SERIAL# = ash.BLOCKING_SESSION_SERIAL#
    AND b.SNAP_ID = ash.SNAP_ID
    AND b.SAMPLE_ID = ash.SAMPLE_ID   -- join on the ASH sample, more reliable than timestamp equality
    AND ash.SAMPLE_TIME >= SYSDATE - 30
ORDER BY
    ash.SAMPLE_TIME DESC;
```
Layer 5 – FND Application Log History
Oracle EBS writes detailed application events to FND_LOG_MESSAGES. This table stores error codes, module paths, and thread-level information that complements AWR data.
```sql
-- Historical application errors related to OACORE
SELECT
    flm.TIMESTAMP,
    flm.MODULE,
    DECODE(flm.LOG_LEVEL,
           6, 'UNEXPECTED',
           5, 'ERROR',
           4, 'EXCEPTION',
           3, 'EVENT',
           flm.LOG_LEVEL) AS severity,
    flm.MESSAGE_TEXT,
    flm.PROCESS_ID,
    flm.THREAD_ID
FROM
    FND_LOG_MESSAGES flm
WHERE
    UPPER(flm.MODULE) LIKE '%OACORE%'
    AND flm.LOG_LEVEL >= 4   -- EXCEPTION and above
    AND flm.TIMESTAMP >= SYSDATE - 7
ORDER BY
    flm.TIMESTAMP DESC
FETCH FIRST 300 ROWS ONLY;
```

Note: FND log levels ascend with severity (1 = STATEMENT up to 6 = UNEXPECTED), so the errors you care about sit at level 4 and above.
Putting It All Together — Root Cause Patterns
After running the above queries, map your findings to these common root cause patterns:
Pattern 1: Nightly Batch Collision
- Symptom: OACORE hangs appear in the AWR timeline between 1 AM – 4 AM.
- Evidence: `enq: TX - row lock contention` wait events; the blocker program shows concurrent manager jobs.
- Fix: Stagger batch job schedules; use `NOWAIT` or advisory locks; archive completed data before the job run.
Pattern 2: JVM Memory Exhaustion
- Symptom: OACORE bounces frequently; the log shows `OutOfMemoryError`.
- Evidence: OS logs show OOM killer activity; JVM heap dumps if configured.
- Fix: Increase the `-Xmx` JVM heap parameter in the OACORE configuration; tune garbage collection; check for memory leaks in custom code.
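On R12, the OACORE heap settings live in the applications context file, so checking the current values is a one-liner. A hedged sketch: `s_oacore_jvm_start_options` is the usual R12 context variable name, but that is an assumption here, so confirm it against your own `$CONTEXT_FILE`:

```shell
#!/bin/bash
# Show the current OACORE JVM heap settings (-Xms/-Xmx) from the EBS
# applications context file. "s_oacore_jvm_start_options" is the usual
# R12 context variable name (an assumption; verify in your context file).
CTX="${1:-$CONTEXT_FILE}"

grep -o 's_oacore_jvm_start_options[^<]*' "$CTX" 2>/dev/null \
  | grep -o -- '-Xm[sx][0-9]*[MmGg]'
```

After raising the heap, apply the change through the documented procedure for your release (typically editing the context file and running AutoConfig, or the WebLogic console on R12.2) rather than hand-editing generated configuration.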
Pattern 3: Runaway SQL / Parse Storm
- Symptom: Gradual slowdown before the hang; `latch: shared pool` or `library cache lock` waits dominate.
- Evidence: AWR Top SQL shows one SQL_ID with disproportionate parse calls.
- Fix: Use bind variables; tune the problematic SQL; increase `shared_pool_size`.
Pattern 4: Network / Firewall Timeout
- Symptom: OACORE hangs intermittently with no clear DB wait event.
- Evidence: Sessions stuck on `SQL*Net message from client` for unusually long periods.
- Fix: Configure TCP keepalive; adjust firewall idle session timeout settings; use Oracle Connection Manager (CMAN).
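The keepalive fix spans two layers: Dead Connection Detection on the database side (`SQLNET.EXPIRE_TIME`) and the TCP stack on the app tier. A hedged config sketch; the sysctl values shown are illustrative starting points, not tuned recommendations, and should be validated before persisting:

```
# DB tier: enable Dead Connection Detection in $TNS_ADMIN/sqlnet.ora
# (probe idle connections every 10 minutes)
#   SQLNET.EXPIRE_TIME = 10

# App tier (Linux): start TCP keepalive probes before the firewall's idle
# timeout expires. Values are illustrative; persist them in /etc/sysctl.conf
# once validated.
sysctl -w net.ipv4.tcp_keepalive_time=600     # first probe after 10 min idle
sysctl -w net.ipv4.tcp_keepalive_intvl=60     # then every 60 seconds
sysctl -w net.ipv4.tcp_keepalive_probes=5     # drop after 5 failed probes
```

The goal is simply that a keepalive probe crosses the firewall before its idle timer fires, so established OACORE-to-DB connections are never silently dropped.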
Pattern 5: Post-Patch Regression
- Symptom: Hangs started immediately after a patching window.
- Evidence: FND process restart history shows the first hang aligns with the patch date.
- Fix: Review patch readme for known issues; roll back patch if critical; raise SR with Oracle Support.
Extending AWR Retention for Better History
By default AWR retains only 8 days of snapshot history. For recurring or periodic OACORE hang issues, extend retention to 30+ days:
```sql
-- Extend AWR retention to 30 days, snapshot every 30 minutes
BEGIN
    DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
        retention => 43200,   -- 30 days, in minutes
        interval  => 30       -- snapshot every 30 minutes
    );
END;
/

-- Verify settings
SELECT SNAP_INTERVAL, RETENTION
FROM DBA_HIST_WR_CONTROL;
```
Proactive Monitoring — Prevent the Next Hang
Historical analysis is reactive. Pair it with proactive measures:
- Set up OEM (Oracle Enterprise Manager) alerts for OACORE stuck threads and JVM heap usage.
- Configure WebLogic/OC4J stuck thread detection so threads blocked longer than a threshold (e.g., 600 seconds) are flagged as STUCK and raise an alert.
- Create a custom AWR report job that emails OACORE wait event summaries every morning.
- Enable EBS Service Monitoring via Oracle Applications Manager (OAM) — it tracks JVM status and can auto-restart OACORE.
- Implement a cron job to parse application.log for "stuck thread" warnings and alert the DBA team instantly.
```shell
#!/bin/bash
# Simple cron-based stuck thread monitor
LOG_PATTERN="${EBS_DOMAIN_HOME}/servers/oacore_server*/logs/*.log"
MAIL_TO="dba-team@yourcompany.com"
HOSTNAME=$(hostname)
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')

# Expand the glob and check whether any log files exist
LOG_FILES=$(ls ${LOG_PATTERN} 2>/dev/null)
if [ -z "$LOG_FILES" ]; then
    echo "[$TIMESTAMP] WARNING: No OACORE log files found matching pattern: $LOG_PATTERN"
    exit 1
fi

# Count stuck/hung warnings across all matched log files
COUNT=$(grep -ih "stuck thread\|hung" $LOG_FILES 2>/dev/null | wc -l)

if [ "$COUNT" -gt 0 ]; then
    BODY="ALERT: $COUNT stuck thread/hung warning(s) detected in OACORE logs.\n\nHost     : $HOSTNAME\nTimestamp: $TIMESTAMP\nLog Path : $LOG_PATTERN\n\nMatching lines:\n$(grep -ih 'stuck thread\|hung' $LOG_FILES 2>/dev/null | tail -20)"
    echo -e "$BODY" | mail -s "OACORE Hung Thread Alert - $HOSTNAME" "$MAIL_TO"
    echo "[$TIMESTAMP] Alert sent: $COUNT stuck thread warning(s) found."
else
    echo "[$TIMESTAMP] OK: No stuck thread warnings found."
fi
```
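To wire the monitor into cron, a five-minute schedule is typical. A sketch of the crontab entry, assuming the script is saved as `/home/applmgr/scripts/oacore_stuck_monitor.sh` and sources an environment file that exports `EBS_DOMAIN_HOME` (both paths are hypothetical placeholders):

```
# Run the stuck-thread monitor every 5 minutes; keep output for audit.
# Script and log paths are illustrative placeholders.
*/5 * * * * . /home/applmgr/.env && /home/applmgr/scripts/oacore_stuck_monitor.sh >> /home/applmgr/logs/oacore_monitor.log 2>&1
```

Sourcing the environment file matters: cron runs with a minimal environment, so `EBS_DOMAIN_HOME` and the `mail` PATH must be set explicitly or the monitor will silently find no logs.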
Summary
Historical OACORE hung process analysis is a critical skill for any Oracle EBS DBA or support engineer. By systematically investigating across OS logs, OACORE application logs, FND metadata tables, and Oracle AWR, you can build a complete picture of when, how, and why OACORE hangs — and prevent the next outage before it happens.