All custom_top entries are under the directory defined in the default .env file:
$INST_TOP/ora/10.1.2/forms/server
======================================
SOLUTION:
2. Go to $INST_TOP/ora/10.1.2/forms/server directory
[applmgr@EBSTEST]$ cd $INST_TOP/ora/10.1.2/forms/server
======================================
When used_urec and used_ublk reach zero, the rollback has completed.
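As a back-of-the-envelope sketch (Python, not from the original notes; the sample counts and interval are hypothetical), two samples of v$transaction.used_urec taken a known interval apart can be extrapolated to estimate when the rollback will finish:

```python
def rollback_eta_seconds(urec_t0, urec_t1, interval_s):
    """Estimate seconds until used_urec reaches zero, given two samples
    of v$transaction.used_urec taken interval_s seconds apart."""
    freed = urec_t0 - urec_t1          # undo records applied during the interval
    if freed <= 0:
        return None                    # no progress observed; cannot extrapolate
    rate = freed / interval_s          # records rolled back per second
    return urec_t1 / rate              # seconds left at the current rate

# Hypothetical samples: 1,200,000 records, then 900,000 ten minutes later.
eta = rollback_eta_seconds(1_200_000, 900_000, 600)
print(round(eta))  # 1800 seconds, i.e. about 30 more minutes
```

The rate is rarely constant in practice, so treat the estimate as an order of magnitude, not a promise.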
Concurrent managers not starting up after cmclean.sql:
Recheck the server_id value in fnd_nodes. Running AutoConfig should fix the value and match it with the one in the dbc file.
1) Check the profile option "Concurrent: GSM Enabled"; if it is set to "Yes", change it to "No", restart the concurrent manager, and check.
2) SQL> select object_name from dba_objects where status = 'INVALID' and object_name like 'FND_CONC%';
If it returns something then use adadmin to recompile the invalid objects. Restart the CM and check.
SQL> commit;
SECOND SCENARIO:
======================================
UNDER WHICH MANAGER REQUEST WAS RUN
=======================================
SELECT
b.user_concurrent_queue_name
FROM
fnd_concurrent_processes a
,fnd_concurrent_queues_vl b
,fnd_concurrent_requests c
WHERE 1=1
AND a.concurrent_queue_id = b.concurrent_queue_id
AND a.concurrent_process_id = c.controlling_manager
AND c.request_id = &request_id
Oracle supplies several useful scripts (located in the $FND_TOP/sql directory) for monitoring the concurrent managers:
afcmstat.sql Displays all the defined managers, their maximum capacity, PIDs, and their status.
afimchk.sql Displays the status of the ICM and the PMON method in effect, the ICM's log file, and determines whether the concurrent manager monitor is running.
afcmcreq.sql Displays the concurrent manager and the name of its log file that processed a request.
afrqwait.sql Displays the requests that are pending, held, and scheduled.
afrqstat.sql Displays a summary of concurrent request execution time and status since a particular date.
afqpmrid.sql Displays the operating system process ID of the FNDLIBR process based on a concurrent request ID. The process ID can then be used with the ORADEBUG utility.
afimlock.sql Displays the process ID, terminal, and process ID that may be causing locks that the ICM and CRM are waiting to get. You should run this script if there are long delays when submitting jobs, or if you suspect the ICM is in a gridlock with another Oracle process.
======================================
Solution:
EXEC FND_CONC_CLONE.SETUP_CLEAN;
COMMIT;
EXIT;
Run AutoConfig on all tiers, first on the DB tier and then on the APPS tiers and web tier, to repopulate the required system tables.
Run the CMCLEAN.SQL script from the referenced note below (don't forget to commit).
Note 134007.1 - 'CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables'
Start the middle tier services including your concurrent manager.
Retest the issue.
Posted October 17, 2013 by balaoracledba.com in 11i/R12, Concurrent Manager, Issues, OracleAppsR12
• Run $FND_TOP/patch/115/sql/afdcm037.sql
• Go to $FND_TOP/bin
R12 OPP (Output Post Processor) and Workflow Mailer are down
When I check the status of the OPP manager and the Workflow Mailer from the Concurrent > Manager > Administer screen, I see the status below.
Solution :
• Ensure the Concurrent:GSM Enabled profile is set to 'Y'.
• The FNDSM entry should be correct in the tnsnames.ora file, and tnsping FNDSM_hostname should work fine.
• Then bounce the services.
Cause: cleanup_node failed due to ORA-01427: single-row subquery returns more than one row
Check that your system has enough resources to start a concurrent manager process. Contact your syst : 08-OCT-2013
00:30:51
Could not initialize the Service Manager FNDSM_apps01_dev. Verify that apps01 has been registered for concurrent processing.
ORACLE error 1427 in cleanup_node
Cause: cleanup_node failed due to ORA-01427: single-row subquery returns more than one row
ORA-06512: at "APPS.FND_CP_FNDSM", line 29
ORA-06512: at line 1
The SQL statement being executed at the time of
Routine AFPEIM encountered an error while starting concurrent manager IEU_WL_CS with library
/dev/applmgr/R12/apps/apps_st/appl/fnd/12.0.0/bin/FNDLIBR.
Solution
———-
sqlplus apps/apps
sql>exec fnd_conc_clone.setup_clean;
commit;
sql>@cmclean.sql
Started the concurrent manager on the application tier and it worked
Check the Concurrent > Manager > Define form. If the Service Manager is not present/defined for a particular node, then all the services provided by the Service Manager, such as OPP and Workflow, stop working.
2. Log in as applmgr
cd to $FND_TOP/patch/115/sql
Run the script: afdcm037.sql
3. Relink FNDSM and FNDLIBR executables as mentioned below:
Output Post Processor is Down with Actual Process is 0 And Target Process is 1
If you see the OPP is down with Actual Processes = 0 and Target Processes = 1, do the following:
1. Shut down the concurrent server via adcmctl.sh under $COMMON_TOP/admin/scripts/<context_name>.
2. To ensure the concurrent manager is down, check that there is no FNDLIBR process running:
ps -ef | grep applmgr | grep FNDLIBR
3. Run adadmin to relink FNDSVC executable.
The ATG / FND supplied data purge requests are the following:
- Purge Concurrent Request and/or Manager Data [FNDCPPUR]
- Purge Obsolete Workflow Runtime Data [FNDWFPR]
- Purge Signon Audit data [FNDSCPRG.sql]
- Purge Obsolete Generic File Manager Data [FNDGFMPR]
- Purge Debug Log and System Alerts [FNDLGPRG]
- Purge Rule Executions [FNDDWPUR]
- Purge Concurrent Processing Setup Data for Cloning [FNDCPCLN]
Metalink Note 732713.1 describes the purging strategy for E-Business Suite 11i:
There is no single archive/purge routine that is called by all modules within E-Business Suite; instead, each module has module-specific archive/purge procedures.
Concurrent Jobs to purge data
Oracle Applications System Administrator’s Guide - Maintenance Release 11i (Part No. B13924-04)
Note 132254.1 Speeding up and Purging Workflow
Note 277124.1 FAQ on Purging Oracle Workflow Data
Note 337923.1 A closer examination of the Concurrent Program Purge Obsolete Workflow Runtime Data
Note 332103.1 Purge Debug Log And System Alerts Performance Issues
Note 1016344.102 What Tables Does the Purge Signon Audit Data Concurrent Program Affect?
Note 388088.1 How To Clear The Unsuccessful Logins
Note 565942.1 Which Table Column And Timing Period Does The FNDCPPUR Purge Program Use
Note 104282.1 Concurrent Processing Tables and Purge Concurrent Request and/or Manager Data Program (FNDCPPUR)
Note 92333.1 How to Optimize the Process of Running Purge Concurrent Request and/or Manager Data (FNDCPPUR)
Oracle Applications System Administrator’s Guide - Configuration Release 11i (Part No. B13925-06)
Note 423177.1 Date Parameters For "Purge Fnd_stats History Records" Do Not Auto-Increment
Note 298698.1 Avoiding abnormal growth of FND_LOBS table in Application
Note 555463.1 How to Purge Generic or Purchasing Attachments from the FND_LOBS Table
Note 397118.1 Where Is 'Delete Data From Temporary Table' Concurrent Program - ICXDLTMP.SQL
Note 553711.1 Purge Obsolete Ecx Data Error ORA-06533: Subscript Beyond Count
Note 338523.1 Cannot Find ''Purge Obsolete Ecx Data'' Concurrent Request
Note 444524.1 About Oracle Applications Technology ATG_PF.H Rollup 6
Additional Notes
You can monitor and run purging programs through OAM by navigating to Site Map > Maintenance > Purge.
This note also references the white paper in Note 752322.1, "Reducing Your Oracle E-Business Suite Data Footprint using Archiving, Purging, and Information Lifecycle Management".
======================================
ORA-01102: cannot mount database in EXCLUSIVE mode
Check for oracle SID related process already running
Cause: An instance tried to mount the database in exclusive mode, but some other instance has already mounted the database in
exclusive or parallel mode.
Action: Either mount the database in parallel mode or shut down all other instances before mounting the database in exclusive
mode.
======================================
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
RMAN-00571: ===========================================================
RMAN-00571: ===========================================================
RMAN-06025: no backup of archived log for thread 1 with sequence 41765 and starting SCN of
9738413586917 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41764 and starting SCN of
9738413585738 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41763 and starting SCN of
9738413584155 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41762 and starting SCN of
9738413582950 found to restore
...
RMAN-06025: no backup of archived log for thread 1 with sequence 41734 and starting SCN of
9738413520883 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41733 and starting SCN of
9738413519245 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41732 and starting SCN of
9738413518015 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41731 and starting SCN of
9738413516741 found to restore
RMAN-00571: ===========================================================
RMAN-00571: ===========================================================
===================
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
...
RMAN-00571: ===========================================================
RMAN-00571: ===========================================================
RMAN-06556: datafile 1 must be restored from backup older than SCN 9738413516668
database opened
How does this happen? What causes the "datafile 1 must be restored from backup" error?
I found an excellent explanation here. According to that article, RMAN won't back up the archived logs generated after the RMAN backup script has started running.
We switch logs every 10 minutes, so it is very likely that new archived logs are generated during this period.
What happens when executing adpreclone.pl on the DB and Apps tiers?
adpreclone.pl is the preparation phase: it collects information about the source system, creates a cloning stage area, and generates templates and drivers. All of these are used to reconfigure the instance on a target machine.
Preclone will do the following:
Create templates
Any files under the $ORACLE_HOME that contain system-specific information are replicated and converted into templates.
These templates are placed in the $ORACLE_HOME/appsutil/template directory.
Create driver(s)
A driver file relating to these new templates, called instconf.drv, is created. It contains a list of all the templates and their locations, and the destination configuration files that these templates will create.
The driver file is placed in the $ORACLE_HOME/appsutil/driver directory.
jlib contains all the Rapid Clone java code, jdbc libraries etc
context contains templates used for a Target XML file
data (Database Tier only) contains the driver file, and templates used to generate the control file SQL script
adcrdb.zip contains the template and list of datafiles on the Source
addbhomsrc.xml contains information on the datafile mount points of the Source
appl (Applications Tier only) this is used when merging APPL_TOPs, i.e. multi-node to single-node cloning
RDBMS $ORACLE_HOME/appsutil/java/oracle
RDBMS $ORACLE_HOME/appsutil/clone/jlib/java/oracle
$COMMON_TOP/clone/jlib/java/oracle
It depends on two factors:
i. the OS, and ii. the database block size (DB_BLOCK_SIZE) parameter.
On a 32-bit OS, you can create datafiles of up to 2 GB to 4 GB.
The following is the impact of the DB_BLOCK_SIZE parameter on the datafile size limit:
For a smallfile tablespace, a single datafile can hold up to 2^22 (about 4 million) blocks, which means:
with DB_BLOCK_SIZE=4K, the max file size = 4K * 4M blocks = 16 GB
with DB_BLOCK_SIZE=8K, the max file size = 8K * 4M blocks = 32 GB
with DB_BLOCK_SIZE=16K, the max file size = 16K * 4M blocks = 64 GB, and so on.
For a bigfile tablespace (a 10g feature), a single datafile can hold up to 2^32 (about 4 billion) blocks, which means:
with DB_BLOCK_SIZE=4K, the max file size = 4K * 4G blocks = 16 TB
with DB_BLOCK_SIZE=8K, the max file size = 8K * 4G blocks = 32 TB
with DB_BLOCK_SIZE=16K, the max file size = 16K * 4G blocks = 64 TB, and so on.
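The arithmetic above can be checked with a short sketch (Python, added here for illustration; the 2^22 and 2^32 block limits are the figures quoted in the text):

```python
def max_datafile_bytes(block_size, bigfile=False):
    """Maximum datafile size: a smallfile datafile addresses 2**22 blocks,
    a bigfile datafile addresses 2**32 blocks."""
    max_blocks = 2**32 if bigfile else 2**22
    return block_size * max_blocks

GB, TB = 2**30, 2**40
print(max_datafile_bytes(4096) // GB)                # 16  (smallfile, 4K blocks)
print(max_datafile_bytes(8192) // GB)                # 32  (smallfile, 8K blocks)
print(max_datafile_bytes(8192, bigfile=True) // TB)  # 32  (bigfile, 8K blocks)
```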
Other limits can be found in the following Oracle document:
The following configuration and environment files are also used by most AD utilities, but are not created by
AutoConfig.
IF THE STATUS IS
"MAINT" = maintenance mode has been enabled and users will not be able to log in.
"NORMAL" = maintenance mode has been deactivated and users will be able to log in.
Processes
Oracle uses many small (focused) processes to manage and control the Oracle instance. This allows for optimal execution on multi-processor systems using multi-core and multi-threaded technology. Some of these processes include:
Solution:
3. Check permissions on those directories for the current user who is trying to start the listener.
mkdir /var/tmp/.oracle
mkdir /tmp/.oracle
But unfortunately, the most votes went to an incorrect option. The correct answer is the server process.
Many DBAs don't know that we can perform complete recovery when the controlfile is lost (I even had a good argument with a friend about this on my blog):
http://pavandba.wordpress.com/2010/03/18/how-to-do-complete-recovery-if-controlfiles-are-lost/
By reading the above post, you might have noticed that we create a new controlfile. In such cases, to open the database we require the latest SCN to be in the controlfile so that it matches the datafiles and redo log files.
If it doesn't match, the database will fail to open. So the server process takes the responsibility of updating the controlfile with the latest SCN, and this
Introduction
Oracle maintains its own buffer cache inside the system global area (SGA) for each instance. A properly
sized buffer cache can usually yield a cache hit ratio over 90%, meaning that nine requests out of ten are
satisfied without going to disk.
If a buffer cache is too small, the cache hit ratio will be small and more physical disk I/O will result. If a
buffer cache is too big, then parts of the buffer cache will be under-utilized and memory resources will be
wasted.
Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall
buffer cache hit ratio for the entire instance since it was started:
SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sysstat P1, v$sysstat P2, v$sysstat P3
WHERE P1.name = 'db block gets'
AND P2.name = 'consistent gets'
AND P3.name = 'physical reads'
You can also see the buffer cache hit ratio for one specific session since that session started:
SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
v$sesstat P3, v$statname N3
WHERE N1.name = 'db block gets'
AND P1.statistic# = N1.statistic#
AND P1.sid = <enter SID of session here>
AND N2.name = 'consistent gets'
AND P2.statistic# = N2.statistic#
AND P2.sid = P1.sid
AND N3.name = 'physical reads'
AND P3.statistic# = N3.statistic#
AND P3.sid = P1.sid
You can also measure the buffer cache hit ratio between time X and time Y by collecting statistics at times
X and Y and computing the deltas.
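A minimal sketch of the hit-ratio arithmetic, including the delta method between two snapshots (Python, added for illustration; the counter values are made up):

```python
def hit_ratio(db_block_gets, consistent_gets, physical_reads):
    """Buffer cache hit ratio: fraction of logical reads satisfied
    without a physical disk read."""
    logical = db_block_gets + consistent_gets
    return (logical - physical_reads) / logical

def interval_hit_ratio(x, y):
    """Hit ratio between two snapshots x and y, each a dict of the three
    v$sysstat counters, computed from the deltas."""
    return hit_ratio(*(y[k] - x[k] for k in
                       ('db block gets', 'consistent gets', 'physical reads')))

x = {'db block gets': 1000, 'consistent gets': 9000,  'physical reads': 2000}
y = {'db block gets': 3000, 'consistent gets': 27000, 'physical reads': 4000}
print(hit_ratio(10_000, 90_000, 8_000))  # 0.92 since the instance started
print(interval_hit_ratio(x, y))          # 0.9 between the two snapshots
```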
The db_block_buffers parameter in the parameter file determines the size of the buffer cache for the
instance. The size of the buffer cache (in bytes) is equal to the value of the db_block_buffers parameter
multiplied by the data block size.
You can change the size of the buffer cache by editing the db_block_buffers parameter in the parameter file
and restarting the instance.
If you set the db_block_lru_extended_statistics parameter to a positive number in the parameter file for an
instance and restart the instance, Oracle will populate a dynamic performance view called v$recent_bucket.
This view will contain the same number of rows as the setting of the db_block_lru_extended_statistics
parameter. Each row will indicate how many additional buffer cache hits there might have been if the buffer
cache were that much bigger.
For example, if you set db_block_lru_extended_statistics to 1000 and restart the instance, you can see how
the buffer cache hit ratio would have improved if the buffer cache were one buffer bigger, two buffers
bigger, and so on up to 1000 buffers bigger than its current size. Following is a query you can use, along
with a sample result:
SELECT 250 * TRUNC (rownum / 250) + 1 || ' to ' ||
250 * (TRUNC (rownum / 250) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$recent_bucket
GROUP BY TRUNC (rownum / 250)
This result set shows that enlarging the buffer cache by 250 buffers would have resulted in 16,083 more
hits. If there were about 30,000 hits in the buffer cache at the time this query was performed, then it would
appear that adding 500 buffers to the buffer cache might be worthwhile. Adding more than 500 buffers
might lead to under-utilized buffers and therefore wasted memory.
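As a hedged illustration of how one might decide when growth stops paying off (Python; the 5% threshold and the later interval figures are assumptions, only the 16,083 and 30,000 numbers come from the text):

```python
def worthwhile_growth(extra_hits_per_interval, current_hits, threshold=0.05):
    """Given the extra hits for each successive 250-buffer growth interval
    (as summarised from v$recent_bucket), return how many intervals each
    add at least `threshold` of the current hit count."""
    n = 0
    for extra in extra_hits_per_interval:
        if extra / current_hits < threshold:
            break
        n += 1
    return n

# Hypothetical per-interval figures; the first one matches the text.
extra = [16083, 11500, 900, 120]
print(worthwhile_growth(extra, 30000))  # 2 -> growing by ~500 buffers pays off
```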
There is overhead involved in collecting extended LRU statistics. Therefore, you should set the db_block_lru_extended_statistics parameter back to zero as soon as your analysis is complete.
In Oracle7, the v$recent_bucket view was named X$KCBRBH. Only the SYS user can query X$KCBRBH.
Also note that in X$KCBRBH the columns are called indx and count, instead of rownum and count.
If you set the db_block_lru_statistics parameter to true in the parameter file for an instance and restart the
instance, Oracle will populate a dynamic performance view called v$current_bucket. This view will contain
one row for each buffer in the buffer cache, and each row will indicate how many of the overall cache hits
have been attributable to that particular buffer.
By querying v$current_bucket with a GROUP BY clause, you can get an idea of how well the buffer cache
would perform if it were smaller. Following is a query you can use, along with a sample result:
SELECT 1000 * TRUNC (rownum / 1000) + 1 || ' to ' ||
1000 * (TRUNC (rownum / 1000) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$current_bucket
WHERE rownum > 0
GROUP BY TRUNC (rownum / 1000)
This result set shows that the first 3000 buffers are responsible for over 98% of the hits in the buffer cache.
This suggests that the buffer cache would be almost as effective if it were half the size; memory is being
wasted on an oversized buffer cache.
There is overhead involved in collecting LRU statistics. Therefore you should set the
db_block_lru_statistics parameter back to false as soon as your analysis is complete.
In Oracle7, the v$current_bucket view was named X$KCBCBH. Only the SYS user can query
X$KCBCBH. Also note that in X$KCBCBH the columns are called indx and count, instead of rownum and
count.
Full table scans of large tables usually result in physical disk reads and a lower buffer cache hit ratio. You
can get an idea of full table scan activity at the data file level by querying v$filestat and joining to
SYS.dba_data_files. Following is a query you can use and sample results:
SELECT A.file_name, B.phyrds, B.phyblkrd
FROM SYS.dba_data_files A, v$filestat B
WHERE B.file# = A.file_id
ORDER BY A.file_id
PHYRDS shows the number of reads from the data file since the instance was started. PHYBLKRD shows
the actual number of data blocks read. Usually blocks are requested one at a time. However, Oracle requests
blocks in batches when performing full table scans. (The db_file_multiblock_read_count parameter controls
this batch size.)
In the sample result set above, there appears to be quite a bit of full table scan activity in the data01.dbf data
file, since 593,336 read requests have resulted in 9,441,037 actual blocks read.
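The full-scan inference above is just a ratio; a small Python sketch using the figures quoted for data01.dbf:

```python
def blocks_per_read(phyblkrd, phyrds):
    """Average blocks returned per read request; values well above 1
    suggest multiblock reads, i.e. full table scan activity."""
    return phyblkrd / phyrds

# Figures quoted for data01.dbf in the text above.
ratio = blocks_per_read(9_441_037, 593_336)
print(round(ratio, 1))  # 15.9 blocks per read -> heavy full-scan activity
```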
The v$sqlarea dynamic performance view contains one row for each SQL statement currently in the shared
SQL area of the SGA for the instance. v$sqlarea shows the first 1000 bytes of each SQL statement, along
with various statistics. Following is a query you can use:
SELECT executions, buffer_gets, disk_reads,
first_load_time, sql_text
FROM v$sqlarea
ORDER BY disk_reads
EXECUTIONS indicates the number of times the SQL statement has been executed since it entered the
shared SQL area. BUFFER_GETS indicates the collective number of logical reads issued by all executions
of the statement. DISK_READS shows the collective number of physical reads issued by all executions of
the statement. (A logical read is a read that resulted in a cache hit or a physical disk read. A physical read is
a read that resulted in a physical disk read.)
You can review the results of this query to find SQL statements that perform lots of reads, both logical and
physical. Consider how many times a SQL statement has been executed when evaluating the number of
reads.
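One way to weigh reads against execution counts, as the paragraph suggests, is to rank statements per execution (a Python sketch; the rows and column subset are invented for illustration):

```python
def rank_by_reads(rows, per_execution=True):
    """Rank v$sqlarea-style rows (dicts) by disk reads, optionally
    normalised by execution count so frequent-but-cheap statements
    don't dominate the list."""
    def key(r):
        execs = max(r['executions'], 1)   # guard against zero executions
        return r['disk_reads'] / execs if per_execution else r['disk_reads']
    return sorted(rows, key=key, reverse=True)

rows = [
    {'sql_text': 'SELECT ... big scan', 'executions': 2,    'disk_reads': 40_000},
    {'sql_text': 'SELECT ... hot OLTP', 'executions': 9000, 'disk_reads': 45_000},
]
print(rank_by_reads(rows)[0]['sql_text'])  # the big scan: 20,000 reads/exec
```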
Conclusion
This brief document gives you the basic information you need in order to optimize the buffer cache size for
your Oracle database. Also, you can zero in on SQL statements that cause a lot of I/O, and data files that
experience a lot of full table scans.
As you can see, it writes the timestamp when the session was killed and also records the associated OS PID of the killed session in the alert.log. As per the Oracle documentation: 'Specify IMMEDIATE to instruct Oracle Database to roll back ongoing transactions, release all session locks, recover the entire session state, and return control to you immediately.'
Syntax:
Include the script below in your shell script to kill sessions that have been inactive for more than 60 minutes.
-------------------------------------------------------------------
kill_session_script.sql
-------------------------------------------------------------------
-- Script to kill sessions inactive for more than 1 hr
-- kill_session_script.sql
set serveroutput on size 100000
set echo off
set feedback off
set lines 300
spool /ora/app/oracle/admin/scripts/kill_session.sql
declare
cursor sessinfo is select * from v$session where status = 'INACTIVE' and last_call_et > 3600;
sess sessinfo%rowtype;
sql_string1 varchar2(2000);
sql_string2 varchar2(2000);
begin
dbms_output.put_line('SPOOL /ora/app/oracle/admin/scripts/kill_session.log;');
open sessinfo;
loop
fetch sessinfo into sess;
exit when sessinfo%notfound;
sql_string1 := '-- sid='||sess.sid||' serial#='||sess.serial#||' machine='||sess.machine||' program='||sess.program||' username='||sess.username||' inactive_sec='||sess.last_call_et||' os_user='||sess.osuser;
dbms_output.put_line(sql_string1);
sql_string2 := 'alter system kill session '||chr(39)||sess.sid||','||sess.serial#||chr(39)||';';
dbms_output.put_line(sql_string2);
end loop;
close sessinfo;
dbms_output.put_line('SPOOL OFF;');
dbms_output.put_line('exit;');
end;
/
spool off;
set echo on;
set feedback on;
@/ora/app/oracle/admin/scripts/kill_session.sql
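The spooled SQL*Plus script above boils down to generating one ALTER SYSTEM statement per idle session; a minimal Python sketch of the same logic (the session rows are hypothetical):

```python
def kill_statements(sessions, max_idle_seconds=3600):
    """Build ALTER SYSTEM KILL SESSION statements for sessions that have
    been INACTIVE for longer than max_idle_seconds, mirroring the spooled
    SQL*Plus script above. Each row is a (sid, serial#, status,
    last_call_et) tuple as selected from v$session."""
    stmts = []
    for sid, serial, status, idle in sessions:
        if status == 'INACTIVE' and idle > max_idle_seconds:
            stmts.append(f"alter system kill session '{sid},{serial}';")
    return stmts

rows = [(131, 7, 'INACTIVE', 5400),   # idle 90 min -> killed
        (42, 3, 'ACTIVE', 9999),      # active -> spared
        (55, 1, 'INACTIVE', 60)]      # idle 1 min -> spared
print(kill_statements(rows))  # ["alter system kill session '131,7';"]
```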
Background:
In some situations the DBA team wants to audit failed logon attempts, when "unlock account" requests become frequent and the user cannot figure out who, from where, is using an incorrect password and causing the account to get locked.
Audit concern:
Oracle auditing may add extra load and require extra operational support. In this situation the DBA only needs to audit failed logon attempts and does not need other audit information. Failed logon attempts can only be tracked through the Oracle audit trail; a logon trigger does not fire for failed logon attempts.
Hint: This setting is suggested for use on a non-production system. Please evaluate all concerns and the load before using it in production.
Approach:
audit_trail=DB
Note:
For a database installed by manual scripts, auditing may not be turned on.
For a database installed by DBCA, auditing may already be turned on by default.
Check:
SQL> show parameter audit_trail
System altered.
Restart database
SQL> shutdown immediate
Database closed.
Database dismounted.
SQL> startup ;
Note: Oracle 11g has a couple of audit options turned on by default when audit_trail is set.
Generate a script to turn off the default privilege audits, which we don't need here:
SQL> SELECT 'noaudit '|| privilege||';' from dba_priv_audit_opts where user_name is NULL;
'NOAUDIT'||PRIVILEGE||';'
-------------------------------------------------
23 rows selected.
Audit succeeded.
4. Retrieve information
Note: audit information is stored in sys.aud$. There are multiple views that Oracle provides to help you read sys.aud$.
Failed logon information can be retrieved from dba_audit_session.
linda xu JET_DEV102
HOME-linda xu 02/06/2013 13:40:12 LOGON 1017
linda xu JET_DEV102
HOME-linda xu 02/06/2013 13:40:25 LOGON 1017
linda xu JET_DEV102
HOME-linda xu 02/06/2013 15:31:29 LOGON 1017
linda xu JET_DEV102
HOME-linda xu 02/06/2013 15:31:38 LOGON 1017
4 rows selected.
------------------------------------------------------------
With this in place, we are able to audit who is causing the account to get locked.
Noaudit succeeded.
no rows selected
Oracle uses the SYSTEM tablespace for sys.aud$. As an enhancement, you may consider moving sys.aud$ to a separate tablespace.
TABLE_NAME TABLESPACE_NAME
----------------------------- ------------------------------
AUD$ SYSTEM
The following example shows how to move sys.aud$ from the SYSTEM tablespace to the USER_DATA1 tablespace.
SQL> exec DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
audit_trail_location_value => 'USER_DATA1');
TABLE_NAME TABLESPACE_NAME
------------------------------ ------------------------------
AUD$ USER_DATA1
7. Clean up AUD$
I've recently been monitoring two databases where a high number of imports and exports take place. The SYSAUX and SYSTEM tablespaces have been continually growing.
SELECT
sum(bytes/1024/1024) Mb,
segment_name,
segment_type
FROM
dba_segments
WHERE
tablespace_name = 'SYSAUX'
AND
segment_type in ('INDEX','TABLE')
GROUP BY
segment_name,
segment_type
ORDER BY Mb;
MB SEGMENT_NAME SEGMENT_TYPE
-- --------------------------------------- ----------------
2 WRH$_SQLTEXT TABLE
2 WRH$_ENQUEUE_STAT_PK INDEX
2 WRI$_ADV_PARAMETERS TABLE
2 WRH$_SEG_STAT_OBJ_PK INDEX
3 WRI$_ADV_PARAMETERS_PK INDEX
3 WRH$_SQL_PLAN_PK INDEX
3 WRH$_SEG_STAT_OBJ TABLE
3 WRH$_ENQUEUE_STAT TABLE
3 WRH$_SYSMETRIC_SUMMARY_INDEX INDEX
4 WRH$_SQL_BIND_METADATA_PK INDEX
4 WRH$_SQL_BIND_METADATA TABLE
6 WRH$_SYSMETRIC_SUMMARY TABLE
7 WRH$_SQL_PLAN TABLE
8 WRI$_OPTSTAT_TAB_HISTORY TABLE
8 I_WRI$_OPTSTAT_TAB_ST INDEX
9 I_WRI$_OPTSTAT_H_ST INDEX
9 I_WRI$_OPTSTAT_TAB_OBJ#_ST INDEX
12 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST INDEX
12 I_WRI$_OPTSTAT_IND_ST INDEX
12 WRI$_OPTSTAT_HISTGRM_HISTORY TABLE
14 I_WRI$_OPTSTAT_IND_OBJ#_ST INDEX
20 WRI$_OPTSTAT_IND_HISTORY TABLE
306 I_WRI$_OPTSTAT_HH_ST INDEX
366 WRI$_OPTSTAT_HISTHEAD_HISTORY TABLE
408 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST INDEX
To reduce these tables and indexes you can issue the following:
To find out the oldest available stats you can issue the following:
As each day passes, the SYSAUX tablespace continues to fill up, because the job fails each night and cannot purge old stats.
To resolve this we have to issue a manual purge to clear down the old statistics. This can be heavy on the UNDO tablespace, so it's best to keep an eye on the amount of UNDO being generated. I suggest starting with the oldest statistics and working forwards.
BEGIN
sys.dbms_scheduler.create_job(
job_name => '"SYS"."PURGE_OPTIMIZER_STATS"',
job_type => 'PLSQL_BLOCK',
job_action => 'begin
dbms_stats.purge_stats(sysdate-3);
end;',
repeat_interval => 'FREQ=DAILY;BYHOUR=6;BYMINUTE=0;BYSECOND=0',
start_date => systimestamp at time zone 'Europe/Paris',
job_class => '"DEFAULT_JOB_CLASS"',
comments => 'job to purge old optimizer stats',
auto_drop => FALSE,
enabled => TRUE);
END;
Finally, you will need to rebuild the indexes and move the tables. To do this, you can spool the statements to a script file and then run that file.
SQL> select 'alter index '||segment_name||' rebuild;' FROM dba_segments where tablespace_name =
'SYSAUX' AND segment_type = 'INDEX';
Edit the spooled file to remove the first and last lines (the SELECT statement and 'spool off').
Run the file to rebuild the indexes.
SQL> select 'alter table '||segment_name||' move tablespace SYSAUX;' FROM dba_segments where
tablespace_name = 'SYSAUX' AND segment_type = 'TABLE';
Then you can re-run the original query; mine now shows that my SYSAUX tablespace is only a few hundred MB full.
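The two spool-and-run queries above are just string generation; here is a Python sketch of the same idea (the segment names are taken from the listing above):

```python
def rebuild_ddl(segments, tablespace='SYSAUX'):
    """Generate the same ALTER statements the spooled queries above
    produce: index rebuilds and table moves for one tablespace.
    `segments` rows are (segment_name, segment_type) pairs."""
    ddl = []
    for name, seg_type in segments:
        if seg_type == 'INDEX':
            ddl.append(f'alter index {name} rebuild;')
        elif seg_type == 'TABLE':
            ddl.append(f'alter table {name} move tablespace {tablespace};')
    return ddl

segs = [('I_WRI$_OPTSTAT_HH_ST', 'INDEX'),
        ('WRI$_OPTSTAT_HISTHEAD_HISTORY', 'TABLE')]
for stmt in rebuild_ddl(segs):
    print(stmt)
# alter index I_WRI$_OPTSTAT_HH_ST rebuild;
# alter table WRI$_OPTSTAT_HISTHEAD_HISTORY move tablespace SYSAUX;
```

Note that moving a table invalidates its indexes, which is why the index rebuilds are needed in the first place.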
http://www.unixarena.com/2013/08/linux-lvm-volume-group-operations.html
2. Create a new volume group using disk /dev/sdd1. Here the volume group name is "uavg".
4. Check the volume status. You can see the volume is still available for operation.
[root@mylinz ~]# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/uavg-ualvol1
51M 4.9M 43M 11% /mnt
[root@mylinz ~]#
[root@mylinz ~]# cd /mnt
[root@mylinz mnt]# touch 3 4 5 6
[root@mylinz mnt]# ls -lrt
total 18
drwx------. 2 root root 12288 Aug 6 22:55 lost+found
-rw-r--r--. 1 root root 0 Aug 7 00:32 1
-rw-r--r--. 1 root root 0 Aug 7 00:32 7
-rw-r--r--. 1 root root 0 Aug 12 21:17 6
-rw-r--r--. 1 root root 0 Aug 12 21:17 5
-rw-r--r--. 1 root root 0 Aug 12 21:17 4
-rw-r--r--. 1 root root 0 Aug 12 21:17 3
[root@mylinz mnt]#
5. But the volume still reflects the old device. This can be fixed by remounting the volume, which can be done when you have downtime for the server. Please don't forget to update fstab with the new volume group name.
[root@mylinz ~]# umount /mnt
[root@mylinz ~]# mount -t ext4 /dev/mapper/uavg_new-ualvol1 /mnt
[root@mylinz ~]# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/uavg_new-ualvol1
51M 4.9M 43M 11% /mnt
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg_new 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
4. Check the "pvs" command output; /dev/sde will now show as part of "uavg".
[root@mylinz ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_mylinz lvm2 a- 19.51g 0
/dev/sdd1 uavg lvm2 a- 508.00m 456.00m
/dev/sde uavg lvm2 a- 508.00m 508.00m
/dev/sdf lvm2 a- 5.00g 5.00g
[root@mylinz ~]#
1. First, using the commands below, confirm that the disk we are planning to remove from the volume group is not used by any volumes.
[root@mylinz ~]# lvs -a -o +devices
LV VG Attr LSize Origin Snap% Move Log Copy% Convert Devices
ualvol1 uavg -wi-a- 52.00m /dev/sdd1(0)
lv_root vg_mylinz -wi-ao 16.54g /dev/sda2(0)
lv_swap vg_mylinz -wi-ao 2.97g /dev/sda2(4234)
[root@mylinz ~]# pvs -a -o +devices |grep uavg
/dev/sdd1 uavg lvm2 a- 508.00m 456.00m /dev/sdd1(0)
/dev/sdd1 uavg lvm2 a- 508.00m 456.00m
/dev/sde uavg lvm2 a- 508.00m 508.00m
/dev/uavg/ualvol1 -- 0 0
[root@mylinz ~]#
From the output of the above commands, we can see that disk "sde" is not used by any volumes (lvs command output) in volume group "uavg".
2. Check the disk details. From these details you can confirm that its PEs (physical extents) are not used in the VG (Total PE = 127 and Free PE = 127).
[root@mylinz ~]# pvdisplay /dev/sde
--- Physical volume ---
PV Name /dev/sde
VG Name uavg
PV Size 512.00 MiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 127
Free PE 127
Allocated PE 0
PV UUID FadWLT-LjD8-v8VB-pboY-eZbK-vYpE-ZWq0i9
[root@mylinz ~]#
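The PE arithmetic that pvdisplay reports can be verified directly (a Python sketch; the 4 MiB PE size and the 127/127 counts come from the output above):

```python
def pv_usable_mib(pe_count, pe_size_mib=4):
    """Size represented by a number of physical extents: the extent count
    times the extent size (pvdisplay's Total PE and PE Size)."""
    return pe_count * pe_size_mib

# Values from the pvdisplay output above: 127 PEs of 4 MiB each.
total_pe, free_pe = 127, 127
print(pv_usable_mib(total_pe))            # 508 -> 512 MiB disk minus 4 MiB unusable
print(pv_usable_mib(total_pe - free_pe))  # 0 -> nothing allocated, safe to vgreduce
```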
3.List the volume group details before removing the physical volume.
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 2 1 0 wz--n- 1016.00m 964.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]# vgdisplay uavg
--- Volume group ---
VG Name uavg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 1016.00 MiB
PE Size 4.00 MiB
Total PE 254
Alloc PE / Size 13 / 52.00 MiB
Free PE / Size 241 / 964.00 MiB
VG UUID c87FyZ-5DND-oQ3n-iTh1-Vb1f-nBML-vUBUE9
[root@mylinz ~]#
4. Now we are ready to remove “/dev/sde” from volume group “uavg”.
[root@mylinz ~]# vgreduce uavg /dev/sde
Removed "/dev/sde" from volume group "uavg"
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]# vgdisplay uavg
--- Volume group ---
VG Name uavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 508.00 MiB
PE Size 4.00 MiB
Total PE 127
Alloc PE / Size 13 / 52.00 MiB
Free PE / Size 114 / 456.00 MiB
VG UUID c87FyZ-5DND-oQ3n-iTh1-Vb1f-nBML-vUBUE9
[root@mylinz ~]#
From the above outputs, you can see that #PV has been reduced to “1” and the volume group size has also been reduced.
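Once the PV is out of the volume group, the LVM label can be wiped so the disk is completely freed for other use; a hedged sketch using the standard pvremove command (run it only after confirming the PV no longer belongs to any VG):

```
[root@mylinz ~]# pvremove /dev/sde
```

After this, `pvs` will no longer list /dev/sde at all.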
[root@mylinz ~]#
4. You can activate the volume group again by using the same command with different options.
[root@mylinz ~]# vgchange -a y uavg
1 logical volume(s) in volume group "uavg" now active
[root@mylinz ~]# lvdisplay /dev/uavg/ualvol1
--- Logical volume ---
LV Name /dev/uavg/ualvol1
VG Name uavg
LV UUID 6GB8TR-ih7d-vg7J-xCLE-A8OH-gmwy-3XLyOb
LV Write Access read/write
LV Status available
# open 0
LV Size 52.00 MiB
Current LE 13
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
[root@mylinz ~]#
1. Run the “vgcfgbackup” command to take a fresh configuration backup of volume group “uavg”.
[root@mylinz ~]# vgcfgbackup uavg
Volume group "uavg" successfully backed up.
[root@mylinz ~]#
2. You can find the new configuration file under the location shown below.
[root@mylinz ~]# cd /etc/lvm/
[root@mylinz lvm]# ls -lrt
total 36
-rw-r--r--. 1 root root 21744 Aug 18 2010 lvm.conf
drwx------. 2 root root 4096 Aug 12 23:57 archive
drwx------. 2 root root 4096 Aug 13 00:27 backup
drwx------. 2 root root 4096 Aug 13 00:27 cache
[root@mylinz lvm]# cd backup/
[root@mylinz backup]# ls -lrt
total 8
-rw-------. 1 root root 1474 Jun 3 2012 vg_mylinz
-rw-------. 1 root root 1164 Aug 13 00:27 uavg
[root@mylinz backup]# file uavg
uavg: ASCII text
[root@mylinz backup]#
[root@mylinz backup]# more uavg
# Generated by LVM2 version 2.02.72(2) (2010-07-28): Tue Aug 13 00:27:46 2013
creation_host = "mylinz" # Linux mylinz 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64
creation_time = 1376333866 # Tue Aug 13 00:27:46 2013
uavg {
id = "c87FyZ-5DND-oQ3n-iTh1-Vb1f-nBML-vUBUE9"
seqno = 10
status = ["RESIZEABLE", "READ", "WRITE"] flags = [] extent_size = 8192 # 4
Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
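If the VG metadata is ever damaged, the backup taken above can be restored with the standard vgcfgrestore command; a sketch (on a live system, deactivate the logical volumes first):

```
[root@mylinz ~]# vgcfgrestore uavg
```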
5. Now assign the disks at the SAN level to the system where you want to import the volume group.
6. Scan the disks and make them available for the VG import.
Check out the disk/LUN scanning procedure for Red Hat Linux.
7. Import the volume group.
[root@mylinz ~]# vgimport uavg
Volume group "uavg" successfully imported
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
8. Activate the volume group for normal operation.
[root@mylinz ~]# vgchange -a y uavg
1 logical volume(s) in volume group "uavg" now active
[root@mylinz ~]#
1. Let me remove the device file for volume “uavg-ualvol1”, which is part of VG “uavg”.
Please leave a comment if you have any doubts about this. Share it on social networks to reach all the Linux administrators and
beginners.
When I investigated, I found that both instances were creating .tmp files in the /usr/tmp directory with the same names. The error was thrown
when one instance tried to create a .tmp file and a file with the same name had already been created by the other instance.
To resolve the issue, I shut down both the apps and db services of one instance.
Created a directory 'temp' under '/usr/tmp' and changed the ownership of this directory to the OS owner of that instance
Logged on to the database as sysdba
Created a pfile from the spfile
Modified the UTL_FILE_DIR parameter's first entry from '/usr/tmp' to '/usr/tmp/temp'
Created the spfile from the pfile
Brought up the db and listener
Then modified the $APPLPTMP variable in the TEST_oratest.xml file from '/usr/tmp' to '/usr/tmp/temp'
Ran AutoConfig on the apps tier/node
Brought up the apps services
Retested the issue; it was resolved
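The database-side steps above can be sketched as follows; the pfile path and the SID "TEST" are assumptions for illustration:

```sql
-- Connect as sysdba, then:
CREATE PFILE='/tmp/initTEST.ora' FROM SPFILE;
-- Edit /tmp/initTEST.ora: change the first utl_file_dir entry
-- from '/usr/tmp' to '/usr/tmp/temp', then bounce the instance:
SHUTDOWN IMMEDIATE
CREATE SPFILE FROM PFILE='/tmp/initTEST.ora';
STARTUP
```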
================================================================
Maintenance Mode when applying ADPATCH:
When you put your application in maintenance mode, Workflow Business Events stop and users are not allowed to log in. It does not
matter whether your application services are down or not, but if you don't put your application in maintenance mode, your patch will
fail unless you use options=hotpatch.
------------------------------------
adadmin is not working: how to enable maintenance mode in Oracle Apps (EBS)
@$AD_TOP/patch/115/sql/adsetmmd.sql
You can also put your application in Maintenance mode from backend:
Enable Maintenance mode:
SQL> @$AD_TOP/patch/115/sql/adsetmmd.sql ENABLE
SQL> select fnd_profile.value('APPS_MAINTENANCE_MODE') from dual; -- to check
When adsetmmd.sql runs, it sets the Profile Option 'Applications Maintenance Mode'
Note: Maintenance Mode is only needed for AutoPatch Sessions. Other AD utilities do not require
Maintenance Mode to be enabled. Maintenance Mode must be 'Enabled' before running AutoPatch and 'Disabled' after
the patch application is completed.
When Maintenance Mode is disabled, you can still run Autopatch by using options=hotpatch on the command line, if
necessary. However, doing so can cause a significant degradation of performance.
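For completeness, maintenance mode is turned off the same way once patching finishes, using the DISABLE argument of the same script:

```sql
SQL> @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE
```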
Oracle Mobile Application Server - Version 11.5.10.0 to 12.1.3 [Release 11.5 to 12.1]
Information in this document applies to any platform.
Mobile Application Server - Version: 11.5.10 to 12.1
GOAL
One would like to start/stop MWA services using respectively adstrtal.sh/adstpall.sh control scripts
instead of the specific script mwactl.sh under $MWA_TOP/bin (11i) or INST_TOP/admin/scripts (r12).
FIX
1. Stop all the services (by running adstpall.sh under $COMMON_TOP/admin/scripts/<CONTEXT_NAME> in 11i or $INST_TOP/admin/scripts in R12)
2. For 11i only, apply Patch 5985992 (TKX patch) and Patch 5712178 (MWA patch) if not already done, and Patch 8405261 per Note
781107.1.
3. For 11i or R12, modify the value of the s_mwastatus and s_other_service_group_status variables to 'enabled' (without quotes) in the XML
context file $APPL_TOP/admin/<CONTEXT_NAME>.xml in 11i or $INST_TOP/appl/admin/<CONTEXT_NAME>.xml in R12 (where <CONTEXT_NAME> is generally <SID>_<hostname>)
4. Run Autoconfig
5. Now the MWA services can be started/stopped as other Applications processes using the
adstrtal.sh/adstpall.sh control scripts.
LINUX CUPS :
Generic PostScript "driver". Generally, for PostScript printers you will not need a driver, as all applications produce PostScript. So
that the printing system can access printer-specific features, the manufacturer supplies a PPD file for every PostScript printer.
Use this PPD file instead of a Foomatic-generated one to get all the functionality of your PostScript printer working. The files provided
by Foomatic are generic and contain only some standard options; use them only if you do not find an appropriate PPD for
your printer.
One can make use of all functionality which the PostScript printers have under Windows/MacOS when one uses the PPD file coming
with the printer, downloaded from here on OpenPrinting, from the manufacturer's home page, or from Adobe's web site (do
"unzip -L [filename].EXE" to get the PPD files). If there are several different PPD files for your printer model and none
dedicated for Linux or Unix, the PPD for Windows NT works best in most cases.
CUPS and PPR support PPD files directly, LPD/GNUlpr/LPRng, PDQ, CPS, and spooler-less users can set up their printers
with foomatic-rip as they would set up a printer with a Foomatic PPD file. foomatic-rip works as well with manufacturer-
supplied PostScript PPD files. This way all PostScript printers work perfectly under GNU/Linux or other free operating systems.
Ghostscript is not needed for them. See also our PPD documentation page for instructions.
See the tutorial chapter "Some Theoretical Background: CUPS, PPDs, PostScript, and Ghostscript" (PDF) for detailed information
about PostScript and PPD files.
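As an illustration of using a manufacturer PPD with CUPS, a print queue can be created with lpadmin; a hedged sketch where the printer name, device URI, and PPD path are all placeholders (the -P flag takes the PPD path on the CUPS releases this note targets; flags may vary across versions):

```
lpadmin -p myprinter -E -v socket://192.168.1.50:9100 -P /path/to/printer.ppd
```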
APP-FND-00362: Routine &ROUTINE cannot execute request &REQUEST for program &PROGRAM, because the environment
variable &BASEPATH is not set for the application to which the concurrent program executable &EXECUTABLE belongs. Shut down
the concurrent managers. Set the basepath environment variable for the application. Restart the concurrent managers.
SOLUTION: Check for a custom environment file in $APPL_TOP and export the custom path in that environment file.
AutoConfig could not successfully execute the following scripts: afdbprf.sh
and adcrobj.sh
Error During AutoConfig -
[PROFILE PHASE]
Directory: /u01/app/oracle/product/11.2.0/db_1/appsutil/install/visr12_appsdbnode
afdbprf.sh INSTE8_PRF 1
[APPLY PHASE]
Directory: /u01/app/oracle/product/11.2.0/db_1/appsutil/install/visr12_appsdbnode
adcrobj.sh INSTE8_APPLY 1
-- Start a new session and make sure you don't set the env by running the <context>.env file
-- If you set the env, AutoConfig will fail
Classpath :
:/u01/app/oracle/product/11.0/db_1/jdbc/lib/ojdbc5.jar:/u01/app/oracle/product/11.0/db_1/appsutil/java/xmlparserv2.jar:/u01/app/oracle/product/11.0/db_1/appsutil/java:/u01/app/oracle/product/11.0/db_1/jlib/netcfg.jar
How to Lock Users Out Of E-Business Suite And Allow Specific Users in 11i/R12
This post is very handy during Month End Activities.
During month end, if there is a critical activity going on from the business side and you want to restrict business users from accessing Oracle
Applications, you can make the configuration changes below. Before editing any file, take a backup of the configuration files.
11i
1. Backup file $IAS_ORACLE_HOME/Apache/Apache/conf/apps.conf
2. Edit the apps.conf file and add a list of ip addresses for the users that you want to allow access to the system
e.g.
Alias /OA_HTML/ "/u01/jbcomn/html/"
<Location /OA_HTML/>
Order allow,deny
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from X.XXX.XXX.XXX
Allow from localhost
Allow from your_apps_server.company.com
Allow from your_apps_server
</Location>
R12.X, R12.1X
1. Edit file $ORA_CONFIG_HOME/10.1.3/Apache/Apache/conf/custom.conf and add a list of ip addresses for the users that
you want to allow access to the system. The benefit of using custom.conf is that it is preserved when autoconfig is run.
e.g.
<Location ~ "/OA_HTML">
Order deny,allow
Deny from all
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from X.XXX.XXX.XXX
Allow from localhost
Allow from your_apps_server.company.com
Allow from your_apps_server
</Location>
Note: you need to include localhost and your apps tier server name. One can use the PC name rather than the IP address; however,
the PC name is more sensitive to network configuration.
3. Restart Apache
4. Now only the users coming from the listed IP addresses will have access. All other users will get a Forbidden error
when they attempt to log in. This is a very simple solution, and what makes it good is that it can be done programmatically.
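Restarting Apache on the apps tier is done with the standard AD control script; the R12 path is shown below (in 11i the script lives under $COMMON_TOP/admin/scripts/<CONTEXT_NAME>):

```
$ADMIN_SCRIPTS_HOME/adapcctl.sh stop
$ADMIN_SCRIPTS_HOME/adapcctl.sh start
```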
Forbidden
You don’t have permission to access /OA_HTML/AppsLocalLogin.jsp on this server
If you want to change the message you can do this: edit custom.conf and add a line as follows (change the text to suit your
requirements):
ErrorDocument 403 "Forbidden: oops, you cannot access the production instance as it is month end, only certain users have
access at this time"
Stop/start Apache. Users will now receive the above message.
Important: This may not work if the IP address hitting the web server is from a reverse proxy, load balancer or
some other device. This is because the IP address will not be from the end user.
How to increase JVM Count and Memory in Oracle Applications 11i and R12
How to increase the number OACORE process type(JVM) and required memory in R12:
Location in R12:$INST_TOP/apps/SID_HOSTNAME/ora/10.1.3/opmn/conf/opmn.xml
To increase JVM in R12.
Go to $INST_TOP/apps/SID_HOSTNAME/ora/10.1.3/opmn/conf/
Take a backup of opmn.xml file before editing,
A) Open the opmn.xml file, go to the oacore section (around line 128), and increase
numprocs=4
To increase the memory for oacore JVM, edit the file $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties
wrapper.bin.parameters=-verbose:gc -Xmx512M -Xms128M -XX:MaxPermSize=128M -XX:NewRatio=2 -
XX:+PrintGCTimeStamps -XX:+UseTLAB
to
wrapper.bin.parameters=-verbose:gc -Xmx1024M -Xms512M -XX:MaxPermSize=128M -XX:NewRatio=2 -
XX:+PrintGCTimeStamps -XX:+UseTLAB
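For reference, the R12 edit in step A lands on the oacore process-set in opmn.xml; a hedged sketch of the relevant fragment (attributes other than numprocs are illustrative and vary per instance):

```xml
<process-type id="oacore" module-id="OC4J" status="enabled">
  <!-- ... other oacore settings ... -->
  <process-set id="default_group" numprocs="4"/>
</process-type>
```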
Note: we normally allocate 2 or 4 JVMs; we do not allocate 10 or 15.
CASE II:
The size of the core files might keep increasing; the path is: "/opt/oracle/PROD/inst/apps/PROD_prodapps/ora/10.1.2/forms"
FORMS_CATCHTERM. This variable enables or disables the Forms abnormal termination handler which captures middle tier crashes and writes
diagnostic information into the dump file or the forms server log file. Allowed values are {0,1}. By default, this value is set to '1' which enables
the Forms termination handler. Setting this variable to '0' disables the Forms termination Handler.
Moreover,
Note 356878.1: R11i / R12 : How to relink an E-Business Suite Installation of Release 11i and Release 12.x (Doc ID 356878.1)
Another Oracle Support note was found in this regard, in case the above solution doesn't work:
Note 1194383.1: R12: Frequent frmweb core files created in $INST_TOP/ora/10.1.2/forms (frmweb core dumps) (Doc ID 1194383.1)
which suggests applying Patch 8940272 - MULTIPLE CORE DUMPS FOUND DURING LOAD TESTING.
USER is "SYS"
SQL>
------------------------------ ------------------------------
DEVWEBSTORE10G_IC.CKPT.COM CKPT
SQL> drop database link "CKPT"."DEVWEBSTORE10G_IC.CKPT.COM "; <---- drop using the schema name, with the names quoted separately
ERROR at line 1:
SQL> drop database link DEVWEBSTORE10G_IC.CKPT.COM; <---- drop without the schema name
ERROR at line 1:
SQL> drop database link CKPT. DEVWEBSTORE10G_IC.CKPT.COM; <---- drop using the schema name as a prefix
ERROR at line 1:
-- Note: the procedure header, the SELECT list, and the PARSE_AS_USER arguments
-- below are reconstructed (they were missing from this note); the parameter
-- names are assumptions.
create or replace procedure Drop_DbLink (p_owner in varchar2, p_db_link in varchar2) as
plsql varchar2(1000);
cur number;
uid number;
rc number;
begin
select u.user_id
into uid
from dba_users u
where u.username = upper(p_owner);
-- run the DROP as the owning user; the link name is double-quoted so that
-- names with a trailing space (the case shown above) can be dropped
plsql := 'begin execute immediate ''drop database link "' || p_db_link || '"''; end;';
cur := SYS.DBMS_SYS_SQL.open_cursor;
SYS.DBMS_SYS_SQL.parse_as_user(
c => cur,
statement => plsql,
language_flag => dbms_sql.native,
userid => uid
);
rc := SYS.DBMS_SYS_SQL.execute(cur);
SYS.DBMS_SYS_SQL.close_cursor(cur);
end;
/
Procedure created.
SQL>
SQL>
no rows selected
SQL>
Here No DB_LINK exists with the above name after Executing Procedure.
Step 2:- How to drop ALL DB_LINKs of a "PRIVATE" schema from the "SYS" user
This procedure extends the "Drop_DbLink" procedure above; create a procedure named "Dropschema_dblinks".
-- Note: the procedure header, the cursor predicate, and the Drop_DbLink
-- arguments below are reconstructed (they were missing from this note);
-- the parameter name is an assumption.
create or replace procedure Dropschema_dblinks (p_owner in varchar2) as
begin
for rec in (
select l.owner, l.db_link
from dba_db_links l
where l.owner = upper(p_owner)
) loop
Drop_DbLink(rec.owner, rec.db_link);
end loop;
end;
/
Procedure created.
SQL>
OWNER DB_LINK
------------------------------ ------------------------------
CKPT DEVWEBSTORE9I_IC.CKPT.COM
CKPT DEVWEBSTORE9I_IC.WORLD
CKPT INTER_EDI_RO.CKPT.COM
CKPT ORDERSHIPPING.CKPT.COM
CKPT ORDERSHIPPING.WORLD
CKPT SVC_IW.CKPT.COM
6 rows selected.
SQL>
no rows selected
SQL>
=============================================================
Cause: This happens when the "reports.log" file has reached the maximum file size allowed at the operating system level, which is 2 GB.
=============================================================
Responsibility :-
Concurrent Program :-
Run the "XML Publisher Template Re-Generator Program" with parameter ALL .
ORA-00054: resource busy and acquire with NOWAIT specified or timeout
expired ORA-06512: at "APPS.WF_NOTIFICATION"
Approval Workflow Notification Mailer Error :
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired ORA-06512: at "APPS.WF_NOTIFICATION",
line 5130 ORA-06512: at line 1
Solution:
SQL> SELECT do.owner, do.object_name, do.object_type, dl.session_id, vs.serial#, vs.program, vs.machine, vs.osuser
FROM dba_objects do, dba_lock dl, v$session vs -- FROM clause reconstructed; it was missing from this note
WHERE do.object_name = 'WF_NOTIFICATIONS' and do.object_type = 'TABLE' and dl.lock_id1 = do.object_id and vs.sid =
dl.session_id;
Then kill the blocking session: ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE;
=============================================================
OPatch detects and reports any conflicts encountered when applying an interim patch with a previously
applied patch. The patch application fails in case of conflicts. You can use the -force option of OPatch to
override this failure. If you specify -force, the installer firsts rolls back any conflicting patches and then
proceeds with the installation of the desired interim patch.
You may experience a bug conflict and might want to remove the conflicting patch. This process is known as
patch rollback. During patch installation, OPatch saves copies of all the files that were replaced by the new
patch before the new versions of these files are loaded, and stores them in $ORACLE_HOME/.patch_storage.
These saved files are called rollback files and are key to making patch rollback possible. When you roll back a
patch, these rollback files are restored to the system. Override the default behavior with the -force flag only if you
have gained a complete understanding of the patch rollback process. To roll back a
patch, execute the following command:
$ OPatch/opatch rollback -id <Patch_ID>
Please use the command below to check for conflicts against the ORACLE_HOME and avoid running into problems.
Example:
$ unzip p9655017_10204_linux.zip
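The conflict check referred to above is the standard OPatch prereq command, run from the unzipped patch directory:

```
$ cd 9655017
$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
```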
The other day, when I was patching a RAC database, I got the error below after executing the above conflict command:
Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches:
Check the status, expire the password, and re-check the status.
In 10g, after that, user TEST still has its old password (test).
That's all; good luck.
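The 10g behaviour above relies on the fact that expiring an account does not change its stored hash, so the original password can be restored by value; a hedged sketch (the hash placeholder must be replaced with the value saved from dba_users):

```sql
SQL> SELECT password FROM dba_users WHERE username = 'TEST';   -- save the hash
SQL> ALTER USER test PASSWORD EXPIRE;                          -- expire it
SQL> SELECT account_status FROM dba_users WHERE username = 'TEST';
SQL> ALTER USER test IDENTIFIED BY VALUES '<saved_hash>';      -- restore the old password
```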
=================================================================================
ERROR
R12: Rapid Cloning Issue : ouicli.pl INSTE8_APPLY 255
SOLUTION
Step 1. Add $ORACLE_HOME/perl/bin to the PATH environment variable.
Oracle has raised an alert in the alert.log and created a trace file as well, for a failed DBMS_SCHEDULER job with a strange
name which doesn’t appear in DBA_SCHEDULER_JOBS or DBA_SCHEDULER_PROGRAMS – what’s going on?
An extract from the alert log and/or the trace file mentioned in the alert log shows something like:
No matter how hard you scan the DBA_SCHEDULER_% views, you will not find anything with this name. What is actually
failing?
Oracle 11.1.0.6 onwards stopped listing these internal jobs in DBA_SCHEDULER_JOBS, as they did in 10g, and instead lists
them in DBA_AUTOTASK_% views. However, not by actual name, so don’t go looking for a TASK_NAME that matches the
above action name. You will fail.
Space advisor
Optimiser stats collection
SQL tuning advisor
The tasks that run for these autotask ‘clients’ are named as follows:
See MOS notes 756734.1, 755838.1, 466920.1 and Bug 12343947 for details. The first of these has the most relevant and
useful information.
UPDATE: My original failing autotask has been diagnosed by Oracle Support as bug 13840704 for which a patch exists
here for 11.2.0.2 and 11.2.0.3.
Oracle document id 13840704.8 has details, but it involves LOBs based on a user defined type. In this case, Spatial data
in an MDSYS.SDO_GEOMETRY column.
The view DBA_AUTOTASK_CLIENT won’t show you anything about a specific task with the above names, but it will show
you details of the overall ‘clients’. There are three:
CLIENT_NAME STATUS
------------------------------- --------
auto optimizer stats collection ENABLED
auto space advisor ENABLED
sql tuning advisor DISABLED
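The listing above comes from a simple query against that view:

```sql
SQL> SELECT client_name, status FROM dba_autotask_client;
```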
I can see from the task name in the alert log and trace file, that my failing task is a space advisor one, so, by looking into
the DBA_AUTOTASK_JOB_HISTORY view, I can see what’s been happening:
So, in my own example, the auto space advisor appears to have failed on Saturday and Sunday. Given that this is an
internal task, and nothing I can do will let me know about the invalid number problem, I need to log an SR with Oracle
on the matter. However, as I don’t want my fellow DBAs to be paged in the wee small hours for a known problem, I have
disabled the space advisor task as follows:
BEGIN
dbms_auto_task_admin.disable(
client_name => 'auto space advisor',
operation => NULL,
window_name => NULL);
END;
/
CLIENT_NAME STATUS
------------------------------- --------
auto space advisor DISABLED
Enabling it again after Oracle Support have helped resolve the problem is as simple as calling
dbms_auto_task_admin.enable with exactly the same parameters as for the disable call:
BEGIN
dbms_auto_task_admin.enable(
client_name => 'auto space advisor',
operation => NULL,
window_name => NULL);
END;
/
When enabling and/or disabling auto tasks, you must use the CLIENT_NAME as found in DBA_AUTOTASK_CLIENT view.
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_CLIENT_HISTORY
DBA_AUTOTASK_CLIENT_JOB
DBA_AUTOTASK_JOB_HISTORY
DBA_AUTOTASK_OPERATION
DBA_AUTOTASK_SCHEDULE
DBA_AUTOTASK_TASK
DBA_AUTOTASK_WINDOW_CLIENTS
DBA_AUTOTASK_WINDOW_HISTORY