
Location of the default environment file:

All CUSTOM_TOP entries are defined in the default.env file, located under:

$INST_TOP/ora/10.1.2/forms/server

======================================

'Function not available to this responsibility' when accessing a custom form

After a fresh clone, the application technical team reports the 'Function not available to this responsibility' error
when trying to access a custom form.

CAUSE: The CUSTOM_TOP entry is missing from the default.env file.

SOLUTION:

1. Log in as the applmgr user on the application Linux server

[applmgr@EBSTEST]$ sudo su - applmgr

2. Go to the $INST_TOP/ora/10.1.2/forms/server directory
[applmgr@EBSTEST]$ cd $INST_TOP/ora/10.1.2/forms/server

3. Add the missing CUSTOM_TOP entry to default.env

XXX_TOP=/u01/applmgr/r12/CUSTOM/XXX/12.0.0

4. Restart the middle tier services.

5. Retest the issue.
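Step 3 above can be scripted so that rerunning it never duplicates the entry. A minimal sketch, using a temporary file as a stand-in for default.env; the XXX_TOP path is the illustrative one from this note, not a real value:

```shell
# Stand-in for $INST_TOP/ora/10.1.2/forms/server/default.env so the
# sketch is safe to run anywhere; point this at the real file in practice.
DEFAULT_ENV=$(mktemp)

# Append the CUSTOM_TOP entry only if it is not already present.
grep -q '^XXX_TOP=' "$DEFAULT_ENV" || \
  echo 'XXX_TOP=/u01/applmgr/r12/CUSTOM/XXX/12.0.0' >> "$DEFAULT_ENV"

# Show the resulting entry.
grep '^XXX_TOP=' "$DEFAULT_ENV"
```

Remember that forms sessions only pick up the new entry after the middle tier services are restarted (step 4).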

======================================

oracle:ERPPROD @denux008:/home/oracle> scp -r oo1ogt_db_stats_40.log oracle@tullx001.ss.gates.com:/home/oracle

oracle@tullx001.ss.gates.com's password:

oo1ogt_db_stats_40.log 100% 100MB 573.6KB/s 1.1MB/s 02:58

======================================

The following SQL can be used to determine progress.

select sysdate, used_urec, used_ublk from v$transaction where ses_addr='C00000175AB97C58';

When used_urec and used_ublk reach zero, the rollback is complete.

Concurrent managers not starting up after cmclean.sql:

Recheck the server_id value in FND_NODES. Running AutoConfig should fix the value and match it with the one in the DBC
file.

Let's try the following:

1) Check the profile option "Concurrent: GSM Enabled"; if it is set to "Yes", change it to "No", restart the concurrent manager and
check.

2) SQL> select object_name from dba_objects where status = 'INVALID' and object_name like 'FND_CONC%';
If it returns rows, use adadmin to recompile the invalid objects. Restart the CM and check.

3) SQL> select * from dual; -> How many rows does it return?

4) Log in to SQL*Plus as applsys/apps and run the following:

SQL> update fnd_concurrent_requests
     set status_code = 'X', phase_code = 'C'
     where status_code = 'T';

SQL> commit;

SECOND SCENARIO:

1.) Check the apps listener

ps -ef | grep lsnr

2.) Execute adcmctl.sh stop
3.) Execute adalnctl.sh stop
4.) Check the apps listener again
5.) adalnctl.sh start
6.) adcmctl.sh start
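The six steps above can be collapsed into one sequence. The sketch below is a dry run: run() only echoes each command, so nothing is actually stopped; remove the echo to execute for real on your instance.

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

run ps -ef '|' grep lsnr      # 1) check the apps listener
run adcmctl.sh stop           # 2) stop the concurrent manager
run adalnctl.sh stop          # 3) stop the apps listener
run ps -ef '|' grep lsnr      # 4) confirm the listener is down
run adalnctl.sh start         # 5) start the apps listener
run adcmctl.sh start          # 6) start the concurrent manager
```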

======================================
UNDER WHICH MANAGER A REQUEST WAS RUN
=======================================

SELECT
b.user_concurrent_queue_name
FROM
fnd_concurrent_processes a
,fnd_concurrent_queues_vl b
,fnd_concurrent_requests c
WHERE 1=1
AND a.concurrent_queue_id = b.concurrent_queue_id
AND a.concurrent_process_id = c.controlling_manager
AND c.request_id = &request_id;

Concurrent Manager Scripts

Oracle supplies several useful scripts (located in the $FND_TOP/sql directory) for monitoring the concurrent managers:

afcmstat.sql Displays all the defined managers, their maximum capacity, PIDs, and their status.

afimchk.sql Displays the status of the ICM and the PMON method in effect, the ICM's log file, and determines whether the
concurrent manager monitor is running.

afcmcreq.sql Displays the concurrent manager that processed a request and the name of its log file.

afrqwait.sql Displays the requests that are pending, held, and scheduled.

afrqstat.sql Displays a summary of concurrent request execution time and status since a particular date.

afqpmrid.sql Displays the operating system process id of the FNDLIBR process based on a concurrent request id. The process
id can then be used with the ORADEBUG utility.

afimlock.sql Displays the process id and terminal that may be causing locks that the ICM and CRM are waiting to get. Run
this script if there are long delays when submitting jobs, or if you suspect the ICM is in a gridlock with another
Oracle process.

======================================

CONCURRENT MANAGER ERROR SCENARIOS


======================================

Managers down - Status shows "Target node/queue unavailable"

Concurrent manager status shows "Target node/queue unavailable" in the Concurrent > Manager > Administer form.

Solution:

Ensure the database is running and the middle tier services are down.

Connect to SQL*Plus as the APPS user and run the following:

EXEC FND_CONC_CLONE.SETUP_CLEAN;

COMMIT;

EXIT;

 Run AutoConfig on all tiers, first on the DB tier and then on the APPS and web tiers, to repopulate the required
system tables.
 Run the CMCLEAN.SQL script from the referenced note below (don't forget to commit).
 Note 134007.1 - 'CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables'
 Start the middle tier services, including your concurrent manager.
 Retest the issue.
Posted October 17, 2013 by balaoracledba.com in 11i/R12, Concurrent Manager, Issues, OracleAppsR12

Managers down - Status "System Hold, Fix Manager"

Concurrent manager status shows "System Hold, Fix Manager" in the Concurrent > Manager > Administer form.
Solution:

• Ensure the Concurrent: GSM Enabled profile is set to 'Y'

• Run $FND_TOP/patch/115/sql/afdcm037.sql

• Go to $FND_TOP/bin

adrelink.sh force=y "fnd FNDLIBR"
adrelink.sh force=y "fnd FNDSM"
adrelink.sh force=y "fnd FNDFS"
adrelink.sh force=y "fnd FNDCRM"
• Run cmclean.sql
• Start Application Service (adstrtal.sh)

R12 OPP (Output Post Processor) and Workflow Mailer down
The OPP manager and Workflow Mailer show a down status in the Concurrent > Manager > Administer form.

Solution:
• Ensure the Concurrent: GSM Enabled profile is set to 'Y'

• Verify Service Manager status in Administer Form.

• Verify Service Manager Definition.

• Ensure FNDSM Entries available in FND_CONCURRENT_QUEUES Table

• The FNDSM entry should be correct in the tnsnames.ora file, and tnsping FNDSM_<hostname> should work fine.
• Then bounce the services.
ORA-06512: at "APPS.FND_CP_FNDSM", line 29 - concurrent manager not starting

When I checked the concurrent manager log under $APPLCSF/log/<SID>.mgr, I saw the following error:

Cause: cleanup_node failed due to ORA-01427: single-row subquery returns more than one row
ORA-06512: at "APPS.FND_CP_FNDSM", line 29
ORA-06512: at line 1.
The SQL statement being executed at the time of
Routine AFPEIM encountered an error while starting concurrent manager
STANDARD with library /dev/applmgr/R12/apps/apps_st/appl/fnd/12.0.0/bin/FNDLIBR.

Check that your system has enough resources to start a concurrent manager process. Contact your syst : 08-OCT-2013
00:30:51

Starting IEU_WL_CS Concurrent Manager : 08-OCT-2013 00:30:51

Could not initialize the Service Manager FNDSM_apps01_dev. Verify that apps01 has been registered for concurrent processing.
ORACLE error 1427 in cleanup_node
Cause: cleanup_node failed due to ORA-01427: single-row subquery returns more than one row
ORA-06512: at "APPS.FND_CP_FNDSM", line 29
ORA-06512: at line 1.
The SQL statement being executed at the time of
Routine AFPEIM encountered an error while starting concurrent manager IEU_WL_CS with library
/dev/applmgr/R12/apps/apps_st/appl/fnd/12.0.0/bin/FNDLIBR.
Solution

----------

sqlplus apps/apps

SQL> exec fnd_conc_clone.setup_clean;
SQL> commit;
SQL> @cmclean.sql

Started the concurrent manager on the application tier and it worked.


Concurrent Processing - R12 Output Post Processor Service Not Coming Up

Reason:
The Service Manager for the node is not running. A possible cause is that the Service Manager definition is missing under the
Concurrent > Manager > Define form. If the Service Manager is not present/defined for a particular node, all the services
it provides, such as OPP and WF, will not work.

1. Shutdown all the services.

------ Step 2 below will create the Service Manager "FNDSM" ------

2. Log in as applmgr
cd to $FND_TOP/patch/115/sql
Run the script: afdcm037.sql
3. Relink the FNDSM and FNDLIBR executables as shown below:

$ adrelink.sh force=y link_debug=y "fnd FNDLIBR"
$ adrelink.sh force=y link_debug=y "fnd FNDSM"
4. Run cmclean.sql
5. Start up the managers/services


Output Post Processor is Down with Actual Processes 0 and Target Processes 1

If you see the OPP down with Actual Processes 0 and Target Processes 1, do the following:
1. Shut down the concurrent server via adcmctl.sh under $COMMON_TOP/admin/scripts/<context_name>
2. To ensure the concurrent manager is down, check that no FNDLIBR process is running:
ps -ef | grep applmgr | grep FNDLIBR
3. Run adadmin to relink FNDSVC executable.

a. Invoke adadmin from command prompt


b. Choose option 2 (Maintain Applications Files menu)
c. Choose option 1 (Relink Applications programs)
d. Type "FND" when prompted (Enter list of products to link ('all' for all products) [all] : FND)
e. Ensure adrelink exits with status 0
4. Start Concurrent Managers using adcmctl.sh
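Step 2's FNDLIBR check above can be wrapped in a small guard. A sketch; the [F] in the pattern keeps grep from matching its own command line:

```shell
# Report whether any FNDLIBR concurrent manager process is still alive.
check_fndlibr() {
  if ps -ef | grep '[F]NDLIBR' >/dev/null; then
    echo "FNDLIBR still running - wait or investigate before relinking"
  else
    echo "no FNDLIBR processes - safe to relink"
  fi
}
check_fndlibr
```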
In-built data purge concurrent programs

As per Metalink note 387459.1:

The ATG / FND supplied data purge requests are the following:
- Purge Concurrent Request and/or Manager Data [FNDCPPUR]
- Purge Obsolete Workflow Runtime Data [FNDWFPR]
- Purge Signon Audit data [FNDSCPRG.sql]
- Purge Obsolete Generic File Manager Data [FNDGFMPR]
- Purge Debug Log and System Alerts [FNDLGPRG]
- Purge Rule Executions [FNDDWPUR]
- Purge Concurrent Processing Setup Data for Cloning [FNDCPCLN]

Metalink Note 732713.1 describes the purging strategy for E-Business Suite 11i:

There is no single archive/purge routine called by all modules within E-Business Suite; instead, each module has
module-specific archive/purge procedures.
Concurrent Jobs to purge data

 Purge Obsolete Workflow Runtime Data (FNDWFPR)

Oracle Applications System Administrator’s Guide - Maintenance Release 11i (Part No. B13924-04)
Note 132254.1 Speeding up and Purging Workflow
Note 277124.1 FAQ on Purging Oracle Workflow Data
Note 337923.1 A closer examination of the Concurrent Program Purge Obsolete Workflow Runtime Data

 Purge Debug Log and System Alerts (FNDLGPRG)

Note 332103.1 Purge Debug Log And System Alerts Performance Issues

 Purge Signon Audit data (FNDSCPRG)

Note 1016344.102 What Tables Does the Purge Signon Audit Data Concurrent Program Affect?
Note 388088.1 How To Clear The Unsuccessful Logins

 Purge Concurrent Request and/or Manager Data (FNDCPPUR)

Oracle Applications System Administrator’s Guide - Maintenance Release 11i (Part No. B13924-04)
Note 565942.1 Which Table Column And Timing Period Does The FNDCPPUR Purge Program Use
Note 104282.1 Concurrent Processing Tables and Purge Concurrent Request and/or Manager Data Program (FNDCPPUR)
Note 92333.1 How to Optimize the Process of Running Purge Concurrent Request and/or Manager Data (FNDCPPUR)

 Delete Diagnostic Logs (DELDIAGLOG)

Note 466593.1 How To Delete Diagnostic Logs and Statistics?

 Delete Diagnostic Statistics (DELDIAGSTAT)

Note 466593.1 How To Delete Diagnostic Logs and Statistics?

 Purge FND_STATS History Records (FNDPGHST)

Oracle Applications System Administrator’s Guide - Configuration Release 11i (Part No. B13925-06)
Note 423177.1 Date Parameters For "Purge Fnd_stats History Records" Do Not Auto-Increment

 Page Access Tracking Purge Data (PATPURGE)


Note 413795.1 Page Access Tracking Data Purge Concurrent Request Fails With Ora-942
Note 461897.1 Which Tables store the Page Access Tracking Data?
Note 402116.1 Page Access Tracking in Oracle Applications Release 12

 Purge Obsolete Generic File Manager Data (FNDGFMPR)

Oracle Applications System Administrator’s Guide - Configuration Release 11i (Part No. B13925-06)
Note 298698.1 Avoiding abnormal growth of FND_LOBS table in Application
Note 555463.1 How to Purge Generic or Purchasing Attachments from the FND_LOBS Table

 Summarize and Purge Concurrent Request Statistics (FNDCPCRS)

(no references found)

 Purge Inactive Sessions (ICXDLTMP)

Note 397118.1 Where Is 'Delete Data From Temporary Table' Concurrent Program - ICXDLTMP.SQL

 Purge Obsolete ECX Data (FNDECXPR)

Note 553711.1 Purge Obsolete Ecx Data Error ORA-06533: Subscript Beyond Count
Note 338523.1 Cannot Find ''Purge Obsolete Ecx Data'' Concurrent Request
Note 444524.1 About Oracle Applications Technology ATG_PF.H Rollup 6

 Purge Rule Executions (FNDDWPURG)

(no references found)

Additional Notes

You can monitor and run purging programs through OAM by navigating to the Site Map > Maintenance > Purge section.

This note also references the white paper in Note 752322.1, "Reducing Your Oracle E-Business Suite Data Footprint using
Archiving, Purging, and Information Lifecycle Management".

======================================
ORA-01102: cannot mount database in EXCLUSIVE mode
Check for ORACLE_SID-related processes already running:

ps -ef | grep ora_ | grep $ORACLE_SID

Kill all the running processes and then start up.

ORA-01102: cannot mount database in exclusive mode

Cause: An instance tried to mount the database in exclusive mode, but some other instance has already mounted the database in
exclusive or parallel mode.

Action: Either mount the database in parallel mode or shut down all other instances before mounting the database in exclusive
mode.
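The check-then-kill step above can be sketched as follows; the kill itself is deliberately left commented out because it is destructive, and ORCL is only a placeholder SID:

```shell
# List leftover Oracle background processes (ora_*) for a given SID.
# The [o] keeps grep from matching its own command line.
check_ora() {
  ps -ef | grep "[o]ra_" | grep "$1" || echo "no ora_$1 processes found"
}
check_ora "${ORACLE_SID:-ORCL}"
# Destructive cleanup, only after confirming the instance is really dead:
# ps -ef | grep "[o]ra_" | grep "$ORACLE_SID" | awk '{print $2}' | xargs kill -9
```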

======================================

RMAN ERROR WHILE RESTORING


ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below

ORA-01152: file 1 was not restored from a sufficiently old backup

ORA-01110: data file 1: '/package/oracle/oradata/perseus/system01.dbf'

"File 1 was not restored from a sufficiently old backup"


in RMAN Recover

RMAN> recover database;

starting media recovery

Oracle Error:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below

ORA-01152: file 1 was not restored from a sufficiently old backup


ORA-01110: data file 1: '/package/oracle/oradata/perseus/system01.dbf'

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of recover command at 02/15/2012 11:09:12

RMAN-06053: unable to perform media recovery because of missing log

RMAN-06025: no backup of archived log for thread 1 with sequence 41765 and starting SCN of
9738413586917 found to restore

RMAN-06025: no backup of archived log for thread 1 with sequence 41764 and starting SCN of
9738413585738 found to restore

RMAN-06025: no backup of archived log for thread 1 with sequence 41763 and starting SCN of
9738413584155 found to restore

RMAN-06025: no backup of archived log for thread 1 with sequence 41762 and starting SCN of
9738413582950 found to restore

...

RMAN-06025: no backup of archived log for thread 1 with sequence 41734 and starting SCN of
9738413520883 found to restore

RMAN-06025: no backup of archived log for thread 1 with sequence 41733 and starting SCN of
9738413519245 found to restore

RMAN-06025: no backup of archived log for thread 1 with sequence 41732 and starting SCN of
9738413518015 found to restore

RMAN-06025: no backup of archived log for thread 1 with sequence 41731 and starting SCN of
9738413516741 found to restore

RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of alter db command at 02/15/2012 11:28:44

ORA-01152: file 1 was not restored from a sufficiently old backup

ORA-01110: data file 1: '/package/oracle/oradata/perseus/system01.dbf'


RMAN> list backup of archivelog all;

List of Backup Sets

===================

BS Key Size Device Type Elapsed Time Completion Time

------- ---------- ----------- ------------ ---------------

16198481 73.00K DISK 00:00:00 11-FEB-12

BP Key: 16198488 Status: AVAILABLE Compressed: YES Tag: SAT

Piece Name: /package/oracle/orabackup/rman/rman_PERSEUS_arc_20120211_4644_1

List of Archived Logs in backup set 16198481

Thrd Seq Low SCN Low Time Next SCN Next Time

---- ------- ---------- --------- ---------- ---------

1 41584 9738413221153 11-FEB-12 9738413222321 11-FEB-12

...

1 41724 9738413502482 12-FEB-12 9738413503782 12-FEB-12

1 41725 9738413503782 12-FEB-12 9738413505258 12-FEB-12

1 41726 9738413505258 12-FEB-12 9738413509317 12-FEB-12

1 41727 9738413509317 12-FEB-12 9738413513782 12-FEB-12

BS Key Size Device Type Elapsed Time Completion Time

------- ---------- ----------- ------------ ---------------

16205673 11.50K DISK 00:00:01 12-FEB-12

BP Key: 16205679 Status: AVAILABLE Compressed: YES Tag: SUN

Piece Name: /package/oracle/orabackup/rman/rman_PERSEUS_arc_20120212_4653_1

List of Archived Logs in backup set 16205673

Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------

1 41730 9738413516668 12-FEB-12 9738413516741 12-FEB-12

RMAN> recover database until sequence 41730;

Starting recover at 15-FEB-12

using channel ORA_DISK_1

using channel ORA_DISK_2

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of recover command at 02/15/2012 11:38:42

RMAN-06556: datafile 1 must be restored from backup older than SCN 9738413516668

We need to tell RMAN where to stop: RECOVER ... UNTIL SEQUENCE stops before the given sequence, so UNTIL SEQUENCE 41731 applies logs through 41730.

RMAN> recover database until sequence 41731;

Starting recover at 15-FEB-12

using channel ORA_DISK_1

using channel ORA_DISK_2

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination

channel ORA_DISK_1: restoring archived log

archived log thread=1 sequence=41730

channel ORA_DISK_1: reading from backup piece


/package/oracle/orabackup/rman/rman_PERSEUS_arc_20120212_4653_1

channel ORA_DISK_1: piece


handle=/package/oracle/orabackup/rman/rman_PERSEUS_arc_20120212_4653_1 tag=SUN

channel ORA_DISK_1: restored backup piece 1

channel ORA_DISK_1: restore complete, elapsed time: 00:00:01


archived log file name=/oradb/archive/perseus/archive1_41730_729171422.dbf thread=1
sequence=41730

media recovery complete, elapsed time: 00:00:01

Finished recover at 15-FEB-12

RMAN> alter database open resetlogs;

database opened

new incarnation of database registered in recovery catalog

starting full resync of recovery catalog

full resync complete

How does this happen? What causes the "datafile 1 must be restored from backup" error?

I found an excellent explanation here. According to this article, RMAN won't back up the archivelogs generated after the start of
the RMAN backup script run.
We switch logs every 10 minutes, so it is very likely that a new archivelog is generated during this period.
What happens when executing adpreclone.pl on the DB and Apps tiers?
adpreclone.pl - This is the preparation phase. It collects information about the source system, creates a cloning stage area, and
generates the templates and drivers needed to reconfigure the instance on a target machine.
Preclone does the following:

Convert symbolic links

All symbolic links pointing to a static path are converted into relative paths.

Create templates
Any files under $ORACLE_HOME that contain system-specific information are replicated and converted into templates.
These templates are placed in the $ORACLE_HOME/appsutil/template directory.

Create driver(s)
A driver file relating to these new templates, called instconf.drv, is created. It contains a list of all the templates and their
locations, and the destination configuration files that these templates will create. It is placed in the
$ORACLE_HOME/appsutil/driver directory.

Create stage area

A clone stage is created containing the java code and scripts required to reconfigure the instance on the target machine.

Rapid Clone stage area:

dbTier: $ORACLE_HOME/appsutil/clone
appsTier(s): $COMMON_TOP/clone

The stage area(s) consist of the following directories:

jre - used to run the java code on the target machine.
bin - contains the Rapid Clone scripts that can be run on the target machine:

 adclone.pl is the main cloning script
 adcfgclone.pl is used to configure the target system; this calls adclone.pl
 adclonectx.pl is used to clone a source XML file manually
 adaddnode.pl is used to add a new node to the Patch History tables
 adchkutl.sh checks for the existence of required OS utilities: cc, make, ar and ld

jlib - contains all the Rapid Clone java code, JDBC libraries, etc.
context - contains templates used for a target XML file
data (database tier only) - contains the driver file and templates used to generate the control file SQL script
adcrdb.zip - contains the template and list of datafiles on the source
addbhomsrc.xml - contains information on the datafile mount points of the source
appl (applications tier only) - used when merging APPL_TOPs, i.e. multi-node to single-node cloning

Executing adpreclone.pl will create a log file:

Rapid Clone:
dbTier: $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/StageDBTier_xxxxxx.log
appsTier: $APPL_TOP/admin/$CONTEXT_NAME/log/StageAppsTier_xxxxxx.log
Once this adpreclone.pl step has been completed successfully, all the java .class files under the following directories should be
identical to those under $JAVA_TOP/oracle :

RDBMS $ORACLE_HOME/appsutil/java/oracle
RDBMS $ORACLE_HOME/appsutil/clone/jlib/java/oracle
$COMMON_TOP/clone/jlib/java/oracle
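As a quick reference, the usual adpreclone.pl invocation per tier can be captured in a tiny helper that just prints the command; the paths follow the standard R12 layout described above, so adjust for your instance:

```shell
# Print the adpreclone.pl command for the requested tier.
preclone_cmd() {
  case "$1" in
    dbTier)   echo 'perl $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/adpreclone.pl dbTier' ;;
    appsTier) echo 'perl $ADMIN_SCRIPTS_HOME/adpreclone.pl appsTier' ;;
    *)        echo 'usage: preclone_cmd dbTier|appsTier' ;;
  esac
}
preclone_cmd dbTier
preclone_cmd appsTier
```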

$ # stop the database and listener (>= 10g)
$ dbshut $ORACLE_HOME

Oracle datafile size

What is the limit on Oracle datafile size?

It depends on two factors:
i. the OS and ii. the database block size (DB_BLOCK_SIZE) parameter.
On a 32-bit OS, you can create datafiles of up to 2 GB to 4 GB.
The impact of the DB_BLOCK_SIZE parameter on the datafile size limit is as follows:
For a smallfile tablespace, a single datafile can hold up to 2^22 (about 4 million) blocks, which means:
with DB_BLOCK_SIZE=4k, the max file size is 4k * 4M = 16 GB
with DB_BLOCK_SIZE=8k, the max file size is 8k * 4M = 32 GB
with DB_BLOCK_SIZE=16k, the max file size is 16k * 4M = 64 GB, and so on.

For a bigfile tablespace (a 10g feature), a single datafile can hold up to 2^32 (about 4 billion) blocks, which means:
with DB_BLOCK_SIZE=4k, the max file size is 4k * 4G = 16 TB
with DB_BLOCK_SIZE=8k, the max file size is 8k * 4G = 32 TB
with DB_BLOCK_SIZE=16k, the max file size is 16k * 4G = 64 TB, and so on.
Other limits can be found in the Oracle reference documentation.
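The arithmetic behind these limits is simply max blocks times block size: 2^22 blocks for a smallfile tablespace, 2^32 for a bigfile one. A sketch (requires a shell with 64-bit arithmetic):

```shell
# Max datafile size in GB, given block size in KB and tablespace type.
max_file_gb() {
  if [ "$2" = bigfile ]; then blocks=$((1 << 32)); else blocks=$((1 << 22)); fi
  echo $(( $1 * 1024 * blocks / 1024 / 1024 / 1024 ))
}
max_file_gb 8 smallfile   # prints 32 (GB)
max_file_gb 8 bigfile     # prints 32768 (GB, i.e. 32 TB)
```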

AutoConfig-Managed AD Utility Files

File name / Location / Description:

adconfig.txt - $APPL_TOP/admin - Contains environment information used by all AD utilities.
Warning: Do not update this file manually.

<CONTEXT_NAME>.env (UNIX) or <CONTEXT_NAME>.cmd (Windows) - $INST_TOP/ora/10.1.3 - Used to configure the environment
when performing maintenance operations on the OracleAS 10.1.3 ORACLE_HOME.

<CONTEXT_NAME>.env (UNIX) or <CONTEXT_NAME>.cmd (Windows) - RDBMS ORACLE_HOME - Used to configure the environment
when performing maintenance operations on the database.

APPS<CONTEXT_NAME>.env (UNIX) or APPS<CONTEXT_NAME>.cmd (Windows) - APPL_TOP - Named APPSORA in earlier releases;
this file calls the environment files needed to set up the APPL_TOP and the Applications ORACLE_HOME.

<CONTEXT_NAME>.env (UNIX) or <CONTEXT_NAME>.cmd (Windows) - APPL_TOP - Called by APPS<CONTEXT_NAME>.env (UNIX)
or APPS<CONTEXT_NAME>.cmd (Windows) to set up the APPL_TOP. This file calls either adovars.env (UNIX) or adovars.cmd
(Windows).

<CONTEXT_NAME>.env (UNIX) or <CONTEXT_NAME>.cmd (Windows) - $INST_TOP/ora/10.1.2 - Called by APPS<CONTEXT_NAME>.env
(UNIX) or APPS<CONTEXT_NAME>.cmd (Windows) to set up the OracleAS 10.1.2 ORACLE_HOME.

adovars.env (UNIX) or adovars.cmd (Windows) - APPL_TOP/admin - Called by the <CONTEXT_NAME>.env (UNIX) or
<CONTEXT_NAME>.cmd (Windows) file located in the APPL_TOP. Used to set environment variables for Java and HTML.

The following configuration and environment files are also used by most AD utilities, but are not created by
AutoConfig.

Warning: Do not update any of these files manually.

Non-AutoConfig AD Utility Files


File name / Location / Description:

applora.txt - APPL_TOP/admin - Contains information about required init.ora parameters for runtime.
applorau.txt - APPL_TOP/admin - Contains information about required init.ora parameters for install and upgrade.
applprod.txt - APPL_TOP/admin - The AD utilities product description file, used to identify all products and product
dependencies.
applterr.txt - APPL_TOP/admin - The AD utilities territory description file. It contains information on all supported
territories and localizations.
fndenv.env - FND_TOP - Sets additional environment variables used by Oracle Application Object Library. The default
values should be applicable for all customers.

# When we execute the env file in $APPL_TOP, it calls adovars.env located at $APPL_TOP/admin

MAINTENANCE MODE (ADADMIN)


WHEN YOU ARE GOING TO INSTALL A PATCH ON THE APPLICATION, THE RECOMMENDED OPTION IS TO ENABLE
MAINTENANCE MODE, WHICH BRINGS THE APPLICATION INTO MAINTENANCE MODE. WHEN YOU ENABLE OR DISABLE
MAINTENANCE MODE, ADADMIN WILL EXECUTE THE FOLLOWING SCRIPT.

ENABLE MAINTENANCE MODE:

SQL> @$AD_TOP/patch/115/sql/adsetmmd.sql ENABLE

DISABLE MAINTENANCE MODE:

SQL> @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE

TO VERIFY WHETHER THE ENVIRONMENT IS IN MAINTENANCE MODE, EXECUTE THE FOLLOWING SCRIPT:

SELECT FND_PROFILE.VALUE('APPS_MAINTENANCE_MODE') AS STATUS FROM DUAL;

IF THE STATUS IS:
"MAINT" = MAINTENANCE MODE HAS BEEN ENABLED AND USERS WILL NOT BE ABLE TO LOG IN.
"NORMAL" = MAINTENANCE MODE HAS BEEN DEACTIVATED AND USERS WILL BE ABLE TO LOG IN.

HOW TO ENABLE MAINTENANCE MODE

SET THE ENVIRONMENT ON THE APPLICATION INSTANCE.
RUN THE AD ADMINISTRATION UTILITY BY TYPING ADADMIN IN A CONSOLE WINDOW. CHOOSE OPTION 5 FROM THE
SELECTION MENU:

1. GENERATE APPLICATIONS FILE MENU.
2. MAINTAIN APPLICATIONS FILE MENU.
3. COMPILE/RELOAD APPLICATIONS DATABASE ENTITIES MENU.
4. MAINTAIN APPLICATIONS DATABASE ENTITIES MENU.
5. CHANGE MAINTENANCE MODE.
6. EXIT AD ADMINISTRATION.

Oracle Applications patching maintenance mode:


Why we need to put the system in maintenance mode when applying a patch in Oracle Applications:
While applying a patch, it is not mandatory to bring down all the application services unless the patch README says so.
The purpose of maintenance mode is to prevent end users from logging in to the application during patching.
As per MOS Note 233044.1:
Maintenance mode provides a clear separation between normal runtime operation of Oracle Applications
and system downtime for maintenance. Enabling the maintenance mode feature shuts down the Workflow
Business Events System and sets up function security so that no Oracle Applications functions are
available to users. Used only during AutoPatch sessions, maintenance mode ensures optimal performance
and reduces downtime when applying a patch. For more information, refer to Preparing your System for
Patching in Oracle Applications Maintenance Utilities.

Processes
Oracle uses many small (focused) processes to manage and control the Oracle instance. This allows for optimum execution on
multi-processor systems using multi-core and multi-threaded technology. Some of these processes include:

 PMON - Process Monitor


 SMON - System Monitor
 ARCn - Redo Log Archiver
 LGWR - Redo Log Writer
 DBWn - Database Writer
 CKPT - Checkpoint process
 RECO - Recoverer
 CJQn - Job Queue Coordinator
 QMNn - Queue-monitor processes
 Dnnn - Dispatcher Processes (multiplex server-processes on behalf of users)
 Snnn - Shared server processes (serve client-requests)
 MMAN - Memory Manager process, which helps with automatic memory management when using SGA_TARGET or MEMORY_TARGET
 LSP0 - Logical standby coordinator process (controls Data Guard log-application)
 MRP - Media-recovery process (detached recovery-server process)
 MMON - The process that writes to the AWR base tables, i.e. the WR$ tables
 MMNL - Memory monitor light (gathers and stores AWR statistics)
 PSP0 - Process-spawner (spawns Oracle processes)
 RFS - Remote file server process (archive to a remote site)
 DBRM - DB resource manager (new in 11g)
 DIAGn - Diagnosability process (new in 11g)
 FBDA - Flashback data archiver process (new in 11g)
 VKTM - Virtual Timekeeper (new in 11g)
 Wnnn - Space Management Co-ordination process (new in 11g)
 SMCn - Space Manager process (new in 11g)
ERROR MESSAGE during adcfgclone when starting the listener:

System parameter file is /Test/GUICTEST/db/tech_st/11.1.0/network/admin/GUICTEST_iggp14/listener.ora

Log messages written to


/Test/GUICTEST/db/tech_st/11.1.0/admin/GUICTEST_iggp14/diag/tnslsnr/IGGP14/guictest/alert/log.xml

Error listening on: (ADDRESS=(PROTOCOL=ipc)(PARTIAL=yes)(QUEUESIZE=1))

No longer listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=IGGP14.ap.corp)(PORT=1571)))

TNS-12546: TNS:permission denied

TNS-12560: TNS:protocol adapter error

TNS-00516: Permission denied

Linux Error: 13: Permission denied

Solution:

1. Check that you have the correct ORACLE_HOME, ORACLE_SID and PATH environment settings.

2. Check whether the /tmp/.oracle and /var/tmp/.oracle directories exist.

3. Check the permissions on those directories for the user trying to start the listener.

mkdir /var/tmp/.oracle

mkdir /tmp/.oracle

chown -R oracle:oinstall /var/tmp/.oracle /tmp/.oracle

chmod -R 01777 /var/tmp/.oracle /tmp/.oracle
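A quick pre-check before retrying the listener, covering the two socket directories named above:

```shell
# Report whether the TNS IPC socket directories exist.
check_socket_dir() {
  for d in /tmp/.oracle /var/tmp/.oracle; do
    if [ -d "$d" ]; then echo "$d exists"; else echo "$d missing"; fi
  done
}
check_socket_dir
```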

Not able to connect to a server via PuTTY:

[root@manny ~]# ls -ld /var/empty/sshd/

drwxrwxrwx. 2 root root 4096 Aug 12 2010 /var/empty/sshd/

[root@manny ~]# chmod 711 /var/empty/sshd/

[root@manny ~]# /etc/init.d/sshd restart

Change the permissions of /var/empty/sshd to 711; sshd refuses to start while its privilege-separation directory is group- or
world-accessible.

Which process updates the controlfile when doing complete recovery of it?

Unfortunately, the most-voted answer was an incorrect option. The correct answer is the server process.

Many DBAs don't know that we can perform complete recovery even when the controlfile is lost (I even had a good argument
with a friend about this on my blog).

If you want to know how to do complete recovery, see the link below:

http://pavandba.wordpress.com/2010/03/18/how-to-do-complete-recovery-if-controlfiles-are-lost/

By reading the above post, you might have noticed that we create a new controlfile. In such cases, to open the database, the
controlfile must hold the latest SCN so that it matches the datafiles and redolog files. If it doesn't match, the open will fail.
So the server process takes responsibility for updating the controlfile with the latest SCN, and this information is taken
from the datafiles.

How to do complete recovery if controlfiles are lost


Let's see the steps to perform a complete recovery of the database if we lose all the controlfiles.

1. Take a trace of the controlfile using the command below:

SQL> alter database backup controlfile to trace;
Note: The above command only works if the database is still up and running. If not, you need the latest controlfile
trace. If that is not available but you still have all the redolog and datafile information, you can take a trace from
another database and modify the names, paths and sizes of the redolog files and datafiles.
2. From the controlfile trace, copy the second CREATE CONTROLFILE command (through the character set clause) into another
text file and save it with a .sql extension (I generally save it as create_control.sql).
3. Change the RESETLOGS option to NORESETLOGS in that sql file.
4. SQL> shutdown immediate;
5. SQL> startup nomount;
6. SQL> @create_control.sql (your current directory should be the location of this file, or you can give the path before
the file name)
Note: This creates the controlfile and places the database in MOUNT state. If any errors are observed, we need to
debug them.
7. SQL> alter database open;

Tuning Oracle's Buffer Cache


Roger Schrag, Database Specialists, Inc.
http://www.dbspecialists.com

Introduction

Oracle maintains its own buffer cache inside the system global area (SGA) for each instance. A properly
sized buffer cache can usually yield a cache hit ratio over 90%, meaning that nine requests out of ten are
satisfied without going to disk.

If a buffer cache is too small, the cache hit ratio will be small and more physical disk I/O will result. If a
buffer cache is too big, then parts of the buffer cache will be under-utilized and memory resources will be
wasted.

Checking The Cache Hit Ratio

Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall
buffer cache hit ratio for the entire instance since it was started:
SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sysstat P1, v$sysstat P2, v$sysstat P3
WHERE P1.name = 'db block gets'
AND P2.name = 'consistent gets'
AND P3.name = 'physical reads'
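The same arithmetic can be checked client-side once you have the three counter values; a minimal Python sketch (the counter values below are illustrative, not from a real instance):

```python
def buffer_cache_hit_ratio(db_block_gets, consistent_gets, physical_reads):
    """Hit ratio = (logical reads - physical reads) / logical reads."""
    logical_reads = db_block_gets + consistent_gets
    return (logical_reads - physical_reads) / logical_reads

# Illustrative counter values, as read from v$sysstat
ratio = buffer_cache_hit_ratio(db_block_gets=150_000,
                               consistent_gets=850_000,
                               physical_reads=50_000)
print(round(ratio, 2))  # 0.95
```

A ratio of 0.95 here means 95% of block requests were satisfied from the cache.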

You can also see the buffer cache hit ratio for one specific session since that session started:
SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
v$sesstat P3, v$statname N3
WHERE N1.name = 'db block gets'
AND P1.statistic# = N1.statistic#
AND P1.sid = <enter SID of session here>
AND N2.name = 'consistent gets'
AND P2.statistic# = N2.statistic#
AND P2.sid = P1.sid
AND N3.name = 'physical reads'
AND P3.statistic# = N3.statistic#
AND P3.sid = P1.sid

You can also measure the buffer cache hit ratio between time X and time Y by collecting statistics at times
X and Y and computing the deltas.
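Differencing two snapshots gives the interval ratio; a small Python sketch, assuming you have captured the three counters at times X and Y (the snapshot values are made up):

```python
def interval_hit_ratio(snap_x, snap_y):
    """Hit ratio between two snapshots of the v$sysstat counters.

    Each snapshot is a dict of the cumulative 'db block gets',
    'consistent gets', and 'physical reads' counters; snap_y is the
    later snapshot.
    """
    delta = {k: snap_y[k] - snap_x[k] for k in snap_x}
    logical = delta['db block gets'] + delta['consistent gets']
    return (logical - delta['physical reads']) / logical

# Made-up snapshots at time X and time Y
snap_x = {'db block gets': 100, 'consistent gets': 900, 'physical reads': 100}
snap_y = {'db block gets': 300, 'consistent gets': 1700, 'physical reads': 150}
print(interval_hit_ratio(snap_x, snap_y))  # 0.95
```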

Adjusting The Size Of The Buffer Cache

The db_block_buffers parameter in the parameter file determines the size of the buffer cache for the
instance. The size of the buffer cache (in bytes) is equal to the value of the db_block_buffers parameter
multiplied by the data block size.

You can change the size of the buffer cache by editing the db_block_buffers parameter in the parameter file
and restarting the instance.
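The sizing arithmetic can be sketched as follows (the buffer count and block size are example values):

```python
def buffer_cache_bytes(db_block_buffers, db_block_size):
    """Buffer cache size in bytes = number of buffers * data block size."""
    return db_block_buffers * db_block_size

# e.g. 10,000 buffers of 8 KB each is roughly 78 MB of buffer cache
size = buffer_cache_bytes(10_000, 8192)
print(size // (1024 * 1024))  # 78
```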

Determining If The Buffer Cache Should Be Enlarged

If you set the db_block_lru_extended_statistics parameter to a positive number in the parameter file for an
instance and restart the instance, Oracle will populate a dynamic performance view called v$recent_bucket.
This view will contain the same number of rows as the setting of the db_block_lru_extended_statistics
parameter. Each row will indicate how many additional buffer cache hits there might have been if the buffer
cache were that much bigger.

For example, if you set db_block_lru_extended_statistics to 1000 and restart the instance, you can see how
the buffer cache hit ratio would have improved if the buffer cache were one buffer bigger, two buffers
bigger, and so on up to 1000 buffers bigger than its current size. Following is a query you can use, along
with a sample result:
SELECT 250 * TRUNC (rownum / 250) + 1 || ' to ' ||
250 * (TRUNC (rownum / 250) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$recent_bucket
GROUP BY TRUNC (rownum / 250)

Interval Buffer Cache Hits


--------------- --------------------
1 to 250 16083
251 to 500 11422
501 to 750 683
751 to 1000 177

This result set shows that enlarging the buffer cache by 250 buffers would have resulted in 16,083 more
hits. If there were about 30,000 hits in the buffer cache at the time this query was performed, then it would
appear that adding 500 buffers to the buffer cache might be worthwhile. Adding more than 500 buffers
might lead to under-utilized buffers and therefore wasted memory.
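The decision process on the sample result can be reproduced by accumulating the per-interval hits; a short Python sketch using the numbers from the result set above:

```python
# Per-interval additional hits from the sample v$recent_bucket result
intervals = [('1 to 250', 16083), ('251 to 500', 11422),
             ('501 to 750', 683), ('751 to 1000', 177)]

total = 0
for label, hits in intervals:
    total += hits
    print(f'grow to +{label.split()[-1]} buffers -> {total} cumulative extra hits')

# Growing by 500 buffers captures most of the benefit;
# going beyond 500 adds comparatively little.
first_500 = 16083 + 11422   # 27505 extra hits from the first 500 buffers
beyond_500 = 683 + 177      # only 860 more from the next 500
```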
There is overhead involved in collecting extended LRU statistics. Therefore you should set the
db_block_lru_extended_statistics parameter back to zero as soon as your analysis is complete.

In Oracle7, the v$recent_bucket view was named X$KCBRBH. Only the SYS user can query X$KCBRBH.
Also note that in X$KCBRBH the columns are called indx and count, instead of rownum and count.

Determining If The Buffer Cache Is Bigger Than Necessary

If you set the db_block_lru_statistics parameter to true in the parameter file for an instance and restart the
instance, Oracle will populate a dynamic performance view called v$current_bucket. This view will contain
one row for each buffer in the buffer cache, and each row will indicate how many of the overall cache hits
have been attributable to that particular buffer.

By querying v$current_bucket with a GROUP BY clause, you can get an idea of how well the buffer cache
would perform if it were smaller. Following is a query you can use, along with a sample result:
SELECT 1000 * TRUNC (rownum / 1000) + 1 || ' to ' ||
1000 * (TRUNC (rownum / 1000) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$current_bucket
WHERE rownum > 0
GROUP BY TRUNC (rownum / 1000)

Interval Buffer Cache Hits


------------ -----------------
1 to 1000 668415
1001 to 2000 281760
2001 to 3000 166940
3001 to 4000 14770
4001 to 5000 7030
5001 to 6000 959

This result set shows that the first 3000 buffers are responsible for over 98% of the hits in the buffer cache.
This suggests that the buffer cache would be almost as effective if it were half the size; memory is being
wasted on an oversized buffer cache.
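The 98% figure falls out of a cumulative sum over the sample intervals; a quick Python check using the numbers above:

```python
# Per-interval hits from the sample v$current_bucket result
hits = [668415, 281760, 166940, 14770, 7030, 959]   # 1-1000 ... 5001-6000

total = sum(hits)            # all buffer cache hits in the sample
first_3000 = sum(hits[:3])   # hits attributable to buffers 1-3000
share = first_3000 / total
print(f'first 3000 buffers account for {share:.1%} of hits')
```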

There is overhead involved in collecting LRU statistics. Therefore you should set the
db_block_lru_statistics parameter back to false as soon as your analysis is complete.

In Oracle7, the v$current_bucket view was named X$KCBCBH. Only the SYS user can query
X$KCBCBH. Also note that in X$KCBCBH the columns are called indx and count, instead of rownum and
count.

Full Table Scans


When Oracle performs a full table scan of a large table, the blocks are read into the buffer cache but placed
at the least recently used end of the LRU list. This causes the blocks to be aged out quickly, and prevents
one large full table scan from wiping out the entire buffer cache.

Full table scans of large tables usually result in physical disk reads and a lower buffer cache hit ratio. You
can get an idea of full table scan activity at the data file level by querying v$filestat and joining to
SYS.dba_data_files. Following is a query you can use and sample results:
SELECT A.file_name, B.phyrds, B.phyblkrd
FROM SYS.dba_data_files A, v$filestat B
WHERE B.file# = A.file_id
ORDER BY A.file_id

FILE_NAME PHYRDS PHYBLKRD


-------------------------------- ---------- ----------
/u01/oradata/PROD/system01.dbf 92832 130721
/u02/oradata/PROD/temp01.dbf 1136 7825
/u01/oradata/PROD/tools01.dbf 7994 8002
/u01/oradata/PROD/users01.dbf 214 214
/u03/oradata/PROD/rbs01.dbf 20518 20518
/u04/oradata/PROD/data01.dbf 593336 9441037
/u05/oradata/PROD/data02.dbf 4638037 4703454
/u06/oradata/PROD/index01.dbf 1007638 1007638
/u07/oradata/PROD/index02.dbf 1408270 1408270

PHYRDS shows the number of reads from the data file since the instance was started. PHYBLKRD shows
the actual number of data blocks read. Usually blocks are requested one at a time. However, Oracle requests
blocks in batches when performing full table scans. (The db_file_multiblock_read_count parameter controls
this batch size.)

In the sample result set above, there appears to be quite a bit of full table scan activity in the data01.dbf data
file, since 593,336 read requests have resulted in 9,441,037 actual blocks read.
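A blocks-per-read ratio makes the full table scan pattern easy to spot; a small Python sketch using the figures from the sample output:

```python
def blocks_per_read(phyrds, phyblkrd):
    """Average data blocks returned per read request. Values near 1
    suggest single-block reads; much higher values suggest multiblock
    reads from full table scans."""
    return phyblkrd / phyrds

# Figures from the sample v$filestat output
print(round(blocks_per_read(593336, 9441037), 1))   # data01.dbf: high ratio
print(round(blocks_per_read(1007638, 1007638), 1))  # index01.dbf: 1.0
```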

Spotting I/O Intensive SQL Statements

The v$sqlarea dynamic performance view contains one row for each SQL statement currently in the shared
SQL area of the SGA for the instance. v$sqlarea shows the first 1000 bytes of each SQL statement, along
with various statistics. Following is a query you can use:
SELECT executions, buffer_gets, disk_reads,
first_load_time, sql_text
FROM v$sqlarea
ORDER BY disk_reads

EXECUTIONS indicates the number of times the SQL statement has been executed since it entered the
shared SQL area. BUFFER_GETS indicates the collective number of logical reads issued by all executions
of the statement. DISK_READS shows the collective number of physical reads issued by all executions of
the statement. (A logical read is a read that resulted in a cache hit or a physical disk read. A physical read is
a read that resulted in a physical disk read.)

You can review the results of this query to find SQL statements that perform lots of reads, both logical and
physical. Consider how many times a SQL statement has been executed when evaluating the number of
reads.
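Normalizing by execution count, as the last sentence suggests, can be sketched as follows (the v$sqlarea figures are hypothetical):

```python
def reads_per_execution(executions, buffer_gets, disk_reads):
    """Normalize v$sqlarea counters by execution count, so that a
    frequently run statement is not unfairly flagged as I/O intensive."""
    return buffer_gets / executions, disk_reads / executions

# Hypothetical v$sqlarea row: 1000 executions, 52,000 logical reads,
# 4,000 physical reads
logical, physical = reads_per_execution(1000, 52000, 4000)
print(logical, physical)  # 52.0 logical and 4.0 physical reads per execution
```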

Conclusion

This brief document gives you the basic information you need in order to optimize the buffer cache size for
your Oracle database. Also, you can zero in on SQL statements that cause a lot of I/O, and data files that
experience a lot of full table scans.

Why alter system kill session IMMEDIATE is good


I am pretty sure many of us have come across situations where a session killed with the 'alter system kill session' command stays in 'KILLED' status and is not released for a long time. This can happen because the session is rolling back its ongoing transaction.
In such situations, we generally try to find the OS pid (on UNIX) associated with the killed session (which is a bit difficult, as the killed session's paddr in v$session changes while the corresponding addr value in v$process does not), and kill the associated OS process with 'kill -9' at the OS level.
I have found the IMMEDIATE option of 'alter system kill session' more useful, as it writes the following information to the alert.log file after killing the session and also tries to close the session from the database as quickly as possible:

Wed Feb 10 11:02:39 2010


Immediate Kill Session#: 515, Serial#: 36366
Immediate Kill Session: sess: c0000001be20d9f0 OS pid: 14686

As you can see, it writes the timestamp when the session was killed and also gives the associated OS pid of the killed session in the
alert.log. As per the Oracle documentation: 'Specify IMMEDIATE to instruct Oracle Database to roll back ongoing transactions,
release all session locks, recover the entire session state, and return control to you immediately.'

Syntax:

alter system kill session 'sid,serial#' IMMEDIATE;

SQL> ALTER SYSTEM DISCONNECT SESSION 'sid,serial#' POST_TRANSACTION;


SQL> ALTER SYSTEM DISCONNECT SESSION 'sid,serial#' IMMEDIATE;
C:> orakill ORACLE_SID spid

Include the below lines in your shell script to kill sessions that have been inactive for more than 60 minutes.

##### To kill ###########################


${ORACLE_HOME}/bin/sqlplus -s '/as sysdba' @/ora/app/oracle/admin/scripts/kill_session_script.sql
##### To kill ##########################

----------------------------------------------------------------------
kill_session_script.sql
----------------------------------------------------------------------
-- Script to kill sessions inactive for more than 1 hr
-- kill_session_script.sql
set serveroutput on size 100000
set echo off
set feedback off
set lines 300
spool /ora/app/oracle/admin/scripts/kill_session.sql
declare
cursor sessinfo is select * from v$session where status = 'INACTIVE' and last_call_et > 3600;
sess sessinfo%rowtype;
sql_string1 varchar2(2000);
sql_string2 varchar2(2000);
begin
dbms_output.put_line('SPOOL /ora/app/oracle/admin/scripts/kill_session.log;');
open sessinfo;
loop
fetch sessinfo into sess;
exit when sessinfo%notfound;
sql_string1 := '--sid='||sess.sid||' serial#='||sess.serial#||' machine='||sess.machine||' program='||sess.program||' username='||sess.username||' Inactive_sec='||sess.last_call_et||' OS_USER='||sess.osuser;
dbms_output.put_line(sql_string1);
sql_string2 := 'alter system kill session '||chr(39)||sess.sid||','||sess.serial#||chr(39)||';';
dbms_output.put_line(sql_string2);
end loop;
close sessinfo;
dbms_output.put_line('SPOOL OFF;');
dbms_output.put_line('exit;');
end;
/
spool off;
set echo on;
set feedback on;
@/ora/app/oracle/admin/scripts/kill_session.sql;

How to audit failed logon attempts

Oracle Audit -- failed connection

Background:

In some situations the DBA team wants to audit failed logon attempts: "unlock account" requests become frequent and
users cannot figure out who, from where, is using an incorrect password and causing the account to get locked.

Audit concern:

Oracle auditing may add extra load and require extra operational support. For this situation the DBA only needs to audit
failed logon attempts and does not need other audit information. Failed logon attempts can only be tracked through the
Oracle audit trail; a logon trigger does not fire for failed logon attempts.

Hint: The settings here are suggested for use in a non-production system. Please evaluate all concerns and load
before using them in production.

Approach:

1. Turn on the Oracle audit function by setting the init parameter:

audit_trail=DB

Note:

For a database installed by manual script, the audit function may not be turned on.

For a database installed by DBCA, the audit function may already be turned on by default.

Check:
SQL> show parameter audit_trail

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

audit_trail string NONE

Turn on Oracle audit

a. If the database uses an spfile


SQL> alter system set audit_trail=DB scope=spfile ;

System altered.

b. If the database uses a pfile, modify init<SID>.ora directly.

Restart database
SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup ;

ORACLE instance started.

2. Turn off Oracle default audit

Privilege audit information is stored in dba_priv_audit_opts.

Note: Oracle 11g has a couple of audits turned on by default when audit_trail is set.

In Oracle 10g, audit options are set up by explicit command.

Generate a script to turn off the default privilege audits, which we don't need here.
SQL> SELECT 'noaudit '|| privilege||';' from dba_priv_audit_opts where user_name is NULL;

'NOAUDIT'||PRIVILEGE||';'

-------------------------------------------------

noaudit ALTER SYSTEM;

noaudit AUDIT SYSTEM;

noaudit CREATE SESSION;

noaudit CREATE USER;

noaudit ALTER USER;

noaudit DROP USER;

noaudit CREATE ANY TABLE;

noaudit ALTER ANY TABLE;

noaudit DROP ANY TABLE;

noaudit CREATE PUBLIC DATABASE LINK;

noaudit GRANT ANY ROLE;

noaudit ALTER DATABASE;

noaudit CREATE ANY PROCEDURE;

noaudit ALTER ANY PROCEDURE;

noaudit DROP ANY PROCEDURE;

noaudit ALTER PROFILE;

noaudit DROP PROFILE;

noaudit GRANT ANY PRIVILEGE;

noaudit CREATE ANY LIBRARY;

noaudit EXEMPT ACCESS POLICY;

noaudit GRANT ANY OBJECT PRIVILEGE;

noaudit CREATE ANY JOB;

noaudit CREATE EXTERNAL JOB;

23 rows selected.

-- run above commands

3. Turn on audit on failed connection


SQL> AUDIT CONNECT WHENEVER NOT SUCCESSFUL;

Audit succeeded.

SQL> SELECT PRIVILEGE,SUCCESS,FAILURE FROM dba_priv_audit_opts;

PRIVILEGE SUCCESS FAILURE

---------------------------------------- ---------- ----------

CREATE SESSION NOT SET BY ACCESS

4. Retrieve information

Note: audit information is stored in sys.aud$. There are multiple views Oracle provides to help you read sys.aud$.
Failed logon information can be retrieved from dba_audit_session.

SQL> select os_username, username, userhost, to_char(timestamp,'mm/dd/yyyy hh24:mi:ss') logon_time, action_name,


returncode from dba_audit_session;

OS_USERNAME  USERNAME    USERHOST       LOGON_TIME           ACTION_NAME  RETURNCODE

------------ ----------- -------------- -------------------- ------------ ----------
linda xu     JET_DEV102  HOME-linda xu  02/06/2013 13:40:12  LOGON        1017
linda xu     JET_DEV102  HOME-linda xu  02/06/2013 13:40:25  LOGON        1017
linda xu     JET_DEV102  HOME-linda xu  02/06/2013 15:31:29  LOGON        1017
linda xu     JET_DEV102  HOME-linda xu  02/06/2013 15:31:38  LOGON        1017

4 rows selected.

Note: RETURNCODE is the ORA error code returned to the user.

ORA-1017 is incorrect password


ORA-28000 is account locked

ORA-1045 is missing connect privilege
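The returncode decoding can be kept in a small lookup table; a Python sketch (only the three codes listed above are mapped; the message texts paraphrase the corresponding Oracle errors):

```python
# Map dba_audit_session.returncode to the ORA- error it represents
FAILED_LOGON_CODES = {
    1017: 'ORA-01017: invalid username/password; logon denied',
    28000: 'ORA-28000: the account is locked',
    1045: 'ORA-01045: user lacks CREATE SESSION privilege; logon denied',
}

def describe_returncode(code):
    # Fall back to the numeric ORA code for anything not mapped here
    return FAILED_LOGON_CODES.get(code, f'ORA-{code:05d}: see error manual')

print(describe_returncode(1017))
print(describe_returncode(28000))
```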

------------------------------------------------------------

With this in place, we are able to audit who the "bad boy" causing the account to get locked is.

5. Turn off the audit


If you no longer need the audit on failed attempts, run this command to turn it off:
SQL> noaudit CONNECT;

Noaudit succeeded.

SQL> SELECT PRIVILEGE,SUCCESS,FAILURE FROM dba_priv_audit_opts;

no rows selected

Oracle uses the system tablespace for sys.aud$. As an enhancement, you may consider moving sys.aud$ to a separate
tablespace.

6. Move sys.aud$ out of system tablespace.

Oracle 11g provides the procedure dbms_audit_mgmt.set_audit_trail_location to relocate the aud$ table.


SQL> SELECT table_name, tablespace_name FROM dba_tables WHERE table_name ='AUD$';

TABLE_NAME TABLESPACE_NAME

----------------------------- ------------------------------

AUD$ SYSTEM

The following example shows how to move sys.aud$ from the system tablespace to the user_data1 tablespace.
SQL> exec DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
audit_trail_location_value => 'USER_DATA1');

PL/SQL procedure successfully completed.

SQL> SELECT table_name, tablespace_name FROM dba_tables WHERE table_name ='AUD$';

TABLE_NAME TABLESPACE_NAME

------------------------------ ------------------------------

AUD$ USER_DATA1

7. Clean up AUD$

You can simply run a delete or truncate command:

delete from sys.AUD$;


truncate table sys.AUD$;
Oracle – Optimizer stats not being purged
July 28, 2011, by Kerri Robberts

I’ve recently been monitoring two databases where a high amount of import/exports are taking place. The SYSAUX and SYSTEM
tablespaces have been continually growing.

To resolve this I set the stats retention period to 7 days.

SQL> exec dbms_stats.alter_stats_history_retention(7);


I then continued to monitor the database and found that the SYSAUX tablespace was still continuing to grow. When checking
the retention period it showed it to be as set, so I reduced it further to 3 days.

SQL> select dbms_stats.get_stats_history_retention from dual;


GET_STATS_HISTORY_RETENTION
---------------------------
3
I then tried rebuilding the stats indexes and tables as they would now be fragmented.

SELECT
sum(bytes/1024/1024) Mb,
segment_name,
segment_type
FROM
dba_segments
WHERE
tablespace_name = 'SYSAUX'
AND
segment_type in ('INDEX','TABLE')
GROUP BY
segment_name,
segment_type
ORDER BY Mb;
MB SEGMENT_NAME SEGMENT_TYPE
-- --------------------------------------- ----------------
2 WRH$_SQLTEXT TABLE
2 WRH$_ENQUEUE_STAT_PK INDEX
2 WRI$_ADV_PARAMETERS TABLE
2 WRH$_SEG_STAT_OBJ_PK INDEX
3 WRI$_ADV_PARAMETERS_PK INDEX
3 WRH$_SQL_PLAN_PK INDEX
3 WRH$_SEG_STAT_OBJ TABLE
3 WRH$_ENQUEUE_STAT TABLE
3 WRH$_SYSMETRIC_SUMMARY_INDEX INDEX
4 WRH$_SQL_BIND_METADATA_PK INDEX
4 WRH$_SQL_BIND_METADATA TABLE
6 WRH$_SYSMETRIC_SUMMARY TABLE
7 WRH$_SQL_PLAN TABLE
8 WRI$_OPTSTAT_TAB_HISTORY TABLE
8 I_WRI$_OPTSTAT_TAB_ST INDEX
9 I_WRI$_OPTSTAT_H_ST INDEX
9 I_WRI$_OPTSTAT_TAB_OBJ#_ST INDEX
12 I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST INDEX
12 I_WRI$_OPTSTAT_IND_ST INDEX
12 WRI$_OPTSTAT_HISTGRM_HISTORY TABLE
14 I_WRI$_OPTSTAT_IND_OBJ#_ST INDEX
20 WRI$_OPTSTAT_IND_HISTORY TABLE
306 I_WRI$_OPTSTAT_HH_ST INDEX
366 WRI$_OPTSTAT_HISTHEAD_HISTORY TABLE
408 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST INDEX
To reduce these tables and indexes you can issue the following:

SQL> alter table <table name> move tablespace SYSAUX;


SQL> alter index <index name> rebuild online;
If you are only running standard edition then you can only rebuild indexes offline. Online index rebuild is a feature of Enterprise
Edition.

To find out the oldest available stats you can issue the following:

SQL> select dbms_stats.get_stats_history_availability from dual;


GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
28-JUN-11 00.00.00.000000000 +01:00
To list how many stats rows were gathered on each day between the oldest stats history and the current date, issue the
following:

SQL> select trunc(SAVTIME),count(1) from WRI$_OPTSTAT_HISTHEAD_HISTORY group by trunc(SAVTIME)


order by 1;
TRUNC(SAV COUNT(1)
--------- ----------
28-JUN-11 2920140
29-JUN-11 843683
30-JUN-11 519834
01-JUL-11 958836
02-JUL-11 3158052
03-JUL-11 287
04-JUL-11 1253952
05-JUL-11 732361
06-JUL-11 507186
07-JUL-11 189416
08-JUL-11 2619
09-JUL-11 1491
10-JUL-11 287
11-JUL-11 126324
12-JUL-11 139556
13-JUL-11 181068
14-JUL-11 4832
15-JUL-11 258027
16-JUL-11 1152
17-JUL-11 287
18-JUL-11 27839
21 rows selected.
What has happened here is that the job run by MMON every 24 hours has checked the retention period and tried to purge
all stats older than the retention period. As the job has not completed within 5 minutes, because of the high number of stats
collected on each day, the job has given up and rolled back. Therefore the stats are not being purged.

As each day continues the SYSAUX table is continuing to fill up because the job fails each night and cannot purge old stats.

To resolve this we have to issue a manual purge to clear down the old statistics. This can be UNDO-tablespace intensive, so it's
best to keep an eye on the amount of UNDO being generated. I suggest starting with the oldest and working forwards.

To manually purge the stats issue the following:

SQL> exec dbms_stats.purge_stats(to_date('10-JUL-11','DD-MON-YY'));

PL/SQL procedure successfully completed.
SQL> select trunc(SAVTIME),count(1) from WRI$_OPTSTAT_HISTHEAD_HISTORY group by trunc(SAVTIME)
order by 1;
TRUNC(SAVTIME) COUNT(1)
-------------------- ----------
29-Jun-2011 00:00:00 843683
30-Jun-2011 00:00:00 519834
01-Jul-2011 00:00:00 958836
02-Jul-2011 00:00:00 3158052
03-Jul-2011 00:00:00 287
04-Jul-2011 00:00:00 1253952
05-Jul-2011 00:00:00 732361
06-Jul-2011 00:00:00 507186
07-Jul-2011 00:00:00 189416
08-Jul-2011 00:00:00 2619
09-Jul-2011 00:00:00 1491
10-Jul-2011 00:00:00 287
11-Jul-2011 00:00:00 126324
12-Jul-2011 00:00:00 139556
13-Jul-2011 00:00:00 181068
14-Jul-2011 00:00:00 4832
15-Jul-2011 00:00:00 258027
16-Jul-2011 00:00:00 1152
17-Jul-2011 00:00:00 287
18-Jul-2011 00:00:00 27839
20 rows selected.
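Generating the purge calls oldest-first can be scripted; a hedged Python sketch that only builds the SQL strings (the dates are taken from the listing above; the script does not connect to the database):

```python
from datetime import date

def purge_commands(stats_days):
    """Build dbms_stats.purge_stats calls, oldest day first, so each
    manual purge removes one day's history at a time."""
    cmds = []
    for d in sorted(stats_days):
        stamp = d.strftime('%d-%b-%y').upper()
        cmds.append(
            f"exec dbms_stats.purge_stats(to_date('{stamp}','DD-MON-YY'));")
    return cmds

# A few days from the listing above
days = [date(2011, 7, 10), date(2011, 6, 29), date(2011, 6, 30)]
for c in purge_commands(days):
    print(c)
```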
Once the amount of stats has been reduced, the overnight job should work. Alternatively you can create a job to run this
purge just as you would run it manually, using the following code in a scheduled job:

BEGIN
sys.dbms_scheduler.create_job(
job_name => '"SYS"."PURGE_OPTIMIZER_STATS"',
job_type => 'PLSQL_BLOCK',
job_action => 'begin
dbms_stats.purge_stats(sysdate-3);
end;',
repeat_interval => 'FREQ=DAILY;BYHOUR=6;BYMINUTE=0;BYSECOND=0',
start_date => systimestamp at time zone 'Europe/Paris',
job_class => '"DEFAULT_JOB_CLASS"',
comments => 'job to purge old optimizer stats',
auto_drop => FALSE,
enabled => TRUE);
END;
/
Finally you will need to rebuild the indexes and move the tables. To do this you can spool a script to a file and then run that
file.

SQL> select 'alter index '||segment_name||' rebuild;' FROM dba_segments where tablespace_name =
'SYSAUX' AND segment_type = 'INDEX';
Edit the file to remove the first and last lines (SQL> SELECT…. and SQL> spool off)
Run the file to rebuild the indexes.

You can then do the same with the tables

SQL> select 'alter table '||segment_name||' move tablespace SYSAUX;' FROM dba_segments where
tablespace_name = 'SYSAUX' AND segment_type = 'TABLE';
Then you can re-run the original query; mine produces the following now, and my SYSAUX tablespace is only a few hundred MB full.

.6875 WRH$_ENQUEUE_STAT TABLE


.75 WRH$_SEG_STAT_OBJ TABLE
.8125 WRH$_SYSMETRIC_SUMMARY_INDEX INDEX
.8125 I_WRI$_OPTSTAT_HH_ST INDEX
.8125 WRH$_SQL_PLAN_PK INDEX
1 WRI$_OPTSTAT_HISTHEAD_HISTORY TABLE
1 SYS$SERVICE_METRICS_TAB TABLE
2 I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST INDEX
2 WRH$_SYSMETRIC_SUMMARY TABLE
2 WRI$_ADV_PARAMETERS TABLE
2 WRI$_ADV_PARAMETERS_PK INDEX
4 WRH$_SQL_PLAN TABLE
689 rows selected.

http://www.unixarena.com/2013/08/linux-lvm-volume-group-operations.html

Linux – LVM – Volume Group Operations


August 13, 2013 in LVM, LVM Tutorials
We have already seen the basics of the logical volume manager structure in the previous article. Here we are going to look
at the logical volume manager's volume group operations and management. A volume group is the high-level container in LVM
which contains one or more physical disks or LUNs. A volume group can span multiple disks, whether internal or
external. External disks are typically SAN, but could be external SCSI or iSCSI disks. According to filesystem
requirements, you can add or remove disks from a volume group easily. This flexibility means volumes can be resized
dynamically.
How to create a new volume group?
1. List the physical disks which were brought under logical volume manager control using the pvcreate command.
[root@mylinz ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_mylinz lvm2 a- 19.51g 0
/dev/sdd1 lvm2 a- 511.98m 511.98m
/dev/sde lvm2 a- 512.00m 512.00m
/dev/sdf lvm2 a- 5.00g 5.00g
[root@mylinz ~]#

2. Create a new volume group using disk /dev/sdd1. Here the volume group name is "uavg".

[root@mylinz ~]# vgcreate uavg /dev/sdd1


Volume group "uavg" successfully created
[root@mylinz ~]#

3. Verify the new volume group.


[root@mylinz ~]# vgs uavg
VG #PV #LV #SN Attr VSize VFree
uavg 1 0 0 wz--n- 508.00m 508.00m
[root@mylinz ~]#
[root@mylinz ~]# pvs |grep uavg
/dev/sdd1 uavg lvm2 a- 508.00m 508.00m
[root@mylinz ~]#

4. For detailed volume group information, use the below command.


[root@mylinz ~]# vgdisplay uavg
--- Volume group ---
VG Name uavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 508.00 MiB
PE Size 4.00 MiB
Total PE 127
Alloc PE / Size 0 / 0
Free PE / Size 127 / 508.00 MiB
VG UUID c87FyZ-5DND-oQ3n-iTh1-Vb1f-nBML-vUBUE9
[root@mylinz ~]#
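The vgdisplay figures are internally consistent: VG Size = Total PE × PE Size. A quick arithmetic check in Python using the values above:

```python
PE_SIZE_MIB = 4.0   # "PE Size" from vgdisplay
TOTAL_PE = 127      # "Total PE"
FREE_PE = 127       # "Free PE" (nothing allocated yet)

vg_size_mib = TOTAL_PE * PE_SIZE_MIB
free_mib = FREE_PE * PE_SIZE_MIB
print(vg_size_mib, free_mib)  # 508.0 508.0 -- matches "VG Size" and free size
```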

How to rename a volume group? Can we rename the VG on the fly?


We can rename a volume group in Red Hat Linux using the vgrename command. The rename can be done on the fly without
any impact. Here I am going to demonstrate vgrename and prove that this activity has no impact.

1. List the volumes currently mounted from volume group "uavg".

[root@mylinz ~]# df -h /mnt


Filesystem Size Used Avail Use% Mounted on
/dev/mapper/uavg-ualvol1
51M 4.9M 43M 11% /mnt
[root@mylinz ~]#

2. List the available volume groups.


[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
3. Rename the volume group using the "vgrename" command.
[root@mylinz ~]# vgrename uavg uavg_new
Volume group "uavg" successfully renamed to "uavg_new"
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg_new 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#

4. Check the volume status. You can see the volume is still available for operation.
[root@mylinz ~]# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/uavg-ualvol1
51M 4.9M 43M 11% /mnt
[root@mylinz ~]#
[root@mylinz ~]# cd /mnt
[root@mylinz mnt]# touch 3 4 5 6
[root@mylinz mnt]# ls -lrt
total 18
drwx------. 2 root root 12288 Aug 6 22:55 lost+found
-rw-r--r--. 1 root root 0 Aug 7 00:32 1
-rw-r--r--. 1 root root 0 Aug 7 00:32 7
-rw-r--r--. 1 root root 0 Aug 12 21:17 6
-rw-r--r--. 1 root root 0 Aug 12 21:17 5
-rw-r--r--. 1 root root 0 Aug 12 21:17 4
-rw-r--r--. 1 root root 0 Aug 12 21:17 3
[root@mylinz mnt]#

5. But the mount is still reflecting the old device name. This can be corrected by remounting the volume, which can be done
when you have downtime for the server. Please don't forget to update "fstab" according to the new volume group name.
[root@mylinz ~]# umount /mnt
[root@mylinz ~]# mount -t ext4 /dev/mapper/uavg_new-ualvol1 /mnt
[root@mylinz ~]# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/uavg_new-ualvol1
51M 4.9M 43M 11% /mnt
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg_new 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#

How to extend a volume group?


A volume group can be extended on the fly by adding new disks or LUNs to the existing volume group.

1. List the volume groups.


[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
2. List the available physical volumes for extending the volume group "uavg".
Refer back to the earlier article if you have any doubt about creating a new physical volume from new disks or LUNs.
[root@mylinz ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_mylinz lvm2 a- 19.51g 0
/dev/sdd1 uavg lvm2 a- 508.00m 456.00m
/dev/sde lvm2 a- 512.00m 512.00m
/dev/sdf lvm2 a- 5.00g 5.00g
[root@mylinz ~]#
3. Let me choose "sde" to extend volume group "uavg".
[root@mylinz ~]# vgextend uavg /dev/sde
Volume group "uavg" successfully extended
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 2 1 0 wz--n- 1016.00m 964.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
From the above output, you can see volume group "uavg" has been extended successfully.

4. Check the "pvs" command output. /dev/sde will show as part of "uavg" now.
[root@mylinz ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_mylinz lvm2 a- 19.51g 0
/dev/sdd1 uavg lvm2 a- 508.00m 456.00m
/dev/sde uavg lvm2 a- 508.00m 508.00m
/dev/sdf lvm2 a- 5.00g 5.00g
[root@mylinz ~]#

How to scan disks for LVM?


You need to scan LVM disks whenever there is a hardware change on your server. A hardware change may be newly
added or removed disks, which could be hotplug disks or new disks added to SAN systems.
[root@mylinz ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "uavg" using metadata type lvm2
Found volume group "vg_mylinz" using metadata type lvm2
[root@mylinz ~]#

How to decrease the volume group size? Or how to remove disks from LVM?


Disks can be removed from a volume group if they are not used by any volumes.

1. First, confirm that the disk we are planning to remove from the volume group is not used by any volumes, using the below
commands.
[root@mylinz ~]# lvs -a -o +devices
LV VG Attr LSize Origin Snap% Move Log Copy% Convert Devices
ualvol1 uavg -wi-a- 52.00m /dev/sdd1(0)
lv_root vg_mylinz -wi-ao 16.54g /dev/sda2(0)
lv_swap vg_mylinz -wi-ao 2.97g /dev/sda2(4234)
[root@mylinz ~]# pvs -a -o +devices |grep uavg
/dev/sdd1 uavg lvm2 a- 508.00m 456.00m /dev/sdd1(0)
/dev/sdd1 uavg lvm2 a- 508.00m 456.00m
/dev/sde uavg lvm2 a- 508.00m 508.00m
/dev/uavg/ualvol1 -- 0 0
[root@mylinz ~]#
From the above command output, we can see that disk "sde" is not used by any volumes (lvs command output) in volume
group "uavg".

2. Check the disk details. From these details you can confirm the PEs (i.e., physical extents) are not used in the VG (Total PE = 127 and
Free PE = 127).
[root@mylinz ~]# pvdisplay /dev/sde
--- Physical volume ---
PV Name /dev/sde
VG Name uavg
PV Size 512.00 MiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 127
Free PE 127
Allocated PE 0
PV UUID FadWLT-LjD8-v8VB-pboY-eZbK-vYpE-ZWq0i9
[root@mylinz ~]#
3. List the volume group details before removing the physical volume.
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 2 1 0 wz--n- 1016.00m 964.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]# vgdisplay uavg
--- Volume group ---
VG Name uavg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 1016.00 MiB
PE Size 4.00 MiB
Total PE 254
Alloc PE / Size 13 / 52.00 MiB
Free PE / Size 241 / 964.00 MiB
VG UUID c87FyZ-5DND-oQ3n-iTh1-Vb1f-nBML-vUBUE9
[root@mylinz ~]#

4. Now we are ready to remove "/dev/sde" from volume group "uavg".
[root@mylinz ~]# vgreduce uavg /dev/sde
Removed "/dev/sde" from volume group "uavg"
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]# vgdisplay uavg
--- Volume group ---
VG Name uavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 508.00 MiB
PE Size 4.00 MiB
Total PE 127
Alloc PE / Size 13 / 52.00 MiB
Free PE / Size 114 / 456.00 MiB
VG UUID c87FyZ-5DND-oQ3n-iTh1-Vb1f-nBML-vUBUE9
[root@mylinz ~]#
From the above outputs, you can see #PV is reduced to "1" and the volume group size is also reduced.
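The steps above removed a disk that held no allocated extents. A disk that is still in use can also be freed: migrate its extents off first with pvmove, then reduce the VG. A minimal sketch, reusing the VG and disk names from the example above; pvmove assumes enough free space on the remaining PVs:

```shell
# Sketch: remove a disk from a VG even when it holds data.
# Assumes VG "uavg" and disk /dev/sde as in the example above.

# If the PV still has allocated extents, migrate them to the
# remaining PVs first (needs enough free space elsewhere in the VG):
pvmove /dev/sde

# Drop the now-empty PV from the volume group:
vgreduce uavg /dev/sde

# Optionally wipe the LVM label so the disk is no longer a PV:
pvremove /dev/sde
```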

How to activate and Deactivate the volume group ?


By default a volume group is in active mode. But in some circumstances you need to put the volume group in disabled
(inactive) mode, making it unknown to the Linux kernel. Here we will see how to activate and deactivate the volume group.

1.List the volume groups.


[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#

2.Deactivate the volume group “uavg”


[root@mylinz ~]# vgchange -a n uavg
0 logical volume(s) in volume group "uavg" now active
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
You cannot deactivate a VG if any volume from that volume group is open. You have to unmount all the volumes from the
volume group before deactivating it. You will get the below error if any volume is in the open state.
[root@mylinz ~]# vgchange -a n uavg
Can't deactivate volume group "uavg" with 1 open logical volume(s)
[root@mylinz ~]#

3.Check the volume status. It will be in “NOT available” status.


[root@mylinz ~]# lvdisplay /dev/uavg/ualvol1
--- Logical volume ---
LV Name /dev/uavg/ualvol1
VG Name uavg
LV UUID 6GB8TR-ih7d-vg7J-xCLE-A8OH-gmwy-3XLyOb
LV Write Access read/write
LV Status NOT available
LV Size 52.00 MiB
Current LE 13
Segments 1
Allocation inherit
Read ahead sectors auto

[root@mylinz ~]#

4.To activate the volume group, use the same command with a different option.
[root@mylinz ~]# vgchange -a y uavg
1 logical volume(s) in volume group "uavg" now active
[root@mylinz ~]# lvdisplay /dev/uavg/ualvol1
--- Logical volume ---
LV Name /dev/uavg/ualvol1
VG Name uavg
LV UUID 6GB8TR-ih7d-vg7J-xCLE-A8OH-gmwy-3XLyOb
LV Write Access read/write
LV Status available
# open 0
LV Size 52.00 MiB
Current LE 13
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

[root@mylinz ~]#

5.Mount the volume. We are back to normal operation.


[root@mylinz ~]# mount -t ext4 /dev/mapper/uavg-ualvol1 /mnt
[root@mylinz ~]#
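The deactivate/activate cycle above can be condensed into one sequence. A sketch, assuming the VG, LV, and mount point from the example:

```shell
# Sketch: the full deactivate/activate cycle shown above, in order.
# Assumes VG "uavg" with LV "ualvol1" mounted on /mnt.
umount /mnt                                   # close all LVs first, or vgchange fails
vgchange -a n uavg                            # deactivate: LVs become "NOT available"
vgchange -a y uavg                            # reactivate the VG
mount -t ext4 /dev/mapper/uavg-ualvol1 /mnt   # back to normal operation
```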

How to backup & restore the LVM volumegroup metadata ?


Metadata backups and archives are created automatically whenever you create or change a volume group on a Linux system. By default,
backups are stored in /etc/lvm/backup and archives are stored in /etc/lvm/archive. We can also manually back up the LVM
configuration using the "vgcfgbackup" command.

1.Run “vgcfgbackup” command to take new configuration backup for volume group “uavg”.
[root@mylinz ~]# vgcfgbackup uavg
Volume group "uavg" successfully backed up.
[root@mylinz ~]#

2.You can find the new configuration file under the below mentioned location.
[root@mylinz ~]# cd /etc/lvm/
[root@mylinz lvm]# ls -lrt
total 36
-rw-r--r--. 1 root root 21744 Aug 18 2010 lvm.conf
drwx------. 2 root root 4096 Aug 12 23:57 archive
drwx------. 2 root root 4096 Aug 13 00:27 backup
drwx------. 2 root root 4096 Aug 13 00:27 cache
[root@mylinz lvm]# cd backup/
[root@mylinz backup]# ls -lrt
total 8
-rw-------. 1 root root 1474 Jun 3 2012 vg_mylinz
-rw-------. 1 root root 1164 Aug 13 00:27 uavg
[root@mylinz backup]# file uavg
uavg: ASCII text
[root@mylinz backup]#
[root@mylinz backup]# more uavg
# Generated by LVM2 version 2.02.72(2) (2010-07-28): Tue Aug 13 00:27:46 2013

contents = "Text Format Volume Group"


version = 1

description = "Created *after* executing 'vgcfgbackup /root/uavg.meta.bck uavg'"

creation_host = "mylinz" # Linux mylinz 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64
creation_time = 1376333866 # Tue Aug 13 00:27:46 2013

uavg {
id = "c87FyZ-5DND-oQ3n-iTh1-Vb1f-nBML-vUBUE9"
seqno = 10
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0

3.To restore the volume group metadata, use the below command.


[root@mylinz ~]# vgcfgrestore uavg
Restored volume group uavg
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
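Both commands also accept an explicit file instead of the default /etc/lvm locations; the description line inside the backup file above references exactly this form. A sketch:

```shell
# Sketch: back up and restore VG metadata using an explicit file
# (the default location is /etc/lvm/backup/<vgname>).
vgcfgbackup -f /root/uavg.meta.bck uavg     # write metadata to a chosen file
vgcfgrestore -f /root/uavg.meta.bck uavg    # restore from that file
vgcfgrestore --list uavg                    # list archived metadata versions
```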

How to export/Move the volume group to other Linux node ?


A complete LVM volume group can be moved from one system to another using the vg* commands. Here is a step-by-step
guide for this migration.
1.Unmount all the volumes from volume group which needs to be migrated.
2.Make the volume group inactive using “vgchange” command to ensure there will no I/O to the VG.
[root@mylinz ~]# vgchange -a n uavg
0 logical volume(s) in volume group "uavg" now active
[root@mylinz ~]#

3.Export the volumegroup .


[root@mylinz ~]# vgexport uavg
Volume group "uavg" successfully exported
[root@mylinz ~]#

4.You can verify the exported status using “pvscan” command.


[root@mylinz ~]# pvscan
PV /dev/sdd1 is in exported VG uavg [508.00 MiB / 456.00 MiB free]
PV /dev/sda2 VG vg_mylinz lvm2 [19.51 GiB / 0 free]
PV /dev/sde lvm2 [512.00 MiB]
PV /dev/sdf lvm2 [5.00 GiB]
Total: 4 [25.50 GiB] / in use: 2 [20.00 GiB] / in no VG: 2 [5.50 GiB]
[root@mylinz ~]#

5.Now assign the disks from SAN level to the system where you want to import the volume group.

6.Scan the disks and make the disks available for VG import.
Check out the Disks or LUN scanning procedure in Redhat Linux.
7.Import the volume group.
[root@mylinz ~]# vgimport uavg
Volume group "uavg" successfully imported
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
uavg 1 1 0 wz--n- 508.00m 456.00m
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
8.Activate the volume group for normal operation.
[root@mylinz ~]# vgchange -a y uavg
1 logical volume(s) in volume group "uavg" now active
[root@mylinz ~]#
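The whole migration can be summarized per node. A sketch using the same VG name, grouped by where each step runs:

```shell
# Sketch: the export/import flow above, grouped by node.
# Assumes VG "uavg" on SAN storage that can be presented to both nodes.

# On the source node:
umount /mnt               # step 1: unmount all volumes in the VG
vgchange -a n uavg        # step 2: deactivate, ensuring no I/O to the VG
vgexport uavg             # step 3: mark the VG as exported

# On the destination node (after the LUNs are assigned and rescanned):
vgimport uavg             # step 7: import the VG
vgchange -a y uavg        # step 8: activate for normal operation
```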

How to recreate the device files for LVM volumes ?


Due to a server crash or other reasons, we may lose the LVM device files and the volume group directory. In those situations, you
need to recreate what you have lost. LVM provides a command called "vgmknodes" which will help you recreate those
missing files. Here is a small experiment.

1.Let me remove the device file for volume "uavg-ualvol1", which is part of VG "uavg".

[root@mylinz ~]# cd /dev/mapper/


[root@mylinz mapper]# ls -lrt
total 0
crw-rw----. 1 root root 10, 58 Aug 5 19:28 control
lrwxrwxrwx. 1 root root 7 Aug 5 19:28 vg_mylinz-lv_root -> ../dm-0
lrwxrwxrwx. 1 root root 7 Aug 5 19:28 vg_mylinz-lv_swap -> ../dm-1
lrwxrwxrwx. 1 root root 7 Aug 13 00:47 uavg-ualvol1 -> ../dm-2
[root@mylinz mapper]# rm uavg-ualvol1
rm: remove symbolic link `uavg-ualvol1'? y
[root@mylinz mapper]#
2.Let me move the “/etc/lvm” to .old.

[root@mylinz etc]# mv lvm lvm.old


[root@mylinz etc]# ls -lrt |grep lvm
drwx------. 5 root root 4096 Jun 1 2012 lvm.old
[root@mylinz etc]#
3.Let me run "vgmknodes" and see whether this command is able to recreate the removed device file and the lvm directory.
[root@mylinz etc]# vgmknodes

4.Check whether devices files are created or not.


[root@mylinz mapper]# ls -lrt
total 0
crw-rw----. 1 root root 10, 58 Aug 5 19:28 control
lrwxrwxrwx. 1 root root 7 Aug 5 19:28 vg_mylinz-lv_root -> ../dm-0
lrwxrwxrwx. 1 root root 7 Aug 5 19:28 vg_mylinz-lv_swap -> ../dm-1
brw-rw----. 1 root disk 253, 2 Aug 13 00:54 uavg-ualvol1
[root@mylinz mapper]#
Wow… it's recreated.

5.Let me check whether the /etc/lvm directory is recreated or not.


[root@mylinz etc]# ls -lrt |grep lvm
drwx------. 5 root root 4096 Jun 1 2012 lvm.old
drwxr-xr-x. 3 root root 4096 Aug 13 00:54 lvm
[root@mylinz etc]#
Awesome… it's recreated.

How to remove the volume group ?


You can remove a volume group using the vgremove command.
If any volume from the volume group is in mounted status, you will get the below error.
[root@mylinz ~]# vgremove uavg
Do you really want to remove volume group "uavg" containing 1 logical volumes? [y/n]: y
Can't remove open logical volume "ualvol1"
[root@mylinz ~]#

Unmount the volume and remove the volume group.


[root@mylinz ~]# vgremove uavg
Do you really want to remove volume group "uavg" containing 1 logical volumes? [y/n]: y
[root@mylinz ~]#
[root@mylinz ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_mylinz 1 2 0 wz--n- 19.51g 0
[root@mylinz ~]#
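The complete teardown, from mounted volume to bare disk, can be sketched in order. The PV name is the one used earlier in this article:

```shell
# Sketch: complete VG teardown, from mounted volume to bare disks.
# Assumes VG "uavg", LV "ualvol1" mounted on /mnt, PV /dev/sdd1.
umount /mnt               # vgremove cannot drop an open LV
vgremove uavg             # prompts per-LV, then removes the LVs and the VG
pvremove /dev/sdd1        # optionally clear the LVM label from the disk
```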
Hope this article shared enough information about LVM2 volume group administration.
You can also split and combine volume groups in LVM2, as in Veritas Volume Manager.

Please leave a comment if you have any doubt on this . Share it in social networks to reach all the Linux administrators and
beginners.

Thank you for visiting UnixArena.

ORACLE error 20100 in FDPSTP


Cause: FDPSTP failed due to ORA-20100: Error: FND_FILE failure. Unable to create file,
o0031866.tmp in the directory, /usr/tmp.

You will find more information in the request log.


ORA-06512: at "APPS.FND_FILE", line 417

When investigated, I found that both instances were creating .tmp files in the /usr/tmp directory with the same names. This error was thrown
when one instance tried to create a .tmp file and a file with the same name had already been created by the other instance.

 To resolve the issue I shutdown both the apps and db services of one instance.
 Created a directory 'temp' in '/usr/tmp' and changed the ownership of this dir to user owner of this instance
 Logon to database as sysdba
 Create pfile from spfile
 modified UTL_FILE_DIR parameter's first entry from '/usr/tmp' to '/usr/tmp/temp'
 Created spfile from pfile
 Brought up the db and listener
 Now modified the $APPLPTMP variable in TEST_oratest.xml file from '/usr/tmp' to '/usr/tmp/temp'
 Run the autoconfig on apps tier/node
 Brought up the apps services
 Retested the issue and it was resolved
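The invariant behind this fix is that $APPLPTMP on the apps tier must point to a directory listed in utl_file_dir, and that directory must not be shared with another instance. A hedged check sketch, assuming an apps-tier environment is sourced and the apps/apps credentials used elsewhere in these notes:

```shell
# Sketch: verify the invariant behind the ORA-20100 fix - $APPLPTMP must
# match a utl_file_dir entry and not be shared with another instance.
# Assumes a sourced apps env and apps/apps credentials (an assumption).
echo "APPLPTMP = $APPLPTMP"
sqlplus -s apps/apps <<'SQL'
show parameter utl_file_dir
SQL
```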
================================================================
Maintenance Mode when applying ADPATCH:

When you put your application in maintenance mode, Workflow Business Events stop and users are not allowed to log in. It does not
matter whether your application services are down or not, but if you do not put your application in maintenance mode, your patch will
fail unless you use options=hotpatch.

------------------------------------

adadmin is not working - how to enable maintenance mode in Oracle Apps (EBS)

@$AD_TOP/patch/115/sql/adsetmmd.sql
You can also put your application in Maintenance mode from backend:
Enable Maintenance mode:
SQL> @$AD_TOP/patch/115/sql/adsetmmd.sql ENABLE
SQL> select fnd_profile.value('APPS_MAINTENANCE_MODE') from dual; --> to check

Disable Maintenance mode :


SQL> @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE
SQL> select fnd_profile.value('APPS_MAINTENANCE_MODE') from dual;
3. Enabling and Disabling Maintenance Mode
Maintenance mode is Enabled or Disabled from adadmin.
When you Enable or Disable 'Maintenance Mode', adadmin will execute the script:

$AD_TOP/patch/115/sql/adsetmmd.sql sending the parameter 'ENABLE' or 'DISABLE' :

sqlplus /@adsetmmd.sql ENABLE | DISABLE

ENABLE - Enable Maintenance Mode .


DISABLE - Disable Maintenance Mode.

When adsetmmd.sql runs, it sets the Profile Option 'Applications Maintenance Mode'

(APPS_MAINTENANCE_MODE) to 'MAINT' to Enable 'Maintenance Mode' and to 'NORMAL' to Disable it.

4. Determining if Maintenance Mode is running


A quick way to verify if the Environment is on Maintenance Mode or not, is by checking the value of this
Profile Option as follows:
sqlplus apps/apps
SQL> select fnd_profile.value('APPS_MAINTENANCE_MODE') from dual;
If the query returns 'MAINT', then Maintenance Mode has been Enabled and the Users will not be able to
Login. If the query returns 'NORMAL' then Maintenance Mode has been De-Activated and the Users will be able to use
the application.

Note: Maintenance Mode is only needed for AutoPatch Sessions. Other AD utilities do not require

Maintenance Mode to be enabled. Maintenance Mode must be 'Enabled' before running AutoPatch and 'Disabled' after
the patch application is completed.

When Maintenance Mode is disabled, you can still run Autopatch by using options=hotpatch on the command line, if
necessary. However, doing so can cause a significant degradation of performance.
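The enable/patch/disable cycle described above can be sketched as one sequence. The apps/apps credentials are the ones used elsewhere in these notes; substitute your own:

```shell
# Sketch: enable maintenance mode, apply a patch, disable it again.
# Assumes apps/apps credentials as in the examples above.
sqlplus apps/apps @$AD_TOP/patch/115/sql/adsetmmd.sql ENABLE
adpatch     # run AutoPatch while users are locked out
sqlplus apps/apps @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE
```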

Mobile Web Applications Server - How to Start/Stop MWA Services Using


Control Scripts adstrtal.sh/adstpall.sh
Mobile Web Applications Server - How to Start/Stop MWA Services Using Control Scripts adstrtal.sh/adstpall.sh

Oracle Mobile Application Server - Version 11.5.10.0 to 12.1.3 [Release 11.5 to 12.1]
Information in this document applies to any platform.
Mobile Application Server - Version: 11.5.10 to 12.1

GOAL

One would like to start/stop MWA services using respectively adstrtal.sh/adstpall.sh control scripts
instead of the specific script mwactl.sh under $MWA_TOP/bin (11i) or INST_TOP/admin/scripts (r12).

FIX

1. Stop all the services (by running adstpall.sh under $COMMON_TOP/admin/scripts in 11i or $INST_TOP/admin/scripts in r12)

2. For 11i only apply Patch 5985992 (TKX patch), Patch 5712178 (MWA patch) if not already done, and Patch 8405261 per Note
781107.1.

3. For 11i or r12, modify the value of the s_mwastatus and s_other_service_group_status variables to 'enabled' (without quotes) in the xml
context file $APPL_TOP/admin/<CONTEXT_NAME>.xml in 11i or $INST_TOP/appl/admin/<CONTEXT_NAME>.xml in r12 (where <CONTEXT_NAME> is generally <SID>_<hostname>)
4. Run Autoconfig
5. Now the MWA services can be started/stopped as other Applications processes using the
adstrtal.sh/adstpall.sh control scripts.
LINUX CUPS :

Generic PostScript "driver". Generally, for PostScript printers, you will not need a driver, as all applications produce PostScript. So that
the printing system can access the printer-specific features, the manufacturer supplies a PPD file for every PostScript printer.
Use this PPD file instead of a Foomatic-generated one to get all the functionality of your PostScript printer working. The files provided
by Foomatic are generic files which contain only some standard options; use them only if you do not find an appropriate PPD for
your printer.

One can make use of all functionality which the PostScript printers have under Windows/MacOS when one uses the PPD file coming
with the printer, downloaded from here on OpenPrinting, from the manufacturer's home page, or from Adobe's web site (do
"unzip -L [filename].EXE" to get the PPD files). If there are several different PPD files for your printer model and none
dedicated for Linux or Unix, the PPD for Windows NT works best in most cases.

CUPS and PPR support PPD files directly, LPD/GNUlpr/LPRng, PDQ, CPS, and spooler-less users can set up their printers
with foomatic-rip as they would set up a printer with a Foomatic PPD file. foomatic-rip works as well with manufacturer-
supplied PostScript PPD files. This way all PostScript printers work perfectly under GNU/Linux or other free operating systems.
Ghostscript is not needed for them. See also our PPD documentation page for instructions.

See the tutorial chapter "Some Theoretical Background: CUPS, PPDs, PostScript, and Ghostscript" (PDF) for detailed information
about PostScript and PPD files.

APP-FND-00362: Routine &ROUTINE cannot execute request &REQUEST for program &PROGRAM, because the environment
variable &BASEPATH is not set for the application to which the concurrent program executable &EXECUTABLE belongs. Shut down
the concurrent managers. Set the basepath environment variable for the application. Restart the concurrent managers.

SOLUTION : check for custom environment file in $APPL_TOP and export custom path in that environment file.
AutoConfig could not successfully execute the following scripts: afdbprf.sh
and adcrobj.sh
Error During AutoConfig -
[PROFILE PHASE]

AutoConfig could not successfully execute the following scripts:

Directory: /u01/app/oracle/product/11.2.0/db_1/appsutil/install/visr12_appsdbnode

afdbprf.sh INSTE8_PRF 1

[APPLY PHASE]

AutoConfig could not successfully execute the following scripts:

Directory: /u01/app/oracle/product/11.2.0/db_1/appsutil/install/visr12_appsdbnode

adcrobj.sh INSTE8_APPLY 1

[oracle@apps_rac01 visr12_apps_rac01]$ cd /u01/app/oracle/product/11.0/db_1/appsutil/scripts/visr12_apps_rac01


[oracle@apps_rac01 visr12_apps_rac01]$ adautocfg.sh
Enter the APPS user password:
The log file for this session is located at: /u01/app/oracle/product/11.0/db_1/appsutil/log/visr12_apps_rac01/07300646/adconfig.log

AutoConfig is configuring the Database environment...

AutoConfig will consider the custom templates if present.


Using ORACLE_HOME location : /u01/app/oracle/product/11.0/db_1
Classpath :
:/u01/app/oracle/product/11.0/db_1/jdbc/lib/ojdbc5.jar:/u01/app/oracle/product/11.0/db_1/appsutil/java/xmlparserv2.jar:/u01/app/oracle/product/1
1.0/db_1/appsutil/java:/u01/app/oracle/product/11.0/db_1/jlib/netcfg.jar:/u01/app/oracle/product/11.0/db_1/jlib/ldapjclnt11.jar

Using Context file : /u01/app/oracle/product/11.0/db_1/appsutil/visr12_apps_rac01.xml

Context Value Management will now update the Context file


Updating Context file...COMPLETED
Attempting upload of Context file and templates to database...COMPLETED
Updating rdbms version in Context file to db102
Updating rdbms type in Context file to 32 bits
Configuring templates from ORACLE_HOME ...
AutoConfig completed with errors.

Work Around / Fix -


-- When we run AutoConfig it will try to recreate the listener.ora file, so it is advisable to keep the listener down during the
AutoConfig run
[oracle@apps_rac01 visr12_apps_rac01]$ ./addlnctl.sh stop visr12
Logfile: /u01/app/oracle/product/11.0/db_1/appsutil/log/visr12_apps_rac01/addlnctl.txt
You are running addlnctl.sh version 120.1.12010000.4
Shutting down listener process visr12 ...
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 30-JUL-2012 06:46:06
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=apps_rac01.localdomain)(PORT=1521)))
The command completed successfully
addlnctl.sh: exiting with status 0

addlnctl.sh: check the logfile /u01/app/oracle/product/11.0/db_1/appsutil/log/visr12_apps_rac01/addlnctl.txt for more information ...

-- Start a new session and make sure you don't set the env by running <context>.env file

-- If you set the env then the Auto Config will fail

[oracle@apps_rac01 visr12_apps_rac01]$ sudo su - oracle

[oracle@apps_rac01 ~]$ cd /u01/app/oracle/product/11.0/db_1/appsutil/scripts/visr12_apps_rac01/

[oracle@apps_rac01 visr12_apps_rac01]$ ./adautocfg.sh

Enter the APPS user password:

The log file for this session is located at: /u01/app/oracle/product/11.0/db_1/appsutil/log/visr12_apps_rac01/07300647/adconfig.log

AutoConfig is configuring the Database environment...

AutoConfig will consider the custom templates if present.

Using ORACLE_HOME location : /u01/app/oracle/product/11.0/db_1

Classpath :
:/u01/app/oracle/product/11.0/db_1/jdbc/lib/ojdbc5.jar:/u01/app/oracle/product/11.0/db_1/appsutil/java/xmlparserv2.jar:/u01/app/oracle/product/1
1.0/db_1/appsutil/java:/u01/app/oracle/product/11.0/db_1/jlib/netcfg.jar

Using Context file : /u01/app/oracle/product/11.0/db_1/appsutil/visr12_apps_rac01.xml

Context Value Management will now update the Context file

Updating Context file...COMPLETED

Attempting upload of Context file and templates to database...COMPLETED

Updating rdbms version in Context file to db112


Updating rdbms type in Context file to 32 bits

Configuring templates from ORACLE_HOME ...

AutoConfig completed successfully.

Difference between the Oracle (DB) port and the App port in listener.ora

oracle + 105 = App port
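The rule of thumb above can be written as shell arithmetic. Note this offset is a convention observed on this environment, not a guaranteed EBS-wide rule:

```shell
# The "oracle + 105 = App port" rule of thumb as shell arithmetic.
# (A convention observed here, not a guaranteed EBS-wide offset.)
oracle_port=1521
app_port=$((oracle_port + 105))
echo "$app_port"    # prints 1626
```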

How to Lock Users Out Of E-Business Suite And Allow Specific Users in 11i/R12
This post is very handy during Month End Activities.

During month end, if there is critical activity going on from the business side and you want to restrict business users from accessing Oracle
Applications, you can make the configuration changes below. Before editing any file, take a backup of the configuration files.

11i
1. Backup file $IAS_ORACLE_HOME/Apache/Apache/conf/apps.conf
2. Edit the apps.conf file and add a list of ip addresses for the users that you want to allow access to the system
e.g.
Alias /OA_HTML/ "/u01/jbcomn/html/"
<Location /OA_HTML/>
Order allow,deny
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from X.XXX.XXX.XXX
Allow from localhost
Allow from your_apps_server.company.com
Allow from your_apps_server
</Location>

R12.X, R12.1X
1. Edit file $ORA_CONFIG_HOME/10.1.3/Apache/Apache/conf/custom.conf and add a list of ip addresses for the users that
you want to allow access to the system. The benefit of using custom.conf is that it is preserved when autoconfig is run.
e.g.
<Location ~ "/OA_HTML">
Order deny,allow
Deny from all
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from XX.XXX.XXX.XXX
Allow from X.XXX.XXX.XXX
Allow from localhost
Allow from your_apps_server.company.com
Allow from your_apps_server
</Location>

2. Note, you need to include localhost and your apps tier server name. One can use the PC name rather than the IP address; however,
the PC name is more sensitive to network config
3. Restart Apache
4. Now only the users who are assigned to the added IP addresses will have access. All other users will get a forbidden error
when they attempt to log in. This is a very simple solution, and what makes it good is that it can be done programmatically.

If Any user tries to login he will get below error

The forbidden error looks like this:

Forbidden
You don’t have permission to access /OA_HTML/AppsLocalLogin.jsp on this server

If you want to change the message you can do this: edit custom.conf and add a line as follows (change the text to suit your
requirements):
ErrorDocument 403 "Forbidden - oops, you cannot access the production instance as it is month end, only certain users have
access at this time"
Stop/Start apache. Users will now receive the above message

Important: This may not work if the IP address hitting the web server is from a reverse proxy, load balancer or
some other device. This is because the IP address will not be from the end user.
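A quick way to confirm the lockout is working is to request the login page from a machine that is not on the allow list and check for HTTP 403. The hostname and port below are placeholders for your environment:

```shell
# Sketch: verify the lockout by checking for HTTP 403 from a
# non-allowed machine. Hostname and port are placeholders.
code=$(curl -s -o /dev/null -w '%{http_code}' \
  http://your_apps_server.company.com:8000/OA_HTML/AppsLocalLogin.jsp)
[ "$code" = "403" ] && echo "lockout active" || echo "got HTTP $code"
```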

How to increase JVM Count and Memory in Oracle Applications 11i and R12
How to increase the number of OACORE processes (JVMs) and the required memory in R12:
Location in R12:$INST_TOP/apps/SID_HOSTNAME/ora/10.1.3/opmn/conf/opmn.xml
To increase JVM in R12.
Go to $INST_TOP/apps/SID_HOSTNAME/ora/10.1.3/opmn/conf/
Take a backup of opmn.xml file before editing,
A)Open the opmn.xml file, go to the oacore process-type section (line 128 in this instance) and increase
numprocs=4

How to increase the JVM Memory in R12:


B)To increase the memory for JVM.
In the same file go to line no-114 and change
FROM: -server -verbose:gc -Xmx512M -Xms128M -XX:MaxPermSize=160M -XX:NewRatio=2 -XX:+PrintGCTimeStamps
For example:
Xms128M to Xms256M = memory allocated upfront to the JVM
Xmx512M to Xmx1024M = maximum memory allocated to the JVM
TO:
-server -verbose:gc -Xmx1024M -Xms256M -XX:MaxPermSize=256M -XX:NewRatio=2 -XX:+PrintGCTimeStamps
How to increase the number JVMs in 11i:
You can edit the xml context file as follows to increase the oacore JVMs (earlier 1, now 2) and then run
autoconfig:
<oacore_nprocs oa_var="s_oacore_nprocs">2</oacore_nprocs>

Set the number of jvm as required and run autoconfig .

This will affect the jserv.conf (following line)


ApJServGroup OACoreGroup 1 1 /u03/oracle/prodora/iAS/Apache/Jserv/etc/jserv.properties

Alternatively : Manually increase the number of JVMs in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.conf


and bounce Apache
How to increase the JVM Memory in 11i:

To increase the memory for oacore JVM, edit the file $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties
wrapper.bin.parameters=-verbose:gc -Xmx512M -Xms128M -XX:MaxPermSize=128M -XX:NewRatio=2 -XX:+PrintGCTimeStamps -XX:+UseTLAB
to
wrapper.bin.parameters=-verbose:gc -Xmx1024M -Xms512M -XX:MaxPermSize=128M -XX:NewRatio=2 -XX:+PrintGCTimeStamps -XX:+UseTLAB
– We normally allocate 2 or 4 JVMs; we do not allocate 10 or 15.
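A rough way to sanity-check these settings is that the oacore JVMs can together claim up to numprocs × Xmx of heap, which must fit in the middle tier's memory. The values below match the R12 example above:

```shell
# Rough sizing check: total heap the oacore JVMs may claim is
# numprocs * Xmx. Values below match the R12 example above.
numprocs=4
xmx_mb=1024
total_mb=$((numprocs * xmx_mb))
echo "oacore JVMs may use up to ${total_mb} MB heap"   # 4096 MB
```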

CASE II:

The number and size of core files might keep increasing in the path: "/opt/oracle/PROD/inst/apps/PROD_prodapps/ora/10.1.2/forms"

So according to support note


Note 1331304.1: Forms Core Dump Files generated in E-Business Suite R12 (Doc ID 1331304.1)

Edit context parameter value "s_forms_catchterm"


FROM
<FORMS_CATCHTERM oa_var="s_forms_catchterm">1</FORMS_CATCHTERM>
TO
<FORMS_CATCHTERM oa_var="s_forms_catchterm">0</FORMS_CATCHTERM>

Run Autoconfig to make change effective.

FORMS_CATCHTERM. This variable enables or disables the Forms abnormal termination handler which captures middle tier crashes and writes
diagnostic information into the dump file or the forms server log file. Allowed values are {0,1}. By default, this value is set to '1' which enables
the Forms termination handler. Setting this variable to '0' disables the Forms termination Handler.

So, set it to 0 is to disable the Forms termination Handler.


You can remove those core files as they are not needed, because they are OK to remove. You can just use OS command to delete them.
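The cleanup itself can be a simple find; a sketch, where the forms path matches this note and the 7-day age cutoff is an assumption to adjust for your instance:

```shell
# Sketch: find and delete old frmweb core dumps under the forms directory.
# The path matches this note; the 7-day cutoff is an assumption.
find $INST_TOP/ora/10.1.2/forms -name 'core*' -type f -mtime +7 -print -delete
```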

Moreover,
Note 356878.1: R11i / R12 : How to relink an E-Business Suite Installation of Release 11i and Release 12.x (Doc ID 356878.1)

Another oracle support note was found in this regard if above solution doesn't work.

Note 1194383.1: R12: Frequent frmweb core files created in $INST_TOP/ora/10.1.2/forms (frmweb core dumps) (Doc ID 1194383.1)
to apply Patch 8940272 - MULTIPLE CORE DUMPS FOUND DURING LOAD TESTING.

You can apply with command:


>opatch apply
You also can rollback it with command:
>opatch rollback -id 8940272

DROPPING DB LINK of PRIVATE USER FROM SYS

February 21st, 2012 | Posted by nassyambasha@gmail.com in Administration

DROP DB_LINKS of a PRIVATE user from “SYS”


Create a procedure named "DROP_DBLINK" which reads values from "dba_users", parses a cursor, and calls a
built-in package.
To drop a private DB_LINK we would normally need to change the user's password or know it; instead, we can
drop DB_LINKS using this procedure.
Step 1:- Check the DB_LINK & Troubleshoot to drop

a) Check the existing DB_LINK of user “CKPT”.

SQL> show user

USER is "SYS"

SQL>

SQL> select db_link,owner from dba_db_links where owner='CKPT' and db_link='DEVWEBSTORE10G_IC.CKPT.COM';
DB_LINK OWNER

------------------------------ ------------------------------

DEVWEBSTORE10G_IC.CKPT.COM CKPT

b) Drop the DB_LINK from “SYS” user.

SQL> drop database link "CKPT"."DEVWEBSTORE10G_IC.CKPT.COM"; <---- drop using the quoted, schema-qualified name

drop database link "CKPT"."DEVWEBSTORE10G_IC.CKPT.COM"

ERROR at line 1:

ORA-02024: database link not found

SQL> drop database link DEVWEBSTORE10G_IC.CKPT.COM; <---- drop without the schema name

drop database link DEVWEBSTORE10G_IC.CKPT.COM

ERROR at line 1:

ORA-02024: database link not found

SQL> drop database link CKPT.DEVWEBSTORE10G_IC.CKPT.COM; <---- drop using a schema-name prefix

drop database link CKPT.DEVWEBSTORE10G_IC.CKPT.COM

ERROR at line 1:

ORA-02024: database link not found


SQL>

C) Create a procedure as below from “SYS” user.

SQL> create or replace procedure Drop_DbLink(schemaName varchar2, dbLink varchar2) is
  plsql varchar2(1000);
  cur   number;
  uid   number;
  rc    number;
begin
  select u.user_id into uid
    from dba_users u
   where u.username = schemaName;
  plsql := 'drop database link "'||dbLink||'"';
  cur := SYS.DBMS_SYS_SQL.open_cursor;
  SYS.DBMS_SYS_SQL.parse_as_user(
    c             => cur,
    statement     => plsql,
    language_flag => DBMS_SQL.native,
    userID        => uid
  );
  rc := SYS.DBMS_SYS_SQL.execute(cur);
  SYS.DBMS_SYS_SQL.close_cursor(cur);
end;
/

Procedure created.

SQL>

D) Now drop one DB_LINK of a Private user


SQL> exec Drop_DbLink( 'CKPT', 'DEVWEBSTORE10G_IC.CKPT.COM' );

PL/SQL procedure successfully completed.

SQL>

SQL> select db_link,owner from dba_db_links where owner='CKPT' and db_link='DEVWEBSTORE10G_IC.CKPT.COM';

no rows selected

SQL>

Here No DB_LINK exists with the above name after Executing Procedure.

Step 2:- How to DROP ALL DB_LINKS of a “PRIVATE” schema from “SYS” user

This procedure is an extension of the above procedure "Drop_DbLink". Create a procedure named "DropSchema_DbLinks".

create or replace procedure DropSchema_DbLinks(schemaName varchar2) is
begin
  for link in (
    select l.db_link
      from dba_db_links l
     where l.owner = schemaName
  ) loop
    Drop_DbLink(
      schemaName => schemaName,
      dbLink     => link.db_link
    );
  end loop;
end;
/

Procedure created.

SQL>

SQL> select owner, db_link from dba_db_links where owner ='CKPT';

OWNER                          DB_LINK
------------------------------ ------------------------------
CKPT                           DEVWEBSTORE9I_IC.CKPT.COM
CKPT                           DEVWEBSTORE9I_IC.WORLD
CKPT                           INTER_EDI_RO.CKPT.COM
CKPT                           ORDERSHIPPING.CKPT.COM
CKPT                           ORDERSHIPPING.WORLD
CKPT                           SVC_IW.CKPT.COM

6 rows selected.

SQL> exec dropschema_dblinks('CKPT');

PL/SQL procedure successfully completed.

SQL>

SQL> select owner, db_link from dba_db_links where owner ='CKPT';

no rows selected
SQL>

Here it is all the “6” DB_LINKS dropped at one shot.

=============================================================

Issue Description: Program was terminated by signal 25

Cause: This happens when the file size of "reports.log" reaches its maximum limit at the operating system, which is 2 GB

Solution: Rename or truncate the existing "reports.log" in the directory $INST_TOP/logs/appl/conc/log, create an empty
"reports.log", and restart the concurrent managers.
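The rename/truncate step can be scripted. A sketch: REPORTS_LOG is the real path on an apps tier (as in the note above); here it defaults to a temporary file so the sketch can be exercised safely:

```shell
# Sketch: rotate reports.log before restarting the concurrent managers.
# REPORTS_LOG is the real path on an apps tier; here it defaults to a
# temporary file so the sketch can be run safely.
log=${REPORTS_LOG:-$(mktemp)}
echo "old log data" >> "$log"       # stand-in for existing contents
cp "$log" "$log.$(date +%Y%m%d)"    # keep a dated copy
: > "$log"                          # truncate in place, keeping ownership
echo "rotated: $(wc -c < "$log") bytes remain"
```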

=============================================================

PO Output for Communication failed to produce PDF report

Responsibilty :-

XML Publisher Administrator

Concurrent Program :-

Run the "XML Publisher Template Re-Generator Program" with parameter ALL .
ORA-00054: resource busy and acquire with NOWAIT specified or timeout
expired ORA-06512: at "APPS.WF_NOTIFICATION"
Approval Workflow Notification Mailer Error :

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired ORA-06512: at "APPS.WF_NOTIFICATION",
line 5130 ORA-06512: at line 1

Solution:

Sql> Select do.owner, do.object_name, do.object_type, dl.session_id, vs.serial#, vs.program, vs.machine, vs.osuser

from dba_locks dl, dba_objects do,v$session vs

where do.object_name ='WF_NOTIFICATIONS' and do.object_type='TABLE' and dl.lock_id1 =do.object_id and vs.sid =
dl.session_id;

Issue the command: alter system kill session '<sid>,<serial#>' immediate;

ALTER SYSTEM SET ddl_lock_timeout=30;

=============================================================

Patch Conflict Detection and Resolution

OPatch detects and reports any conflicts encountered when applying an interim patch with a previously
applied patch. The patch application fails in case of conflicts. You can use the -force option of OPatch to
override this failure. If you specify -force, the installer firsts rolls back any conflicting patches and then
proceeds with the installation of the desired interim patch.

You may experience a bug conflict and might want to remove the conflicting patch. This process is known as
patch rollback. During patch installation, OPatch saves copies of all the files that were replaced by the new
patch before the new versions of these files are loaded, and stores them in $ORACLE_HOME/.patch_storage.
These saved files are called rollback files and are key to making patch rollback possible. When you roll back a
patch, these rollback files are restored to the system. If you have gained a complete understanding of the
patch rollback process, you should only override the default behavior by using the -force flag. To roll back a
patch, execute the following command:
$ OPatch/opatch rollback -id <Patch_ID>

Please use the below command to check for conflicts against the ORACLE_HOME and avoid landing in problems:

Step 1: Unzip the patch zip file.

Step 2: Run the following command:

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <patch_directory>

Example:

$ unzip p9655017_10204_linux.zip

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 9655017
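When several patches are staged at once, the same pre-check can be run over each unzipped patch directory in a loop. A sketch, assuming ORACLE_HOME is set and each argument is a patch base directory (the function name is illustrative):

```shell
#!/bin/sh
# Sketch: run the OPatch conflict pre-check for each unzipped patch directory.
# Assumes ORACLE_HOME is set; each argument is a directory created by
# unzipping a pXXXXXXX_*.zip patch file.
check_conflicts() {
  rc=0
  for patch_dir in "$@"; do
    if "$ORACLE_HOME/OPatch/opatch" prereq CheckConflictAgainstOHWithDetail \
        -phBaseDir "$patch_dir"; then
      echo "OK: no conflicts for $patch_dir"
    else
      echo "CONFLICT: review output for $patch_dir before applying"
      rc=1
    fi
  done
  return $rc
}

# Usage: check_conflicts 9655017 9876543
```

A non-zero return flags any patch whose pre-check failed, so the loop can gate an automated apply step.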

The other day, while patching a RAC database, the above conflict check returned the following error:

Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :

======================================

TNSLSNR for Linux: Version 10.1.0.5.0 – Production


System parameter file is /pa01/prod/inst/apps/prod_ebsapps01/ora/10.1.2/network/admin/listener.ora
Log messages written to /pa01/prod/inst/apps/prod_ebsapps01/logs/ora/10.1.2/network/apps_prod.log
Error listening on: (ADDRESS=(PROTOCOL=TCP)(Host=ebsapps01)(Port=1628))
TNS-12533: TNS:illegal ADDRESS parameters
TNS-12560: TNS:protocol adapter error
TNS-00503: Illegal ADDRESS parameters

Solution: Check the application tier sqlnet.ora file (in the same directory as the listener.ora shown above).
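A common cause of such address errors on the apps tier is the node-validation setup in sqlnet.ora (tcp.validnode_checking with tcp.invited_nodes not listing the host). A quick check, sketched as a helper (the function name and path are illustrative; on R12 the file sits under $INST_TOP/ora/10.1.2/network/admin):

```shell
#!/bin/sh
# Sketch: list the node-validation entries in a sqlnet.ora file so they
# can be compared against the host the listener is binding to.
show_valid_node_entries() {
  sqlnet_file="$1"
  grep -i -E 'tcp\.(validnode_checking|invited_nodes|excluded_nodes)' "$sqlnet_file"
}

# Usage: show_valid_node_entries $INST_TOP/ora/10.1.2/network/admin/sqlnet.ora
```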


How to re-open an expired Oracle database account without changing the password
Today I'll show you how to reopen an Oracle database user account that has status EXPIRED, without changing its password. Let's
create a demonstration:
create user

CREATE USER test IDENTIFIED BY test;


grant create session to test;

check status

SELECT username, account_status, expiry_date
FROM dba_users
WHERE username = 'TEST';
-------------------------------------------
USERNAME ACCOUNT_STATUS EXPIRY_DATE
TEST OPEN 20-Sep-14 11:29:00

expire it

alter user test password expire;

re-check status

SELECT username, account_status, expiry_date
FROM dba_users
WHERE username = 'TEST';
-------------------------------------------
USERNAME ACCOUNT_STATUS EXPIRY_DATE
TEST EXPIRED 24-Mar-14 11:31:50

get the existing password hash; here are two methods:


1) in 11g

select password from user$ where name='TEST';


PASSWORD
------------------------------
7A0F2B316C212D67

in 10g

select password from dba_users where username='TEST';


PASSWORD
------------------------------
7A0F2B316C212D67

open account with

alter user test identified by values '7A0F2B316C212D67';


2)

SELECT DBMS_METADATA.get_ddl ('USER', 'TEST')
FROM DUAL;
-------------------------------------------
CREATE USER "TEST" IDENTIFIED BY VALUES
'S:79B1417837DCF0FBFACEFB10D7DBDC7B7EA63CC986036567BDCBA144B940;7A0F2B316C212D67'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
PASSWORD EXPIRE

edit the above script (keep only the IDENTIFIED BY VALUES clause, dropping PASSWORD EXPIRE) and execute it:

ALTER USER "TEST" IDENTIFIED BY VALUES
'S:79B1417837DCF0FBFACEFB10D7DBDC7B7EA63CC986036567BDCBA144B940;7A0F2B316C212D67';

check status again

SELECT username, account_status, expiry_date
FROM dba_users
WHERE username = 'TEST';
-------------------------------------------
USERNAME ACCOUNT_STATUS EXPIRY_DATE
TEST OPEN 20-Sep-14 11:36:47

After that, user TEST still has its old password ('test').
That's all, good luck.
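The manual steps above can be partly scripted; a minimal sketch (the helper name is hypothetical) that emits the reopen statement once the hash has been captured from user$ or DBMS_METADATA:

```shell
#!/bin/sh
# Hypothetical helper: emit the statement that re-opens an expired account
# using its existing password hash, so the password never changes.
reopen_stmt() {
  username="$1"
  hash="$2"
  printf "ALTER USER \"%s\" IDENTIFIED BY VALUES '%s';" "$username" "$hash"
}

# Example, using the 10g-style hash from the demonstration above
reopen_stmt TEST '7A0F2B316C212D67'
# prints: ALTER USER "TEST" IDENTIFIED BY VALUES '7A0F2B316C212D67';
```

The emitted statement can then be run in SQL*Plus as a user with ALTER USER privilege.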

=================================================================================

ERROR
R12: Rapid Cloning Issue : ouicli.pl INSTE8_APPLY 255

SOLUTION
Step 1. Add $ORACLE_HOME/perl/bin to the front of the PATH environment variable:

$ export PATH=$ORACLE_HOME/perl/bin:$PATH

Step 2. Re-run adcfgclone.pl
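The fix works by making the Oracle-shipped perl resolve first on the PATH; a quick way to verify this before re-running adcfgclone.pl (a sketch, assuming ORACLE_HOME is already set):

```shell
#!/bin/sh
# Sketch: prepend the Oracle-shipped perl to PATH and confirm that it
# is the one a plain "perl" invocation would now pick up.
use_oracle_perl() {
  PATH="$ORACLE_HOME/perl/bin:$PATH"
  export PATH
  command -v perl   # should print $ORACLE_HOME/perl/bin/perl
}
```

If `command -v perl` prints a different path, the clone scripts will run against the wrong perl and can fail with INSTE8_APPLY 255 again.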


I/O WAIT ISSUE IN DATABASE:

ORA$AT_SA_SPC_SY Jobs failing?


Oracle has raised an alert in the alert.log and created a trace file as well, for a failed DBMS_SCHEDULER job with a strange
name which doesn’t appear in DBA_SCHEDULER_JOBS or DBA_SCHEDULER_PROGRAMS – what’s going on?

An extract from the alert log and/or the trace file mentioned in the alert log shows something like:

*** SERVICE NAME:(SYS.USERS) ...
*** MODULE NAME:(DBMS_SCHEDULER) ...
*** ACTION NAME:(ORA$AT_SA_SPC_SY_nnn) ...

Where ‘nnn’ in the action name is a number.

No matter how hard you scan the DBA_SCHEDULER_% views, you will not find anything with this name. What is actually
failing?

Oracle 11.1.0.6 onwards stopped listing these internal jobs in DBA_SCHEDULER_JOBS, as they did in 10g, and instead lists
them in DBA_AUTOTASK_% views. However, not by actual name, so don’t go looking for a TASK_NAME that matches the
above action name. You will fail.

There are three different autotask types:

 Space advisor
 Optimiser stats collection
 SQL tuning advisor

The tasks that run for these autotask ‘clients’ are named as follows:

 ORA$AT_SA_SPC_SY_nnn for Space advisor tasks
 ORA$AT_OS_OPT_SY_nnn for Optimiser stats collection tasks
 ORA$AT_SQ_SQL_SW_nnn for SQL tuning advisor tasks
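The mapping between these internal task-name prefixes and the client names used in DBA_AUTOTASK_CLIENT can be sketched as a small helper (the function name is illustrative; the mappings follow the list above):

```shell
#!/bin/sh
# Sketch: map an internal autotask action name (as seen in the alert log
# or trace file) to the client name used in DBA_AUTOTASK_CLIENT.
autotask_client_for() {
  case "$1" in
    'ORA$AT_SA_SPC_SY_'*) echo "auto space advisor" ;;
    'ORA$AT_OS_OPT_SY_'*) echo "auto optimizer stats collection" ;;
    'ORA$AT_SQ_SQL_SW_'*) echo "sql tuning advisor" ;;
    *)                    echo "unknown" ;;
  esac
}

autotask_client_for 'ORA$AT_SA_SPC_SY_123'
# prints: auto space advisor
```

This gives you the exact CLIENT_NAME string needed later for the dbms_auto_task_admin.disable/enable calls.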

See MOS notes 756734.1, 755838.1, 466920.1 and Bug 12343947 for details. The first of these has the most relevant and
useful information.

UPDATE: My original failing autotask has been diagnosed by Oracle Support as bug 13840704 for which a patch exists
here for 11.2.0.2 and 11.2.0.3.
Oracle document id 13840704.8 has details, but it involves LOBs based on a user defined type. In this case, Spatial data
in an MDSYS.SDO_GEOMETRY column.

The view DBA_AUTOTASK_CLIENT won’t show you anything about a specific task with the above names, but it will show
you details of the overall ‘clients’. There are three:

select client_name, status
from dba_autotask_client;

CLIENT_NAME STATUS
------------------------------- --------
auto optimizer stats collection ENABLED
auto space advisor ENABLED
sql tuning advisor DISABLED

I can see from the task name in the alert log and trace file, that my failing task is a space advisor one, so, by looking into
the DBA_AUTOTASK_JOB_HISTORY view, I can see what’s been happening:

select distinct client_name, window_name, job_status, job_info
from dba_autotask_job_history
where job_status <> 'SUCCEEDED'
order by 1,2;

CLIENT_NAME        WINDOW_NAME     JOB_STATUS JOB_INFO
------------------ --------------- ---------- -------------------------------------------
auto space advisor SATURDAY WINDOW FAILED ORA-6502: PL/SQL: numeric or value error...
auto space advisor SUNDAY WINDOW FAILED ORA-6502: PL/SQL: numeric or value error...

So, in my own example, the auto space advisor appears to have failed on Saturday and Sunday. Given that this is an
internal task, and there is nothing I can do locally to resolve the numeric or value error, I need to log an SR with Oracle
on the matter. However, as I don’t want my fellow DBAs to be paged in the wee small hours for a known problem, I have
disabled the space advisor task as follows:

BEGIN
dbms_auto_task_admin.disable(
client_name => 'auto space advisor',
operation => NULL,
window_name => NULL);
END;
/

PL/SQL procedure successfully completed

Checking DBA_AUTOTASK_CLIENT again, shows that it is indeed disabled:

select client_name, status
from dba_autotask_client
where client_name = 'auto space advisor';

CLIENT_NAME STATUS
------------------------------- --------
auto space advisor DISABLED

Enabling it again, after Oracle Support have helped resolve the problem, is as simple as calling
dbms_auto_task_admin.enable with exactly the same parameters as for the disable call:

BEGIN
dbms_auto_task_admin.enable(
client_name => 'auto space advisor',
operation => NULL,
window_name => NULL);
END;
/

PL/SQL procedure successfully completed

When enabling and/or disabling auto tasks, you must use the CLIENT_NAME as found in DBA_AUTOTASK_CLIENT view.

The full list of DBA_AUTOTASK_% views is:

 DBA_AUTOTASK_CLIENT
 DBA_AUTOTASK_CLIENT_HISTORY
 DBA_AUTOTASK_CLIENT_JOB
 DBA_AUTOTASK_JOB_HISTORY
 DBA_AUTOTASK_OPERATION
 DBA_AUTOTASK_SCHEDULE
 DBA_AUTOTASK_TASK
 DBA_AUTOTASK_WINDOW_CLIENTS
 DBA_AUTOTASK_WINDOW_HISTORY

Hope this helps!
