
11gR2 ASMCMD commands

chdg - Changes an existing disk group (add disks, drop disks, or rebalance) based on an XML configuration file. You can use ALTER DISKGROUP ... commands for the same tasks, but here we are learning the ASMCMD command extensions in 11gR2. The chdg command adds disks, deletes disks, or sets the rebalance power level on an existing disk group.

Syntax : chdg {config_file.xml | 'contents_of_xml_file'}

XML configuration template:

<chdg>          update disk clause (add/delete disks/failure groups)
  name          disk group to change
  power         power to perform rebalance
  <add>         items to add are placed here
  </add>
  <drop>        items to drop are placed here
  </drop>
  <fg>          failure group
    name        failure group name
  </fg>
  <dsk>         disk
    name        disk name
    path        disk path
    size        size of the disk to add
  </dsk>
</chdg>

Example: We will add disk /dev/rdisk/disk61 to the existing disk group DATA and set the rebalance power level to 4.

Find the existing disks in disk group DATA:

SQL> select name,path from v$asm_disk where group_number=1;

NAME            PATH
--------------- -----------------
DATA_0000       /dev/rdisk/disk50
DATA_0001       /dev/rdisk/disk51
DATA_0002       /dev/rdisk/disk60

Create the following XML configuration file and save it as adddsk.xml:

<chdg name="data" power="4">

  <add>
    <dsk string="/dev/rdisk/disk61"/>
  </add>
</chdg>

and execute the following:

$asmcmd
ASMCMD> chdg adddsk.xml
ASMCMD>

Now check again to see the disks in the DATA disk group:

SQL> select name,path from v$asm_disk where group_number=1;

NAME            PATH
--------------- -----------------
DATA_0000       /dev/rdisk/disk50
DATA_0001       /dev/rdisk/disk51
DATA_0002       /dev/rdisk/disk60
DATA_0003       /dev/rdisk/disk61 <--- New disk added

Let's drop this disk with the chdg command. You can use the ALTER DISKGROUP DATA DROP DISK command too. Create an XML file

<chdg name="data" power="4">
  <drop>
    <dsk name="DATA_0003"/>
  </drop>
</chdg>

save it as dropdsk.xml, and execute the following:

$asmcmd
ASMCMD> chdg dropdsk.xml
ASMCMD>

Now check again to see the disks in the DATA disk group:

SQL> select name,path from v$asm_disk where group_number=1;

NAME            PATH
--------------- -----------------
DATA_0000       /dev/rdisk/disk50
DATA_0001       /dev/rdisk/disk51

DATA_0002       /dev/rdisk/disk60

The DATA_0003 disk name no longer exists!

chkdg - Checks or repairs a disk group. The 11gR2 ASM CHECK command checks for:
- the consistency of the disks
- that the alias directory is linked correctly
- all metadata directories and the internal consistency of the ASM disk group metadata

It writes its findings to the alert log and displays them on the Database Control page too. In 11gR2 the default is NOREPAIR.

Syntax : chkdg [--repair] <<diskgroupname>>

Example:

ASMCMD> chkdg data
ASMCMD>

The following are the contents of the ASM alert log file:

...
SQL> /* ASMCMD */ALTER DISKGROUP data CHECK NOREPAIR
NOTE: starting check of diskgroup DATA
kfdp_checkDsk(): 6
kfdp_checkDsk(): 7
kfdp_checkDsk(): 8
SUCCESS: check of diskgroup DATA found no errors
SUCCESS: /* ASMCMD */ALTER DISKGROUP data CHECK NOREPAIR
...

mkdg - Creates a disk group based on an XML configuration file.

Syntax : mkdg {config_file.xml | 'contents_of_xml_file'}

XML configuration template:

<dg>            disk group
  name          disk group name
  redundancy    normal, external, high

  <fg>          failure group
    name        failure group name
  </fg>
  <dsk>         disk
    name        disk name
    path        disk path
    size        size of the disk to add
  </dsk>
  <a>           attribute
    name        attribute name
    value       attribute value
  </a>
</dg>

Example: Create a new disk group DATA2. First, create an XML configuration file with external redundancy and save it as mkdg.xml:

<dg name="data2" redundancy="external">
  <dsk string="/dev/rdisk/disk61"/>
  <a name="compatible.rdbms" value="10.2"/>
</dg>

$ls -l mkdg.xml
-rw-r--r-- 1 oracle oinstall 86 Nov 20 10:59 mkdg.xml

$asmcmd
ASMCMD> mkdg mkdg.xml
ASMCMD>
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
MOUNTED  EXTERN  N        1024   4096  1048576     76800    76750                0           76750              0             N  DATA2/

lsdsk - Lists Oracle ASM disks. It runs in connected mode first and pulls information from the V$ASM_DISK_STAT and V$ASM_DISK dynamic views; otherwise it runs in non-connected mode and pulls the information from the disk headers. The -I option forces non-connected mode.

Syntax : lsdsk {-kptgMHI} {-G diskgroup} {--member|--candidate} {--discovery} {--statistics} {pattern}

-k : Displays the TOTAL_MB, FREE_MB, OS_MB, NAME, FAILGROUP, LIBRARY, LABEL, UDID, PRODUCT, REDUNDANCY, and PATH columns of the V$ASM_DISK view.
--statistics : Displays the READS, WRITES, READ_ERRS, WRITE_ERRS, READ_TIME, WRITE_TIME, BYTES_READ, BYTES_WRITTEN, and PATH columns of the V$ASM_DISK view.
-p : Displays the GROUP_NUMBER, DISK_NUMBER, INCARNATION, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE, and PATH columns of the V$ASM_DISK view.
-t : Displays the CREATE_DATE, MOUNT_DATE, REPAIR_TIMER, and PATH columns of the V$ASM_DISK view.
-g : Selects from GV$ASM_DISK_STAT, or from GV$ASM_DISK if the --discovery flag is also specified. GV$ASM_DISK.INST_ID is included in the output.
--discovery : Selects from V$ASM_DISK, or from GV$ASM_DISK if the -g flag is also specified. This option is always enabled if the Oracle ASM instance is version 10.1 or earlier. This flag is disregarded if lsdsk is running in non-connected mode.
-H : Suppresses column headings.
-I : Scans disk headers for information rather than extracting the information from an Oracle ASM instance. This option forces non-connected mode.
-G : Restricts results to only those disks that belong to the group specified by diskgroup.
-M : Displays the disks that are visible to some but not all active instances. These are disks that, if included in a disk group, cause the mount of that disk group to fail on the instances where the disks are not visible.
--candidate : Restricts results to only disks having membership status equal to CANDIDATE.
--member : Restricts results to only disks having membership status equal to MEMBER.
pattern : Returns only information about the specified disks that match the supplied pattern.
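Commands such as lsdsk and lsdg print whitespace-delimited tables; when scripting around ASMCMD, that output can be split into records. A minimal Python sketch, assuming no column value contains embedded spaces (blank columns such as Label or UDID would break this simple split, so the sample omits them):

```python
# Sketch: parse whitespace-delimited ASMCMD table output (header line +
# data rows) into a list of dicts keyed by the column names.
def parse_asmcmd_table(text):
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    header = lines[0].split()
    return [dict(zip(header, ln.split())) for ln in lines[1:]]

# Sample modeled on a lsdsk -k listing (blank columns omitted, since a
# plain split cannot represent them).
sample = """\
Total_MB Free_MB OS_MB Name Failgroup Path
76800 76750 76800 DATA2_0000 DATA2_0000 /dev/rdisk/disk61
"""
rows = parse_asmcmd_table(sample)
print(rows[0]["Path"])     # /dev/rdisk/disk61
print(rows[0]["Free_MB"])  # 76750
```

For fixed-width output with empty cells, slicing by the header's column offsets would be more robust; the split-based version is enough for the dense listings shown in this article.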

Example 1:

$ asmcmd
ASMCMD> lsdsk
Path
/dev/rdisk/disk50
/dev/rdisk/disk51
/dev/rdisk/disk60
/dev/rdisk/disk61

Example 2: The following command displays the disks attached to disk group DATA2 and their space information.

ASMCMD> lsdsk -k -G DATA2
Total_MB  Free_MB  OS_MB  Name        Failgroup   Library  Label  UDID  Product  Redund   Path
   76800    76750  76800  DATA2_0000  DATA2_0000  System                         UNKNOWN  /dev/rdisk/disk61

Example 3: The following shows I/O statistics for the disks in the DATA2 disk group.

ASMCMD> lsdsk -t -G DATA2 --statistics
Reads  Write  Read_Errs  Write_Errs  Read_time  Write_Time  Bytes_Read  Bytes_Written  Voting_File  Create_Date  Mount_Date  Repair_Timer  Path
   18    447          0           0    .026287    3.841985       77824        1830912  N            20-NOV-10    20-NOV-10              0  /dev/rdisk/disk61

Example 4: The following displays the disks attached to the DATA2 and DATA disk groups.

ASMCMD> lsdsk -G DATA2
Path
/dev/rdisk/disk61
ASMCMD> lsdsk -G DATA
Path
/dev/rdisk/disk50
/dev/rdisk/disk51
/dev/rdisk/disk60
ASMCMD>

dropdg - Drops a disk group. The DROP DISKGROUP command marks the headers of disks belonging to a disk group that cannot be mounted by ASM as

FORMER. If the disk group is being used by any other node or ASM instance, the dropdg command fails. Without options, dropdg succeeds only if the disk group is empty; the -r (INCLUDING CONTENTS) option drops the disk group together with its files. The -f (FORCE) option combined with INCLUDING CONTENTS should be used with caution, as it does not check whether the disk group is being used by any other ASM instance and it clears all disks in that disk group.

Syntax: dropdg { -r -f | -r } <<diskgroup>>

Example:

ASMCMD> dropdg data2
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "DATA2" contains existing files (DBD ERROR: OCIStmtExecute)
ASMCMD> dropdg -r data2
ASMCMD>

iostat - Displays I/O statistics for disks.

lsdg - Displays disk groups and their information. The lsdg command queries V$ASM_DISKGROUP_STAT by default. If the --discovery flag is specified, V$ASM_DISKGROUP is queried instead.

Example:

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
MOUNTED  EXTERN  N        1024   4096  1048576     76800    76750                0           76750              0             N  DATA2/

umount - Dismounts a disk group.

Syntax: umount { -a | [-f] diskgroup }
-a  Dismounts all mounted disk groups.
-f  Forces the dismount operation.

Example: The following example first checks the disk groups with the lsdg command and then unmounts the data2 disk group. You will see that data2 is unmounted if you run the lsdg command again.

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
MOUNTED  EXTERN  N        1024   4096  1048576     76800    76750                0           76750              0             N  DATA2/
ASMCMD> umount data2
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
ASMCMD>

mount - Mounts a disk group. You can mount an ASM disk group in restricted mode for maintenance/rebalance operations; during this mode clients cannot access files in that disk group. If you are running RAC, then MOUNT RESTRICTED will mount the disk group exclusively on that instance, and clients cannot access files in that disk group until it is mounted back in normal mode. Why restricted mode? It improves rebalance performance, as there are no external connections to the disk group.

Syntax: mount [--restrict] { [-a] | [-f] diskgroup [diskgroup ...] }
-a  Mounts all disk groups.
--restrict  Mounts in restricted mode.
-f  Forces the mount operation.

Example: In the previous example of the umount command we left DATA2 unmounted. Let's mount the DATA2 disk group in restricted mode, then unmount it and mount it again in normal mode.

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
ASMCMD>
ASMCMD> mount --restrict DATA2
ASMCMD> lsdg
State       Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED     EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
RESTRICTED  EXTERN  N        1024   4096  1048576     76800    76750                0           76750              0             N  DATA2/

The State column in the output above shows RESTRICTED for DATA2.

ASMCMD> umount data2
ASMCMD> mount data2
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
MOUNTED  EXTERN  N        1024   4096  1048576     76800    76750                0           76750              0             N  DATA2/

DATA2 is removed from RESTRICTED mode.

offline - Offlines disks or failure groups that belong to a disk group. You won't be able to take a disk offline in a disk group with external redundancy.

Syntax: offline -G diskgroup { -F failgroup | -D disk } [-t {minutes | hours}]
-G diskgroup  Disk group name.
-F failgroup  Failure group name.
-D disk  Specifies a single disk name.
-t minutes | hours  Specifies the time before the specified disk is dropped, as nm or nh, where m specifies minutes and h specifies hours. The default unit is hours.

Example: Let's add a disk to disk group DATA2 with the chdg command.

ASMCMD> chdg adddsk.xml

ASMCMD> lsdsk -G DATA2
Path
/dev/rdisk/disk61
/dev/rdisk/disk62 <-- New disk added
ASMCMD>
ASMCMD> lsdsk -k -G data2
Total_MB  Free_MB  OS_MB  Name        Failgroup   Library  Label  UDID  Product  Redund   Path
   76800    76774  76800  DATA2_0000  DATA2_0000  System                         UNKNOWN  /dev/rdisk/disk61
   76800    76774  76800  DATA2_0001  DATA2_0001  System                         UNKNOWN  /dev/rdisk/disk62
ASMCMD> offline -G data2 -D data2_0001
ORA-15067: command or option incompatible with diskgroup redundancy (DBD ERROR: OCIStmtExecute)
ASMCMD>

online - Onlines all disks, a single disk, or a failure group that belongs to a disk group.

Syntax : online { [-a] -G diskgroup | -F failgroup | -D disk } [-w]
-a  Onlines all offline disks in the disk group.
-G diskgroup  Disk group name.
-F failgroup  Failure group name.
-D disk  Disk name.
-w  Wait option. Causes ASMCMD to wait for the disk group to be rebalanced before returning control to the user. The default is not to wait.

rebal - Rebalances a disk group; useful if you have added disks to a disk group to balance I/O. The power level can be set from 0 to 11. A value of 0 disables rebalancing. If the rebalance power is not specified, the value defaults to the setting of the ASM_POWER_LIMIT initialization parameter. You can determine whether a rebalance operation is occurring with the ASMCMD

lsop command.

Syntax: rebal [--power power] [-w] diskgroup
--power power  Power setting (0 to 11).
-w  Wait option. Causes ASMCMD to wait for the disk group to be rebalanced before returning control to the user. The default is not to wait.

Example: The following example rebalances the data2 disk group with the power level set to 4 from 0.

ASMCMD> lsop
Group_Name  Dsk_Num  State  Power     <--- no rebalance activity is going on
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        1024   4096  1048576     80896    78196                0           78196              0             N  DATA/
MOUNTED  EXTERN  N        1024   4096  1048576    153600   153548                0          153548              0             N  DATA2/
ASMCMD>
ASMCMD> rebal --power 4 data2
ASMCMD> lsop
Group_Name  Dsk_Num  State  Power
DATA2                REBAL WAIT  4    <--- rebalance is currently running
ASMCMD> lsop
Group_Name  Dsk_Num  State  Power     <--- empty again: the rebalance has completed

The STATE can be one of the following:
- WAIT : No rebalance is running, or a wait period was specified by the admin.
- RUN : Rebalance is running.
- REAP : Rebalance operation stopped.
- HALT : Halted by the admin.
- ERRORS : Errors during the rebalance operation; halted.

md_backup, md_restore - Create a backup file on a filesystem for the ASM disk group metadata information; you can restore this backup file with the md_restore command of ASMCMD.

Syntax: md_backup -b <<backupfilename>> -G <<diskgroup>>

When you restore an RMAN backup to a lost disk group or to a different server, you will get errors like:

ORA-01119: error in creating database file ...
ORA-17502: ksfdcre:4 Failed to create file ...
ORA-15001: diskgroup "DATA" does not exist or is not mounted

You have two options to restore:
1. Use SET NEWNAME FOR DATAFILE <<fileno#>> TO <<new diskgroup>>, or the db_file_name_convert option, to restore these files to a new disk group.
2. Recreate the ASM disk group manually, along with the other user-defined directory structures inside that disk group.

Let's try this with an example.

Example: For this example I will create several directory paths and one tablespace ts1 with 2 datafiles on the DATA2 disk group. We will take a tablespace backup and a DATA2 disk group metadata backup. We will restore DATA2 and its directory tree with md_restore, and the tablespace datafiles from the RMAN backup.

ASMCMD> cd DATA2
ASMCMD> mkdir mydir1
ASMCMD> mkdir mydir2
ASMCMD> ls -l
Type Redund Striped Time Sys Name
                         N   mydir2/
                         N   mydir1/
ASMCMD> cd mydir1
ASMCMD> ls -l
ASMCMD> mkdir ts1_dir
ASMCMD> mkdir ts2_dir
ASMCMD> ls -l
Type Redund Striped Time Sys Name
                         N   ts1_dir/
                         N   ts2_dir/

Create a tablespace and create one table inside it.

SQL> create tablespace ts1 datafile '+DATA2/ts1.dbf' size 1m;
Tablespace created.
SQL> alter tablespace ts1 add datafile '+DATA2/ts2.dbf' size 2m;
Tablespace altered.
SQL> connect scott/tiger
SQL> create table test tablespace ts1 as select * from user_objects;
Table created.
SQL> select count(1) from test;

  COUNT(1)
----------
         7

Take the ASM DATA2 disk group metadata backup:

ASMCMD> md_backup data2asm_backup -G DATA2
Disk group metadata to be backed up: DATA2
Current alias directory path: mydir1/ts2_dir
Current alias directory path: mydir1
Current alias directory path: mydir2
Current alias directory path: mydir1/ts1_dir
Current alias directory path: TEST
Current alias directory path: TEST/DATAFILE
ASMCMD> exit
$ ls -lt
-rw-r--r-- 1 oracle oinstall 13418 Nov 20 13:03 data2asm_backup

Take an RMAN backup of tablespace ts1 with the following commands:

RMAN> run {
2> allocate channel c1 type disk;
3> backup tablespace ts1 format "/backup/test/ts1_%s_%t";
4> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=51 instance=TEST1 devtype=DISK
Starting backup at 20-NOV-10
channel c1: starting full datafile backupset

channel c1: specifying datafile(s) in backupset
input datafile fno=00007 name=+DATA2/ts2.dbf
input datafile fno=00006 name=+DATA2/ts1.dbf
channel c1: starting piece 1 at 20-NOV-10
channel c1: finished piece 1 at 20-NOV-10
piece handle=/backup/test/ts1_11_735580273 tag=TAG20101120T155112 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 20-NOV-10
released channel: c1
RMAN>

SQL> alter tablespace ts1 offline;
Tablespace altered.

Now drop the DATA2 disk group (including its contents):

$asmcmd
ASMCMD> dropdg data2
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "DATA2" contains existing files (DBD ERROR: OCIStmtExecute)
ASMCMD> dropdg -r data2
ASMCMD>

SQL> connect scott/tiger
SQL> select * from test;
select * from test
*
ERROR at line 1:
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: '+DATA2/ts1.dbf'

Let's use the ASM md_restore command to recreate the DATA2 disk group from the backup. This restores all the metadata information and recreates the directory structure.

$ asmcmd
ASMCMD> md_restore data2asm_backup
Current Diskgroup metadata being restored: DATA2
Diskgroup DATA2 created!
System template ONLINELOG modified!
System template AUTOBACKUP modified!
System template ASMPARAMETERFILE modified!
System template OCRFILE modified!
System template ASM_STALE modified!
System template OCRBACKUP modified!
System template PARAMETERFILE modified!
System template ASMPARAMETERBAKFILE modified!
System template FLASHFILE modified!
System template XTRANSPORT modified!
System template DATAGUARDCONFIG modified!
System template TEMPFILE modified!
System template ARCHIVELOG modified!
System template CONTROLFILE modified!
System template DUMPSET modified!
System template BACKUPSET modified!
System template FLASHBACK modified!
System template DATAFILE modified!
System template CHANGETRACKING modified!
Directory +DATA2/mydir1 re-created!
Directory +DATA2/TEST re-created!
Directory +DATA2/mydir2 re-created!
Directory +DATA2/mydir1/ts2_dir re-created!
Directory +DATA2/mydir1/ts1_dir re-created!
Directory +DATA2/TEST/DATAFILE re-created!
ASMCMD>

Restore the tablespace ts1 datafiles from the RMAN backup:

RMAN> run {
2> allocate channel c1 type disk format '/backup/test/ts1_%s_%t' ;
3> restore tablespace ts1 ;
4> }

SQL> alter tablespace ts1 online;

alter tablespace ts1 online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '+DATA2/ts1.dbf'

SQL> recover tablespace ts1;
Media recovery complete.
SQL> alter tablespace ts1 online;
Tablespace altered.

SQL> connect scott/tiger
Connected.
SQL> select count(1) from test;

  COUNT(1)
----------
         7

cp - This command is going to make your life much easier when moving databases across servers. It allows you to copy files between an ASM disk group and the OS filesystem. In earlier releases you had to use either RMAN commands or set up FTP to move files.

10g Example: In 10gR2 this is how you had to set up FTP with Oracle XML DB:

- Connect to the Oracle instance as SYS and execute @ORACLE_HOME/rdbms/admin/catxdbdbca 7777 8080. This enables the FTP service on port 7777 and the HTTP service on port 8080.
- Use ftp to move files between ASM and the filesystem:

FTP> open <<hostname>> 7777
331 pass required for SYSTEM
Password:
230 SYSTEM logged in
ftp>

Relax! In 11g you can move files just by using the cp command.

11gR2 Example:

$ ls -l
-rw-r----- 1 oracle oinstall 212992 Nov 20 15:51 ts1_11_735580273
$ asmcmd
ASMCMD> cp /backup/test/ts1_11_735580273 +DATA/
copying /backup/test/ts1_11_735580273 -> +DATA/ts1_11_735580273
ASMCMD> cd +DATA
ASMCMD> ls -l
Type Redund Striped Time Sys Name
                         Y   ASM/
                         Y   TEST/
                         N   archlogs/
                         Y   test-mvip/
                         N   ts1_11_735580273 => +DATA/ASM/BACKUPSET/ts1_11_735580273.304.735585509
ASMCMD>
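The XML configuration files consumed by mkdg and chdg earlier do not have to be written by hand; they can be generated with any XML library. A minimal sketch using Python's standard xml.etree module — the disk group, power, and disk path mirror the adddsk.xml example above, and the helper name is illustrative, not part of any Oracle tooling:

```python
# Sketch: generate the chdg add-disk XML used earlier with the Python
# standard library instead of writing it by hand.
import xml.etree.ElementTree as ET

def build_chdg_xml(dg_name, power, disk_paths):
    # <chdg name="..." power="..."><add><dsk string="..."/>...</add></chdg>
    root = ET.Element("chdg", name=dg_name, power=str(power))
    add = ET.SubElement(root, "add")
    for path in disk_paths:
        ET.SubElement(add, "dsk", string=path)
    return ET.tostring(root, encoding="unicode")

doc = build_chdg_xml("data", 4, ["/dev/rdisk/disk61"])
print(doc)  # e.g. <chdg name="data" power="4"><add>...</add></chdg>
```

The string can then be saved as adddsk.xml, or passed inline since chdg also accepts the XML contents as a quoted argument.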

Duplicating a controlfile into ASM


Duplicating a controlfile into ASM when the original controlfile is stored on a file system

On the database instance:

1. Identify the location of the current controlfile:

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/oradata2/102b/oradata/P10R2/control01.ctl

2. Shutdown the database and start the instance:

SQL> shutdown normal
SQL> startup nomount

3. Use RMAN to duplicate the controlfile:

$ rman nocatalog
RMAN> connect target
RMAN> restore controlfile to '' from '';

RMAN> restore controlfile to '+DG1' from '/oradata2/102b/oradata/P10R2/control01.ctl';

Starting restore at 23-DEC-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: copied control file copy
Finished restore at 23-DEC-05

We are only specifying the name of the disk group, so Oracle will create an OMF (Oracle Managed File). Use ASMCMD or SQL*Plus to identify the name assigned to the controlfile.

4. On the ASM instance, identify the name of the controlfile. Using ASMCMD:

$ asmcmd
ASMCMD> cd
ASMCMD> find -t controlfile . *

Changing the current directory to the disk group where the controlfile was created will speed up the search. Output:

ASMCMD> find -t controlfile . *
+DG1/P10R2/CONTROLFILE/backup.308.577785757
ASMCMD>

Note the name assigned to the controlfile. Although the name starts with the word "backup", that does not indicate it is a backup of the file. It is just the name assigned to the identical copy of the current controlfile.

5. On the database side, modify the init.ora or spfile, adding the new path to the control_files parameter.

If using init.ora, just modify the control_files parameter and restart the database.

If using an spfile:
1) startup nomount the database instance
2) alter system set control_files='+DG1/P10R2/CONTROLFILE/backup.308.577785757','/oradata2/102b/oradata/P10R2/control01.ctl' scope=spfile;

For a RAC instance:
alter system set control_files='+DG1/P10R2/CONTROLFILE/backup.308.577785757','/oradata2/102b/oradata/P10R2/control01.ctl' scope=spfile sid='*';

3) shutdown immediate

4) start the instance.

Verify that the new controlfile has been recognized. If the new controlfile was not used, the complete procedure needs to be repeated.

Duplicating a controlfile into ASM using a specific name

It is also possible to duplicate the controlfile using a specific name for the new controlfile. In the following example, the controlfile is duplicated into a new disk group where controlfiles have not been created before.

On the ASM instance:

A. Create the directory to store the new controlfile:

SQL> alter diskgroup <<diskgroup>> add directory '+<<diskgroup>>/<<db_name>>/CONTROLFILE';

Note that ASM uses directories to store the files, and those are created automatically when using OMF files (just specifying the disk group name). Assuming that other OMF files were already created on the disk group, the first directory (DB_NAME) already exists, so it is only required to create the directory for the controlfile.

SQL> alter diskgroup DG1 add directory '+DG1/P10R2/CONTROLFILE';

ASMCMD can also be used:
ASMCMD> cd dg1
ASMCMD> mkdir controlfile

On the database instance:

B. Edit the init.ora or spfile and modify the control_files parameter:

control_files='+DG1/P10R2/CONTROLFILE/control02.ctl','/oradata2/102b/oradata/P10R2/control01.ctl'

C. Identify the location of the current controlfile:

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/oradata2/102b/oradata/P10R2/control01.ctl

D. Shutdown the database and start the instance:

SQL> shutdown normal
SQL> startup nomount

E. Use RMAN to duplicate the controlfile:

$ rman nocatalog
RMAN> connect target
RMAN> restore controlfile to '' from '';

RMAN> restore controlfile to '+DG1/P10R2/controlfile/control02.ctl' from '/oradata2/102b/oradata/P10R2/control01.ctl';

Starting restore at 23-DEC-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: copied control file copy
Finished restore at 23-DEC-05

F. Start the database:

SQL> alter database mount;
SQL> alter database open;

Now, when using ASMCMD to search for information about the controlfiles, the find -t controlfile command will return two records. That does not indicate that two controlfiles were created. The name specified is an alias name and is only an entry in the ASM metadata (V$ASM_ALIAS). Oracle creates the alias and the OMF entry when the user specifies the file name.

Duplicating a controlfile into ASM when the original controlfile is stored on ASM

If using an spfile to start the instance:

1. Modify the spfile, specifically the control_files parameter. In this example, a second controlfile is going to be created on the same disk group DATA1.

SQL> alter system set control_files='+DATA1/v102/controlfile/current.261.637923577','+DATA1' scope=spfile sid='*';

2. Start the instance in NOMOUNT mode.
3. From RMAN, duplicate the controlfile:

$ rman nocatalog
RMAN> connect target
RMAN> restore controlfile from '+DATA1/v102/controlfile/current.261.637923577';


The output of the execution looks like:

Starting restore at 08-NOV-07
allocated channel: ORA_DISK_1

channel ORA_DISK_1: sid=147 instance=V1021 devtype=DISK
channel ORA_DISK_1: copied control file copy
output filename=+DATA1/v102/controlfile/current.261.637923577
output filename=+DATA1/v102/controlfile/current.269.638120375
Finished restore at 08-NOV-07

Note that the command prints the name of the newly created file:
+DATA1/v102/controlfile/current.269.638120375

4. Mount and open the database:

RMAN> sql 'alter database mount';
RMAN> sql 'alter database open';

5. Validate that both controlfiles are present:

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+DATA1/v102/controlfile/current.261.637923577
+DATA1/v102/controlfile/current.269.638120375

6. Modify the control_files parameter with the complete path of the new file:

SQL> alter system set control_files='+DATA1/v102/controlfile/current.261.637923577','+DATA1/v102/controlfile/current.269.638120375' scope=spfile sid='*';


The next time the instance is restarted, it will pick up both files.

When using an init.ora file:

1) Edit init.ora and add a new disk group name, or the same disk group name, for mirroring the controlfiles. Example:

control_files=('+GROUP1','+GROUP2')

2) Start the instance in NOMOUNT mode.
3) Execute the restore command to duplicate the controlfile using the original location. Presuming your current controlfile DISK path is '+data/V10G/controlfile/Current.260.605208993', execute:

RMAN> restore controlfile from '+data/V10G/controlfile/Current.260.605208993';

Starting restore at 29-APR-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=317 devtype=DISK
channel ORA_DISK_1: copied controlfile copy
output filename=+GROUP2/v10g/controlfile/backup.268.7
output filename=+GROUP2/v10g/controlfile/backup.260.5
Finished restore at 29-APR-05
4) Mount and open the database:

RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN> alter database open;
database opened
RMAN> exit
5) Verify the new mirrored controlfiles via SQL*Plus:

SQL> show parameter control_files

NAME           TYPE    VALUE
-------------- ------- --------------------------------------
control_files  string  +GROUP2/v10g/controlfile/backup.268.7, +GROUP2/v10g/controlfile/backup.260.5
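In all three variants the procedure ends the same way: rewrite control_files with the complete paths that RMAN printed. A small Python sketch that assembles the ALTER SYSTEM statement from a list of paths (the helper is illustrative, not an Oracle API; the sid='*' clause matches the RAC form shown above):

```python
# Sketch: build the ALTER SYSTEM statement that rewrites control_files
# with the complete controlfile paths. Helper name and behavior are
# illustrative only; always verify the generated SQL before running it.
def control_files_statement(paths, rac=False):
    value = ",".join("'%s'" % p for p in paths)
    stmt = "alter system set control_files=%s scope=spfile" % value
    if rac:
        stmt += " sid='*'"  # apply to every RAC instance
    return stmt

print(control_files_statement(
    ["+DATA1/v102/controlfile/current.261.637923577",
     "+DATA1/v102/controlfile/current.269.638120375"],
    rac=True))
```

After running the generated statement, the instance still has to be bounced for the new value to take effect, exactly as in the walkthroughs above.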

ASM Filenames

1. Fully Qualified ASM Filename: +group/dbname/file_type/file_type_tag.file.incarnation

Example: +dgroup2/sample/controlfile/Current.256.541956473

2. Numeric ASM Filename: +group.file.incarnation Example: +dgroup2.257.541956473

3. Alias ASM Filename: +group/dir_1/.../dir_n/filename Example: +dgroup1/myfiles/control_file1, +dgroup2/mydir/second.dbf

4. Alias ASM Filename with Template: +group(template_name)/alias Example: +dgroup1(my_template)/config1

5. Incomplete ASM Filename:+group Example: +dgroup1

6. Incomplete ASM Filename with Template: +group(template_name) Example: +dgroup1(my_template)
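The six filename forms above can be told apart mechanically. A Python sketch classifying names with regular expressions — the patterns are rough approximations of the formats listed, not Oracle's own validation rules:

```python
# Sketch: classify ASM filename forms with regexes. Order matters: a fully
# qualified name would also match the generic alias pattern, so it is
# tried first. Patterns are illustrative approximations.
import re

PATTERNS = [
    ("fully qualified",          re.compile(r"^\+\w+/\w+/\w+/\w+\.\d+\.\d+$")),
    ("numeric",                  re.compile(r"^\+\w+\.\d+\.\d+$")),
    ("incomplete with template", re.compile(r"^\+\w+\(\w+\)$")),
    ("incomplete",               re.compile(r"^\+\w+$")),
    ("alias with template",      re.compile(r"^\+\w+\(\w+\)(/[\w.]+)+$")),
    ("alias",                    re.compile(r"^\+\w+(/[\w.]+)+$")),
]

def classify(name):
    for label, rx in PATTERNS:
        if rx.match(name):
            return label
    return "unknown"

print(classify("+dgroup2/sample/controlfile/Current.256.541956473"))  # fully qualified
print(classify("+dgroup2.257.541956473"))                             # numeric
print(classify("+dgroup1/myfiles/control_file1"))                     # alias
print(classify("+dgroup1(my_template)/config1"))                      # alias with template
print(classify("+dgroup1"))                                           # incomplete
```

The examples fed to classify() are the ones listed above, so the mapping can be checked against the six numbered forms directly.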

ASMCMD
ASMCMD is a command-line utility that you can use to easily view and manipulate files and directories within Automatic Storage Management (ASM) disk groups. It can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more.

To run ASMCMD in interactive mode:

1. At the operating system command prompt, enter: asmcmd
   An ASMCMD command prompt is displayed:
   ASMCMD>
2. Enter an ASMCMD command and press the Enter key. The command runs and displays its output, if any, and then ASMCMD prompts for the next command.

3. Continue entering ASMCMD commands. Enter the command exit to exit ASMCMD.

Commands:

cd : Changes the current directory to the specified directory.
du : Displays the total disk space occupied by ASM files in the specified ASM directory and all its subdirectories, recursively.
exit : Exits ASMCMD.
find : Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
help : Displays the syntax and description of ASMCMD commands.
ls : Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct : Lists information about current ASM clients.
lsdg : Lists all disk groups and their attributes.
mkalias : Creates an alias for a system-generated filename.
mkdir : Creates ASM directories.
pwd : Displays the path of the current ASM directory.
rm : Deletes the specified ASM files or directories.
rmalias : Deletes the specified alias, retaining the file that the alias points to.

asm_alias.sql
-- PURPOSE : Provide a summary report of all alias definitions contained within all ASM disk groups.

SET LINESIZE  145
SET PAGESIZE  9999
SET VERIFY    off

COLUMN disk_group_name    FORMAT a16  HEAD 'Disk Group Name'
COLUMN alias_name         FORMAT a30  HEAD 'Alias Name'
COLUMN file_number                    HEAD 'File|Number'
COLUMN file_incarnation               HEAD 'File|Incarnation'
COLUMN alias_index                    HEAD 'Alias|Index'
COLUMN alias_incarnation              HEAD 'Alias|Incarnation'
COLUMN parent_index                   HEAD 'Parent|Index'
COLUMN reference_index                HEAD 'Reference|Index'
COLUMN alias_directory    FORMAT a10  HEAD 'Alias|Directory?'
COLUMN system_created     FORMAT a8   HEAD 'System|Created?'

break on report on disk_group_name skip 1

SELECT
    g.name               disk_group_name
  , a.name               alias_name
  , a.file_number        file_number
  , a.file_incarnation   file_incarnation
  , a.alias_index        alias_index
  , a.alias_incarnation  alias_incarnation
  , a.parent_index       parent_index
  , a.reference_index    reference_index
  , a.alias_directory    alias_directory
  , a.system_created     system_created
FROM
    v$asm_alias a JOIN v$asm_diskgroup g USING (group_number)
ORDER BY
    g.name
  , a.file_number
/

asm_clients.sql
-- PURPOSE : Provide a summary report of all clients making use of this ASM instance.

SET LINESIZE  145
SET PAGESIZE  9999
SET VERIFY    off

COLUMN disk_group_name  FORMAT a15  HEAD 'Disk Group Name'
COLUMN instance_name    FORMAT a20  HEAD 'Instance Name'
COLUMN db_name          FORMAT a9   HEAD 'Database Name'
COLUMN status           FORMAT a12  HEAD 'Status'

break on report on disk_group_name skip 1

SELECT
    a.name           disk_group_name
  , c.instance_name  instance_name

  , c.db_name        db_name
  , c.status         status
FROM
    v$asm_diskgroup a JOIN v$asm_client c USING (group_number)
ORDER BY
    a.name
/

asm_disks_perf.sql
-- PURPOSE : Provide a summary report of all disks contained within all ASM disk groups along with their performance metrics.

SET LINESIZE  145
SET PAGESIZE  9999
SET VERIFY    off

COLUMN disk_group_name  FORMAT a15              HEAD 'Disk Group Name'
COLUMN disk_path        FORMAT a20              HEAD 'Disk Path'
COLUMN reads            FORMAT 999,999,999      HEAD 'Reads'
COLUMN writes           FORMAT 999,999,999      HEAD 'Writes'
COLUMN read_errs        FORMAT 999,999          HEAD 'Read|Errors'
COLUMN write_errs       FORMAT 999,999          HEAD 'Write|Errors'
COLUMN read_time        FORMAT 999,999,999      HEAD 'Read|Time'
COLUMN write_time       FORMAT 999,999,999      HEAD 'Write|Time'
COLUMN bytes_read       FORMAT 999,999,999,999  HEAD 'Bytes|Read'
COLUMN bytes_written    FORMAT 999,999,999,999  HEAD 'Bytes|Written'

break on report on disk_group_name skip 2
compute sum label ""              of reads writes read_errs write_errs read_time write_time bytes_read bytes_written on disk_group_name
compute sum label "Grand Total: " of reads writes read_errs write_errs read_time write_time bytes_read bytes_written on report

SELECT
    a.name           disk_group_name
  , b.path           disk_path
  , b.reads          reads
  , b.writes         writes
  , b.read_errs      read_errs
  , b.write_errs     write_errs
  , b.read_time      read_time
  , b.write_time     write_time
  , b.bytes_read     bytes_read
  , b.bytes_written  bytes_written
FROM
    v$asm_diskgroup a JOIN v$asm_disk b USING (group_number)
ORDER BY
    a.name
/

asm_diskgroups.sql
-- PURPOSE : Provide a summary report of all disk groups.

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN group_name           FORMAT a16         HEAD 'Disk Group|Name'
COLUMN sector_size          FORMAT 99,999      HEAD 'Sector|Size'
COLUMN block_size           FORMAT 99,999      HEAD 'Block|Size'
COLUMN allocation_unit_size FORMAT 999,999,999 HEAD 'Allocation|Unit Size'
COLUMN state                FORMAT a11         HEAD 'State'
COLUMN type                 FORMAT a6          HEAD 'Type'
COLUMN total_mb             FORMAT 999,999,999 HEAD 'Total Size (MB)'
COLUMN used_mb              FORMAT 999,999,999 HEAD 'Used Size (MB)'
COLUMN pct_used             FORMAT 999.99      HEAD 'Pct. Used'

break on report on disk_group_name skip 1
compute sum label "Grand Total: " of total_mb used_mb on report

SELECT
    name                                         group_name
  , sector_size                                  sector_size
  , block_size                                   block_size
  , allocation_unit_size                         allocation_unit_size
  , state                                        state
  , type                                         type
  , total_mb                                     total_mb
  , (total_mb - free_mb)                         used_mb
  , ROUND((1 - (free_mb / total_mb)) * 100, 2)   pct_used
FROM
    v$asm_diskgroup
ORDER BY
    name
/

asm_disks.sql
-- PURPOSE : Provide a summary report of all disks contained within all disk groups.
--           This script also queries all candidate disks - those that are not
--           assigned to any disk group.

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN disk_group_name      FORMAT a20         HEAD 'Disk Group Name'
COLUMN disk_file_path       FORMAT a17         HEAD 'Path'
COLUMN disk_file_name       FORMAT a20         HEAD 'File Name'
COLUMN disk_file_fail_group FORMAT a20         HEAD 'Fail Group'
COLUMN total_mb             FORMAT 999,999,999 HEAD 'File Size (MB)'
COLUMN used_mb              FORMAT 999,999,999 HEAD 'Used Size (MB)'
COLUMN pct_used             FORMAT 999.99      HEAD 'Pct. Used'

break on report on disk_group_name skip 1
compute sum label ""              of total_mb used_mb on disk_group_name
compute sum label "Grand Total: " of total_mb used_mb on report

SELECT
    NVL(a.name, '[CANDIDATE]')                       disk_group_name
  , b.path                                           disk_file_path
  , b.name                                           disk_file_name
  , b.failgroup                                      disk_file_fail_group
  , b.total_mb                                       total_mb
  , (b.total_mb - b.free_mb)                         used_mb
  , ROUND((1 - (b.free_mb / b.total_mb)) * 100, 2)   pct_used
FROM
    v$asm_diskgroup a
    RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY
    a.name
/

asm_files.sql
-- PURPOSE : Provide a summary report of all files (and file metadata) information for all ASM disk groups.

SET LINESIZE 150
SET PAGESIZE 9999
SET VERIFY off

COLUMN full_alias_path FORMAT a63                HEAD 'File Name'
COLUMN system_created  FORMAT a8                 HEAD 'System|Created?'
COLUMN bytes           FORMAT 9,999,999,999,999 HEAD 'Bytes'
COLUMN space           FORMAT 9,999,999,999,999 HEAD 'Space'
COLUMN type            FORMAT a18                HEAD 'File Type'
COLUMN redundancy      FORMAT a12                HEAD 'Redundancy'
COLUMN striped         FORMAT a8                 HEAD 'Striped'
COLUMN creation_date   FORMAT a20                HEAD 'Creation Date'
COLUMN disk_group_name noprint

BREAK ON report ON disk_group_name SKIP 1
compute sum label ""              of bytes space on disk_group_name
compute sum label "Grand Total: " of bytes space on report

SELECT
    CONCAT('+' || disk_group_name, SYS_CONNECT_BY_PATH(alias_name, '/')) full_alias_path
  , bytes
  , space
  , NVL(LPAD(type, 18), '<DIRECTORY>') type
  , creation_date
  , disk_group_name
  , LPAD(system_created, 4) system_created
FROM
    ( SELECT
          g.name            disk_group_name
        , a.parent_index    pindex
        , a.name            alias_name
        , a.reference_index rindex
        , a.system_created  system_created
        , f.bytes           bytes
        , f.space           space
        , f.type            type
        , TO_CHAR(f.creation_date, 'DD-MON-YYYY HH24:MI:SS') creation_date
      FROM
          v$asm_file f
          RIGHT OUTER JOIN v$asm_alias a USING (group_number, file_number)
          JOIN v$asm_diskgroup g USING (group_number)
    )
WHERE type IS NOT NULL
START WITH (MOD(pindex, POWER(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex
/

asm_files2.sql
-- PURPOSE : Provide a summary report of all files (and file metadata) information for all ASM disk groups.

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN disk_group_name FORMAT a16                HEAD 'Disk Group Name'
COLUMN file_name       FORMAT a30                HEAD 'File Name'
COLUMN bytes           FORMAT 9,999,999,999,999 HEAD 'Bytes'
COLUMN space           FORMAT 9,999,999,999,999 HEAD 'Space'
COLUMN type            FORMAT a18                HEAD 'File Type'
COLUMN redundancy      FORMAT a12                HEAD 'Redundancy'
COLUMN striped         FORMAT a8                 HEAD 'Striped'
COLUMN creation_date   FORMAT a20                HEAD 'Creation Date'

break on report on disk_group_name skip 1
compute sum label ""              of bytes space on disk_group_name
compute sum label "Grand Total: " of bytes space on report

SELECT
    g.name disk_group_name
  , a.name file_name
  , f.bytes bytes
  , f.space space
  , f.type  type
  , TO_CHAR(f.creation_date, 'DD-MON-YYYY HH24:MI:SS') creation_date
FROM
    v$asm_file f
    JOIN v$asm_alias a USING (group_number, file_number)
    JOIN v$asm_diskgroup g USING (group_number)
WHERE
    system_created = 'Y'
ORDER BY
    g.name
  , file_number
/

asm_templates.sql
-- PURPOSE : Provide a summary report of all template information for all ASM disk groups.

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN disk_group_name FORMAT a16 HEAD 'Disk Group Name'
COLUMN entry_number    FORMAT 999 HEAD 'Entry Number'
COLUMN redundancy      FORMAT a12 HEAD 'Redundancy'
COLUMN stripe          FORMAT a8  HEAD 'Stripe'
COLUMN system          FORMAT a6  HEAD 'System'
COLUMN template_name   FORMAT a30 HEAD 'Template Name'

break on report on disk_group_name skip 1

SELECT
    b.name         disk_group_name
  , a.entry_number entry_number
  , a.redundancy   redundancy
  , a.stripe       stripe
  , a.system       system
  , a.name         template_name
FROM
    v$asm_template a
    JOIN v$asm_diskgroup b USING (group_number)
ORDER BY
    b.name
  , a.entry_number
/

Data Guard Quick Reference

Managing a Physical Standby Database

1. Starting Up a Physical Standby Database


1.1 STARTUP: Starts the database, mounts it as a physical standby database, and opens it for read-only access.

1.2 STARTUP MOUNT: Starts and mounts the database as a physical standby database, but does not open it.

SQL> STARTUP MOUNT;

1.3 Start log apply services after mounting the database.

To start Redo Apply, issue the following statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

To start real-time apply, issue the following statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

2. Shutting Down a Physical Standby Database


2.1 Find out whether the standby database is performing Redo Apply or real-time apply. If the MRP0 or MRP process exists, the standby database is applying redo.

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

2.2 If log apply services are running, cancel them as shown in the following example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

2.3 Shut down the standby database.

SQL> SHUTDOWN;

3. Opening a Physical Standby Database for Read-Only Access


3.1 Open a standby database for read-only access when it is currently shut down:

SQL> STARTUP;

3.2 Open a standby database for read-only access when it is currently performing Redo Apply or real-time apply:

Cancel Redo Apply or real-time apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Open the database for read-only access:

SQL> ALTER DATABASE OPEN;

4. To change the standby database from being open for read-only access to performing Redo Apply:
4.1 Terminate all active user sessions on the standby database.

4.2 Restart Redo Apply or real-time apply.

To start Redo Apply, issue the following statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

To start real-time apply, issue the following statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
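A quick sanity check after restarting apply: if managed recovery is running, an MRP process appears in V$MANAGED_STANDBY (the thread and sequence values returned are, of course, specific to your environment). A minimal sketch:

```sql
-- Verify that managed recovery (MRP0/MRP) is running on the standby
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
```

No rows returned here means Redo Apply is not running and must be restarted as shown above.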

Data Pump



Introduction

Data Pump Components:

Direct Path API (DPAPI): Oracle Database 10g supports a direct path API interface that minimizes data conversion and parsing at both unload and load time.

External Table Services: Data Pump uses the new ORACLE_DATAPUMP access driver that gives external tables write and read access to files containing binary streams.

DBMS_METADATA: This package is used by worker processes for all metadata unloading and loading. Database object definitions are stored using XML rather than SQL.

DBMS_DATAPUMP: This package embodies the API for high-speed export and import utilities for bulk data and metadata movement.

SQL*Loader client: The SQL*Loader client has been integrated with external tables, thereby providing automatic migration of loader control files to external table access parameters.

expdp and impdp clients: These are thin layers that make calls to the DBMS_DATAPUMP package to initiate and monitor Data Pump operations.

Master Table

During the operation, a master table is maintained in the schema of the user who initiated the Data Pump export. The master table has the same name as the Data Pump job and maintains one row per object with status information. The master table is the heart of every Data Pump operation; it holds all the information about the job, and in the event of a failure Data Pump uses it to restart a failed or suspended job. The master table is dropped (by default) when the Data Pump job finishes successfully. On export, the master table is written to the dump file set as the last step of the operation and is then removed from the user's schema. On import, the master table is loaded from the dump file set into the user's schema as the first step and is used to sequence the objects being imported.
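Because the master table is an ordinary table in the invoking user's schema, it can be observed while a job runs. A hedged sketch (the job name SCOTT_EXPORT and owner SCOTT are illustrative, not from this document):

```sql
-- List running/stopped Data Pump jobs; the master table shares the job name
SELECT owner_name, job_name, state
FROM   dba_datapump_jobs;

-- Look at the master table itself (hypothetical job name SCOTT_EXPORT)
SELECT object_type, status
FROM   dba_objects
WHERE  owner = 'SCOTT'
AND    object_name = 'SCOTT_EXPORT';
```

A job that shows up in DBA_DATAPUMP_JOBS with state NOT RUNNING but still has its master table can usually be reattached to with `expdp ... ATTACH=<job_name>`.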

Data Pump Processes

All Data Pump work is done through jobs. Data Pump jobs, unlike DBMS jobs, are merely server processes that process the data on behalf of the main process. The main process, known as the master control process, coordinates this effort via Advanced Queuing; it does so through a special table created at runtime known as the master table.

Client process: This process is initiated by a client utility (expdp, impdp, or another client) to make calls to the Data Pump API. Since Data Pump is completely integrated into the database, once the Data Pump job is initiated this process is not necessary for the progress of the job.

Shadow process: When a client logs into the Oracle database, a foreground process is created (a standard feature of Oracle). This shadow process services the client's Data Pump API requests. It creates the master table and the Advanced Queuing (AQ) queues used for communication. Once the client process ends, the shadow process also goes away.

Master control process (MCP): The master control process controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the Data Pump job into various metadata and data load or unload tasks and hands them over to the worker processes. The MCP has a process name of the format ORACLE_SID_DMnn_PROCESS_ID. It maintains the job state, job description, restart, and file information in the master table.

Worker process: The MCP creates the worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP, mainly loading or unloading data and metadata. The worker processes have names of the format ORACLE_SID_DWnn_PROCESS_ID, and they maintain their current status in the master table, which can be used to restart a failed job.

Parallel Query (PQ) processes: The worker processes can initiate parallel query processes if external tables are used as the data access method for loading or unloading. These are standard parallel query slaves of the parallel execution architecture.

Data Pump Benefits

* Data access methods: direct path and external tables
* Detachment from and reattachment to long-running jobs
* Restarting of Data Pump jobs
* Fine-grained object and data selection
* Explicit database version specification
* Parallel execution
* Estimation of export job space consumption
* Network mode in a distributed environment: Data Pump operations can be performed from one database to another without writing to a dump file, using the network method
* Remapping capabilities during import
* Data sampling and metadata compression

Create Data Pump Directory

Since Data Pump is server based, directory objects must be created in the database where the Data Pump files will be stored. The database user executing Data Pump must have been granted permissions on the directory: READ permission is required to perform an import, and WRITE permission is required to perform an export and to create log files or SQL files. The operating system user who owns the software installation and database files must have READ and WRITE operating system privileges on the directory, BUT the database user does NOT need any operating system privileges on the directory for Data Pump to succeed.

SQL> SELECT * FROM dba_directories;

SQL> CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp';
-- Sets up the default directory; the name of the default directory must be DATA_PUMP_DIR

SQL> CREATE OR REPLACE DIRECTORY dpdata AS '';
SQL> GRANT READ, WRITE ON DIRECTORY dpdata TO scott;

Export Modes

1. Full

expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp%U.dmp FULL=y LOGFILE=exp.log JOB_NAME=expdp PARALLEL=30

2. User (Owner)

expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp SCHEMAS=scott LOGFILE=exp.log JOB_NAME=expdp

3. Table

expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp TABLES=scott.emp,blake.dept LOGFILE=exp.log JOB_NAME=expdp

4. Tablespace

expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp.dmp TABLESPACES=users,tools TRANSPORT_FULL_CHECK=y LOGFILE=exp.log JOB_NAME=expdp

5. Transportable Tablespace Export

ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE example READ ONLY;

expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp_tbs.dmp TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=exp_tbs.log

ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;

6. Export metadata

expdp \'/ as sysdba\' SCHEMAS=scott DIRECTORY=dpdata DUMPFILE=meta.dmp CONTENT=metadata_only

Import Modes

1. Full

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp.dmp FULL=y LOGFILE=imp.log JOB_NAME=impdp
impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp%U.dmp FULL=y LOGFILE=imp.log JOB_NAME=DMFV_impdp PARALLEL=30

2. User (Owner)

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp SCHEMAS=scott LOGFILE=imp.log JOB_NAME=impdp

3. Table

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp TABLES=scott.emp,blake.dept LOGFILE=imp.log JOB_NAME=impdp

4. Tablespace

5. Transportable Tablespace

Other Features

REMAP_DATAFILE
REMAP_SCHEMA
REMAP_TABLESPACE
TRANSFORM
NETWORK_LINK
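A hedged sketch of how the remapping parameters combine on import; the schema and tablespace names below (scott, blake, users, tools) are illustrative only:

```
impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp LOGFILE=imp.log \
    REMAP_SCHEMA=scott:blake \
    REMAP_TABLESPACE=users:tools
```

REMAP_SCHEMA re-owns the imported objects, while REMAP_TABLESPACE relocates their segments; the two are independent and can be used together or separately.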

Data Pump Monitoring

1. Script: @dpstatus.sql

2. DBA_DATAPUMP_JOBS: shows how many worker processes (column DEGREE) are working on the job.

3. DBA_DATAPUMP_SESSIONS: joined with the previous view and V$SESSION, gives the SID of the session of the main foreground process.

SQL> select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;

4. Alert log: when the job starts up, the MCP and the worker processes are shown in the alert log as follows:

kupprdp: master process DM00 started with pid=23, OS id=20530
         to execute SYS.KUPM$MCP.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW01 started with worker id=1, pid=24, OS id=20532
         to execute SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW03 started with worker id=2, pid=25, OS id=20534
         to execute SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');

5. V$SESSION_LONGOPS: predict the time it will take to complete the job.

select sid, serial#, sofar, totalwork from v$session_longops where sofar != totalwork;
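The V$SESSION_LONGOPS columns can be turned into a rough percent-done figure; a minimal sketch:

```sql
-- Rough progress estimate for in-flight long operations
SELECT sid, serial#, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 2) pct_done
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar != totalwork;
```

The TOTALWORK figure for a Data Pump job is itself an estimate, so treat the percentage as an indication, not a guarantee.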

Controlling Data Pump Jobs


1. Switching Between Logging and Interactive Client Mode:

expdp system/oracle parfile=c:\rmancmd\longexport.dpectl

1.1 Switch this job to interactive client mode by typing CTRL-C.

1.2 Switch from interactive client mode back to logging mode:

Export> CONTINUE_CLIENT

2. Viewing Job Status:

From interactive mode:

Export> status

3. Closing and Reattaching To A Job

3.1 Detach from a client session, but leave the running job executing:

Export> exit_client

3.2 Reattach to a running job after closing the client connection by reopening a session:

$ expdp system/oracle attach=longexport

4. Halt the execution of the job:

Export> stop_job

Restart the job:

Export> START_JOB

5. End the job:

Export> kill_job

Transportable Tablespaces

Limitations

* COMPATIBLE parameter set to 10.0.0 or higher on both source and target databases.
* The databases must use the same database character set and national character set; character set conversion is not possible with transportable tablespaces.
* A limitation exists on CLOB datatype columns created prior to Oracle 10g (applicable only if the database was upgraded from an earlier release): RMAN does not convert the CLOB data, so the application must take care of any conversion required.

Steps

1. Check Endian format

Determine if the platforms use the same Endian format by querying V$TRANSPORTABLE_PLATFORM in the source and target databases:

SQL> select a.platform_id, a.platform_name, endian_format
     from v$transportable_platform a, v$database b
     where a.platform_id = b.platform_id;

PLATFORM_ID PLATFORM_NAME              ENDIAN_FOR
----------- -------------------------- ----------
         10 Linux IA (32-bit)          Little
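To find the exact platform string the target expects (needed later for the RMAN CONVERT ... TO PLATFORM clause), you can list every supported platform and its endian format:

```sql
-- Full list of platforms RMAN can convert between
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_id;
```

The PLATFORM_NAME value must be quoted verbatim in the CONVERT command, including any bracketed suffix.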

2. Check self-contained tablespaces

Ensure the tablespaces to be transported are self-contained. Use the DBMS_TTS.TRANSPORT_SET_CHECK procedure to determine this. For example: SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK( 'SALES_DATA,SALES_INDEX',TRUE);
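Any objects that would break the set's self-containment are reported in the TRANSPORT_SET_VIOLATIONS view after running the check; an empty result means the set can be transported:

```sql
-- Populated by the preceding DBMS_TTS.TRANSPORT_SET_CHECK call
SQL> SELECT * FROM transport_set_violations;
```

Resolve every reported violation (for example, by adding the missing tablespace to the set or moving the offending object) before proceeding.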

3. Read-only tablespaces in source database

Make the tablespaces to be transported read-only in the source database. SQL> ALTER TABLESPACE SALES_DATA READ ONLY; SQL> ALTER TABLESPACE SALES_INDEX READ ONLY;

4. Export metadata

Use the expdp utility to unload the metadata information for the tablespaces to be transported.

$ expdp system DUMPFILE=sales_tts.dmp LOGFILE=sales_tts.log DIRECTORY=dumplocation TRANSPORT_FULL_CHECK=Y TRANSPORT_TABLESPACES=SALES_DATA,SALES_INDEX

5. RMAN conversion (optional)

In step 1, if we have determined that the Endian formats are same for the platforms, you can skip this step and proceed to step 6. If the Endian formats are different, the datafiles need to be converted using RMAN. To convert the datafiles from the little-Endian format (Linux) to the big-Endian format

(Sun Solaris), do the following:

$ rman target /
RMAN> CONVERT TABLESPACE 'sales_data, sales_index'
2> TO PLATFORM 'Solaris[tm] OE (64-bit)'
3> DB_FILE_NAME_CONVERT =
4> '/oradata/BT10GNF1/sales_data01.dbf',
5> '/tmp/sales_data01_sun.dbf',
6> '/oradata/BT10GNF1/sales_index01.dbf',
7> '/tmp/sales_index01_sun.dbf';

If you decide to convert the datafiles at the target platform instead, you can do so; just replace line 2 with FROM PLATFORM 'Linux IA (32-bit)'.
6. Import metadata

Use operating system utilities to copy the converted datafiles and the metadata dump file to the target server. Then use the impdp utility on the target to import the metadata and plug in the tablespaces. The target user must already exist in the target database; if not, you can make the objects owned by an existing user using the REMAP_SCHEMA parameter, as shown here:

$ impdp system DUMPFILE=sales_tts.dmp LOGFILE=sales_tts_imp.log DIRECTORY=data_dump_dir TRANSPORT_DATAFILES='/oradata/SL10H/sales_data01.dbf','/oradata/SL10H/sales_index01.dbf'
7. Read-write tablespace in target database

Make the new tablespaces read-write in the target database, like so:

SQL> ALTER TABLESPACE SALES_DATA READ WRITE;
SQL> ALTER TABLESPACE SALES_INDEX READ WRITE;
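As a final check, DBA_TABLESPACES flags transported tablespaces in its PLUGGED_IN column; a minimal sketch:

```sql
-- Confirm the transported tablespaces arrived and are ONLINE
SELECT tablespace_name, status, plugged_in
FROM   dba_tablespaces
WHERE  tablespace_name IN ('SALES_DATA', 'SALES_INDEX');
```

Both rows should show STATUS = ONLINE and PLUGGED_IN = YES once the READ WRITE step above has completed.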

Backup Sample Scripts and Examples

RMAN online full backup to disk

1. Backup as backupsets
connect target /
set echo on
run {
  allocate channel channel1 type disk;
  backup FILESPERSET 5 format '/backup/%d_t%t_s%s_FULL' (database) CURRENT CONTROLFILE SPFILE;
  BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/backup/control01.ctl';
  SQL "alter system archive log current";
  backup format 'ALO_%d_%s_%t' (archivelog all);
  release channel channel1;
  SQL "ALTER DATABASE BACKUP CONTROLFILE TO TRACE";
}

2. Backup as image copy


connect target /
set echo on
run {
  allocate channel channel1 type disk;
  backup as copy format '/backup/%U' (database) CURRENT CONTROLFILE SPFILE;
  BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/backup/control01.ctl';
  SQL "alter system archive log current";
  backup as copy format '/backup/ALO_%d_%s_%t' (archivelog all);
  release channel channel1;
  SQL "ALTER DATABASE BACKUP CONTROLFILE TO TRACE";
}
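After either variant, it is worth validating what RMAN actually recorded before relying on the backup; a minimal sketch using standard RMAN commands (output varies by site):

```
rman target /
RMAN> LIST BACKUP SUMMARY;
RMAN> CROSSCHECK BACKUP;
RMAN> REPORT NEED BACKUP;
```

CROSSCHECK marks any backup pieces that have gone missing from disk as EXPIRED, and REPORT NEED BACKUP highlights datafiles not protected under the configured retention policy.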

RMAN offline full backup to disk

1. Backup as backupsets
connect target /
run {
  shutdown immediate;
  startup force DBA;
  shutdown immediate;
  startup mount;
  allocate channel channel1 type DISK;
  backup FILESPERSET 8 format '/backup/%d_t%t_s%s_OFFLINE' (database) CURRENT CONTROLFILE SPFILE;
  BACKUP CURRENT CONTROLFILE FORMAT '/backup/cntrlfile.copy';
  BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/backup/control01.ctl';
  release channel channel1;
  shutdown immediate;
  startup;
  SQL "ALTER DATABASE BACKUP CONTROLFILE TO TRACE";
}

2. Backup as image copy


connect target /
run {
  shutdown immediate;
  startup force DBA;
  shutdown immediate;
  startup mount;
  allocate channel channel1 type DISK;
  backup as copy format '/backup/%U' (database) CURRENT CONTROLFILE SPFILE;
  BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/backup/control01.ctl';
  release channel channel1;
  shutdown immediate;
  startup;
  SQL "ALTER DATABASE BACKUP CONTROLFILE TO TRACE";
}

RMAN backup shell script for RAC (two-node example)


Adapted from Alejandro Vargas's blog

#!/usr/bin/ksh
# rman_backup_as_copy_to_FS

export v_inst1=racdbtst1
export v_inst2=racdbtst2

# Rman Backup Location variable
# -----------------------------
export v_rman_loc=/vmasmtest/BACKUP/rman_backups

# Step 1: Administrative tasks, crosscheck and delete obsolete
# ------------------------------------------------------------
export ORACLE_SID=$v_inst1
rman target / nocatalog <<EOF
crosscheck backupset;
crosscheck copy;
crosscheck archivelog all;
delete noprompt expired backup;
delete noprompt obsolete;
exit
EOF

# This script runs from the 1st node. We use an externally identified DBA user, ops$oracle,
# to execute the archive log current. From the same session we connect as ops$oracle into
# the 2nd instance. You need remote_os_authent=TRUE on both instances to connect remotely
# without a password.

# Step 2: Archive log current on 1st Instance
# Step 3: Archive log current on 2nd Instance
# -------------------------------------------
sqlplus -s /@$v_inst1 << EOF
select instance_name from v\$instance
/
alter system archive log current
/
connect /@$v_inst2;
select instance_name from v\$instance
/
alter system archive log current
/
exit
EOF

# On step 4 we use 4 channels. This needs to be customized according to the number of
# CPUs/IO channels available. Rman is invoked in nocatalog mode; we need to have
# ORACLE_HOME, ORACLE_SID and PATH configured in the environment, as in the previous steps.

# Step 4: Rman backup as copy to file system including controlfile and archivelogs
# --------------------------------------------------------------------------------
rman target / nocatalog <<EOF
run {
allocate channel backup_disk1 type disk format '$v_rman_loc/%U';
allocate channel backup_disk2 type disk format '$v_rman_loc/%U';
backup as COPY tag '%TAG' database include current controlfile;
release channel backup_disk1;
release channel backup_disk2;
}
exit
EOF

# Step 5 and 6: Archive log current on 1st and 2nd Instances
# ----------------------------------------------------------
sqlplus -s /@$v_inst1 << EOF
select instance_name from v\$instance
/
alter system archive log current
/
connect /@$v_inst2;
select instance_name from v\$instance
/
alter system archive log current
/
exit
EOF

# Step 7: Rman backup as copy archivelogs not backed up and print backupset list to log
rman target / nocatalog <<EOF
backup as copy archivelog all format '$v_rman_loc/%d_AL_%T_%u_s%s_p%p';
list backupset;
exit
EOF

# Redirecting rman output to log will suppress standard output; because of that,
# this runs separately.
rman target / nocatalog log=$v_rman_loc/backupset_info.log <<EOF
list backup summary;
list backupset;
list backup of controlfile;
exit
EOF

# eof rman_backup_as_copy_to_FS

Notes:

1. Add similar entries to tnsnames.ora if necessary:

upgrade1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = upgrade.world)
      (INSTANCE_NAME = upgrade1)
    )
  )

upgrade2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip02)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = upgrade.world)
      (INSTANCE_NAME = upgrade2)
    )
  )

2. Change the login as follows if necessary:

sqlplus -s system/<password>@$v_inst1 << EOF

Recovery Scripts and Examples

RMAN recover database


connect target /
RUN {
  shutdown immediate;
  startup mount;
  restore database;
  recover database;
  alter database open;
}

connect target /
RUN {
  shutdown immediate;
  startup nomount;
  set controlfile autobackup format for device type disk to '/db1/orabackup/%F';
  restore controlfile from autobackup;
  alter database mount;
  restore database;
  recover database;
  alter database open resetlogs;
}

RMAN recover tablespace

connect target /
RUN {
  sql "alter tablespace sysaux offline";
  RESTORE TABLESPACE sysaux;
  RECOVER TABLESPACE sysaux;
  SQL "alter tablespace sysaux online";
}

RMAN recover data file

connect target /
RUN {
  sql "alter tablespace sysaux offline";
  restore datafile 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\NICK\SYSAUX01.DBF';
  recover datafile 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\NICK\SYSAUX01.DBF';
  SQL "alter tablespace sysaux online";
}
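Datafiles can also be addressed by file number instead of full path, which avoids OS-specific path quoting; a hedged sketch, assuming file number 3 is the datafile in question (check V$DATAFILE to map numbers to paths first):

```
connect target /
RUN {
  sql "alter database datafile 3 offline";
  RESTORE DATAFILE 3;
  RECOVER DATAFILE 3;
  sql "alter database datafile 3 online";
}
```

Taking only the affected datafile offline (rather than the whole tablespace) keeps the rest of the database available during the restore; note that offlining individual datafiles requires the database to be in ARCHIVELOG mode.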

RMAN Point-In-Time-Recovery database (incomplete recovery)


Until SCN:
connect target /
RUN {
  RESTORE DATABASE;
  RECOVER DATABASE UNTIL SCN 1000;   # recovers through SCN 999
  ALTER DATABASE OPEN RESETLOGS;
}

Until Time:
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'

connect target /
RUN {
  set until time '28-JUL-2005 06:00:00';
  restore database;
  recover database;
  alter database open resetlogs;
}

Until Log Seq: This example assumes that log sequence 1234 was lost due to a disk failure and the database needs to be recovered by using available archived redo logs.
RUN {
  SET UNTIL SEQUENCE 1234 THREAD 1;
  RESTORE CONTROLFILE TO '?/oradata/cf.tmp';
  RESTORE CONTROLFILE FROM '?/oradata/cf.tmp';   # restores to all CONTROL_FILES locations
  ALTER DATABASE MOUNT;
  RESTORE DATABASE;
  RECOVER DATABASE;                              # recovers through log 1233
  ALTER DATABASE OPEN RESETLOGS;
  # you must add new tempfiles to locally-managed temporary tablespaces after restoring
  # a backup control file
  SQL "ALTER TABLESPACE temp ADD TEMPFILE ''?/oradata/trgt/temp01.dbf'' REUSE";
}

RMAN restore control file


1. Shut down the database and try to start it up. The instance will start and try to mount the database, but when it doesn't find the control files, the database fails to mount:
RMAN> SHUTDOWN IMMEDIATE;
database closed
database dismounted
Oracle instance shut down
RMAN> STARTUP
Oracle instance started
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of startup command at 07/11/2005 17:18:05
ORA-00205: error in identifying controlfile, check alert log for more info

You can avoid the preceding error messages by using the alternative command STARTUP NOMOUNT:
RMAN> SHUTDOWN IMMEDIATE; RMAN> STARTUP NOMOUNT;

2. Issue the RESTORE CONTROLFILE command so RMAN can copy the control file backups to their default locations specified in the init.ora file:
RMAN> RESTORE CONTROLFILE;

3. After the restore is over, mount the database:


RMAN> ALTER DATABASE MOUNT;

4. Recover the database:


RMAN> RECOVER DATABASE;

5. Because RMAN restores the control files from its backups, you have to open the database with the RESETLOGS option:
RMAN> ALTER DATABASE OPEN RESETLOGS;
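Without a recovery catalog, RMAN additionally needs the DBID set before it can locate the controlfile autobackup. The whole sequence consolidated as a sketch (the DBID shown is hypothetical; substitute your own, taken from a backup log or autobackup file name):

```
RMAN> STARTUP NOMOUNT;
RMAN> SET DBID 1234567890;   # hypothetical DBID; required when no catalog is used
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

This assumes controlfile autobackup was configured when the backups were taken; otherwise RESTORE CONTROLFILE needs an explicit backup piece name.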

RMAN tablespace Point-In-Time-Recovery (TSPITR)


NOTE: This procedure recovers the tablespace in an auxiliary instance first and transfers the data to the target in one RMAN step. Do NOT use this procedure if you do not want to overwrite the target tablespace data!

Recover the tablespaces from the database (the target database) by first performing the PITR in a temporary instance, called the auxiliary database, which is created solely to serve as the staging area for the recovery of the tablespaces. Here's how to use RMAN to perform a TSPITR:

1. Create the auxiliary database. Use a skeleton initialization parameter file for the auxiliary instance along the lines of the following:
db_name=help                  # this must match the target database name
db_file_name_convert=('/oraclehome/oradata/target/', '/tmp/')
                              # lets you convert the target database data files to a different name
log_file_name_convert=('/oraclehome/oradata/target/redo', '/tmp/redo')
                              # lets you convert the target database redo log files to a different name
instance_name=aux
control_files=/tmp/control1.ctl
compatible=10.0.2
db_block_size=8192

2. Start up the auxiliary database in the nomount mode:


$ sqlplus /nolog
SQL> CONNECT sys/oracle@aux AS sysdba
SQL> STARTUP NOMOUNT PFILE=/tmp/initaux.ora

3. Generate some archived redo logs and back up the target database. You can use the ALTER SYSTEM SWITCH LOGFILE command to produce the archived redo log files.

4. Connect to all three databases (the catalog, target, and auxiliary databases) as follows:
$ rman target sys/sys_passwd@targetdb catalog rman/rman@rmandb auxiliary system/oracle@aux

5. Perform a TSPITR. If you want to recover until a certain time, for example, you can use the following statement (assuming your NLS_DATE format uses the following format mask: Mon DD YYYY HH24:MI:SS):
RMAN> RECOVER TABLESPACE users UNTIL TIME ('JUN 30 2005 12:00:00');

This is a deceptively simple step, but RMAN performs a number of tasks in this step. It restores the data files in the users tablespace to the auxiliary database and recovers them to the time you specified. It then exports the metadata about the objects in the tablespaces from the auxiliary to the target database. RMAN also uses the SWITCH command to point the control file to the newly recovered data files.

6. Once the recovery is complete, bring the user tablespace online:


$ rman target sys/sys_passwd@targetdb
RMAN> SQL "alter tablespace users online";
RMAN> EXIT;

7. Shut down the auxiliary instance and remove all the control files, redo log files, and data files pertaining to the auxiliary database.

RMAN restore and recover a non-archivelog database from a full (cold) backup
Source: http://www.idevelopment.info/

In this case study the database runs in NOARCHIVELOG mode, so any user error or media failure requires a complete database recovery. You can, however, use the SET UNTIL command to recover to different points in time when incrementals are taken. (Keep in mind that in our example we did not make use of incremental backups!)

NOTE: Because redo logs are not archived, only full and incremental backups (if you were taking incremental backups) are available for restore and recovery.

It is assumed that all the configuration files, such as:

* Server parameter file (spfile, which replaces init.ora as of 9i)
* tnsnames.ora
* listener.ora
* sqlnet.ora (optional)

are in their appropriate places. It is also assumed that you can start the Oracle instance in NOMOUNT mode and connect from RMAN to the target instance.
The steps are:

1. If not using a recovery catalog, or if the database name is ambiguous in the catalog, start RMAN and set the DBID before restoring the controlfile from autobackup.
2. Start the database in NOMOUNT mode. (You should have restored the initialization file for the database, and the listener files if connecting over SQL*Net.)
3. Restore the controlfile.
4. Mount the database.
5. Restore all database files.
6. Apply all incrementals. (In this example, we are not taking incremental backups, so this step is not required.)
7. Open the database with RESETLOGS to re-create the online log files.
8. Manually add any tempfiles back to the database after recovering it.

set dbid 2528050866;
connect target /;
startup nomount;
run {
  # -----------------------------------------------------------
  # Uncomment the SET UNTIL command to restore the database to the
  # incremental backup taken two days ago.
  # SET UNTIL TIME 'SYSDATE-2';
  # -----------------------------------------------------------
  set controlfile autobackup format for device type disk to '/orabackup1/rman/TARGDB/%F';
  restore controlfile from autobackup;
  alter database mount;
  restore database;
  recover database noredo;
  alter database open resetlogs;
  sql "alter tablespace temp add tempfile ''/u06/app/oradata/TARGDB/temp01.dbf'' size 500m autoextend on next 500m maxsize 1500m";
}
exit

NOTE: Tempfiles are automatically excluded from RMAN backups. This requires them to be re-added at recovery time.

RAC Recovery Scripts and Examples

Restore and recover the database - complete recovery

1. Take the database out of cluster mode


SQL> shutdown abort
SQL> startup nomount
SQL> alter system set cluster_database=false scope=spfile sid='*';
SQL> shutdown abort/immediate

2. Restore the database via RMAN:


rman target /
RMAN> startup mount;
RMAN> restore database;

3. Recover the Database


RMAN> recover database;
RMAN> alter database open;

4. Place the database back into cluster mode and startup both instances:
# sqlplus "/ as sysdba"
SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shutdown immediate;
# srvctl start database -d em

[oracle@rac1 bdump]$ srvctl status database -d em
Instance em1 is running on node rac1
Instance em2 is running on node rac2

Restore and recover the Database using a restored control file - incomplete/point-in-time recovery

1. Take the database out of cluster mode


SQL> shutdown abort
SQL> startup nomount
SQL> alter system set cluster_database=false scope=spfile sid='*';
SQL> shutdown abort/immediate

2. Check the backup files and take note of the Database ID


rman target / nocatalog
RMAN> set dbid=519338572    -- find the DBID by checking the backup log or backup file name

RMAN> startup nomount;

3. Restore the controlfile from a time previous to the crash:


RMAN> list backup of controlfile;
RMAN> restore controlfile from '/vmasmtest/BACKUP/rman_backups/cf_DRACDBTST_id-519338572_6ei8vq5p';
RMAN> alter database mount;

4. Set the point in time to recover to with the SET UNTIL TIME clause, then restore and recover. In this example the three commands are passed to RMAN in a single run block:
RMAN> run {
  set until time="to_date('01-FEB-07 16:14:28','DD-MON-YY HH24:MI:SS')";
  restore database;
  recover database;
}
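The UNTIL TIME string has to match the format mask passed to to_date. A small sketch that builds such a string on the shell (GNU date is assumed for the -d option; the 30-minute offset is only an illustration):

```shell
#!/bin/sh
# Build an UNTIL TIME string in the DD-MON-YY HH24:MI:SS format used above.
# Assumes GNU date (-d relative dates); the offset is illustrative.
until_time=$(date -d '30 minutes ago' '+%d-%b-%y %H:%M:%S' | tr '[:lower:]' '[:upper:]')
echo "set until time=\"to_date('$until_time','DD-MON-YY HH24:MI:SS')\";"
```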

5. Once recovery finishes, open the database with the RESETLOGS option:

RMAN> alter database open resetlogs;
RMAN> exit

Recovery Manager complete.

6. Finally, re-enable cluster mode and open both instances.


1) Mount instance 1 and set cluster_database=true:

SQL> show parameters cluster_database
SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shutdown immediate

2) Restart the database in cluster mode:

srvctl start database -d racdbtst
srvctl start service -d racdbtst
crs_stat -t

3) Check restore point on test table:


SQL> select * from restable1;

Start Grid Control


Run start_grid.sh. Content of start_grid.sh:

#!/bin/ksh
export ORACLE_SID=GRID
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/oms10g
export PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/opmn/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
/u01/app/oracle/product/10.2.0/oms10g/bin/emctl start oms
/u01/app/oracle/product/10.2.0/oms10g/opmn/bin/opmnctl startall
/u01/app/oracle/product/10.2.0/oms10g/bin/emctl start iasconsole
export ORACLE_SID=AGENT
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/agent10g
export TNS_ADMIN=/u01/app/oracle/product/10.2.0/agent10g/network/admin
export PATH=$ORACLE_HOME/bin:$PATH
/u01/app/oracle/product/10.2.0/agent10g/bin/emctl start agent

Stop Grid Control


Run stop_grid.sh. Content of stop_grid.sh:

#!/bin/ksh
export ORACLE_SID=GRID
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/oms10g
export PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/opmn/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
/u01/app/oracle/product/10.2.0/oms10g/bin/emctl stop oms
/u01/app/oracle/product/10.2.0/oms10g/bin/emctl stop iasconsole
/u01/app/oracle/product/10.2.0/oms10g/opmn/bin/opmnctl stopall
export ORACLE_SID=AGENT
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/agent10g
export PATH=$ORACLE_HOME/bin:$PATH
/u01/app/oracle/product/10.2.0/agent10g/bin/emctl stop agent
exit

OMS Commands
emctl start oms
emctl stop oms
emctl status oms

EM Console Commands for Application Server


emctl start em
emctl stop em
emctl status em

OEM Agent Commands


emctl start agent
emctl stop agent
emctl status agent

Grid Control and Agent logs


ORACLE_HOME/sysman/log/
AGENT_HOME/sysman/log/

DB Control Repository

Create dbconsole repository


/u01/app/oracle/product/10.2.0/db_1/bin/emca -config dbcontrol db -repos create

Drop dbconsole repository


/u01/app/oracle/product/10.2.0/db_1/bin/emca -config dbcontrol db -repos drop

Recreate dbconsole repository


/u01/app/oracle/product/10.2.0/db_1/bin/emca -config dbcontrol db -repos recreate

Reconfigure dbconsole repository

Problem: After cloning and renaming the database to another host, dbconsole won't start:

$ emctl start dbconsole
OC4J Configuration issue. /u01/app/oracle/product/10.2.0/db_1/oc4j/j2ee/OC4J_DBConsole_<hostname>_<dbname> not found.
Option 1

Try <ORACLE_HOME>/bin/emca -deconfig dbcontrol db -repos recreate. If that fails, go to Option 2.


Option 2

Step1. Log in to SQL*Plus as user SYS or SYSTEM, and drop the SYSMAN account and management objects:

* Clean up the DB Console repository:
SQL> drop user sysman cascade;
SQL> drop role MGMT_USER;
SQL> drop user MGMT_VIEW cascade;
SQL> drop public synonym MGMT_TARGET_BLACKOUTS;
SQL> drop public synonym SETEMVIEWUSERCONTEXT;
* Issue a "commit;"

Step2. Export the correct values for the ORACLE_HOME and ORACLE_SID environment variables.
Step3. Change directories to the $ORACLE_HOME/bin directory.
Step4. Clean up the external DB Console configuration files by issuing:

emca -deconfig dbcontrol db -repos drop

OR, to delete the configuration files, remove the following directories from your filesystem:
<ORACLE_HOME>/<hostname_sid>
<ORACLE_HOME>/oc4j/j2ee/OC4J_DBConsole_<hostname>_<sid>

Step5. Recreate the DB Console repository and external configuration files by issuing:

emca -config dbcontrol db -repos create

Partition Methods
o Range Partitioning
o List Partitioning
o Hash Partitioning
o Composite Partitioning
o Interval Partitioning
o Reference Partitioning
o Virtual column-based Partitioning (11g)

Partitioned Indexes

o Local Partitioned Indexes
o Global Partitioned Indexes
  - Global Range Partitioned Indexes
  - Global Hash Partitioned Indexes
o Global Nonpartitioned Indexes

Global Indexes vs Local Indexes

Just like partitioned tables, partitioned indexes improve manageability, availability, performance, and scalability. They can either be partitioned independently (global indexes) or automatically linked to a table's partitioning method (local indexes). In general, you should use global indexes for OLTP applications and local indexes for data warehousing or DSS applications. Also, whenever possible, you should try to use local indexes because they are easier to manage. When deciding what kind of partitioned index to use, you should consider the following guidelines in order:
1. If the table partitioning column is a subset of the index keys, use a local index. If this is the case, you are finished. If not, continue to guideline 2.
2. If the index is unique, use a global index. If this is the case, you are finished. If not, continue to guideline 3.
3. If your priority is manageability, use a local index. If this is the case, you are finished. If not, continue to guideline 4.
4. If the application is an OLTP one and users need quick response times, use a global index. If the application is a DSS one and users are more interested in throughput, use a local index.
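The four guidelines read as a decision ladder, which can be sketched as a small shell function. The function name and the yes/no flags below are illustrative, not an Oracle utility:

```shell
#!/bin/sh
# Encode the four partitioned-index guidelines above as a decision helper.
# Each argument answers one guideline with "yes"/"no" (or "oltp"/"dss").
choose_partitioned_index() {
  key_contains_part_col=$1   # guideline 1: partitioning column in index keys?
  is_unique=$2               # guideline 2: is the index unique?
  manageability_first=$3     # guideline 3: is manageability the priority?
  workload=$4                # guideline 4: "oltp" or "dss"

  if [ "$key_contains_part_col" = "yes" ]; then echo local; return; fi
  if [ "$is_unique" = "yes" ]; then echo global; return; fi
  if [ "$manageability_first" = "yes" ]; then echo local; return; fi
  if [ "$workload" = "oltp" ]; then echo global; else echo local; fi
}

choose_partitioned_index no yes no oltp   # guideline 2 fires: global
choose_partitioned_index no no yes dss    # guideline 3 fires: local
```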

RAC Commands Cheatsheet

Source: http://www.dbaexpert.com/blog/2007/07/rac-cheatsheet/

cluvfy

myrac1> cluvfy -h
USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]

oifcfg

myrac1> oifcfg -help

Name:
oifcfg - Oracle Interface Configuration Tool.

Usage: oifcfg iflist [-p [-n]]
       oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}
       oifcfg getif [-node <nodename> | -global] [-if <if_name>[/<subnet>] [-type <if_type>]]
       oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
       oifcfg [-help]

<nodename> - name of the host, as known to a communications network
<if_name>  - name by which the interface is configured in the system
<subnet>   - subnet address of the interface
<if_type>  - type of the interface { cluster_interconnect | public | storage }
ocrconfig

myrac1> ocrconfig -help
Name:
ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
ocrconfig [option]
option:
-export <filename> [-s online]         - Export cluster registry contents to a file
-import <filename>                     - Import cluster registry contents from a file
-upgrade [<user> [<group>]]            - Upgrade cluster registry from previous version
-downgrade [-version <version string>] - Downgrade cluster registry to the specified version
-backuploc <dirname>                   - Configure periodic backup location
-showbackup                            - Show backup information
-restore <filename>                    - Restore from physical backup
-replace ocr|ocrmirror [<filename>]    - Add/replace/remove an OCR device/file
-overwrite                             - Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename>       - Repair local OCR configuration
-help                                  - Print out this help information
crs_stat

myrac1> crs_stat -h
Usage: crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]
       crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]
       crs_stat -p [resource_name [...]] [-q]
       crs_stat [-a] application -g
       crs_stat [-a] application -r [-c cluster_member]
       crs_stat -f [resource_name [...]] [-q] [-c cluster_member]
       crs_stat -ls [resource_name [...]] [-q]
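A common use of `crs_stat -t` is spotting resources whose state is OFFLINE. A sketch with awk over sample output (the resource names and column layout below are illustrative; live output may also contain a separator line, which the state filter skips anyway):

```shell
#!/bin/sh
# Filter OFFLINE resources from crs_stat -t style output.
# Sample output is embedded so the snippet runs without a cluster.
sample='Name            Type          Target    State     Host
ora.em.db       application   ONLINE    ONLINE    rac1
ora.em.em1.inst application   ONLINE    OFFLINE   rac1
ora.em.em2.inst application   ONLINE    ONLINE    rac2'

# Skip the header row, print the name of any resource whose State is OFFLINE.
printf '%s\n' "$sample" | awk 'NR > 1 && $4 == "OFFLINE" { print $1 }'
```

On a live node you would pipe the real command instead: `crs_stat -t | awk 'NR > 1 && $4 == "OFFLINE" { print $1 }'`.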

crs_register -u resname
crs_profile
crs_relocate
crs_start
crs_stop
crs_unregister

clscfg

myrac1> clscfg -h
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
clscfg - Oracle cluster configuration tool

This tool is typically invoked as part of the Oracle Cluster Ready
Services install process. It configures cluster topology and other
settings. Use -help for information on any of these modes.
Use one of the following modes of operation.
-install   - creates a new configuration
-upgrade   - upgrades an existing configuration
-downgrade - downgrades an existing configuration
-add       - adds a node to the configuration
-delete    - deletes a node from the configuration
-local     - creates a special single-node configuration for ASM
-concepts  - brief listing of terminology used in the other modes
-trace     - may be used in conjunction with any mode above for tracing

WARNING: Using this tool may corrupt your cluster configuration. Do not
use unless you positively know what you are doing.
crsctl

myrac1> crsctl
Usage: crsctl check crs - checks the viability of the CRS stack
crsctl check cssd - checks the viability of CSS
crsctl check crsd - checks the viability of CRS
crsctl check evmd - checks the viability of EVM
crsctl set css <parameter> <value> - sets a parameter override
crsctl get css <parameter> - gets the value of a CSS parameter
crsctl unset css <parameter> - sets CSS parameter to its default
crsctl query css votedisk - lists the voting disks used by CSS
crsctl add css votedisk <path> - adds a new voting disk
crsctl delete css votedisk <path> - removes a voting disk
crsctl enable crs - enables startup for all CRS daemons
crsctl disable crs - disables startup for all CRS daemons
crsctl start resources - starts CRS resources (start this first in 10gR2)
crsctl stop resources - stops CRS resources
crsctl start crs - starts all CRS daemons
crsctl stop crs - stops all CRS daemons; stops CRS resources in case of cluster
crsctl debug statedump evm - dumps state info for evm objects
crsctl debug statedump crs - dumps state info for crs objects
crsctl debug statedump css - dumps state info for css objects
crsctl debug log css [module:level]{,module:level} - turns on debugging for CSS
crsctl debug trace css - dumps CSS in-memory tracing cache
crsctl debug log crs [module:level]{,module:level} - turns on debugging for CRS
crsctl debug trace crs - dumps CRS in-memory tracing cache
crsctl debug log evm [module:level]{,module:level} - turns on debugging for EVM
crsctl debug trace evm - dumps EVM in-memory tracing cache
crsctl debug log res <resname:level> - turns on debugging for resources
crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
crsctl query crs activeversion - lists the CRS software operating version
crsctl lsmodules css - lists the CSS modules that can be used for debugging
crsctl lsmodules crs - lists the CRS modules that can be used for debugging
crsctl lsmodules evm - lists the EVM modules that can be used for debugging

If necessary, any of these commands can be run with additional tracing by
adding a trace argument at the very front.
Example: crsctl trace check css
Clusterware check

Log files are in $ORA_CRS_HOME/nodename.
crsctl check crs
crsctl check cssd
crsctl check crsd
crsctl check evmd
crsctl query crs softwareversion
crsctl query crs softwareversion node2
crsctl start crs
crsctl stop crs
crsctl debug log res resname:level

Interconnect check:

oifcfg getif
olsnodes


Disable auto reboot of AIX nodes (/etc/init*):

myrac1> ls -lrt /etc/init*
lrwxrwxrwx 1 root system    14 Feb 20 11:18 /etc/init -> /usr/sbin/init
-rw-r--r-- 1 root system  2914 Feb 22 16:19 /etc/inittab.orig
-r-xr-xr-x 1 root system  3194 Feb 22 16:19 /etc/init.evmd
-r-xr-xr-x 1 root system 36807 Feb 22 16:19 /etc/init.cssd
-r-xr-xr-x 1 root system  4854 Feb 22 16:19 /etc/init.crsd
-r-xr-xr-x 1 root system  2226 Feb 22 16:19 /etc/init.crs
-rw-r--r-- 1 root system  3093 Feb 22 21:58 /etc/inittab
OCR check:

ocrcheck
ocrconfig -export /tmp/dba/exp_ocr.dmp -s online
ocrconfig -showbackup
ocrdump


VIP:

ifconfig -a
ifconfig en8 delete host1-vip
ifconfig en8 delete host2-vip
/apps/oracle/product/10.2.0/CRS/bin/racgons add_config ictcdb621:6200 ictcdb622:6200
/apps/oracle/product/10.2.0/CRS/bin/oifcfg setif -global en8/10.249.199.0:public en9/172.16.32.0:cluster_interconnect

srvctl

Set the SRVM_TRACE environment variable to debug srvctl:
$ export SRVM_TRACE=true

myrac1> srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-A <name|ip>/netmask] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}] [-s <start_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl add instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl add service -d <name> -s <service_name> -r <preferred_list> [-a "<available_list>"] [-P <TAF_policy>]
Usage: srvctl add service -d <name> -s <service_name> -u {-r <new_pref_inst> | -a <new_avail_inst>}
Usage: srvctl add nodeapps -n <node_name> -o <oracle_home> -A <name|ip>/netmask[/if1[|if2|...]]
Usage: srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home> [-p <spfile>]
Usage: srvctl config database
Usage: srvctl config database -d <name> [-a] [-t]
Usage: srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]
Usage: srvctl config nodeapps -n <node_name> [-a] [-g] [-o] [-s] [-l]
Usage: srvctl config asm -n <node_name>
Usage: srvctl config listener -n <node_name>
Usage: srvctl disable database -d <name>
Usage: srvctl disable instance -d <name> -i <inst_name_list>
Usage: srvctl disable service -d <name> -s <service_name_list> [-i <inst_name>]
Usage: srvctl disable asm -n <node_name> [-i <inst_name>]
Usage: srvctl enable database -d <name>
Usage: srvctl enable instance -d <name> -i <inst_name_list>
Usage: srvctl enable service -d <name> -s <service_name_list> [-i <inst_name>]
Usage: srvctl enable asm -n <node_name> [-i <inst_name>]
Usage: srvctl getenv database -d <name> [-t "<name_list>"]
Usage: srvctl getenv instance -d <name> -i <inst_name> [-t "<name_list>"]
Usage: srvctl getenv service -d <name> -s <service_name> [-t "<name_list>"]
Usage: srvctl getenv nodeapps -n <node_name> [-t "<name_list>"]
Usage: srvctl modify database -d <name> [-n <db_name>] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}] [-s <start_options>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl modify instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
Usage: srvctl modify service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <name> -s <service_name> -n -i <preferred_inst> [-a <available_list>] [-f]
Usage: srvctl modify asm -n <node_name> -i <asm_inst_name> -p <spfile>
Usage: srvctl relocate service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl remove database -d <name> [-f]
Usage: srvctl remove instance -d <name> -i <inst_name> [-f]
Usage: srvctl remove service -d <name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl remove nodeapps -n <node_name_list> [-f]
Usage: srvctl remove asm -n <node_name> [-i <asm_inst_name>] [-f]
Usage: srvctl setenv database -d <name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}

Usage: srvctl setenv instance -d <name> [-i <inst_name>] {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl setenv service -d <name> [-s <service_name>] {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl setenv nodeapps -n <node_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl start database -d <name> [-o <start_options>] [-c <connect_str> | -q]
Usage: srvctl start instance -d <name> -i <inst_name_list> [-o <start_options>] [-c <connect_str> | -q]
Usage: srvctl start service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-o <start_options>] [-c <connect_str> | -q]
Usage: srvctl start nodeapps -n <node_name>
Usage: srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>] [-c <connect_str> | -q]
Usage: srvctl start listener -n <node_name> [-l <lsnr_name_list>]
Usage: srvctl status database -d <name> [-f] [-v] [-S <level>]
Usage: srvctl status instance -d <name> -i <inst_name_list> [-f] [-v] [-S <level>]
Usage: srvctl status service -d <name> [-s "<service_name_list>"] [-f] [-v] [-S <level>]
Usage: srvctl status nodeapps -n <node_name>
Usage: srvctl status asm -n <node_name>
Usage: srvctl stop database -d <name> [-o <stop_options>] [-c <connect_str> | -q]
Usage: srvctl stop instance -d <name> -i <inst_name_list> [-o <stop_options>] [-c <connect_str> | -q]
Usage: srvctl stop service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-c <connect_str> | -q] [-f]
Usage: srvctl stop nodeapps -n <node_name>
Usage: srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>] [-c <connect_str> | -q]
Usage: srvctl stop listener -n <node_name> [-l <lsnr_name_list>]
Usage: srvctl unsetenv database -d <name> -t <name_list>
Usage: srvctl unsetenv instance -d <name> [-i <inst_name>] -t <name_list>
Usage: srvctl unsetenv service -d <name> [-s <service_name>] -t <name_list>
Usage: srvctl unsetenv nodeapps -n <node_name> -t <name_list>

Check enable/disable startup of the Oracle Clusterware daemons

Oracle has a scls_scr directory under /etc/oracle. You can check the enable/disable startup status of the Oracle Clusterware daemons in the crsstart file at /etc/oracle/scls_scr/<hostname>/root/.

root@rac1# cat /etc/oracle/scls_scr/rac1/root/crsstart
enable

root@rac1# cd $CRS_HOME/bin
root@rac1# ./crsctl disable crs
root@rac1# cat /etc/oracle/scls_scr/rac1/root/crsstart
disable

After "crsctl disable crs", the crsstart file was changed to "disable".

root@rac1# ./crsctl enable crs
root@rac1# cat /etc/oracle/scls_scr/rac1/root/crsstart
enable

After "crsctl enable crs", the crsstart file was changed to "enable".
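The check above can be scripted. A sketch that reads a crsstart-style file and reports the autostart status (a temporary file stands in for /etc/oracle/scls_scr/<hostname>/root/crsstart so the example runs without a cluster):

```shell
#!/bin/sh
# Report Clusterware autostart status from a crsstart-style file.
# A temp file stands in for /etc/oracle/scls_scr/<hostname>/root/crsstart.
crsstart_file=$(mktemp)
echo enable > "$crsstart_file"

status=$(cat "$crsstart_file")
if [ "$status" = "enable" ]; then
  echo "CRS daemons will start at boot"
else
  echo "CRS daemon autostart is disabled"
fi
rm -f "$crsstart_file"
```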

CRS RESOURCE STATUS


srvctl status database -d <database-name> [-f] [-v] [-S <level>]
srvctl status instance -d <database-name> -i <instance-name>[,<instance-name-list>] [-f] [-v] [-S <level>]
srvctl status service -d <database-name> -s <service-name>[,<service-name-list>] [-f] [-v] [-S <level>]
srvctl status nodeapps [-n <node-name>]
srvctl status asm -n <node_name>

EXAMPLES:
Status of the database, all instances and all services:
srvctl status database -d ORACLE -v
Status of named instances with their current services:
srvctl status instance -d ORACLE -i RAC01,RAC02 -v
Status of a named service:
srvctl status service -d ORACLE -s ERP -v
Status of all nodes supporting database applications:
srvctl status node

START CRS RESOURCES


srvctl start database -d <database-name> [-o <start-options>] [-c <connect-string> | -q]
srvctl start instance -d <database-name> -i <instance-name>[,<instance-name-list>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start service -d <database-name> [-s <service-name>[,<service-name-list>]] [-i <instance-name>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start nodeapps -n <node-name>
srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]

EXAMPLES:
Start the database with all enabled instances:
srvctl start database -d ORACLE
Start named instances:
srvctl start instance -d ORACLE -i RAC03,RAC04
Start named services (dependent instances are started as needed):
srvctl start service -d ORACLE -s CRM
Start a service at the named instance:
srvctl start service -d ORACLE -s CRM -i RAC04
Start node applications:
srvctl start nodeapps -n myclust-4

STOP CRS RESOURCES


srvctl stop database -d <database-name> [-o <stop-options>] [-c <connect-string> | -q]
srvctl stop instance -d <database-name> -i <instance-name>[,<instance-name-list>] [-o <stop-options>] [-c <connect-string> | -q]
srvctl stop service -d <database-name> [-s <service-name>[,<service-name-list>]] [-i <instance-name>] [-c <connect-string> | -q] [-f]
srvctl stop nodeapps -n <node-name>
srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>]

EXAMPLES:
Stop the database, all instances and all services:
srvctl stop database -d ORACLE
Stop named instances, first relocating all existing services:
srvctl stop instance -d ORACLE -i RAC03,RAC04
Stop the service:
srvctl stop service -d ORACLE -s CRM
Stop the service at the named instances:
srvctl stop service -d ORACLE -s CRM -i RAC04
Stop node applications (note that instances and services also stop):
srvctl stop nodeapps -n myclust-4

ADD CRS RESOURCES


srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-A <name|ip>/netmask] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}] [-s <start_options>] [-n <db_name>]
srvctl add instance -d <name> -i <inst_name> -n <node_name>
srvctl add service -d <name> -s <service_name> -r <preferred_list> [-a <available_list>] [-P <TAF_policy>] [-u]
srvctl add nodeapps -n <node_name> -o <oracle_home> [-A <name|ip>/netmask[/if1[|if2|...]]]
srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home>

OPTIONS:
-A  VIP range, node, and database address specification. The format of the address string is:
    [<logical host name>]/<VIP address>/<net mask>[/<host interface1[ | host interface2 |..]>] [,] [<logical host name>]/<VIP address>/<net mask>[/<host interface1[ | host interface2 |..]>]
-a  for services, list of available instances; this list cannot include preferred instances
-m  domain name with the format us.mydomain.com
-n  node name that will support one or more instances
-o  $ORACLE_HOME to locate Oracle binaries
-P  for services, TAF preconnect policy - NONE, PRECONNECT
-r  for services, list of preferred instances; this list cannot include available instances
-s  spfile name
-u  updates the preferred or available list for the service to support the specified instance. Only one instance may be specified with the -u switch. Instances that already support the service should not be included.

EXAMPLES:
Add a new node:
srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0
Add a new database:
srvctl add database -d ORACLE -o $ORACLE_HOME
Add named instances to an existing database:
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
Add a service to an existing database with preferred instances (-r) and available instances (-a), using basic failover to the available instances:
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
Add a service to an existing database with preferred instances in list one and available instances in list two, using preconnect at the available instances:
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT

REMOVE CRS RESOURCES


srvctl remove database -d <database-name>
srvctl remove instance -d <database-name> [-i <instance-name>]
srvctl remove service -d <database-name> -s <service-name> [-i <instance-name>]
srvctl remove nodeapps -n <node-name>

EXAMPLES:
Remove the applications for a database:
srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database:
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service:
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances:
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node:
srvctl remove nodeapps -n myclust-4

MODIFY CRS RESOURCES


srvctl modify database -d <name> [-n <db_name>] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}] [-s <start_options>]
srvctl modify instance -d <database-name> -i <instance-name> -n <node-name>
srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
srvctl modify service -d <database-name> -s <service_name> -i <instance-name> -t <instance-name> [-f]
srvctl modify service -d <database-name> -s <service_name> -i <instance-name> -r [-f]
srvctl modify nodeapps -n <node-name> [-A <address-description>] [-x]

OPTIONS:
-i <instance-name> -t <instance-name>  the instance name (-i) is replaced by the instance name (-t)
-i <instance-name> -r                  the named instance is modified to be a preferred instance
-A address-list                        for VIP application, at node level
-s <asm_inst_name>                     add or remove ASM dependency

EXAMPLES:
Modify an instance to execute on another node:
srvctl modify instance -d ORACLE -n myclust-4
Modify a service to execute on another node:
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
Modify an instance to be a preferred instance for a service:
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r

RELOCATE SERVICES
srvctl relocate service -d <database-name> -s <service-name> [-i <instance-name>] -t <instance-name> [-f]

EXAMPLES:
Relocate a service from one instance to another:
srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01

ENABLE CRS RESOURCES (The resource may be up or down to use this function)
srvctl enable database -d <database-name>
srvctl enable instance -d <database-name> -i <instance-name>[,<instance-name-list>]
srvctl enable service -d <database-name> -s <service-name>[,<service-name-list>] [-i <instance-name>]

EXAMPLES:
Enable the database:
srvctl enable database -d ORACLE
Enable the named instances:
srvctl enable instance -d ORACLE -i RAC01,RAC02
Enable the service:
srvctl enable service -d ORACLE -s ERP,CRM
Enable the service at the named instance:
srvctl enable service -d ORACLE -s CRM -i RAC03

DISABLE CRS RESOURCES (The resource must be down to use this function)
srvctl disable database -d <database-name>
srvctl disable instance -d <database-name> -i <instance-name>[,<instance-name-list>]
srvctl disable service -d <database-name> -s <service-name>[,<service-name-list>] [-i <instance-name>]

EXAMPLES:
Disable the database globally:
srvctl disable database -d ORACLE
Disable the named instances:
srvctl disable instance -d ORACLE -i RAC01,RAC02
Disable the service globally:
srvctl disable service -d ORACLE -s ERP,CRM
Disable the service at the named instance:
srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04

11gR2 RAC Commands

SRVCTL Commands

SRVCTL is used to manage the following resources (components):

Component     Abbreviation    Description
asm           asm             Oracle ASM instance
database      db              Database instance
diskgroup     dg              Oracle ASM disk group
filesystem    filesystem      Oracle ASM file system
home          home            Oracle home or Oracle Clusterware home
listener      lsnr            Oracle Net listener
service       serv            Database service
ons, eons     ons, eons       Oracle Notification Services (ONS)

The available commands used with SRVCTL are:

Command    Description
add        Adds a component to the Oracle Restart configuration.
config     Displays the Oracle Restart configuration for a component.
disable    Disables management by Oracle Restart for a component.
enable     Reenables management by Oracle Restart for a component.
getenv     Displays environment variables in the Oracle Restart configuration for a database, Oracle ASM instance, or listener.
modify     Modifies the Oracle Restart configuration for a component.
remove     Removes a component from the Oracle Restart configuration.
setenv     Sets environment variables in the Oracle Restart configuration for a database, Oracle ASM instance, or listener.
start      Starts the specified component.
status     Displays the running status of the specified component.
stop       Stops the specified component.
unsetenv   Unsets environment variables in the Oracle Restart configuration for a database, Oracle ASM instance, or listener.

Commands and their objects:

srvctl add / modify / remove / relocate (objects: instance, database, service, nodeapps): the OCR is modified. With srvctl relocate you can relocate a service from one named instance to another named instance.

srvctl start / stop / status (objects: instance, database, service, asm, nodeapps).

srvctl disable / enable (objects: instance, database, service, asm): enable = when the server restarts, the resource must be restarted; disable = when the server restarts, the resource must NOT be restarted (perhaps we are working on some maintenance tasks).

srvctl config (objects: database, service, asm, nodeapps): lists configuration information from the OCR (Oracle Cluster Registry).

srvctl getenv / setenv / unsetenv (objects: instance, database, service, nodeapps): srvctl getenv displays the environment variables stored in the OCR for a target; srvctl setenv allows these variables to be set; srvctl unsetenv allows them to be unset.

Frequently used commands:

srvctl start database -d DBname
srvctl stop database -d DBname
srvctl start instance -d DBname -i INSTANCEname
srvctl stop instance -d DBname -i INSTANCEname
srvctl status database -d DBname
srvctl status instance -d DBname -i INSTANCEname
srvctl status nodeapps -n NODEname
srvctl enable database -d DBname
srvctl disable database -d DBname
srvctl enable instance -d DBname -i INSTANCEname
srvctl disable instance -d DBname -i INSTANCEname
srvctl config database -d DBname
srvctl getenv nodeapps  -> to get environment information stored in the OCR.
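When scripting around these commands, the status output is often parsed. Below is a minimal sketch in Python, assuming the classic one-line-per-instance format of `srvctl status database` output (the helper function and sample text are illustrative, not part of any Oracle tooling):

```python
import re

def parse_srvctl_status(output):
    """Parse 'srvctl status database -d <name>' output into {instance: bool}.

    Assumes the classic one-line-per-instance format, e.g.
    'Instance orcl1 is running on node rac1'."""
    state = {}
    for line in output.splitlines():
        m = re.match(r"Instance (\S+) is (not running|running)", line)
        if m:
            state[m.group(1)] = m.group(2) == "running"
    return state

sample = ("Instance orcl1 is running on node rac1\n"
          "Instance orcl2 is not running on node rac2")
print(parse_srvctl_status(sample))  # {'orcl1': True, 'orcl2': False}
```

A wrapper like this would normally feed on `subprocess.run(["srvctl", "status", "database", "-d", db]).stdout`; the parser is kept separate so it can be tested without a cluster.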

CRSCTL Commands

Dual Environment CRSCTL Commands:

crsctl add resource
crsctl add type
crsctl check css
crsctl delete resource
crsctl delete type
crsctl get hostname
crsctl getperm resource
crsctl getperm type
crsctl modify resource
crsctl modify type
crsctl setperm resource
crsctl setperm type
crsctl start resource
crsctl status resource
crsctl status type
crsctl stop resource

Oracle RAC Environment CRSCTL Commands:

The commands listed in this section manage the Oracle Clusterware stack in an Oracle RAC environment, which consists of the following:
- Oracle Clusterware, the member nodes and server pools
- Oracle ASM (if installed)
- Cluster Synchronization Services
- Cluster Time Synchronization Services

crsctl add crs administrator
crsctl add css votedisk
crsctl add serverpool
crsctl check cluster
crsctl check crs
crsctl check resource
crsctl check ctss
crsctl config crs
crsctl delete crs administrator
crsctl delete css votedisk
crsctl delete node
crsctl delete serverpool
crsctl disable crs
crsctl enable crs
crsctl get css
crsctl get css ipmiaddr
crsctl get nodename
crsctl getperm serverpool
crsctl lsmodules
crsctl modify serverpool
crsctl pin css
crsctl query crs administrator
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion
crsctl query css ipmidevice
crsctl query css votedisk
crsctl relocate resource
crsctl relocate server
crsctl replace discoverystring
crsctl replace votedisk
crsctl set css
crsctl set css ipmiaddr
crsctl set css ipmiadmin
crsctl setperm serverpool
crsctl start cluster
crsctl start crs
crsctl status server
crsctl status serverpool
crsctl stop cluster
crsctl stop crs
crsctl unpin css
crsctl unset css

Oracle Restart Environment CRSCTL Commands:

The commands listed in this section control Oracle High Availability Services.

crsctl check has
crsctl config has
crsctl disable has
crsctl enable has
crsctl query has releaseversion
crsctl query has softwareversion
crsctl start has
crsctl stop has

ATTENTION: The following commands are deprecated in Oracle Clusterware 11g release 2 (11.2):

crs_stat
crs_register
crs_unregister
crs_start
crs_stop
crs_getperm
crs_profile
crs_relocate
crs_setperm
crsctl check crsd
crsctl check cssd
crsctl check evmd
crsctl debug log
crsctl set css votedisk
crsctl start resources
crsctl stop resources

CRSD:

Cluster Ready Services Daemon, the engine for HA operations
- Manages 'application resources'
- Starts, stops, and fails over 'application resources'
- Spawns separate 'actions' to start/stop/check application resources
- Maintains configuration profiles in the OCR (Oracle Cluster Registry)
- Stores the current known state in the OCR
- Runs as root
- Is restarted automatically on failure

OCSSD:

Cluster Synchronization Services Daemon
- OCSSD is part of RAC and of Single Instance with ASM
- Provides access to node membership
- Provides group services
- Provides basic cluster locking
- Integrates with existing vendor clusterware, when present
- Can also run without integration with vendor clusterware
- Runs as the oracle user
- Failure exit causes a machine reboot; this is a feature to prevent data corruption in the event of a split brain

EVMD:

Event Management Daemon
- Generates events when things happen
- Spawns a permanent child, evmlogger
- evmlogger, on demand, spawns children
- Scans the callout directory and invokes callouts
- Runs as the oracle user
- Restarted automatically on failure

OPROCD

- Process monitor for the cluster (historically not used on Linux and Windows)
- Starting with 10.2.0.4, it replaces the hangcheck timer module on Linux
- If OPROCD fails, Clusterware will reboot the node

LMS (Global Cache Services)


Enables copies of blocks to be transferred from one instance to another without writing to disk (cache fusion)

LMON (Global Enqueue Services Monitor)

Lock monitor, responsible for cluster reconfiguration and locks when an instance joins or leaves the cluster

LMD (Global Enqueue Services daemon)


Lock Manager; manages requests for resources to control access to blocks

LCK0 (Instance Enqueue process)


Manages instance resource requests and cross-instance call operations for shared resources

DIAG (Diagnostic daemon)


Diagnostic needs for a RAC environment

TAF Database Configurations


TAF works with the following database configurations to effectively mask a database failure:

- Oracle Real Application Clusters
- Replicated systems
- Standby databases
- Single instance Oracle database

See Also: Oracle Real Application Clusters Installation and Configuration Guide

FAILOVER_MODE Parameters
The FAILOVER_MODE parameter must be included in the CONNECT_DATA section of a connect descriptor. FAILOVER_MODE can contain the subparameters described in Table 13-4.

Table 13-4 Subparameters of the FAILOVER_MODE Parameter

BACKUP
  Specify a different net service name for backup connections. A backup should be specified when using preconnect to pre-establish connections.

TYPE
  Specify the type of failover. Three types of Oracle Net failover functionality are available by default to Oracle Call Interface (OCI) applications:
  session: Set to fail over the session. If a user's connection is lost, a new session is automatically created for the user on the backup. This type of failover does not attempt to recover selects.
  select: Set to enable users with open cursors to continue fetching on them after failure. However, this mode involves overhead on the client side in normal select operations.
  none: This is the default. No failover functionality is used. This can also be explicitly specified to prevent failover from happening.

METHOD
  Determines how fast failover occurs from the primary node to the backup node:
  basic: Set to establish connections at failover time. This option requires almost no work on the backup server until failover time.
  preconnect: Set to pre-establish connections. This provides faster failover but requires that the backup instance be able to support all connections from every supported instance.

RETRIES
  Specify the number of times to attempt to connect after a failover. If DELAY is specified, RETRIES defaults to five retry attempts. Note: If a callback function is registered, then this subparameter is ignored.

DELAY
  Specify the amount of time in seconds to wait between connect attempts. If RETRIES is specified, DELAY defaults to one second. Note: If a callback function is registered, then this subparameter is ignored.

Note:
Oracle Net Manager does not provide support for TAF parameters. These parameters must be manually added.
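Since Net Manager cannot add the TAF subparameters, DBAs often template the connect descriptor when rolling out many client tnsnames.ora files. A sketch of such a generator (the helper name and its defaults are my own, not an Oracle API):

```python
def taf_connect_descriptor(service, hosts, failover_type="select",
                           method="basic", retries=20, delay=15, port=1521):
    """Build a TAF-enabled connect descriptor string for a tnsnames.ora entry.

    hosts: list of listener hostnames; listing more than one enables
    connect-time failover and client load balancing."""
    addresses = "".join(
        f"(ADDRESS=(PROTOCOL=tcp)(HOST={h})(PORT={port}))" for h in hosts)
    return (f"(DESCRIPTION=(LOAD_BALANCE=on)(FAILOVER=on){addresses}"
            f"(CONNECT_DATA=(SERVICE_NAME={service})"
            f"(FAILOVER_MODE=(TYPE={failover_type})(METHOD={method})"
            f"(RETRIES={retries})(DELAY={delay}))))")

print(taf_connect_descriptor("sales.us.acme.com",
                             ["sales1-server", "sales2-server"]))
```

The output of the call above matches the shape of the connect-time failover example later in this section; only the RETRIES/DELAY defaults here are arbitrary choices.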

TAF Implementation
Important:
Do not set the GLOBAL_DBNAME parameter in the SID_LIST_listener_name section of the listener.ora. A statically configured global database name disables TAF.

Depending on the FAILOVER_MODE parameters, you can implement TAF in a number of ways. Oracle recommends the following methods:

Example: TAF with Connect-Time Failover and Client Load Balancing Example: TAF Retrying a Connection Example: TAF Pre-Establishing a Connection

Example: TAF with Connect-Time Failover and Client Load Balancing

Implement TAF with connect-time failover and client load balancing for multiple addresses. In the following example, Oracle Net connects randomly to one of the protocol addresses on sales1-server or sales2-server. If the instance fails after the connection, the TAF application fails over to the other node's listener, preserving any SELECT statements in progress.

sales.us.acme.com=
 (DESCRIPTION=
  (LOAD_BALANCE=on)
  (FAILOVER=on)
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales1-server)
   (PORT=1521))
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales2-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (FAILOVER_MODE=
    (TYPE=select)
    (METHOD=basic))))

Example: TAF Retrying a Connection

TAF also provides the ability to automatically retry connecting if the first connection attempt fails with the RETRIES and DELAY parameters. In the following example, Oracle Net tries to reconnect to the listener on sales1-server. If the failover connection fails, Oracle Net waits 15 seconds before trying to reconnect again. Oracle Net attempts to reconnect up to 20 times.
sales.us.acme.com=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales1-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (FAILOVER_MODE=
    (TYPE=select)
    (METHOD=basic)
    (RETRIES=20)
    (DELAY=15))))

Example: TAF Pre-Establishing a Connection

A backup connection can be pre-established. The initial and backup connections must be explicitly specified. In the following example, clients that use net service name sales1.us.acme.com to connect to the listener on sales1-server are also preconnected to sales2-server. If sales1-server fails after the connection, Oracle Net fails over to sales2-server, preserving any SELECT statements in progress. Likewise, Oracle Net preconnects to sales1-server for those clients that use sales2.us.acme.com to connect to the listener on sales2-server.

sales1.us.acme.com=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales1-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_NAME=sales1)
   (FAILOVER_MODE=
    (BACKUP=sales2.us.acme.com)
    (TYPE=select)
    (METHOD=preconnect))))

sales2.us.acme.com=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales2-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_NAME=sales2)
   (FAILOVER_MODE=
    (BACKUP=sales1.us.acme.com)
    (TYPE=select)
    (METHOD=preconnect))))

TAF Verification

You can query FAILOVER_TYPE, FAILOVER_METHOD, and FAILED_OVER columns in the V$SESSION view to verify that TAF is correctly configured. Use the V$SESSION view to obtain information about the connected clients and their TAF status. For example, query the FAILOVER_TYPE, FAILOVER_METHOD, and FAILED_OVER columns to verify that you have correctly configured TAF as in the following SQL statement:
SELECT MACHINE, FAILOVER_TYPE, FAILOVER_METHOD, FAILED_OVER, COUNT(*)
FROM V$SESSION
GROUP BY MACHINE, FAILOVER_TYPE, FAILOVER_METHOD, FAILED_OVER;

The output before failover resembles the following:


MACHINE              FAILOVER_TYPE FAILOVER_M FAI   COUNT(*)
-------------------- ------------- ---------- --- ----------
sales1               NONE          NONE       NO          11
sales2               SELECT        PRECONNECT NO           1

The output after failover is:


MACHINE              FAILOVER_TYPE FAILOVER_M FAI   COUNT(*)
-------------------- ------------- ---------- --- ----------
sales2               NONE          NONE       NO          10
sales2               SELECT        PRECONNECT YES          1

Note:
You can monitor each step of TAF using an appropriately configured OCI TAF CALLBACK function.

See Also:

Oracle Call Interface Programmer's Guide
Oracle Database Reference for more information about the V$SESSION view

Specifying the Instance Role for Primary and Secondary Instance Configurations
The INSTANCE_ROLE parameter is an optional parameter for the CONNECT_DATA section of a connect descriptor. It enables you to specify a connection to the primary or secondary instance of Oracle9i Real Application Clusters configurations. This parameter is useful when:

You want to explicitly connect to a primary or secondary instance. The default is the primary instance. You want to use TAF to preconnect to a secondary instance.

INSTANCE_ROLE supports the following values:

primary   - Specifies a connection to the primary instance
secondary - Specifies a connection to the secondary instance
any       - Specifies a connection to whichever instance has the lowest load, regardless of primary or secondary instance role

Example: Connection to Instance Role Type

In the following example, net service name sales_primary enables connections to the primary instance, and net service name sales_secondary enables connections to the secondary instance.
sales_primary=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales1-server)
   (PORT=1521))
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales2-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_ROLE=primary)))

sales_secondary=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales1-server)
   (PORT=1521))
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales2-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_ROLE=secondary)))

Example: Connection To a Specific Instance There are times when Oracle Enterprise Manager and other system management products need to connect to a specific instance regardless of its role to perform administrative tasks. For these types of connections, configure (INSTANCE_NAME=instance_name) and (INSTANCE_ROLE=any) to connect to the instance regardless of its role. In the following example, net service name sales1 enables connections to the instance on sales1-server and sales2 enables connections to the instance on sales2-server. (SERVER=dedicated) is specified to force a dedicated server connection.
sales1=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales1-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_ROLE=any)
   (INSTANCE_NAME=sales1)
   (SERVER=dedicated)))

sales2=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales2-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_ROLE=any)
   (INSTANCE_NAME=sales2)
   (SERVER=dedicated)))

Example: TAF Pre-Establishing a Connection

If Transparent Application Failover (TAF) is configured, a backup connection can be pre-established to the secondary instance. The initial and backup connections must be explicitly specified. In the following example, Oracle Net connects to the listener on sales1-server and preconnects to sales2-server, the secondary instance. If sales1-server fails after the connection, the TAF application fails over to sales2-server, the secondary instance, preserving any SELECT statements in progress.

sales1.acme.com=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales1-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_ROLE=primary)
   (FAILOVER_MODE=
    (BACKUP=sales2.acme.com)
    (TYPE=select)
    (METHOD=preconnect))))

sales2.acme.com=
 (DESCRIPTION=
  (ADDRESS=
   (PROTOCOL=tcp)
   (HOST=sales2-server)
   (PORT=1521))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (INSTANCE_ROLE=secondary)))


RAC and TAF
From http://www.dba-oracle.com/art_oramag_rac_taf.htm

Oracle RAC and Hardware Failover

To detect a node failure, the Cluster Manager uses a background process, the Global Enqueue Service Monitor (LMON), to monitor the health of the cluster. When a node fails, the Cluster Manager reports the change in the cluster's membership to Global Cache Services (GCS) and Global Enqueue Services (GES). These services are then remastered based on the current membership of the cluster. To successfully remaster the cluster services, Oracle RAC keeps track of all resources and resource states on each node and then uses this information to restart these resources on a backup node. These processes also manage the state of in-flight transactions and work with TAF to either restart or resume the transactions on the new node.

Now let's see how Oracle RAC and TAF work together to ensure that a server failure does not cause an unplanned service interruption.

Using Transparent Application Failover

After an Oracle RAC node crashes (usually from a hardware failure), all new application transactions are automatically rerouted to a specified backup node. The challenge in rerouting is to not lose transactions that were "in flight" at the exact moment of the crash. One of the requirements of continuous availability is the ability to restart in-flight application transactions, allowing a failed node to resume processing on another server without interruption. Oracle's answer to application failover is an Oracle Net mechanism dubbed Transparent Application Failover. TAF allows the DBA to configure the type and method of failover for each Oracle Net client. For an application to use TAF, it must use failover-aware API calls from the Oracle Call Interface (OCI). Inside OCI are TAF callback routines that can be used to make any application failover-aware.
While the concept of failover is simple, providing an apparent instant failover can be extremely complex, because there are many ways to restart in-flight transactions. The TAF architecture offers the ability to restart transactions at either the transaction (SELECT) or session level:

- SELECT failover. With SELECT failover, Oracle Net keeps track of all SELECT statements issued during the transaction, tracking how many rows have been fetched back to the client for each cursor associated with a SELECT statement. If the connection to the instance is lost, Oracle Net establishes a connection to another Oracle RAC node and reexecutes the SELECT statements, repositioning the cursors so the client can continue fetching rows as if nothing has happened. The SELECT failover approach is best for data warehouse systems that perform complex and time-consuming transactions.

- SESSION failover. When the connection to an instance is lost, SESSION failover results only in the establishment of a new connection to another Oracle RAC node; any work in progress is lost. SESSION failover is ideal for online transaction processing (OLTP) systems, where transactions are small.

Oracle TAF also offers choices on how to restart a failed transaction. The Oracle DBA may choose one of the following failover methods:

- BASIC failover. In this approach, the application connects to a backup node only after the primary connection fails. This approach has low overhead, but the end user experiences a delay while the new connection is created.

- PRECONNECT failover. In this approach, the application simultaneously connects to both a primary and a backup node. This offers faster failover, because a pre-spawned connection is ready to use. But the extra connection adds everyday overhead by duplicating connections.

Currently, TAF will fail over standard SQL SELECT statements that were in flight at the moment of a node crash. In the current release of TAF, however, some types of transactions must be restarted from the beginning of the transaction. The following types of transactions do not automatically fail over and must be restarted by TAF:

- Transactional statements. Transactions involving INSERT, UPDATE, or DELETE statements are not supported by TAF.
- ALTER SESSION statements. ALTER SESSION and SQL*Plus SET statements do not fail over.

The following do not fail over and cannot be restarted:

- Temporary objects. Transactions using temporary segments in the TEMP tablespace and global temporary tables do not fail over.
- PL/SQL package states. PL/SQL package states are lost during failover.

Using Oracle RAC and TAF Together The continuous availability features of Oracle RAC and TAF come together when these products cooperate in restarting failed transactions. Let's take a closer look at how this works. Within each connected Oracle Net client, tnsnames.ora file parameters define the failover types and methods for that client. The parameters direct Oracle RAC and TAF on how to restart any transactions that may be in-flight during a hardware failure on the node. It is important to note that TAF failover control is external to the Oracle RAC cluster, and each Oracle Net client may have unique failover types and methods, depending on processing requirements. The following is a client tnsnames.ora file entry for a node, including its current TAF failover parameters:

bubba.world =
 (DESCRIPTION_LIST =
  (FAILOVER = true)
  (LOAD_BALANCE = true)
  (DESCRIPTION =
   (ADDRESS = (PROTOCOL = TCP)(HOST = redneck)(PORT = 1521))
   (CONNECT_DATA =
    (SERVICE_NAME = bubba)
    (SERVER = dedicated)
    (FAILOVER_MODE =
     (BACKUP=cletus)
     (TYPE=select)
     (METHOD=preconnect)
     (RETRIES=20)
     (DELAY=3)))))

The failover_mode section of the tnsnames.ora file lists the parameters and their values:

- BACKUP=cletus. This names the backup node that will take over failed connections when a node crashes. In this example, the primary server is bubba, and TAF will reconnect failed transactions to the cletus instance in case of server failure.
- TYPE=select. This tells TAF to track the cursor state of in-flight SELECT statements so that fetching can resume on the backup connection after failover.
- METHOD=preconnect. This directs TAF to create two connections at transaction startup time: one to the primary bubba database and a backup connection to the cletus database. In case of instance failure, the cletus database will be ready to resume the failed transaction.
- RETRIES=20. This directs TAF to retry a failover connection up to 20 times.
- DELAY=3. This tells TAF to wait three seconds between connection retries.

Remember, you must set these TAF parameters in every tnsnames.ora file on every Oracle Net client that needs transparent failover.

Putting It All Together

An Oracle Net client can be a single PC or a huge application server. In the architectures of giant Oracle RAC systems, each application server has a customized tnsnames.ora file that governs the failover method for all connections that are routed to that application server.

Watching TAF in Action

The transparency of TAF operation is a tremendous advantage to application users, but DBAs need to quickly see what has happened and where failover traffic is going, and they need to be able to get the status of failover transactions. To provide this capability, the Oracle data dictionary has several new columns in the V$SESSION view that give the current status of failover transactions. The following query calls the new FAILOVER_TYPE, FAILOVER_METHOD, and FAILED_OVER columns of the V$SESSION view. Be sure to note that the query is restricted to nonsystem sessions, because Oracle data definition language (DDL) and data manipulation language (DML) are not recoverable with TAF.
select username, sid, serial#, failover_type, failover_method, failed_over
from v$session
where username not in ('SYS','SYSTEM','PERFSTAT')
and failed_over = 'YES';

You can run this script against the backup node after an instance failure to see those transactions that have been reconnected with TAF. Remember, TAF will quickly redirect transactions, so you'll only see entries for a short period of time immediately after the failover. A backup node can have a variety of concurrent failover transactions, because the tnsnames.ora file on each Oracle Net client specifies the backup node, the failover type, and the failover method.
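Checks like this are easy to automate. The sketch below applies the same filter to rows already fetched from V$SESSION through any Oracle driver; the helper name and the sample rows are illustrative, not real session data:

```python
def failed_over_sessions(rows):
    """Return the sessions TAF has reconnected (FAILED_OVER = 'YES'),
    excluding system schemas, mirroring the V$SESSION query above.

    rows: list of dicts with at least 'username' and 'failed_over' keys."""
    system = {"SYS", "SYSTEM", "PERFSTAT"}
    return [r for r in rows
            if r["username"] not in system and r["failed_over"] == "YES"]

# Illustrative rows, not captured from a real instance
rows = [
    {"username": "SYS",   "sid": 10, "failed_over": "NO"},
    {"username": "SCOTT", "sid": 42, "failed_over": "YES"},
    {"username": "SCOTT", "sid": 43, "failed_over": "NO"},
]
print(failed_over_sessions(rows))
```

Polling this in a loop right after a node failure gives the same short-lived snapshot the SQL script does.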

RAC Performance Tuning

Global Cache Wait Events

GC - Global Cache
Current - Current block
CR - Consistent Read block

gc current block 2-way (write/write contention)
An instance requests authorization for a block to be accessed in current mode. The instance mastering the resource receives the request; the master has the current version of the block and sends it to the requester via Cache Fusion, keeping a Past Image (PI).
If you get this then do the following:
- Analyze the contention; check the segments in the "current blocks received" section
- Use an application partitioning scheme
- Make sure the system has enough CPU power
- Make sure the interconnect is as fast as possible
- Ensure that socket send and receive buffers are configured correctly

gc current block 3-way (write/write contention)
An instance requests authorization for a block to be accessed in current mode. The instance mastering the resource receives the request and forwards it to the current holder of the block. The holding instance sends a copy of the current version of the block to the requester and transfers the exclusive lock to the requesting instance. It also keeps a Past Image (PI). Use the above actions to increase the performance.

gc cr block 2-way / gc cr block 3-way (write/read contention)
The difference from the events above is that the holder sends a consistent-read copy of the block, thus keeping the current block itself.

gc current block busy (write/write contention)
The block was being used by a session on another instance. The requester will eventually get the block via Cache Fusion, but the transfer was delayed because the holding instance could not write the corresponding redo immediately.
If you get this then do the following:
- Ensure the log writer is tuned

gc current buffer busy (local contention)
This is the same as above (gc current block busy); the difference is that another session on the same instance also requested the block (hence local contention).

gc current block congested (no contention)
This is caused by heavy congestion on the GCS, meaning CPU resources are stretched.

Global Enqueue Waits


TX - Transaction enqueue; used for transaction demarcation and tracking
TM - Table or partition enqueue; used to protect table definitions during DML operations
HW - High-Water Mark enqueue; used to serialize the allocation of space beyond a segment's high-water mark
SQ - Sequence enqueue; used to serialize incrementing of an Oracle sequence number
US - Undo Segment enqueue; mainly used by the Automatic Undo Management (AUM) feature
TA - Enqueue used mainly for transaction recovery as part of instance recovery

Incremental Roll Forward

Overview
With Oracle Database 10g, you can use incremental backups to quickly move a standby database forward in time or load new data into a clone database for testing purposes. You can perform testing in a clone or standby database configuration, then flash back the test changes, and periodically refresh the clone database (or standby database) from the primary database by using the generated incremental backups.

Steps

1. Make an image copy (clone) of the primary database for test purposes.
2. Roll the clone database forward to make it transactionally consistent.
3. Enable Flashback Database on the clone database.
4. Open the clone database resetlogs for use as a testing database.
5. Perform testing for the required period (day or week).
6. Flash back all changes done during the testing.
7. Apply an incremental backup from the original production database.
8. Go to step 2.

Startup and Open Standby Database


Startup commands
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;
select severity, error_code, message, to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status;

Open standby read only


alter database recover managed standby database cancel;
alter database open read only;
select OPEN_MODE from v$database;

Back to redo apply (it only works when users are disconnected from the database)
alter database recover managed standby database disconnect from session;

Errors when users are connecting:


SQL> alter database recover managed standby database disconnect from session; alter database recover managed standby database disconnect from session * ERROR at line 1: ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected

Check Primary and Standby Status


Check role and status (both primary and standby)
select NAME, DB_UNIQUE_NAME, OPEN_MODE, DATABASE_ROLE from v$database;
select NAME, OPEN_MODE, DATABASE_ROLE from v$database; --9i db

Check protection mode on primary database


select protection_mode, protection_level from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

Check processes and statuses


SELECT PROCESS, STATUS,SEQUENCE#,BLOCK#,BLOCKS, DELAY_MINS FROM V$MANAGED_STANDBY;

Log Apply
Start log apply in standby
alter database recover managed standby database disconnect from session;

Remove a delay from a standby


alter database recover managed standby database cancel;
alter database recover managed standby database nodelay disconnect;

Cancel managed recovery/stop log apply


alter database recover managed standby database cancel;

Disable/Enable archive log destinations


alter system set log_archive_dest_state_2 = 'defer';
alter system set log_archive_dest_state_2 = 'enable';

Logical standby apply stop/start Stop...


alter database stop logical standby apply;

Start...

alter database start logical standby apply;

Logs
Check which logs are missing and log apply gap Run this on the standby

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
select sequence#, archived, applied, first_time, next_time
from v$archived_log
order by sequence#;

Run this on both primary and standby

select count(*) from v$archive_gap;
select * from v$archive_gap;
select max(sequence#) from v$log_history;

select local.thread#, local.sequence#
from (select thread#, sequence#
      from v$archived_log
      where dest_id = 1) local
where local.sequence# not in
      (select sequence#
       from v$archived_log
       where dest_id = 2
       and thread# = local.thread#);
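The same set difference is easy to compute in a monitoring script once the two sequence lists have been fetched. A sketch (function name is my own):

```python
def archive_gaps(primary_seqs, standby_seqs):
    """Log sequence numbers archived on the primary (dest_id=1) but not
    yet present on the standby destination (dest_id=2), mirroring the
    NOT IN query against v$archived_log above."""
    return sorted(set(primary_seqs) - set(standby_seqs))

print(archive_gaps([101, 102, 103, 104], [101, 102]))  # [103, 104]
```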

See how up to date a physical standby is Run this on the primary


set numwidth 15
select max(sequence#) current_seq from v$log;

Then run this on the standby


set numwidth 15
select max(applied_seq#) last_seq from v$archive_dest_status;
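Comparing the two numbers gives the apply lag in log sequences; a trivial helper for a monitoring script (the function name and threshold are my own choices):

```python
def standby_lag(current_seq, last_applied_seq, warn_at=3):
    """Report how many log sequences the standby is behind the primary,
    from max(sequence#) on the primary and max(applied_seq#) on the
    standby (the two queries above)."""
    lag = current_seq - last_applied_seq
    return {"lag": lag, "warn": lag >= warn_at}

print(standby_lag(120, 117))  # {'lag': 3, 'warn': True}
```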

Switch logs
alter system switch logfile;
alter system archive log current;

Register a missing log file


alter database register physical logfile '<fullpath/filename>';

If FAL doesn't work and it says the log is already registered


alter database register or replace physical logfile '<fullpath/filename>';

If that doesn't work, try this...

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;

wait for the recovery to finish - then cancel

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;

Display info about all log destinations To be run on the primary


set lines 100
set numwidth 15
column ID format 99
column "SRLs" format 99
column active format 99
col type format a4

select ds.dest_id id
     , ad.status
     , ds.database_mode db_mode
     , ad.archiver type
     , ds.recovery_mode
     , ds.protection_mode
     , ds.standby_logfile_count "SRLs"
     , ds.standby_logfile_active active
     , ds.archived_seq#
from v$archive_dest_status ds
   , v$archive_dest ad
where ds.dest_id = ad.dest_id
and ad.status != 'INACTIVE'
order by ds.dest_id
/

Display log destinations options

To be run on the primary

set numwidth 8 lines 100

column id format 99
select dest_id id
     , archiver
     , transmit_mode
     , affirm
     , async_blocks async
     , net_timeout net_time
     , delay_mins delay
     , reopen_secs reopen
     , register
     , binding
from v$archive_dest
order by dest_id
/

List any standby redo logs

set lines 100 pages 999
col member format a70
select st.group#
     , st.sequence#
     , ceil(st.bytes / 1048576) mb
     , lf.member
from v$standby_log st
   , v$logfile lf
where st.group# = lf.group#

Misc
Turn on fal tracing on the primary db
alter system set LOG_ARCHIVE_TRACE = 128;

Stop the Data Guard broker

alter system set dg_broker_start=false;

Startup and Open Standby Database


Startup commands
startup nomount alter database mount standby database; alter database recover managed standby database disconnect; select severity, error_code,message,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status;

Open standby read only


alter database recover managed standby database cancel; alter database open read only; select OPEN_MODE from v$database;

Back to redo apply (it only works when users are disconnect from the database)
alter database recover managed standby database disconnect from session;

Errors when users are connecting:


SQL> alter database recover managed standby database disconnect from session; alter database recover managed standby database disconnect from session * ERROR at line 1: ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected

Check Primary and Standby Status


Check role and status (both primary and standby)
select NAME, DB_UNIQUE_NAME, OPEN_MODE, DATABASE_ROLE from v$database;
select NAME, OPEN_MODE, DATABASE_ROLE from v$database;  -- 9i db

Check protection mode on primary database


select protection_mode, protection_level from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

Check processes and statuses


SELECT PROCESS, STATUS,SEQUENCE#,BLOCK#,BLOCKS, DELAY_MINS FROM V$MANAGED_STANDBY;

Log Apply
Start log apply in standby
alter database recover managed standby database disconnect from session;

Remove a delay from a standby


alter database recover managed standby database cancel;
alter database recover managed standby database nodelay disconnect;

Cancel managed recovery/stop log apply


alter database recover managed standby database cancel;

Disable/Enable archive log destinations


alter system set log_archive_dest_state_2 = 'defer';
alter system set log_archive_dest_state_2 = 'enable';

Logical standby apply stop/start

Stop...


alter database stop logical standby apply;

Start...
alter database start logical standby apply;

Logs
Check which logs are missing and the log apply gap

Run this on the standby

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
select sequence#, archived, applied, first_time, next_time
from v$archived_log
order by sequence#;

Run this on both primary and standby

select count(*) from v$archive_gap;
select * from v$archive_gap;
select max(sequence#) from v$log_history;

select local.thread#
     , local.sequence#
from (select thread#
           , sequence#
      from v$archived_log
      where dest_id=1) local
where local.sequence# not in
      (select sequence#
       from v$archived_log
       where dest_id=2
       and thread# = local.thread#)
/

See how up to date a physical standby is

Run this on the primary


set numwidth 15
select max(sequence#) current_seq from v$log;

Then run this on the standby


set numwidth 15
select max(applied_seq#) last_seq from v$archive_dest_status;

Switch logs
alter system switch logfile;
alter system archive log current;

Register a missing log file

alter database register physical logfile '<fullpath/filename>';

If FAL doesn't work and it says the log is already registered


alter database register or replace physical logfile '<fullpath/filename>';

If that doesn't work, try this...

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;

wait for the recovery to finish - then cancel

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;


Instance Tuning Steps

Step 1: Define the Problem

Identify the performance objective. What is the measure of acceptable performance? How many transactions an hour, or what response time in seconds, will meet the required performance level?
Identify the scope of the problem. What is affected by the slowdown? Is the whole instance slow? Is it a particular application, program, specific operation, or a single user?
Identify the time frame when the problem occurs. Is the problem only evident during peak hours? Does performance deteriorate over the course of the day? Was the slowdown gradual (over the space of months or weeks) or sudden? Quantify the slowdown.
Identify any changes. Has the operating system software, hardware, application software, or Oracle release been upgraded? Has more data been loaded into the system, or has the data volume or user population grown?

Step 2: Examine the Host System and Examine the Oracle Statistics
Examine Host System

CPU Usage
- If there is a significant amount of idle CPU, then there could be an I/O, application, or database bottleneck. Note that wait I/O should be considered as idle CPU.
- If there is high CPU usage, then determine whether the CPU is being used effectively. Is the majority of CPU usage attributable to a small number of high-CPU programs, or is the CPU consumed by an evenly distributed workload?
- If the CPU is used by a small number of high-usage programs, then look at the programs to determine the cause. If a small number of Oracle processes consumes most of the CPU resources, then use SQL_TRACE and TKPROF to identify the SQL or PL/SQL statements.
- Oracle CPU statistics: V$SYSSTAT, V$SESSTAT and V$RSRC_CONSUMER_GROUP

I/O Problems
An overly active I/O system can be evidenced by disk queue lengths greater than two, or disk service times that are over 20-30 ms.
- Use operating system monitoring tools, such as sar -d or iostat, to determine what processes are running on the system as a whole and to monitor disk access to all files.
- Check the Oracle wait event data in V$SYSTEM_EVENT to see whether the top wait events are I/O related.
- An I/O problem can also manifest itself with non-I/O related wait events. For example, difficulty in finding a free buffer in the buffer cache or high wait times for the log to be flushed to disk can also be symptoms of an I/O problem.

Network
Using operating system utilities, look at the network round-trip ping time and the number of collisions. If the network is causing large delays in response time, then investigate possible causes.
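The I/O rules of thumb above can be sketched as a simple filter. This is a minimal, hypothetical Python sketch; the sample dicts stand in for parsed `iostat`/`sar -d` output and are made up for illustration.

```python
# Flag potentially overloaded disks using the rule-of-thumb thresholds above:
# average queue length > 2, or service time over 20-30 ms.

def flag_busy_disks(samples, max_queue=2.0, max_svc_ms=20.0):
    """Return device names whose averages exceed either threshold."""
    busy = []
    for s in samples:
        if s["avg_queue"] > max_queue or s["svc_time_ms"] > max_svc_ms:
            busy.append(s["device"])
    return busy

# Hypothetical samples, as if parsed from iostat output
samples = [
    {"device": "disk50", "avg_queue": 0.4, "svc_time_ms": 6.0},
    {"device": "disk51", "avg_queue": 3.1, "svc_time_ms": 12.0},  # long queue
    {"device": "disk60", "avg_queue": 0.9, "svc_time_ms": 35.0},  # slow service
]
print(flag_busy_disks(samples))  # ['disk51', 'disk60']
```

On a real system these averages come from the OS tools named above; the thresholds are starting points, not hard limits.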

Step 3: Implement and Measure Change


Apply remedies one at a time and measure the effect of each change against the baseline before making the next change; altering several things at once makes it impossible to tell which change actually helped.

Step 4: Determine whether the performance objective defined in step 1 has been met. If not, then repeat steps 2 and 3 until the performance goals are met.
Tuning Steps Scripts

NOTES:
1. Spool all the results in SQL*Plus if necessary:
SQL> SPOOL filename
SQL> ...
SQL> SPOOL OFF
2. Mark the time:
$ date
SQL> prompt -- current system time is:
SQL> select to_char(sysdate, 'Dy DD-Mon-YYYY HH24:MI:SS') as "Current Time" from dual;

CPU Performance
Find CPU Consuming Sessions

Method 1: compare the PID from top and session.sql:
$ top (then 'c')
SQL> @sess.sql

Method 2: run the following SQL script:
SQL> @top_cpu_user.sql

Wait Event

List Wait Events SQL> @wait.sql

Lock and Block


List Blocking and Blocked Sessions

SQL> @whoblock
SQL> @show_blocking_sessions
SQL> @rac_locks_blocking


List Locks

SQL> @show_dml_lock
SQL> @show_all_lock

Running SQL
Find Bad SQL Statements

SQL> @expensive_sql
SQL> @find_sql.sql
SQL> @suspicious_sql
SQL> @wait_sql


List Currently Running SQL Statements

SQL> @what_sql
SQL> @who_sql
SQL> @who_sql2

Memory Statistics
Memory Hit Ratio

SQL> @hit (The result is spooled to hit.lst)


SGA Statistics

SQL> @sga_stat
SQL> @sga_free_pool

Hard Parsing Statistics

SQL> @hard_parse
SQL> @recent_hard (the result is spooled to recent_hard.lst)

Other
RAC hang diag:
SQL> @hang
SQL> @racdiag

Introduction to Explain Plan


Explain plan displays:

Row source tree:
- an ordering of the tables referenced by the statement
- an access method for each table mentioned in the statement
- a join method for tables affected by join operations in the statement
- data operations like filter, sort, or aggregation

The plan table also contains:
- optimization information, such as the cost and cardinality of each operation
- partitioning information, such as the set of accessed partitions
- parallel execution information, such as the distribution method of join inputs

Examine an explain plan:

Look for the following in an explain plan:
- Full scans
- Unselective range scans
- Late predicate filters
- Wrong join order
- Late filter operations
- The columns of the index being used
- Their selectivity (fraction of table being accessed)

How to Use Explain Plan


1. Create the plan table if it does not already exist

SQL> @?/rdbms/admin/utlxplan.sql

2. Generate plan

General method:
SQL> delete plan_table;
SQL> explain plan for <SQL_STATEMENT>;

Option 1: Identifying statements for EXPLAIN PLAN
Example:
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st1' FOR SELECT last_name FROM employees;

Option 2: Specifying different tables for EXPLAIN PLAN
Examples:
1) SQL> EXPLAIN PLAN INTO my_plan_table FOR SELECT last_name FROM employees;
2) SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st1' INTO my_plan_table FOR SELECT last_name FROM employees;

3. Show the plan

UTLXPLS.SQL: displays the plan table output for serial processing; same as
"select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))"
SQL> @?/rdbms/admin/utlxpls;

UTLXPLP.SQL: displays the plan table output including parallel execution columns; same as
"select * from table(dbms_xplan.display())"
SQL> @?/rdbms/admin/utlxplp;

The DBMS_XPLAN.DISPLAY function accepts options for displaying the plan table output:
- a plan table name, if you are using a table different than PLAN_TABLE
- a statement id, if you have set one with EXPLAIN PLAN
- a format option that determines the level of detail: BASIC, TYPICAL, SERIAL, or ALL

Examples:
SQL> select * from table(dbms_xplan.display);
SQL> select plan_table_output from table(dbms_xplan.display());
SQL> select plan_table_output from table(dbms_xplan.display(<table_name>, <statement id>, 'TYPICAL'));

Explain Plan for Past Queries

Examples:
SQL> select sid, username, sql_id, prev_sql_id, sql_child_number from v$session where username='&user';
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR(('&sql_id'),0,'ALL'));  -- in memory
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));  -- in the AWR

What to look for in an explain plan: the execution plan shows the SQL optimizer's query execution path.

Full Table Scan vs. Indexes


Full Table Scan - the whole table is read up to the high water mark. An FTS is not recommended for large tables unless you are reading more than roughly 5-10% of the rows.
Index lookup - data is accessed by looking up key values in an index and returning rowids.
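The 5-10% rule above comes from simple I/O arithmetic: an FTS reads every block with multiblock reads, while an index lookup costs roughly one single-block read per visited index level plus one table visit per matching row. A back-of-envelope Python sketch, with all figures hypothetical:

```python
# Rough I/O request counts behind the FTS-vs-index trade-off.
# All sizes and the index height are made-up illustrative numbers.

def fts_io(table_blocks, multiblock=8):
    """I/O requests for a full table scan with multiblock reads."""
    return -(-table_blocks // multiblock)   # ceiling division

def index_io(rows_selected, index_height=3):
    """I/O requests for an index lookup: branch levels + one table visit per row."""
    return index_height + rows_selected

table_blocks = 10_000   # table size in blocks
rows = 100_000          # rows in the table
for pct in (1, 5, 20):
    selected = rows * pct // 100
    print(pct, fts_io(table_blocks), index_io(selected))
```

With these numbers the index wins at 1% selectivity but loses well before 20%; the exact crossover depends on row size, clustering, and caching, which this sketch ignores.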

Methods of index lookup:


index unique scan

Always returns a single value (a single rowid), like empno=1345 Quickest access method available (points to the block) Index unique scan is one of the most efficient ways of accessing data. This access method is used for returning the data from B-tree indexes. The optimizer chooses a unique scan when all columns of a unique (B-tree) index are specified with equality conditions.

Index range scan


Accessing a range of values of a particular column, like empno > 34434. An index range scan is a common operation for accessing selective data. It can be bounded (on both sides) or unbounded (on one or both sides). Data is returned in the ascending order of index columns; multiple rows with identical values are sorted (in ascending order) by ROWID.

The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions such as the following:

col1 = :b1
col1 < :b1
col1 > :b1
AND combinations of the preceding conditions for leading columns in the index
col1 like 'ASD%'

Wild-card searches should not be in a leading position: the condition col1 like '%ASD' does not result in a range scan.
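The bounded range scan described here can be illustrated with a binary search over sorted leaf entries. A minimal Python sketch, assuming a sorted list of (key, rowid) pairs standing in for B-tree leaf entries (not how Oracle stores them):

```python
# Emulate a bounded index range scan: binary-search to the lower bound,
# then walk forward to the upper bound. Keys come back in ascending order.
import bisect

index = sorted([(30, "r1"), (10, "r2"), (20, "r3"), (30, "r4"), (40, "r5")])

def range_scan(idx, lo, hi):
    """Return rowids for lo <= key <= hi (bounded on both sides)."""
    start = bisect.bisect_left(idx, (lo,))               # first entry >= lo
    stop = bisect.bisect_right(idx, (hi, chr(0x10FFFF)))  # past last entry <= hi
    return [rowid for _key, rowid in idx[start:stop]]

print(range_scan(index, 20, 30))  # ['r3', 'r1', 'r4']
```

Note that the two rows with key 30 come back ordered by their rowid strings, mirroring the ROWID tiebreak described above.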

index full scan


Chosen when statistics indicate that it is going to be more efficient than a full table scan and a sort. A full scan is available if a predicate references one of the columns in the index. A full scan is also available if:
1) all of the columns in the table referenced in the query are included in the index, and
2) at least one of the index columns is not null.

index fast full scan


Scans all the blocks in the index; rows are not returned in sorted order. A fast full index scan is an alternative to a full table scan when the index contains all the columns needed for the query and at least one column in the index key has a NOT NULL constraint. A fast full scan accesses the data in the index itself, without accessing the table.

index skip scan


Finds rows even if the column is not the leading column of a concatenated index This concept is easier to understand if one imagines a prefix index to be similar to a partitioned table. In a partitioned object the partition key (in this case the leading column) defines which partition data is stored within. In the index case every row underneath each key (the prefix column) would be ordered under that key. In a skip scan of a prefixed index, the prefixed value is skipped and the non-prefix columns are accessed as logical sub-indexes. The trailing columns are ordered within the prefix column and so a 'normal' index access can be done ignoring the prefix.
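The sub-index idea described above can be sketched in a few lines: group the concatenated index entries by the leading (prefix) column, then probe each group on the trailing column. A hypothetical Python sketch with made-up (gender, empno) data, not Oracle's implementation:

```python
# Skip scan sketch: one logical sub-index per leading-column value,
# each probed on the trailing column with an ordinary binary search.
import bisect
from collections import defaultdict

# (prefix column, trailing column, rowid) - illustrative entries
entries = [("F", 101, "r1"), ("F", 205, "r2"), ("M", 101, "r3"), ("M", 300, "r4")]

def skip_scan(index_entries, empno):
    """Probe every prefix's sub-index for a trailing-column match."""
    subindexes = defaultdict(list)
    for prefix, key, rowid in sorted(index_entries):
        subindexes[prefix].append((key, rowid))   # ordered within each prefix
    hits = []
    for prefix in subindexes:                     # "skip" across prefix values
        sub = subindexes[prefix]
        pos = bisect.bisect_left(sub, (empno,))
        if pos < len(sub) and sub[pos][0] == empno:
            hits.append(sub[pos][1])
    return hits

print(skip_scan(entries, 101))  # ['r1', 'r3']
```

The cost grows with the number of distinct prefix values, which is why skip scans pay off only when the leading column has few distinct values.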

Join Types

Sort Merge Join

Two steps: 1) sort join operation: Both the inputs are sorted on the join key 2) merge join operation: The sorted lists are merged together.

Sort merge joins are useful when the join condition between two tables is an inequality condition (but not a nonequality) like <, <=, >, or >= Sort merge joins can perform better than hash joins if both of the following conditions exist:

1) The row sources are sorted already. 2) A sort operation does not have to be done.

Sort merge joins perform better than nested loop joins for large data sets. You cannot use hash joins unless there is an equality condition. USE_MERGE hint
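The two steps above, sort then merge, can be sketched for the equality case as follows. A minimal Python sketch with made-up emp/dept rows; Oracle's implementation also handles inequality join conditions, which this sketch does not.

```python
# Sort-merge join sketch: 1) sort both inputs on the join key,
# 2) merge the two sorted lists, emitting matching pairs.

def sort_merge_join(left, right, key):
    ls = sorted(left, key=lambda r: r[key])    # sort join step
    rs = sorted(right, key=lambda r: r[key])
    out, i, j = [], 0, 0
    while i < len(ls) and j < len(rs):         # merge join step
        if ls[i][key] < rs[j][key]:
            i += 1
        elif ls[i][key] > rs[j][key]:
            j += 1
        else:
            k = j                              # emit all right rows with this key
            while k < len(rs) and rs[k][key] == ls[i][key]:
                out.append({**ls[i], **rs[k]})
                k += 1
            i += 1
    return out

emp = [{"deptno": 20, "ename": "SMITH"}, {"deptno": 10, "ename": "KING"}]
dept = [{"deptno": 10, "dname": "ACCOUNTING"}, {"deptno": 20, "dname": "RESEARCH"}]
print(sort_merge_join(emp, dept, "deptno"))
```

If the inputs arrive already sorted (condition 1 above), the two sort calls are the part that can be skipped, which is where the advantage over a hash join comes from.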

Nested Loops

Steps:

1) The optimizer determines the driving table and designates it as the outer table.
2) The other table is designated as the inner table.
3) For every row in the outer table, Oracle accesses all the rows in the inner table.
The outer loop is for every row in the outer table and the inner loop is for every row in the inner table. The outer loop appears before the inner loop in the execution plan, as follows:
NESTED LOOPS outer_loop
inner_loop

Nested Loop is better only if a few rows are being retrieved. The optimizer uses nested loop joins when joining small number of rows, with a good driving condition between the two tables. The outer loop is the driving row source. The inner loop is iterated for every row returned from the outer loop, ideally by an index scan. USE_NL(table1 table2) hint
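The outer/inner shape above can be sketched directly. A hypothetical Python sketch with made-up rows; the dict keyed on the join column stands in for the index probe that ideally drives the inner loop.

```python
# Nested loop join sketch: scan the outer (driving) row source once;
# for each outer row, probe the inner table - here via a dict acting
# as an index on the join column.

def nested_loop_join(outer, inner_index, key):
    out = []
    for o in outer:                               # outer loop: driving rows
        for i in inner_index.get(o[key], []):     # inner loop: indexed probe
            out.append({**o, **i})
    return out

emp = [{"deptno": 10, "ename": "KING"}, {"deptno": 30, "ename": "BLAKE"}]
dept_index = {10: [{"dname": "ACCOUNTING"}], 20: [{"dname": "RESEARCH"}]}
print(nested_loop_join(emp, dept_index, "deptno"))
# only KING matches; BLAKE's deptno 30 probes an empty bucket
```

Without the index (a plain list scan inside the loop), the cost becomes outer rows times inner rows, which is why nested loops suit small driving row sets.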

Hash Join

Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows. Hash joins generally perform better than sort merge joins USE_HASH hint
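The build/probe flow described above can be sketched in a few lines. A minimal Python sketch with made-up rows; a real hash join also spills partitions to disk when the build input exceeds memory, which this sketch ignores.

```python
# Hash join sketch: build a hash table on the join key from the smaller
# input, then scan the larger input and probe the table for matches.

def hash_join(small, large, key):
    build = {}                                   # build phase (smaller input)
    for row in small:
        build.setdefault(row[key], []).append(row)
    out = []
    for row in large:                            # probe phase (larger input)
        for match in build.get(row[key], []):
            out.append({**match, **row})
    return out

dept = [{"deptno": 10, "dname": "ACCOUNTING"}, {"deptno": 20, "dname": "RESEARCH"}]
emp = [{"deptno": 20, "ename": "SMITH"}, {"deptno": 20, "ename": "SCOTT"},
       {"deptno": 10, "ename": "KING"}]
print(hash_join(dept, emp, "deptno"))
```

Each input is scanned exactly once, which is why hash joins dominate for large, unsorted data sets, and why they require an equality join condition: hashing cannot answer range predicates.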

Cartesian Product

No join conditions between tables Applying the ORDERED hint, instructs the optimizer to use a Cartesian join. By specifying a table before its join table is specified, the optimizer does a Cartesian join.
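The row-count behavior is the key point: with no join condition, the result has every pairing of rows, so cardinalities multiply. A tiny Python sketch with illustrative data:

```python
# A Cartesian product pairs every row of one input with every row of the
# other - 3 rows x 2 rows = 6 result rows, which is why an accidental
# missing join condition gets expensive fast.
from itertools import product

a = [{"id": 1}, {"id": 2}, {"id": 3}]
b = [{"c": "x"}, {"c": "y"}]
cart = [{**r1, **r2} for r1, r2 in product(a, b)]
print(len(cart))  # 6
```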

Sorts -expensive operations especially on large tables

Filter - limiting by where clause and some other conditions

AWR Introduction

Introduction

Automatic Workload Repository (AWR) is a collection of persistent system performance statistics owned by SYS. It resides in the SYSAUX tablespace. By default, snapshots are generated once every 60 minutes and retained for 7 days. AWR is enabled only when the STATISTICS_LEVEL initialization parameter is set to TYPICAL (the default) or ALL. A value of BASIC turns off all AWR statistics and metrics collection and disables all self-tuning capabilities of the database. The V$STATISTICS_LEVEL view shows the statistic component, description, and at what level of the STATISTICS_LEVEL parameter the component is enabled.
SQL> SELECT statistics_name, activation_level FROM v$statistics_level;
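The defaults above translate into a fixed number of snapshots kept on disk, which is worth a quick sanity check before changing the settings. A trivial Python sketch of the arithmetic:

```python
# Snapshots retained = retention window / snapshot interval.
# Defaults: one snapshot every 60 minutes, kept for 7 days.

def snapshots_retained(interval_min=60, retention_days=7):
    return retention_days * 24 * 60 // interval_min

print(snapshots_retained())        # 168 with the defaults
print(snapshots_retained(30, 30))  # 1440 after interval=30 min, retention=30 days
```

The second call matches the modify_snapshot_settings example later in this section (retention 43200 minutes = 30 days, interval 30 minutes); shorter intervals multiply SYSAUX space usage accordingly.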

The AWR infrastructure

An in-memory statistics collection facility that is used by Oracle Database 10g components to collect statistics. These statistics are stored in memory for performance reasons. Statistics stored in memory are accessible through dynamic performance (V$) views. AWR snapshots represent the persistent portion of the facility. The AWR snapshots are accessible through data dictionary views and Database Control. Data is owned by the SYS schema. The MMON (which stands for manageability monitor) process is responsible for filtering and transferring the memory statistics to the disk every hour. When the buffer is full, the MMNL (which stands for manageability monitor light) process is responsible to flush the information to the repository.

AWR Data

Object statistics that determine both access and usage statistics of database segments
New SQL statistics used to efficiently identify top SQL statements based on CPU, elapsed, and parse statistics
The new wait classes interface used for high-level performance analysis
The new time-model statistics based on how much time activities have spent
Some of the statistics currently collected in V$SYSSTAT and V$SESSTAT
Some of the Oracle optimizer statistics, including statistics for self-learning and tuning
Operating system statistics
The Active Session History (ASH), which represents the history of recent session activity
Metrics that provide the rate of change for certain base statistics

AWR Report

Quick Tip
1. Generate reports by executing one of following scripts:
$ORACLE_HOME/rdbms/admin/awrrpt.sql
$ORACLE_HOME/rdbms/admin/awrrpti.sql
$ORACLE_HOME/rdbms/admin/awrinput.sql

2. For text report, copy and paste the report to the following website to get recommendations: Statspack Analyzer

AWR Report Information

Report summary
Wait events statistics
SQL statistics
Instance activity statistics
I/O statistics
Buffer pool statistics
Advisory statistics
Wait statistics
Undo statistics
Latch statistics
Segment statistics
Dictionary cache statistics
Library cache statistics
SGA statistics
Resource limit statistics
init.ora parameters

Generate AWR reports


You can generate AWR reports by running SQL scripts:
1) awrrpt.sql generates an HTML or text report that displays statistics for a range of snapshot ids.
2) awrrpti.sql generates an HTML or text report that displays statistics for a range of snapshot ids on a specified database and instance.
3) awrsqrpt.sql generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot ids. Run this report to inspect or debug the performance of a SQL statement.
4) awrsqrpi.sql generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot ids on a specified database and instance.
5) awrddrpt.sql generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods.
6) awrddrpi.sql generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods on a specific database and instance.
To generate an HTML or text report for a range of snapshot ids, run the awrrpt.sql script at the SQL prompt.

Views

A few views that help while generating the AWR report:
DBA_HIST_SNAPSHOT
DBA_HIST_WR_CONTROL
DBA_HIST_BASELINE

Procedures
How to Modify the AWR SNAP SHOT SETTINGS:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,  -- minutes (= 30 days); current value retained if NULL
    interval  => 30);    -- minutes; current value retained if NULL
END;
/

Creating the Baseline:


BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline(
    start_snap_id => 10,
    end_snap_id   => 100,
    baseline_name => 'AWR First baseline');
END;
/

Dropping the AWR baseline:


BEGIN
  DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE(
    baseline_name => 'AWR First baseline');
END;
/

Dropping the AWR snaps in range:


BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range(
    low_snap_id  => 40,
    high_snap_id => 80);
END;
/

Creating SNAPSHOT Manually:


BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_snapshot();
END;
/

Workload Repository Views:

All AWR tables are stored under the SYS schema in the new special tablespace named SYSAUX, and named in the format WRM$_* and WRH$_*. WRM$_* stores metadata information such as the database being examined and the snapshots taken (M stands for "metadata.") WRH$_* holds the actual collected statistics. (H stands for "historical" ) There are several views with the prefix DBA_HIST_ built upon these tables, which can be used to write your own performance diagnosis tool. The names of the views directly relate to the table; for example, the view DBA_HIST_SYSMETRIC_SUMMARY is built upon the table WRH$_SYSMETRIC_SUMMARY.

The following workload repository views are available:
* V$ACTIVE_SESSION_HISTORY - displays the active session history (ASH), sampled every second
* V$METRIC - displays metric information
* V$METRICNAME - displays the metrics associated with each metric group
* V$METRIC_HISTORY - displays historical metrics
* V$METRICGROUP - displays all metric groups
* DBA_HIST_ACTIVE_SESS_HISTORY - displays the history contents of the active session history
* DBA_HIST_BASELINE - displays baseline information
* DBA_HIST_DATABASE_INSTANCE - displays database environment information
* DBA_HIST_SNAPSHOT - displays snapshot information
* DBA_HIST_SQL_PLAN - displays SQL execution plans
* DBA_HIST_WR_CONTROL - displays AWR settings

ADDM

Create ADDM Report in SQLPLUS


@?/rdbms/admin/addmrpt.sql

Create an Advisor Task


BEGIN
  -- Create an ADDM task.
  DBMS_ADVISOR.create_task(
    advisor_name => 'ADDM',
    task_name    => '970_1032_AWR_SNAPSHOT',
    task_desc    => 'Advisor for snapshots 970 to 1032.');

  -- Set the start and end snapshots.
  DBMS_ADVISOR.set_task_parameter(
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'START_SNAPSHOT',
    value     => 970);

  DBMS_ADVISOR.set_task_parameter(
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'END_SNAPSHOT',
    value     => 1032);

  -- Execute the task.
  DBMS_ADVISOR.execute_task(task_name => '970_1032_AWR_SNAPSHOT');
END;
/

-- Display the report.
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_report('970_1032_AWR_SNAPSHOT') AS report FROM dual;
SET PAGESIZE 24

Display ADDM Report in SQLPLUS


SET LONG 100000
SET PAGESIZE 50000
-- find the ADDM report name
select * from DBA_ADVISOR_TASKS order by created desc;
SELECT DBMS_ADVISOR.get_task_report('&reportname') AS report FROM dual;

Related Views

DBA_ADVISOR_TASKS - basic information about existing tasks
DBA_ADVISOR_LOG - status information about existing tasks
DBA_ADVISOR_FINDINGS - findings identified for an existing task
DBA_ADVISOR_RECOMMENDATIONS - recommendations for the problems identified by an existing task

Advisory Framework

Framework

Automatic Database Diagnostic Monitor (ADDM): performs a top-down instance analysis, identifies problems and potential causes, and gives recommendations for fixing the problems. ADDM can potentially call other advisors.
SQL Tuning Advisor: provides tuning advice for SQL statements.
SQL Access Advisor: deals with schema issues and determines optimal data access paths such as indexes and materialized views.
PGA Advisor: gives detailed statistics for the work areas and provides recommendations about optimal usage of PGA memory based on workload characteristics.
SGA Advisor: responsible for tuning and recommending SGA size depending on the pattern of access for the various components within the SGA.
Segment Advisor: monitors object space issues and analyzes growth trends.
Undo Advisor: suggests parameter values and the amount of additional space needed to support flashback for a specified time.

ADDM

ADDM analysis is performed every time an AWR snapshot is taken. The MMON process triggers ADDM analysis each time a snapshot is taken to do an analysis of the period corresponding to the last two snapshots. The results of the ADDM analysis are also stored in the AWR and are accessible through the dictionary views and the EM Database Control. Analysis is performed from the top down, identifying symptoms first and then refining them to reach the root cause. The goal of analysis is to reduce a single throughput metric called the DBtime. DBtime is the cumulative time spent by the database server in processing user requests, which includes wait time and CPU time. The ADDM is enabled automatically only when the STATISTICS_LEVEL parameter is set to TYPICAL or ALL.

DBA Advisory Dictionaries


DBA_ADVISOR_USAGE - usage information for each advisor
DBA_ADVISOR_RATIONALE - rationales for the recommendations
DBA_ADVISOR_ACTIONS - actions associated with recommendations
DBA_ADVISOR_RECOMMENDATIONS - task recommendations
DBA_ADVISOR_FINDINGS - findings discovered by the advisor
DBA_ADVISOR_OBJECTS - objects referenced by tasks
DBA_ADVISOR_COMMANDS - commands associated with actions
DBA_ADVISOR_PARAMETERS - task parameters
DBA_ADVISOR_LOG - task current status information
DBA_ADVISOR_TASKS - global information about the task
DBA_ADVISOR_DEFINITIONS - properties of the advisors

Tips

Show the number of ADDM findings for the last 24 hours by category:
SQL> SELECT type, count(*) FROM dba_advisor_findings NATURAL JOIN dba_advisor_tasks WHERE created between sysdate - 1 and sysdate GROUP BY type;

You can invoke the ADDM report using SQL*Plus by running the $ORACLE_HOME/rdbms/admin/addmrpt.sql script. The output is saved as a text file. Look at the recommendations in the order they're listed in the RANK column.
SQL> SELECT distinct message FROM dba_advisor_recommendations JOIN dba_advisor_findings USING (finding_id, task_id) WHERE rank = 0;

Setting Alert

Building Your Own Alert Mechanism


You can use the following steps to set up a threshold and alert mechanism if you do not want to use the EM Database Control:

1. Query V$METRICNAME to identify the metrics in which you are interested.
2. Set warning and critical thresholds using the DBMS_SERVER_ALERT.SET_THRESHOLD procedure.
3. Subscribe to the ALERT_QUE AQ using the DBMS_AQADM.ADD_SUBSCRIBER procedure.
4. Create an agent for the subscribing user of the alerts using the DBMS_AQADM.CREATE_AQ_AGENT procedure.
5. Associate the user with the AQ agent using the DBMS_AQADM.ENABLE_DB_ACCESS procedure.
6. Grant the DEQUEUE privilege using the DBMS_AQADM.GRANT_QUEUE_PRIVILEGE procedure.
7. Optionally register for the alert enqueue notification using the DBMS_AQ.REGISTER procedure.
8. Configure e-mail using the DBMS_AQELM.SET* procedures.
9. Dequeue the alert using the DBMS_AQ.DEQUEUE procedure.
10. After the message has been dequeued, use DBMS_SERVER_ALERT.EXPAND_MESSAGE to expand the text of the message.
SQL Tuning Advisor

Automatic Tuning Optimizer (ATO) to perform the following four specific types of analysis:

Statistics Analysis: ATO checks each query object for missing or stale statistics and makes a recommendation to gather relevant statistics. It also collects auxiliary information to supply missing statistics or correct stale statistics in case the recommendations are not implemented.

SQL Profiling: ATO verifies its own estimates and collects auxiliary information to remove estimation errors. It also collects auxiliary information in the form of customized optimizer settings, such as first rows and all rows, based on past execution history of the SQL statement. It builds a SQL Profile using the auxiliary information and makes a recommendation to create it. When a SQL Profile is created, it enables the query optimizer, under normal mode, to generate a well-tuned plan.

Access Path Analysis: ATO explores whether a new index can be used to significantly improve access to each table in the query, and when appropriate makes recommendations to create such indexes.

SQL Structure Analysis: ATO tries to identify SQL statements that lend themselves to bad plans, and makes relevant suggestions to restructure them. The suggested restructuring can involve syntactic as well as semantic changes to the SQL code.

SQL Tuning Advisor takes one or more SQL statements as input. The input can come from different sources:

High-load SQL statements identified by ADDM
SQL statements that are currently in the cursor cache

SQL statements from the Automatic Workload Repository (AWR): a user can select any set of SQL statements captured by AWR, using snapshots or baselines.
Custom workload: a user can create a custom workload consisting of statements of interest. These may be statements that are not in the cursor cache and are not high-load enough to be captured by ADDM or AWR; for such statements, a user can create a custom workload and tune it using the advisor.

SQL Profile
How to export and then import a SQL Profile:

1. Create a staging table through a call to the CREATE_STGTAB_SQLPROF procedure.
exec DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'STAGING_TABLE', schema_name => 'JFV');

2. Call the PACK_STGTAB_SQLPROF procedure one or more times to write existing SQL profile data into the staging table.
exec DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name => 'SP_DINA', staging_table_name => 'STAGING_TABLE');

3. Move the staging table from your production environment to a test machine by the means of choice - for example, a Data Pump job or a database link.

4. On the test machine, call the UNPACK_STGTAB_SQLPROF procedure to create SQL Profiles on the new system from the profile data in the staging table.
EXEC DBMS_SQLTUNE.REMAP_STGTAB_SQLPROF(old_profile_name => 'SP_DINA', new_profile_name => 'SP_NEW_DINA', staging_table_name => 'STAGING_TABLE');
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(replace => TRUE, staging_table_name => 'STAGING_TABLE');

SQL Access Advisor

Input to the SQL Access Advisor can come from one or all of the following sources:

Current SQL statements from V$SQL (the SQL cache)
A user-specified list of SQL statements
The name of a schema; in a data warehouse environment, this is typically a dimensional model in a single schema
SQL Tuning Sets (STSs) previously saved in the workload repository

Given the workload, the SQL Access Advisor performs an analysis that includes all the following tasks:

Considers whether only indexes, only materialized views, or a combination of both would provide the most benefit

Balances storage and maintenance costs against the performance gains when recommending new indexes or materialized views
Generates DROP recommendations to drop an unused index or materialized view if a full workload is specified
Optimizes materialized views to leverage query rewrite and fast refresh where possible
Recommends materialized view logs to facilitate fast refresh
Recommends combining multiple indexes into a single index where appropriate

DBMS_ADVISOR
Using the procedures of the DBMS_ADVISOR package, a typical tuning advisor session comprises the following steps:
1. Create an advisor task using the DBMS_ADVISOR.CREATE_TASK procedure. The advisor task is a data area in the advisor repository that manages the tuning effort. An existing task can serve as a template for another task.
2. Use the DBMS_ADVISOR.SET_TASK_PARAMETER procedure to set parameters that control the advisor's behavior. Typical parameters are TARGET_OBJECTS, TIME_WINDOW, and TIME_LIMIT.
3. Perform the analysis using the DBMS_ADVISOR.EXECUTE_TASK procedure. You can interrupt the analysis at any time to review the results up to that point. You can then resume the interrupted analysis for more recommendations, or adjust the task parameters and resume execution.
4. Review the results using the DBMS_ADVISOR.GET_TASK_REPORT procedure. You can also view the results using dictionary views.

DBMS_SQLTUNE
A typical session with the SQL Tuning Advisor uses DBMS_SQLTUNE procedures and functions as follows:
1. Create a tuning task with CREATE_TUNING_TASK.
2. Execute the tuning task using EXECUTE_TUNING_TASK.
3. Review the results of the tuning task by calling the function REPORT_TUNING_TASK.
4. Accept the SQL profile generated by the tuning task using ACCEPT_SQL_PROFILE.
ALTER_SQL_PROFILE alters specific attributes of an existing SQL profile object:
exec dbms_sqltune.alter_sql_profile('some_sqlprofile', 'STATUS', 'DISABLED');
exec dbms_sqltune.alter_sql_profile('some_sqlprofile', 'DESCRIPTION', 'this is a test sql profile');
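The DBMS_ADVISOR steps above can be sketched as an anonymous PL/SQL block. This is a hedged example: the task name is illustrative, and the SQL Access Advisor is used here only as a concrete advisor to pass to CREATE_TASK.

```sql
-- Sketch of a DBMS_ADVISOR session; task name and advisor choice are illustrative
DECLARE
  l_task_name VARCHAR2(30) := 'MY_ADVISOR_TASK';
BEGIN
  -- 1. Create the advisor task (here for the SQL Access Advisor)
  DBMS_ADVISOR.CREATE_TASK(
    advisor_name => 'SQL Access Advisor',
    task_name    => l_task_name);

  -- 2. Set a parameter controlling the analysis (10-minute limit)
  DBMS_ADVISOR.SET_TASK_PARAMETER(l_task_name, 'TIME_LIMIT', 600);

  -- 3. Perform the analysis
  DBMS_ADVISOR.EXECUTE_TASK(l_task_name);
END;
/
-- 4. Review the results
SELECT DBMS_ADVISOR.GET_TASK_REPORT('MY_ADVISOR_TASK') FROM dual;
```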

Oracle Memory Components

SGA:
- Shared pool (for SQL and PL/SQL execution): includes the library cache and data dictionary cache
- Large pool (for large allocations such as RMAN backup buffers)
- Java pool (for Java objects and other Java execution memory)
- Buffer cache (for caching disk blocks)
- Streams pool
- Redo log buffer
PGA: process-private memory, such as memory used for sorting and hash joins

Memory Management

Automatic Shared Memory Management (10g) Automatic Memory Management(AMM) (11g)

Memory Initialization Parameters


Parameter           Description                                                          Dynamic?
------------------  -------------------------------------------------------------------  ------------
SGA_MAX_SIZE        Maximum size of the SGA for the lifetime of the instance              No
SGA_TARGET          Total size of all SGA components                                      ALTER SYSTEM
DB_CACHE_SIZE       Size of the DEFAULT buffer pool for buffers with the primary block
                    size (the block size defined by DB_BLOCK_SIZE)                        ALTER SYSTEM
DB_nK_CACHE_SIZE    Size of the cache for the nK buffers                                  ALTER SYSTEM
LOG_BUFFER          Number of bytes allocated for the redo log buffer                     No
SHARED_POOL_SIZE    Size in bytes of the area devoted to shared SQL and PL/SQL            ALTER SYSTEM
LARGE_POOL_SIZE     Size of the large pool; the default is 0                              ALTER SYSTEM
JAVA_POOL_SIZE      Size of the Java pool                                                 ALTER SYSTEM
STREAMS_POOL_SIZE   Size of the Streams pool                                              ALTER SYSTEM

Commands and Scripts

Check SGA allocation: SQL> select component, current_size/1024/1024 "CURRENT_SIZE", min_size/1024/1024, user_specified_size/1024/1024 "USER_SPECIFIED_SIZE", last_oper_type "TYPE" from v$sga_dynamic_components; SQL> SELECT f.pool, f.name, s.sgasize, f.bytes, ROUND(f.bytes/s.sgasize*100, 2) "% Free" FROM (SELECT SUM(bytes) sgasize, pool FROM v$sgastat GROUP BY pool) s, v$sgastat f WHERE f.name = 'free memory' AND f.pool = s.pool; SQL> sho sga

Read and Write Concurrency Rates


Component                          Read Rate  Write Rate  Concurrency
---------------------------------  ---------  ----------  -----------
Archive logs                       High       High        Low
Redo logs                          High       High        Low
Undo segment tablespaces           Low        High        High
TEMP tablespaces                   Low        Low         High
Index tablespaces                  Low        Medium      High
Data tablespaces                   High       Medium      High
Application log and output files   Low        Medium      High
Binaries (shared)                  Low        Low         High

Oracle File I/O Statistics


select df.name "File Name",
       fs.phyrds "Physical Reads",
       round((fs.phyrds / pd.phys_reads) * 100, 2) "Read %",
       fs.phywrts "Physical Writes",
       round(fs.phywrts * 100 / pd.phys_wrts, 2) "Write %",
       fs.phyblkrd + fs.phyblkwrt "Total Block I/Os"
from   (select sum(phyrds) phys_reads, sum(phywrts) phys_wrts
        from   v$filestat) pd,
       v$datafile df,
       v$filestat fs
where  df.file# = fs.file#
order by fs.phyblkrd + fs.phyblkwrt desc;

File Name - datafile name
Physical Reads - number of physical reads
Read % - percentage of physical reads
Physical Writes - number of physical writes
Write % - percentage of physical writes
Total Block I/Os - number of I/O blocks

Tracing a User Session


Methods for tracing a user session:
- dbms_support
- dbms_system
- dbms_monitor (10g new feature)
- Oracle event 10046 trace
- oradebug (10046)
- Logon trigger

Trace the current session:
Enable:  SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
Disable: SQL> ALTER SESSION SET EVENTS '10046 trace name context off';

System-wide tracing:
Enable:  SQL> alter system set events '10046 trace name context forever, level 12';
Disable: SQL> alter system set events '10046 trace name context off';

Trace levels:
Level 1  - the same as a normal set sql_trace=true
Level 4  - includes values of bind variables
Level 8  - includes wait events
Level 12 - includes both bind variables and wait events

Logon trigger: there may be situations where it is necessary to trace the activity of a specific user; in that case a logon trigger can be used. An example:

CREATE OR REPLACE TRIGGER sys.set_trace
AFTER LOGON ON DATABASE
WHEN (USER LIKE '&USERNAME')
BEGIN
  EXECUTE IMMEDIATE 'alter session set statistics_level=ALL';
  EXECUTE IMMEDIATE 'alter session set max_dump_file_size=UNLIMITED';
  EXECUTE IMMEDIATE 'alter session set events ''10046 trace name context forever, level 12''';
END set_trace;
/

SQL*Plus Autotrace
1. Create the plan table:     SQL> @?/rdbms/admin/catplan.sql
2. Create the PLUSTRACE role: SQL> @?/sqlplus/admin/plustrce.sql
3. Grant the PLUSTRACE role:  SQL> GRANT plustrace TO <user_name>;

4. AUTOTRACE syntax:

SET AUTOTRACE OFF           - No report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN    - The report shows only the optimizer execution path.
SET AUTOTRACE ON STATISTICS - The report shows only the SQL statement execution statistics.
SET AUTOTRACE ON            - The report includes both the optimizer execution path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY     - Like SET AUTOTRACE ON, but suppresses the printing of the query output.
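A short SQL*Plus session tying these settings together. This is only a sketch: the emp table and the predicate are illustrative, and it assumes the PLUSTRACE setup above has been completed for the connected user.

```sql
-- As a user granted the PLUSTRACE role:
SET AUTOTRACE TRACEONLY          -- plan + statistics, suppress the query output
SELECT * FROM emp WHERE deptno = 10;

SET AUTOTRACE ON EXPLAIN         -- show rows plus the execution plan only
SELECT COUNT(*) FROM emp;

SET AUTOTRACE OFF                -- back to the default
```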

Check Session History

Query 1
SELECT * FROM v$active_session_history sess, all_users us WHERE TRUNC (sample_time, 'DD') = '16-MAY-2008' AND us.user_id = sess.user_id AND us.username = '&user'; -- and sess.session_id=

Query 2
SELECT * FROM dba_hist_active_sess_history ORDER BY sample_time DESC;

Query 3
Find out history SQL statements which has blocking session
SELECT * FROM v$active_session_history sess, all_users us, v$sqltext text WHERE to_char (sample_time, 'DD-Mon-YYYY HH24:MI:SS') >= '07-Jun-2010 01:55:00' and to_char (sample_time, 'DD-Mon-YYYY HH24:MI:SS') <= '07-Jun-2010 02:05:00' AND us.user_id = sess.user_id AND us.username not in ('SYS', 'SYSTEM','DBSNMP') and text.sql_id=SESS.SQL_ID and sess.BLOCKING_SESSION_STATUS not in ('NO HOLDER','NOT IN WAIT' )

Find out history SQL statements


SELECT * FROM v$active_session_history sess, all_users us, v$sqltext text WHERE to_char (sample_time, 'DD-Mon-YYYY HH24:MI:SS') >= '07-Jun-2010 01:55:00' and to_char (sample_time, 'DD-Mon-YYYY HH24:MI:SS') <= '07-Jun-2010 02:05:00' --TRUNC (sample_time, 'DD') = '07-JUN-2010' AND us.user_id = sess.user_id AND us.username not in ('SYS', 'SYSTEM','DBSNMP') and text.sql_id=SESS.SQL_ID

Query 4
All blocking locks for last 7 days.
SELECT distinct a.sql_id ,a.inst_id,a.blocking_session,a.blocking_session_serial#,a.user_id,s.sql_text ,a.module FROM GV$ACTIVE_SESSION_HISTORY a ,gv$sql s where a.sql_id=s.sql_id and blocking_session is not null and a.user_id <> 0 -- exclude SYS user and a.sample_time > sysdate - 7

select * from ( SELECT a.sql_id , COUNT(*) OVER (PARTITION BY a.blocking_session,a.user_id ,a.program) cpt, ROW_NUMBER() OVER (PARTITION BY a.blocking_session,a.user_id ,a.program order by blocking_session,a.user_id ,a.program ) rn, a.blocking_session,a.user_id ,a.program, s.sql_text FROM sys.WRH$_ACTIVE_SESSION_HISTORY a ,sys.wrh$_sqltext s where a.sql_id=s.sql_id and blocking_session_serial# <> 0 and a.user_id <> 0 and a.sample_time > sysdate - 10 ) where rn = 1

Set Parameters in a Session
exec dbms_system.set_int_param_in_session(sid, serial#, '&param_name', &number);
exec dbms_system.set_bool_param_in_session(sid, serial#, '&param_name', &bool);

UNDO_RETENTION

Introduction

Proactive tuning:
- Undo retention is tuned for the longest-running query with an AUTOEXTEND undo tablespace.
- Undo retention is tuned to the best possible value with a fixed-size undo tablespace.
- Query duration information is collected every 30 seconds.
Reactive tuning:
- Undo retention is gradually lowered under space pressure.
- The oldest unexpired extents are used first.
- Undo retention never goes below either UNDO_RETENTION or 15 minutes.
Automatic undo retention tuning is enabled by default.

UNDO_RETENTION

The default for this parameter is 900 seconds. If UNDO_RETENTION is set to zero or if no value is specified, Oracle 10g automatically tunes the undo retention for the current undo tablespace using 900 as the minimum value. If you set UNDO_RETENTION to a value other than zero, Oracle 10g autotunes the undo retention using the specified value as the minimum
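A quick way to see what the instance has actually tuned retention to. The view and columns are standard (V$UNDOSTAT reports in 10-minute intervals); the 3600-second setting is just an example value.

```sql
-- Set the minimum undo retention to 1 hour (example value)
ALTER SYSTEM SET UNDO_RETENTION = 3600;

-- Check the auto-tuned retention (in seconds) per statistics interval
SELECT begin_time, end_time, tuned_undoretention
FROM   v$undostat
ORDER  BY begin_time;
```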

RETENTION GUARANTEE

Use RETENTION GUARANTEE clause to guarantee the undo retention. This means that the database will make certain that undo will always be available for the specified undo retention period. You can use the RETENTION GUARANTEE clause when creating the undo tablespace (CREATE UNDO TABLESPACE or CREATE DATABASE) or later using the ALTER TABLESPACE statement. To turn the retention guarantee off, use the RETENTION NOGUARANTEE clause.
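The clauses described above look like the following in practice. The tablespace name and datafile path are illustrative, not from the original text.

```sql
-- Guarantee undo retention at tablespace creation time
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/db/undotbs2_01.dbf' SIZE 500M
  RETENTION GUARANTEE;

-- Or toggle it on an existing undo tablespace
ALTER TABLESPACE undotbs2 RETENTION GUARANTEE;
ALTER TABLESPACE undotbs2 RETENTION NOGUARANTEE;

-- Check the current setting
SELECT tablespace_name, retention
FROM   dba_tablespaces
WHERE  contents = 'UNDO';
```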

Automatic Checkpoint Tuning

MTTR

The Mean Time to Recover Advisor (the MTTR Advisor) performs automatic checkpoint tuning; you do not need to set any checkpoint-related parameters. Setting the FAST_START_MTTR_TARGET parameter to a nonzero value, or leaving it unset, enables automatic checkpoint tuning (the STATISTICS_LEVEL parameter must be set to TYPICAL or ALL). Explicitly setting FAST_START_MTTR_TARGET to zero disables automatic checkpoint tuning. After the database runs a typical workload for some time, the V$MTTR_TARGET_ADVICE dictionary view shows advisory information and an estimate of the number of additional I/O operations that would occur under different FAST_START_MTTR_TARGET values. A column OPTIMAL_LOGFILE_SIZE in V$INSTANCE_RECOVERY shows the redo log file size in megabytes considered optimal based on the current FAST_START_MTTR_TARGET setting.
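The two views mentioned above can be queried as follows; the column names are standard, and this is only a minimal sketch of the advisory output.

```sql
-- Advisory: estimated total I/O at candidate MTTR targets (seconds)
SELECT mttr_target_for_estimate, estd_total_ios
FROM   v$mttr_target_advice;

-- Current estimated recovery time and suggested redo log size (MB)
SELECT estimated_mttr, optimal_logfile_size
FROM   v$instance_recovery;
```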

Checkpoint Overview
Check-pointing is an important Oracle activity which records the highest system change number (SCN) such that all data blocks at or below that SCN are known to be written out to the data files. If there is a failure and then subsequent cache recovery, only the redo records containing changes at SCNs higher than the checkpoint need to be applied during recovery. Instance and crash recovery occur in two steps: cache recovery followed by transaction recovery. During the cache recovery phase, also known as the rolling forward stage, Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The work required for cache recovery processing is proportional to the rate of change to the database and the time between checkpoints.

Trace Analyzer (Metalink Note 224270.1)
1. Download Trace Analyzer: trca.zip, trca_sample.zip
2. Install Trace Analyzer:
   SQL> @/trca/install/tacreate
   Enter value for default_tablespace: TOOLS01
   Using TOOLS01 for the default tablespace
   Enter value for temporary_tablespace: TEMP
   Using TEMP for the temporary tablespace
3. Trace the sessions of interest (for example from a helper script such as sess.sql):
   SQL> exec sys.dbms_monitor.session_trace_enable(&sid, &serial, TRUE, FALSE);
   SQL> exec sys.dbms_monitor.session_trace_disable(&sid, &serial);
4. Run Trace Analyzer against the trace file:
   % sqlplus trcanlzr
   SQL> start /trca/run/trcanlzr.sql <file_name>
   SQL> start trcanlzr.sql largesql.trc       <== your trace file
   SQL> start trcanlzr.sql control_file.txt   <== your text file

GATHER_DATABASE_STATS Procedures:
Statistics for all objects in a database:
SQL> exec dbms_stats.gather_database_stats;
SQL> exec dbms_stats.gather_database_stats(estimate_percent => NULL, method_opt => 'FOR ALL COLUMNS SIZE AUTO', granularity => 'ALL', cascade => TRUE, options => 'GATHER AUTO');

GATHER_DICTIONARY_STATS Procedure:
Statistics for all dictionary objects:
-- Check how many SYS tables have statistics:
SQL> select trunc(last_analyzed), count(*) from dba_tables where owner='SYS' group by trunc(last_analyzed);
SQL> exec dbms_stats.gather_dictionary_stats;
SQL> select trunc(last_analyzed), count(*) from dba_tables where owner='SYS' group by trunc(last_analyzed);

GATHER_FIXED_OBJECTS_STATS Procedure

SQL> exec dbms_stats.gather_fixed_objects_stats ;

GATHER_INDEX_STATS Procedure:

Index statistics:
SQL> exec dbms_stats.gather_index_stats('schema_name', 'index_name');

GATHER_SCHEMA_STATS Procedures:

statistics for all objects in a schema SQL> exec dbms_stats.gather_schema_stats('schema_name', cascade=>TRUE );

GATHER_SYSTEM_STATS Procedure

link

GATHER_TABLE_STATS Procedure:

Table, column, and associated index statistics:
SQL> select * from dba_tab_modifications; -- check tables modified since statistics were last gathered
SQL> exec dbms_stats.gather_table_stats('schema_name', 'table_name');
SQL> select owner, table_name, last_analyzed from dba_tables order by 1, 2, 3;

Automated scheduler job

Optimizer statistics are automatically gathered with the job GATHER_STATS_JOB. This job gathers statistics on all objects in the database which have: * Missing statistics * Stale statistics This job is created automatically at database creation time and is managed by the Scheduler. The Scheduler runs this job when the maintenance window is opened. By default, the maintenance window opens every night from 10 P.M. to 6 A.M. and all day on weekends. Query: SQL> SELECT owner, job_name,enabled FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';

Enable automatic statistics collection: SQL> exec dbms_scheduler.enable('GATHER_STATS_JOB');

Disable automatic statistics collection: SQL> exec dbms_scheduler.disable('GATHER_STATS_JOB');

DBMS_STATS vs ANALYZE

DBMS_STATS

- DBMS_STATS can be run in parallel.
- Monitoring can be enabled, and stale statistics can be collected for changed rows, using DBMS_STATS.
- Statistics can be imported, exported, or set directly with DBMS_STATS.
- It is easier to automate with DBMS_STATS (it is procedural; ANALYZE is just a command).
- DBMS_STATS is the stated, preferred method of collecting statistics.
- DBMS_STATS can analyze external tables; ANALYZE cannot.
- DBMS_STATS gathers statistics only for cost-based optimization; it does not gather other statistics. For example, the table statistics gathered by DBMS_STATS include the number of rows, the number of blocks currently containing data, and the average row length, but not the number of chained rows, average free space, or number of unused data blocks.
- DBMS_STATS can gather system statistics.

ANALYZE

ANALYZE calculates global statistics for partitioned tables and indexes instead of gathering them directly, which can lead to inaccuracies for some statistics, such as the number of distinct values; DBMS_STATS does not have this problem. Most importantly, in the future ANALYZE will not collect statistics needed by the cost-based optimizer.

STATISTICS_LEVEL The values for STATISTICS_LEVEL are as follows:

BASIC - Disables monitoring; as a result, Automatic Workload Repository (AWR) snapshots are disabled, as is ADDM. BASIC may be advisable only in a DSS environment where queries rarely vary day to day, the system has already been thoroughly tuned, the size of the database changes infrequently, the database is static, and the fastest possible execution speed of all queries is the top priority.
TYPICAL - Collects most statistics required for database self-management and delivers the best overall performance.
ALL - Collects additional statistics such as timed operating system statistics and plan execution statistics, but incurs a significant amount of overhead beyond the TYPICAL level that may have a noticeable impact on user transactions.
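The parameter can be inspected and changed dynamically; V$STATISTICS_LEVEL shows which statistics and advisories each level activates. A minimal sketch:

```sql
-- Show the current setting
SHOW PARAMETER statistics_level

-- STATISTICS_LEVEL is dynamic at both system and session scope
ALTER SYSTEM SET statistics_level = TYPICAL;
ALTER SESSION SET statistics_level = ALL;

-- See which statistics/advisories each level activates
SELECT statistics_name, activation_level, system_status
FROM   v$statistics_level
ORDER  BY statistics_name;
```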

Locks

Scripts:
1. Display lock wait-for in hierarchy: the lock information to the right of session ID describes the lock that the session is waiting for (not the lock it is holding). (SQL> @?/rdbms/admin/catblock) SQL> @?/rdbms/admin/utllockt

2. This script shows actual DML-Locks (incl. Table-Name), WAIT = YES means that users are waiting for a lock SQL> @show_dml_locks.sql SQL> @show_all_lock.sql 3. This script shows users waiting for a lock, the locker and the SQL-Command they are waiting for a lock, the osuser, schema and PIDs are shown as well. SQL> @show_blocking_sessions.sql 4. This script shows who's blocking who: SQL> @whoblock.sql

Modes of Locking

Exclusive Locks

select ... from table ...                    => no lock
insert into table ...                        => RX
update table ...                             => RX
delete from table ...                        => RX
lock table ... in row exclusive mode         => RX
lock table ... in share row exclusive mode   => SRX
lock table ... in exclusive mode             => X

Share Locks

select ... from table ... for update of ...  => RS
lock table ... in row share mode             => RS
lock table ... in share mode                 => S

V$ Views for locks


v$lock

LMODE: lock mode in which the session holds the lock:
  0 - none
  1 - null (NULL)
  2 - row-S (SS)
  3 - row-X (SX)
  4 - share (S)
  5 - S/Row-X (SSX)
  6 - exclusive (X)

REQUEST: lock mode in which the process requests the lock (same codes as LMODE):
  0 - none
  1 - null (NULL)
  2 - row-S (SS)
  3 - row-X (SX)
  4 - share (S)
  5 - S/Row-X (SSX)
  6 - exclusive (X)
v$locked_objects

select count(*) from v$locked_object;
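A slightly more informative variant of the query above joins V$LOCKED_OBJECT to DBA_OBJECTS and V$SESSION to show who holds what. The views and columns are standard; this is only a sketch.

```sql
-- Which sessions hold DML locks on which objects
SELECT s.sid, s.serial#, s.username,
       o.owner, o.object_name, lo.locked_mode
FROM   v$locked_object lo,
       dba_objects     o,
       v$session       s
WHERE  lo.object_id  = o.object_id
AND    lo.session_id = s.sid;
```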

Commands
SQL> lock table <table_name> in <lock_mode> mode;
SQL> alter table xxx disable table lock;
SQL> alter system kill session 'sid,serial#' immediate;

Configuring for Materialized Views


Oracle Tips by Burleson Consulting

Oracle materialized views are one of the single most important SQL tuning tools and they are a true silver bullet, allowing you to pre-join complex views and pre-compute summaries for superfast response time. I've devoted over a hundred pages to SQL tuning with Oracle materialized views in my book "Oracle Tuning: The Definitive Reference", and also see "Oracle Replication" a deeply-technical book on creating and managing materialized views. Also see my notes now how to identify opportunities for Oracle Materialized Views.

Introduction to Oracle materialized views Finding Oracle materialized view opportunities

Introduction to Oracle materialized views Oracle materialized views perform miracles in our goal to reduce repetitive I/O. You want tips on tuning materialized views internal performance, see:

Oracle materialized views and partitioning Materialized Views Tuning Materialized Views Refreshing Performance

Oracle materialized views were first introduced in Oracle8 and were enhanced in later releases to allow very fast dynamic creation of complex objects. Oracle materialized views allow sub-second response times by pre-computing aggregate information, and Oracle dynamically rewrites SQL queries to reference existing Oracle materialized views. In this article, we continue our discussion of Oracle materialized views and discuss how to set up and configure your Oracle database to use this powerful feature. We begin with a look at the initialization parameters and continue with details of the effective management and use of Oracle materialized views. Without Oracle materialized views, you may see unnecessary repeated large-table full-table scans as summaries are computed over and over.

Prerequisites for using Oracle materialized views


In order to use Oracle materialized views, the Oracle DBA must set special initialization parameters and grant special authority to the users of Oracle materialized views. You start by setting initialization parameters to enable the mechanisms for Oracle materialized views and query rewrite: query_rewrite_enabled = true, and query_rewrite_integrity, which takes one of the following values:

trusted: assumes that the Oracle materialized view is current
enforced (default): always goes to the Oracle materialized view with fresh data
stale_tolerated: uses the Oracle materialized view with both stale and fresh data

Next, you must grant several system privileges to all users who will be using the Oracle materialized views. In many cases, the Oracle DBA will encapsulate these grant statements into a single role and grant the role to the end users:
grant query rewrite to scott; grant create materialized view to scott; alter session set query_rewrite_enabled = true;

Invoking SQL query rewrite


Once Oracle materialized views have been enabled, Oracle provides several methods for invoking query rewrite. Query rewrite is generally automatic, but you can explicitly enable or disable it by using ALTER SESSION, ALTER SYSTEM, or SQL hints:

ALTER {SESSION|SYSTEM} DISABLE QUERY REWRITE
SELECT /*+ REWRITE(mv1) */ ...

Refreshing materialized views


In Oracle, if you specify REFRESH FAST for a single-table aggregate Oracle materialized view, you must have created a materialized view log for the underlying table, or the refresh command will fail. When creating an Oracle materialized view, you have the option of specifying whether the refresh occurs manually (ON DEMAND) or automatically (ON COMMIT, DBMS_JOB). To use the fast warehouse refresh facility, you must specify the ON DEMAND mode. To refresh the Oracle materialized view, call one of the procedures in DBMS_MVIEW. The DBMS_MVIEW package provides three types of refresh operations:

DBMS_MVIEW.REFRESH: Refreshes one or more Oracle materialized views DBMS_MVIEW.REFRESH_ALL_MVIEWS: Refreshes all Oracle materialized views DBMS_MVIEW.REFRESH_DEPENDENT: Refreshes all table-based Oracle materialized views
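Hedged examples of each call follow; the MVIEW and table names are illustrative. Note that REFRESH_ALL_MVIEWS and REFRESH_DEPENDENT return a failure count through an OUT parameter, so they are called from PL/SQL rather than with a bare EXECUTE:

```sql
-- Refresh a single MVIEW (name is illustrative)
EXECUTE DBMS_MVIEW.REFRESH('emp_dept_sum');

-- The other two procedures take an OUT failure count
DECLARE
  l_failures BINARY_INTEGER;
BEGIN
  DBMS_MVIEW.REFRESH_ALL_MVIEWS(number_of_failures => l_failures);
  DBMS_MVIEW.REFRESH_DEPENDENT(number_of_failures => l_failures,
                               list               => 'EMP');
END;
/
```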

Manual complete refresh


A complete refresh occurs when the Oracle materialized view is initially defined, unless it references a prebuilt table, and a complete refresh may be requested at any time during the life of the Oracle materialized view. Because the refresh involves reading the detail table to compute the results for the Oracle materialized view, this can be a very time-consuming process, especially if huge amounts of data need to be read and processed.

Manual fast (incremental) refresh


If you specify REFRESH FAST (which means that only deltas performed by UPDATE, INSERT, DELETE on the base tables will be refreshed), Oracle performs further verification of the query definition to ensure that fast refresh can always be performed if any of the detail tables change. These additional checks include the following:

An Oracle materialized view log must be present for each detail table. The RowIDs of all the detail tables must appear in the SELECT list of the MVIEW query definition. If there are outer joins, unique constraints must be placed on the join columns of the inner table.

You can use the DBMS_MVIEW package to manually invoke either a fast refresh or a complete refresh, where F equals Fast Refresh and C equals Complete Refresh:
EXECUTE DBMS_MVIEW.REFRESH('emp_dept_sum','F');

Automatic fast refresh of materialized views


The automatic fast refresh feature lets you refresh a snapshot with DBMS_JOB at a short interval according to the snapshot log. It is also possible to refresh automatically on the next COMMIT performed at the master table. This ON COMMIT refreshing can be used with materialized views on single-table aggregates and materialized views containing joins only. ON COMMIT MVIEW logs must be built as ROWID logs, not as primary-key logs. For performance reasons, it is best to create indexes on the ROWIDs of the MVIEW. Note that the underlying table for the MVIEW can be prebuilt. Below is an example of an Oracle materialized view with an ON COMMIT refresh.

CREATE MATERIALIZED VIEW empdep
ON PREBUILT TABLE
REFRESH FAST ON COMMIT
ENABLE QUERY REWRITE
AS
SELECT empno, ename, dname, loc,
       e.rowid emp_rowid, d.rowid dep_rowid
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;

Creating an Oracle materialized view


To see all the steps in the creation of a materialized view, let's take it one step at a time. The code for each step is shown here:

Step 1 - Set the initialization parameters and bounce the database:
optimizer_mode = choose, first_rows, or all_rows
job_queue_interval = 3600
job_queue_processes = 1
query_rewrite_enabled = true
query_rewrite_integrity = enforced
compatible = 8.1.5.0.0 (or greater)

Step 2 - Create the materialized view. Here, we specify that the materialized view will be refreshed every two hours with the refresh fast option. (Instead of using DBMS_MVIEW, you can automatically refresh the MVIEW (snapshot) using Oracle DBMS_JOB management.)
CREATE MATERIALIZED VIEW emp_sum
PCTFREE 5 PCTUSED 60
NOLOGGING PARALLEL 5
TABLESPACE users
STORAGE (INITIAL 50K NEXT 50K)
USING INDEX STORAGE (INITIAL 25K NEXT 25K)
REFRESH FAST
START WITH SYSDATE
NEXT SYSDATE + 1/12
ENABLE QUERY REWRITE
AS
SELECT deptno, job, SUM(sal)
FROM   emp
GROUP BY deptno, job;

Step 3 - Create the optimizer statistics and refresh the materialized view:
execute dbms_utility.analyze_schema('SCOTT','ESTIMATE');
execute dbms_mview.refresh('emp_sum');

Step 4 - Test the materialized view:
set autotrace on explain
SELECT deptno, job, SUM(sal)
FROM   emp
GROUP BY deptno, job;

Execution Plan
----------------------------------
0     SELECT STATEMENT Optimizer=CHOOSE
1   0   TABLE ACCESS (FULL) OF 'EMP_SUM'

Step 5 - Create the MVIEW log(s):
CREATE MATERIALIZED VIEW LOG ON emp_sum WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON dept WITH ROWID;

Step 6 - Execute a manual complete refresh:
EXECUTE DBMS_MVIEW.REFRESH('emp_sum');

Monitoring materialized views


Oracle provides information in the data dictionary to monitor the behavior of Oracle materialized views. When you're monitoring Oracle materialized views, it's critical that you check the refresh interval in the dba_jobs view; a query against dba_jobs shows the generated job status for Oracle materialized views.
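The article's original query was not preserved in this copy; the following is a sketch of such a check against dba_jobs. The LIKE filter on the job text is an assumption about how the generated refresh jobs are named (they typically call dbms_refresh).

```sql
-- List scheduled MVIEW refresh jobs and their next run time
SELECT job, what, last_date, next_date, interval, broken
FROM   dba_jobs
WHERE  lower(what) LIKE '%dbms_refresh%'
ORDER  BY job;
```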

Conclusion
Oracle materialized views are quite complex in nature and require a significant understanding to be used effectively. In this article, I covered the required set-up methods and the steps for creating Oracle materialized views and appropriate refresh intervals

Types of Histogram

Height-balanced histograms: the column values are divided into bands so that each band contains approximately the same number of rows. The useful information that the histogram provides is where in the range of values the endpoints fall. (Figures omitted: height-balanced histogram with uniform distribution; height-balanced histogram with non-uniform distribution.)

Frequency histograms: each value of the column corresponds to a single bucket of the histogram. Each bucket contains the number of occurrences of that single value. Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified. Query for the type of histogram: SQL> select owner, table_name, column_name, num_distinct, num_buckets, histogram from DBA_TAB_COL_STATISTICS order by 1,2,3

Generating Histogram and Disabling Histogram

DBMS_STATS

Syntax: dbms_stats.gather_table_stats('<schema_name>', '<table_name>', method_opt => 'FOR COLUMNS SIZE <integer> <column_name>')
Examples:
SQL> exec dbms_stats.gather_schema_stats(ownname => '&owner', estimate_percent => 10, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL> exec dbms_stats.gather_table_stats(ownname => '&owner', tabname => '&tab_name', method_opt => 'FOR COLUMNS SIZE 10 &col_name');

Histograms are specified using the METHOD_OPT argument of the DBMS_STATS gathering procedures. Oracle Corporation recommends setting METHOD_OPT to 'FOR ALL COLUMNS SIZE AUTO'. With this setting, Oracle automatically determines which columns require histograms and the number of buckets (size) of each histogram. You can also manually specify which columns should have histograms and the size of each histogram.
Disabling histograms: gather statistics with DBMS_STATS using METHOD_OPT => 'FOR COLUMNS SIZE 1'.

Analyze

Syntax: ANALYZE TABLE <schema.object_name> COMPUTE STATISTICS FOR COLUMNS <column_name> SIZE <number_of_buckets_integer>

Viewing Histogram Statistics

Histogram Types: SQL> select owner, table_name, column_name, num_distinct, num_buckets, histogram from DBA_TAB_COL_STATISTICS order by 1,2,3

Histogram Statistics: see if the histogram is uniformed or skewed SQL> select * from dba_histograms order by 1,2,3

SYS.COL_USAGE$: monitor the usage of predicates on columns in select statements. DBMS_STATS will make use of that info when deciding if it needs to create a histogram on a column.

SQL> select distinct r.name owner, o.name table_name, c.name col_name, equality_preds, equijoin_preds, nonequijoin_preds, range_preds, like_preds, null_preds from sys.col_usage$ u, sys.obj$ o, sys.col$ c, sys.user$ r where o.obj# = u.obj# and c.obj# = u.obj# and c.col# = u.intcol# and o.owner# = r.user# and (u.equijoin_preds > 0 or u.nonequijoin_preds > 0) order by 1,2,3

Histogram Distribution: SQL> SELECT endpoint_number, endpoint_value FROM DBA_HISTOGRAMS WHERE table_name = &tab_name and column_name = &col_name ORDER BY endpoint_number;

Creating the Recovery Catalog Owner


To create the recovery catalog schema in the recovery catalog database: 1. Start SQL*Plus and then connect with administrator privileges to the database containing the recovery catalog. Example: connect to recovery catalog database (catdb) SQL> CONNECT SYS/oracle@catdb AS SYSDBA

2. Create a tablespace and a user (schema) for the recovery catalog. Example:
Create the tablespace (rmantbs):
SQL> create tablespace rmantbs datafile '/u02/oradata/RMANDB/rmantbs.dbf' size 100M
     autoextend off blocksize 8192
     extent management local uniform size 64K
     segment space management auto;
Create the user (rman):
SQL> CREATE USER rman IDENTIFIED BY rman
     TEMPORARY TABLESPACE temp
     DEFAULT TABLESPACE rmantbs
     QUOTA UNLIMITED ON rmantbs;

3. Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the user with privileges to maintain and query the recovery catalog.

Example: SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;

4. Grant other desired privileges to the RMAN user. Example: SQL> GRANT CONNECT, RESOURCE TO rman;

Creating the Recovery Catalog


To create the recovery catalog:
1. Connect to the database that will contain the catalog as the catalog owner. Example:
% rman CATALOG rman/cat@catdb
Or connect from the RMAN prompt:
% rman
RMAN> CONNECT CATALOG rman/cat@catdb
2. Run the CREATE CATALOG command to create the catalog. If the catalog tablespace is this user's default tablespace, then:
RMAN> CREATE CATALOG;
3. Optionally, start SQL*Plus and query the recovery catalog to see which tables were created:
SQL> SELECT table_name FROM user_tables;

Registering a Database in the Recovery Catalog


To register the target database: 1. Connect to the target database and recovery catalog database. Example: connect to the target database and recovery catalog database (catdb) % rman TARGET / CATALOG rman/cat@catdb

2. If the target database is not mounted, then mount or open it. The recovery catalog database must be open.
3. To use RMAN with a target database, register the database:
RMAN> REGISTER DATABASE;
After you run REGISTER DATABASE:

RMAN creates rows in the repository that contain information about the target database. RMAN performs a full resynchronization with the catalog in which it transfers all pertinent data about the target database from the control file and saves it in the catalog.

4. Test that the registration was successful by running REPORT SCHEMA. This command shows the database structure as it is stored in the repository. RMAN> REPORT SCHEMA;

5. If there are any existing user-created copies of datafiles or archived logs on disk that were created under Oracle release 8.0 or higher, you can add them to the recovery catalog with the CATALOG command. Example: RMAN> CATALOG DATAFILECOPY 'users01.dbf'; RMAN> CATALOG ARCHIVELOG 'archive1_731.dbf', 'archive1_732.dbf';

Connect at Command Line


Syntax: RMAN [ TARGET [=] connectStringSpec | { CATALOG [=] connectStringSpec | NOCATALOG } | AUXILIARY [=] connectStringSpec | LOG [=] ['] filename ['] . . . ]...

connectStringSpec::= ['] [userid] [/ [password]] [@net_service_name] ['] Example: 1. Connects in NOCATALOG mode % rman TARGET SYS/pwd@target_str 2. Connect to target database and catalog % rman TARGET / CATALOG cat_usr/pwd@cat_str 3. Connect to target, catalog and auxiliary databases % rman TARGET / CATALOG cat_usr/pwd@cat_str AUXILIARY aux_usr/pwd@aux_str 4. Connect to target database and run command file % rman TARGET SYS/pwd@target_str @/scripts/test.rcv 5. Connect to target and log % rman TARGET / LOG $ORACLE_HOME/dbs/log/backup.log APPEND # using OS Authentication

Connect from RMAN Prompt


In RMAN prompt:

CONNECT TARGET <connectString>: Establishes a connection between RMAN and the target database. CONNECT CATALOG <connectString>: Establishes a connection between RMAN and the recovery catalog database. You must run this command before running any command that requires a repository. Otherwise, RMAN defaults to NOCATALOG mode and invalidates the use of CONNECT CATALOG in the session. CONNECT AUXILIARY <connectString>: Establishes a connection between RMAN and an auxiliary instance. You can use an auxiliary instance with the DUPLICATE command or during TSPITR.

Example:
1. Connecting without a recovery catalog:
% rman NOCATALOG
RMAN> CONNECT TARGET sys/pwd@target_str;
2. Connecting in the default NOCATALOG mode:
% rman
RMAN> CONNECT TARGET sys/pwd@target_str;
# You cannot run CONNECT CATALOG after this point because RMAN has defaulted to NOCATALOG
3. Connecting with a recovery catalog:
% rman
RMAN> CONNECT TARGET /   # connects to the target database using OS authentication
RMAN> CONNECT CATALOG rman/rman@cat_str

4. Connecting to target, recovery catalog, and duplicate databases:
% rman
RMAN> CONNECT TARGET SYS/sysdba@target_str
RMAN> CONNECT CATALOG rman/rman@cat_str
RMAN> CONNECT AUXILIARY SYS/sysdba@dupdb

Hiding Passwords When Connecting to Databases


To connect to RMAN from the operating system command line and hide authentication information, you must first start RMAN and then perform either of the following actions:

Run the CONNECT commands at the RMAN prompt. If the password is not provided in the connect string, then RMAN prompts for the password. Alternatively, run a command file at the RMAN prompt that contains the connection information; you can set read privileges on the command file to prevent unauthorized access.
Example: if you are running RMAN in a UNIX environment, you can use the following procedure:
1. Start RMAN without connecting to any databases:
% rman
2. Place the connection information in a text file, for example connect.rman:
CONNECT TARGET SYS/oracle@trgt
CONNECT CATALOG rman/cat@catdb
3. Change the permissions on the connect script so that everyone can execute the script but only the desired users have read access. For example:
% chmod 711 connect.rman
4. Run the script from the RMAN prompt to connect to the target and catalog databases:
RMAN> @connect.rman

RMAN Commands

"@"
Run a command file.

"@@"

Run a command file in the same directory as another command file that is currently running. The @@ command differs from the @ command only when run from within a command file.

"ALLOCATE CHANNEL"
Establish a channel, which is a connection between RMAN and a database instance.

"ALLOCATE CHANNEL FOR MAINTENANCE"


Allocate a channel in preparation for issuing maintenance commands such as DELETE.

"BACKUP"
Back up a database, tablespace, datafile, archived log, or backup set.

"BLOCKRECOVER"
Recover an individual data block or set of data blocks within one or more datafiles.

"CATALOG"
Add information about a datafile copy, archived redo log, or control file copy to the repository.

"CHANGE"
Mark a backup piece, image copy, or archived redo log as having the status UNAVAILABLE or AVAILABLE; remove the repository record for a backup or copy; override the retention policy for a backup or copy.

"CONFIGURE"
Configure persistent RMAN settings. These settings apply to all RMAN sessions until explicitly changed or disabled.
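For illustration, a few commonly used persistent settings; the specific values and the disk path '/backup/%U' are assumptions, not recommendations:
RMAN> SHOW ALL;
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/%U';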

"CONNECT"
Establish a connection between RMAN and a target, auxiliary, or recovery catalog database.

"COPY"
Create an image copy of a datafile, control file, or archived redo log.

"CREATE CATALOG"
Create the schema for the recovery catalog.

"CREATE SCRIPT"
Create a stored script and store it in the recovery catalog.

"CROSSCHECK"
Determine whether files managed by RMAN, such as archived logs, datafile copies, and backup pieces, still exist on disk or tape.

"DELETE"
Delete backups and copies, remove references to them from the recovery catalog, and update their control file records to status DELETED.

"DELETE SCRIPT"
Delete a stored script from the recovery catalog.

"DROP CATALOG"
Remove the schema from the recovery catalog.

"DUPLICATE"
Use backups of the target database to create a duplicate database that you can use for testing purposes or to create a standby database.

"EXECUTE SCRIPT"
Run an RMAN stored script.

"EXIT"
Quit the RMAN executable.

"HOST"
Invoke an operating system command-line subshell from within RMAN or run a specific operating system command.

"LIST"
Produce a detailed listing of backup sets or copies.

"PRINT SCRIPT"

Display a stored script.

"QUIT"
Exit the RMAN executable.

"RECOVER"
Apply redo logs or incremental backups to a restored backup set or copy in order to update it to a specified time.

"REGISTER"
Register the target database in the recovery catalog.

"RELEASE CHANNEL"
Release a channel that was allocated with an ALLOCATE CHANNEL command.

"REPLACE SCRIPT"
Replace an existing script stored in the recovery catalog. If the script does not exist, then REPLACE SCRIPT creates it.

"REPORT"
Perform detailed analyses of the content of the recovery catalog.

"RESET DATABASE"
Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has been executed and that a new incarnation of the target database has been created, or reset the target database to a prior incarnation.

"RESTORE"
Restore files from backup sets or from disk copies to the default or a new location.

"RESYNC"
Perform a full resynchronization, which creates a snapshot control file and then copies any new or changed information from that snapshot control file to the recovery catalog.

"RUN"

Execute a sequence of one or more RMAN commands, which are one or more statements executed within the braces of RUN.

"SEND"
Send a vendor-specific quoted string to one or more specific channels.

"SET"
Make the following session-level settings:
* Control whether RMAN commands are displayed in the message log
* Set the DBID when restoring a control file or server parameter file
* Specify new filenames for restored datafiles
* Specify a limit for the number of permissible block corruptions
* Override default archived redo log destinations
* Specify the number of copies of each backup piece
* Determine which server session corresponds to which channel
* Control where RMAN searches for backups in an Oracle Real Application Clusters configuration
* Override the default format of the control file autobackup

"SHOW"
Display the current CONFIGURE settings.

"SHUTDOWN"
Shut down the target database. This command is equivalent to the SQL*Plus SHUTDOWN command.

"SPOOL"
Write RMAN output to a log file.

"SQL"
Execute a SQL statement from within Recovery Manager.

"STARTUP"
Start up the target database. This command is equivalent to the SQL*Plus STARTUP command.

"SWITCH"

Specify that a datafile copy is now the current datafile, that is, the datafile pointed to by the control file. This command is equivalent to the SQL statement ALTER DATABASE RENAME FILE as it applies to datafiles.

"UPGRADE CATALOG"
Upgrade the recovery catalog schema from an older version to the version required by the RMAN executable.

"VALIDATE"
Examine a backup set and report whether its data is intact. RMAN scans all of the backup pieces in the specified backup sets and looks at the checksums to verify that the contents can be successfully restored.

Flashback

Overview
Flashback Time Navigation

Flashback Query: query all data as of a point in time.
Flashback Versions Query: see all versions of a row between two times, and the transactions that changed the row.
Flashback Transaction Query: see all changes made by a transaction.

Flashback Error Correction

Recovery at all levels


Database level: Flashback Database restores the whole database to a point in time.
Table level: Flashback Table restores all rows in a set of tables to a point in time; Flashback Drop restores a dropped table or index.
Row level: Flashback Query features are used to restore rows to a point in time.

Oracle Flashback Technology

Flashback Technology | Scenario | Uses / Privileges | Affects Data
---------------------|----------|-------------------|-------------
Database | Truncate table; undesired changes made by DDL or DML; drop user; batch job with partial changes | Flashback logs / SYSDBA (all DB connections are lost) | Y
Drop | Drop table | Recycle bin / Object privileges held on the objects before they were dropped | Y
Query | Compare current and past data, or data from different points in time | Undo data, Flashback Archive | N
Table | Update with the wrong or no WHERE clause | Undo data, Flashback Archive / FLASHBACK TABLE system privilege and the appropriate object privileges | Y
Transaction Backout | Reverse transaction(s) with/without cascade (DBMS_FLASHBACK) | Undo data, Flashback Archive | Y
Transaction Query | Investigate several historical states of data | Undo data, Flashback Archive / SELECT ANY TRANSACTION system privilege | N
Version Query | Compare versions of a row | Undo data, Flashback Archive / FLASHBACK TABLE system privilege and the appropriate object privileges | N

This section covers: Flashback Database, Flashback Versions Query and Flashback Transaction Query, Flashback Table, Flashback Drop (Recycle Bin), and Guaranteed Undo Retention.

SCN and Time Mapping Enhancements
o The mapping granularity is three seconds.
o The mapping is retained for: MAX(five days, UNDO_RETENTION)

Access the mapping by using the following SQL functions: SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN.
SQL> SELECT current_scn, SCN_TO_TIMESTAMP(current_scn), TIMESTAMP_TO_SCN(systimestamp) FROM v$database;

Flashback database

Flashing back a database means returning it to a previous state. The Flashback Database feature provides a way to quickly revert an entire Oracle database to the state it was in at a past point in time. A new background process, RVWR, is responsible for writing the flashback logs, which store pre-images of data blocks. You can use Flashback Database to back out changes that:
o Have resulted in logical data corruptions.
o Are the result of user error.
This feature is not applicable for recovering the database after a media failure.

How to Configure Flashback database


Prerequisite

a) Database must be in archivelog mode. b) Last clean shutdown.


Steps

1. Configure the following parameters in the parameter file (init.ora) or spfile:

DB_RECOVERY_FILE_DEST (dynamically modifiable): physical location where the RVWR background process writes the flashback logs.
DB_RECOVERY_FILE_DEST_SIZE (dynamically modifiable): maximum size the flashback logs can occupy in DB_RECOVERY_FILE_DEST.
DB_FLASHBACK_RETENTION_TARGET (dynamically modifiable): upper limit, in minutes, on how far back one can flash back the database.
Example: SQL> alter system set db_recovery_file_dest='/u01/app/oracle/flash_recovery_area' scope=spfile; SQL> alter system set db_recovery_file_dest_size=2147483648 scope=spfile; SQL> alter system set DB_FLASHBACK_RETENTION_TARGET=2880; (2 days)

2. Turn flashback on:
SQL> startup mount exclusive;
SQL> alter database archivelog;
SQL> alter database flashback on;
SQL> alter database open;

3. Check status:
SQL> SELECT flashback_on, log_mode FROM gv$database;
SQL> SELECT estimated_flashback_size FROM gv$flashback_database_log;
$ ps -eaf | grep rvwr

Disable Flashback Database

Disabling Flashback Database with ALTER DATABASE FLASHBACK OFF automatically deletes all flashback logs in the flash recovery area.

Flashback Database Using SQL or RMAN Commands

Flashback Using SQL


Use an SCN or a time stamp in the SQL version. Example: flash back the database one day using SQL:
SQL> shutdown immediate;
SQL> startup mount exclusive;
SQL> flashback database to timestamp (sysdate-1);
SQL> alter database open resetlogs;

Flashback Using RMAN


Using RMAN, you can flash back to a time stamp, SCN, or log sequence number (SEQUENCE) and thread number (THREAD). Examples:
RMAN> FLASHBACK DATABASE TO TIME = TO_DATE('2002-12-10 16:00:00','YYYY-MM-DD HH24:MI:SS');
RMAN> FLASHBACK DATABASE TO SCN=23565;
RMAN> FLASHBACK DATABASE TO SEQUENCE=223 THREAD=1;

Views

V$FLASHBACK_DATABASE_LOG - monitors the estimated and actual size of the flashback logs in the flash recovery area.
- Check the flash recovery area disk quota:
SQL> SELECT retention_target, flashback_size, estimated_flashback_size FROM v$flashback_database_log;
- Determine the current flashback window:
SQL> SELECT oldest_flashback_scn, oldest_flashback_time FROM v$flashback_database_log;
V$FLASHBACK_DATABASE_STAT - monitors the overhead of logging flashback data in the flashback logs. It contains at most 24 rows, one for each of the last 24 hours.
- The flashback generation for the last hour:
SQL> SELECT to_char(end_time,'yyyy-mm-dd hh:miAM') end_timestamp, flashback_data, db_data, redo_data FROM v$flashback_database_stat WHERE rownum = 1;

Excluding Tablespaces from Flashback Database


SQL> ALTER TABLESPACE <ts_name> FLASHBACK {ON|OFF};
SQL> SELECT name, flashback_on FROM v$tablespace;
Notes:

Take the tablespace offline before you perform the database flashback operation. After performing Flashback Database, drop the tablespace or recover the offline files with traditional point-in-time recovery.

Flash back a RESETLOGS operation

You can flash back to a point in time before a RESETLOGS operation: SQL> FLASHBACK DATABASE TO BEFORE RESETLOGS;

Limitations

You cannot use Flashback Database in the following situations: o The control file has been restored or re-created. o A tablespace has been dropped. o A data file has been shrunk.

Flashback Versions Query

Flashback Versions Query provides an easy way to show all versions of all rows in a table between two SCNs or time stamps, whether the rows were inserted, deleted, or updated.
Syntax (query the actual table):
SELECT [pseudo_columns]... FROM table_name
  VERSIONS BETWEEN {SCN | TIMESTAMP} {expr | MINVALUE} AND {expr | MAXVALUE}
  [AS OF {SCN | TIMESTAMP} expr]
WHERE [pseudo_column | column] ...
The pseudo-columns:
VERSIONS_STARTSCN - the SCN at which this version of the row was created
VERSIONS_STARTTIME - the time stamp at which this version of the row was created
VERSIONS_ENDSCN - the SCN at which this row no longer existed (either changed or deleted)
VERSIONS_ENDTIME - the time stamp at which this row no longer existed (either changed or deleted)
VERSIONS_XID - the transaction ID of the transaction that created this version of the row
VERSIONS_OPERATION - the operation done by this transaction: I=Insert, D=Delete, U=Update
MINVALUE and MAXVALUE resolve to the SCN or time stamp of the oldest and most recent data available, respectively.
Example 1: track the history of changes made to employee number 111. The period of the past is determined by the oldest SCN available and the read SCN of the query (5525300):
SQL> SELECT versions_xid AS xid, versions_startscn AS start_scn, versions_endscn AS end_scn, versions_operation AS operation, first_name FROM employees VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE AS OF SCN 5525300 WHERE employee_id = 111;
Example 2: track the salary of employee number 124, using midnight as the starting point for the Flashback Versions Query:
SQL> SELECT versions_startscn startscn, versions_endscn endscn, versions_xid xid, versions_operation oper, employee_id empid, last_name name, salary sal FROM hr.employees VERSIONS BETWEEN TIMESTAMP trunc(systimestamp) AND systimestamp WHERE employee_id = 124;
The maximum available versions depend on the UNDO_RETENTION parameter.
Limitations:
o The VERSIONS clause cannot be used to query external tables, temporary tables, fixed tables, or views.
o The VERSIONS clause cannot span DDLs.
o Segment shrink operations are filtered out.

Flashback Transaction Query


Flashback Transaction Query is a diagnostic tool that you can use to view changes made to the database at the transaction level. FLASHBACK_TRANSACTION_QUERY:
o retrieves transaction information for all tables involved in a transaction.
o provides the SQL statements that you can use to undo the changes made by a particular transaction.
Example 1: return information about all transactions, both active and committed, in all undo segments in the database:
SQL> SELECT operation, undo_sql, table_name FROM flashback_transaction_query;
Example 2: return information relevant to the transaction whose identifier is specified in the WHERE clause:
SQL> SELECT operation, undo_sql, table_name FROM flashback_transaction_query WHERE xid = HEXTORAW('8C0024003A000000') ORDER BY undo_change#;
Example 3: return information about all transactions that began and committed within a half-hour interval:
SQL> SELECT operation, undo_sql, table_name FROM flashback_transaction_query WHERE start_timestamp >= TO_TIMESTAMP('2003-10-21 11:00:00','YYYY-MM-DD HH:MI:SS') AND commit_timestamp <= TO_TIMESTAMP('2003-10-21 11:30:00','YYYY-MM-DD HH:MI:SS');
Considerations:
o DDLs are seen as dictionary updates.
o Updates on key columns of IOTs are seen as delete statements followed by insert statements.
o Dropped objects appear as object numbers.
o Dropped users appear as user identifiers.
o Minimal supplemental logging may be needed: ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Flashback Table

Overview

Using Flashback Table, you can recover a set of tables to a specific point in time without having to perform traditional point-in-time recovery operations. A Flashback Table operation is done in-place while the database is online, by rolling back only the changes that were made to the given tables and their dependent objects. A Flashback Table statement is executed as a single transaction. All tables must be flashed back successfully or the entire transaction is rolled back. Note: You can use Flashback Versions Query and Flashback Transaction Query to determine the appropriate flashback time.

Examples
Example 1: uses a time stamp to determine the point in time to which to flash back the EMPLOYEES table:
SQL> ALTER TABLE employees ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE employees TO TIMESTAMP (SYSDATE-1);
Example 2: uses an SCN instead of a time stamp. Because the EMPLOYEES and DEPARTMENTS tables are linked by a referential integrity constraint, the FLASHBACK TABLE statement groups both tables together. In the default operation mode, triggers currently enabled on the tables being flashed back are disabled for the duration of the flashback operation and returned to the enabled state when it completes.
SQL> ALTER TABLE employees ENABLE ROW MOVEMENT;
SQL> ALTER TABLE departments ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE employees, departments TO SCN 5525293 ENABLE TRIGGERS;

Considerations

Flashback Table is executed within a single transaction and acquires exclusive DML locks.
Statistics are not flashed back.
Current indexes and dependent objects are maintained.
The operation is written to the alert log file.
The operation maintains data integrity.
Flashback Table cannot span DDLs.
Flashback Table cannot be performed on system tables.

10g Recycle Bin

Introduction
The Recycle Bin is a virtual container where all dropped objects reside.

The objects continue to occupy the same space as when they were created. Dropped tables and any associated objects such as indexes, constraints, nested tables, and other dependent objects are not moved; they are simply renamed with a prefix of BIN$. Users can view their dropped tables by querying the new RECYCLEBIN view. Objects in the Recycle Bin remain in the database until the owner of the dropped objects decides to permanently remove them using the new PURGE command. Objects in the Recycle Bin are automatically purged by the space reclamation process if:
o A user creates a new table or adds data that causes the quota to be exceeded.
o The tablespace needs to extend its file size to accommodate create/insert operations.
When a tablespace or a user is dropped, there is NO recycling of the objects. The Recycle Bin does not work for SYS objects.

Technical Details

Tables and Views of Recycle Bin Purging Objects in Recycle Bin Restoring Objects from Recycle Bin Disabling Recycle Bin
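A minimal sketch of these operations; the table name scott.emp_copy is hypothetical:
SQL> DROP TABLE scott.emp_copy;                        -- renamed with a BIN$ prefix, listed in the recycle bin
SQL> SELECT object_name, original_name FROM recyclebin;
SQL> FLASHBACK TABLE scott.emp_copy TO BEFORE DROP;    -- restore the dropped table, or instead:
SQL> PURGE TABLE scott.emp_copy;                       -- permanently remove it from the recycle bin
SQL> PURGE RECYCLEBIN;                                 -- purge the current user's entire recycle bin
Disabling is done with the RECYCLEBIN parameter (ALTER SYSTEM or ALTER SESSION SET recyclebin = OFF); note this parameter is officially documented from 10gR2 onward, so verify against your release.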

Limitations

Protected tables:
* are non-SYSTEM tablespace tables
* are stored in locally managed tablespaces
* do not use FGA or VPD
The following dependencies are not protected:
* Bitmap join indexes
* Materialized view logs
* Referential integrity constraints
* Indexes dropped before tables
Purged tables cannot be flashed back.

Flash Recovery Area Occupants

Control files - A copy of the control file is created in the flash recovery area when the database is created. This copy can be used as one of the mirrored copies of the control file to ensure that at least one copy is available after a media failure.
Archived log files - When the flash recovery area is configured, the initialization parameter LOG_ARCHIVE_DEST_10 is automatically set to the flash recovery area location. The corresponding ARCn processes create archived log files in the flash recovery area and in any other defined LOG_ARCHIVE_DEST_n locations.
Flashback logs - If Flashback Database is enabled, its flashback logs are stored in the flash recovery area.
Control file and SPFILE autobackups - The flash recovery area holds control file and SPFILE autobackups generated by RMAN, if RMAN is configured for control file autobackup. When RMAN backs up datafile #1, the control file is automatically included in the RMAN backup.
Data file copies - For RMAN BACKUP AS COPY image files, the default destination is the flash recovery area.
RMAN backup sets - By default, RMAN uses the flash recovery area for both backup sets and image copies. In addition, RMAN puts restored archive log files from tape into the flash recovery area in preparation for a recovery operation.

Defining Flash Recovery Area


The flash recovery area is defined by setting the following initialization parameters:

DB_RECOVERY_FILE_DEST_SIZE: specifies the disk limit, which is the amount of space the flash recovery area is permitted to use. At a minimum, the flash recovery area should be large enough to contain the archive logs that have not yet been copied to tape. To maximize its benefits, it should be large enough to hold a copy of all datafiles, all incremental backups, online redo logs, archived redo logs not yet backed up to tape, control files, and control file autobackups.
DB_RECOVERY_FILE_DEST: specifies a valid destination in which to create the flash recovery files.

Notes:

You must specify the DB_RECOVERY_FILE_DEST_SIZE initialization parameter before the DB_RECOVERY_FILE_DEST initialization parameter. These parameters are dynamic and can be altered or disabled. SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 4G; SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/oracle/frec_area'; All instances in a RAC database must have the same values for these parameters. The flash recovery area is also supported by Data Guard.

Flash Recovery Directory Structure


The flash recovery area directory structure is used by RMAN in a very organized fashion, with separate directories for each file type, such as archived logs, backup sets, image copies, and control file autobackups. In addition, each subdirectory is further divided by a datestamp, making it easy to locate backup sets or image copies based on their creation date.

Backing Up the Flash Recovery Area


RMAN> BACKUP RECOVERY AREA;
This command backs up all flash recovery files created in the current (and any previous) flash recovery area destinations that have not previously been backed up to tape: full and incremental backup sets, control file autobackups, archive logs, and data file copies. Other files, such as flashback logs, incremental bitmaps, the current control file, and online redo log files, are not backed up.
RMAN> BACKUP RECOVERY FILES;
This command backs up all recovery files on disk that have not previously been backed up to tape: full and incremental backup sets, control file autobackups, archive logs, and data file copies. This can also include files that are not part of a recovery area.

Views

V$RECOVERY_FILE_DEST - information regarding the flash recovery area:
SQL> SELECT name, space_limit, space_used, space_reclaimable, number_of_files FROM v$recovery_file_dest;
V$FLASH_RECOVERY_AREA_USAGE - indicates flash recovery area disk space usage:
SQL> SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files FROM v$flash_recovery_area_usage;
New columns:
o IS_RECOVERY_DEST_FILE - indicates whether the file was created in the flash recovery area. Values: YES / NO. Tracked in V$CONTROLFILE, V$LOGFILE, V$ARCHIVED_LOG, V$DATAFILE_COPY, V$BACKUP_PIECE.
o BYTES - size of the file. Tracked in V$BACKUP_PIECE.

Best Practices for the Database and Flash Recovery Area

Use OMF for the database area:
* Simplifies the administration of database files
* Puts database files, control files, and online logs in the database area
Use the flash recovery area for recovery-related files:
* Simplifies the location of database backups
* Automatically manages the disk space allocated for recovery files
* No changes needed for existing scripts
* Puts database backups, archive logs, and control file backups in the flash recovery area

Recovery from User Error

Flashback Database
Using Flashback Database enables you to return your whole database to a previous state without restoring old copies of your datafiles from backup, as long as you have enabled Flashback Database logging in advance.

Creating Normal and Guaranteed Restore Points


Guaranteed restore points ensure that you can return your database to a specific previous point in time using Flashback Database; normal restore points do not provide this protection. Restore points let you avoid having to record the SCN of the database before an operation from which you may wish to recover using point-in-time recovery or Flashback Table, or having to investigate after the operation to determine the correct SCN.
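A sketch of creating and using a guaranteed restore point; the name before_upgrade and the exact workflow around it are illustrative assumptions:
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
SQL> SELECT name, scn, time, guarantee_flashback_database FROM v$restore_point;
-- later, if the change must be backed out:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> DROP RESTORE POINT before_upgrade;
Dropping the restore point afterwards matters for guaranteed restore points, since the flashback logs they pin are kept until the restore point is dropped.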

Database Point-in-Time Recovery


You can perform point-in-time recovery (incomplete recovery), bringing one tablespace or the whole database back to its state before the time of the error.

You need:
1) backups from before the time of the error;
2) the redo logs from the time of the backup to the time of the error.
DBPITR:
1. Restore a whole database backup.
2. Recover the database to the time just before the error.
3. Open with RESETLOGS.
TSPITR:
1. Create an auxiliary instance with RMAN or user-managed methods.
2. Recover the tablespace on the auxiliary instance to the time just before the error.
3. Import the data back into the primary database.

Importing Lost Objects from Logical Backup


If you have performed a logical backup by exporting the contents of the affected tables, sometimes you can import the data back into the table. This technique presumes that you are regularly exporting logical backups of your data, and that any changes between exports are unimportant.

Recovery from Media Failures

RMAN Media Recovery General Steps


The generic steps for media recovery using RMAN are as follows:
1. Place the database in the appropriate state: mounted or open (refer to the following table). For example, mount the database when performing whole database recovery, or open the database when performing online tablespace recovery.
2. To perform incomplete recovery, use the SET UNTIL command to specify the time, SCN, or log sequence number at which recovery terminates. Alternatively, specify the UNTIL clause on the RESTORE and RECOVER commands.
3. Restore the necessary files using the RESTORE command.
4. Recover the datafiles using the RECOVER command.
5. Place the database in its normal state (refer to the following table). For example, open it or bring recovered tablespaces online.
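With the database mounted, the steps above can be sketched as a single RUN block for incomplete recovery; the target time is a placeholder:
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2009-06-02 14:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
Because this is incomplete recovery, the database must be opened with RESETLOGS, creating a new incarnation.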

Media Failures and Recovery Strategies


Lost/Inaccessible Files | Archiving Mode | Status | Strategy

One or more datafiles | NOARCHIVELOG | Closed
* Restore the whole database from a consistent database backup.
* All changes made after the backup are lost.
* Open with the RESETLOGS option.
Note: The only time you can open a database without performing RESETLOGS after restoring a NOARCHIVELOG backup is when you have not already overwritten the online log files that were current at the time of the most recent backup.

One or more datafiles and an online redo log | NOARCHIVELOG | Closed
* Restore the whole database from a consistent backup.
* Lose all changes made after the last backup.
* Open with the RESETLOGS option.

One or more datafiles and all control files | NOARCHIVELOG | Closed
* Restore the whole database and control file from a consistent backup.
* Lose all changes made after the last backup.
* Open the database with the RESETLOGS option.

One or more (but not all) datafiles | ARCHIVELOG | Open
* Perform tablespace or datafile recovery while the database is open.
* The tablespaces or datafiles are taken offline, restored from backups, recovered, and placed online.
* No changes are lost.
* The database remains available during the recovery.

All datafiles | ARCHIVELOG | Closed
* Restore the backup datafiles.
* Mount the control file and recover the database completely.
* Assuming all redo logs are available, open the database as normal (that is, do not perform a RESETLOGS).

One or more datafiles and an archived redo log required for recovery | ARCHIVELOG | Open
* Perform TSPITR on the tablespaces containing the lost datafiles up to the point of the latest available archived redo log.

All control files and possibly one or more datafiles | ARCHIVELOG | Not open
* Restore the lost control files and datafiles from backups and recover the datafiles.
* No changes are lost.
* The database is unavailable during recovery.
* Open with the RESETLOGS option.

All control files and possibly one or more datafiles, as well as an archived or online redo log required for recovery | ARCHIVELOG | Not open
* Restore the necessary files from backups.
* Perform incomplete recovery of the database up to the point of the most recent available log.
* Lose all changes contained in the lost log and in all subsequent logs.
* Open with the RESETLOGS option.
Online Redo Log Recovery

The method of recovery from loss of all members of an online log group depends on a number of factors, such as:
* The state of the database (open, crashed, closed consistently, and so on)
* Whether the lost redo log group was current
* Whether the lost redo log group was archived
Scenarios:
o Lose the current group, and the database is not closed consistently (either it is open, or it has crashed):
* Restore an old backup, perform point-in-time recovery, and OPEN RESETLOGS.
* You lose all transactions that were in the lost log.
* Take a new full database backup immediately after the OPEN RESETLOGS.
o Lose the current redo log group, and the database is closed consistently:
* Perform OPEN RESETLOGS with no transaction loss.
* Take a new full database backup after that.
o Lose a noncurrent redo log group:
* Use the ALTER DATABASE CLEAR LOGFILE statement to re-create all members in the group; no transactions are lost.
* If the lost redo log group was archived before it was lost, nothing further is required. Otherwise, immediately take a new full backup of your database.
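For the noncurrent-group scenario above, a minimal sketch; the group number 2 is hypothetical:
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
-- if the group had not yet been archived, include the UNARCHIVED keyword:
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
Clearing an unarchived group leaves a gap in the archived log stream, so take a fresh full database backup immediately afterwards.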

Recovery from Datafile Block Corruption

Introduction

Use the RMAN BLOCKRECOVER command to perform block media recovery. Block media recovery recovers an individual corrupt data block or set of data blocks within a datafile. In cases when a small number of blocks require media recovery, you can selectively restore and recover the damaged blocks rather than whole datafiles.

Advantages: o Lowers the Mean Time to Recovery (MTTR) because only blocks needing recovery are restored and only necessary corrupt blocks undergo recovery. o Allows affected datafiles to remain online during recovery of the blocks.

Restrictions

You can only perform block media recovery with RMAN; no SQL*Plus recovery interface is available.
You can only perform complete recovery of individual blocks. In other words, you cannot stop recovery before all redo has been applied to the block.
You can only recover blocks marked media corrupt. The V$DATABASE_BLOCK_CORRUPTION view indicates which blocks in a file were marked corrupt since the most recent BACKUP or BACKUP ... VALIDATE command was run against the file.
You must have a full RMAN backup. Incremental backups are not used by block media recovery; only full backups and archived log files are used.
Block media recovery is able to restore blocks from parent incarnation backups and recover the corrupted blocks through a RESETLOGS.
Blocks that are marked media corrupt are not accessible to users until recovery is complete. Any attempt to use a block undergoing media recovery results in an error message indicating that the block is media corrupt.

Identify Block Corruption


Block-level data loss usually results from intermittent, random I/O errors that do not cause widespread data loss, as well as memory corruptions that get written to disk. Block corruptions are reported in the following locations:

* Error messages in standard output
* The alert log
* User trace files
* Results of the SQL commands ANALYZE TABLE and ANALYZE INDEX
* Results of the DBVERIFY utility
* Third-party media management output

Example:
1) You discover the following messages in a user trace file:
ORA-01578: ORACLE data block corrupted (file # 7, block # 3)
ORA-01110: data file 7: '/oracle/oradata/trgt/tools01.dbf'
ORA-01578: ORACLE data block corrupted (file # 2, block # 235)
ORA-01110: data file 2: '/oracle/oradata/trgt/undotbs01.dbf'
2) You can then specify the corrupt blocks in the BLOCKRECOVER command as follows:
RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 3 DATAFILE 2 BLOCK 235;

Block Media Recovery When Redo Is Missing

Block media recovery can survive gaps in the redo stream if the missing or corrupt redo records do not affect the blocks being recovered. Block media recovery only requires an unbroken set of redo changes for the blocks being recovered. Each block is recovered independently during block media recovery, so recovery may be successful for a subset of blocks. When RMAN first detects missing or corrupt redo records during block media recovery, it does not immediately signal an error, because the block undergoing recovery may become a newed block later in the redo stream. When a block is newed, all previous redo for that block becomes irrelevant because the redo applies to an old incarnation of the block. For example, the database can new a block when users delete all the rows recorded in the block or drop a table.

Data Protection Strategies

Backup and Recovery - A well-designed and well-integrated backup and recovery strategy, without data loss, is a must for every database deployment. Effective backup strategies usually include local and remote copies of data, full and incremental backups, online backups, support for desired secondary devices (e.g. high-speed tape, disks), off-site tape archiving, etc.

Snapshots - Snapshots are images of all or part of a disk filesystem that are taken periodically and stored in another disk allocation. Thus, in the event of a database corruption, rather than going back to the previous night's tape backup, the DBA may resort to the earliest snapshot in which the corruption does not exist.

RAID (Redundant Array of Inexpensive/Independent Disks) - The fundamental principle behind RAID is the use of multiple hard disk drives in an array that behaves like a single large, fast one. There are many different ways to implement a RAID array, using some combination of mirroring, striping, duplexing and parity technologies. The degree of benefit depends on the exact type of RAID that is configured, but RAID generally provides some combination of higher data security, fault tolerance, improved availability, increased and integrated capacity, and improved performance.

Remote Data Mirroring - In remote data mirroring, data is replicated between a primary and a remote secondary storage subsystem by sending track-by-track changes from the primary site to the remote site over a secure network. In the event of an outage or a disaster, the database may be restored and recovered at the mirrored site, and systems may then point to this mirrored site and continue operations.

Data Replication - In contrast to remote data mirroring, data replication is a software-based solution that copies data from the primary database to one or more secondary databases. These multiple databases together constitute a distributed database system. Transactions can be replicated continuously or on a scheduled basis. Replication usually involves some strategy to resolve conflicting transactions that appear on the same dataset but in different databases.

Automated Standby Databases - Automated standby databases are an effective means of disaster recovery, providing a completely automated framework to maintain transactionally consistent copies of the primary database. Changes can be transmitted from the primary database to these standby databases in a synchronous manner (enabling zero data loss) or in an asynchronous manner (minimizing any potential performance impact on the production system). This technology also provides an automated framework to switch over to the standby system in the event of a disaster or a corruption on the production site, or even during planned maintenance.

Oracle's Data Protection and Disaster Recovery Solutions

Oracle Data Guard - Oracle Data Guard is the most effective and comprehensive data protection and disaster recovery solution available today for enterprise data. Available as a feature of the Enterprise Edition of the Oracle database, it is a software infrastructure that creates, maintains, manages and monitors one or more standby databases to protect enterprise data from failures, disasters, errors, and corruptions. It maintains these standby databases as transactionally consistent copies of the production database. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing the downtime associated with the outage and enabling zero data loss.

Oracle Streams - Oracle Streams, an integrated feature of Oracle Database Enterprise Edition, can be used to maintain one or more replica copies of a production database. These replicas need not all be identical - they can be subsets of a production database, or related by a well-defined transformation. Streams also supports bi-directional replication with conflict detection and optional resolution. Its flexibility supports replication across large numbers of databases via any network topology. Although some business situations may require the flexibility offered by Streams, it is designed for integration of large distributed database environments, rather than pure disaster protection.

Oracle Advanced Replication - Oracle Advanced Replication enables the copying and maintenance of database objects in multiple databases that make up a distributed database system. It allows an application to update any replica of a database, and have those changes automatically propagate to other databases, while ensuring global transactional consistency and data integrity. In the event of a disaster at one of the sites, the surviving databases remain online.

Oracle Recovery Manager - Oracle Recovery Manager (RMAN) is Oracle's utility to manage the database backup, restore and recovery process. It creates and maintains backup policies, and catalogs all backup and recovery activities. The database can be kept online while RMAN is performing its backup. All data blocks can be analyzed for corruption during backup and restore, to prevent propagation of corrupt data through backups. Most importantly, Recovery Manager ensures all necessary data files are backed up, and the database is recoverable.

OSCP Validated Remote Mirroring - Through Oracle's Storage Compatibility Program (OSCP), Oracle partners have validated their remote mirroring solutions for use with the Oracle database.

Control File Backup and Recovery


Backup Control File
Restore Control File
o Restore Lost Copy of a Multiplexed Control File
o Restore Control File from Backup After Loss of All Current Control Files
o Restore Control File Using RMAN
Recreate Control File

User Managed Backups of Control File


Backing Up the Control File to a Binary File

A binary backup is preferable to a trace file backup because it contains additional information such as the archived log history, offline range for read-only and offline tablespaces, and backup sets and copies (if you use RMAN). However, binary control file backups do not include tempfile entries.
Syntax:
ALTER DATABASE BACKUP CONTROLFILE TO <filename>;

Backing Up the Control File to a Trace File

To back up the control file to a trace file, mount or open the database and issue the following SQL statement:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
This command writes a SQL script to the database's trace file, where it can be captured and edited to reproduce the control file. If you specify neither the RESETLOGS nor the NORESETLOGS option in the SQL statement, the resulting trace file contains versions of the control file for both RESETLOGS and NORESETLOGS options.

Tempfile entries are included in the output using "ALTER TABLESPACE... ADD TEMPFILE" statements.

RMAN Backups of Control File


Auto Backups of Control File

Configure auto backups of control file:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP OFF; (default)

Autobackups of Control File After Backup Activities:
* After every BACKUP command issued at the RMAN prompt.
* At the end of a RUN block, if the last command in the block was BACKUP.
* Whenever a BACKUP command within a RUN block is followed by a command that is not BACKUP.

Autobackups of Control File After Database Structural Changes:
* Adding a new tablespace
* Altering the state of a tablespace or datafile (for example, bringing it online)
* Adding a new online redo log
* Renaming a file
* Adding a new redo thread, and so on

Manual Backups of Control File

In manual backups, only RMAN repository data for backups within the current RMAN session is in the control file backup, and a manually backed-up control file cannot be automatically restored. Run BACKUP CURRENT CONTROLFILE Example: Backs up the current control file to the default disk device and assigns a tag RMAN> BACKUP CURRENT CONTROLFILE TAG = mondaypmbackup;

Include a backup of the control file within any backup by using the INCLUDE CURRENT CONTROLFILE option of the BACKUP command : Example: Backs up tablespace users to tape and includes the current control file in the backup RMAN> BACKUP DEVICE TYPE sbt TABLESPACE users INCLUDE CURRENT CONTROLFILE;

You can also back up datafile 1, because RMAN automatically includes the control file and SPFILE in backups of datafile 1.

Backing Up a Control File Copy

Example: Creates the control file copy '/tmp/control01.ctl' on disk and then backs it up to tape: RMAN> BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/tmp/control01.ctl'; RMAN> BACKUP DEVICE TYPE sbt CONTROLFILECOPY '/tmp/control01.ctl';

Restore Lost Copy of a Multiplexed Control File

Copying a Multiplexed Control File to a Default Location


If the disk and file system containing the lost control file are INTACT, then copy one of the intact control files to the location of the missing control file. You do not have to alter the CONTROL_FILES initialization parameter setting.
STEPS to replace a damaged control file by copying a multiplexed control file:
1. If the instance is still running, shut it down:
SQL> SHUTDOWN ABORT
2. Correct the hardware problem that caused the media failure. If you cannot repair the hardware problem quickly, then proceed with database recovery by restoring damaged control files to an alternative storage device, as described in "Copying a Multiplexed Control File to a Nondefault Location".
3. Use an intact multiplexed copy of the database's current control file to copy over the damaged control files. Example:
% cp /oracle/good_cf.f /oracle/dbs/bad_cf.f
4. Start a new instance and mount and open the database:
SQL> STARTUP

Copying a Multiplexed Control File to a Nondefault Location

Assuming that the disk and file system containing the lost control file are NOT INTACT, you CANNOT copy one of the good control files to the location of the missing control file. Alter the CONTROL_FILES initialization parameter to indicate a new location for the missing control file.
STEPS to restore a control file to a nondefault location:
1. If the instance is still running, shut it down:
SQL> SHUTDOWN ABORT
2. Copy the intact control file to a new location. Example: to copy a good version of control01.dbf to a new disk location:
% cp $ORACLE_HOME/oradata/oldlocation/control01.dbf $ORACLE_HOME/oradata/newlocation/control01.dbf
3. Edit the parameter file of the database so that the CONTROL_FILES parameter reflects the current locations of all control files and excludes all control files that were not restored. Example:
1. In the original initialization parameter file:
CONTROL_FILES='/oracle/oradata/trgt/control01.dbf','/bad_disk/control02.dbf'
2. Change it in the initialization parameter file to:
CONTROL_FILES='/oracle/oradata/trgt/control01.dbf','/new_disk/control02.dbf'
4. Start a new instance and mount and open the database:
SQL> STARTUP

Restore Control File from Backup After Loss of All Current Control Files

Status of Online Logs | Status of Datafiles | Restore Procedure
Available | Current | If the online logs contain redo necessary for recovery, then restore a backup control file and apply the logs during recovery. You must specify the filename of the online logs containing the changes in order to open the database. After recovery, open RESETLOGS.
Unavailable | Current | If the online logs contain redo necessary for recovery, then re-create the control file. Because the online redo logs are inaccessible, open RESETLOGS (when the online logs are accessible it is not necessary to OPEN RESETLOGS after recovery with a created control file).
Available | Backup | Restore a backup control file, perform complete recovery, and then open RESETLOGS.
Unavailable | Backup | Restore a backup control file, perform incomplete recovery, and then open RESETLOGS.

Restoring a Backup Control File to the Default Location


If possible, restore the control file to its original location. In this way, you avoid having to specify new control file locations in the initialization parameter file.
STEPS to restore a backup control file to its default location:
1. If the instance is still running, shut it down:
SQL> SHUTDOWN ABORT
2. Correct the hardware problem that caused the media failure.
3. Restore the backup control file to all locations specified in the CONTROL_FILES parameter. For example, if ORACLE_HOME/oradata/trgt/control01.dbf and ORACLE_HOME/oradata/trgt/control02.dbf are the control file locations listed in the server parameter file, then use an operating system utility to restore the backup control file to these locations:
% cp /backup/control01.dbf ORACLE_HOME/oradata/trgt/control01.dbf
% cp /backup/control02.dbf ORACLE_HOME/oradata/trgt/control02.dbf
4. Start a new instance and mount the database:
SQL> STARTUP MOUNT
5. Begin recovery by executing the RECOVER command with the USING BACKUP CONTROLFILE clause. Specify UNTIL CANCEL if you are performing incomplete recovery. Example:
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL
6. Apply the prompted archived logs.
1) If you then receive another message saying that the required archived log is missing, it probably means that a necessary redo record is located in the online redo logs. This situation can occur when unarchived changes were located in the online logs when the instance crashed. For example, assume that you see the following:
ORA-00279: change 55636 generated at 11/08/2002 16:59:47 needed for thread 1
ORA-00289: suggestion : /oracle/work/arc_dest/arcr_1_111.arc
ORA-00280: change 55636 for thread 1 is in sequence #111
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
You can specify the name of an online redo log and press Enter (you may have to try this a few times until you find the correct log):
ORACLE_HOME/oradata/redo01.dbf
Log applied.
Media recovery complete.
2) If the online logs are inaccessible, then you can cancel recovery without applying them. If all datafiles are current, and if redo in the online logs is required for recovery, then you cannot open the database without applying the online logs. If the online logs are inaccessible, then you must re-create the control file, using the procedure described in "Create New Control File After Losing All Current and Backup Control Files".
7. Open the database with the RESETLOGS option after finishing recovery:
SQL> ALTER DATABASE OPEN RESETLOGS;


Restore Control File Using RMAN


Source: http://www.idevelopment.info/

The following examples use Oracle Database 10g and make use of a Recovery Catalog and the Flash Recovery Area (FRA).

1. Restore controlfile from autobackup.
RMAN> restore controlfile from autobackup;

2. Restore controlfile from a specific backup piece.


RMAN> restore controlfile from '/backup_dir/piece_name';

3. Restore controlfile from most recent available controlfile backup.


RMAN> restore controlfile;

The following examples use Oracle Database 10g and do not require the use of a Recovery Catalog or a Flash Recovery Area (FRA). The big difference is the requirement to set the DBID of the database before executing the restore, with the instance in a NOMOUNT state.

4. The following restore assumes the backup used all defaults. If not using a FRA, the autobackup piece should be in $ORACLE_HOME/dbs.

RMAN> set dbid=nnnnnnnnn; RMAN> restore controlfile from autobackup;

5. Restore from autobackup looks at the most recent 7 days of backups by default. If you want to restore an autobackup older than the default, use the 'maxdays' parameter.
RMAN> set dbid=nnnnnnnnn; RMAN> restore controlfile from autobackup maxdays 20;

6. Restore from autobackup, increasing the number of autobackup sequences searched, in case your database generated many autobackups in a given day.
RMAN> set dbid=nnnnnnnnn; RMAN> restore controlfile from autobackup maxseq 10;

7. Restoring from autobackup when the backup location is not default.


RMAN> set dbid=nnnnnnnnn; RMAN> set controlfile autobackup format for device type disk to '/tmp/%F'; RMAN> restore controlfile from autobackup;

8. Restore the controlfile from this specific autobackup.


RMAN> set dbid=nnnnnnnnn; RMAN> restore controlfile from '/tmp/c-1140771490-20080502-03';

9. Restore the controlfile from a specific autobackup file to a temporary disk location, then replicate the temporary controlfile to the respective locations and names given in CONTROL_FILES.
RMAN> set dbid=nnnnnnnnn; RMAN> restore controlfile from '/tmp/c-1140771490-2008050203' to '/tmp/control.tmp'; RMAN> replicate controlfile from '/tmp/control.tmp'

Once you have the controlfile restored and mounted, you have access to your previous backup configuration, which will be used during restore, as well as the backup information required to restore and recover your database. After you mount the controlfile, from Oracle 10.2.x onward you can use the RESTORE PREVIEW command to see which backups will be required to restore and recover, and which checkpoint you must exceed to open the database RESETLOGS.

Recreate Control File

Options

How You Backed Up Control File | How to Recreate Control File
Backed up to a trace file (ALTER DATABASE BACKUP CONTROLFILE TO TRACE NORESETLOGS) after you made the last structural change to the database, and you have saved the SQL command trace output | Use the CREATE CONTROLFILE statement from the trace output as-is.
Backed up to a trace file (ALTER DATABASE BACKUP CONTROLFILE TO TRACE) before you made a structural change to the database | Edit the output of ALTER DATABASE BACKUP CONTROLFILE TO TRACE to reflect the change.
Backed up to a binary file (ALTER DATABASE BACKUP CONTROLFILE TO filename) | Use the control file copy to obtain SQL output: 1) create a temporary database instance, 2) mount the backup control file, 3) run ALTER DATABASE BACKUP CONTROLFILE TO TRACE NORESETLOGS, 4) if the control file copy predated a recent structural change, edit the trace to reflect the change.
No control file backup in either TO TRACE or TO filename format | Execute the CREATE CONTROLFILE statement manually.

STEPS

1. Start the database in NOMOUNT mode. SQL> STARTUP NOMOUNT

2. Create the control file with the CREATE CONTROLFILE statement, specifying the NORESETLOGS option (refer to the options above). Example: CREATE CONTROLFILE REUSE DATABASE SALES NORESETLOGS ARCHIVELOG

MAXLOGFILES 32 MAXLOGMEMBERS 2 MAXDATAFILES 32 MAXINSTANCES 16 MAXLOGHISTORY 1600 LOGFILE GROUP 1 ( '/diska/prod/sales/db/log1t1.dbf', '/diskb/prod/sales/db/log1t2.dbf' ) SIZE 100K GROUP 2 ( '/diska/prod/sales/db/log2t1.dbf', '/diskb/prod/sales/db/log2t2.dbf' ) SIZE 100K, DATAFILE '/diska/prod/sales/db/database1.dbf', '/diskb/prod/sales/db/filea.dbf';

3. After creating the control file, the instance mounts the database. SQL> ALTER DATABASE MOUNT;

4. Recover the database as normal (without specifying the USING BACKUP CONTROLFILE clause): SQL> RECOVER DATABASE

5. Open the database after recovery completes (RESETLOGS option not required): SQL> ALTER DATABASE OPEN;

6. Immediately back up the control file. Example: SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control01.dbf' REUSE;

Fast Incremental Backup

Overview

Optimizes incremental backups by tracking which blocks have changed since the last backup. Oracle Database 10g has integrated change tracking:
* A change tracking file is introduced.
* Changed blocks are tracked as redo is generated.
* Database backup automatically uses the changed block list.
The size of the block change tracking file is proportional to:
- Database size in bytes
- Number of enabled threads in a RAC environment
- Number of old backups maintained by the block change tracking file
The minimum size for the block change tracking file is 10 MB, and any new space is allocated in 10 MB increments. The background process that performs the writes to the change tracking file is called the change tracking writer (CTWR).
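Once change tracking is enabled (see Configuration below), the fast path is used automatically by ordinary incremental backups; no special syntax is needed. A sketch:

```sql
-- Level 0 incremental: the baseline copy of all used blocks.
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Subsequent level 1 incrementals read only the blocks flagged in the
-- change tracking file instead of scanning every block in every datafile.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```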

Configuration
Enable block change tracking

SQL> alter database enable block change tracking;
The block change-tracking file is automatically named and placed in the directory specified by the DB_CREATE_FILE_DEST initialization parameter.
SQL> alter database enable block change tracking using file '/u04/oradata/ord/changetracking/chg01.dbf';
Oracle recommends placing the block change-tracking file on the same disk as the database files; this is automatic if you are using OMF.

Disable block change tracking

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;


Monitor block change tracking

The dynamic performance view V$BLOCK_CHANGE_TRACKING shows where the block change-tracking file is stored, whether it is enabled, and how large it is:
SQL> select * from v$block_change_tracking;
Monitor how effective block change tracking is in minimizing the incremental backup I/O (the PCT_READ_FOR_BACKUP column):
SQL> SELECT file#, avg(datafile_blocks), avg(blocks_read), avg(blocks_read/datafile_blocks) * 100 AS PCT_READ_FOR_BACKUP, avg(blocks) FROM v$backup_datafile WHERE used_change_tracking = 'YES' AND incremental_level > 0 GROUP BY file#;

Fast Recovery Using Switch Database

Overview

The new RMAN command SWITCH DATABASE is the fastest way to recover a database using backup copies of the database: No files are copied, and no files need to be renamed. It takes one command. RMAN> switch database to copy;
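An end-to-end sketch of the technique, assuming image copies of all datafiles already exist (for example in the flash recovery area):

```sql
RMAN> STARTUP MOUNT;
-- Point the control file at the image copies; nothing is restored or copied.
RMAN> SWITCH DATABASE TO COPY;
-- Apply redo to bring the copies current, then open.
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
```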

The downside to this method is that your datafiles are now in the flash recovery area. This may cause problems when you create backups: Now your datafiles and backups are in the same location. At your earliest opportunity, you should migrate the datafiles out of the flash recovery area and create new backups.

Differences Between Restore and Switch

RESTORE DATABASE: The restore process copies the backup data files from their backup location to the location specified in the control file, and the recovery process begins. SWITCH DATABASE: The switch process does not copy any backup data files. Instead, RMAN adjusts the control file so that the data files point to the backup file location, and the recovery process begins.

Automatic Storage Management

What is ASM
Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager. ASM eliminates the need for you to directly manage potentially thousands of Oracle database files.

ASM Installation and Configuration


ASM Installation 'oraenv Does Not Set oracle_home For +asm Instance': Metalink Note:338441.1 Initialization Parameters for ASM Instances

ASM New Features


10g Release 2 New Features
11g Release 1 New Features
11g Release 2 New Features

ASM Administration

ASM Instance
Disk groups and disks
Files
ASM Dynamic Views
ASM Metadata and Internals
ASM File Handling
Migrating Databases from non-ASM to ASM
Migrating Databases from ASM to non-ASM
ASMLIB
ASM Scripts
Gather ASM Metadata

ASM Commands

ASM and SRVCTL
ASMCMD
asm.sh

Why Use ASM

Striping - ASM spreads data evenly across all disks in a disk group to optimize performance and utilization. This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.

Mirroring - ASM can increase availability by optionally mirroring any file. ASM mirrors at the file level, unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies, or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy of each file extent is always kept on a different disk from the original copy. If a disk fails, ASM can continue to access affected files by accessing mirrored copies on the surviving disks in the disk group. ASM supports 2-way mirroring, where each file extent gets one mirrored copy, and 3-way mirroring, where each file extent gets two mirrored copies.

Online storage reconfiguration and dynamic rebalancing - ASM permits you to add or remove disks from your disk storage system while the database is operating. When you add a disk, ASM automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the new disk. This redistribution is known as rebalancing. It is done in the background and with minimal impact to database performance. When you request to remove a disk, ASM first rebalances by evenly relocating all file extents from the disk being removed to the other disks in the disk group.

Managed file creation and deletion - ASM further reduces administration tasks by enabling files stored in ASM disk groups to be Oracle-managed files. ASM automatically assigns filenames when files are created, and automatically deletes files when they are no longer needed.

Key Benefits of ASM


I/O is spread evenly across all available disk drives to prevent hot spots and maximize performance.
ASM eliminates the need for overprovisioning and maximizes storage resource utilization, facilitating database consolidation.
Inherent large file support.
Performs automatic online redistribution after the incremental addition or removal of storage capacity.
Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID functionality.
Supports Oracle Database 10g as well as Oracle Real Application Clusters (RAC).
Capable of leveraging 3rd party multipathing technologies.
For simplicity and easier migration to ASM, an Oracle Database 10g Release 2 database can contain ASM and non-ASM files. Any new files can be created as ASM files while existing files can also be migrated to ASM.
RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
Oracle Database 10g Enterprise Manager can be used to manage ASM disk and file management activities.
ASM reduces Oracle Database 10g cost and complexity without compromising performance or availability.

Note

ASM does not manage binaries, alert logs, trace files, or password files.

11g Release 1 New Features

Scalability and Performance
* Fast Mirror Resynchronization for ASM redundancy disk groups
* Preferred Read for ASM redundancy disk groups
* Support for large allocation units
* Optimized rebalance operations

Rolling upgrade and patching support

New security features
* Separate connect privilege, SYSASM, distinct from SYSDBA

ASM Instance Introduction


ASM doesn't have to be installed in order to install an Oracle database. To use ASM files, there must be at least one ASM instance configured and started prior to starting a database instance that uses ASM files.

As part of the ASM instance startup procedure, the various disk groups and their files are identified. The ASM instance mounts the disks, and then creates an extent map, which is passed to the database instance. The database instance itself is responsible for any actual input/output operations. The ASM instance is only involved during the creation or deletion of files and when disk configurations change (such as dropping or adding a disk). When these types of changes occur, the ASM instance automatically rebalances the disks and provides the necessary information to refresh the extent map in the SGA of the database instance. Of course, this process requires that the ASM instance run concurrently with the database instance, and only shut down after the database instance is closed.

The impact of the ASM instance on performance of the database instance is minimal. The former does not process transactions affecting the individual database objects; therefore, the average SGA allocation needed by the instance is no more than 64MB. Unless the server's memory is already at the maximum recommended operating system/DBMS allocation, 64MB should have no impact on the memory available for the database instance.

New Background Processes of ASM Instance


RBAL: coordinating rebalance activity for disk groups
ARBn: performing the data extent movements
GMON: monitoring operations that maintain ASM metadata inside disk groups

New Background Processes of Database Instance That Uses ASM


RBAL: performing global opens of the disks in the disk groups
ASMB: connecting to foreground processes in ASM instances

ASM Instance Initialization Parameters


INSTANCE_TYPE: Must be set to ASM. This is the only required parameter.
ASM_POWER_LIMIT: The default power for disk rebalancing; controls the speed of a rebalance operation. Default: 1. Range: 0 to 11.
ASM_DISKSTRING: A comma-separated list of strings that limits the set of disks that ASM discovers. May include wildcard characters. Only disks that match one of the strings are discovered. Default: NULL. A NULL value causes ASM to search a default path for all disks in the system to which the ASM instance has read/write access.
ASM_DISKGROUPS: A list of the names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT statement is used. Default: NULL (if this parameter is not specified, then no disk groups are mounted).
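These parameters fit together in a very small parameter file. A minimal init.ora sketch for an ASM instance; the paths and disk group names are illustrative assumptions, not defaults:

```text
# init+ASM.ora -- illustrative values only
INSTANCE_TYPE   = ASM            # the only required parameter
ASM_POWER_LIMIT = 1              # default rebalance power
ASM_DISKSTRING  = '/dev/rdisk/disk*'
ASM_DISKGROUPS  = DATA, FRA      # mounted automatically at startup
```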

The difference between ASM Instance and Regular Instance

While an ASM instance does have an initialization parameter file and a password file, it has no data dictionary, and therefore all connections to an ASM instance are as SYS or SYSTEM using operating system authentication only. Disk group commands such as CREATE DISKGROUP, ALTER DISKGROUP, and DROP DISKGROUP are valid only from an ASM instance. An ASM instance does not mount a database, since it has no control file; instead, it mounts disk groups.

Starting up an ASM Instance


Startup Parameters:

FORCE: Issues a SHUTDOWN ABORT to the ASM instance before restarting it.
MOUNT, OPEN: Mounts the disk groups specified in the ASM_DISKGROUPS initialization parameter. This is the default if no command parameter is specified.
NOMOUNT: Starts up the ASM instance without mounting any disk groups.

Note:

Set the ORACLE_SID environment variable to the ASM SID. The default ASM SID for a single-instance database is +ASM; the default SID for ASM on Real Application Clusters is +ASMnode#.
The initialization parameter file, which can be a server parameter file, must contain: INSTANCE_TYPE = ASM
The STARTUP command tries to mount the disk groups specified by the initialization parameter ASM_DISKGROUPS. If ASM_DISKGROUPS is blank, the ASM instance starts and warns that no disk groups were mounted. You can then mount disk groups with the ALTER DISKGROUP...MOUNT command.

Example:
% sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area  130023424 bytes
Fixed Size                  1976920 bytes
Variable Size             102880680 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

Shutting Down an ASM Instance


Shutdown Mode:

NORMAL, IMMEDIATE, or TRANSACTIONAL: ASM waits for any in-progress SQL to complete before performing an orderly dismount of all disk groups and shutting down the ASM instance. If any database instances are connected to the ASM instance, the SHUTDOWN command returns an error and leaves the ASM instance running.
ABORT: The ASM instance immediately shuts down without an orderly dismount of disk groups. This causes recovery upon the next startup of ASM. If any database instance is connected to the ASM instance, the database instance aborts.

Example:
% sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected to an idle instance.
SQL> shutdown normal
ASM diskgroups dismounted
ASM instance shutdown

Overview of Disk Group


A disk group is a collection of disks managed as a logical unit. Storage is added and removed from disk groups in units of ASM disks. Every ASM disk has an ASM disk name, which is a name common to all nodes in a cluster. Files in a disk group are striped on the disks using either coarse striping or fine striping.

Coarse striping spreads files in units of 1MB each across all disks. Coarse striping is appropriate for a system with a high degree of concurrent small I/O requests, such as an OLTP environment. Fine striping spreads files in units of 128KB and is appropriate for traditional data warehouse environments or OLTP systems with low concurrency; it minimizes response time for individual I/O requests. For files that require low latency, such as log files, ASM provides this fine-grained (128 KB) striping.
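As a quick arithmetic check of the two stripe sizes, the sketch below (file size chosen purely for illustration) computes how many stripe units a 10 MB file occupies under each scheme:

```shell
# Stripe-unit arithmetic for a 10 MB file:
# coarse striping uses 1 MB units, fine striping uses 128 KB units.
file_kb=$((10 * 1024))             # 10 MB expressed in KB
coarse_units=$((file_kb / 1024))   # number of 1 MB coarse stripe units
fine_units=$((file_kb / 128))      # number of 128 KB fine stripe units
echo "coarse=$coarse_units fine=$fine_units"   # coarse=10 fine=80
```

The same file is spread over eight times as many pieces under fine striping, which is why it spreads individual I/O requests more widely.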

Failure Groups and Disk Group Mirroring

A failure group is one or more disks within a disk group that share a common resource, such as a disk controller, whose failure would cause the entire set of disks to be unavailable to the group.

Disk Group Mirroring
  o Mirror at extent level
  o Mix primary and mirror extents on each disk
  o External redundancy: defers to hardware mirroring
  o Normal redundancy: two-way mirroring; at least two failure groups
  o High redundancy: three-way mirroring; at least three failure groups
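The usable-capacity cost of each redundancy level follows directly from the mirroring factor. The sketch below uses made-up numbers (8 disks of 100 GB) to show the raw-to-usable ratio; real usable space is somewhat lower once metadata and free-space reserves are accounted for:

```shell
# Rough usable capacity for 8 x 100 GB disks under each redundancy level.
raw_gb=$((8 * 100))
external=$raw_gb             # external redundancy: no ASM mirroring
normal=$((raw_gb / 2))       # normal redundancy: two-way mirroring
high=$((raw_gb / 3))         # high redundancy: three-way mirroring (integer GB)
echo "external=${external} normal=${normal} high=${high}"
```

This is only the mirroring overhead; it ignores per-disk-group metadata and the space ASM keeps free to allow rebalancing after a disk failure.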

Disk Group Dynamic Rebalancing


Automatic online rebalance occurs whenever the storage configuration changes.
Only data proportional to the storage added is moved.
No need for manual I/O tuning.
Online migration to new storage.
Any impact on ongoing database I/O can be controlled by adjusting the initialization parameter ASM_POWER_LIMIT to a lower value.

Creating A Disk Group


Syntax:
CREATE DISKGROUP diskgroup_name
  [ { HIGH | NORMAL | EXTERNAL } REDUNDANCY ]
  [ FAILGROUP failgroup_name ]
  DISK qualified_disk_clause [, qualified_disk_clause ]...
  [ [ FAILGROUP failgroup_name ]
    DISK qualified_disk_clause [, qualified_disk_clause ]... ]... ;

Example:
SQL> CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  2  FAILGROUP controller1 DISK
  3  '/devices/disk1',
  4  '/devices/disk2',
  5  '/devices/disk3',
  6  '/devices/disk4'
  7  FAILGROUP controller2 DISK
  8  '/devices/disk5',
  9  '/devices/disk6',
 10  '/devices/disk7',
 11  '/devices/disk8';
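With many disks it can be easier to generate the statement than to type it. The shell sketch below only builds the SQL string for a two-failure-group disk group (device paths are illustrative); nothing is executed against ASM:

```shell
# Build a CREATE DISKGROUP statement from two failure-group device lists.
fg1="/devices/disk1 /devices/disk2"
fg2="/devices/disk5 /devices/disk6"
sql="CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY"
sql="$sql FAILGROUP controller1 DISK"
for d in $fg1; do sql="$sql '$d',"; done
sql="${sql%,} FAILGROUP controller2 DISK"   # strip trailing comma
for d in $fg2; do sql="$sql '$d',"; done
sql="${sql%,};"
echo "$sql"
```

The generated statement can then be pasted into SQL*Plus connected to the ASM instance.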

Deleting A Disk Group


Syntax: DROP DISKGROUP diskgroup_name [ { INCLUDING | EXCLUDING } CONTENTS ];

Example: DROP DISKGROUP dgroup1 including contents;

Altering A Disk Group


ALTER DISKGROUP ... ADD FAILGROUP ... DISK
  Adds a disk to a failure group and performs an automatic rebalance.
ALTER DISKGROUP ... REBALANCE POWER
  Changes the power limit for this particular rebalance operation.
ALTER DISKGROUP ... DROP DISK
  Removes a disk from a failure group within a disk group and performs an automatic rebalance.
ALTER DISKGROUP ... UNDROP DISKS
  Cancels the drop of the disk that was dropped. The UNDROP command operates only on pending drops of disks, not after drop completion.
ALTER DISKGROUP ... DROP ... ADD
  Drops a disk from a failure group and adds another disk in the same command.
ALTER DISKGROUP ... MOUNT
  Makes a disk group available to all instances.
ALTER DISKGROUP ... DISMOUNT
  Makes a disk group unavailable to all instances.
ALTER DISKGROUP ... CHECK ALL
  Verifies the internal consistency of the disk group.

Migrating Databases from non-ASM to ASM


Because ASM files cannot be accessed via the operating system, you must use Recovery Manager (RMAN) to move database objects from a non-ASM disk location to an ASM disk group. Follow these steps to move these objects:

1. Note the filenames of the control files and the online redo log files.
2. Shut down the database NORMAL, IMMEDIATE, or TRANSACTIONAL.
3. Back up the database.
4. Edit the SPFILE to use OMF for all file destinations.
5. Edit the SPFILE to remove the CONTROL_FILES parameter.
6. Run the following RMAN script, substituting your specific filenames as needed:
STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM '/u1/c1.ctl';
ALTER DATABASE MOUNT;
BACKUP AS COPY DATABASE FORMAT '+dgroup1';
SWITCH DATABASE TO COPY;
# Repeat command for all online redo log members ...
SQL "ALTER DATABASE RENAME '/u1/log1' TO '+dgroup1' ";
ALTER DATABASE OPEN RESETLOGS;
# Repeat command for all temporary tablespaces
SQL "ALTER TABLESPACE temp ADD TEMPFILE";
SQL "ALTER DATABASE TEMPFILE '/u1/temp1' DROP";

7. Delete or archive the old database files.
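For repeatability, the RMAN script in step 6 can be written out by a small shell wrapper. The control file path and disk group name below are the illustrative values from the example; substitute your own before running anything:

```shell
# Write the step-6 RMAN migration script, parameterized by control file
# path and target disk group (both illustrative values from the example).
ctl=/u1/c1.ctl
dg=+dgroup1
cat > /tmp/migrate_to_asm.rman <<EOF
STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM '$ctl';
ALTER DATABASE MOUNT;
BACKUP AS COPY DATABASE FORMAT '$dg';
SWITCH DATABASE TO COPY;
SQL "ALTER DATABASE RENAME '/u1/log1' TO '$dg' ";
ALTER DATABASE OPEN RESETLOGS;
SQL "ALTER TABLESPACE temp ADD TEMPFILE";
SQL "ALTER DATABASE TEMPFILE '/u1/temp1' DROP";
EOF
wc -l < /tmp/migrate_to_asm.rman
```

Remember that the RENAME and TEMPFILE commands must be repeated for every redo log member and temporary tablespace, as noted in the script comments above.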


Refer to http://www.idevelopment.info/data/Oracle/DBA_tips/Automatic_Storage_Management/ASM_33.shtml for detailed steps.

ASMLIB

Linux OS Service 'oracleasm'

Service Name

oracleasm

Description

The oracleasm service is used to provision, configure, and manage Oracle Automatic Storage Management (ASM) disks via the Oracle Automatic Storage Management library driver (ASMLib). The oracleasm service creates the necessary library interface through which ASM disk devices are made available to the Oracle ASM instance.

Nature

System service

Configuration File

/etc/sysconfig/oracleasm

Oracle Enterprise Linux Version(s)

OEL 4, OEL 5

Requirement

Optional - needed only if operating system-level management and configuration of Oracle ASM disk devices is required. Not needed if Oracle ASM (instance-only) management of ASM group/disk devices is required/preferred.

Service Command

oracleasm

Init Script

/etc/init.d/oracleasm
$ oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    querydisk        Determine if a disk belongs to Oracle ASMLib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMLib disk
    update-driver    Download the latest ASMLib driver
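A typical provisioning run strings several of these commands together. The sketch below is a dry run: it only builds the command strings (disk names and device paths are illustrative), since the real commands must run as root on a host with ASMLib installed:

```shell
# Dry-run sketch of an ASMLib provisioning sequence: build the commands
# as strings instead of executing them. NAME:DEVICE pairs are made up.
disks="VOL1:/dev/sdc1 VOL2:/dev/sdd1"
cmds=""
for pair in $disks; do
  name=${pair%%:*}    # ASMLib disk name (before the colon)
  dev=${pair#*:}      # block device path (after the colon)
  cmds="$cmds/etc/init.d/oracleasm createdisk $name $dev
"
done
# After labeling, rescan and list what ASMLib now knows about.
cmds="${cmds}/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
"
printf '%s' "$cmds"
```

Piping the output through `sudo sh` (after reviewing it) would perform the actual provisioning on a suitably configured host.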

configure

Use the configure option to reconfigure the Automatic Storage Management library driver, if necessary:
# /etc/init.d/oracleasm configure

enable disable

Use the disable and enable options to change the actions of the Automatic Storage Management library driver when the system starts. The enable option causes the Automatic Storage Management library driver to load when the system starts:
# /etc/init.d/oracleasm enable

start stop restart

Use the start, stop, and restart options to load or unload the Automatic Storage Management library driver without restarting the system:
# /etc/init.d/oracleasm restart

createdisk

Use the createdisk option to mark a disk device for use with the Automatic Storage Management library driver and give it a name:
# /etc/init.d/oracleasm createdisk DISKNAME devicename

deletedisk

Use the deletedisk option to unmark a named disk device:


# /etc/init.d/oracleasm deletedisk DISKNAME

Caution: Do not use this command to unmark disks that are being used by an Automatic Storage Management disk group. You must delete the disk from the Automatic Storage Management disk group before you unmark it.
querydisk

Use the querydisk option to determine if a disk device or disk name is being used by the Automatic Storage Management library driver:
# /etc/init.d/oracleasm querydisk {DISKNAME | devicename}

listdisks

Use the listdisks option to list the disk names of marked Automatic Storage Management library driver disks:
# /etc/init.d/oracleasm listdisks

scandisks

Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as Automatic Storage Management library driver disks on another node:
# /etc/init.d/oracleasm scandisks

Q&A

Is /dev/oracleasm created?

When ASMLIB is configured, a special filesystem is created and mounted: /dev/oracleasm.


$ df -ha
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdc2              13G   11G  1.9G  85% /
none                     0     0     0    - /proc
none                     0     0     0    - /dev/pts
usbdevfs                 0     0     0    - /proc/bus/usb
/dev/hdc1             101M   14M   81M  15% /boot
none                  250M     0  250M   0% /dev/shm
/dev/sda1             8.4G  4.8G  3.2G  60% /oradata2
/dev/sde1             8.3G  6.6G  1.4G  84% /oradata3
oracleasmfs              0     0     0    - /dev/oracleasm

When command oracleasm createdisk is executed, a block device is created under /dev/oracleasm/disks. This is the device discovered by ASMLIB using the string ORCL:*.
$ ll /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8,  97 Apr 28 15:20 VOL001
brw-rw---- 1 oracle dba 8,  81 Apr 28 15:20 VOL002
brw-rw---- 1 oracle dba 8,  65 Apr 28 15:20 VOL003
brw-rw---- 1 oracle dba 8,  49 Apr 28 15:20 VOL004
brw-rw---- 1 oracle dba 8,  33 Apr 28 15:20 VOL005
brw-rw---- 1 oracle dba 8,  17 Apr 28 15:20 VOL006
brw-rw---- 1 oracle dba 8, 129 Apr 28 15:20 VOL007
brw-rw---- 1 oracle dba 8, 113 Apr 28 15:20 VOL008

Checking if ASMLIB was installed properly:

[root@arlnx2 asm_tar]# /etc/init.d/oracleasm status
Checking if ASM is loaded:                  [ OK ]
Checking if /dev/oracleasm is mounted:      [ OK ]

If the command fails, use strace and generate a log file:

strace -f -o asm_status.out /etc/init.d/oracleasm status

Additional information to verify the installation can be found in note 269194.1

Listing the ASMLIB disks:

$ /etc/init.d/oracleasm listdisks
VOL001
VOL002
VOL003
VOL004
VOL005
VOL006
VOL007
VOL008
$ ll /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8,  97 Apr 28 15:20 VOL001
brw-rw---- 1 oracle dba 8,  81 Apr 28 15:20 VOL002
brw-rw---- 1 oracle dba 8,  65 Apr 28 15:20 VOL003
brw-rw---- 1 oracle dba 8,  49 Apr 28 15:20 VOL004
brw-rw---- 1 oracle dba 8,  33 Apr 28 15:20 VOL005
brw-rw---- 1 oracle dba 8,  17 Apr 28 15:20 VOL006
brw-rw---- 1 oracle dba 8, 129 Apr 28 15:20 VOL007
brw-rw---- 1 oracle dba 8, 113 Apr 28 15:20 VOL008

You will find an entry under /dev/oracleasm/disks. This is the block device associated with the physical device. If the file exists, the command will return information; if not, please execute:

strace -f -o asm_listd.out /etc/init.d/oracleasm listdisks

How to identify the physical disk bound to the ASMLIB disk.

Use /etc/init.d/oracleasm querydisk <NAME> where NAME is any name under /dev/oracleasm/disks.

[root@arlnx2 asm_tar]# /etc/init.d/oracleasm querydisk -d VOL1 Disk "VOL1" is a valid ASM disk on device [8, 33]

The command reports the device identified with major,minor numbers which are unique numbers associated to each disk. File /proc/partitions can be used to find the name of the device associated with those numbers:

$ more /proc/partitions
major minor  #blocks  name  rio rmerge rsect ruse wio wmerge wsect wuse running use aveq

8     0   8891620 sda   39715 78016 941080 417000 156198 242472 3189752 214180 0 420630 631180
8     1   8891376 sda1  39691 77970 940922 416780 156198 242472 3189752 214180 0 420410 630960
8    16   8891620 sdb      87   250    803    740      0      0       0      0 0    740    740
8    17   8891376 sdb1     57   193    632    480      0      0       0      0 0    480    480
8    32  17783250 sdc     745  2993   8321   8300      0      0       0      0 0   5250   8300
8    33    977904 sdc1     87   139    644   1040      0      0       0      0 0   1040   1040
8    34    977920 sdc2     35   193    456    230      0      0       0      0 0    230    230
8    35         1 sdc3      4     0      8     40      0      0       0      0 0     40     40
8    37    977904 sdc5     57   193    632   1240      0      0       0      0 0   1240   1240
8    38    977904 sdc6     57   193    632   1170      0      0       0      0 0   1170   1170
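The lookup described above can be scripted. The sketch below resolves a [major, minor] pair against a /proc/partitions-style listing; it uses a small inline sample (columns abbreviated) rather than the live file so it runs anywhere:

```shell
# Resolve the [8, 33] pair reported by querydisk to a device name.
# On a live system, point awk at /proc/partitions itself instead of
# this abbreviated sample.
cat > /tmp/partitions.sample <<'EOF'
major minor #blocks name
8 32 17783250 sdc
8 33 977904 sdc1
8 34 977920 sdc2
EOF
dev=$(awk -v maj=8 -v min=33 '$1 == maj && $2 == min {print $4}' /tmp/partitions.sample)
echo "$dev"   # sdc1
```

Matching [8, 33] against the table yields sdc1, which agrees with the querydisk output shown below.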

Also connected as root you can run the same command but referencing the physical device:

[root@arlnx2 dbs]# /etc/init.d/oracleasm querydisk /dev/sdc1 Disk "/dev/sdc1" is marked an ASM disk with the label "VOL1"

Any error on this command will require using strace:

strace -f -o asm_query.out /etc/init.d/oracleasm querydisk <NAME>

How to identify if ASMLIB is used or not

SQL> select path, library from v$asm_disk;

PATH                 LIBRARY
-------------------- -----------------------------------------------------
ORCL:VOL001          ASM Library - Generic Linux, version 2.0.2 (KABI_V2)
ORCL:VOL002          ASM Library - Generic Linux, version 2.0.2 (KABI_V2)
ORCL:VOL003          ASM Library - Generic Linux, version 2.0.2 (KABI_V2)
ORCL:VOL004          ASM Library - Generic Linux, version 2.0.2 (KABI_V2)

When the disks are discovered through their raw device paths instead of through ASMLIB, LIBRARY shows System:

PATH                           LIBRARY
------------------------------ -------
/dev/oracleasm/disks/ASM7      System
/dev/oracleasm/disks/ASM2      System
/dev/oracleasm/disks/ASM1      System
/dev/oracleasm/disks/ASM5      System
/dev/oracleasm/disks/ASM6      System
/dev/oracleasm/disks/ASM4      System
/dev/oracleasm/disks/ASM3      System

Troubleshooting ASM/ASMLIB issues

1) In order to check if the ASMLIB API is correctly configured, please execute the next commands and provide us the output (from each node if this is RAC):
$> cat /etc/*release
$> uname -a
$> rpm -qa | grep oracleasm
$> df -ha

2) Check the discovery path (from each node if this is RAC):


$> /etc/init.d/oracleasm status
$> /usr/sbin/oracleasm-discover
$> /usr/sbin/oracleasm-discover 'ORCL:*'

3) Please check if the ASMLIB devices can be accessed (from each node if this is RAC):
$> /etc/init.d/oracleasm scandisks
$> /etc/init.d/oracleasm listdisks
$> /etc/init.d/oracleasm querydisk <each disk from previous output>
$> ls -l /dev/oracleasm/disks

4) Upload the next files from each node if this is RAC:


=)> /var/log/messages*
=)> /var/log/oracleasm
=)> /etc/sysconfig/oracleasm

5) Please show us the partition table (from each node if this is RAC):
$> cat /proc/partitions

6) If you are using multipath devices (mapper devices or emcpower) then show me the output of:
$> ls -l /dev/mpath/*
$> ls -l /dev/mapper/*
$> ls -l /dev/dm-*
$> ls -l /dev/emcpower*

Or if you have another multipath configuration then list the devices:


$> ls -l /dev/<multi path device name>*

7) Finally connect to your ASM instance, execute the next script and upload me the output file (from each node if this is RAC):
spool asm<#>.html
SET MARKUP HTML ON
set echo on
set pagesize 200
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
select 'THIS ASM REPORT WAS GENERATED AT: ==)> ', sysdate " " from dual;
select 'HOSTNAME ASSOCIATED WITH THIS ASM INSTANCE: ==)> ', MACHINE " "
from v$session where program like '%SMON%';
select * from v$asm_diskgroup;
SELECT * FROM V$ASM_DISK ORDER BY GROUP_NUMBER,DISK_NUMBER;
SELECT * FROM V$ASM_CLIENT;
select * from V$ASM_ATTRIBUTE;
select * from v$asm_operation;
select * from gv$asm_operation;
select * from v$version;
show parameter asm
show parameter cluster
show parameter instance_type
show parameter instance_name
show parameter spfile
show sga
spool off
exit

Gather ASM Metadata


Connect to your ASM instance(s) and execute the next scripts (on each node if this is RAC):

SPOOL ASM_FIRST<instance#>.HTML
SET MARKUP HTML ON
set echo on
set pagesize 200
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
select 'THIS ASM REPORT WAS GENERATED AT: ==)> ', sysdate " " from dual;
select 'HOSTNAME ASSOCIATED WITH THIS ASM INSTANCE: ==)> ', MACHINE " "
from v$session where program like '%SMON%';
select * from v$asm_diskgroup;
SELECT * FROM V$ASM_DISK ORDER BY GROUP_NUMBER,DISK_NUMBER;
SELECT * FROM V$ASM_CLIENT;
select * from V$ASM_ATTRIBUTE;
select * from v$asm_operation;
select * from gv$asm_operation;
select * from v$version;
show parameter asm
show parameter cluster
show parameter instance_type
show parameter instance_name
show parameter spfile
show sga
spool off
exit

SPOOL ASM_SECOND<instance#>.HTML
SET MARKUP HTML ON
SET ECHO ON
SET PAGESIZE 200
SELECT * FROM V$ASM_CLIENT;
SELECT * FROM V$ASM_DISK_STAT ORDER BY GROUP_NUMBER,DISK_NUMBER;
SELECT * FROM V$ASM_DISKGROUP_STAT ORDER BY GROUP_NUMBER;
SELECT * FROM V$ASM_FILE ORDER BY GROUP_NUMBER,FILE_NUMBER;
SELECT * FROM V$ASM_ALIAS ORDER BY GROUP_NUMBER,FILE_NUMBER;
SELECT * FROM V$ASM_TEMPLATE ORDER BY GROUP_NUMBER,ENTRY_NUMBER;
select * from v$version;
show parameter asm
show parameter cluster
show parameter instance_type
show parameter instance_name
show parameter spfile
SPOOL OFF
EXIT

ASM and SRVCTL

Start an ASM instance:


Syntax: srvctl start asm -n node_name [-i asm_instance_name] [-o start_options] [-c <connect_str> | -q]
Example: start an ASM instance on the specified node $srvctl start asm -n linuxnode1

Stop an ASM instance:


Syntax: srvctl stop asm -n node_name [-i asm_instance_name] [-o stop_options] [-c <connect_str> | -q]
Example: stop an ASM instance on the specified node
$srvctl stop asm -n linuxnode1 -o immediate

Add configuration information (OCR data) about an existing ASM instance:


Syntax: srvctl add asm -n node_name -i asm_instance_name -o oracle_home
Example: $srvctl add asm -n linuxnode1 -i +ASM1 -o $ORACLE_HOME

Remove an ASM instance:


Syntax: srvctl remove asm -n node_name [-i asm_instance_name]

Enable an ASM instance:

Syntax: srvctl enable asm -n node_name [-i asm_instance_name]

Disable an ASM instance:


Syntax: srvctl disable asm -n node_name [-i asm_instance_name]
Example: $srvctl disable asm -n linuxnod1 -i +ASM1

Show the configuration of an ASM instance:


Syntax: srvctl config asm -n node_name

Obtain the status of an ASM instance:


Syntax: srvctl status asm -n node_name

ASM Command-Line Utility (ASMCMD)

ASMCMD is a command-line utility that you can use to easily view and manipulate files and directories within Automatic Storage Management (ASM) disk groups. It can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more. ASMCMD works with Automatic Storage Management (ASM) files, directories, and aliases. Before using ASMCMD, you must understand how these common computing concepts apply to the unique ASM environment. The following are some key definitions.

System-generated filename or 'fully qualified filename'

Every file created in ASM gets a system-generated filename, otherwise known as a fully qualified filename. This is analogous to a complete path name in a local file system. An example of a fully qualified filename is the following:

+dgroup2/sample/controlfile/Current.256.541956473

ASM generates filenames according to the following scheme:

+diskGroupName/databaseName/fileType/fileTypeTag.file.incarnation

In the previous fully qualified filename, dgroup2 is the disk group name, sample is the database name, controlfile is the file type, and so on.

Directory

As in other file systems, an ASM directory is a container for files, and it can be part of a tree structure of other directories. The fully qualified filename in fact represents a hierarchy of directories, with the plus sign (+) as the root. In each disk group, ASM creates a directory hierarchy that corresponds to the structure of the fully qualified filenames in the disk group. The directories in this hierarchy are known as system-generated directories. ASMCMD enables you to move up and down this directory hierarchy with the cd (change directory) command. The ASMCMD ls (list directory) command lists the contents of the current directory, while the pwd command prints the name of the current directory. When you start ASMCMD, the current directory is set to root (+). For an ASM instance with two disk groups, dgroup1 and dgroup2, entering an ls command with the root directory as the current directory produces the following output:

ASMCMD> ls
DGROUP1/
DGROUP2/

The following example demonstrates navigating the ASM directory tree (refer to the fully qualified filename shown previously):

ASMCMD> cd +dgroup1/sample/controlfile
ASMCMD> ls
Current.256.541956473
Current.257.541956475

You can also create your own directories as subdirectories of the system-generated directories. You do so with the ALTER DISKGROUP command or with the ASMCMD mkdir command. Your user-created directories can have subdirectories, and you can navigate the hierarchy of both system-generated directories and user-created directories with the cd command. The following example creates the directory mydir in the disk group dgroup1:

ASMCMD> mkdir +dgroup1/mydir

(Note that the directory dgroup1 is a system-generated directory. Its contents represent the contents of the disk group dgroup1.) If you start ASMCMD with the -p flag, ASMCMD always shows the current directory as part of its prompt:

ASMCMD [+] > cd dgroup1/mydir
ASMCMD [+DGROUP1/MYDIR] >

Alias

An alias is a filename that is a reference (or pointer) to a system-generated filename, but with a more user-friendly name. It is similar to a symbolic link in Unix operating systems. You create aliases to make it easier to work with ASM filenames. You can create an alias with an ALTER DISKGROUP command or with the mkalias ASMCMD command. An alias has at a minimum the disk group name as part of its complete path. You can create aliases at the disk group level or in any system-generated or user-created subdirectory. The following are examples of aliases:

+dgroup1/ctl1.f
+dgroup1/sample/ctl1.f
+dgroup1/mydir/ctl1.f

If you run the ASMCMD ls (list directory) command with the -l flag, each alias is listed with the system-generated file that it references:

ctl1.f => +dgroup2/sample/controlfile/Current.256.541956473

Absolute path and Relative path
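The fully qualified filename scheme described earlier can be split mechanically. The sketch below parses the example name into its disk group, database, file type, and file number components:

```shell
# Split a fully qualified ASM filename according to the
# +diskGroupName/databaseName/fileType/fileTypeTag.file.incarnation scheme.
fqn=+dgroup2/sample/controlfile/Current.256.541956473
IFS=/ read -r dg db ftype fname <<EOF
${fqn#+}
EOF
file_no=$(echo "$fname" | cut -d. -f2)   # ASM file number
echo "group=$dg db=$db type=$ftype file#=$file_no"
```

The file number extracted this way is the same number used by views such as v$asm_file and by the X$KFFXP extent queries later in these notes.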

################################################################
# Adding/Removing/Managing the configuration of ASM instances
################################################################

--Use the following syntax to add configuration information about an existing ASM instance:
srvctl add asm -n node_name -i +asm_instance_name -o oracle_home

--Use the following syntax to remove an ASM instance:
srvctl remove asm -n node_name [-i +asm_instance_name]

--Use the following syntax to enable an ASM instance:
srvctl enable asm -n node_name [-i +asm_instance_name]

--Use the following syntax to disable an ASM instance:
srvctl disable asm -n node_name [-i +asm_instance_name]

--Use the following syntax to start an ASM instance:
srvctl start asm -n node_name [-i +asm_instance_name] [-o start_options]

--Use the following syntax to stop an ASM instance:
srvctl stop asm -n node_name [-i +asm_instance_name] [-o stop_options]

--Use the following syntax to show the configuration of an ASM instance:
srvctl config asm -n node_name

--Use the following syntax to obtain the status of an ASM instance:
srvctl status asm -n node_name

--P.S.: For all of the SRVCTL commands in this section for which the -i option
--is not required, if you do not specify an instance name, the command applies
--to all of the ASM instances on the node.

###################################
# Managing DiskGroup inside ASM:
###################################
--Note that adding or dropping disks will initiate a rebalance of the data on the disks.
--The status of these processes can be shown by selecting from v$asm_operation.

--Querying ASM Disk Groups
col name format a25
col DATABASE_COMPATIBILITY format a10
col COMPATIBILITY format a10
select * from v$asm_diskgroup;
--or
select name, state, type, total_mb, free_mb from v$asm_diskgroup;

--Querying ASM Disks
col PATH format a55
col name format a25
select name, path, group_number, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME, WRITE_TIME
from v$asm_disk order by 3,1;
--or
col PATH format a50
col HEADER_STATUS format a12
col name format a25
--select INCARNATION,
select name, path, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE, group_number,
       OS_MB, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME, WRITE_TIME,
       BYTES_READ, BYTES_WRITTEN, REPAIR_TIMER, MOUNT_DATE, CREATE_DATE
from v$asm_disk;

###################################
# TUNING and Analysis
###################################
--Only Performance Statistics
--N.B. Times are in hundredths of a second!
col READ_TIME format 9999999999.99
col WRITE_TIME format 9999999999.99
col BYTES_READ format 99999999999999.99
col BYTES_WRITTEN format 99999999999999.99
select name, STATE, group_number, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME,
       WRITE_TIME, BYTES_READ, BYTES_WRITTEN, REPAIR_TIMER, MOUNT_DATE
from v$asm_disk order by group_number, name;

--Check the number of extents in use per disk inside one disk group.
select max(substr(name,1,30)) group_name,
       count(PXN_KFFXP) extents_per_disk,
       DISK_KFFXP, GROUP_KFFXP
from x$kffxp, v$ASM_DISKGROUP gr
where GROUP_KFFXP=&group_nr
  and GROUP_KFFXP=GROUP_NUMBER
group by GROUP_KFFXP, DISK_KFFXP
order by GROUP_KFFXP, DISK_KFFXP;

--Find the file distribution between disks
SELECT * FROM v$asm_alias WHERE name='PWX_DATA.272.669293645';

SELECT GROUP_KFFXP Group#, DISK_KFFXP Disk#, AU_KFFXP AU#, XNUM_KFFXP Extent#
FROM X$KFFXP
WHERE number_kffxp=(SELECT file_number FROM v$asm_alias
                    WHERE name='PWX_DATA.272.669293645');
--or
SELECT GROUP_KFFXP Group#, DISK_KFFXP Disk#, AU_KFFXP AU#, XNUM_KFFXP Extent#
FROM X$KFFXP
WHERE number_kffxp=&DataFile_Number;
--or
select d.name, XV.GROUP_KFFXP Group#, XV.DISK_KFFXP Disk#, XV.NUMBER_KFFXP File_Number,
       XV.AU_KFFXP AU#, XV.XNUM_KFFXP Extent#, XV.ADDR, XV.INDX, XV.INST_ID,
       XV.COMPOUND_KFFXP, XV.INCARN_KFFXP, XV.PXN_KFFXP, XV.XNUM_KFFXP,
       XV.LXN_KFFXP, XV.FLAGS_KFFXP, XV.CHK_KFFXP, XV.SIZE_KFFXP
from v$asm_disk d, X$KFFXP XV
where d.GROUP_NUMBER=XV.GROUP_KFFXP
  and d.DISK_NUMBER=XV.DISK_KFFXP
  and number_kffxp=&File_NUM
order by 2,3,4;

--List the hierarchical tree of files stored in the diskgroup
SELECT concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path
FROM (SELECT g.name gname, a.parent_index pindex, a.name aname,
             a.reference_index rindex
      FROM v$asm_alias a, v$asm_diskgroup g
      WHERE a.group_number = g.group_number)
START WITH (mod(pindex, power(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex;

###################################
# Create and Modify Disk Group
###################################
create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/oraASMdg/fra1'
  ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

alter diskgroup FRA1 check all;

--on +ASM2:
alter diskgroup FRA1 mount;

--Add a second disk: alter diskgroup FRA1 add disk '/dev/vx/rdsk/oraASMdg/fra2';

--Add several disks with a wildcard: alter diskgroup FRA1 add disk '/dev/vx/rdsk/oraASMdg/fra*';

--Remove a disk from a diskgroup: alter diskgroup FRA1 drop disk 'FRA1_0002';

--Drop the entire DiskGroup
drop diskgroup DATA1 including contents;

--How to DROP the entire DiskGroup when it is in NOMOUNT status:
--Generate the dd command which will reset the header of all the
--disks belonging to GROUP_NUMBER=0!!!!
select 'dd if=/dev/zero of=''' ||PATH||''' bs=8192 count=100'
from v$asm_disk where GROUP_NUMBER=0;
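The same dd commands can be generated directly in the shell from a device list. The paths below are the illustrative VxVM ones from this section, and as with the SQL version, verify every device carefully before actually running dd against it:

```shell
# Generate (but do not run) header-wipe dd commands for candidate devices.
# Device paths are illustrative; wiping the wrong disk destroys data.
cmds=$(for path in /dev/vx/rdsk/oraASMdg/fra1 /dev/vx/rdsk/oraASMdg/fra2; do
  echo "dd if=/dev/zero of='$path' bs=8192 count=100"
done)
echo "$cmds"
```

Review the printed commands and execute them one by one as root only once you are certain each device really belongs to the disk group being destroyed.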

select * from v$asm_operation;
---------------------------------------------------------------------------
alter diskgroup FRA1 drop disk 'FRA1_0002';
alter diskgroup FRA1 add disk '/dev/vx/rdsk/fra1dg/fra3';

alter diskgroup FRA1 drop disk 'FRA1_0003';
alter diskgroup FRA1 add disk '/dev/vx/rdsk/fra1dg/fra4';

--When a new diskgroup is created, it is only mounted on the local instance,

--and only the instance-specific entry for the asm_diskgroups parameter is updated.
--By manually mounting the diskgroup on other instances, the asm_diskgroups parameter
--on those instances is updated.
--on +ASM1:
create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/fradg/fra1'
  ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';
--on +ASM2:
alter diskgroup FRA1 mount;

--It works even for ongoing rebalances!
alter diskgroup DATA1 rebalance power 10;

################################################################
# New ASM Command Line Utility (ASMCMD) Commands and Options
################################################################
ASMCMD Command Reference:

Command       Description
----------    -----------
cd            Changes the current directory to the specified directory.
cp            Enables you to copy files between ASM disk groups on a local instance and remote instances.
du            Displays the total disk space occupied by ASM files in the specified ASM directory and all of its subdirectories, recursively.
exit          Exits ASMCMD.
find          Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
help          Displays the syntax and description of ASMCMD commands.
ls            Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct          Lists information about current ASM clients.
lsdg          Lists all disk groups and their attributes.
lsdsk         Lists disks visible to ASM.
md_backup     Creates a backup of all of the mounted disk groups.
md_restore    Restores disk groups from a backup.
mkalias       Creates an alias for system-generated filenames.
mkdir         Creates ASM directories.
pwd           Displays the path of the current ASM directory.
remap         Repairs a range of physical blocks on a disk.
rm            Deletes the specified ASM files or directories.
rmalias       Deletes the specified alias, retaining the file that the alias points to.

--kfed tool, run from the Unix prompt, for reading an ASM disk header:
kfed read /dev/vx/rdsk/fra1dg/fra1

################################################################
# CREATE and Manage Tablespaces and Datafiles on ASM
################################################################
CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;

ALTER TABLESPACE sysaux ADD DATAFILE '+disk_group_1' SIZE 100M;

ALTER DATABASE DATAFILE '+DATA1/dbname/datafile/audit.259.668957419' RESIZE 150M;

-------------------------
create diskgroup DATA1 external redundancy disk '/dev/vx/rdsk/oraASMdg/fra1'
  ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

select 'alter diskgroup DATA1 add disk ''' || PATH || ''';'
from v$asm_disk where GROUP_NUMBER=0 and rownum<=&Num_Disks_to_add;

select 'alter diskgroup FRA1 add disk ''' || PATH || ''';'
from v$asm_disk where GROUP_NUMBER=0 and rownum<=&Num_Disks_to_add;

--Remove ASM header
select 'dd if=/dev/zero of=''' ||PATH||''' bs=8192 count=100'
  from v$asm_disk where GROUP_NUMBER=0;

###################################################
###### 11gR2 GRID Installation on Red Hat Enterprise 5 ######
###################################################

#List of Operating System packages:
binutils-2.17.50.0.6-6.el5 (x86_64)
compat-libstdc++-33-3.2.3-61 (x86_64)    <<< both ARCHs are required
compat-libstdc++-33-3.2.3-61 (i386)      <<< both ARCHs are required
elfutils-libelf-0.125-3.el5 (x86_64)
glibc-2.5-24 (x86_64)                    <<< both ARCHs are required
glibc-2.5-24 (i686)                      <<< both ARCHs are required
glibc-common-2.5-24 (x86_64)
ksh-20060214-1.7 (x86_64)
libaio-0.3.106-3.2 (x86_64)              <<< both ARCHs are required
libaio-0.3.106-3.2 (i386)                <<< both ARCHs are required
libgcc-4.1.2-42.el5 (i386)               <<< both ARCHs are required
libgcc-4.1.2-42.el5 (x86_64)             <<< both ARCHs are required
libstdc++-4.1.2-42.el5 (x86_64)          <<< both ARCHs are required
libstdc++-4.1.2-42.el5 (i386)            <<< both ARCHs are required
make-3.81-3.el5 (x86_64)
elfutils-libelf-devel-0.125-3.el5.x86_64.rpm
elfutils-libelf-devel-static-0.125-3.el5.x86_64.rpm
(elfutils-libelf-devel and elfutils-libelf-devel-static)
glibc-headers-2.5-24.x86_64.rpm
kernel-headers-2.6.18-92.el5.x86_64.rpm
glibc-devel-2.5-24.x86_64.rpm            <<< both ARCHs are required
glibc-devel-2.5-24.i386.rpm              <<< both ARCHs are required
gcc-4.1.2-42.el5.x86_64.rpm
libgomp-4.1.2-42.el5.x86_64.rpm
libstdc++-devel-4.1.2-42.el5.x86_64.rpm
gcc-c++-4.1.2-42.el5.x86_64.rpm
libaio-devel-0.3.106-3.2.x86_64.rpm      <<< both ARCHs are required
libaio-devel-0.3.106-3.2.i386.rpm        <<< both ARCHs are required
sysstat-7.0.2-1.el5.x86_64.rpm
unixODBC-2.2.11-7.1.x86_64.rpm           <<< both ARCHs are required
unixODBC-2.2.11-7.1.i386.rpm             <<< both ARCHs are required
unixODBC-devel-2.2.11-7.1.x86_64.rpm     <<< both ARCHs are required
unixODBC-devel-2.2.11-7.1.i386.rpm       <<< both ARCHs are required
#ASMLIB packages
#Platform dependent but kernel independent
oracleasm-support-2.1.3-1.SLE10.x86_64.rpm
oracleasmlib-2.0.4-1.SLE10.x86_64.rpm

#Platform and kernel dependent
oracleasm-2.6.16.46-0.12-smp-2.0.3-1.x86_64.rpm
oracleasm-2.6.16.46-0.12-default-2.0.3-1.x86_64.rpm

###################################################
# ASMLib Configuration

[root@linux1 /]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration   [ OK ]
Creating /dev/oracleasm mount point               [ OK ]
Loading module "oracleasm"                        [ OK ]
Mounting ASMlib driver filesystem                 [ OK ]
Scanning system for ASM disks                     [ OK ]

###################################################
# Users and Groups Creation

-- groups
[root@linux1 /]# /usr/sbin/groupadd -g 1000 oinstall
[root@linux1 /]# /usr/sbin/groupadd -g 1001 asmadmin
[root@linux1 /]# /usr/sbin/groupadd -g 1002 dba
[root@linux1 /]# /usr/sbin/groupadd -g 1003 asmdba
[root@linux1 /]# /usr/sbin/groupadd -g 1004 asmoper

-- users
[root@linux1 /]# useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
[root@linux1 /]# useradd -u 1101 -g oinstall -G asmdba,dba oracle

###################################################
# Set resource limits

[root@linux1 /]# vi /etc/security/limits.conf
## Go to the end
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536

[root@linux1 /]# vi /etc/pam.d/login
session required pam_limits.so

###################################################
# User Profile

[root@linux1 /]# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
if [ $USER = "root" ]; then
  umask 022
fi

-------------------------------------
Additional checks for user profiles

Before the installation:
- Unset any JAVA environment variables such as JAVA_HOME.
- Unset any ORACLE environment variables such as ORACLE_HOME, PATH, LD_LIBRARY_PATH.
- Set ORACLE_BASE.

After the installation:
- Set ORACLE_HOME, and include $ORACLE_HOME/bin at the beginning of the PATH string.

###################################################
# Network configuration

- SCAN Listener component, which needs three IPs registered in the DNS, belonging to the same subnet used by the public NICs.
- VIPs and private IPs, as per this example from the /etc/hosts of one of the nodes:

10.0.1.10     linux1.emilianofusaglia.net      linux1
10.0.1.11     linux2.emilianofusaglia.net      linux2
10.0.1.12     linux1-vip.emilianofusaglia.net  linux1-vip
10.0.1.13     linux2-vip.emilianofusaglia.net  linux2-vip
192.168.1.10  linux1-priv
192.168.1.11  linux2-priv
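As a quick sanity check on the SCAN requirement (three IPs in the public subnet), a shell sketch that verifies a hypothetical set of SCAN addresses all share one /24 prefix with the public network:

```shell
# Sketch: verify that three hypothetical SCAN IPs sit in the same /24
# subnet as the public network (10.0.1.0/24 in the example hosts file).
scan_ips="10.0.1.20 10.0.1.21 10.0.1.22"   # hypothetical SCAN addresses
same_subnet=yes
prefix=""
for ip in $scan_ips; do
  p=${ip%.*}                               # strip the host octet -> /24 prefix
  if [ -z "$prefix" ]; then prefix=$p; fi
  if [ "$p" != "$prefix" ]; then same_subnet=no; fi
done
echo "prefix=$prefix same_subnet=$same_subnet"
```

This only checks a /24 boundary; for other netmasks compare the network addresses computed from the actual mask instead.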

###################################################
# Kernel Parameters

# Disable response to broadcasts.
# You do not want yourself becoming a Smurf amplifier.
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Enable route verification on all interfaces
net.ipv4.conf.all.rp_filter = 1
# Enable IPv6 forwarding

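The kernel.shmall and kernel.shmmax sizing rules used in this section (all of physical RAM expressed in pages; half of RAM but not more than 4 GB) are easy to derive with shell arithmetic. A sketch assuming a hypothetical 8 GiB host and 4 KiB pages (on a real system use `getconf PHYS_PAGES` and `getconf PAGE_SIZE`):

```shell
# Sketch: derive kernel.shmall / kernel.shmmax from the sizing rules.
# The 8 GiB RAM figure and 4 KiB page size are assumptions for the example.
ram_bytes=$((8 * 1024 * 1024 * 1024))
page_size=4096
shmall=$((ram_bytes / page_size))        # all of RAM, expressed in pages
half_ram=$((ram_bytes / 2))
cap=$((4 * 1024 * 1024 * 1024))          # 4 GB ceiling for shmmax
if [ "$half_ram" -lt "$cap" ]; then shmmax=$half_ram; else shmmax=$cap; fi
echo "kernel.shmall = $shmall"           # 2097152 pages
echo "kernel.shmmax = $shmmax"           # 4294967296 bytes
```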
#net.ipv6.conf.all.forwarding = 1
# Set defaults for BladeFrame
# Added for Oracle 11g
kernel.shmall = physical RAM size / pagesize
kernel.shmmax = 1/2 of physical RAM, but not greater than 4GB
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 512 x processes (for example 6815744 for 13312 processes)
net.ipv4.ip_local_port_range = 9000 65500
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
vm.hugetlb_shm_group = 64948

# MIN UDP CONFIG to review according to interconnect traffic & config
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

###################################################
# Create disk partitions and ASM disks

linux1:/u01 # fdisk -l /dev/sdh

Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End     Blocks   Id  System
/dev/sdh1            1   4700   37747712   83  Linux

linux1:/u01 # fdisk -l /dev/sdi

Disk /dev/sdi: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End     Blocks   Id  System
/dev/sdi1            1   4700   37747712   83  Linux

linux1:/u01 # fdisk -l /dev/sdj

Disk /dev/sdj: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End     Blocks   Id  System
/dev/sdj1            1   4699   37744686   83  Linux

linux1:/u01 # fdisk /dev/sdh

The number of cylinders for this disk is set to 4699.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): d
Selected partition 1

Command (m for help): p

Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End     Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4699, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-4699, default 4699): +1024M

Command (m for help): p

Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End     Blocks   Id  System
/dev/sdh1            1    125    1004031   83  Linux

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (126-4699, default 126):
Using default value 126
Last cylinder or +size or +sizeM or +sizeK (126-4699, default 4699):
Using default value 4699

Command (m for help): p

Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start    End     Blocks   Id  System
/dev/sdh1            1    125    1004031   83  Linux
/dev/sdh2          126   4699   36740655   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

linux1:/u01 # partprobe

-----------------------------------------------
Once the disks have been sliced, the ASM disks can be created as shown in the example below.

linux1:/u01 # /etc/init.d/oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
linux1:/u01 # /etc/init.d/oracleasm createdisk OCR1 /dev/sdh1
Marking disk "/dev/sdh1" as an ASM disk: done
linux1:/u01 # /etc/init.d/oracleasm createdisk DATA1 /dev/sdh2
Marking disk "/dev/sdh2" as an ASM disk: done
linux1:/u01 # /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: done
linux1:/u01 #

-------------------------------------------------
After having created all the ASM disks, run the scandisks utility on all nodes of the cluster; this allows ASM to discover all the new ASM disks.

linux2:/dev/oracleasm/disks # /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: done
linux2:/dev/oracleasm/disks # /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3
linux2:/dev/oracleasm/disks #

###################################################
# Create the installation directories

#Grid Home
mkdir -p /u01/GRID/11.2
chown -R grid:oinstall /u01/GRID/11.2
chmod -R 775 /u01/GRID/11.2

#Oracle Base

mkdir -p /u01/oracle
chown -R oracle:oinstall /u01/oracle
chmod -R 775 /u01/oracle

#Oracle Home
mkdir -p /u01/oracle/product/11.2
chown -R oracle:oinstall /u01/oracle/product/11.2
chmod -R 775 /u01/oracle/product/11.2

###################################################
# Run the Cluster Verification Utility
/u01/stage/grid > ./runcluvfy.sh stage -pre crsinst -n linux1,linux2 -verbose

###################################################
# Start the Installation
/u01/stage/grid > ./runInstaller

################################################################
# How to restore OCR and Voting Disk after Diskgroup Corruption on Oracle 11gR2
################################################################

--Location and status of OCR before starting the test:
root@host1:/u01/GRID/11.2/cdata # /u01/GRID/11.2/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2744
         Available space (kbytes) :     259376
         ID                       :  401168391
         Device/File Name         : +OCRVOTING
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
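As a side note, the space figures in the ocrcheck report are self-consistent: available = total - used. A trivial cross-check with the values above:

```shell
# Sketch: cross-check the ocrcheck space accounting (values in kbytes
# copied from the report above).
total=262120
used=2744
available=$((total - used))
echo "available=$available"   # 259376, matching 'Available space (kbytes)'
```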

--Check the existence of backups:
root@host1:/root # /u01/GRID/11.2/bin/ocrconfig -showbackup

host1  2010/01/21 14:17:54  /u01/GRID/11.2/cdata/cluster01/backup00.ocr
host1  2010/01/21 05:58:31  /u01/GRID/11.2/cdata/cluster01/backup01.ocr
host1  2010/01/21 01:58:30  /u01/GRID/11.2/cdata/cluster01/backup02.ocr
host1  2010/01/20 05:58:21  /u01/GRID/11.2/cdata/cluster01/day.ocr
host1  2010/01/14 23:12:07  /u01/GRID/11.2/cdata/cluster01/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
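When scripting a restore, the newest automatic backup can be picked out of ocrconfig -showbackup style output with plain text tools. A sketch over a hypothetical inline copy of the listing above:

```shell
# Sketch: select the most recent OCR backup path from showbackup-style
# output (node, date, time, path). Hypothetical inline copy of the listing.
backups="host1 2010/01/21 14:17:54 /u01/GRID/11.2/cdata/cluster01/backup00.ocr
host1 2010/01/21 05:58:31 /u01/GRID/11.2/cdata/cluster01/backup01.ocr
host1 2010/01/20 05:58:21 /u01/GRID/11.2/cdata/cluster01/day.ocr"
# Sort by date then time, newest first, and keep the path column.
newest=$(printf '%s\n' "$backups" | sort -r -k2,2 -k3,3 | head -n 1 | awk '{print $4}')
echo "$newest"
```

The resulting path is what would be fed to ocrconfig -restore, as done later in this section.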

--Identify all the disks belonging to the disk group +OCRVOTING:

NAME                 PATH
-------------------  ------------------------
OCRVOTING_0000       /dev/oracle/asm.25.lun
OCRVOTING_0001       /dev/oracle/asm.26.lun
OCRVOTING_0002       /dev/oracle/asm.27.lun
OCRVOTING_0003       /dev/oracle/asm.28.lun
OCRVOTING_0004       /dev/oracle/asm.29.lun

5 rows selected.

--Corrupt the disks belonging to the disk group +OCRVOTING:
dd if=/tmp/corrupt_disk of=/dev/oracle/asm.25.lun bs=1024 count=1000
dd if=/tmp/corrupt_disk of=/dev/oracle/asm.26.lun bs=1024 count=1000
dd if=/tmp/corrupt_disk of=/dev/oracle/asm.27.lun bs=1024 count=1000
dd if=/tmp/corrupt_disk of=/dev/oracle/asm.28.lun bs=1024 count=1000
dd if=/tmp/corrupt_disk of=/dev/oracle/asm.29.lun bs=1024 count=1000

--OCR check after corruption:
root@host1:/tmp # /u01/GRID/11.2/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2712
         Available space (kbytes) :     259408
         ID                       :  701409037
         Device/File Name         : +OCRVOTING
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

--Stop and start of the database instance after corruption:
oracle@host1:/u01/oracle/data $ srvctl stop instance -d DB -i DB1
oracle@host1:/u01/oracle/data $ srvctl start instance -d DB -i DB1

--Stop and start the entire cluster:
--host1:
root@host1:/tmp # /u01/GRID/11.2/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host1'
CRS-2673: Attempting to stop 'ora.crsd' on 'host1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host1'
CRS-2673: Attempting to stop 'ora.OCRVOTING.dg' on 'host1'
CRS-2673: Attempting to stop 'ora.db.db' on 'host1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.host1.vip' on 'host1'
CRS-2677: Stop of 'ora.host1.vip' on 'host1' succeeded
CRS-2677: Stop of 'ora.OCRVOTING.dg' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host1'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host1'
CRS-2673: Attempting to stop 'ora.host2.vip' on 'host1'
CRS-2677: Stop of 'ora.scan2.vip' on 'host1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'host1' succeeded
CRS-2677: Stop of 'ora.host2.vip' on 'host1' succeeded
CRS-2677: Stop of 'ora.db.db' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'host1'
CRS-2673: Attempting to stop 'ora.FRA1.dg' on 'host1'
CRS-2677: Stop of 'ora.DATA1.dg' on 'host1' succeeded
CRS-2677: Stop of 'ora.FRA1.dg' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host1'
CRS-2677: Stop of 'ora.asm' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host1'
CRS-2673: Attempting to stop 'ora.eons' on 'host1'
CRS-2677: Stop of 'ora.ons' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host1'
CRS-2677: Stop of 'ora.net1.network' on 'host1' succeeded
CRS-2677: Stop of 'ora.eons' on 'host1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host1' has completed
CRS-2677: Stop of 'ora.crsd' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host1'
CRS-2673: Attempting to stop 'ora.evmd' on 'host1'
CRS-2673: Attempting to stop 'ora.asm' on 'host1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'host1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host1' succeeded
CRS-2677: Stop of 'ora.asm' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host1'
CRS-2677: Stop of 'ora.cssd' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'host1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'host1'
CRS-2677: Stop of 'ora.gipcd' on 'host1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'host1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

--host2:
root@host2:/root # /u01/GRID/11.2/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host2'
CRS-2673: Attempting to stop 'ora.crsd' on 'host2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'host2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'host2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host2'
CRS-2673: Attempting to stop 'ora.OCRVOTING.dg' on 'host2'
CRS-2673: Attempting to stop 'ora.db.db' on 'host2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'host2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host2'
CRS-2677: Stop of 'ora.scan2.vip' on 'host2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'host1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.host2.vip' on 'host2'
CRS-2677: Stop of 'ora.scan3.vip' on 'host2' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'host1'
CRS-2677: Stop of 'ora.host2.vip' on 'host2' succeeded
CRS-2672: Attempting to start 'ora.host2.vip' on 'host1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host2'
CRS-2677: Stop of 'ora.scan1.vip' on 'host2' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'host1' succeeded
CRS-2676: Start of 'ora.scan3.vip' on 'host1' succeeded
CRS-2676: Start of 'ora.host2.vip' on 'host1' succeeded
CRS-2677: Stop of 'ora.OCRVOTING.dg' on 'host2' succeeded
CRS-2677: Stop of 'ora.db.db' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'host2'
CRS-2673: Attempting to stop 'ora.FRA1.dg' on 'host2'
CRS-2677: Stop of 'ora.DATA1.dg' on 'host2' succeeded
CRS-2677: Stop of 'ora.FRA1.dg' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host2'
CRS-2677: Stop of 'ora.asm' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host2'
CRS-2673: Attempting to stop 'ora.eons' on 'host2'
CRS-2677: Stop of 'ora.ons' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host2'
CRS-2677: Stop of 'ora.net1.network' on 'host2' succeeded
CRS-2677: Stop of 'ora.eons' on 'host2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host2' has completed
CRS-2677: Stop of 'ora.crsd' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host2'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host2'
CRS-2673: Attempting to stop 'ora.evmd' on 'host2'
CRS-2673: Attempting to stop 'ora.asm' on 'host2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'host2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host2' succeeded
CRS-2677: Stop of 'ora.asm' on 'host2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host2'
CRS-2677: Stop of 'ora.cssd' on 'host2' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'host2'
CRS-2673: Attempting to stop 'ora.gipcd' on 'host2'
CRS-2677: Stop of 'ora.gipcd' on 'host2' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'host2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

--host1
root@host1:/root # /u01/GRID/11.2/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

--host2
root@host2:/u01/GRID/11.2/cdata/cluster01 # /u01/GRID/11.2/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

--CRS Alert log: (Start failed because the Diskgroup is not available) 2010-01-21 16:29:07.785 [cssd(10123)]CRS-1705:Found 0 configured voting files but 1 voting files are required, terminating to ensure data integrity; details at (:CSSNM00065:) in /u01/GRID/11.2/log/host1/cssd/ocssd.log 2010-01-21 16:29:07.785 [cssd(10123)]CRS-1603:CSSD on node host1 shutdown by user. 2010-01-21 16:29:07.918 [ohasd(9931)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'host1'. 2010-01-21 16:30:05.489 [/u01/GRID/11.2/bin/orarootagent.bin(10113)]CRS-5818:Aborted command 'start for resource: ora.diskmon 1 1' for resource 'ora.diskmon'. Details at (:CRSAGF00113:) in /u01/GRID/11.2/log/host1/agent/ohasd/orarootagent_root/orarootagent_root.log. 2010-01-21 16:30:09.504 [ohasd(9931)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.diskmon'. Details at (:CRSPE00111:) in /u01/GRID/11.2/log/host1/ohasd/ohasd.log. 2010-01-21 16:30:20.687 [cssd(10622)]CRS-1713:CSSD daemon is started in clustered mode 2010-01-21 16:30:21.801 [cssd(10622)]CRS-1705:Found 0 configured voting files but 1 voting files are required,

terminating to ensure data integrity; details at (:CSSNM00065:) in /u01/GRID/11.2/log/host1/cssd/ocssd.log 2010-01-21 16:30:21.801 [cssd(10622)]CRS-1603:CSSD on node host1 shutdown by user.

--Stop CRS on host1 because, due to the voting disk unavailability, it is not running properly:
root@host1:/tmp # /u01/GRID/11.2/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host1'
CRS-2673: Attempting to stop 'ora.crsd' on 'host1'
CRS-4548: Unable to connect to CRSD
CRS-2675: Stop of 'ora.crsd' on 'host1' failed
CRS-2679: Attempting to clean 'ora.crsd' on 'host1'
CRS-4548: Unable to connect to CRSD
CRS-2678: 'ora.crsd' on 'host1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'host1' has failed
CRS-4687: Shutdown command has completed with error(s).
CRS-4000: Command Stop failed, or completed with errors.

--Because not all of the processes are stopping, disable the cluster autostart and reboot
--the server to clean up all the pending processes.
root@host1:/tmp # /u01/GRID/11.2/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
root@host1:/tmp # reboot

--Start the cluster in EXCLUSIVE mode in order to recreate the ASM diskgroup:
root@host1:/root # /u01/GRID/11.2/bin/crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.gipcd' on 'host1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'host1'
CRS-2676: Start of 'ora.gipcd' on 'host1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host1'
CRS-2676: Start of 'ora.gpnpd' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host1'
CRS-2676: Start of 'ora.cssdmonitor' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host1'
CRS-2679: Attempting to clean 'ora.diskmon' on 'host1'
CRS-2681: Clean of 'ora.diskmon' on 'host1' succeeded

CRS-2672: Attempting to start 'ora.diskmon' on 'host1'
CRS-2676: Start of 'ora.diskmon' on 'host1' succeeded
CRS-2676: Start of 'ora.cssd' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host1'
CRS-2676: Start of 'ora.ctssd' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host1'
CRS-2676: Start of 'ora.asm' on 'host1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host1'
CRS-2676: Start of 'ora.crsd' on 'host1' succeeded

--Stop ASM and restart it using a pfile, for example:
*.asm_diskgroups='DATA1','FRA1'
*.asm_diskstring='/dev/oracle/asm*'
*.diagnostic_dest='/u01/oracle'
+ASM1.instance_number=1
+ASM2.instance_number=2
*.instance_type='asm'
*.large_pool_size=12M
*.processes=500
*.sga_max_size=1G
*.sga_target=1G
*.shared_pool_size=300M

--Recreate the ASM diskgroup.
--This command FAILS because asmca is not able to update the OCR:
asmca -silent -createDiskGroup -diskGroupName OCRVOTING \
  -disk '/dev/oracle/asm.25.lun' -disk '/dev/oracle/asm.26.lun' \
  -disk '/dev/oracle/asm.27.lun' -disk '/dev/oracle/asm.28.lun' \
  -disk '/dev/oracle/asm.29.lun' -redundancy HIGH \
  -compatible.asm '11.2.0.0.0' -compatible.rdbms '11.2.0.0.0' -compatible.advm '11.2.0.0.0'

--Create the diskgroup using SQL*Plus, then save the ASM spfile inside it:
SQL> create diskgroup OCRVOTING high redundancy
  disk '/dev/oracle/asm.25.lun',
       '/dev/oracle/asm.26.lun',
       '/dev/oracle/asm.27.lun',
       '/dev/oracle/asm.28.lun',
       '/dev/oracle/asm.29.lun'
  ATTRIBUTE 'compatible.asm'='11.2.0.0.0',
            'compatible.rdbms'='11.2.0.0.0';

SQL> create spfile='+OCRVOTING' from pfile='/tmp/asm_pfile.ora';

File created.

SQL> shut immediate
ASM diskgroups dismounted

ASM instance shutdown
SQL> startup
ASM instance started

Total System Global Area 1069252608 bytes
Fixed Size                  2154936 bytes
Variable Size            1041931848 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

--Restore OCR from backup:
root@host1:/root # /u01/GRID/11.2/bin/ocrconfig -restore /u01/GRID/11.2/cdata/cluster01/backup00.ocr

--Check the OCR status after restore:
root@host1:/root # /u01/GRID/11.2/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2712
         Available space (kbytes) :     259408
         ID                       :  701409037
         Device/File Name         : +OCRVOTING
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

--Restore the Voting Disk:
root@host1:/root # /u01/GRID/11.2/bin/crsctl replace votedisk +OCRVOTING
Successful addition of voting disk 7s16f9fbf4b64f74bfy0ee8826f15eb4.
Successful addition of voting disk 9k6af49d3cd54fc5bf28a2fc3899c8c6.
Successful addition of voting disk 876eb99563924ff6bfc1defe6865deeb.
Successful addition of voting disk 12230b5ef41f4fc2bf2cae957f765fb0.
Successful addition of voting disk 47812b7f6p034f33bf13490e6e136b8b.
Successfully replaced voting disk group with +OCRVOTING.
CRS-4266: Voting file(s) successfully replaced

--Re-enable CRS autostart:
root@host1:/root # /u01/GRID/11.2/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.

--Stop CRS on host1:
root@host1:/root # /u01/GRID/11.2/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host1'
CRS-2673: Attempting to stop 'ora.crsd' on 'host1'
CRS-2677: Stop of 'ora.crsd' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host1'
CRS-2673: Attempting to stop 'ora.asm' on 'host1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'host1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host1' succeeded
CRS-2677: Stop of 'ora.asm' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host1'
CRS-2677: Stop of 'ora.cssd' on 'host1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'host1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'host1'
CRS-2677: Stop of 'ora.gipcd' on 'host1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'host1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

--Start CRS on host1:
root@host1:/root # /u01/GRID/11.2/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

--Start CRS on host2:
root@host2:/root # /u01/GRID/11.2/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

--Check if all the resources are running:
root@host1:/root # /u01/GRID/11.2/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA1.dg
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.FRA1.dg
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.LISTENER.lsnr
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.OCRVOTING.dg
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.asm
               ONLINE  ONLINE       host1                    Started
               ONLINE  ONLINE       host2                    Started
ora.eons
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.gsd
               OFFLINE OFFLINE      host1
               OFFLINE OFFLINE      host2
ora.net1.network
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
ora.ons
               ONLINE  ONLINE       host1
               ONLINE  ONLINE       host2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host2
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host2
ora.db.db
      1        ONLINE  ONLINE       host1                    Open
      2        ONLINE  ONLINE       host2                    Open
ora.oc4j
      1        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       host1
ora.scan2.vip
      1        ONLINE  ONLINE       host2
ora.scan3.vip
      1        ONLINE  ONLINE       host2
ora.host1.vip
      1        ONLINE  ONLINE       host1
ora.host2.vip
      1        ONLINE  ONLINE       host2

################################################################
## How to add an Application VIP to Oracle Cluster 11gR2
################################################################

Oracle Clusterware includes the utility appvipcfg, which allows you to easily create application VIPs; below is an example based on a cluster running 11.2.0.3.1.

[root@lnxcld02 ~]# appvipcfg -h
Production Copyright 2007, 2008, Oracle.All rights reserved
Unknown option: h
Usage: appvipcfg create -network=<network_number> -ip=<ip_address> -vipname=<vipname> -user=<user_name> [-group=<group_name>] [-failback=0 | 1]
       appvipcfg delete -vipname=<vipname>

--Example to run as the root user:
[root@lnxcld02 ~]# appvipcfg create -network=1 -ip=192.168.2.200 -vipname=myappvip -user=grid -group=oinstall
Production Copyright 2007, 2008, Oracle.All rights reserved
2012-02-10 14:39:23: Creating Resource Type
2012-02-10 14:39:23: Executing /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /home/GRID_INFRA/product/11.2.0.3/crs/template/appvip.type
2012-02-10 14:39:23: Executing cmd: /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /home/GRID_INFRA/product/11.2.0.3/crs/template/appvip.type
2012-02-10 14:39:26: Create the Resource
2012-02-10 14:39:26: Executing /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add resource myappvip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.2.200,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x',HOSTING_MEMBERS=lnxcld02,APPSVIP_FAILBACK="
2012-02-10 14:39:26: Executing cmd: /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add resource myappvip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.2.200,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x',HOSTING_MEMBERS=lnxcld02,APPSVIP_FAILBACK="

##############################################################################

[grid@lnxcld02 trace]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA1.dg
               ONLINE  ONLINE       lnxcld01
               ONLINE  ONLINE       lnxcld02
ora.FRA1.dg
               ONLINE  ONLINE       lnxcld01
               ONLINE  ONLINE       lnxcld02
ora.LISTENER.lsnr
               ONLINE  ONLINE       lnxcld01
               ONLINE  ONLINE       lnxcld02
ora.OCRVOTING.dg
               ONLINE  ONLINE       lnxcld01
               ONLINE  ONLINE       lnxcld02
ora.asm
               ONLINE  ONLINE       lnxcld01                 Started
               ONLINE  ONLINE       lnxcld02                 Started
ora.gsd
               OFFLINE OFFLINE      lnxcld01
               OFFLINE OFFLINE      lnxcld02
ora.net1.network
               ONLINE  ONLINE       lnxcld01
               ONLINE  ONLINE       lnxcld02
ora.ons
               ONLINE  ONLINE       lnxcld01
               ONLINE  ONLINE       lnxcld02
ora.registry.acfs
               ONLINE  ONLINE       lnxcld01
               ONLINE  ONLINE       lnxcld02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
myappvip
      1        ONLINE  ONLINE       lnxcld02
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lnxcld02
ora.cvu
      1        ONLINE  ONLINE       lnxcld02
ora.lnxcld01.vip
      1        ONLINE  ONLINE       lnxcld01
ora.lnxcld02.vip
      1        ONLINE  ONLINE       lnxcld02
ora.oc4j
      1        ONLINE  ONLINE       lnxcld02
ora.scan1.vip
      1        ONLINE  ONLINE       lnxcld02
ora.tpolicy.db
      1        ONLINE  ONLINE       lnxcld01                 Open
      2        ONLINE  ONLINE       lnxcld02                 Open
ora.tpolicy.loadbalance_rw.svc
      1        ONLINE  ONLINE       lnxcld01
      2        ONLINE  ONLINE       lnxcld02

#################################################################
[grid@lnxcld02 ~]$ crsctl stat res myappvip -p
NAME=myappvip
TYPE=app.appvip_net1.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
APPSVIP_FAILBACK=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=30
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=vip)
DEGREE=1
DESCRIPTION=Application VIP
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=
HOSTING_MEMBERS=lnxcld02
LOAD=1

LOGGING_LEVEL=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=2.1
UPTIME_THRESHOLD=7d
USR_ORA_ENV=
USR_ORA_VIP=192.168.2.200
VERSION=11.2.0.3.0

#################################################################
# Oracle Clusterware Commands release 11.1
#################################################################
Command          Description
crs_getperm      Lists the permissions associated with a resource.
crs_profile      Creates, validates, deletes, and updates an Oracle Clusterware application profile.
crs_register     Registers configuration information for an application with the OCR.
crs_relocate     Relocates an application profile to another node.
crs_setperm      Sets permissions associated with a resource.
crs_stat         Lists the status of an application profile.
crs_start        Starts applications that have been registered.
crs_stop         Stops an Oracle Clusterware application.
crs_unregister   Removes the configuration information for an application profile from the OCR.

#################################################################
# START-STOP-STATUS ORACLE CLUSTER
#################################################################
--To stop or start oracle clusterware on a node in a managed fashion:

-- as root account
/etc/init.d/init.crs [stop|start]
--or
[root@node1 root]# crsctl stop crs
--To disable the oracle cluster so it doesn't attempt to start after a reboot, or
--to ensure that it will be started automatically on boot:
/etc/init.d/init.crs [disable|enable]
--or
[root@node1 root]# crsctl enable crs
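For a rolling maintenance the stop/start has to be issued node by node. The loop below is a dry-run sketch of that pattern: the node names and root ssh equivalence are assumptions for illustration, and nothing is executed, only printed.

```shell
# Sketch: print the crsctl command to run on each cluster node.
# NODES and root ssh access are illustrative assumptions; the function
# only echoes the commands instead of executing them.
NODES="node1 node2"

crs_all() {
  action="$1"                  # stop | start
  for n in $NODES; do
    # a real run would execute: ssh "root@$n" crsctl "$action" crs
    echo "ssh root@$n crsctl $action crs"
  done
}

crs_all stop
```

Replace the echo with the actual ssh call once the node list is confirmed.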

--As the oracle user on docrac1, check the status of the Oracle Clusterware
/crs/bin/crs_stat -t
--Verify that Oracle Clusterware is running on the node
[root@node1 root]# crsctl check crs
--Check the status of an individual Oracle Clusterware daemon using the following
--syntax, where daemon is crsd, cssd, or evmd:
[root@node1 root]# crsctl check daemon
--Check the existence and the location of the Voting Disk
[root@node1 root]# crsctl query css votedisk
##Use the Cluster Verification Utility (CVU) to verify the OCR integrity. Run the
--following command, where the -n all argument retrieves a list of all the cluster
--nodes that are configured as part of your cluster:
[root]# cluvfy comp ocr -n all [-verbose]
##Use the CVU comp nodeapp command to verify the existence of node
--applications, namely the virtual IP (VIP), Oracle Notification Service (ONS), and
--Global Service Daemon (GSD), on all the nodes.
[root]# cluvfy comp nodeapp [-n node_list] [-verbose]
##To check the settings for the interconnect:
--1. In a command window, log in to the operating system as the root user.
--2. To verify the accessibility of the cluster nodes, specified by node_list, from the
--local node or from any other cluster node, specified by srcnode, use the
--component verification command nodereach as follows:
[root]# cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]
##To verify the connectivity among the nodes through specific network interfaces:
[root]# cluvfy comp nodecon -n node_list -i interface_list [-verbose]
[root]# cluvfy comp nodecon -n docrac1,docrac2,docrac3 -i eth0 -verbose

###Oracle Clusterware Diagnostics Collection Script
--The diagnostics provide additional information so that Oracle Support Services
--can resolve problems. It displays the status of the Cluster Synchronization
--Services (CSS), Event Manager (EVM), and the Cluster Ready Services (CRS) daemons
[root@node1 root]# CRS_home/bin/diagcollection.pl --collect
#################################################################
# DEBUGGING OF ORACLE CLUSTERWARE COMPONENTS
#################################################################
--1. In a command window, log in to the operating system as the root user.
--2. Use the following command to obtain the module names for a component, where
--component_name is crs, evm, css or the name of the component for which you
--want to enable debugging:
# crsctl lsmodules component_name
--For example, viewing the modules of the css component might return the following results:
# crsctl lsmodules css
The following are the CSS modules ::
CSSD
COMMCRS
COMMNS
--3. Use CRSCTL as follows, where component_name is the name of the Oracle
--Clusterware component for which you want to enable debugging, module is the
--name of the module, and debugging_level is a number from 1 to 5:
# crsctl debug log component module:debugging_level
--For example, to enable the lowest level of tracing for the CSSD module of the css component, you would use the following command:
# crsctl debug log css CSSD:1
--4. After you have obtained the needed trace information, disable debugging by
--setting the debugging_level to 0 for the module, as shown in the following example.
# crsctl debug log css CSSD:0
#################################################################
# OLSNODES and OIFCFG utility
#################################################################

--The OLSNODES command provides the list of nodes and other information
--for all nodes participating in the cluster
olsnodes [-n] [-i] [-l] [-v] [-g] [-p]
-g Logs cluster verification information with more details.
-i Lists all nodes participating in the cluster and includes the Virtual Internet Protocol (VIP) address assigned to each node.
-l Displays the local node name.
-n Lists all nodes participating in the cluster and includes the assigned node numbers.
-p Lists all nodes participating in the cluster and includes the private interconnect assigned to each node.
-v Logs cluster verification information in verbose mode.
-------------------------------------------------------------------------------------
--Before you invoke OIFCFG, ensure that you have started Oracle Clusterware on at
--least the local node and preferably on all nodes if you intend to include the -global
--option on the command.
OIFCFG

#################################################################
# SRVCTL Utility for cluster resource management
#################################################################
--srvctl is the standard tool for stopping and starting services. The most used commands are:
srvctl [stop|start] database -d DBNAME
srvctl [stop|start] instance -d DBNAME -i SID
srvctl [stop|start] nodeapps -n HOSTNAME
--example
./srvctl status asm -n docrac1
ASM instance +ASM1 is running on node docrac1.
--Stop all the node applications running in an Oracle Clusterware: ASM instance, RAC instances
--node_name is the name of the node:
$ CRS_home/crs/bin/srvctl stop nodeapps -n node_name
--Stop Oracle Clusterware using
[root]# CRS_home/bin/crsctl stop crs

#################################################################
# VOTING - DISKS
#################################################################
################################
#Backup / Restore VOTING Disk
################################
dd if=voting_disk_name of=backup_file_name
--Backup Example Using RAW Devices
dd if=/dev/sdd1 of=/tmp/voting.dmp
--Restore Example Using RAW Devices
dd if=backup_file_name of=voting_disk_name
-- P.S.
--When you use the dd command for making backups of the voting disk, the backup can
--be performed while the Cluster Ready Services (CRS) process is active; you do not
--need to stop the crsd.bin process before taking a backup of the voting disk.
#################################################################
################################
# Adding/Removing Voting Disk using: crsctl
################################
--Adding Voting Disks
crsctl add css votedisk path

--Removing Voting Disks
crsctl delete css votedisk path
--P.S.
--You can dynamically add and remove voting disks after installing Oracle RAC.
--If your cluster is down, then you can use the -force option to
--modify the voting disk configuration when using either of these
--commands without interacting with active Oracle Clusterware
--daemons. However, you may corrupt your cluster configuration if
--you use the -force option while a cluster node is active.
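Because dd copies blindly, it is worth rehearsing the backup/restore pair on a scratch file before touching a real voting disk. This is a safe sketch: all the paths are illustrative temp files, not real devices.

```shell
# Rehearse the dd backup/restore cycle against a scratch file.
# All paths are illustrative; no real voting disk is involved.
vote=/tmp/fake_voting_disk
bkp=/tmp/voting.dmp
rest=/tmp/restored_voting_disk

dd if=/dev/zero of="$vote" bs=1024 count=256 2>/dev/null    # fake 256KB device
printf 'VOTEDATA' | dd of="$vote" conv=notrunc 2>/dev/null  # drop a marker into it

dd if="$vote" of="$bkp"  2>/dev/null                        # backup
dd if="$bkp"  of="$rest" 2>/dev/null                        # restore

cmp -s "$vote" "$rest" && echo "backup verified"
```

The cmp at the end is the same sanity check you would run against the real device and its backup.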

#################################################################
# ORACLE CLUSTER REGISTRY
#################################################################
--Oracle RAC environments do not support
--more than two OCRs, a primary OCR and a secondary OCR.
####################################
#Viewing Available OCR and Backups
####################################
--Show the availability of Primary and Secondary OCR files
ocrcheck
--Show the availability of backup copies
ocrconfig -showbackup

################################ #Backup / Restore OCR File ################################ -------------##Backup OCR ---------------Oracle Clusterware automatically creates OCR backups every 4 hours. --The default location for generating backups on Red Hat Linux --systems is CRS_home/cdata/cluster_name --Export the dump file of OCR [root]# ocrconfig -export backup_file_name

--Check the status of the OCR: ocrcheck --P.S. --If this command does not display the message 'Device/File integrity check succeeded' -- for at least one copy of the OCR, then both the primary OCR and the -- OCR mirror have failed. You must restore the OCR from a backup. ------------------------------------------##Restore OCR using automated Backup file -------------------------------------------

--1. list the available backup files [root]# ocrconfig -showbackup --2. Review the contents of the backup using ocrdump command, where --file_name is the name of the OCR backup file: [root]# ocrdump -backupfile file_name --3. As the root user, stop Oracle Clusterware on all the nodes in your Oracle RAC cluster --Repeat this command on each node in your Oracle RAC cluster! [root]# crsctl stop crs

--4. As the root user, restore the OCR by applying an OCR backup file that you
--identified in Step 1
[root]# ocrconfig -restore file_name
--5. Restart Oracle Clusterware on all the nodes in your cluster by
--restarting each node, or by running the following command:
[root]# crsctl start crs
--6. Use the Cluster Verification Utility (CVU) to verify the OCR integrity. Run the
--following command, where the -n all argument retrieves a list of all the cluster
--nodes that are configured as part of your cluster:
[root]# cluvfy comp ocr -n all [-verbose]
-------------------------------------------
##Restore OCR using Export file
-------------------------------------------
--1. As the root user, stop Oracle Clusterware on all the nodes in your Oracle RAC cluster
--Repeat this command on each node in your Oracle RAC cluster!
[root]# crsctl stop crs
--2. As the root user, restore the OCR data by importing the contents of the OCR export file
[root]# ocrconfig -import file_name
--3. Restart Oracle Clusterware on all the nodes in your cluster by
--restarting each node, or by running the following command:
[root]# crsctl start crs
--4. Use the Cluster Verification Utility (CVU) to verify the OCR integrity. Run the
--following command, where the -n all argument retrieves a list of all the cluster
--nodes that are configured as part of your cluster:
[root]# cluvfy comp ocr -n all [-verbose]
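The restore sequence condenses into a short checklist, which can be printed as a runbook reminder. A sketch only: nothing is executed, and backup00.ocr is a placeholder name for the file picked from the showbackup listing.

```shell
# Print the OCR restore-from-backup sequence as a checklist.
# Nothing is executed; backup00.ocr stands in for the file chosen
# from the 'ocrconfig -showbackup' output.
ocr_restore_plan() {
  backup_file="$1"
  echo "ocrconfig -showbackup"
  echo "ocrdump -backupfile $backup_file"
  echo "crsctl stop crs        # on every node"
  echo "ocrconfig -restore $backup_file"
  echo "crsctl start crs       # on every node"
  echo "cluvfy comp ocr -n all -verbose"
}

ocr_restore_plan backup00.ocr
```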

################################################

# To add a primary or secondary OCR location
################################################
--1. Use the following command to verify that Oracle Clusterware is running on the
--node on which you are going to perform the replace operation:
crsctl check crs
--2. Run the following command as root using either destination_file or disk to
--designate the target location of the primary OCR:
ocrconfig -replace ocr destination_file
ocrconfig -replace ocr disk
--3. Run the following command as root using either destination_file or disk to
--designate the target location of the secondary OCR:
ocrconfig -replace ocrmirror destination_file
ocrconfig -replace ocrmirror disk
--4. If any node that is part of your current Oracle RAC cluster is shut down, then run
--the following command on the stopped node to let that node rejoin the cluster
--after the node is restarted:
ocrconfig -repair ocr [device_name]
########################
# Removing an OCR
########################
--To remove an OCR location, at least one OCR must be online. You can remove an OCR
--location to reduce OCR-related overhead or to stop mirroring your OCR because you
--moved the OCR to a redundant storage system, such as a redundant array of
--independent disks (RAID).
--To remove an OCR location from your Oracle RAC cluster:
--1. Use the OCRCHECK utility to ensure that at least one OCR other than the OCR
--that you are removing is online.
ocrcheck
--2. Run the following command on any node in the cluster to remove one copy of the OCR:
ocrconfig -replace ocr
#################################################################
# How to install and setup ASMLib packages
#################################################################

###### List of Platform dependent but Kernel independent packages ######
oracleasm-support-2.1.3-1.<distro>.x86_64.rpm
oracleasmlib-2.0.4-1.<distro>.x86_64.rpm

###### List of Platform and Kernel dependent packages ######
oracleasm-2.6.16.46-0.12-smp-2.0.3-1.x86_64.rpm
oracleasm-2.6.16.46-0.12-default-2.0.3-1.x86_64.rpm
-- Install the packages using the command rpm -ivh on all the nodes

###### ASMLib Configuration (to repeat on all nodes of the cluster) ######
[root@lrh-node1 /]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration [ OK ]
Creating /dev/oracleasm mount point [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]

###### Create disk partitions and ASM disks (from one of the nodes of the cluster) ######
-- Having a list of devices dedicated to ASM, create one primary partition per disk or LUN using the fdisk
-- command and then use the ASMLib utility to implement one ASM Disk per device.

lrh-node1:/u01 # fdisk -l /dev/sdh
Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1        4700    37747712   83  Linux

lrh-node1:/u01 # fdisk -l /dev/sdi
Disk /dev/sdi: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1               1        4700    37747712   83  Linux

lrh-node1:/u01 # fdisk -l /dev/sdj
Disk /dev/sdj: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1               1        4699    37744686   83  Linux

lrh-node1:/u01 #
lrh-node1:/u01 # fdisk /dev/sdh
The number of cylinders for this disk is set to 4699.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): d
Selected partition 1

Command (m for help): p
Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4699, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-4699, default 4699): +1024M

Command (m for help): p
Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1         125     1004031   83  Linux

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (126-4699, default 126):
Using default value 126
Last cylinder or +size or +sizeM or +sizeK (126-4699, default 4699):
Using default value 4699

Command (m for help): p
Disk /dev/sdh: 38.6 GB, 38654705664 bytes
255 heads, 63 sectors/track, 4699 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1         125     1004031   83  Linux
/dev/sdh2             126        4699    36740655   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

lrh-node1:/u01 # partprobe

###### Once the disks have been sliced the ASM Disks can be created ######
lrh-node1:/u01 # /etc/init.d/oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
lrh-node1:/u01 # /etc/init.d/oracleasm createdisk OCR1 /dev/sdh1
Marking disk "/dev/sdh1" as an ASM disk: done

lrh-node1:/u01 # /etc/init.d/oracleasm createdisk DATA1 /dev/sdh2
Marking disk "/dev/sdh2" as an ASM disk: done
lrh-node1:/u01 # /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: done
lrh-node1:/u01 #
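With many LUNs the createdisk calls are easier to keep consistent as a loop over a name:device map. This is a dry-run sketch: the labels and device names are illustrative assumptions, and only the oracleasm commands that would run are printed.

```shell
# Dry-run: print the oracleasm commands that would label each partition
# as an ASM disk. The name:device pairs are illustrative assumptions.
asm_disks="OCR1:/dev/sdh1 DATA1:/dev/sdh2 DATA2:/dev/sdi1"

for pair in $asm_disks; do
  name=${pair%%:*}    # text before the colon: the ASM disk label
  dev=${pair#*:}      # text after the colon: the block device
  echo "/etc/init.d/oracleasm createdisk $name $dev"
done
echo "/etc/init.d/oracleasm scandisks   # then repeat on every other node"
```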

-- After having created all the ASM Disks, run the scandisks utility on all nodes of the cluster;
-- this allows ASM to discover all the new ASM Disks created.
lrh-node2:/dev/oracleasm/disks # /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: done
lrh-node2:/dev/oracleasm/disks # /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3
###### ASM diskstring and diskgroup parameters ######
*.asm_diskstring='ORCL:*'
*.asm_diskgroups='OCR_VOT','DATA1','FRA1'
#################################################################
# M A N A G E   O R A C L E   R A C   D A T A B A S E   U S I N G   SRVCTL
#################################################################

#########################
# Display the Current Policy
#########################
--The following command displays the current configuration of the database, including its management policy
srvctl config database -d <db_name> -a
##################################
# Change the Current Policy to Another Policy
##################################
--Use the following SRVCTL command to change the policy
srvctl modify database -d <db_name> -y policy_name

#################################################################
# Other SRVCTL and SERVICES Management
#################################################################
##How to obtain the Statuses of Services with SRVCTL
--The following command returns the status of the service running on the database:
srvctl status service -d <db_name> -s <service_name>

##How to Start and Stop Services with SRVCTL --Enter the following SRVCTL syntax from the command line: srvctl start service -d database_unique_name [-s service_name_list] [-i inst_name] [-o start_options] srvctl stop service -d database_unique_name -s service_name_list [-i inst_name] [-o start_options]

### Remove and Add Database and Instance to the CRS
srvctl remove database -d <db_name>
srvctl add database -d <db_name> -o $ORACLE_HOME
srvctl add instance -d <db_name> -i <instance_name> -n <hostname>
##Add a service to a database with preferred instance RAC1 and available instances RAC2 and RAC3
srvctl add service -d <db_name> -s <service_name> -r RAC1 -a RAC2,RAC3

##Enabling and Disabling Services with SRVCTL --Use the following SRVCTL syntax from the command line to enable and disable services: srvctl enable service -d <db_name> -s <service_name> [-i inst_name] srvctl disable service -d <db_name> -s <service_name> [-i inst_name]

## How to relocate a service from instance1 to instance3:
srvctl relocate service -d <db_name> -s <service_name> -i instance1 -t instance3
--Stop/Start Listener
srvctl stop listener -n <hostname> [-l listener_name]
--Stop/Start Instance
srvctl start instance -d <db_name> -i <instance_name>
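During planned maintenance the relocate is typically repeated for every service on the instance being drained; a loop makes that explicit. Dry-run sketch only: the database name, service list and instance names are assumptions, and the commands are printed rather than executed.

```shell
# Dry-run: print the srvctl commands that would move each listed service
# off instance RAC1 onto RAC2. All names are illustrative.
db=mydb
services="oltp_svc batch_svc"

for s in $services; do
  echo "srvctl relocate service -d $db -s $s -i RAC1 -t RAC2"
done
```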

####################################################################
## How to establish remote connection to a database in restricted or NO MOUNT mode
####################################################################

Connections via listener to an instance that is in RESTRICTED status or in NO MOUNT status fail with TNS-12526, TNS-12527 or TNS-12528 even when supplying the credentials for a privileged account. The lsnrctl services output will show that the service handler for this instance is in state: BLOCKED or RESTRICTED.

DBTEST_PRIV =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rm01it.emilianofusaglia.net)(PORT = 1522)))
    (CONNECT_DATA =
      (UR=A)
      (SERVICE_NAME = dbtest.emilianofusaglia.net)
    )
  )

Note that the (UR=A) clause is intended to work with a dynamically registered handler so the use of SERVICE_NAME versus SID is preferred. SID may connect to a statically configured handler.

#################################################################
########### Duplicate database on 11g from active database ##########
#################################################################
-- Source database = TRAC
-- Duplicate database = TDUP10
##############################
## PREREQUISITES for Cloning

##############################
## Create the diag structure
mkdir -p <diag_directory_DB_Name>
## Generate the PFILE from the Original DB and copy it to the target host:
create pfile='/tmp/pfile.ora' from spfile;
scp /tmp/pfile.ora lclus01:/tmp
## Adjust the following pfile parameters:
DB_NAME
LOG_FILE_NAME_CONVERT
DB_FILE_NAME_CONVERT
DB_CREATE_FILE_DEST
##Startup nomount and create the spfile:
create spfile='+DATA/TDUP10/spfile_TDUP10.ora' from pfile='/tmp/pfile.ora';
#################################################################
##############################
## Network Configuration
##############################
##Listener Static Entry:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle/product/11.2.0.1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (global_dbname = TDUP10.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.2.0.1)
      (sid_name = TDUP11)
    )
  )
##TNS entry
TDUP10.emilianofusaglia.net =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = loraclu-scan.emilianofusaglia.net)(PORT = 1526))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TDUP10.emilianofusaglia.net)
    )
  )
#################################################################

##############################
##RMAN Duplicate
##############################
#rman
connect target /
connect auxiliary sys/xxxxx@TDUP10.emilianofusaglia.net
duplicate target database to TDUP10
  from active database
  DB_FILE_NAME_CONVERT 'TRAC','TDUP10'
  skip tablespace POR_READONLY;
##Register the database into the Grid Infrastructure
srvctl add database -d TDUP10 -o $ORACLE_HOME
srvctl add instance -d TDUP10 -i TDUP11 -n lclus01
srvctl add instance -d TDUP10 -i TDUP12 -n lclus02
##Create the password file on each node using the utility orapwd.
#cd $ORACLE_HOME/dbs
#orapwd file=<password_file_name> password=<sys_password> entries=n
## Add oratab entry
<DB_NAME>:<ORACLE_HOME>:N

#################################################################
## How to implement Oracle Resource Manager granting the CONSUMER GROUPS to Clusterware services ##
#################################################################
--Create a service for OLTP sessions
srvctl add service -d dbrac10 -s DBRAC10_OLTP -r dbrac11,dbrac12,dbrac13
srvctl start service -d dbrac10 -s DBRAC10_OLTP
--Create a service for BATCH sessions
srvctl add service -d dbrac10 -s DBRAC10_BATCH -r dbrac11,dbrac12,dbrac13
srvctl start service -d dbrac10 -s DBRAC10_BATCH
#################################################################
## Resource Plan Design:
## MGMT_P1=75% SYS_GROUP, MGMT_P2=80% OLTP, MGMT_P2=10% BATCH, MGMT_P2=5%
## ORA$AUTOTASK_SUB_PLAN, MGMT_P2=5% ORA$DIAGNOSTICS, MGMT_P3=70%

OTHER_GROUPS
#################################################################
## Resource Plan Implementation:
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'REAL_TIME_PLAN', COMMENT => 'Resource Plan for OLTP database');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP (CONSUMER_GROUP => 'OLTP', CATEGORY => 'INTERACTIVE', COMMENT => 'OLTP sessions');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP (CONSUMER_GROUP => 'BATCH', CATEGORY => 'BATCH', COMMENT => 'BATCH sessions');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'REAL_TIME_PLAN', GROUP_OR_SUBPLAN => 'OLTP', COMMENT => 'OLTP group', MGMT_P2 => 80);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'REAL_TIME_PLAN', GROUP_OR_SUBPLAN => 'BATCH', COMMENT => 'BATCH group', MGMT_P3 => 70, PARALLEL_DEGREE_LIMIT_P1 => 6, ACTIVE_SESS_POOL_P1 => 4, MAX_IDLE_TIME => 240);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'REAL_TIME_PLAN', GROUP_OR_SUBPLAN => 'SYS_GROUP', COMMENT => 'SYS group', MGMT_P1 => 70);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'REAL_TIME_PLAN', GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'OTHER group', MGMT_P4 => 50);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'REAL_TIME_PLAN', GROUP_OR_SUBPLAN => 'ORA$AUTOTASK_SUB_PLAN', COMMENT => 'ORA$AUTOTASK_SUB_PLAN group', MGMT_P3 => 20);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'REAL_TIME_PLAN', GROUP_OR_SUBPLAN => 'ORA$DIAGNOSTICS', COMMENT => 'ORA$DIAGNOSTICS group', MGMT_P3 => 10);
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING (DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'DBRAC10_OLTP', 'OLTP');
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING (DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'DBRAC10_BATCH', 'BATCH');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
END;
/
#################################################################
## Grant the Switch to the Users
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
dbms_resource_manager_privs.grant_switch_consumer_group ('PERF_TEST','OLTP',FALSE);
dbms_resource_manager_privs.grant_switch_consumer_group ('PERF_TEST','BATCH',FALSE);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
END;
/
#################################################################
## Enable the Resource plan with the FORCE Option to avoid the Scheduler window to
## activate a different plan during the job execution.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:REAL_TIME_PLAN';
#################################################################
## Example of Group and Plan Deletion:
BEGIN
DBMS_RESOURCE_MANAGER.DELETE_PLAN (PLAN => 'REAL_TIME_PLAN');
END;
/
BEGIN
DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP(CONSUMER_GROUP => 'OLTP');
END;
/
BEGIN
DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP(CONSUMER_GROUP => 'BATCH');
END;
/
#################################################################
## SQL Queries for monitoring Resource Manager utilization:
#################################################################
## Check the service name used by each session
select inst_id, username, SERVICE_NAME, count(*)
from gv$session
where SERVICE_NAME <> 'SYS$BACKGROUND'
group by inst_id, username, SERVICE_NAME
order by 2,3,1;
## List the Active Resource Consumer Groups:
select INST_ID, NAME, ACTIVE_SESSIONS, EXECUTION_WAITERS, REQUESTS, CPU_WAIT_TIME, CPU_WAITS, CONSUMED_CPU_TIME, YIELDS, QUEUE_LENGTH, ACTIVE_SESSION_LIMIT_HIT
from gV$RSRC_CONSUMER_GROUP
where name in ('SYS_GROUP','BATCH','OLTP','OTHER_GROUPS')
order by 2,1;

#########################################################
## How to implement Oracle Instance Caging
#########################################################
Instance Caging allows you to dynamically limit the number of CPUs used by each database instance. This option is specifically designed for shared environments where the database administrator has to guarantee resources to all the instances running on the same hardware.

-- Enable Resource Manager:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'REAL_TIME_PLAN';
--or
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:REAL_TIME_PLAN';
The FORCE option prevents the Scheduler window from activating a different plan during the job execution (see the Resource Manager section above for more details).

-- Enable Instance Caging:
ALTER SYSTEM SET CPU_COUNT=4 SCOPE=BOTH SID='*';
#########################################################
## Instance Caging Test
#########################################################
Hardware: SUN, 8 Cores / 64 Threads
One test database capped to 16 CPUs - 25% of the total hardware capacity.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'default_plan';
ALTER SYSTEM SET cpu_count=16 scope=both sid='*';
The load generator opens 70 sessions producing intensive CPU activity, which, as shown by the reports below, does not exceed 25% of the total server capacity (%Usr column).

Core Utilization for Integer pipeline

Core,Int-pipe   %Usr   %Sys  %Usr+Sys
-------------  -----  -----  --------
0,0            21.92   3.99     25.91
0,1            24.58   2.08     26.66
1,0            23.16   2.27     25.43
1,1            24.06   2.64     26.71
2,0            22.73   3.03     25.76
2,1            23.47   3.02     26.49
3,0            23.56   2.86     26.42
3,1            24.71   2.87     27.58
4,0            23.53   2.07     25.60
4,1            24.87   2.92     26.79
5,0            24.98   2.87     27.85
5,1            24.76   3.81     28.57
6,0            23.53   4.11     27.64
6,1            23.99   3.14     27.13
7,0            24.72   3.55     28.26
7,1            22.62   3.59     26.21
-------------  -----  -----  --------
Avg            22.36   3.05     25.41

If the wait event "resmgr:cpu quantum" shows up, it means that Instance Caging has throttled the amount of CPU available to the sessions.
INST_ID  SID  SQL_ADDR          EVENT               WAIT_CLASS  STATE              WAIT_MICRO  REMAINING  TOT_TIME  LAST_WAIT
-------  ---  ----------------  ------------------  ----------  -----------------  ----------  ---------  --------  ---------
      1  676  000000047ECC5BD0  resmgr:cpu quantum  Scheduler   WAITED KNOWN TIME       11599      11599     43834
      2  624  000000047D7D38D0  resmgr:cpu quantum  Scheduler   WAITED KNOWN TIME       11599      11599     43834
      2  395  000000047BCA97A8  resmgr:cpu quantum  Scheduler   WAITING                 11669         -1     11668          0
      1  300  000000047C8D6B38  resmgr:cpu quantum  Scheduler   WAITED KNOWN TIME       11677      11677       589
      2  317  00                resmgr:cpu quantum  Scheduler   WAITED KNOWN TIME       11677      11677       589
      1  649  000000047ECC5BD0  resmgr:cpu quantum  Scheduler   WAITING                 14445         -1     14444          0
      2  676  000000047D7D38D0  resmgr:cpu quantum  Scheduler   WAITING                 14445         -1     14444          0
      1  148  000000047C8D6B38  resmgr:cpu quantum  Scheduler   WAITING                 14827         -1     14826          0
      2 1413  000000047D7D36E0  resmgr:cpu quantum  Scheduler   WAITED KNOWN TIME       18824      18824     98023
      1  148  000000047ECC4010  resmgr:cpu quantum  Scheduler   WAITED KNOWN TIME       18824      18824     98023
      1  149  000000047C8D6B38  resmgr:cpu quantum  Scheduler   WAITING                 19122         -1     19121          0
      1   69  000000047C8D6B38  resmgr:cpu quantum  Scheduler   WAITING                 19177         -1     19176          0
      2   51  000000047BCA97A8  resmgr:cpu quantum  Scheduler   WAITING                 19177         -1     19176          0
      2 1106  000000047BCA97A8  PX Deq: Signal ACK RSG  Other   WAITING                112660       7340    120000          0
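The 25% ceiling observed in this test is simply CPU_COUNT over the machine's total hardware thread count; the arithmetic can be checked in the shell.

```shell
# Expected utilisation cap = cpu_count / total hardware threads.
cpu_count=16       # value set via ALTER SYSTEM SET CPU_COUNT
total_threads=64   # SUN box: 8 cores, 64 threads total
pct=$((cpu_count * 100 / total_threads))
echo "expected utilisation cap: ${pct}%"   # 25%
```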

################################################################
## RMAN Setup, Backup and Restore Test
################################################################

--Basic tips for a Disk & Tape Backup/Recovery strategy.

--To be able to efficiently perform incremental backups, check if "Block Change Tracking" is enabled:
SELECT * FROM V$BLOCK_CHANGE_TRACKING;

--If it is not enabled, create it:
alter database enable block change tracking using file '+FRA1';
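V$BLOCK_CHANGE_TRACKING exposes the tracking state directly; a minimal status check, as a sketch (column names per the 11g dictionary view):

```sql
-- STATUS is ENABLED or DISABLED; FILENAME and BYTES describe the tracking file
SELECT status, filename, bytes FROM v$block_change_tracking;

-- To disable it again if needed:
-- ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
```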

###########################
# RMAN Format Description
###########################
%a  Current database activation id
%A  Zero-filled activation ID
%c  The copy number of the backup piece within a set of duplexed backup pieces. Maximum value is 256
%d  Database name
%D  Current day of the month from the Gregorian calendar in format DD
%e  Archived log sequence number
%f  Absolute file number
%F  Combines the DBID, day, month, year, and sequence into a unique and repeatable generated name
%h  Archived redo log thread number
%I  DBID
%M  Month in the Gregorian calendar in the format MM

%n  Database name, padded on the right with x characters to a total length of eight characters
%N  Tablespace name. Only valid when backing up datafiles as image copies.
%p  Piece number within the backup set. This value starts at 1 for each backup set and is incremented by 1 for each backup piece created. If a PROXY is specified, the %p variable must be included in the FORMAT string either explicitly or implicitly within %U.
%r  Resetlogs ID
%s  Backup set number. This number is a counter in the control file that is incremented for each backup set. The counter value starts at 1 and is unique for the lifetime of the control file. If you restore a backup control file, then duplicate values can result. CREATE CONTROLFILE initializes the counter at 1.
%S  Zero-filled sequence number
%t  Backup set time stamp, a 4-byte value derived as the number of seconds elapsed since a fixed reference time. The combination of %s and %t can be used to form a unique name for the backup set.
%T  Year, month, and day in the Gregorian calendar in the format YYYYMMDD
%u  An 8-character name constituted by compressed representations of the backup set or image copy number and the time the backup set or image copy was created
%U  A system-generated unique filename (default). %U is different for image copies and backup pieces. For a backup piece, %U is shorthand for %u_%p_%c and guarantees uniqueness in generated backup filenames. For an image copy of a datafile, %U means the following: data-D-%d_id-%I_TS-%N_FNO-%f_%u
%Y  Year in this format: YYYY
%%  Percent (%) character. For example, %%Y translates to the string %Y
################################################################
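As an illustration of how a FORMAT mask expands, this shell snippet substitutes sample values into the channel format used later in this section ('backup_%d_%T_%U'). The substituted values are examples borrowed from this document's transcripts, not anything computed by RMAN:

```shell
# Illustration only: expand a FORMAT mask with sample values
# (DBRM07, 20110406 and 13ld30mb_1_2 are taken from the transcripts below).
fmt='backup_%d_%T_%U'
expanded=$(echo "$fmt" | sed -e 's/%d/DBRM07/' -e 's/%T/20110406/' -e 's/%U/13ld30mb_1_2/')
echo "$expanded"    # backup_DBRM07_20110406_13ld30mb_1_2
```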

#####################
## RMAN SETUP
#####################
connect target /
connect catalog usr_catalog/xxxxxx@RMAN_Catalog.emilianofusaglia.net

REGISTER DATABASE;

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+FRA1/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' RATE 200M PARMS 'ENV=(NB_ORA_POLICY=ora_monthly)' FORMAT 'backup_%d_%T_%U';
CONFIGURE CHANNEL DEVICE TYPE DISK RATE 200M FORMAT '+BACKUP/%d_%T_%U';
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA1/snapcf_dbrm07.cf';

--Full Database backup
BACKUP INCREMENTAL LEVEL = 0 DATABASE PLUS ARCHIVELOG NOT BACKED UP 1 TIMES;

--Incremental Database backup
BACKUP INCREMENTAL LEVEL = 1 DATABASE PLUS ARCHIVELOG NOT BACKED UP 1 TIMES;

--RMAN Catalog Maintenance and Backup Validation
SHOW ALL;
run {
allocate channel disk1 device type disk;
allocate channel tape1 type 'SBT_TAPE' PARMS 'ENV=(NB_ORA_CLIENT=lxrm01.emilianofusaglia.net)';
crosscheck backup completed before "sysdate-30";
delete noprompt expired backup;
delete noprompt obsolete;
}

--RMAN basic Backup Report
SHOW ALL;
list backup summary;

################################################################
## RESTORE TESTS
################################################################

--Restore Datafile from TAPE
SQL> startup mount
ORACLE instance started.
Total System Global Area 4277059584 bytes
Fixed Size                  2154936 bytes
Variable Size            2499812936 bytes
Database Buffers         1728053248 bytes
Redo Buffers               47038464 bytes
Database mounted.

SQL> select * from v$recover_file;
FILE#      ONLINE  ONLINE_  ERROR                       CHANGE#    TIME
---------- ------- -------  --------------------------  ---------- --------
9          ONLINE  ONLINE   FILE NOT FOUND              0

RMAN> connect target /
connected to target database: DBRM07 (DBID=3653795552, not open)

RMAN> connect catalog usr_catalog/xxxxxx@RMAN_Catalog.emilianofusaglia.net
connected to recovery catalog database

RMAN> restore datafile 9;

Starting restore at 07.04.11
starting full resync of recovery catalog
full resync complete
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1393 instance=DBRM071 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=1422 instance=DBRM071 device type=DISK
allocated channel: ORA_DISK_3

channel ORA_DISK_3: SID=1451 instance=DBRM071 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=1480 instance=DBRM071 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=1509 instance=DBRM071 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=1538 instance=DBRM071 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Veritas NetBackup for Oracle - Release 6.5 (2010120208)
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00009 to +DATA1/dbtr07/datafile/indxtbs.562.717369571
channel ORA_SBT_TAPE_1: reading from backup piece 13ld30mb_1_2
channel ORA_SBT_TAPE_1: piece handle=13ld30mb_1_2 tag=TAG20110406T121258
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:04:16
Finished restore at 07.04.11
starting full resync of recovery catalog
full resync complete

RMAN> recover datafile 9;

Starting recover at 07.04.11
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00009: +DATA1/dbtr07/datafile/indxtbs.562.717369571
channel ORA_SBT_TAPE_1: reading from backup piece 1eld30qo_1_2
channel ORA_SBT_TAPE_1: piece handle=1eld30qo_1_2 tag=TAG20110406T121520
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:04:25
starting media recovery
archived log for thread 1 with sequence 162 is already on disk as file +FRA1/dbtr07/archivelog/2011_04_06/thread_1_seq_162.16655.718385527
archived log for thread 1 with sequence 163 is already on disk as file +FRA1/dbtr07/archivelog/2011_04_06/thread_1_seq_163.8452.718385527
archived log for thread 2 with sequence 141 is already on disk as file +FRA1/dbtr07/archivelog/2011_04_06/thread_2_seq_141.4677.718385531

archived log for thread 3 with sequence 132 is already on disk as file +FRA1/dbtr07/archivelog/2011_04_06/thread_3_seq_132.5466.718385525
archived log for thread 3 with sequence 133 is already on disk as file +FRA1/dbtr07/archivelog/2011_04_06/thread_3_seq_133.1479.718385525
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=3 sequence=131
channel ORA_SBT_TAPE_1: reading from backup piece 1hld30r8_1_2
channel ORA_SBT_TAPE_1: piece handle=1hld30r8_1_2 tag=TAG20100507T121535
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:02:15
archived log file name=+FRA1/dbtr07/archivelog/2011_04_06/thread_3_seq_131.16592.718387087 thread=3 sequence=131
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=161
channel ORA_SBT_TAPE_1: reading from backup piece 1ild30r8_1_2
channel ORA_SBT_TAPE_1: piece handle=1ild30r8_1_2 tag=TAG20100507T121535
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:58
archived log file name=+FRA1/dbtr07/archivelog/2011_04_06/thread_1_seq_161.13854.718387211 thread=1 sequence=161
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=2 sequence=140
channel ORA_SBT_TAPE_1: reading from backup piece 1jld30r8_1_2
channel ORA_SBT_TAPE_1: piece handle=1jld30r8_1_2 tag=TAG20100507T121535
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:55
archived log file name=+FRA1/dbtr07/archivelog/2011_04_06/thread_2_seq_140.1819.718387319 thread=2 sequence=140
channel default: deleting archived log(s)
archived log file name=+FRA1/dbtr07/archivelog/2011_04_06/thread_1_seq_161.13854.718387211 RECID=824 STAMP=718387211
channel default: deleting archived log(s)
archived log file name=+FRA1/dbtr07/archivelog/2011_04_06/thread_2_seq_140.1819.718387319 RECID=825 STAMP=718387319
channel default: deleting archived log(s)
archived log file name=+FRA1/dbtr07/archivelog/2011_04_06/thread_3_seq_131.16592.718387087 RECID=823 STAMP=718387086

media recovery complete, elapsed time: 00:00:02
Finished recover at 07.04.11

Data Guard Architecture


Below is an example of an Active Data Guard architecture diagram based on 11g R2, where both the primary and the physical standby run on RAC.

#########################################################
## DATA GUARD IMPLEMENTATION on 11g R1 RAC ##
#########################################################
--Primary DB: USA10
--Standby DB: EURO10

############################
--From the Primary Database:
alter database force logging;

alter database add standby logfile size 1G;
alter database add standby logfile size 1G;
alter database add standby logfile size 1G;
alter database add standby logfile size 1G;
alter database add standby logfile size 1G;
alter database add standby logfile size 1G;
alter database add standby logfile size 1G;
alter database add standby logfile size 1G;
alter database add standby logfile size 1G;

select * from v$standby_log;

alter system set parallel_execution_message_size=8192 scope=spfile;
alter system set fast_start_mttr_target=3600;

############################
--From the Standby Site:
--Dump the pfile and change the following parameters for the Standby:
*.control_files='+DATA1/EURO10/CONTROLFILE/CURRENT01.CTR','+FRA1/EURO10/CONTROLFILE/CURRENT02.CTR'
*.db_file_name_convert='/USA10/','/EURO10/'
*.log_file_name_convert='/USA10/','/EURO10/'
*.db_unique_name='EURO10'

--As this standby is a single instance (non-RAC)
_disable_interface_checking = TRUE

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lneuron01.emilianofusaglia.net)(PORT = 1526))
  )

alter system set local_listener='LISTENER' scope=spfile sid='EURO11';

mkdir -p /u01/oracle/admin/EURO10/adump
mkdir -p /u01/oracle/diag/rdbms/euro10/EURO11/cdump

######################################################################
## Network Configuration ##
######################################################################

##########################################
## Static Listener Entries:
##########################################

--Primary Cluster Node 1
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (global_dbname = USA10_DGMGRL.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA11)
    )
    (SID_DESC =
      (global_dbname = USA10_DGB.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA11)
    )
    (SID_DESC =
      (global_dbname = USA10.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA11)
    )
  )

--Primary Cluster Node 2
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (global_dbname = USA10_DGMGRL.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA12)
    )
    (SID_DESC =
      (global_dbname = USA10_DGB.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA12)
    )
    (SID_DESC =
      (global_dbname = USA10.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA12)
    )
  )

--Primary Cluster Node 3
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (global_dbname = USA10_DGMGRL.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA13)
    )
    (SID_DESC =
      (global_dbname = USA10_DGB.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA13)
    )
    (SID_DESC =
      (global_dbname = USA10.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = USA13)
    )
  )

--Standby Cluster Node 1
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (global_dbname = EURO10_DGMGRL.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = EURO11)
    )
    (SID_DESC =
      (global_dbname = EURO10_DGB.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = EURO11)
    )
    (SID_DESC =
      (global_dbname = EURO10.emilianofusaglia.net)
      (ORACLE_HOME = /u01/oracle/product/11.1.0.7)
      (sid_name = EURO11)
    )
  )

##########################################
## TNS Entries Primary & Standby Cluster
##########################################
EURO11.emilianofusaglia.net =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lneuron01-vip.emilianofusaglia.net)(PORT = 1526))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = EURO10.emilianofusaglia.net)
      (INSTANCE_NAME = EURO11)
    )
  )

EURO10.emilianofusaglia.net =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = lneuron01-vip.emilianofusaglia.net)(PORT = 1526))
      (FAILOVER = on)
      (LOAD_BALANCE = on)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = EURO10.emilianofusaglia.net)
    )
  )

USA10.emilianofusaglia.net =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = lnusan01-vip.emilianofusaglia.net)(PORT = 1526))
      (ADDRESS = (PROTOCOL = TCP)(HOST = lnusan02-vip.emilianofusaglia.net)(PORT = 1526))
      (ADDRESS = (PROTOCOL = TCP)(HOST = lnusan03-vip.emilianofusaglia.net)(PORT = 1526))
      (FAILOVER = on)
      (LOAD_BALANCE = on)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = USA10.emilianofusaglia.net)
    )
  )

##########################################
## Standby Controlfile Setup ##
##########################################
alter database create standby controlfile as '/u01/oracle/emiliano/USA10.stby.ctl';

# scp /u01/oracle/emiliano/USA10.stby.ctl lneuron01:/tmp

### Execute the following steps:
Startup nomount pfile='/u01/oracle/emiliano/PFILES/EURO10_for_GD.ora';
Create the spfile on ASM
Startup nomount exclusive;

## Do not RESTORE the Controlfile: it is automatically DONE by the CLONE procedure,
## which also updates the control_files parameter in the SPFILE!

rman target /
RMAN> restore controlfile from '/u01/oracle/emiliano/USA10.stby.ctl';

##########################################
## Duplicate the Database
##########################################
##From the Primary DB restore the DB to the Standby Side:
rman
connect target sys/xxxxxxx@USA10                          ----PS No domain for Target
connect auxiliary sys/xxxxxxx@EURO10.emilianofusaglia.net ----PS Use domain for Auxiliary

run {
allocate channel p1 type disk;
allocate auxiliary channel s1 type disk;
duplicate target database for standby from active database dorecover;
}

--or

run {
allocate channel p1 type disk;
allocate channel p2 type disk;
allocate channel p3 type disk;
allocate channel p4 type disk;
allocate auxiliary channel s1 type disk;
allocate auxiliary channel s2 type disk;
allocate auxiliary channel s3 type disk;
allocate auxiliary channel s4 type disk;
duplicate target database for standby from active database dorecover;
}

##########################################
## Register the Standby Database to CRS
##########################################
srvctl add database -d EURO10 -o $ORACLE_HOME
srvctl add instance -d EURO10 -i EURO11 -n lneuron01

##########################################
## Data Guard Broker Configuration
##########################################

--Primary
alter system set dg_broker_config_file1 = '+DATA1/USA10/DATAGUARDCONFIG/brokerconfig01.dat';
alter system set dg_broker_config_file2 = '+DATA1/USA10/DATAGUARDCONFIG/brokerconfig02.dat';

--Standby
alter system set dg_broker_config_file1 = '+DATA1/EURO10/DATAGUARDCONFIG/brokerconfig01.dat';
alter system set dg_broker_config_file2 = '+DATA1/EURO10/DATAGUARDCONFIG/brokerconfig02.dat';

--On both databases
alter system set dg_broker_start=true;

--From a primary node connect to the broker and create the configuration
#dgmgrl
DGMGRL> connect sys/xxxxxxxx@USA10.emilianofusaglia.net
Connected.

create configuration 'CONFDG10' as primary database is 'USA10' connect identifier is USA10.emilianofusaglia.net;

add database 'EURO10' as connect identifier is EURO10.emilianofusaglia.net;

edit database 'USA10' set property 'LogXptMode' = 'SYNC';
edit database 'EURO10' set property 'LogXptMode' = 'SYNC';
--or
edit database 'USA10' set property 'LogXptMode' = 'ASYNC';
edit database 'EURO10' set property 'LogXptMode' = 'ASYNC';

edit configuration set protection mode as maxprotection;
--edit configuration set protection mode as maxavailability;
--edit configuration set protection mode as maxperformance;

enable configuration;

edit database 'USA10' set property 'NetTimeout' = '20';
edit database 'EURO10' set property 'NetTimeout' = '20';
exit;

DGMGRL> SWITCHOVER to "EURO10";

--Stop Recovery
edit database 'EURO10' set state = 'APPLY-OFF';

--Start Recovery
edit database 'EURO10' set state = 'APPLY-ON';

--Enabling ArchiveLog Tracing on Primary and Standby; good for troubleshooting!
edit instance 'USA11' on database 'USA10' set property 'LogArchiveTrace' = '1';
edit instance 'USA12' on database 'USA10' set property 'LogArchiveTrace' = '1';
edit instance 'USA13' on database 'USA10' set property 'LogArchiveTrace' = '1';
edit instance 'EURO11' on database 'EURO10' set property 'LogArchiveTrace' = '6345';

##############################################################################

###################################################
## How to configure DATA GUARD BROKER
###################################################
# Create the Data Guard Broker files

--Primary
alter system set dg_broker_config_file1 = '+DATA1/TEFOXTR/DATAGUARDCONFIG/brokerconfig01.dat';
alter system set dg_broker_config_file2 = '+FRA1/TEFOXTR/DATAGUARDCONFIG/brokerconfig02.dat';

--Standby
alter system set dg_broker_config_file1 = '+DATA1/TEFOXZH/DATAGUARDCONFIG/brokerconfig01.dat';
alter system set dg_broker_config_file2 = '+FRA1/TEFOXZH/DATAGUARDCONFIG/brokerconfig02.dat';

--Start the broker on both databases
alter system set dg_broker_start=true;

--From the primary database connect to the broker and create the configuration
#dgmgrl
DGMGRL> connect sys/xxxxxxx@TEFOXTR.emilianofusaglia.net

--Create the configuration for the primary database

create configuration 'BRKTEFOX' as primary database is 'TEFOXTR' connect identifier is TEFOXTR_DGBHA.emilianofusaglia.net;

--Add the standby database
add database 'TEFOXZH' as connect identifier is TEFOXZH_DGBHA.emilianofusaglia.net;

--Set up the properties
edit database 'TEFOXTR' set property 'LogXptMode' = 'SYNC';
edit database 'TEFOXZH' set property 'LogXptMode' = 'SYNC';
--or
edit database 'TEFOXTR' set property 'LogXptMode' = 'ASYNC';
edit database 'TEFOXZH' set property 'LogXptMode' = 'ASYNC';

edit configuration set protection mode as maxprotection;
--or
edit configuration set protection mode as maxavailability;
--or
edit configuration set protection mode as maxperformance;

edit database 'TEFOXTR' set property 'NetTimeout' = '20';
edit database 'TEFOXZH' set property 'NetTimeout' = '20';

edit database 'TEFOXTR' set property 'Binding' = 'MANDATORY';
edit database 'TEFOXZH' set property 'Binding' = 'MANDATORY';

enable configuration;

--Switchover command:
DGMGRL> SWITCHOVER to "TEFOXZH";

--Stop Recovery
edit database 'TEFOXZH' set state = 'APPLY-OFF';

--Start Recovery
edit database 'TEFOXZH' set state = 'APPLY-ON';
edit database 'TEFOXZH' set state = 'APPLY-ON' WITH APPLY INSTANCE = 'TEFOXZH1';

--Enable ArchiveLog Tracing on Primary and Standby for Troubleshooting
edit instance 'TEFOXTR1' on database 'TEFOXTR' set property 'LogArchiveTrace' = '1';
edit instance 'TEFOXZH1' on database 'TEFOXZH' set property 'LogArchiveTrace' = '6345';
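Once the configuration is enabled, its health can be verified from DGMGRL. A minimal check sequence, as a sketch (database and instance names follow the TEFOX example above):

```
DGMGRL> show configuration;
DGMGRL> show database verbose 'TEFOXZH';
DGMGRL> show instance verbose 'TEFOXZH1' on database 'TEFOXZH';
```

The first command reports the overall configuration status (SUCCESS, WARNING, or ERROR); the verbose forms list the per-database and per-instance properties set above.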

######################################################### ## How to display Oracle Cluster name #########################################################

Oracle Clusterware includes the utility cemutlo, which provides the cluster name and version.

[grid@lnxcld01 ~]$ cemutlo -h
Usage: /GRID_INFRA/product/11.2.0.3/bin/cemutlo.bin [-n] [-w]
where:
-n prints the cluster name
-w prints the clusterware version in the following format:
<major_version>:<minor_version>:<vendor_info>

--Cluster Name
[grid@lnxcld01 ~]$ cemutlo -n
cloud01

--Cluster Version
[grid@lnxcld01 ~]$ cemutlo -w
2:1:
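As an alternative, the olsnodes utility shipped with Grid Infrastructure can also print the cluster name via its -c flag; a sketch of the expected session (hostname and output follow the example above):

```
[grid@lnxcld01 ~]$ olsnodes -c
cloud01
```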


Oracle ASM 11g: Does the ASMCMD cp Command Really Work?


Posted by Alex Gorbachev on Apr 8, 2008

Update 21-Sep-2011: It definitely was a bug in an early 11g release, as the cp command does work in the latest release. Nevertheless, triple-check the results if you are using it as part of your backup strategy. Don't forget to test regularly!

Since the introduction of ASM in Oracle 10g Release 1, every ASM administrator has been dreaming of a simple command-line tool to copy files between ASM diskgroups and other filesystems. Oracle ASM 10g Release 2 added the handy asmcmd utility, but even though everyone expected a copy command there, it had not been implemented. The only way to copy files to or from an ASM diskgroup was either to use RMAN, to configure XDB for FTP access, or to use the DBMS_FILE_TRANSFER package. No wonder the cp command is the most popular addition to the asmcmd tool in Oracle ASM 11g: the hardest barrier to convincing my customers to use ASM has been the inability to access the files and copy them to the OS filesystem using a command-line copy command. Customers wanted to feel the files and be able to easily manipulate them. While working on a Collaborate 08 presentation on Oracle 11g new features out of the box, I was verifying the new commands in Oracle ASM 11g's asmcmd utility. It turned out that copying files from or to ASM is still a problem. First, I tried to copy a single text file to an ASM diskgroup:
ASMCMD> cp /home/oracle/.bash_profile +dg2/test.file
source /home/oracle/.bash_profile
target +dg2/test.file
ASMCMD-08012: can not determine file type for file->'/home/oracle/.bash_profile'
ORA-15056: additional error message
ORA-17503: ksfdopn:DGGetFileAttr15 Failed to open file /home/oracle/.bash_profile
ORA-27046: file size is not a multiple of logical block size
Additional information: 1
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 207
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)

Hmm . . . okay. Let's try to do it in multiples of the diskgroup block size:


ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB...
MOUNTED  EXTERN  N         512   4096  1048576      2048...
MOUNTED  NORMAL  N         512   4096  2097152       200...
ASMCMD> exit
[oracle@lh8 ~]$ dd if=/dev/zero of=/home/oracle/test2.file bs=4k count=10
10+0 records in
10+0 records out
[oracle@lh8 ~]$ asmcmd
ASMCMD> cp /home/oracle/test2.file +DG2
source /home/oracle/test2.file
target +DG2/test2.file
ASMCMD-08012: can not determine file type for file->'/home/oracle/test2.file'
ORA-15056: additional error message
ORA-19762: invalid file type DGGetFileAttr20
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 207
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)

Oops. It seems I can't put just any file into ASM. Not that I'm very surprised: I expected that ASM would automagically try to place all files based on OMF standards and templates, but there are only templates and rules for database files in 11g. Right, let me try to back up a controlfile to a filesystem and copy it to ASM:
RMAN> backup format '/tmp/backup.ctl' current controlfile;
...
channel ORA_DISK_1: finished piece 1 at 06-APR-08
piece handle=/tmp/backup.ctl tag=TAG20080406T202034 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
...
ASMCMD> cp /tmp/backup.ctl +dg2/backup.ctl
source /tmp/backup.ctl
target +dg2/backup.ctl
copying file(s)...
file, +DG2/backup.ctl, copy committed.
ASMCMD> ls -l +dg2/backup.ctl
Type  Redund  Striped  Time  Sys  Name
                             N    backup.ctl => +DG2/ASMTESTING/BACKUPSET/TESTING.256.651356493

Alright. That seems to work, except that ASM chose a bizarre location. For some reason I'm not surprised again: I kind of expected it to place the file somewhere under DB_UNKNOWN, like RMAN does when recovering an SPFILE from autobackup. In the best Oracle traditions of keeping things consistent, ASMTESTING seems reasonable. Was it a hard-coded leftover from a test implementation of the cp command? I wouldn't be surprised; it's déjà vu. We can be patient and forgive this in a first release. Let's try to copy the file to a filesystem and back:
ASMCMD> cp +dg2/backup.ctl /tmp/backup.ctl2
source +dg2/backup.ctl
target /tmp/backup.ctl2
copying file(s)...
file, /tmp/backup.ctl2, copy committed.
ASMCMD> cp /tmp/backup.ctl2 +dg2/backup.ctl2
source /tmp/backup.ctl2
target +dg2/backup.ctl2
ASMCMD-08012: can not determine file type for file->'/tmp/backup.ctl2'
ORA-15056: additional error message
ORA-19762: invalid file type DGGetFileAttr20
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 207
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)

Now, that is odd. Perhaps I need to register this file with the database (i.e. in the controlfile)? Let's ask RMAN to catalog it:
[oracle@lh8 ~]$ ls -l /tmp/backup.ctl*
-rw-r----- 1 oracle dba 9797632 Apr  6 20:20 /tmp/backup.ctl
-rw-r----- 1 oracle dba 9797632 Apr  6 20:22 /tmp/backup.ctl2
...

RMAN> catalog start with '/tmp/backup.ctl2';
...
List of Files Which Where Not Cataloged
=======================================
File Name: /tmp/backup.ctl2
RMAN-07517: Reason: The file header is corrupted

So the file got corrupted while copying from ASM to a filesystem? Okay. Let's try an ASM-to-ASM copy:
ASMCMD> cp +dg2/backup.ctl +dg2/backup.ctl3
source +dg2/backup.ctl
target +dg2/backup.ctl3
copying file(s)...
file, +DG2/backup.ctl3, copy committed.
...
RMAN> catalog start with '+dg2';
...
List of Files Which Where Not Cataloged
=======================================
File Name: +dg2/backup.ctl
RMAN-07517: Reason: The file header is corrupted
File Name: +dg2/backup.ctl3
RMAN-07517: Reason: The file header is corrupted

Hey, both files are actually corrupted, so the corruption occurred in the first copy, from the filesystem to ASM. Alright. Let's try to simply copy a current controlfile within ASM:
ASMCMD> cp +dg1/db11g/controlfile/Current.256.651275203 +dg2/asm_copy
source +dg1/db11g/controlfile/Current.256.651275203
target +dg2/asm_copy
copying file(s)...
file, +DG2/asm_copy, copy committed.
ASMCMD> ls -l +dg2/asm_copy
Type  Redund  Striped  Time  Sys  Name
                             N    asm_copy => +DG2/ASMTESTING/CONTROLFILE/TESTING.258.651358725
...
RMAN> catalog start with '+dg2';
...
List of Files Which Where Not Cataloged
=======================================
File Name: +dg2/asm_copy
RMAN-07517: Reason: The file header is corrupted

These results are disappointing: I couldn't make the cp command work even a single time. I should note that I did ask my good old friends, Metalink and Google, about ASMCMD-08012 and the like, but they came back empty. If anyone has been able to test the ASMCMD cp command in 11g, please share your experience.
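Until the cp command can be trusted, the pre-11g alternatives mentioned at the top of the article still apply. A minimal sketch of copying a file into ASM with the DBMS_FILE_TRANSFER package, run from SQL*Plus as a privileged user (the directory object names and paths here are illustrative, not from the article; the ASM target directory must already exist, e.g. created with asmcmd's mkdir, and the file size must be a multiple of 512 bytes):

-- Directory objects for the source filesystem path and the ASM target.
create directory src_dir as '/tmp';
create directory dst_dir as '+DG2/files';

-- Copy the backup piece from the filesystem into the ASM diskgroup.
begin
  dbms_file_transfer.copy_file(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'backup.ctl',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'backup.ctl');
end;
/

The same package also works in the other direction (ASM to filesystem), which made it the usual fallback before asmcmd gained a working cp.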
