http://samadhandba.wordpress.com/category/administration/page/2/
BACKUP AND RECOVERY SCENARIOS: Complete Recovery With RMAN Backup
In a previous post I covered complete recovery with user-managed backups; here we will see complete recovery using RMAN backups.
You can perform complete recovery in the following five situations:
1. Complete Closed Database Recovery. System datafile is missing
2. Complete Open Database Recovery. Non system datafile is missing
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing
4. Recovery of a Datafile that has no backups.
5. Restore and Recovery of a Datafile to a different location.
1. Complete Closed Database Recovery. System datafile is missing.
In this case complete recovery is performed: only the system datafile is missing, so the database can be opened without resetting the redo logs.
1. rman target /
2. startup mount;
3. restore database; (or restore datafile <file#>;)
4. recover database; (or recover datafile <file#>;)
5. alter database open;
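The five steps above can also be issued as a single RMAN session; the following is only a sketch (datafile 1 is normally the SYSTEM datafile, but verify the file number against V$DATAFILE in your own database):

```sql
rman target /
RMAN> startup mount;
RMAN> run {
  restore datafile 1;
  recover datafile 1;
  sql 'alter database open';
}
```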
workshop1:
SQL> create user sweety identified by sweety;
User created.
Grant succeeded.
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             130024128 bytes
Database Buffers          310378496 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/u01/app/oracle/oradata/testdb/system01.dbf'
Database dismounted.
ORACLE instance shut down.
SQL>
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             130024128 bytes
Database Buffers          310378496 bytes
Redo Buffers                2973696 bytes

RMAN>

SQL> conn sys/oracle as sysdba;
Connected.
SQL> col name format a45
SQL> select name, status from v$datafile;

NAME                                          STATUS
--------------------------------------------- -------
/u01/app/oracle/oradata/testdb/system01.dbf   SYSTEM
/u01/app/oracle/oradata/testdb/undotbs01.dbf  ONLINE
/u01/app/oracle/oradata/testdb/sysaux01.dbf   ONLINE
/u01/app/oracle/oradata/testdb/users01.dbf    ONLINE
/u03/oradata/test01.dbf                       ONLINE

USERNAME
------------------------------
SWEETY
2. Complete Open Database Recovery. Non system datafile is missing, database is up.
1. rman target /
2. sql 'alter tablespace <tablespace_name> offline immediate';
   or
   sql 'alter database datafile <file#> offline';
3. restore datafile 3;
4. recover datafile 3;
5. sql 'alter tablespace <tablespace_name> online';
   or
   sql 'alter database datafile <file#> online';
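As a concrete sketch of the steps above, assuming the missing file is datafile 3 (check V$DATAFILE for the real file number in your database):

```sql
rman target /
RMAN> run {
  sql 'alter database datafile 3 offline';
  restore datafile 3;
  recover datafile 3;
  sql 'alter database datafile 3 online';
}
```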
workshop2:
SQL> conn sweety/sweety;
Connected.
SQL> create table demo(id number);
Table created.
1 row created.
SQL> commit;
Commit complete.
SQL> select username, default_tablespace from dba_users
  2  where username='SWEETY';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
SWEETY                         USERS

System altered.

RMAN> exit

ID
----------
       123

SQL>
workshop3:
SQL> conn sweety/sweety;
Connected.
Table created.
1 row created.
SQL> commit;
Commit complete.
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
Database altered.
Database altered.
RMAN> exit
Database altered.

TESTID
----------
     54321
4. Recovery of a Datafile that has no backups (database is up).
If a non system datafile that was not backed up since the last backup is missing, recovery can still be performed, provided all archived logs created since the creation of the missing datafile exist. Since the database is up, you can check the tablespace name and take it offline. The OFFLINE IMMEDIATE option is used to avoid updating the datafile header.
Prerequisites: all relevant archived logs.
1. sqlplus '/ as sysdba'
2. alter tablespace <tablespace_name> offline immediate;
3. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
4. exit
5. rman target /
6. recover tablespace <tablespace_name>;
7. sql 'alter tablespace <tablespace_name> online';
If the CREATE DATAFILE command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
Restriction: the controlfile creation time must be earlier than the datafile creation time.
For more detail, refer to the previous blog post (user-managed complete recovery).
workshop4:
SQL> create user john identified by john
  2  default tablespace testing;
User created.
Grant succeeded.
Table created.
1 row created.
SQL> commit;
Commit complete.

TESTID
----------
      1001

System altered.
Tablespace altered.
---if you want to create the datafile in the same location
SQL> alter database create datafile '/u03/oradata/test01.dbf';
Database altered.
Database altered.
[oracle@cdbs1 ~]$ rman target /
Tablespace altered.

TESTID
----------
      1001
5. Restore and Recovery of a Datafile to a different location. Database is up.
If a non system datafile is missing and its original location is not available, the file can be restored to a different location and recovery performed.
Prerequisites: all relevant archived logs and a complete cold or hot backup.
1. Use OS commands to restore the missing or corrupted datafile to the new location, e.g.:
   cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile
   '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. rman target /
5. recover tablespace <tablespace_name>;
6. sql 'alter tablespace <tablespace_name> online';
workshop5:
Follow workshop4 for workshop 5, except for creating a new datafile: here you copy the most recent backup of the file to the new disk location and perform recovery. The rest of the procedure is the same.
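Instead of an OS copy plus RENAME DATAFILE, RMAN itself can relocate the file with SET NEWNAME and SWITCH. A sketch, assuming datafile 4 of the USERS tablespace is being moved to /u03/oradata (the file number and paths are examples):

```sql
RMAN> run {
  sql 'alter tablespace users offline immediate';
  set newname for datafile 4 to '/u03/oradata/users01.dbf';
  restore datafile 4;
  switch datafile 4;      # point the controlfile at the restored copy
  recover datafile 4;
  sql 'alter tablespace users online';
}
```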
BACKUP AND RECOVERY SCENARIOS
Complete Recovery With User-managed Backup.
You can perform complete recovery in the following five situations.
You cannot recover or create a datafile without a backup in the following situation:
SQL> select controlfile_created from v$database;

CONTROLFILE_CREATED
-------------------
07-MAY-2010 01:23:43

SQL> select creation_time, name from v$datafile;

CREATION_TIME        NAME
-------------------- ---------------------------------------------
5. Restore and Recovery of a Datafile to a different location (disk corrupted: restore the recent backup and recover the datafile in a new disk location).
1. Use OS commands to restore the missing datafile from the backup, e.g.:
   cp -p /user/backup/uman/system01.dbf /user/oradata/u01/dbtst/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;
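In SQL*Plus you can let RECOVER apply the suggested archived logs without prompting for each one by enabling autorecovery; a sketch of the same steps:

```sql
SQL> startup mount
SQL> set autorecovery on
SQL> recover datafile 1;
SQL> alter database open;
```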
workshop1: system datafile recovery with recent backup
SQL> create user rajesh identified by rajesh;
User created.
SQL> grant dba to rajesh;
Grant succeeded.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
--I manually deleted the datafile system01.dbf, for testing purposes only
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/u01/app/oracle/oradata/testdb/system01.dbf'
Database dismounted.
ORACLE instance shut down.
SQL> host cp /u01/app/oracle/oradata/backup/system01.dbf /u01/app/oracle/oradata/testdb/system01.dbf
--system datafile restored from recent backup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes
Database mounted.
SQL> recover datafile 1;
ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc
Log applied.
Media recovery complete.
SQL> alter database open;
Database altered.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     12
Next log sequence to archive   14
Current log sequence           14

USERNAME
------------------------------
RAJESH
workshop2: Non-system datafile recovery from recent backup when database is open
SQL> ALTER USER rajesh DEFAULT TABLESPACE users;
User altered.
Connected.
SQL> create table demo(id number);
Table created.
1 row created.
SQL> commit;
Commit complete.

ID
----------
       123

System altered.
SQL> /
System altered.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16
System altered.
Tablespace altered.
Log applied.
Media recovery complete.
SQL> alter tablespace users online;
Tablespace altered.

ID
----------
       123
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing.
If a non system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Prerequisites: a closed or open database backup and archived logs.
1. startup; (you will get ORA-01157 and ORA-01110 with the name of the missing datafile; the database will remain mounted)
2. alter database datafile 3 offline; (the tablespace cannot be used because the database is not open)
3. alter database open;
4. Use OS commands to restore the missing or corrupted datafile to its original location, e.g.:
   cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
5. recover datafile 3;
6. alter database datafile 3 online;
workshop3:
SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter system switch logfile;
System altered.

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
RAJESH                         USERS

Table created.
1 row created.
SQL> commit;
Commit complete.

ID
----------
       786

Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> --manually deleting the users01.dbf datafile from the testdb folder
warning: for testing purposes only
SQL> host rm -rf /u01/app/oracle/oradata/testdb/users01.dbf
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             142607040 bytes
Database Buffers          297795584 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
Database altered.
Database altered.
Log applied.
Media recovery complete.
Database altered.

ID
----------
       786
4. Recovery of a Missing Datafile that has no backups (database is open).
If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.
If the CREATE DATAFILE command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
Restriction: the datafile must have been created after the controlfile (i.e., the controlfile creation time is earlier than the datafile creation time).
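You can check this restriction in advance by comparing each datafile's creation time with the controlfile creation time; a datafile created before the controlfile cannot be recreated with ALTER DATABASE CREATE DATAFILE. A sketch:

```sql
SQL> select f.file#, f.name, f.creation_time, d.controlfile_created
     from v$datafile f, v$database d
     where f.creation_time < d.controlfile_created;
-- any rows returned are datafiles this technique cannot recreate
```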
workshop 4: Missing Non-system Datafile having no backups
Session altered.

CONTROLFILE_CREATED
-------------------
07-MAY-2010 16:27:22

Tablespace created.

CREATION_TIME        NAME
-------------------- ---------------------------------------------

CONTROLFILE_CREATED
-------------------
07-MAY-2010 16:27:22

User created.
Grant succeeded.

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
JAY                            TESTING

Table created.
1 row created.
SQL> commit;
Commit complete.

ID
----------
       321

Connected.
SQL> host rm -rf /u01/app/oracle/oradata/testdb/test01.dbf
---manually deleting datafile test01.dbf for testing purposes

ID
----------
       321

System altered.
Database altered.
----TO CREATE A NEW RECOVERED DATAFILE IN THE SAME LOCATION:
SQL> alter database create datafile '/u01/app/oracle/oradata/testdb/test01.dbf';
Database altered.
----TO CREATE A NEW RECOVERED DATAFILE IN A DIFFERENT LOCATION:
SQL> alter database create datafile '/u01/app/oracle/oradata/testdb/test01.dbf' as '/u03/oradata/test01.dbf';
Database altered.
Log applied.
Media recovery complete.
SQL> alter database datafile 5 online;
Database altered.

ID
----------
       321

SQL>
1. Use OS commands to restore the missing or corrupted datafile to the new location, e.g.:
   cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile
   '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. recover tablespace <tablespace_name>;
5. alter tablespace <tablespace_name> online;
workshop 5:
SQL> create user lachu identified by lachu
  2  default tablespace users;
User created.
Grant succeeded.
Table created.
1 row created.
SQL> commit;
Commit complete.
Connected.
SQL> ---manually deleting the users01.dbf datafile for testing purposes
SQL> host rm -rf '/u01/app/oracle/oradata/testdb/users01.dbf'

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
                               TABLE

Database altered.
SQL> alter tablespace users rename datafile
  2  '/u01/app/oracle/oradata/testdb/users01.dbf' to '/u03/oradata/users01.dbf';
Tablespace altered.
Log applied.
Database altered.

NAME
---------------------------------------------
/u01/app/oracle/oradata/testdb/system01.dbf
/u01/app/oracle/oradata/testdb/undotbs01.dbf
/u01/app/oracle/oradata/testdb/sysaux01.dbf
/u03/oradata/users01.dbf  ----------restored in the new location (disk)

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
                               TABLE

ID
----------
       123
Block media recovery recovers an individual corrupt data block, or a set of data blocks, within a datafile. When only a small number of blocks require media recovery, you can selectively restore and recover the damaged blocks rather than whole datafiles.
It is possible to perform block media recovery using only OS-based hot backups, with no RMAN backups at all.
Look at the following demonstration. Here we:
1. Create a new user antony and a table corrupt_test in that schema.
2. Take an OS (hot) backup of users01.dbf, where the table resides.
3. Corrupt the data in that table and get a block corruption error.
4. Connect with RMAN and try the BLOCKRECOVER command. As we have no RMAN backup, we get an error.
5. Catalog the hot backup in the RMAN repository.
6. Use the BLOCKRECOVER command to recover the corrupted data block using the cataloged hot backup of the datafile.
7. Query the table and get the data back!
Here is the scenario:
SQL> CREATE USER antony IDENTIFIED BY antony;
User created.
Grant succeeded.
Table created.
1 row created.
SQL> COMMIT;
Commit complete.

SEGMENT_NAME    TABLESPACE_NAME
--------------- ------------------------------
CORRUPT_TEST    USERS

SEGMENT_NAME    TABLESPACE_NAME                NAME
--------------- ------------------------------ ----------------------------------------
CORRUPT_TEST    USERS                          /u01/app/oracle/oradata/orcl/users01.dbf

Tablespace altered.
Tablespace altered.

HEADER_BLOCK
------------
          67

SQL>
System altered.
SQL> EXIT

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of blockrecover command at 05/06/2010 01:42:25
RMAN-06026: some targets not found - aborting restore

RMAN> CATALOG DATAFILECOPY '/u01/app/oracle/oradata/backup/users01_backup.dbf';
cataloged datafile copy
datafile copy filename=/u01/app/oracle/oradata/backup/users01_backup.dbf recid=1 stamp=718249432
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 68;
RMAN> EXIT

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

ID
----------
       123
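When regular RMAN backups do exist, you do not even need to know the block numbers in advance: BACKUP VALIDATE populates V$DATABASE_BLOCK_CORRUPTION, and BLOCKRECOVER can work directly from that list. A sketch:

```sql
RMAN> backup validate database;
SQL> select file#, block#, blocks from v$database_block_corruption;
RMAN> blockrecover corruption list;
```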
Command Line History and Editing in SQL*Plus and RMAN on Linux
The rlwrap (readline wrapper) utility provides command history and editing of keyboard input for any command. This article explains how to install rlwrap and set it up for SQL*Plus and RMAN.
[oracle@cdbs1 ~]$ rlrman
Recovery Manager: Release 10.2.0.1.0 - Production on Wed May 5 17:14:57 2010
RMAN> exit
Instead of rlrman and rlsqlplus you can use your own alias names for rman and sqlplus. You can then use the up and down arrow keys to recall previous commands and queries.
[oracle@cdbs1 ~]$ alias rajesh='rlwrap rman'
[oracle@cdbs1 ~]$ rajesh
Recovery Manager: Release 10.2.0.1.0 - Production on Wed May 5 17:15:27 2010
RMAN>
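To make the wrappers permanent, append the aliases to your shell profile. A side-effect-free sketch that writes to a stand-in file (use ~/.bashrc in practice; rlwrap must already be installed, e.g. via yum install rlwrap):

```shell
# Append rlwrap aliases to a profile file; PROFILE is a stand-in for ~/.bashrc.
PROFILE=./bashrc.example
cat >> "$PROFILE" <<'EOF'
alias rlsqlplus='rlwrap sqlplus'
alias rlrman='rlwrap rman'
EOF
# Show what was added.
cat "$PROFILE"
```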
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     7
Current log sequence           7

NAME
---------
ORCL
SQL>
ASM Diskgroups
Create Diskgroup
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;
Add disks
ALTER DISKGROUP DATA ADD DISK '/dev/sda3';
Drop a disk
ALTER DISKGROUP DATA DROP DISK DATA_0001;
Rebalance diskgroup
ALTER DISKGROUP DATA REBALANCE;
Check Diskgroup
ALTER DISKGROUP DATA CHECK;
ALTER DISKGROUP DATA CHECK NOREPAIR;
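A long-running rebalance can be monitored from V$ASM_OPERATION; a sketch (EST_MINUTES is the estimate of time remaining):

```sql
SQL> select group_number, operation, state, power, est_minutes
     from v$asm_operation;
```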
srvctl commands
ADD
srvctl add asm -n rac3 -i +ASM3 -o /opt/oracle/app/product/10.2.0/asm
ENABLE
srvctl enable asm -n rac3 -i +ASM3
DISABLE
srvctl disable asm -n rac3 -i +ASM3
START
srvctl start asm -n rac3
STOP
srvctl stop asm -n rac3
CONFIG
srvctl config asm -n rac1
REMOVE
srvctl remove asm -n rac3 -i +ASM3
MODIFY
srvctl modify asm -n rac1 -o <new_oracle_home>
asmcmd Commands
cd ---------- changes the current directory to the specified directory.
du ---------- displays the total disk space occupied by ASM files in the specified ASM directory and all its subdirectories, recursively.
find -------- lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
ls +data/testdb -- lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct -------- lists information about current ASM clients.
rm -f ------- deletes the specified ASM files or directories.
rmalias ----- deletes the specified alias, retaining the file that the alias points to.
lsdsk ------- lists disks visible to ASM.
md_backup --- creates a backup of all of the mounted disk groups.
md_restore -- restores disk groups from a backup.
remap ------- repairs a range of physical blocks on a disk.
cp ---------- copies files into and out of ASM:
**ASM diskgroup to OS file system.
**OS file system to ASM diskgroup.
**ASM diskgroup to another ASM diskgroup on the same server.
**ASM diskgroup to an ASM diskgroup on a remote server.
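A sketch of cp in both directions (the +DATA path structure and file names are examples; list the real fully qualified names with ls first):

```shell
asmcmd cp +DATA/testdb/datafile/users01.dbf /u03/backup/users01.dbf
asmcmd cp /u03/backup/users01.dbf +DATA/testdb/
```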
DISABLE
alter system stop rolling migration;
*.log_archive_dest_1='LOCATION=+DATA'
*.log_file_name_convert='+DATA/VISKDR','+DATA/VISK' ##added for DG
solution:
Database
  Name:    rajesh
  Role:    PRIMARY
  Enabled: YES
Database
  Name:    jeyanthi
  Role:    PHYSICAL STANDBY
  Enabled: NO
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "jeyanthi" on database "jeyanthi"
Starting instance "jeyanthi"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "jeyanthi" ...
Reinstatement of database "jeyanthi" succeeded
DGMGRL> show configuration verbose;
Configuration
  Name:            jeyanthi
  Enabled:         YES
  Protection Mode: MaxAvailability
  - Primary database
Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac3
then,
stop and start the observer (start it from another machine):
DGMGRL> stop observer
Done.
DGMGRL> connect sys/oracle@jeyanthi
Connected.
DGMGRL> start observer
Observer started
Configuration
  Name:            jeyanthi
  Enabled:         YES
  Protection Mode: MaxAvailability
  - Primary database
Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac2
SUCCESS
Configuration of 10g Data Guard Broker and Observer for Switchover
Configuring Data Guard Broker for switchover: a general review.
In a previous document, "10g Data Guard, Physical Standby Creation, step by step", I described how to implement a Data Guard configuration; in this document I add how to configure the broker and observer, set the configuration to Maximum Availability, and manage switchover from the Data Guard manager, DGMGRL.
Data Guard Broker permits managing a Data Guard configuration either from the Enterprise Manager Grid Control console or from a terminal in command line mode. In this document I will explore command line mode.
Prerequisites include the use of a 10g Oracle server, an spfile on both the primary and the standby, a third server for the observer, and listeners configured to include a service for the Data Guard Broker.
The Environment
2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux; the primary and standby databases are located on these servers.
1 Linux server, RH Linux 2.6.9-42.ELsmp x86_64 GNU/Linux; the Data Guard Broker observer is located on this server.
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
ssh is configured for user oracle on both nodes
Oracle Home is on an identical path on both nodes
Primary database: ANTONY
Standby database: JOHN
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = john)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
(SID_NAME = john)
)
(SID_DESC =
(SID_NAME= john)
(GLOBAL_DBNAME = john_DGMGRL)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
)
)
Tnsnames.ora on Node 1, 2 and the observer node
ANTONY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = antony_DGMGRL)
)
)
JOHN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = john_DGMGRL)
)
)
Setup the Broker configuration files
The broker configuration files are automatically created when the broker is started using ALTER SYSTEM SET
DG_BROKER_START=TRUE.
The default destination can be modified using the parameters DG_BROKER_CONFIG_FILE1 and
DG_BROKER_CONFIG_FILE2
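Starting the broker and (optionally) relocating its configuration files is done with ALTER SYSTEM; a sketch (the file paths are examples):

```sql
SQL> alter system set dg_broker_config_file1='/u01/app/oracle/dr1antony.dat' scope=both;
SQL> alter system set dg_broker_config_file2='/u01/app/oracle/dr2antony.dat' scope=both;
SQL> alter system set dg_broker_start=true;
```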
On Primary:
SQL> SHOW PARAMETERS DG_BROKER_CONFIG

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr1antony.dat
dg_broker_config_file2               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr2antony.dat
On standby:
SQL> SHOW PARAMETERS DG_BROKER_CONFIG

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr1john.dat
dg_broker_config_file2               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr2john.dat
Configuration
  Name:            antony
  Enabled:         NO
  Protection Mode: MaxPerformance
Database
  Name:    john
  Role:    PHYSICAL STANDBY
  Enabled: NO
  Properties:
    InitialConnectIdentifier = 'john'
    LogXptMode               = 'ARCH'
    Dependency               = ''
    DelayMins                = '0'
    Binding                  = 'OPTIONAL'
    MaxFailure               = '0'
    MaxConnections           = '1'
    ReopenSecs               = '300'
    NetTimeout               = '180'
    LogShipping              = 'ON'
    PreferredApplyInstance   = ''
    ApplyInstanceTimeout     = '0'
    ApplyParallel            = 'AUTO'
    StandbyFileManagement    = 'auto'
    ArchiveLagTarget         = '0'
    LogArchiveMaxProcesses   = '30'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert        = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    LogFileNameConvert       = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    FastStartFailoverTarget  = ''
    StatusReport             = '(monitor)'
    InconsistentProperties   = '(monitor)'
    InconsistentLogXptProps  = '(monitor)'
    SendQEntries             = '(monitor)'
    LogXptStatus             = '(monitor)'
    RecvQEntries             = '(monitor)'
    HostName                 = 'rac2'
    SidName                  = 'john'
    LocalListenerAddress     = '(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521))'
    StandbyArchiveLocation   = '/u01/app/oracle/oradata/john/arch/'
    AlternateLocation        = ''
    LogArchiveTrace          = '0'
    LogArchiveFormat         = '%t_%s_%r.arc'
    LatestLog                = '(monitor)'
    TopWaitEvents            = '(monitor)'
Database
  Name:    antony
  Role:    PRIMARY
  Enabled: NO
  Properties:
    InitialConnectIdentifier = 'antony'
    LogXptMode               = 'ASYNC'
    Dependency               = ''
    DelayMins                = '0'
    Binding                  = 'OPTIONAL'
    MaxFailure               = '0'
    MaxConnections           = '1'
    ReopenSecs               = '300'
    NetTimeout               = '180'
    LogShipping              = 'ON'
    PreferredApplyInstance   = ''
    ApplyInstanceTimeout     = '0'
    ApplyParallel            = 'AUTO'
    StandbyFileManagement    = 'auto'
    ArchiveLagTarget         = '0'
    LogArchiveMaxProcesses   = '30'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert        = '/u01/app/oracle/oradata/john/, /u01/app/oracle/oradata/antony/'
    LogFileNameConvert       = '/u01/app/oracle/oradata/john/, /u01/app/oracle/oradata/antony/'
    FastStartFailoverTarget  = ''
    StatusReport             = '(monitor)'
    InconsistentProperties   = '(monitor)'
    InconsistentLogXptProps  = '(monitor)'
    SendQEntries             = '(monitor)'
    LogXptStatus             = '(monitor)'
    RecvQEntries             = '(monitor)'
    HostName                 = 'rac1'
    SidName                  = 'antony'
    LocalListenerAddress     = '(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521))'
    StandbyArchiveLocation   = '/u01/app/oracle/oradata/antony/arch/'
    AlternateLocation        = ''
    LogArchiveTrace          = '0'
    LogArchiveFormat         = '%t_%s_%r.arc'
    LatestLog                = '(monitor)'
    TopWaitEvents            = '(monitor)'
Configuration
  Name:            antony
  Enabled:         YES
  Protection Mode: MaxPerformance
Database
  Name:    john
  Role:    PHYSICAL STANDBY
  Enabled: YES
  Properties:
    InitialConnectIdentifier = 'john'
    LogXptMode               = 'ARCH'
    Dependency               = ''
    DelayMins                = '0'
    Binding                  = 'OPTIONAL'
    MaxFailure               = '0'
    MaxConnections           = '1'
    ReopenSecs               = '300'
    NetTimeout               = '180'
    LogShipping              = 'ON'
    PreferredApplyInstance   = ''
    ApplyInstanceTimeout     = '0'
    ApplyParallel            = 'AUTO'
    StandbyFileManagement    = 'auto'
    ArchiveLagTarget         = '0'
    LogArchiveMaxProcesses   = '30'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert        = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    LogFileNameConvert       = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    FastStartFailoverTarget  = ''
    StatusReport             = '(monitor)'
    InconsistentProperties   = '(monitor)'
    InconsistentLogXptProps  = '(monitor)'
    SendQEntries             = '(monitor)'
    LogXptStatus             = '(monitor)'
    RecvQEntries             = '(monitor)'
    HostName                 = 'rac2'
    SidName                  = 'john'
    LocalListenerAddress     = '(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521))'
    StandbyArchiveLocation   = '/u01/app/oracle/oradata/john/arch/'
    AlternateLocation        = ''
    LogArchiveTrace          = '0'
    LogArchiveFormat         = '%t_%s_%r.arc'
    LatestLog                = '(monitor)'
    TopWaitEvents            = '(monitor)'
These are the steps required to enable and check Fast-Start Failover and the observer:
1. Ensure standby redo logs are configured on all databases.
on primary:
SQL> SELECT TYPE, MEMBER FROM V$LOGFILE;

TYPE    MEMBER
------- --------------------------------------------------
ONLINE  /u01/app/oracle/oradata/antony/redo02.log
ONLINE  /u01/app/oracle/oradata/antony/redo01.log
STANDBY /u01/app/oracle/oradata/antony/redoby04.log
STANDBY /u01/app/oracle/oradata/antony/redoby05.log
STANDBY /u01/app/oracle/oradata/antony/redoby06.log

On standby:

TYPE    MEMBER
------- --------------------------------------------------
ONLINE  /u01/app/oracle/oradata/john/redo03.log
ONLINE  /u01/app/oracle/oradata/john/redo02.log
ONLINE  /u01/app/oracle/oradata/john/redo01.log
STANDBY /u01/app/oracle/oradata/john/redoby04.log
STANDBY /u01/app/oracle/oradata/john/redoby05.log
STANDBY /u01/app/oracle/oradata/john/redoby06.log
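If standby redo logs are missing, they can be added on each database; a sketch (the size should match the online redo logs, and 50M here is an assumption):

```sql
SQL> alter database add standby logfile
     '/u01/app/oracle/oradata/antony/redoby04.log' size 50m;
```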
note: if you get "ORA-12514: TNS:listener does not currently know of service requested in connect descriptor", followed by
"You are no longer connected to ORACLE
Please connect again.",
you must start the instance (primary database) manually:
SQL> conn / as sysdba
SQL> startup mount;
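The DGMGRL commands that raise the protection mode and turn on Fast-Start Failover are, in sketch form (SYNC redo transport is required for MaxAvailability):

```sql
DGMGRL> edit database antony set property LogXptMode='SYNC';
DGMGRL> edit database john set property LogXptMode='SYNC';
DGMGRL> edit configuration set protection mode as MaxAvailability;
DGMGRL> enable fast_start failover;
```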
Configuration
  Name:            antony
  Enabled:         YES
  Protection Mode: MaxAvailability
Database
  Name:    john
  Role:    PHYSICAL STANDBY
  Enabled: YES
Configuration
  Name:            antony
  Enabled:         YES
  Protection Mode: MaxAvailability
Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac1
Database
  Name:    antony
  Role:    PRIMARY
  Enabled: YES
Database
  Name:    john
  Role:    PHYSICAL STANDBY
  Enabled: YES
DGMGRL>
Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "john" on database "john"
Shutting down instance "john"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "antony" on database "antony"
Starting instance "antony"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "john" on database "john"
Starting instance "john"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "john"
DGMGRL>
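A switchover like the one in the transcript above is issued from DGMGRL with a single command, after which the broker restarts the instances as shown:

```sql
DGMGRL> switchover to john;
```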
Configuration
  Name:            antony
  Enabled:         YES
  Protection Mode: MaxAvailability
  Databases:
    antony - Physical standby database
           - Fast-Start Failover target
    john   - Primary database
Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac1
Database
  Name:    john
  Role:    PRIMARY
  Enabled: YES
Database
  Name:    antony
  Role:    PHYSICAL STANDBY
  Enabled: YES
oracle   17328     1  0 12:25 ?        00:00:00 ora_smon_whiteowl

Configuration
  Name:            whiteowl
  Enabled:         YES
  Protection Mode: MaxAvailability
Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac1
'blackowl'
  Hostname:            'rac2'
  Instance name:       'blackowl'
  Service Name:        'blackowl'
  Standby Type:        'physical'
  Enabled:             'yes'
  Required:            'yes'
  Default state:       'PRIMARY'
  Intended state:      'PRIMARY'
  PFILE:               ''
  Number of resources: 1
  Resources:
    Name: blackowl (default) (verbose name='blackowl')
Current status for "blackowl":
'whiteowl'
  Hostname:            'rac1'
  Instance name:       'whiteowl'
  Service Name:        'whiteowl'
  Standby Type:        'physical'
  Enabled:             'yes'
  Required:            'yes'
  Default state:       'STANDBY'
  Intended state:      'STANDBY'
  PFILE:               ''
  Number of resources: 1
  Resources:
    Name: whiteowl (default) (verbose name='whiteowl')
Current status for "whiteowl":
Warning: ORA-16817: unsynchronized Fast-Start Failover configuration
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "whiteowl" on database "whiteowl"
Starting instance "whiteowl"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "whiteowl" ...
Reinstatement of database "whiteowl" succeeded
12:26:02.89 Monday, January 25, 2010
then check,
DGMGRL> show configuration verbose;
Configuration
  Name:            whiteowl
  Enabled:         YES
  Protection Mode: MaxAvailability
Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac1
After connecting with the observer, I issued the "show configuration verbose" and "show database verbose 'whiteowl'" commands, and they returned the error message below.
note: here my database name is whiteowl
ORA-16820: Fast-Start Failover observer is no longer observing this database
Cause: A previously started observer was no longer actively observing this database. A significant amount of time elapsed since this database last heard from the observer. Possible reasons were:
- The node where the observer was running was not available.
- The network connection between the observer and this database was not available.
- The observer process was terminated unexpectedly.
Action: Check the reason why the observer cannot contact this database. If the problem cannot be corrected, stop the current observer by connecting to the Data Guard configuration and issuing the DGMGRL "STOP OBSERVER" command. Then restart the observer on another node. You may use the DGMGRL "START OBSERVER" command to start the observer on the other node.
I checked the listeners, the tnsnames.ora files, and tnsping in the primary, standby, and observer machines; then, as described above, I stopped the observer and started it again from the primary database machine. Now it is working fine.
DGMGRL> show configuration verbose;
Configuration
  Name:            whiteowl
  Enabled:         YES
  Protection Mode: MaxAvailability
Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac1
DGMGRL>
Step by Step, document for creating Physical Standby Database, 10g DATA GUARD
10g Data Guard, Physical Standby Creation, step by step
The Environment
2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
ssh is configured for user oracle on both nodes
SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     0
Current log sequence           1
SQL> select name from v$database;

NAME
---------
WHITE

SQL> select name from v$datafile;

NAME
--------------------------------------------
/u01/app/oracle/oradata/white/system01.dbf
/u01/app/oracle/oradata/white/undotbs01.dbf
/u01/app/oracle/oradata/white/sysaux01.dbf
/u01/app/oracle/oradata/white/users01.dbf
SQL> show parameter db_name

NAME        TYPE        VALUE
----------- ----------- ------
db_name     string      white

SQL> select * from v$pwfile_users;

USERNAME    SYSDB SYSOP
----------- ----- -----
SYS         TRUE  TRUE
(output trimmed: v$logfile is queried for GROUP#, TYPE and MEMBER; three redo log groups are added, giving three "Database altered." messages, after which the query returns 6 rows; a pfile is then created from the spfile: "File created.")
Edit the pfile to add the standby parameters, shown here at the end of the file:
white.__db_cache_size=184549376
white.__java_pool_size=4194304
white.__large_pool_size=4194304
white.__shared_pool_size=88080384
white.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/white/adump'
*.background_dump_dest='/u01/app/oracle/admin/white/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/white/control01.ctl','/u01/app/oracle/oradata/white/control02.ctl','/u01/app/oracle/oradata/white/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/white/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='white'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=whiteXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=94371840
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=285212672
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/white/udump'
db_unique_name='white'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(white,black)'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/oradata/white/arch/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=white'
LOG_ARCHIVE_DEST_2='SERVICE=black LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=black'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
#Standby role parameters------------------------------------------
fal_server=black
fal_client=white
standby_file_management=auto
db_file_name_convert='/u01/app/oracle/oradata/black/','/u01/app/oracle/oradata/white/'
log_file_name_convert='/u01/app/oracle/oradata/black/','/u01/app/oracle/oradata/white/'
Once the new parameter file is ready, we create the spfile from it:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORA-16032: parameter LOG_ARCHIVE_DEST_1 destination string cannot be translated
ORA-07286: sksagdi: cannot obtain device information.
Linux Error: 2: No such file or directory
note: create the archive log destination (location) folder specified in the parameter file, and then start up the database.
SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORACLE instance started.
Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes

SQL> create spfile from pfile;
File created.
Enable Archiving
On 10g you can enable archive log mode by mounting the database and executing the archivelog command:
SQL> startup mount
ORACLE instance started.
Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/white/arch/
Oldest online log sequence     2
Current log sequence           2
SQL>
Create an RMAN backup which we will use later to create the standby:
RMAN> backup database format '/u01/app/oracle/backup/%U';
In this simple example, I am backing up the primary database to disk; therefore, I must make the backupsets
available to the standby host if I want to use them as the basis for my duplicate operation:
[oracle@rac2 ~]$ cd /u01/app/oracle/backup
[oracle@rac2 backup]$ ls -lart
total 636080
drwxrwxr-x 9 oracle oinstall
(the backup pieces 06l3v448_1_1 and WHITE_01l3v1uv_1_1.bckp through WHITE_04l3v2jv_1_1.bckp were then copied to the standby host; the transfer progress lines, 100% each at sizes from 1315KB to 507MB, are trimmed here)
NOTE:
The backup folder location must be the same on the primary and standby hosts,
for eg: the /u01/app/oracle/backup folder
On the standby node create the required directories to get the datafiles
mkdir -p /u01/app/oracle/oradata/black
mkdir -p /u01/app/oracle/oradata/black/arch
mkdir -p /u01/app/oracle/admin/black
mkdir -p /u01/app/oracle/admin/black/adump
mkdir -p /u01/app/oracle/admin/black/bdump
mkdir -p /u01/app/oracle/admin/black/udump
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE/onlinelog
initwhite.ora            100% 1704   1.7KB/s   00:00
Copy and edit the primary init.ora to set it up for the standby role, as shown here:
black.__db_cache_size=188743680
black.__java_pool_size=4194304
black.__large_pool_size=4194304
black.__shared_pool_size=83886080
black.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/black/adump'
*.background_dump_dest='/u01/app/oracle/admin/black/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/black/control01.ctl','/u01/app/oracle/oradata/black/control02.ctl','/u01/app/oracle/oradata/black/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/black/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='/u01/app/oracle/oradata/white/','/u01/app/oracle/oradata/black/'
*.db_name='white'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='black'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=blackXDB)'
*.fal_client='black'
*.fal_server='white'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(white,black)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/oradata/black/arch/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=black'
Configure the listener and tnsnames to support the database on both nodes
Configure listener.ora on both servers to hold entries for both databases
#on RAC2 Machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
)
)
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = white)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = white)
)
)
#on rac1 machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
)
)
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = black)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = black)
)
)
WHITE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = white)
)
)
BLACK =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = black)
)
)
#on rac1 machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
)
)
WHITE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = white)
)
)
BLACK =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = black)
)
)
Start the listener and check tnsping on both nodes to both services
#on machine rac1
[oracle@rac1 tmp]$ lsnrctl stop LISTENER_VMRACTEST
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521)))
The command completed successfully
[oracle@rac1 tmp]$ lsnrctl start LISTENER_VMRACTEST
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                21-JAN-2010 00:00:00
Uptime
Trace Level               off
Security
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:21
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:29
STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                21-JAN-2010 00:23:08
Uptime
Trace Level               off
Security
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:14
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:18
Set Up the Environment to Support the Standby Database on the standby node.
Create a passwordfile for the standby:
[oracle@rac1 ~]$ orapwd file=$ORACLE_HOME/dbs/orapwblack password=oracle
note: the sys password must be identical on both the primary and standby databases
Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              92276304 bytes
Database Buffers          188743680 bytes
Redo Buffers                2973696 bytes

File created.
Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              92276304 bytes
Database Buffers          188743680 bytes
Redo Buffers                2973696 bytes
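The standby creation step itself did not survive in this copy. With the backupsets copied to the same path on the standby host, it is typically an RMAN duplicate along these lines (the connect strings are assumptions):

```
[oracle@rac1 ~]$ rman target sys/oracle@white auxiliary /

RMAN> duplicate target database for standby;
```

RMAN restores the datafiles from the copied backupsets into the locations given by db_file_name_convert and mounts the standby control file.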
SQL> alter database recover managed standby database disconnect from session;
Test the configuration by generating archive logs from the primary and then querying the standby to see if the logs
are being successfully applied.
On the Primary:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/white/arch/
Oldest online log sequence     10
Current log sequence           10
On the Standby:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/black/arch/
Oldest online log sequence     8
Next log sequence to archive   0
Current log sequence           10
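To confirm that the logs are not just shipped but also applied, you can additionally query v$archived_log on the standby (a standard query, not part of the original listing):

```
SQL> select sequence#, applied from v$archived_log order by sequence#;
```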
This chapter shows how to create the RMAN catalog, how to register a database with it, and how to review some of
the information contained in the catalog.
The catalog database is usually a small database; it contains and maintains the metadata of all RMAN backups
performed using the catalog.
step1: create a tablespace for storing recovery catalog information in recovery catalog database
here my recovery catalog database is demo1
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
SQL> startup
ORACLE instance started.
step 2: create a user for recovery catalog and assign a tablespace and resources to that user
SQL> create user sai identified by sai default tablespace rman quota unlimited on rman;
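The catalog owner also needs the RECOVERY_CATALOG_OWNER role before it can hold a catalog; the grant is not shown in the captured output, but it is the standard one:

```
SQL> grant recovery_catalog_owner to sai;
```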
step 3: Connect to recovery catalog and register the database with recovery catalog:
[oracle@rac2 bin]$ . oraenv
ORACLE_SID = [oracle] ? demo1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
3. Register database
example:
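The rman invocation and the registration commands did not survive in this copy; against the catalog user created above, they typically look like:

```
[oracle@rac2 bin]$ rman target / catalog sai/sai@demo1

RMAN> create catalog;
RMAN> register database;
```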
verification:
connect to the recovery catalog database demo1 and connect as recovery catalog user sai;
SQL> conn sai/sai;
Connected.
DB_KEY      DB_ID CURR_DBINC_KEY
------ ---------- --------------
     1 3710360247              2
   141 2484479252            142

example:

DB_KEY  DBINC_KEY       DBID NAME  RESETLOGS_CHANGE# RESETLOGS
------ ---------- ---------- ----- ----------------- ---------
     1          2 3710360247 DEMO1            594567 29-DEC-09
   141        142 2484479252 ANTO             522753 30-DEC-09

DB_KEY  DBINC_KEY       DBID NAME  RESETLOGS_CHANGE# RESETLOGS
------ ---------- ---------- ----- ----------------- ---------
     1          2 3710360247 DEMO1            594567 29-DEC-09
on primary database
on standby database
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/archive
Oldest online log sequence 13
Next log sequence to archive 0
Current log sequence 18
shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;
select local.thread#, local.sequence#
from (select thread#, sequence# from v$archived_log
(the query is truncated in the original; its output listed sequences 10 through 15)
Still the archive logs were not applied to the standby database.
Finally I tried recovering the standby database using RMAN, following the el-caro blog document,
and I got a solution; now my primary and standby databases have the same archives.
In 10g you can use an incremental backup and recover the standby using the same to compensate for the missing archivelogs, as shown below.
In the case below, archivelogs with sequence numbers 137 and 138, which are required on the standby, are deleted to simulate this problem.
Step 1: On the standby database, check the current SCN:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
     548283
Step 2: On the primary database create the needed incremental backup from the above SCN
RMAN> backup device type disk incremental from scn 548283 database format '/u01/backup/bkup_%U';
RMAN>
Move the backup files to a new folder called new_incr so that they are the only files in that folder.
Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done
RMAN>
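The recovery command issued after cataloging is not shown above; the standard step for this technique is:

```
RMAN> recover database noredo;
```

NOREDO tells RMAN to apply only the incremental backup and skip archived redo, which is exactly what is needed when the intervening archivelogs are gone.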
From the alert.log you will notice that the standby database is still looking for the old log files
*************************************************
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 137-137
DBID 768471617 branch 600609988
**************************************************
Copy the standby control file to the standby site and restart the standby database in managed recovery mode...
Now check the archive log list on both the primary and standby databases:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/archive
Oldest online log sequence     20
Next log sequence to archive   22
Current log sequence           22
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/archive
Oldest online log sequence     20
Next log sequence to archive   0
Current log sequence           22
SQL>
CHANGING DATABASE DBID
SQL> startup mount
ORACLE instance started.
[oracle@rac1 ~]$
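The command run at this prompt was lost from the capture; changing the DBID is done with the DBNEWID (nid) utility, typically invoked as follows (the sys password here is a placeholder):

```
[oracle@rac1 ~]$ nid target=sys/oracle
```

nid shuts the work down against a mounted database and requires an OPEN RESETLOGS afterwards, as shown below.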
SQL> alter database open resetlogs;
Database altered.
SQL> select dbid from v$database;

      DBID
----------
3682222232
INTERNAL OPERATION OF HOT BACKUP
What Happens When A Tablespace/Database Is Kept In Begin Backup Mode
This document explains in detail about what happens when a tablespace/datafile is kept in hot backup/begin
backup mode.
To perform online/hot backup we have to put the tablespace in begin backup mode followed by copying the
datafiles and then putting the tablespace to end backup.
In 8i, 9i we have to put each tablespace individually in begin/end backup mode to perform the online backup. From
10g onwards the entire database can be put in begin/end backup mode.
Example :
One danger in making online backups is the possibility of inconsistent data within a block. For example, assume
that you are backing up block 100 in datafile users.dbf. Also, assume that the copy utility reads the entire block
while DBWR is in the middle of updating the block. In this case, the copy utility may read the old data in the top
half of the block and the new data in the bottom half of the block. The result is called a fractured block,
meaning that the data contained in this block is not consistent at a given SCN.
1. The first time a block is changed in a datafile that is in hot backup mode, the entire block is written to the redo
log files, not just the changed bytes. Normally only the changed bytes (a redo vector) is written. In hot backup
mode, the entire block is logged the first time. This is because you can get into a situation where the process
copying the datafile and DBWR are working on the same block simultaneously.
Let's say they are, and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a time). The
backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile, DBWR has asked to rewrite this
block, and the OS schedules the DBWR write to occur right now. The entire 8k block is rewritten. The backup program
starts running again (multi-tasking OS here) and reads the last 4k of the block. The backup program has now
got a fractured block -- the head and tail are from two points in time.
We cannot deal with that during recovery. Hence, we log the entire block image so that during recovery, this block
is totally rewritten from redo and is at least consistent with itself. We can recover it from there.
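The torn-read mechanics described above are easy to model outside the database. This Python sketch (illustrative only, not Oracle code) copies an 8K "block" in two 4K halves while a simulated DBWR rewrite lands in between, producing a block whose head and tail come from two points in time:

```python
# Simulate a fractured block: a backup "copy utility" reads an 8K block in
# two 4K pieces, and a simulated DBWR rewrites the whole block in between.
BLOCK = 8192
HALF = BLOCK // 2

def make_block(version: int) -> bytes:
    # A self-consistent block: every byte carries the same version marker.
    return bytes([version]) * BLOCK

datafile = bytearray(make_block(1))     # block as of version 1

copied = bytes(datafile[:HALF])         # backup reads the first 4K...
datafile[:] = make_block(2)             # ...DBWR rewrites the whole block...
copied += bytes(datafile[HALF:])        # ...backup reads the last 4K

# Head and tail now disagree: the copy is fractured.
print(copied[0], copied[-1])            # prints: 1 2
print(copied[0] == copied[-1])          # prints: False
```

This is why the first change to a block in backup mode logs the whole block image: redo can then rewrite the fractured copy in full during recovery.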
2. The datafile headers which contain the SCN of the last completed checkpoint are not updated while a file is in
hot backup mode. This lets the recovery process understand what archive redo log files might be needed to fully
recover this file.
To limit the effect of this additional logging, you should ensure you only place one tablespace at a time in backup
mode and bring the tablespace out of backup mode as soon as you have backed it up. This will reduce the number
of blocks that may have to be logged to the minimum possible.
Try to take the hot/online backups when there is less / no load on the database, so that less redo will be
generated.
v$ASM view, Automatic Storage Management views
The following v$ASM views describe the structure and components of ASM:
v$ASM_ALIAS
This view displays all system and user-defined aliases. There is one row for every alias present in every diskgroup
mounted by the ASM instance. The RDBMS instance displays no rows in this view.
V$ASM_ATTRIBUTE
This Oracle Database 11g view displays one row for each ASM attribute defined. These attributes are listed when
they are defined in CREATE DISKGROUP or ALTER DISKGROUP statements. DISK_REPAIR_TIMER is an example of
an attribute.
V$ASM_CLIENT
This view displays one row for each RDBMS instance that has an opened ASM diskgroup.
V$ASM_DISK
This view contains specifics about all disks discovered by the ASM instance, including mount status, disk state, and
size. There is one row for every disk discovered by the ASM instance.
V$ASM_DISK_IOSTAT
This displays information about disk I/O statistics for each ASM Client. If this view is queried from the database
instance, only the rows for that instance are shown.
V$ASM_DISK_STAT
This view contains similar content as the v$ASM_DISK, except v$ASM_DISK_STAT reads disk information from
cache and thus performs no disk discovery. Thsi view is primarily used form quick acces to the disk information
without the overhead of disk discovery.
V$ASM_DISKGROUP
This view displays one row for every ASM diskgroup discovered by the ASM instance on the node.
V$ASM_DISKGROUP_STAT
This view contains similar contents to V$ASM_DISKGROUP, except that V$ASM_DISKGROUP_STAT reads
disk information from the cache and thus performs no disk discovery. This view is primarily used for quick access to
the diskgroup information without the overhead of disk discovery.
V$ASM_FILE
This view displays information about ASM files. There is one row for every ASM file in every diskgroup mounted by
the ASM instance. In an RDBMS instance, V$ASM_FILE displays no rows.
V$ASM_OPERATION
This view describes the progress of an in-progress ASM rebalance operation. In an RDBMS instance, V$ASM_OPERATION
displays no rows.
V$ASM_TEMPLATE
This view contains information on user and system-defined templates. V$ASM_TEMPLATE displays one row for
every template present in every diskgroup mounted by the ASM instance. In an RDBMS instance, V$ASM_TEMPLATE
displays one row for every template present in every diskgroup mounted by the ASM instance with which the
RDBMS instance communicates.
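As a quick illustration, a typical query against one of these views, run in the ASM instance (a standard query, not from the original post):

```
SQL> select group_number, name, state, type, total_mb, free_mb
       from v$asm_diskgroup;
```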
That's it.
oracle DBA Tips (PART-II)
V$ALERT_TYPES provides information such as group and type for each alert.
V$METRICNAME contains the names, identifiers, and other information about the
system metrics.
DBA_DDL_LOCKS       Lists all DDL locks held in the database and all outstanding requests for a DDL lock
DBA_DML_LOCKS       Lists all DML locks held in the database and all outstanding requests for a DML lock
DBA_LOCK            Lists all locks or latches held in the database and all outstanding requests for a lock or latch
DBA_LOCK_INTERNAL   Displays a row for each lock or latch that is being held, and one row for each outstanding request for a lock or latch
v$sess_io
v$session_longops
v$session_wait
v$sysstat
v$resource_limit
v$sqlarea
v$latch
Checkpoint information
V$PARAMETER Displays the names of control files as specified in the CONTROL_FILES initialization parameter
Redo entries record data that you can use to reconstruct all changes made to the
database, including the undo segments. Therefore, the redo log also protects rollback
data. When you recover the database using redo data, the database reads the change
vectors in the redo records and applies the changes to the relevant blocks.
Oracle Database assigns each redo log file a new log sequence number every time a
log switch occurs and LGWR begins writing to it. When the database archives redo log
files, the archived log retains its log sequence number. A redo log file that is cycled
back for use is given the next available log sequence number.
Oracle Database uses the checksum to detect corruption in a redo log block. The
database verifies the redo log block when the block is read from an archived log
during recovery and when it writes the block to an archive log file. An error is raised
and written to the alert log if corruption is detected.
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the
statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them. The cleared
redo logs are available for use even though they were not archived.
If you clear a log file that is needed for recovery of a backup, then you can no longer
recover from that backup. The database writes a message in the alert log describing the
backups from which you cannot recover.
Note:
If you clear an unarchived redo log file, you should make
another backup of the database.
If you want to clear an unarchived redo log that is needed to bring an offline
tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER
DATABASE CLEAR LOGFILE statement.
Get information about the history of a database using the LogMiner utility
note:When you use manual archiving mode, you cannot specify any standby databases in
the archiving destinations.
Several combinations of these characteristics are possible. To obtain the current status
and other information about each destination for an instance, query the
V$ARCHIVE_DEST view.
DEFER indicates that the location is temporarily disabled.
The availability state of the destination is DEFER, unless there is a failure of its parent destination, in which case its
state becomes ENABLE.
V$LOG   Displays all redo log groups for the database and indicates which need to be archived.
44.Bigfile Tablespaces
A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks)
datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles,
but the files cannot be as large. The benefits of bigfile tablespaces are the following:
A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile
tablespace with 32K blocks can contain a 128 terabyte datafile. The maximum
number of datafiles in an Oracle Database is limited (usually to 64K files).
Therefore, bigfile tablespaces can significantly enhance the storage capacity of an
Oracle Database.
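The 32 TB and 128 TB figures quoted above follow directly from block size multiplied by the 4G-block (2**32) per-file limit, as this small Python check shows:

```python
# Maximum bigfile datafile size = block size * 4G blocks (2**32 blocks).
def max_bigfile_bytes(block_size: int) -> int:
    return block_size * 2**32

TB = 2**40
print(max_bigfile_bytes(8 * 1024) // TB)    # prints: 32   (8K blocks  -> 32 TB)
print(max_bigfile_bytes(32 * 1024) // TB)   # prints: 128  (32K blocks -> 128 TB)
```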
RESIZE: The RESIZE clause lets you resize the single datafile in a bigfile
tablespace to an absolute size, without referring to the datafile. For example:
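The statement that belonged after "For example" was lost from this copy; a typical RESIZE of a bigfile tablespace (the tablespace name is a placeholder) is:

```
ALTER TABLESPACE bigtbs RESIZE 80G;
```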
An error is raised if you specify an ADD DATAFILE clause for a bigfile tablespace.
USER_TABLESPACES
V$TABLESPACE
47.Temporary Tablespaces
You can view the allocation and deallocation of space in a temporary tablespace sort
segment using the V$SORT_SEGMENT view. The V$TEMPSEG_USAGE view identifies
the current sort users in those segments.
You also use different views for viewing information about tempfiles than you would
for datafiles. The V$TEMPFILE and DBA_TEMP_FILES views are analogous to the
V$DATAFILE and DBA_DATA_FILES views.
A tablespace group shares the namespace of tablespaces, so its name cannot be the same as any
tablespace.
You can specify a tablespace group name wherever a tablespace name would
appear when you assign a default temporary tablespace for the database or a
temporary tablespace for a user.
You do not explicitly create a tablespace group. Rather, it is created implicitly when
you assign the first temporary tablespace to the group. The group is deleted when the
last temporary tablespace it contains is removed from it.
Now group2 contains both lmtemp and lmtemp2, while group1 consists of only
lmtemp3.
You can remove a tablespace from a group as shown in the following statement:
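The statement referred to here is missing from this copy; removing a tablespace from its group is done by assigning it to the empty group:

```
ALTER TABLESPACE lmtemp3 TABLESPACE GROUP '';
```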
Tablespace lmtemp3 no longer belongs to any group. Further, since there are no longer
any members of group1, this results in the implicit deletion of group1.
2.You can determine the current default tablespace type for the database by querying the
DATABASE_PROPERTIES data dictionary view as follows:
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE';
3.To view the time zone names in the file being used by your database, use the following
query:
SELECT * FROM V$TIMEZONE_NAMES;
4.You can cancel FORCE LOGGING mode using the following SQL statement:
ALTER DATABASE NO FORCE LOGGING;
8.Bigfile tablespaces can contain only one file, but that file can have up to 4G blocks. The maximum number of
datafiles in an Oracle Database is limited (usually to 64K files).
block sizes can have any of the following power-of-two values: 2K, 4K, 8K, 16K or 32K.
12.All SGA components allocate and deallocate space in units of granules. Oracle Database tracks SGA memory use
in internal numbers of granules for each SGA component.
14.An optional COMMENT clause lets you associate a text string with the parameter
update. When you specify SCOPE as SPFILE or BOTH, the comment is written to the
server parameter file.
example:
ALTER SYSTEM
SET LOG_ARCHIVE_DEST_4='LOCATION=/u02/oracle/rbdb1/ MANDATORY REOPEN=2'
COMMENT='Add new destination on Nov 29'
SCOPE=SPFILE;
DBA_SERVICES
ALL_SERVICES or V$SERVICES
V$ACTIVE_SERVICES
V$SERVICE_STATS
V$SERVICE_EVENTS
V$SERVICE_WAIT_CLASSES
V$SERV_MOD_ACT_STATS
V$SERVICE_METRICS
V$SERVICE_METRICS_HISTORY
The following additional views also contain some information about services:
V$SESSION
V$ACTIVE_SESSION_HISTORY
DBA_RSRC_GROUP_MAPPINGS
DBA_SCHEDULER_JOB_CLASSES
DBA_THRESHOLDS
DATABASE_PROPERTIES
GLOBAL_NAME
V$DATABASE
20. You can determine the sessions that are blocking the quiesce operation by querying the V$BLOCKING_QUIESCE
view:
ACTIVE_STATE
------------
ACTIVE
Cause
One or more obsolete and/or deprecated parameters were specified in the SPFILE or the PFILE on the server side.
Action
See the alert log for a list of parameters that are obsolete or deprecated. Remove them from the SPFILE or the server
side PFILE.
So somebody, somewhere has put obsolete and/or deprecated parameter(s) in my initDB.ora file. To find out which,
you could issue the following statement from SQL*Plus to find the sinner.
Or if you are the one who has made the changes to initDB.ora, you might know which one. In my case somebody
had been messing around with the parameter log_archive_start.
In order to remove this, you should create a pfile from the spfile, remove the parameter, and recreate the spfile. That's the way to do it.
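A minimal sketch of that round trip (the /tmp path is a placeholder):

```
SQL> create pfile='/tmp/initDB.ora' from spfile;
-- edit /tmp/initDB.ora and delete the log_archive_start line, then:
SQL> create spfile from pfile='/tmp/initDB.ora';
```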
SQL> startup
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.
System altered.
SQL>
BDUMP, UDUMP, ALERT LOG FILES IN ORACLE 11G
The 11g New Features Guide notes important OFA changes, namely the removal of $ORACLE_HOME as an anchor
for diagnostic and alert files:
"The database installation process has been redesigned to be based on the ORACLE_BASE environment variable.
Until now, setting this variable has been optional and the only required variable has been ORACLE_HOME.
With this feature, ORACLE_BASE is the only required input, and the ORACLE_HOME setting will be derived from
ORACLE_BASE."
New in Oracle 11g we see the new ADR (Automatic Diagnostic Repository) and Incident Packaging System, all
designed to allow quick access to alert and diagnostic information.
The new $ADR_HOME directory is located by default at $ORACLE_BASE/diag, with the directories for each instance
at $ORACLE_BASE/diag/rdbms/$DB_NAME/$ORACLE_SID, at the same level as the traditional bdump, udump and cdump directories,
and the initialization parameters background_dump_dest and user_dump_dest are deprecated in 11g.
You can use the new initialization parameter diagnostic_dest to specify an alternative location for the diag directory
contents.
alert - A new alert directory for the plain text and XML versions of the alert log.
trace - A replacement for the ancient background dump (bdump) and user dump (udump) destinations.
cdump - The old core dump directory retains its 10g name.
Oracle now writes two alert logs, the traditional alert log in plain text plus a new XML formatted alert.log which is
named as log.xml.
"Prior to Oracle 11g, the alert log resided in $ORACLE_HOME/admin/$ORACLE_SID/bdump directory, but it now
resides in the $ORACLE_BASE/diag/$ORACLE_SID directory".
Fortunately, you can re-set it to the 10g and previous location by specifying the BDUMP location for the
diagnostic_dest parameter.
But best of all, you no longer require server access to see your alert log since it is now accessible via standard SQL
using the new v$diag_info view:
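For example, the diagnostic locations can be read straight from SQL (the 'Diag Trace' entry holds the text alert log directory, 'Diag Alert' the XML one):

```
SQL> select name, value from v$diag_info
      where name in ('Diag Trace', 'Diag Alert');
```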
For complete details, see MetaLink Note:438148.1 - "Finding alert.log file in 11g".
In this example the database name is test and the instance names are test1 and test2.
step 1:
System altered.
System altered.
step 2:
set the LOG_ARCHIVE_DEST_1 parameter. Since this parameter will be identical for all nodes, we will use
sid='*'. However, you may need to modify this for your situation if the directories are different on each node.
System altered.
step 3:
System altered.
Note that we illustrate the command for backward compatibility purposes, but from Oracle Database 10g onwards the
parameter is actually deprecated. Automatic archiving is enabled by default whenever an Oracle database is
placed in archivelog mode.
step 4:
Set CLUSTER_DATABASE to FALSE for the local instance, which you will then mount to put the database into
archivelog mode. By having CLUSTER_DATABASE=FALSE, the subsequent shutdown and startup mount will actually
do a Mount Exclusive by default, which is necessary to put the database in archivelog mode, and also to enable the
flashback database feature:
System altered.
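The ALTER SYSTEM statements themselves did not survive in this copy ("System altered." is all that remains). Based on the step descriptions they would be along these lines; the archive destination path is taken from the earlier listings, the rest is an assumption:

```
-- step 2
SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/archive' scope=spfile sid='*';
-- step 3 (deprecated from 10g onwards)
SQL> alter system set log_archive_start=true scope=spfile sid='*';
-- step 4
SQL> alter system set cluster_database=false scope=spfile sid='test1';
```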
step 5;
Shut down all instances. Ensure that all instances are shut down cleanly:
step 6:
Mount the database from instance test1 (where CLUSTER_DATABASE was set to FALSE) and then put the database
into archivelog mode.
Database altered.
NOTE:
If you did not shut down all instances cleanly in step 5,
putting the database in archivelog mode will fail
with an ORA-00265 error.
step 7:
Confirm that the database is in archivelog mode, with the appropriate parameters, by issuing the ARCHIVE LOG
LIST command:
step 8
Confirm the location of the RECOVERY_FILE_DEST via a SHOW PARAMETER.
Step 9:
Once the database is in archivelog mode, you can enable flashback while the database is still mounted in Exclusive
mode (CLUSTER_DATABASE=FALSE).
Database altered.
Step 10:
Confirm that Flashback is enabled and verify the retention target:
SQL> select flashback_on, current_scn from v$database;

FLASHBACK_ON       CURRENT_SCN
------------------ -----------
YES                          0
step 11:
Reset the CLUSTER_DATABASE parameter back to TRUE for all instances:
System altered.
step 12:
shutdown the instance and then restart all cluster database instances.
All instances will now be archiving their redo threads.
Database dismounted.
ORACLE instance shut down.
on test1 instance:
on test2 instance:
There are four ways to convert a single-instance database to RAC:
Grid Control
DBCA
Manual
RCONFIG(from 10gR2)
Here is an example of converting a single-instance ASM database to a RAC database using rconfig.
For converting a normal (file system) single-instance database to RAC, first convert
the non-ASM files to ASM files using the steps shown in the link:
http://oracleinstance.blogspot.com/2009/12/migrate-from-database-file-system-to.html
Go to $ORACLE_HOME/assistants/rconfig/sampleXMLs;
there you can find the ConvertToRAC.xml file.
cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml /u01/convertdb.xml
The following illustrates how to convert a single-instance database to RAC using the RCONFIG tool:
The Convert verify option in the ConvertToRAC.xml file has three options:
Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to RAC
conversion have been met before it starts conversion
Convert verify="NO": rconfig does not perform prerequisite checks, and starts conversion
Convert verify="ONLY" : rconfig only performs prerequisite checks; it does not start conversion after completing
prerequisite checks
Modify the convertdb.xml file according to your environment. The following is an example:
        <n:Password>oracle</n:Password>
        <n:Role>sysdba</n:Role>
      </n:Credentials>
    </n:SourceDBInfo>
    <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage -->
    <n:ASMInfo SID="+ASM1">                                   <!-- your ASM instance name -->
      <n:Credentials>
        <n:User>sys</n:User>
        <n:Password>oracle</n:Password>                       <!-- your ASM instance password -->
        <n:Role>sysdba</n:Role>
      </n:Credentials>
    </n:ASMInfo>
    <!-- Specify the list of nodes that should have rac instances running.
         LocalNode should be the first node in this nodelist. -->
    <n:NodeList>
      <n:Node name="rac1"/>                                   <!-- your rac1 hostname -->
      <n:Node name="rac2"/>                                   <!-- your rac2 hostname -->
    </n:NodeList>
    <!-- Specify prefix for rac instances. It can be same as the instance name for the
         non-rac database or different. The instance number will be attached to this prefix. -->
    <n:InstancePrefix>test</n:InstancePrefix>                 <!-- your database name -->
    <!-- Specify port for the listener to be configured for rac database. If port="",
         a listener existing on localhost will be used for rac database. The listener
         will be extended to all nodes in the nodelist. -->
    <n:Listener port="1551"/>                                 <!-- listener port number -->
    <!-- Specify the type of storage to be used by rac database. Allowable values are
         CFS|ASM. The non-rac database should have same storage type. -->
    <n:SharedStorage type="ASM">                              <!-- your storage type -->
      <!-- Specify Database Area Location to be configured for rac database. If this field
           is left empty, current storage will be used for rac database. For CFS, this
           field will have directory path. -->
      <n:TargetDatabaseArea></n:TargetDatabaseArea>           <!-- leave blank -->
      <!-- Specify Flash Recovery Area to be configured for rac database. If this field is
           left empty, current recovery area of non-rac database will be configured for
           rac database. If current database is not using recovery area, the resulting
           rac database will not have a recovery area. -->
      <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea> <!-- leave blank -->
    </n:SharedStorage>
  </n:Convert>
</n:ConvertToRAC>
</n:RConfig>
---------------------------------------------------------------------------------------------------------------------------------------
Once you modify the convertdb.xml file according to your environment, use the following command to run the tool:
rconfig /u01/convertdb.xml
Finally, change the SID in /etc/oratab to test1 on the rac1 machine and test2 on the rac2 machine.
That's it.
Then check:
srvctl config database -d test
srvctl status database -d test
crs_stat -t
DBAs wanting to create a 10g Real Applications Cluster face many configuration decisions. One of the more
potentially confusing decisions involves the choice of filesystems. Gone are the days when DBAs simply had to
choose between "raw" and "cooked". DBAs setting up a 10g RAC can still choose raw devices, but they also have
several filesystem options, and these options vary considerably from platform to platform. Further, some storage
options cannot be used for all the files in the RAC setup. This article gives an overview of the RAC storage options
available.
RAC Review
Let's begin by reviewing the structure of a Real Applications Cluster. Physically, a RAC consists of several nodes
(servers), connected to each other by a private interconnect. The database files are kept on a shared storage
subsystem, where they're accessible to all nodes. And each node has a public network connection.
In terms of software and configuration, the RAC has three basic components: cluster software and/or Cluster Ready
Services, database software, and a method of managing the shared storage subsystem.
The cluster software can be vendor-supplied or Oracle-supplied, depending on platform. Cluster Ready Services, or
CRS, is a new feature in 10g. Where vendor clusterware is used, CRS interacts with the vendor clusterware to
coordinate cluster membership information; without vendor clusterware, CRS, which is also known as Oracle OSD
Clusterware, provides complete cluster management.
The database software is Oracle 10g with the RAC option, of course.
Finally, the shared storage subsystem can be managed by one of the following options: raw devices; Automatic
Storage Management (ASM); Vendor-supplied cluster file system (CFS), Oracle Cluster File System (OCFS), or
vendor-supplied logical volume manager (LVM); or Networked File System (NFS) on a certified Network Attached
Storage (NAS) device.
Storage Options
ASM
CFS
OCFS
LVM
NFS
Before I delve into each of these storage options, a word about file types. A regular single-instance database has
three basic types of files: database software and dump files; datafiles, spfile, control files and log files, often
referred to as "database files"; and it may have recovery files, if using RMAN. A RAC database has an additional
type of file referred to as "CRS files". These consist of the Oracle Cluster Registry (OCR) and the voting disk.
Not all of these files have to be on the shared storage subsystem. The database files and CRS files must be
accessible to all instances, so must be on the shared storage subsystem. The database software can be on the
shared subsystem and shared between nodes; or each node can have its own ORACLE_HOME. The flash recovery
area must be shared by all instances, if used.
Some storage options can't handle all of these file types. To take an obvious example, the database software and
dump files can't be stored on raw devices. This isn't important for the dump files, but it does mean that choosing
raw devices precludes having a shared ORACLE_HOME on the shared storage device.
And to further complicate the picture, no OS platform is certified for all of the shared storage options. For example,
only Linux and SPARC Solaris are supported with NFS, and the NFS must be on a certified NAS device. The
following table spells out which platforms and file types can use each storage option.
Table 2.
Platforms and file types able to use each storage option

Storage option   Platforms                       File types supported      File types not supported
Raw              All platforms                   Database, CRS             Software/Dump files, Recovery
ASM              All platforms                   Database, Recovery        CRS, Software/Dump files
CFS              Vendor-specific                 All                       None
LVM              Vendor-specific                 All                       None
OCFS             Windows, Linux                  Database, CRS, Recovery   Software/Dump files
NFS              Linux, SPARC Solaris (on NAS)   All                       None
(Note: Mike Ault and Madhu Tumma have summarized the storage choices by platform in more detail in this
excerpt from their recent book, Oracle 10g Grid Computing with RAC, which I used as one source for this table.)
Now that we have an idea of where we can use these storage options, let's examine each option in a little more
detail. We'll tackle them in order of Oracle's recommendation, starting with Oracle's least preferred, raw devices,
and finishing up with Oracle's top recommendation, ASM.
Raw devices
Raw devices need little explanation. As with single-instance Oracle, each tablespace requires a partition. You will
also need to store your software and dump files elsewhere.
Pros: You won't need to install any vendor or Oracle-supplied clusterware or additional drivers.
Cons: You won't be able to have a shared oracle home, and if you want to configure a flash recovery area, you'll
need to choose another option for it. Manageability is an issue. Further, raw devices are a terrible choice if you
expect to resize or add tablespaces frequently, as this involves resizing or adding a partition.
NFS
NFS also requires little explanation. It must be used with a certified NAS device; Oracle has certified a number of
NAS filers with its products, including products from EMC, HP, NetApp and others. NFS on NAS can be a cost-effective alternative to a SAN for Linux and Solaris, especially if no SAN hardware is already installed.
Vendor CFS / LVM
If you're considering a vendor CFS or LVM, you'll need to check the 10g Real Application Clusters Installation Guide
for your platform and the Certify pages on MetaLink. A discussion of all the certified cluster file systems is beyond
the scope of this article. Pros and cons depend on the specific solution, but some general observations can be
made:
Pros: You can store all types of files associated with the instance on the CFS / logical volumes.
Cons: Depends on CFS / LVM. And you won't be enjoying the manageability advantage of ASM.
OCFS
OCFS is the Oracle-supplied CFS for Linux and Windows. This is the only CFS that can be used with these
platforms. The current version of OCFS was designed specifically to store RAC files, and is not a full-featured CFS.
You can store database, CRS and recovery files on it, but it doesn't fully support generic filesystem operations.
Thus, for example, you cannot install a shared ORACLE_HOME on an OCFS device.
The next version of OCFS, OCFS2, is currently out in beta version and will support generic filesystem operations,
including a shared ORACLE_HOME.
ASM
Oracle recommends ASM for 10g RAC deployments, although CRS files cannot be stored on ASM. In fact, RAC
installations using Oracle Database Standard Edition must use ASM.
ASM is a little bit like a logical volume manager and provides many of the benefits of LVMs. But it also provides
benefits LVMs don't: file-level striping/mirroring, and ease of manageability. Instead of running LVM software, you
run an ASM instance, a new type of "instance" that largely consists of processes and memory and stores its
information in the ASM disks it's managing.
Pros: File-level striping and mirroring; ease of manageability through Oracle syntax and OEM.
Cons: ASM files can only be managed through an Oracle application such as RMAN. This can be a weakness if you
prefer third-party backup software or simple backup scripts. Cannot store CRS files or database software.
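As an illustration of ASM's manageability through plain Oracle syntax, a disk group can be created and inspected from the ASM instance with ordinary SQL. This is a minimal sketch; the disk group name and disk paths are hypothetical examples:

```sql
-- run while connected to the ASM instance (e.g. ORACLE_SID=+ASM1, sqlplus / as sysdba)
-- create a normal-redundancy disk group from two candidate disks (paths are examples)
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2';

-- list disk groups with their state and space usage
SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;
```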
Convert RAC instance to SINGLE instance DATABASE
--------------------------------------------------
In this article, see how a RAC database is converted into a single-instance database.
Step 1: stop instance 2 from any node
Step 2: change the CLUSTER_DATABASE parameter
Step 3: [optional] remove the instance information from the clusterware
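Assuming the cluster database is named test with instances test1 and test2 (the names used earlier in this post), steps 1 to 3 can be sketched as:

```shell
# Step 1: stop the second instance (can be run from any node)
srvctl stop instance -d test -i test2

# Step 2: on the surviving instance, turn off cluster mode
sqlplus / as sysdba <<'EOF'
alter system set cluster_database=false scope=spfile sid='*';
EOF

# Step 3 (optional): remove instance 2's metadata from the clusterware
srvctl remove instance -d test -i test2
```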
SQL> startup
ORA-01081: cannot start already-running ORACLE - shut it down first
SQL> show parameter cluster_database
(The parameter changes complete with a series of "System altered." / "Database altered." messages, and the database is mounted.)
piece handle=+DATA/mydb/backupset/2009_12_08/nnsnf0_tag20091208t110241_0.262.705064919
tag=TAG20091208T110241 comment=NONE
RMAN> exit
Connected.
SQL> select tablespace_name, file_name from dba_data_files;

TABLESPACE_NAME                FILE_NAME
------------------------------ ---------------------------------------------
USERS                          +DATA/mydb/datafile/users.261.705064915
UNDOTBS1                       +DATA/mydb/datafile/undotbs1.259.705064821
SYSAUX                         +DATA/mydb/datafile/sysaux.258.705064283
SYSTEM                         +DATA/mydb/datafile/system.257.705063763
SQL> select name, is_recovery_dest_file, block_size, file_size_blks from v$controlfile;

NAME                                     IS_ BLOCK_SIZE FILE_SIZE_BLKS
---------------------------------------- --- ---------- --------------
+DATA/ctf1.dbf                           NO       16384            594
Tablespace altered.

SQL> select file_name from dba_temp_files;

FILE_NAME
---------------------------------------------
+DATA/mydb/tempfile/temp.263.705065455

Otherwise, create a temporary tablespace in an ASM disk group.
Database altered.
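Creating a fresh temporary tablespace in the disk group can be sketched as follows; the tablespace name and size here are illustrative, not from the original session:

```sql
-- create a temporary tablespace whose tempfile lives in the +DATA disk group
CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '+DATA' SIZE 100M;

-- make it the database default temporary tablespace
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
```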
SQL> select member, group# from v$logfile;

MEMBER                                             GROUP#
-------------------------------------------------- ----------
/u01/new/oracle/oradata/mydb/redo03.log
/u01/new/oracle/oradata/mydb/redo02.log
/u01/new/oracle/oradata/mydb/redo01.log
Database altered.
Database altered.
Database altered.
SQL> select member, group# from v$logfile;

MEMBER                                             GROUP#
-------------------------------------------------- ----------
/u01/new/oracle/oradata/mydb/redo03.log
/u01/new/oracle/oradata/mydb/redo02.log
/u01/new/oracle/oradata/mydb/redo01.log
+DATA/mydb/onlinelog/group_4.264.705065691
+DATA/mydb/onlinelog/group_5.265.705065703
+DATA/mydb/onlinelog/group_6.266.705065719
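The redo log migration into ASM can be sketched as below. Group numbers 4-6 match the file names in the listing above; the sizes and the old group numbers 1-3 are assumptions:

```sql
-- add new redo log groups stored in the +DATA disk group
ALTER DATABASE ADD LOGFILE GROUP 4 '+DATA' SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 5 '+DATA' SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 6 '+DATA' SIZE 50M;

-- switch logs until the old file-system groups are inactive, then drop them
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
```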
System altered.
Database altered.
Database altered.
Database altered.
Database altered.
Add additional control file.
Database altered.
System altered.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
SQL> select name from v$controlfile;

NAME
--------------------------------------
+DATA/cf1.dbf
+DATA/cf2.dbf
run {
BACKUP AS BACKUPSET SPFILE;
RESTORE SPFILE TO "+DISK/spfile";
}