http://oracleinstance.blogspot.in/2010/03/oracle-10g-installation-in-linux-5.html
http://samadhandba.wordpress.com/category/administration/page/2/
BACKUP AND RECOVERY SCENARIOS: Complete Recovery With RMAN Backup
In a previous post I covered complete recovery with user-managed backups; here we look at complete recovery using RMAN backups.
You can perform complete recovery in the following five situations.
RMAN recovery scenarios for complete recovery:
1. Complete Closed Database Recovery. System datafile is missing
2. Complete Open Database Recovery. Non system datafile is missing
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing
4. Recovery of a Datafile that has no backups.
5. Restore and Recovery of a Datafile to a different location.
1. Complete Closed Database Recovery. System datafile is missing
In this case a complete recovery is performed: only the system datafile is missing, so the database can be opened without resetting the redo logs. A combined RMAN sketch follows the step list below.
1. rman target /
2. startup mount;
3. restore database or datafile file#;
4. recover database or datafile file#;
5. alter database open;
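The same steps can be wrapped in a single RMAN session. This is only a minimal sketch of the commands above (the single-datafile variant with file# 1 is an illustrative assumption, not part of the workshop that follows):

rman target /
RMAN> startup mount;
RMAN> run {
2>   restore database;        # or: restore datafile 1;
2>   recover database;        # or: recover datafile 1;
2>   sql 'alter database open';
2> }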
workshop1:
SQL> create user sweety identified by sweety;

User created.

SQL> grant dba to sweety;

Grant succeeded.

SQL> shu immediate


Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> host rm -rf /u01/app/oracle/oradata/testdb/system01.dbf

SQL> startup
ORACLE instance started.

Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             130024128 bytes
Database Buffers          310378496 bytes
Redo Buffers                2973696 bytes

Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/u01/app/oracle/oradata/testdb/system01.dbf'

SQL> shutdown immediate


ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL>

[oracle@cdbs1 ~]$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Fri May 7 23:53:51 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database (not started)

RMAN> startup mount

Oracle instance started


database mounted

Total System Global Area     444596224 bytes

Fixed Size                     1219904 bytes
Variable Size                130024128 bytes
Database Buffers             310378496 bytes
Redo Buffers                   2973696 bytes

RMAN> RESTORE DATABASE;

Starting restore at 07-MAY-10


using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: starting datafile backupset restore


channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u01/app/oracle/oradata/testdb/system01.dbf
restoring datafile 00002 to /u01/app/oracle/oradata/testdb/undotbs01.dbf

restoring datafile 00003 to /u01/app/oracle/oradata/testdb/sysaux01.dbf


restoring datafile 00004 to /u01/app/oracle/oradata/testdb/users01.dbf
restoring datafile 00005 to /u03/oradata/test01.dbf
channel ORA_DISK_1: reading from backup piece
/u01/app/oracle/flash_recovery_area/TESTDB/backupset/2010_05_07/o1_mf_nnndf_TAG20100507T232259_5y8n
vxt2_.bkp
channel ORA_DISK_1: restored backup piece 1
piece
handle=/u01/app/oracle/flash_recovery_area/TESTDB/backupset/2010_05_07/o1_mf_nnndf_TAG20100507T23225
9_5y8nvxt2_.bkp tag=TAG20100507T232259
channel ORA_DISK_1: restore complete, elapsed time: 00:02:52
Finished restore at 07-MAY-10

RMAN> RECOVER DATABASE;

Starting recover at 07-MAY-10


using channel ORA_DISK_1

starting media recovery

RMAN> sql 'alter database open';

sql statement: alter database open

RMAN>
SQL> conn sys/oracle as sysdba;
Connected.
SQL> col name format a45
SQL> select name , status from v$datafile;

NAME                                          STATUS
--------------------------------------------- -------
/u01/app/oracle/oradata/testdb/system01.dbf   SYSTEM
/u01/app/oracle/oradata/testdb/undotbs01.dbf  ONLINE
/u01/app/oracle/oradata/testdb/sysaux01.dbf   ONLINE
/u01/app/oracle/oradata/testdb/users01.dbf    ONLINE
/u03/oradata/test01.dbf                       ONLINE

SQL> select username from dba_users
  2  where username='SWEETY';

USERNAME
------------------------------
SWEETY
2. Complete Open Database Recovery. Non system datafile is missing, database is up
1. rman target /
2. sql 'alter tablespace <tablespace_name> offline immediate';
   or
   sql 'alter database datafile file# offline';
3. restore datafile file#;
4. recover datafile file#;
5. sql 'alter tablespace <tablespace_name> online';
   or
   sql 'alter database datafile file# online';
workshop2:

SQL> conn sweety/sweety;

Connected.
SQL> create table demo(id number);

Table created.

SQL> insert into demo values(123);

1 row created.

SQL> commit;

Commit complete.

SQL> conn sys/oracle as sysdba;


Connected.
SQL> select username,default_tablespace from dba_users
  2  where username='SWEETY';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
SWEETY                         USERS

SQL> host rm -rf /u01/app/oracle/oradata/testdb/users01.dbf

SQL> conn sweety/sweety


Connected.
SQL> alter system flush buffer_cache;

System altered.

SQL> select * from demo;


select * from demo
*
ERROR at line 1:
ORA-01116: error in opening database file 4
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3

[oracle@cdbs1 ~]$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Sat May 8 01:35:09 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: TESTDB (DBID=2501713962)

RMAN> sql 'alter database datafile 4 offline';

using target database control file instead of recovery catalog


sql statement: alter database datafile 4 offline

RMAN> restore datafile 4;

Starting restore at 08-MAY-10


using channel ORA_DISK_1
...
channel ORA_DISK_1: restore complete, elapsed time: 00:00:09

Finished restore at 08-MAY-10

RMAN> recover datafile 4;

Starting recover at 08-MAY-10


using channel ORA_DISK_1

starting media recovery


......
media recovery complete, elapsed time: 00:00:05
Finished recover at 08-MAY-10

RMAN> sql 'alter database datafile 4 online';

sql statement: alter database datafile 4 online

RMAN>exit

SQL> conn sweety/sweety;


Connected.
SQL> select * from demo;

ID
----------
       123

SQL>

3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing
A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database opened. Restore and recovery are performed using RMAN. After recovery is performed the datafile can be brought online again.
1. sqlplus /nolog
2. connect / as sysdba
3. startup mount
4. alter database datafile '<datafile_name>' offline;
5. alter database open;
6. exit;
7. rman target /
8. restore datafile '<datafile_name>';
9. recover datafile '<datafile_name>';
10. sql 'alter tablespace <tablespace_name> online';

workshop3:

SQL> conn sweety/sweety;
Connected.

SQL> create table test ( testid number);

Table created.

SQL> insert into test values(54321);

1 row created.

SQL> commit;

Commit complete.

SQL> conn sys/oracle as sysdba;


Connected.
SQL>shu immediate
SQL> host rm -rf /u01/app/oracle/oradata/testdb/users01.dbf

SQL> startup
ORACLE instance started.

Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes

Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'

SQL> alter database datafile 4 offline;

Database altered.

SQL> alter database open;

Database altered.

[oracle@cdbs1 ~]$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Sat May 8 01:51:45 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: TESTDB (DBID=2501713962)

RMAN> restore datafile 4;

Starting restore at 08-MAY-10


using target database control file instead of recovery catalog
.....
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
Finished restore at 08-MAY-10

RMAN> recover datafile 4;

Starting recover at 08-MAY-10


using channel ORA_DISK_1

starting media recovery


.....
media recovery complete, elapsed time: 00:00:08
Finished recover at 08-MAY-10

RMAN> exit

SQL> alter database datafile 4 online;

Database altered.

SQL> conn sweety/sweety;


Connected.
SQL> select * from test;

TESTID
----------
     54321
4.Recovery of a Datafile that has no backups (database is up).
If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist. Since the database is up, you can check the tablespace name and take it offline. The OFFLINE IMMEDIATE option is used to avoid updating the datafile header.
Prerequisites: all relevant archived logs.
1. sqlplus '/ as sysdba'
2. alter tablespace <tablespace_name> offline immediate;
3. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
4. exit
5. rman target /
6. recover tablespace <tablespace_name>;
7. sql 'alter tablespace <tablespace_name> online';
If the create datafile command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
Restriction: the controlfile creation time must be earlier than the datafile creation time; the check below shows how to verify this. For more detail, refer to the previous blog post (user-managed complete recovery).
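A minimal check before relying on this technique (the nls_date_format setting is only for readable output; the views and columns are standard):

SQL> alter session set nls_date_format='DD-MON-YYYY hh24:mi:ss';
SQL> select controlfile_created from v$database;
SQL> select file#, creation_time, name from v$datafile;
-- a datafile can be re-created with ALTER DATABASE CREATE DATAFILE only if its
-- creation_time is later than controlfile_created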

workshop4:
SQL> create user john identified by john
2 default tablespace testing;

User created.

SQL> grant dba to john;

Grant succeeded.

SQL> conn john/john;


Connected.
SQL> create table test_tb( testid number);

Table created.

SQL> insert into test_tb values(1001);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test_tb;

TESTID
----------

1001

SQL> conn sys/oracle as sysdba;


Connected.
SQL> host rm -rf /u03/oradata/test01.dbf

SQL> alter system flush buffer_cache;

System altered.

SQL> conn john/john;


Connected.
SQL> select * from test_tb;
select * from test_tb
*
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u03/oradata/test01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter tablespace testing offline immediate;

Tablespace altered.
---if you want to create datafile in same location
SQL> alter database create datafile '/u03/oradata/test01.dbf';

Database altered.

---if you want to create a datafile in different location(disk).


SQL> alter database create datafile '/u03/oradata/test01.dbf' as '/u01/app/oracle/oradata/testdb/test01.dbf';

Database altered.
[oracle@cdbs1 ~]$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Sat May 8 02:15:28 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: TESTDB (DBID=2501713962)

RMAN> recover tablespace testing;

Starting recover at 08-MAY-10


using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=145 devtype=DISK

starting media recovery

SQL> alter tablespace testing online;

Tablespace altered.

SQL> conn john/john;


Connected.
SQL> select * from test_tb;

TESTID
----------
      1001
5. Restore and Recovery of a Datafile to a different location. Database is up.
If a non system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Prerequisites: all relevant archived logs, and a complete cold or hot backup.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
   cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile
   '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. rman target /
5. recover tablespace <tablespace_name>;
6. sql 'alter tablespace <tablespace_name> online';
workshop5:

Follow the same example as workshop 4 for workshop 5, except that instead of creating a new datafile you copy the recent backup file to the new disk location and then perform recovery. The rest of the procedure is the same. With RMAN you can also let RMAN restore the file to the new location, as sketched below.
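A minimal RMAN alternative for restoring a single datafile to a new location (the file number and target path are illustrative assumptions, not taken from the workshop):

rman target /
RMAN> sql 'alter database datafile 4 offline';
RMAN> run {
2>   set newname for datafile 4 to '/u03/oradata/users01.dbf';
2>   restore datafile 4;
2>   switch datafile 4;    # records the new location in the controlfile
2>   recover datafile 4;
2> }
RMAN> sql 'alter database datafile 4 online';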
BACKUP AND RECOVERY SCENARIOS: Complete Recovery With User-managed Backup
You can perform complete recovery in the following five situations.

User-managed recovery scenarios for complete recovery:


1. Complete Closed Database Recovery. System datafile is missing(with recent backups)
2. Complete Open Database Recovery. Non system datafile is missing(with backups).
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing(with
backups)
4. Recovery of a Missing Datafile that has no backups.(Disk corrupted and no backups available)
restriction: the datafile must have been created after the controlfile (i.e., the controlfile creation time is earlier than the datafile creation time).

You cannot recover or re-create a datafile without a backup in the following situation:
SQL> select CONTROLFILE_CREATED from v$database;
CONTROLFILE_CREATED
--------------------
07-MAY-2010 01:23:43
SQL> select creation_time,name from v$datafile;
CREATION_TIME        NAME
-------------------- ---------------------------------------------
30-JUN-2005 19:10:11 /u01/app/oracle/oradata/testdb/system01.dbf
30-JUN-2005 19:55:01 /u01/app/oracle/oradata/testdb/undotbs01.dbf
30-JUN-2005 19:10:27 /u01/app/oracle/oradata/testdb/sysaux01.dbf
30-JUN-2005 19:10:40 /u01/app/oracle/oradata/testdb/users01.dbf

5. Restore and Recovery of a Datafile to a different location.(Disk corrupted having recent backup and recover the
datafile in new Disk location).

User Managed Recovery Scenarios


User managed recovery scenarios require that the database is in archivelog mode and that backups of all datafiles and control files are made with the tablespaces placed in BEGIN BACKUP mode if the database is open while the copy is made. At the end of the copy of each tablespace it is necessary to take it out of backup mode. Alternatively, complete backups can be made with the database shut down. Online redo logs can optionally be backed up. A minimal hot-backup sketch follows the list of files below.
Files to be copied:
select name from v$datafile;
select member from v$logfile; # optional
select name from v$controlfile;
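A minimal user-managed hot backup sketch for a single tablespace (the tablespace name and backup directory are assumptions used only for illustration):

SQL> alter tablespace users begin backup;
SQL> host cp -p /u01/app/oracle/oradata/testdb/users01.dbf /u01/app/oracle/oradata/backup/users01.dbf
SQL> alter tablespace users end backup;
SQL> alter database backup controlfile to '/u01/app/oracle/oradata/backup/control01.ctl';
SQL> alter system archive log current;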
1.Complete Closed Database Recovery. System tablespace is missing
If the system tablespace is missing or corrupted the database cannot be started up
so a complete closed database recovery must be performed.
Prerequisites: a closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location from recent
backup, ie:

cp -p /user/backup/uman/system01.dbf /user/oradata/u01/dbtst/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;
workshop1: system datafile recovery with recent backup

SQL> create user rajesh identified by rajesh;
User created.
SQL> grant dba to rajesh;
Grant succeeded.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
i manually deleted the datafile system01.dbf for testing purpose only
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes

Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/u01/app/oracle/oradata/testdb/system01.dbf'

SQL> shutdown immediate

ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> host cp /u01/app/oracle/oradata/backup/system01.dbf /u01/app/oracle/oradata/testdb/system01.dbf
system datafile restored from recent backup

SQL*Plus: Release 10.2.0.1.0 - Production on Fri May 7 12:51:16 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Enter user-name: sys as sysdba


Enter password:
Connected to an idle instance.

SQL> startup mount


ORACLE instance started.

Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes

Database mounted.
SQL> recover datafile 1;
ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc

ORA-00280: change 454383 for thread 1 is in sequence #7

Specify log: {=suggested | filename | AUTO | CANCEL}


auto
ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
.
.
.
ORA-00279: change 456039 generated at 05/07/2010 12:46:22 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_11_%u_.arc
ORA-00280: change 456039 for thread 1 is in sequence #11
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_10_5y7hl7dr_.arc' no longer
needed for this recovery

Log applied.
Media recovery complete.
SQL> alter database open;

Database altered.

SQL> archive log list;

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     12
Next log sequence to archive   14
Current log sequence           14

SQL> select username from dba_users
  2  where username='RAJESH';

USERNAME
------------------------------
RAJESH

2. Complete Open Database Recovery. Non system tablespace is missing
If a non system tablespace is missing or corrupted while the database is open, recovery can be performed while the database remains open.
Prerequisites: a closed or open database backup and archived logs.


1. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf

2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;

workshop2: Non-system datafile recovery from recent backup when database is open
SQL> ALTER USER rajesh DEFAULT TABLESPACE users;

User altered.

SQL> conn rajesh/rajesh;

Connected.
SQL> create table demo(id number);

Table created.

SQL> insert into demo values(123);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from demo;

ID
----------
       123

SQL> conn sys/oracle as sysdba;


Connected.
SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> archive log list;


Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16

i manually deleted the datafile users01.dbf for testing purpose only


SQL> conn rajesh/rajesh;
Connected.
SQL> alter system flush buffer_cache;

System altered.

SQL> select * from demo;


select * from demo
*
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'

SQL> conn sys/oracle as sysdba;


Connected.
SQL> host cp -p /u01/app/oracle/oradata/backup/users01.dbf /u01/app/oracle/oradata/testdb/users01.dbf
restore the users01.dbf datafile from recent backup to the testdb folder

SQL> alter tablespace users offline immediate;

Tablespace altered.

SQL> recover tablespace users;


ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc
ORA-00280: change 454383 for thread 1 is in sequence #7

Specify log: {=suggested | filename | AUTO | CANCEL}


auto
ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
.....
......
ORA-00279: change 456044 generated at 05/07/2010 12:46:28 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_13_%u_.arc
ORA-00280: change 456044 for thread 1 is in sequence #13
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_12_5y7hldl2_.arc' no longer
needed for this recovery

Log applied.
Media recovery complete.
SQL> alter tablespace users online;

Tablespace altered.

SQL> conn rajesh/rajesh;


Connected.
SQL> select * from demo;

ID
----------
       123
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing
If a non system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Prerequisites: a closed or open database backup and archived logs.
1. startup; (you will get ORA-1157 and ORA-1110 with the name of the missing datafile; the database will remain mounted)
2. alter database datafile 3 offline; (the tablespace cannot be taken offline by name because the database is not open)
3. alter database open;
4. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
   cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
5. recover datafile 3;
6. alter tablespace <tablespace_name> online;

workshop 3:Non system datafile is missing

SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter system switch logfile;

System altered.

SQL> select username,default_tablespace from dba_users
  2  where username='RAJESH';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
RAJESH                         USERS

SQL> conn rajesh/rajesh;


Connected.
SQL> create table testtbl (id number);

Table created.

SQL> insert into testtbl values(786);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from testtbl;

ID
----------
       786

SQL> conn sys/oracle as sysdba;

Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> --manually deleting the users01.dbf datafile from testdb folder
warning:for testing purpose only
SQL> host rm -rf /u01/app/oracle/oradata/testdb/users01.dbf

SQL> startup
ORACLE instance started.

Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             142607040 bytes
Database Buffers          297795584 bytes
Redo Buffers                2973696 bytes

Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'

SQL> alter database datafile 4 offline;

Database altered.

SQL> alter database open;

Database altered.

SQL> host cp -p /u01/app/oracle/oradata/backup/users01.dbf /u01/app/oracle/oradata/testdb/users01.dbf


copying user01.dbf from the recent backup to the testdb folder
SQL> recover datafile 4;
ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc
ORA-00280: change 454383 for thread 1 is in sequence #7

Specify log: { =suggested | filename | AUTO | CANCEL} auto


ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
......
.........
ORA-00279: change 456046 generated at 05/07/2010 12:46:29 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_14_%u_.arc
ORA-00280: change 456046 for thread 1 is in sequence #14
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_13_5y7hlfbc_.arc' no longer
needed for this recovery

Log applied.
Media recovery complete.

SQL> alter database datafile 4 online;

Database altered.

SQL> conn rajesh/rajesh;


Connected.
SQL> select * from testtbl;

ID
----------
       786
4. Recovery of a Missing Datafile that has no backups (database is open).
If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.
Prerequisites: all relevant archived logs.
1. alter tablespace <tablespace_name> offline immediate;
2. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;

If the create datafile command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
Restriction: the datafile must have been created after the controlfile (i.e., the controlfile creation time is earlier than the datafile creation time).
workshop 4: Missing Non-system Datafile having no backups

SQL> alter session set nls_date_format='DD-MON-YYYY hh24:mi:ss';

Session altered.

SQL> select controlfile_created from v$database;

CONTROLFILE_CREATED
--------------------
07-MAY-2010 16:27:22

SQL> col name format a45


SQL> select creation_time,name from v$datafile;

CREATION_TIME        NAME
-------------------- ---------------------------------------------
30-JUN-2005 19:10:11 /u01/app/oracle/oradata/testdb/system01.dbf
30-JUN-2005 19:55:01 /u01/app/oracle/oradata/testdb/undotbs01.dbf
30-JUN-2005 19:10:27 /u01/app/oracle/oradata/testdb/sysaux01.dbf
30-JUN-2005 19:10:40 /u01/app/oracle/oradata/testdb/users01.dbf
You cannot re-create any of the datafiles listed above without a backup.
SQL> create tablespace testing datafile
2 '/u01/app/oracle/oradata/testdb/test01.dbf' size 2m;

Tablespace created.

SQL> select creation_time,name from v$datafile;

CREATION_TIME        NAME
-------------------- ---------------------------------------------
30-JUN-2005 19:10:11 /u01/app/oracle/oradata/testdb/system01.dbf
30-JUN-2005 19:55:01 /u01/app/oracle/oradata/testdb/undotbs01.dbf
30-JUN-2005 19:10:27 /u01/app/oracle/oradata/testdb/sysaux01.dbf
30-JUN-2005 19:10:40 /u01/app/oracle/oradata/testdb/users01.dbf
07-MAY-2010 16:32:07 /u01/app/oracle/oradata/testdb/test01.dbf
we can re-create test01.dbf file without backup.
SQL> select controlfile_created from v$database;

CONTROLFILE_CREATED
--------------------
07-MAY-2010 16:27:22

---we can recover the datafile test01.dbf without a backup, using the create datafile command during recovery.
---in this example I am going to create a table in the testing tablespace, delete the test01.dbf datafile, and then recover it without a backup using the create datafile recovery command.

SQL> create user jay identified by jay


2 default tablespace testing;

User created.

SQL> grant dba to jay;

Grant succeeded.

SQL> select username,default_tablespace from dba_users
  2  where username='JAY';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
JAY                            TESTING

SQL> conn jay/jay;


Connected.
SQL> create table demo (id number);

Table created.

SQL> insert into demo values(321);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from demo;

ID
----------
       321

SQL> conn sys/oracle as sysdba;

Connected.
SQL> host rm -rf /u01/app/oracle/oradata/testdb/test01.dbf
---manually deleting datafile test01.dbf for testing purpose

SQL> conn jay/jay;


Connected.
SQL> select * from demo;

ID
----------
       321

SQL> alter system flush buffer_cache;

System altered.

SQL> select * from demo;


select * from demo
*
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/app/oracle/oradata/testdb/test01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3

SQL> alter database datafile 5 offline;

Database altered.
----TO CREATE A NEW RECOVERED DATAFILE IN SAME LOCATION.
SQL> alter database create datafile '/u01/app/oracle/oradata/testdb/test01.dbf';
Database altered.
----TO CREATE A NEW RECOVERED DATAFILE IN DIFFERENT LOCATION.
SQL> alter database create datafile '/u01/app/oracle/oradata/testdb/test01.dbf' as '/u03/oradata/test01.dbf';

Database altered.

SQL> recover datafile 5;


ORA-00279: change 454443 generated at 05/07/2010 16:32:07 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 454443 for thread 1 is in sequence #8

Specify log: {=suggested | filename | AUTO | CANCEL}


auto
ORA-00279: change 454869 generated at 05/07/2010 16:41:38 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_9_%u_.arc
ORA-00280: change 454869 for thread 1 is in sequence #9
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_5y7xcbrm_.arc' no longer
needed for this recovery
.....
.......
ORA-00279: change 454874 generated at 05/07/2010 16:41:45 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_11_%u_.arc

ORA-00280: change 454874 for thread 1 is in sequence #11


ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_10_5y7xck8j_.arc' no longer
needed for this recovery

Log applied.
Media recovery complete.
SQL> alter database datafile 5 online;

Database altered.

SQL> conn jay/jay;


Connected.
SQL> select * from demo;

ID
----------
       321

SQL>

5. Restore and Recovery of a Datafile to a different location.
If a non system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Prerequisites: all relevant archived logs.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
   cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile
   '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. recover tablespace <tablespace_name>;
5. alter tablespace <tablespace_name> online;
workshop 5:
SQL> create user lachu identified by lachu
2 default tablespace users;

User created.

SQL> grant dba to lachu;

Grant succeeded.

SQL> conn lachu/lachu;


Connected.
SQL> create table test_tb(id number);

Table created.

SQL> insert into test_tb values(123);

1 row created.

SQL> commit;

Commit complete.

SQL> conn sys/oracle as sysdba;

Connected.
SQL> ---manually deleting users01.dbf datafile for testing purpose
SQL> host rm -rf '/u01/app/oracle/oradata/testdb/users01.dbf'

SQL> conn lachu/lachu;


Connected.
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
TEST_TB                        TABLE

SQL> select * from test_tb;


select * from test_tb
*
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'

SQL> conn sys/oracle as sysdba;


Connected.
SQL> alter database datafile 4 offline;

Database altered.

SQL> host cp -p /u01/app/oracle/oradata/backup/users01.dbf /u03/oradata/users01.dbf


--restore datafile user01.dbf to new disk from the recent backup of the database.

SQL> alter tablespace users rename datafile

2 '/u01/app/oracle/oradata/testdb/users01.dbf' to '/u03/oradata/users01.dbf';
Tablespace altered.

SQL> recover datafile 4;


ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc
ORA-00280: change 454383 for thread 1 is in sequence #7

Specify log: {=suggested | filename | AUTO | CANCEL}


auto
ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
....
......
ORA-00279: change 457480 generated at 05/07/2010 13:09:30 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_15_%u_.arc
ORA-00280: change 457480 for thread 1 is in sequence #15
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_14_5y7jxlvg_.arc' no longer
needed for this recovery

Log applied.

Media recovery complete.


SQL> alter database datafile 4 online;

Database altered.

SQL> select name from v$datafile;

NAME
---------------------------------------------
/u01/app/oracle/oradata/testdb/system01.dbf
/u01/app/oracle/oradata/testdb/undotbs01.dbf
/u01/app/oracle/oradata/testdb/sysaux01.dbf
/u03/oradata/users01.dbf   ----------restored in new location (disk)

SQL> conn lachu/lachu;


Connected.
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
TEST_TB                        TABLE

SQL> select * from test_tb;

ID
----------
       123
Block media recovery recovers an individual corrupt datablock or set of datablocks within a datafile. In cases when
a small number of blocks require media recovery, you can selectively restore and recover damaged blocks rather
than whole datafiles.
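When RMAN backups do exist, corrupt blocks can also be located and repaired directly. A minimal sketch (the file and block numbers are illustrative assumptions; the view and commands are standard RMAN/SQL):

SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;
RMAN> blockrecover datafile 4 block 68;
RMAN> blockrecover corruption list;   # repairs every block listed in v$database_block_corruption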

More interestingly, it is possible to perform Block Media Recovery with only OS-based hot backups and NO RMAN backups.
Look at the following demonstration. Here:
1. Create a new user antony and a table corrupt_test in that schema.
2. Take OS backup (hot backup) of the users01.dbf where the table resides
3. Corrupt the data in that table and get block corruption error.
4. Connect with RMAN and try to use the BLOCKRECOVER command. As we haven't taken any RMAN backup, we get an error.
5. Catalog the hot backup to the RMAN repository.
6. Use BLOCKRECOVER command and recover the corrupted data block using cataloged hot backup of the
datafile.
7. Query the table and get the data back!
Here is the scenario
SQL> CREATE USER antony IDENTIFIED BY antony;

User created.

SQL> GRANT DBA TO antony;

Grant succeeded.

SQL> CONN antony/antony;


Connected.
SQL> CREATE TABLE corrupt_test (id NUMBER);

Table created.

SQL> INSERT INTO corrupt_test VALUES(123);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> COLUMN segment_name format a15


SQL> SELECT segment_name, tablespace_name from dba_segments
2 WHERE segment_name='CORRUPT_TEST';

SEGMENT_NAME    TABLESPACE_NAME
--------------- ------------------------------
CORRUPT_TEST    USERS

SQL> COLUMN tablespace_name format a15


SQL> COLUMN name FORMAT a43
SQL> SELECT segment_name, a.tablespace_name, b.name
2 FROM dba_segments a, v$datafile b
3 WHERE a.header_file=b.file#
4 AND a.segment_name='CORRUPT_TEST';

SEGMENT_NAME    TABLESPACE_NAME NAME
--------------- --------------- -------------------------------------------
CORRUPT_TEST    USERS           /u01/app/oracle/oradata/orcl/users01.dbf

SQL> ALTER TABLESPACE USERS BEGIN BACKUP;

Tablespace altered.

SQL> host cp /u01/app/oracle/oradata/orcl/users01.dbf /u01/app/oracle/oradata/backup/users01_backup.dbf

SQL> ALTER TABLESPACE USERS END BACKUP;

Tablespace altered.

SQL> SELECT header_block FROM dba_segments WHERE segment_name='CORRUPT_TEST';

HEADER_BLOCK
------------
          67

SQL>

[oracle@cdbs1 ~]$ dd of=/u01/app/oracle/oradata/orcl/users01.dbf bs=8192 conv=notrunc seek=68 << EOF


> rajeshkumar testing block corruption
> EOF
0+1 records in
0+1 records out
[oracle@cdbs1 ~]$

SQL> Conn antony/antony


Connected.

SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;

System altered.

SQL> select * from corrupt_test;


select * from corrupt_test
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 67)
ORA-01110: data file 4: '/u01/app/oracle/oradata/orcl/users01.dbf'

SQL> EXIT

[oracle@cdbs1 ~]$ rman target /


Recovery Manager: Release 10.2.0.1.0 - Production on Thu May 6 01:41:46 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ORCL (DBID=1245940166)


RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 68;

Starting blockrecover at 06-MAY-10


using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=143 devtype=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of blockrecover command at 05/06/2010 01:42:25
RMAN-06026: some targets not found - aborting restore

RMAN-06023: no backup or copy of datafile 4 found to restore

RMAN>
RMAN> CATALOG DATAFILECOPY '/u01/app/oracle/oradata/backup/users01_backup.dbf';
cataloged datafile copy
datafile copy filename=/u01/app/oracle/oradata/backup/users01_backup.dbf recid=1 stamp=718249432
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 68;

Starting blockrecover at 06-MAY-10


using channel ORA_DISK_1

channel ORA_DISK_1: restoring block(s) from datafile copy /u01/app/oracle/oradata/backup/users01_backup.dbf

starting media recovery


media recovery complete, elapsed time: 00:00:02

Finished blockrecover at 06-MAY-10

RMAN> EXIT

Recovery Manager complete.


[oracle@cdbs1 ~]$ sqlplus
SQL*Plus: Release 10.2.0.1.0 - Production on Thu May 6 01:45:04 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Enter user-name: sys as sysdba


Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> conn antony/antony


Connected.
SQL> select * from CORRUPT_TEST;

ID
----------
       123
Command Line History and Editing in SQL*Plus and RMAN on Linux
rlwrap (readline wrapper) utility provides a command history and editing of keyboard input for any other command.

This article explains how to install rlwrap and set it up for SQL*Plus and RMAN.

Download the latest rlwrap software from the following URL.


http://utopia.knoware.nl/~hlub/uck/rlwrap/
Unzip and install the software using the following commands.
gunzip rlwrap*.gz
tar -xvf rlwrap*.tar
cd rlwrap*
./configure
make
make check
make install
Run the following commands, or better still append them to the ".bashrc" of the oracle software owner.

alias rlsqlplus='rlwrap sqlplus'


alias rlrman='rlwrap rman'
You can now start SQL*Plus or RMAN using "rlsqlplus" and "rlrman" respectively, and you will have a basic
command history and the current line will be editable using the arrow and delete keys.

[oracle@cdbs1 ~]$ rlrman
Recovery Manager: Release 10.2.0.1.0 - Production on Wed May 5 17:14:57 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> exit

Recovery Manager complete.

Instead of rlrman and rlsqlplus you can use your own alias names for rman and sqlplus. In addition, you can now use the up and down arrow keys to recall previously entered commands.
[oracle@cdbs1 ~]$ alias rajesh='rlwrap rman'
[oracle@cdbs1 ~]$ rajesh
Recovery Manager: Release 10.2.0.1.0 - Production on Wed May 5 17:15:27 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN>

[oracle@cdbs1 ~]$ alias lakshmi='rlwrap sqlplus'


[oracle@cdbs1 ~]$ lakshmi
SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 5 17:21:38 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Enter user-name: sys as sysdba


Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> archive log list;


Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence
Next log sequence to archive   7
Current log sequence           7

SQL> select name from v$database;

NAME
--------ORCL

SQL> archive log list;


Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence
Next log sequence to archive   7
Current log sequence           7

SQL> select name from v$database;

NAME
--------ORCL
SQL>

Automated Storage Management (ASM) Pocket Reference Guide
by Charles Kim

ASM Diskgroups
Create Diskgroup
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;
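Once the diskgroup is mounted, a database instance can place files on it directly; a minimal usage sketch (the tablespace name and size are illustrative assumptions):

SQL> create tablespace asm_demo datafile '+DISK_GROUP_1' size 100m;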

Drop disk groups


DROP DISKGROUP DATA INCLUDING CONTENTS;

Add disks
ALTER DISKGROUP DATA ADD DISK '/dev/sda3';

Drop a disk
ALTER DISKGROUP DATA DROP DISK DATA_0001;

Resize all disks in a disk group


ALTER DISKGROUP DATA RESIZE ALL SIZE 100G;

UNDROP DISKS clause of the ALTER DISKGROUP


ALTER DISKGROUP DATA UNDROP DISKS;

Rebalance diskgroup

ALTER DISKGROUP DATA REBALANCE POWER 5;

Check Diskgroup
ALTER DISKGROUP DATA CHECK;
ALTER DISKGROUP DATA CHECK NOREPAIR;

Diskgroup Metadata Backup


md_backup -b asm_backup.mdb.txt -g data,fra

ASM Specific Init.ora Parameters


*.cluster_database=true
*.asm_diskstring='/dev/sd*1'
*.instance_type=asm
*.shared_pool_size=100M
*.large_pool_size=80M
*.db_cache_size=60M
*.asm_diskgroups='DATA','FRA'
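With a parameter file like this in place, the ASM instance is started like any other instance; a minimal sketch (the SID +ASM is the usual default and is assumed here):

$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
SQL> startup
-- the ASM instance mounts the diskgroups listed in asm_diskgroups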

Initialize ASM for non-RAC


./localconfig add

Manually start CSSD (non-RAC)


/etc/init.d/init.cssd start

Manually stop CSSD ( non-RAC)


/etc/init.d/init.cssd stop

Resetting CSS to new Oracle Home


localconfig reset /apps/oracle/product/11.1.0/ASM

ASM Dictionary Views

v$asm_alias ---list all aliases in all currently mounted diskgroups


v$asm_client ---list all the databases currently accessing the diskgroups
v$asm_disk ----lists all the disks discovered by the ASM instance.
v$asm_diskgroup ---Lists all the diskgroups discovered by the ASM instance.
v$asm_file ---Lists all files that belong to diskgroups mounted by the ASM instance.
v$asm_operation ---Reports information about current active operations. Rebalance activity is reported in this
view.
v$asm_template ---Lists all the templates currently mounted by the ASM instance.
v$asm_diskgroup_stat ---same as v$asm_diskgroup but does not discover new diskgroups. Use this view instead of v$asm_diskgroup.
v$asm_disk_stat ---same as v$asm_disk but does not discover new disks. Use this view instead of v$asm_disk.
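For example, free and total space per diskgroup can be checked with the lightweight view, without triggering disk discovery:

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup_stat;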

srvctl commands
ADD
srvctl add asm -n rac3 -i +ASM3 -o /opt/oracle/app/product/10.2.0/asm

ENABLE
srvctl enable asm -n rac3 -i +ASM3

DISABLE
srvctl disable asm -n rac3 -i +ASM3

START
srvctl start asm -n rac3

STOP
srvctl stop asm -n rac3

CONFIG
srvctl config asm -n rac1

REMOVE

srvctl remove asm -n rac1


STATUS
srvctl status asm
srvctl status asm -n rac1

MODIFY
srvctl modify asm -o -n rac1

ASMLIB commands ( as root)


/etc/init.d/oracleasm start
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm restart
/etc/init.d/oracleasm configure
/etc/init.d/oracleasm status
/etc/init.d/oracleasm enable
/etc/init.d/oracleasm disable
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm deletedisk
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm querydisk /dev/sdb1
/etc/init.d/oracleasm createdisk /dev/sdb1 VOL1
/etc/init.d/oracleasm renamedisk /dev/sdb1 VOL1

asmcmd Commands
cd -----changes the current directory to the specified directory
du -----Displays the total disk space occupied by ASM files in the specified
ASM directory and all its subdirectories, recursively.
find -----Lists the paths of all occurrences of the specified name ( with wildcards) under the specified directory.
ls +data/testdb ----Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct -----Lists information about current ASM clients.

lsdg ----Lists all disk groups and their attributes


mkalias ----Creates an alias for a system generated filename.
mkdir -----Creates ASM directories.
pwd --------Displays the path of the current ASM directory.
rm -------Deletes the specified ASM files or directories.
rm -f
rmalias ---------Deletes the specified alias, retaining the file that the alias points to.
lsdsk ----------Lists disks visible to ASM.
md_backup ------Creates a backup of all of the mounted disk groups.
md_restore ------Restores disk groups from a backup.
remap ----repairs a range of physical blocks on a disk.
cp ------copies files into and out of ASM.
**ASM diskgroup to OS file system.
**OS file system to ASM diskgroup.
**ASM diskgroup to another ASM diskgroup on the same server.
**ASM disk group to ASM diskgroup on a remote server.
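For example, a datafile image can be copied out of ASM to a regular file system and back; a minimal asmcmd cp sketch (the ASM file name and OS paths are illustrative assumptions):

ASMCMD> cp +DATA/testdb/datafile/users.264.712345678 /u01/backup/users01.dbf
ASMCMD> cp /u01/backup/users01.dbf +DATA/testdb/users01_copy.dbf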

SYSASM Role (Starting in Oracle Database 11g)


SQL> grant sysasm to sys;   ---connecting as sysdba to administer ASM is deprecated; use: sqlplus / as sysasm

ASM Rolling Upgrades START


alter system start rolling migration to 11.2.0.2;

DISABLE
alter system stop rolling migration;

Database INIT parameters for ASM.


*.control_files='+DATA/orcl/controlfile/control1.ctl','+FRA/orcl/controlfile/control2.ctl'
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_recovery_file_dest='+DATA'

*.log_archive_dest_1='LOCATION=+DATA'
*.log_file_name_convert='+DATA/VISKDR','+DATA/VISK' ##added for DG

MIGRATE to ASM using RMAN


run
{
backup as copy database format '+DATA';
switch database to copy;
#For each logfile
sql "alter database rename '/data/oracle/VISK/redo1a.rdo' to '+DATA' ";
alter database open resetlogs;
#For each tempfile
sql "alter tablespace TEMP add tempfile" ;
}

Restore Database to ASM using SET NEWNAME


run
{
allocate channel d1 type disk;
#For each datafile
set newname for datafile 1 to '+DATA';
restore database;
switch datafile all;
release channel d1;
}
Error: ORA-16825: Fast-Start Failover and other errors or warnings detected for the database
ORA-16795: database resource guard detects that database re-creation is required
ORA-16825: Fast-Start Failover and other errors or warnings detected for the database
ORA-16817: unsynchronized Fast-Start Failover configuration

Solution:

DGMGRL> show database rajesh

Database
  Name:            rajesh
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    rajesh

Current status for "rajesh":
Error: ORA-16825: Fast-Start Failover and other errors or warnings detected for the database

DGMGRL> show database jeyanthi

Database
  Name:            jeyanthi
  Role:            PHYSICAL STANDBY
  Enabled:         NO
  Intended State:  ONLINE
  Instance(s):
    jeyanthi

Current status for "jeyanthi":
Error: ORA-16661: the standby database needs to be reinstated

DGMGRL> reinstate database jeyanthi;


Reinstating database "jeyanthi", please wait...
Operation requires shutdown of instance "jeyanthi" on database "jeyanthi"
Shutting down instance "jeyanthi"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "jeyanthi" on database "jeyanthi"
Starting instance "jeyanthi"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "jeyanthi" ...
Reinstatement of database "jeyanthi" succeeded
DGMGRL> show configuration verbose;

Configuration
  Name:                jeyanthi
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    jeyanthi - Physical standby database
             - Fast-Start Failover target
    rajesh   - Primary database

Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac3

Current status for "jeyanthi":
Warning: ORA-16607: one or more databases have failed

Then stop and start the observer (start it from another machine):
DGMGRL> stop observer
Done.
DGMGRL> connect sys/oracle@jeyanthi
Connected.
DGMGRL> start observer
Observer started

DGMGRL> show configuration verbose

Configuration
  Name:                jeyanthi
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    jeyanthi - Physical standby database
             - Fast-Start Failover target
    rajesh   - Primary database

Fast-Start Failover
  Threshold: 30 seconds
  Observer:  rac2

Current status for "jeyanthi":
SUCCESS
Configuration of 10g Data Guard Broker and Observer for Switchover
Configuring Data Guard Broker for Switchover, General Review.

In a previous document, 10g Data Guard, Physical Standby Creation, step by step, I described how to implement a Data Guard configuration; in this document I add how to configure the broker and observer, set the configuration to Maximum Availability, and manage switchover from the Data Guard command-line interface, DGMGRL.
The Data Guard Broker lets you manage a Data Guard configuration either from the Enterprise Manager Grid Control console or from a terminal in command-line mode. In this document I will explore the command-line mode.
Prerequisites include a 10g Oracle server, an spfile on both the primary and standby, a third server for the Observer, and listeners configured to include a service for the Data Guard Broker.

The Environment
2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux, the Primary and Standby databases
are located on these
servers.
1 Linux server, RH Linux 2.6.9-42.ELsmp x86_64 GNU/Linux, The Data Guard Broker Observer is located on
this server
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
ssh is configured for user oracle on both nodes
Oracle Home is on identical path on both nodes
Primary database ANTONY
Standby database JOHN

Step by Step Implementation of Data Guard Broker


Enable Data Guard Broker Start on the Primary and Standby databases
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;
System altered.
Setup the Local_Listener parameter on both the Primary and Standby databases

SQL> ALTER SYSTEM SET LOCAL_LISTENER='LISTENER_VMRACTEST' SCOPE=BOTH;


System altered.
Setup the tnsnames to enable communication with both the Primary and Standby databases
The listener.ora should include a service named <global_db_name>_DGMGRL to enable the broker to restart the databases in the event of a switchover. This configuration needs to be included on both servers.
Listener.ora on Node 1
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521)(IP = FIRST))
)
)
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = antony)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
(SID_NAME = antony)
)
(SID_DESC =
(SID_NAME= antony)
(GLOBAL_DBNAME = antony_DGMGRL)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
)
)
Listener.ora on Node 2
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521)(IP = FIRST))

)
)
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = john)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
(SID_NAME = john)
)
(SID_DESC =
(SID_NAME= john)
(GLOBAL_DBNAME = john_DGMGRL)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
)
)
Tnsnames.ora on Node 1, 2 and the observer node
ANTONY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = antony_DGMGRL)
)
)
JOHN =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = john_DGMGRL)
)

)
Setup the Broker configuration files
The broker configuration files are automatically created when the broker is started using ALTER SYSTEM SET
DG_BROKER_START=TRUE.
The default destination can be modified using the parameters DG_BROKER_CONFIG_FILE1 and
DG_BROKER_CONFIG_FILE2
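They can be relocated, if desired, before the broker is started (DG_BROKER_START must be FALSE while changing them); a minimal sketch where the target directory is an illustrative assumption:

SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='/u01/app/oracle/admin/antony/dr1antony.dat' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='/u01/app/oracle/admin/antony/dr2antony.dat' SCOPE=BOTH;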
On Primary:
SQL> SHOW PARAMETERS DG_BROKER_CONFIG

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr1antony.dat
dg_broker_config_file2               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr2antony.dat
On standby:
SQL> SHOW PARAMETERS DG_BROKER_CONFIG

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr1john.dat
dg_broker_config_file2               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/dr2john.dat

Next, create the configuration from within DGMGRL:


[oracle@rac1 ~]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.


DGMGRL> connect sys/oracle@antony
Connected.
DGMGRL> create configuration ANTONY AS
> PRIMARY DATABASE IS antony
> CONNECT IDENTIFIER IS antony;

Configuration "antony" created with primary database "antony"

Add the standby to the configuration and check it

DGMGRL> ADD DATABASE john AS


> CONNECT IDENTIFIER IS john
> MAINTAINED AS PHYSICAL;
Database "john" added

DGMGRL> SHOW CONFIGURATION;

Configuration
  Name:                antony
  Enabled:             NO
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    antony - Primary database
    john   - Physical standby database

Current status for "antony":
DISABLED

DGMGRL> SHOW DATABASE VERBOSE john;

Database
  Name:            john
  Role:            PHYSICAL STANDBY
  Enabled:         NO
  Intended State:  OFFLINE
  Instance(s):
    john

  Properties:
    InitialConnectIdentifier        = 'john'
    LogXptMode                      = 'ARCH'
    Dependency                      = ''
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '180'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '30'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    LogFileNameConvert              = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    FastStartFailoverTarget         = ''
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'rac2'
    SidName                         = 'john'
    LocalListenerAddress            = '(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521))'
    StandbyArchiveLocation          = '/u01/app/oracle/oradata/john/arch/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.arc'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "john":
DISABLED

DGMGRL> show database verbose antony;

Database
  Name:            antony
  Role:            PRIMARY
  Enabled:         NO
  Intended State:  OFFLINE
  Instance(s):
    antony

  Properties:
    InitialConnectIdentifier        = 'antony'
    LogXptMode                      = 'ASYNC'
    Dependency                      = ''
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '180'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '30'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '/u01/app/oracle/oradata/john/, /u01/app/oracle/oradata/antony/'
    LogFileNameConvert              = '/u01/app/oracle/oradata/john/, /u01/app/oracle/oradata/antony/'
    FastStartFailoverTarget         = ''
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'rac1'
    SidName                         = 'antony'
    LocalListenerAddress            = '(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521))'
    StandbyArchiveLocation          = '/u01/app/oracle/oradata/antony/arch/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.arc'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "antony":
DISABLED

Enabling the configuration and databases

DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration;

Configuration
  Name:                antony
  Enabled:             YES
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    antony - Primary database
    john   - Physical standby database

Current status for "antony":
SUCCESS

DGMGRL> enable database john;
Enabled.
DGMGRL> SHOW DATABASE VERBOSE john;

Database
  Name:            john
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    john

  Properties:
    InitialConnectIdentifier        = 'john'
    LogXptMode                      = 'ARCH'
    Dependency                      = ''
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '180'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '30'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    LogFileNameConvert              = '/u01/app/oracle/oradata/antony/, /u01/app/oracle/oradata/john/'
    FastStartFailoverTarget         = ''
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'rac2'
    SidName                         = 'john'
    LocalListenerAddress            = '(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521))'
    StandbyArchiveLocation          = '/u01/app/oracle/oradata/john/arch/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.arc'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "john":
SUCCESS

Enabling Fast Start Failover and the Observer

These are the steps required to enable and check Fast Start Failover and the Observer:
1. Ensure standby redologs are configured on all databases.
on primary:
SQL> SELECT TYPE,MEMBER FROM V$LOGFILE;

TYPE    MEMBER
------- --------------------------------------------------
ONLINE  /u01/app/oracle/oradata/antony/redo03.log
ONLINE  /u01/app/oracle/oradata/antony/redo02.log
ONLINE  /u01/app/oracle/oradata/antony/redo01.log
STANDBY /u01/app/oracle/oradata/antony/redoby04.log
STANDBY /u01/app/oracle/oradata/antony/redoby05.log
STANDBY /u01/app/oracle/oradata/antony/redoby06.log

On standby:

SQL> SELECT TYPE,MEMBER FROM V$LOGFILE;

TYPE    MEMBER
------- --------------------------------------------------
ONLINE  /u01/app/oracle/oradata/john/redo03.log
ONLINE  /u01/app/oracle/oradata/john/redo02.log
ONLINE  /u01/app/oracle/oradata/john/redo01.log
STANDBY /u01/app/oracle/oradata/john/redoby04.log
STANDBY /u01/app/oracle/oradata/john/redoby05.log
STANDBY /u01/app/oracle/oradata/john/redoby06.log

2. Ensure the LogXptMode Property is set to SYNC.


Note: These commands will succeed only if database is configured with standby redo logs.
DGMGRL> EDIT DATABASE antony SET PROPERTY 'LogXptMode'='SYNC';
Property "LogXptMode" updated
DGMGRL> EDIT DATABASE john SET PROPERTY 'LogXptMode'='SYNC';
Property "LogXptMode" updated

3. Specify the FastStartFailoverTarget property

DGMGRL> EDIT DATABASE antony SET PROPERTY FastStartFailoverTarget='john';


Property "faststartfailovertarget" updated
DGMGRL> EDIT DATABASE john SET PROPERTY FastStartFailoverTarget='antony';

Property "faststartfailovertarget" updated

4. Upgrade the protection mode to MAXAVAILABILITY, if necessary.

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;


Operation requires shutdown of instance "antony" on database "antony"
Shutting down instance "antony"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "antony" on database "antony"
Starting instance "antony"...
ORACLE instance started.
Database mounted.

Note: if you get "ORA-12514: TNS:listener does not currently know of service requested in connect descriptor"
followed by "You are no longer connected to ORACLE. Please connect again.", the broker could not restart the
instance for you, so you must start the primary instance manually:
SQL> conn / as sysdba
SQL> startup mount;

5. Enable Flashback Database on the Primary and Standby Databases.

On both databases:
To put the standby into Flashback mode, shut down both databases; then, while the primary is down, execute the
following commands on the standby:

SQL> ALTER SYSTEM SET UNDO_RETENTION=3600 SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM SET UNDO_MANAGEMENT='AUTO' SCOPE=SPFILE;
System altered.

SQL> startup mount;

SQL> ALTER DATABASE FLASHBACK ON;
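As a quick sanity check (not part of the original transcript, just a minimal verification sketch), you can optionally
set how far back Flashback Database should reach and then confirm it is enabled. The 1440-minute (24 hour) value
below is only an example:

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440 SCOPE=BOTH;

SQL> SELECT FLASHBACK_ON FROM V$DATABASE;
SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME, RETENTION_TARGET
  2  FROM V$FLASHBACK_DATABASE_LOG;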

Enable fast start failover

[oracle@rac1 ~]$ dgmgrl


DGMGRL for Linux: Version 10.2.0.1.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.


DGMGRL> connect sys/oracle@antony;
Connected.
DGMGRL> show configuration verbose;

Configuration
  Name:                antony
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: DISABLED
  Databases:
    antony - Primary database
    john   - Physical standby database

Current status for "antony":
SUCCESS

DGMGRL> show database john;

Database
  Name:            john
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    john

Current status for "john":
SUCCESS

DGMGRL> ENABLE FAST_START FAILOVER;
Enabled.

Start the observer
Start the observer from a third server, in the background. You may use a script like this:

---------------- script start on next line --------------------
#!/bin/ksh
# startobserver
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export BASE_PATH=/u01/app/oracle/oracle/scripts/general:/opt/CTEact/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/bin:/etc:/usr/local/maint/oracle:/usr/ccs/bin:/usr/openwin/bin:/usr/dt/bin:/usr/local/bin:.
export PATH=$ORACLE_HOME/bin:$BASE_PATH
dgmgrl << eof
connect sys/oracle@antony
START OBSERVER;
eof
---------------- script end on previous line ------------------

[oracle@rac3 ~]$ nohup ./startobserver &
nohup: appending output to `nohup.out'
[1] 27392
Verify the fast-start failover configuration.

[oracle@rac3 ~]$ dgmgrl


DGMGRL for Linux: Version 10.2.0.1.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.


DGMGRL> connect sys/oracle@antony
Connected.
DGMGRL> show configuration verbose

Configuration
  Name:                antony
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    antony - Primary database
    john   - Physical standby database
           - Fast-Start Failover target

  Fast-Start Failover
    Threshold: 30 seconds
    Observer:  rac1

Current status for "antony":
SUCCESS

Check that primary and standby are healthy

This check must return 'SUCCESS' as the status for both databases; otherwise there is a configuration problem.

DGMGRL> show database antony

Database
  Name:            antony
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    antony

Current status for "antony":
SUCCESS

DGMGRL> show database john

Database
  Name:            john
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    john

Current status for "john":
SUCCESS

DGMGRL>

EXECUTE THE SWITCHOVER:

DGMGRL> SWITCHOVER TO john;


Performing switchover NOW, please wait...

Operation requires shutdown of instance "antony" on database "antony"


Shutting down instance "antony"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "john" on database "john"
Shutting down instance "john"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "antony" on database "antony"
Starting instance "antony"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "john" on database "john"
Starting instance "john"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "john"
DGMGRL>

DGMGRL> show configuration verbose

Configuration
  Name:                antony
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    antony - Physical standby database
           - Fast-Start Failover target
    john   - Primary database

  Fast-Start Failover
    Threshold: 30 seconds
    Observer:  rac1

Current status for "antony":
SUCCESS

DGMGRL> show database john

Database
  Name:            john
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    john

Current status for "john":
SUCCESS

DGMGRL> show database antony

Database
  Name:            antony
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    antony

Current status for "antony":
SUCCESS
fast-start failover DATAGUARD BROKER
Here is an example of testing fast-start failover in a Data Guard environment.
I manually killed the mandatory background process SMON of the primary database; the observer then automatically
failed over to the standby database (making it the new primary) and reinstated the old primary as a standby.

here, primary database name: whiteowl
physical standby database name: blackowl
observer name: observer

from primary database machine rac1:

[oracle@rac1 ~]$ ps -eaf | grep smon
oracle   17328     1  0 12:25 ?        00:00:00 ora_smon_whiteowl
oracle   17886 17865  0 12:34 pts/4    00:00:00 grep smon
[oracle@rac1 ~]$ kill -9 17328

Start up the old primary database in mount stage so that the broker can reinstate it.

DGMGRL> show configuration verbose;

Configuration
  Name:                whiteowl
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    whiteowl - Physical standby database (disabled)
             - Fast-Start Failover target
    blackowl - Primary database

  Fast-Start Failover
    Threshold: 30 seconds
    Observer:  rac1

Current status for "whiteowl":
Warning: ORA-16608: one or more databases have warnings

DGMGRL> show site verbose 'blackowl';

Site
  Name:            'blackowl'
  Hostname:        'rac2'
  Instance name:   'blackowl'
  Service Name:    'blackowl'
  Standby Type:    'physical'
  Enabled:         'yes'
  Required:        'yes'
  Default state:   'PRIMARY'
  Intended state:  'PRIMARY'
  PFILE:           ''
  Number of resources: 1
  Resources:
    Name: blackowl (default) (verbose name='blackowl')

Current status for "blackowl":
Warning: ORA-16817: unsynchronized Fast-Start Failover configuration

DGMGRL> show site verbose 'whiteowl';

Site
  Name:            'whiteowl'
  Hostname:        'rac1'
  Instance name:   'whiteowl'
  Service Name:    'whiteowl'
  Standby Type:    'physical'
  Enabled:         'yes'
  Required:        'yes'
  Default state:   'STANDBY'
  Intended state:  'STANDBY'
  PFILE:           ''
  Number of resources: 1
  Resources:
    Name: whiteowl (default) (verbose name='whiteowl')

Current status for "whiteowl":
Warning: ORA-16817: unsynchronized Fast-Start Failover configuration

on observer: machine rac3

12:19:20.23 Monday, January 25, 2010


Initiating fast-start failover to database "blackowl"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "blackowl"
12:19:51.84 Monday, January 25, 2010

12:24:33.93 Monday, January 25, 2010


Initiating reinstatement for database "whiteowl"...

Reinstating database "whiteowl", please wait...


Operation requires shutdown of instance "whiteowl" on database "whiteowl"
Shutting down instance "whiteowl"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "whiteowl" on database "whiteowl"
Starting instance "whiteowl"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "whiteowl" ...
Reinstatement of database "whiteowl" succeeded
12:26:02.89 Monday, January 25, 2010

then check,

DGMGRL> show configuration verbose;

Configuration
  Name:                whiteowl
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    whiteowl - Physical standby database
             - Fast-Start Failover target
    blackowl - Primary database

  Fast-Start Failover
    Threshold: 30 seconds
    Observer:  rac1

Current status for "whiteowl":
SUCCESS
Data Guard errors and solutions I faced

Today I started the primary and standby databases and got this error message on the primary database:
ORA-16649: database will open after Data Guard broker has evaluated Fast-Start Failover status

After connecting with the observer, I ran the "show configuration verbose" and "show database verbose 'whiteowl'"
commands, and they showed the error below.
note: here my database name is whiteowl

ORA-16820: Fast-Start Failover observer is no longer observing this database
Cause: A previously started observer was no longer actively observing this database. A significant amount of time
elapsed since this database last heard from the observer. Possible reasons were:
- The node where the observer was running was not available.
- The network connection between the observer and this database was not available.
- The observer process was terminated unexpectedly.

Action: Check the reason why the observer cannot contact this database. If the problem cannot be corrected,
stop the current observer by connecting to the Data Guard configuration and issuing the DGMGRL "STOP OBSERVER"
command. Then restart the observer on another node. You may use the DGMGRL "START OBSERVER" command to
start the observer on the other node.

What did I do?

I checked the listeners, the tnsnames.ora files, and tnsping connectivity on the primary, standby, and observer
machines, and then, as mentioned above, I stopped the observer and started it again from the primary database
machine. Now it is working fine.
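For reference, the stop/restart sequence looks roughly like this (a minimal sketch based on the Action text above;
the sys/oracle@whiteowl connect string is the one used elsewhere in this configuration, adjust as needed):

On the node where the stale observer was running (or any DGMGRL session that can reach the configuration):
DGMGRL> connect sys/oracle@whiteowl
DGMGRL> STOP OBSERVER;

Then on the node that should run the observer from now on:
DGMGRL> connect sys/oracle@whiteowl
DGMGRL> START OBSERVER;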
DGMGRL> show configuration verbose;

Configuration
  Name:                whiteowl
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    whiteowl - Primary database
    blackowl - Physical standby database
             - Fast-Start Failover target

  Fast-Start Failover
    Threshold: 30 seconds
    Observer:  rac1

Current status for "whiteowl":
SUCCESS

DGMGRL>
Step by Step document for creating a Physical Standby Database, 10g DATA GUARD
10g Data Guard, Physical Standby Creation, step by step

primary database name: white on rac2 machine

standby database name: black on rac1 machine

Creating a Data Guard Physical Standby environment, General Review.

Manually setting up a physical standby database is a simple task when all prerequisites and setup steps are
carefully met and executed.
In this example I used 2 hosts that host a RAC database. All RAC pre-install requisites are therefore in place and no
additional configuration was necessary to implement Data Guard Physical Standby manually.

The Environment
2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
ssh is configured for user oracle on both nodes
Oracle Home is on an identical path on both nodes


Implementation notes:
Once you have your primary database up and running these are the steps to follow:
1. Enable Forced Logging
2. Create a Password File
3. Configure a Standby Redo Log
4. Enable Archiving
5. Set Primary Database Initialization Parameters
Having followed these steps to implement the Physical Standby you need to follow these steps:
1. Create a Control File for the Standby Database
2. Backup the Primary Database and transfer a copy to the Standby node.
3. Prepare an Initialization Parameter File for the Standby Database
4. Configure the listener and tnsnames to support the database on both nodes
5. Set Up the Environment to Support the Standby Database on the standby node.
6. Start the Physical Standby Database
7. Verify the Physical Standby Database Is Performing Properly
Step by Step Implementation of a Physical Standby Environment
Primary Database Steps
Primary Database General View

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     0
Current log sequence           1

SQL> select name from v$database;

NAME
---------
WHITE

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/white/system01.dbf
/u01/app/oracle/oradata/white/undotbs01.dbf
/u01/app/oracle/oradata/white/sysaux01.dbf
/u01/app/oracle/oradata/white/users01.dbf

SQL> show parameters unique

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      white

SQL>

Enable Forced Logging

In order to implement a standby database we enable 'Forced Logging'.
This option ensures that even if a 'nologging' operation is performed, force logging takes precedence and all
operations are logged into the redo logs.

SQL> ALTER DATABASE FORCE LOGGING;
Database altered.

Create a Password File

A password file must be created on the Primary and copied over to the Standby site. The sys password must be
identical on both sites. This is a key prerequisite in order to be able to ship and apply archived logs from Primary
to Standby.

[oracle@rac2 ~]$ cd $ORACLE_HOME/dbs
[oracle@rac2 dbs]$ orapwd file=orapwwhite password=oracle force=y

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
SYS                            TRUE  TRUE

Configure a Standby Redo Log

A standby redo log is added to enable Data Guard Maximum Availability and Maximum Protection modes. It is
important to configure the standby redo logs (SRL) with the same size as the online redo logs.
In this example I'm using Oracle Managed Files, which is why I don't need to provide the SRL path and file name. If
you are not using OMFs you must pass the fully qualified name.

SQL> select group#,type,member from v$logfile;

    GROUP# TYPE    MEMBER
---------- ------- --------------------------------------------------
         3 ONLINE  /u01/app/oracle/oradata/white/redo03.log
         2 ONLINE  /u01/app/oracle/oradata/white/redo02.log
         1 ONLINE  /u01/app/oracle/oradata/white/redo01.log

SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  2  '/u01/app/oracle/oradata/white/stby04.log' size 50m;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  2  '/u01/app/oracle/oradata/white/stby05.log' size 50m;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
  2  '/u01/app/oracle/oradata/white/stby06.log' size 50m;

Database altered.

SQL> SELECT GROUP#,TYPE,MEMBER FROM V$LOGFILE;

    GROUP# TYPE    MEMBER
---------- ------- --------------------------------------------------
         3 ONLINE  /u01/app/oracle/oradata/white/redo03.log
         2 ONLINE  /u01/app/oracle/oradata/white/redo02.log
         1 ONLINE  /u01/app/oracle/oradata/white/redo01.log
         4 STANDBY /u01/app/oracle/oradata/white/stby04.log
         5 STANDBY /u01/app/oracle/oradata/white/stby05.log
         6 STANDBY /u01/app/oracle/oradata/white/stby06.log

6 rows selected.

Set Primary Database Initialization Parameters

Data Guard must use an spfile. In order to configure it, we create and set the standby parameters in a regular
pfile, and once it is ready we convert it to an spfile.
Several init.ora parameters control the behavior of a Data Guard environment. In this example the primary
database init.ora is configured so that it can hold both roles, as Primary or Standby.

SQL> CREATE PFILE FROM SPFILE;

File created.

(or)

SQL> CREATE PFILE='/tmp/initwhite.ora' from spfile;

File created.

Edit the pfile to add the standby parameters (the Data Guard related parameters are listed after the standard ones):

white.__db_cache_size=184549376
white.__java_pool_size=4194304
white.__large_pool_size=4194304
white.__shared_pool_size=88080384
white.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/white/adump'
*.background_dump_dest='/u01/app/oracle/admin/white/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/white/control01.ctl','/u01/app/oracle/oradata/white/control02.ctl','/u01/app/oracle/oradata/white/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/white/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='white'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=whiteXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=94371840
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=285212672
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/white/udump'
db_unique_name='white'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(white,black)'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/oradata/white/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=white'
LOG_ARCHIVE_DEST_2='SERVICE=black LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=black'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
#Standby role parameters------------------------------------------
fal_server=black
fal_client=white
standby_file_management=auto
db_file_name_convert='/u01/app/oracle/oradata/black/','/u01/app/oracle/oradata/white/'
log_file_name_convert='/u01/app/oracle/oradata/black/','/u01/app/oracle/oradata/white/'

Once the new parameter file is ready we create from it the spfile:
SQL> shutdown immediate;
Database closed.

Database dismounted.
ORACLE instance shut down.
SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORA-16032: parameter LOG_ARCHIVE_DEST_1 destination string cannot be translated
ORA-07286: sksagdi: cannot obtain device information.
Linux Error: 2: No such file or directory
Note: create the archive log destination folder specified in the parameter file and then start up the database.
SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes

SQL> create spfile from pfile;

File created.

SQL> shutdown immediate;


ORA-01507: database not mounted

ORACLE instance shut down.

Enable Archiving
On 10g you can enable archive log mode by mounting the database and executing the archivelog command:
SQL> startup mount
ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes

Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/white/arch/
Oldest online log sequence
Next log sequence to archive   2
Current log sequence           2

SQL>

Standby Database Steps

Here, I am going to create the standby database using an RMAN backup of the primary database datafiles, redo logs,
and controlfile. Compared with a user-managed backup, RMAN is a more comfortable and flexible method.
Create an RMAN backup which we will use later to create the standby:

[oracle@rac2 ~]$ . oraenv


ORACLE_SID = [oracle] ? white
[oracle@rac2 ~]$ rman target=/

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jan 20 18:41:51 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: WHITE (DBID=3603807872)

RMAN> backup full database format '/u01/app/oracle/backup/%d_%U.bckp' plus archivelog format '/u01/app/oracle/backup/%d_%U.bckp';

Next, create a standby controlfile backup via RMAN:


RMAN> configure channel device type disk format '/u01/app/oracle/backup/%U';

new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/u01/app/oracle/backup/%U';
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;

RMAN> BACKUP ARCHIVELOG ALL;

In this simple example, I am backing up the primary database to disk; therefore, I must make the backupsets
available to the standby host if I want to use them as the basis for my duplicate operation:
[oracle@rac2 ~]$ cd /u01/app/oracle/backup
[oracle@rac2 backup]$ ls -lart
total 636080
drwxrwxr-x 9 oracle oinstall      4096 Jan 20 18:42 ..
-rw-r----- 1 oracle oinstall  50418176 Jan 20 18:43 WHITE_01l3v1uv_1_1.bckp
-rw-r----- 1 oracle oinstall 531472384 Jan 20 18:54 WHITE_02l3v203_1_1.bckp
-rw-r----- 1 oracle oinstall   7143424 Jan 20 18:54 WHITE_03l3v2jf_1_1.bckp
-rw-r----- 1 oracle oinstall   1346560 Jan 20 18:54 WHITE_04l3v2jv_1_1.bckp
-rw-r----- 1 oracle oinstall   7110656 Jan 20 19:19 05l3v41r_1_1
drwxr-xr-x 2 oracle oinstall      4096 Jan 20 19:20 .
-rw-r----- 1 oracle oinstall  53174272 Jan 20 19:21 06l3v448_1_1


[oracle@rac2 backup]$ scp * oracle@rac1:/u01/app/oracle/backup/
05l3v41r_1_1              100% 6944KB   6.8MB/s   00:00
06l3v448_1_1              100%   51MB  16.9MB/s   00:03
WHITE_01l3v1uv_1_1.bckp   100%   48MB   2.7MB/s   00:18
WHITE_02l3v203_1_1.bckp   100%  507MB   1.5MB/s   05:47
WHITE_03l3v2jf_1_1.bckp   100% 6976KB 996.6KB/s   00:07
WHITE_04l3v2jv_1_1.bckp   100% 1315KB   1.3MB/s   00:01

NOTE:
The backup folder location must be the same on both the primary and standby hosts,
for eg: the /u01/app/oracle/backup folder.

On the standby node create the required directories to get the datafiles
mkdir -p /u01/app/oracle/oradata/black
mkdir -p /u01/app/oracle/oradata/black/arch
mkdir -p /u01/app/oracle/admin/black
mkdir -p /u01/app/oracle/admin/black/adump
mkdir -p /u01/app/oracle/admin/black/bdump
mkdir -p /u01/app/oracle/admin/black/udump
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE/onlinelog

Prepare an Initialization Parameter File for the Standby Database

Copy the primary pfile to the standby destination:

[oracle@rac2 ~]$ cd /u01/app/oracle/product/10.2.0/db_1/dbs/
[oracle@rac2 dbs]$ scp initwhite.ora oracle@rac1:/tmp/initblack.ora
initwhite.ora             100% 1704     1.7KB/s   00:00

Copy and edit the primary init.ora to set it up for the standby role, as shown below (standby-related parameters adjusted):

black.__db_cache_size=188743680
black.__java_pool_size=4194304
black.__large_pool_size=4194304
black.__shared_pool_size=83886080
black.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/black/adump'
*.background_dump_dest='/u01/app/oracle/admin/black/bdump'
*.compatible='10.2.0.1.0'

*.control_files='/u01/app/oracle/oradata/black/control01.ctl','/u01/app/oracle/oradata/black/control02.ctl','/u01/app/oracle/oradata/black/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/black/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='/u01/app/oracle/oradata/white/','/u01/app/oracle/oradata/black/'
*.db_name='white'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='black'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=blackXDB)'
*.fal_client='black'
*.fal_server='white'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(white,black)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/oradata/black/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=black'
*.LOG_ARCHIVE_DEST_2='SERVICE=white LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=white'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.log_file_name_convert='/u01/app/oracle/oradata/white/','/u01/app/oracle/oradata/black/'
*.open_cursors=300
*.pga_aggregate_target=94371840
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=285212672
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/black/udump'

Configure the listener and tnsnames to support the database on both nodes
Configure listener.ora on both servers to hold entries for both databases
#on RAC2 Machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
)
)

SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = white)

(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = white)
)
)

#on rac1 machine

LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
)
)

SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = black)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = black)
)
)

Configure tnsnames.ora on both servers to hold entries for both databases


#on rac2 machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
)
)

WHITE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = white)
)
)
BLACK =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = black)
)
)
#on rac1 machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
)
)
WHITE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = white)

)
)
BLACK =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = black)
)
)
Start the listener and check tnsping on both nodes to both services
#on machine rac1
[oracle@rac1 tmp]$ lsnrctl stop LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 20-JAN-2010 23:59:41

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521)))
The command completed successfully
[oracle@rac1 tmp]$ lsnrctl start LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:00

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.1.0 - Production


System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log

Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.localdomain)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                21-JAN-2010 00:00:00
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.localdomain)(PORT=1521)))
Services Summary...
Service "black" has 1 instance(s).
Instance "black", status UNKNOWN, has 1 handler(s) for this service...
Service "black_DGMGRL" has 1 instance(s).
Instance "black", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac1 tmp]$ tnsping black

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:21

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:


/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias


Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT =
1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = black)))
OK (10 msec)
[oracle@rac1 tmp]$ tnsping white

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:29

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:


/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias


Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT =
1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = white)))
OK (10 msec)

#on rac2 machine


[oracle@rac2 dbs]$ lsnrctl stop LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:22:48

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521)))


The command completed successfully
[oracle@rac2 dbs]$ lsnrctl start LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:08

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.1.0 - Production


System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2.localdomain)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521)))


STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                21-JAN-2010 00:23:08
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2.localdomain)(PORT=1521)))
Services Summary...
Service "white" has 1 instance(s).
Instance "white", status UNKNOWN, has 1 handler(s) for this service...
Service "white_DGMGRL" has 1 instance(s).
Instance "white", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac2 dbs]$ tnsping white

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:14

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = white)))
OK (0 msec)

[oracle@rac2 dbs]$ tnsping black

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:18

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = black)))
OK (10 msec)

Set Up the Environment to Support the Standby Database on the standby node.
Create a passwordfile for the standby:
[oracle@rac1 ~]$ orapwd file=$ORACLE_HOME/dbs/orapwblack password=oracle
note: sys password must be identical for both primary and standby database

Append an entry to oratab:

[oracle@rac1 ~]$ echo "black:/u01/app/oracle/product/10.2.0/db_1:N" >> /etc/oratab

Startup nomount the Standby database

Nomount the standby instance in preparation for the duplicate operation:


Startup nomount the Standby database and generate an spfile

[oracle@rac1 ~]$ . oraenv


ORACLE_SID = [whiteowl] ? black
[oracle@rac1 ~]$ sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jan 21 00:38:03 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/tmp/initblack.ora'


ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              92276304 bytes
Database Buffers          188743680 bytes
Redo Buffers                2973696 bytes

SQL> create spfile from pfile='/tmp/initblack.ora';

File created.

SQL> shutdown immediate


ORA-01507: database not mounted

ORACLE instance shut down.


SQL> startup nomount
ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              92276304 bytes
Database Buffers          188743680 bytes
Redo Buffers                2973696 bytes

Create the standby database using rman:


[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [oracle] ? black
[oracle@rac1 ~]$ rman target=sys/oracle@white auxiliary=/

Recovery Manager: Release 10.2.0.1.0 - Production on Thu Jan 21 00:43:11 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: WHITE (DBID=3603807872)


connected to auxiliary database: WHITE (not mounted)

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

Start the redo apply:

SQL> alter database recover managed standby database disconnect from session;

Test the configuration by generating archive logs from the primary and then querying the standby to see if the logs
are being successfully applied.

On the Primary:

SQL> alter system switch logfile;


SQL> alter system archive log current;

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/white/arch/
Oldest online log sequence
Next log sequence to archive   10
Current log sequence           10

On the Standby:

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/black/arch/
Oldest online log sequence     8
Next log sequence to archive   0
Current log sequence           10

SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG
  2  ORDER BY SEQUENCE#;
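To double-check that redo is actually being applied, you can also query V$MANAGED_STANDBY on the standby (a minimal
verification sketch, not part of the original transcript; the MRP0 row should show a status such as APPLYING_LOG
while managed recovery is running):

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#
  2  FROM V$MANAGED_STANDBY
  3  WHERE PROCESS LIKE 'MRP%' OR PROCESS LIKE 'RFS%';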

Stop the managed recovery process on the standby:

SQL> alter database recover managed standby database cancel;


RMAN Recovery Catalog

This chapter shows how to create the RMAN catalog, how to register a database with it, and how to review some of
the information contained in the catalog.

The catalog database is usually a small database; it contains and maintains the metadata of all RMAN backups
performed using the catalog.
1. Creating a Recovery Catalog and registering a database with it

Step 1: Create a tablespace for storing recovery catalog information in the recovery catalog database.
Here my recovery catalog database is demo1.

[oracle@rac2 bin]$ . oraenv


ORACLE_SID = [oracle] ? demo1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac2 bin]$ sqlplus '/as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Thu Dec 31 10:28:22 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining


and Real Application Testing options

SQL> startup
ORACLE instance started.

Total System Global Area 481267712 bytes


Fixed Size 1300716 bytes
Variable Size 226494228 bytes
Database Buffers 247463936 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.

SQL> CREATE TABLESPACE RMAN DATAFILE '/u01/app/oracle/oradata/demo1/rman01.dbf' size 1000m;

step 2: create a user for recovery catalog and assign a tablespace and resources to that user

SQL> create user sai identified by sai default tablespace rman quota unlimited on rman;

SQL> grant connect,resource, recovery_catalog_owner to sai;

step 3: Connect to recovery catalog and register the database with recovery catalog:
[oracle@rac2 bin]$ . oraenv
ORACLE_SID = [oracle] ? demo1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle

[oracle@rac2 bin]$ rman target /

RMAN> connect catalog sai/sai@demo1;

RMAN> create catalog;

RMAN> register database;

RMAN> report schema;

2. How to register a new database with the RMAN recovery catalog

Replace username/password with the actual username and password for the recovery catalog, DEMO1 with the name
of the recovery catalog database, and ANTO with the new database name.

1. Change SID to the database you want to register

. oraenv
ORACLE_SID

2. Connect to RMAN catalog database

rman target / catalog username/password@DEMO1

3. Register database

RMAN> register database;

example:

[oracle@rac2 bin]$ . oraenv


ORACLE_SID = [anto] ? anto
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac2 bin]$ rman target / catalog sai/sai@demo1;

Recovery Manager: Release 11.1.0.6.0 - Production on Thu Dec 31 10:32:15 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: ANTO (DBID=2484479252)


connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog


starting full resync of recovery catalog
full resync complete

verification:

connect to the recovery catalog database demo1 and connect as recovery catalog user sai;
SQL> conn sai/sai;
Connected.

SQL> select * from db;

    DB_KEY      DB_ID CURR_DBINC_KEY
---------- ---------- --------------
         1 3710360247              2
       141 2484479252            142

3. Unregister a database from the recovery catalog:

Log in as the RMAN catalog owner at the SQL*Plus prompt:

SQL> select * from rc_database where dbid = DBID;

SQL> exec dbms_rcvcat.unregisterdatabase(DBKEY, DBID);

example:

SQL> select * from rc_database where dbid = DBID;

    DB_KEY  DBINC_KEY       DBID NAME     RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- -------- ----------------- ---------
         1          2 3710360247 DEMO1               594567 29-DEC-09
       141        142 2484479252 ANTO                522753 30-DEC-09

SQL> exec dbms_rcvcat.unregisterdatabase(141, 2484479252);

SQL> select * from rc_database where dbid = DBID;

    DB_KEY  DBINC_KEY       DBID NAME     RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- -------- ----------------- ---------
         1          2 3710360247 DEMO1               594567 29-DEC-09

The database ANTO has been successfully removed from the recovery catalog.


Recovering a Standby database from a missing archivelog

Hi friends,
today I came across an issue recovering a standby database from missing archivelog files.

on primary database

SQL> archive log list;


Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/archive
Oldest online log sequence 16
Next log sequence to archive 18
Current log sequence 18

on standby database
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/archive
Oldest online log sequence 13
Next log sequence to archive 0
Current log sequence 18

I first tried to solve the problem using the shutdownabort.com document:

Register a missing log file
alter database register physical logfile '';
If FAL doesn't work and it says the log is already registered:
alter database register or replace physical logfile '';

If that doesn't work, try this...

shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;

wait for the recovery to finish - then cancel


shutdown immediate
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;
Check which logs are missing
Run this on the standby...

select local.thread#, local.sequence#
from (select thread#, sequence#
      from v$archived_log
      where dest_id=1) local
where local.sequence# not in
      (select sequence#
       from v$archived_log
       where dest_id=2
       and thread# = local.thread#)
/

   THREAD#  SEQUENCE#
---------- ----------
         1         10
         1         11
         1         12
         1         13
         1         14
         1         15

Still the archive logs were not applied to the standby database.

Finally I tried recovering the standby database using RMAN, following the el-caro blog document,
and got a solution; now my primary and standby databases have the same archives.

A physical standby database relies on continuous application of archivelogs from the primary database to stay in
sync with it. In Oracle Database versions prior to 10g, if an archivelog went missing or was corrupted, you had to
rebuild the standby database from scratch.

In 10g you can instead take an incremental backup on the primary and use it to recover the standby, compensating
for the missing archivelogs, as shown below.

In the case below, archivelogs with sequence numbers 137 and 138, which are required on the standby, are deleted
to simulate the problem.

Step 1: On the standby database check the current scn.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
     548283

Step 2: On the primary database create the needed incremental backup from the above SCN.

Log in to the primary database: rman target /

RMAN> backup device type disk incremental from scn 548283 database format '/u01/backup/bkup_%U';

Starting backup at 28-DEC-09

using channel ORA_DISK_1


backup will be obsolete on date 04-JAN-10
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/demo1/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/demo1/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/demo1/rman01.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/demo1/rman02.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/demo1/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/demo1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 28-DEC-09
channel ORA_DISK_1: finished piece 1 at 28-DEC-09
piece handle=/u01/backup/bkup_07l21ukv_1_1 tag=TAG20091228T143302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:24:19

using channel ORA_DISK_1


backup will be obsolete on date 04-JAN-10
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-09
channel ORA_DISK_1: finished piece 1 at 28-DEC-09
piece handle=/u01/backup/bkup_08l2202v_1_1 tag=TAG20091228T143302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 28-DEC-09

RMAN>

Step 3: Cancel managed recovery at the standby database.

SQL> recover managed standby database cancel;
Media recovery complete.

Move the backup files to a new folder called new_incr so that they are the only files in that folder.
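For example, something like this on the host where the backup pieces reside (a minimal sketch; the new_incr folder
name and the bkup_ prefix come from this walkthrough, adjust them to your own paths and backup format):

$ mkdir -p /u01/backup/new_incr
$ mv /u01/backup/bkup_* /u01/backup/new_incr/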

Step 4: Catalog the Incremental Backup Files at the Standby Database

[oracle@rac1 bin]$ . oraenv


ORACLE_SID = [RAC1] ? stby
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac1 bin]$ rman target /

Recovery Manager: Release 11.1.0.6.0 - Production on Mon Dec 28 15:01:33 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEMO1 (DBID=3710229940, not open)

RMAN> catalog start with '/u01/backup/new_incr';

using target database control file instead of recovery catalog


searching for all files that match the pattern /u01/backup/new_incr

List of Files Unknown to the Database


=====================================
File Name: /u01/backup/new_incr/bkup_08l2202v_1_1
File Name: /u01/backup/new_incr/bkup_07l21ukv_1_1

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files


=======================
File Name: /u01/backup/new_incr/bkup_08l2202v_1_1
File Name: /u01/backup/new_incr/bkup_07l21ukv_1_1

Step 5: Apply the Incremental Backup to the Standby Database

RMAN> recover database noredo;

Starting recover at 28-DEC-09


allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=141 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/stby/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/stby/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/stby/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stby/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/stby/rman01.dbf
destination for restore of datafile 00006: /u01/app/oracle/oradata/stby/rman02.dbf
channel ORA_DISK_1: reading from backup piece /u01/backup/new_incr/bkup_07l21ukv_1_1
channel ORA_DISK_1: piece handle=/u01/backup/new_incr/bkup_07l21ukv_1_1 tag=TAG20091228T143302
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished recover at 28-DEC-09

RMAN>

Step 6: Put the standby database back to managed recovery mode.

SQL> recover managed standby database nodelay disconnect;


Media recovery complete.

From the alert.log you will notice that the standby database is still looking for the old log files:

*************************************************
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 137-137
DBID 768471617 branch 600609988
**************************************************

This is because the controlfile has not been updated; hence the standby controlfile has to be recreated.

On the primary database:

SQL> alter database create standby controlfile as
  2  '/u01/control01.ctl';

Copy the standby control file to the standby site and restart the standby database in managed recovery mode.
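The restart on the standby looks roughly like this (a sketch, assuming the copied controlfile has been placed over
the files named in the standby's control_files parameter; these are the same commands used earlier in this document):

SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;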

Now check the archive log list on both the primary and the standby database.

On the primary:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/archive
Oldest online log sequence     20
Next log sequence to archive   22
Current log sequence           22

On the standby:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/archive
Oldest online log sequence     20
Next log sequence to archive   0
Current log sequence           22

SQL>
Changing the database DBID (using the DBNEWID utility, nid)
SQL> startup mount
ORACLE instance started.

Total System Global Area 481267712 bytes


Fixed Size 1300716 bytes
Variable Size 226494228 bytes
Database Buffers 247463936 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [demo2] ?

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle


[oracle@rac1 ~]$ nid target = /

DBNEWID: Release 11.1.0.6.0 - Production on Thu Dec 24 20:05:44 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to database DEMO2 (DBID=3682169720)

Connected to server version 11.1.0

Control Files in database:


/u01/app/oracle/oradata/demo2/control01.ctl
/u01/app/oracle/oradata/demo2/control02.ctl
/u01/app/oracle/oradata/demo2/control03.ctl

Change database ID of database DEMO2? (Y/[N]) => y

Proceeding with operation


Changing database ID from 3682169720 to 3682222232
Control File /u01/app/oracle/oradata/demo2/control01.ctl - modified
Control File /u01/app/oracle/oradata/demo2/control02.ctl - modified
Control File /u01/app/oracle/oradata/demo2/control03.ctl - modified
Datafile /u01/app/oracle/oradata/demo2/system01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/sysaux01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/undotbs01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/users01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/temp01.dbf - dbid changed
Control File /u01/app/oracle/oradata/demo2/control01.ctl - dbid changed
Control File /u01/app/oracle/oradata/demo2/control02.ctl - dbid changed
Control File /u01/app/oracle/oradata/demo2/control03.ctl - dbid changed

Instance shut down

Database ID for database DEMO2 changed to 3682222232.


All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database ID.
DBNEWID - Completed succesfully.

[oracle@rac1 ~]$
SQL> alter database open resetlogs;

Database altered.

SQL> select dbid from v$database;

      DBID
----------
3682222232
INTERNAL OPERATION OF HOT BACKUP
What Happens When A Tablespace/Database Is Kept In Begin Backup Mode

This document explains in detail what happens when a tablespace/datafile is kept in hot backup (begin backup) mode.

To perform an online/hot backup we put the tablespace in begin backup mode, copy the datafiles, and then put the
tablespace back with end backup.

In 8i and 9i we have to put each tablespace individually in begin/end backup mode to perform the online backup. From
10g onwards the entire database can be put in begin/end backup mode.

Make sure that the database is in archivelog mode.

Example :

Performing a single tablespace backup

+ sql>alter tablespace system begin backup;

+ Copy the corresponding datafiles using appropriate O/S commands.

+ sql>alter tablespace system end backup;

Performing a full database backup (starting from 10g)

+ sql> alter database begin backup;

+ Copy all the datafiles using appropriate O/S commands.

+ sql> alter database end backup;
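While the copy is running you can check which datafiles are currently in backup mode (a small verification sketch,
not part of the original write-up): files with STATUS = 'ACTIVE' in V$BACKUP are still in begin backup mode.

SQL> SELECT d.FILE#, d.NAME, b.STATUS, b.TIME
  2  FROM V$DATAFILE d JOIN V$BACKUP b ON b.FILE# = d.FILE#
  3  WHERE b.STATUS = 'ACTIVE';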

One danger in making online backups is the possibility of inconsistent data within a block. For example, assume
that you are backing up block 100 in datafile users.dbf. Also, assume that the copy utility reads the entire block
while DBWR is in the middle of updating the block. In this case, the copy utility may read the old data in the top
half of the block and the new data in the bottom half of the block. The result is called a fractured block,
meaning that the data contained in this block is not consistent at a given SCN.

Therefore oracle internally manages the consistency as below :

1. The first time a block is changed in a datafile that is in hot backup mode, the entire block is written to the redo
log files, not just the changed bytes. Normally only the changed bytes (a redo vector) is written. In hot backup
mode, the entire block is logged the first time. This is because you can get into a situation where the process
copying the datafile and DBWR are working on the same block simultaneously.

Let's say they are, and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a time). The
backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile DBWR has asked to rewrite this
block, and the OS schedules the DBWR write to occur right now. The entire 8k block is rewritten. The backup program
starts running again (multi-tasking OS here) and reads the last 4k of the block. The backup program has now
got a fractured block: the head and tail are from two points in time.

We cannot deal with that during recovery. Hence, we log the entire block image so that during recovery this block
is totally rewritten from redo and is at least consistent with itself. We can recover it from there.

2. The datafile headers which contain the SCN of the last completed checkpoint are not updated while a file is in
hot backup mode. This lets the recovery process understand what archive redo log files might be needed to fully
recover this file.

To limit the effect of this additional logging, you should ensure you only place one tablespace at a time in backup
mode and bring the tablespace out of backup mode as soon as you have backed it up. This will reduce the number
of blocks that may have to be logged to the minimum possible.

Try to take the hot/online backups when there is less / no load on the database, so that less redo will be
generated.
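
As a quick sanity check (a minimal sketch; the tablespace name and paths are examples, not from the original post), you can verify which datafiles are currently in backup mode by querying V$BACKUP:

-- Files in hot backup mode show STATUS = 'ACTIVE'
SELECT b.file#, d.name, b.status, b.change#, b.time
  FROM v$backup b, v$datafile d
 WHERE b.file# = d.file#;

-- Typical sequence for a single tablespace
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafile(s) with an O/S command, e.g.:
-- cp /u01/app/oracle/oradata/testdb/users01.dbf /backup/
ALTER TABLESPACE users END BACKUP;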
v$ASM view, Automatic Storage Management views
The following v$ASM views describe the structure and components of ASM:

v$ASM_ALIAS
This view displays all system and user-defined aliases. There is one row for every alias present in every diskgroup
mounted by the ASM instance. The RDBMS instance displays no rows in this view.

V$ASM_ATTRIBUTE
This Oracle Database 11g view displays one row for each ASM attribute defined. These attributes are listed when
they are defined in CREATE DISKGROUP or ALTER DISKGROUP statements. DISK_REPAIR_TIME is an example of
an attribute.

V$ASM_CLIENT
This view displays one row for each RDBMS instance that has an opened ASM diskgroup.

V$ASM_DISK

This view contains specifics about all disks discovered by the ASM instance, including mount status, disk state, and
size. There is one row for every disk discovered by the ASM instance.

V$ASM_DISK_IOSTAT
This displays information about disk I/O statistics for each ASM Client. If this view is queried from the database
instance, only the rows for that instance are shown.

V$ASM_DISK_STAT
This view contains the same content as V$ASM_DISK, except that V$ASM_DISK_STAT reads disk information from
cache and thus performs no disk discovery. This view is primarily used for quick access to the disk information
without the overhead of disk discovery.

V$ASM_DISKGROUP
This view displays one row for every ASM diskgroup discovered by the ASM instance on the node.

V$ASM_DISKGROUP_STAT
This view contains the same content as V$ASM_DISKGROUP, except that V$ASM_DISKGROUP_STAT reads
disk information from the cache and thus performs no disk discovery. This view is primarily used for quick access to
the diskgroup information without the overhead of disk discovery.

V$ASM_FILE
This view displays information about ASM files. There is one row for every ASM file in every diskgroup mounted by
the ASM instance. In a RDBMS instance, V$ASM_FILE displays no row.

V$ASM_OPERATION
This view describes the progress of a long-running ASM operation, such as a rebalance. In an RDBMS instance, V$ASM_OPERATION
displays no rows.

V$ASM_TEMPLATE
This view contains information on user- and system-defined templates. V$ASM_TEMPLATE displays one row for
every template present in every diskgroup mounted by the ASM instance. In an RDBMS instance, V$ASM_TEMPLATE
displays one row for every template present in every diskgroup mounted by the ASM instance with which the
RDBMS instance communicates.
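
As an illustration (a hedged sketch; diskgroup names and sizes depend on your environment), a few of these views can be queried from the ASM instance like this:

-- Diskgroup space and state
SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;

-- RDBMS instances that currently have diskgroups open
SELECT instance_name, db_name, status FROM v$asm_client;

-- Progress of any long-running ASM operation (for example, a rebalance)
SELECT operation, state, power, est_minutes FROM v$asm_operation;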

That's it.
oracle DBA Tips (PART-II)

ORACLE DBA TIPS:- PART-2


---------------------------------
26. Retrieving Threshold Information
SELECT metrics_name, warning_value, critical_value, consecutive_occurrences
FROM DBA_THRESHOLDS
WHERE metrics_name LIKE '%CPU Time%';

27.Viewing Alert Data


The following dictionary views provide information about server alerts:
DBA_THRESHOLDS lists the threshold settings defined for the instance.

DBA_OUTSTANDING_ALERTS describes the outstanding alerts in the database.

DBA_ALERT_HISTORY lists a history of alerts that have been cleared.

V$ALERT_TYPES provides information such as group and type for each alert.

V$METRICNAME contains the names, identifiers, and other information about the
system metrics.

V$METRIC and V$METRIC_HISTORY views contain system-level metric values in memory.
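
As a quick illustration (a minimal sketch, not part of the original tips), the outstanding and historical alerts can be listed like this:

-- Outstanding (uncleared) server alerts
SELECT reason, object_type, object_name, creation_time
  FROM dba_outstanding_alerts
 ORDER BY creation_time;

-- Alerts that have already been cleared
SELECT reason, object_name, creation_time
  FROM dba_alert_history
 ORDER BY creation_time DESC;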

28.The following views can help you to monitor locks (a couple of sample queries follow this list):

For getting information about locks, we have to run two scripts first: utllockt.sql and catblock.sql.

V$LOCK Lists the locks currently held by Oracle Database and outstanding requests for a lock or latch

DBA_BLOCKERS Displays a session if it is holding a lock on an object for which another session is waiting

DBA_WAITERS Displays a session if it is waiting for a locked object

DBA_DDL_LOCKS Lists all DDL locks held in the database and all outstanding requests for a DDL lock

DBA_DML_LOCKS Lists all DML locks held in the database and all outstanding requests for a DML lock

DBA_LOCK Lists all locks or latches held in the database and all outstanding requests for a lock or latch

DBA_LOCK_INTERNAL Displays a row for each lock or latch that is being held, and one row for each outstanding request for a lock or latch

V$LOCKED_OBJECT Lists all locks acquired by every transaction on the system
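
For example (a hedged sketch; catblock.sql must have been run for DBA_WAITERS to exist), the waiting/blocking relationships and the locked objects can be listed as follows:

-- Who is waiting on whom
SELECT waiting_session, holding_session, lock_type, mode_held, mode_requested
  FROM dba_waiters;

-- Objects locked by current transactions
SELECT lo.session_id, lo.oracle_username, o.object_name, lo.locked_mode
  FROM v$locked_object lo, dba_objects o
 WHERE lo.object_id = o.object_id;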

29.Process and Session Views


v$process
v$locked_object
v$session

v$sess_io
v$session_longops
v$session_wait
v$sysstat
v$resource_limit
v$sqlarea
v$latch

30.What Is a Control File?


Every Oracle Database has a control file, which is a small binary file that records the
physical structure of the database. The control file includes:
The database name

Names and locations of associated datafiles and redo log files

The timestamp of the database creation

The current log sequence number

Checkpoint information

31.The following views display information about control files:


V$DATABASE Displays database information from the control file

V$CONTROLFILE Lists the names of control files

V$CONTROLFILE_RECORD_SECTION Displays information about control file record sections

V$PARAMETER Displays the names of control files as specified in the CONTROL_FILES initialization parameter
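
A couple of illustrative queries against these views (a sketch; the output depends on your configuration):

-- Names and status of all control file copies
SELECT name, status FROM v$controlfile;

-- How full each control file record section is
SELECT type, records_total, records_used
  FROM v$controlfile_record_section;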

32.Redo Log Contents


Redo log files are filled with redo records. A redo record, also called a redo entry, is
made up of a group of change vectors, each of which is a description of a change made
to a single block in the database.

Redo entries record data that you can use to reconstruct all changes made to the
database, including the undo segments. Therefore, the redo log also protects rollback
data. When you recover the database using redo data, the database reads the change
vectors in the redo records and applies the changes to the relevant blocks.

33.Log Switches and Log Sequence Numbers


A log switch is the point at which the database stops writing to one redo log file and
begins writing to another. Normally, a log switch occurs when the current redo log file
is completely filled and writing must continue to the next redo log file.
You can also force log switches manually.

Oracle Database assigns each redo log file a new log sequence number every time a
log switch occurs and LGWR begins writing to it. When the database archives redo log
files, the archived log retains its log sequence number. A redo log file that is cycled
back for use is given the next available log sequence number.
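
To see this in action, you can force a switch and watch the sequence numbers advance (an illustrative sketch only):

-- Current groups, their sequence numbers and status
SELECT group#, sequence#, status, archived FROM v$log;

-- Force a manual log switch, then re-run the query above
ALTER SYSTEM SWITCH LOGFILE;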

34. Setting the Size of Redo Log Members


The minimum size permitted for a redo log file is 4 MB.

35.Setting the ARCHIVE_LAG_TARGET Initialization Parameter


The ARCHIVE_LAG_TARGET initialization parameter specifies the target of how many
seconds of redo the standby could lose in the event of a primary shutdown or failure if
the Oracle Data Guard environment is not configured in a no-data-loss mode. It also
provides an upper limit of how long (in seconds) the current log of the primary
database can span. Because the estimated archival time is also considered, this is not
the exact log switch time.


The following initialization parameter setting sets the log switch interval to 30 minutes
(a typical value).
ARCHIVE_LAG_TARGET = 1800
A value of 0 disables this time-based log switching functionality. This is the default
setting.
You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no
standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set
specifically to force logs to be switched and archived.

36.Verifying Blocks in Redo Log Files


If you set the initialization parameter DB_BLOCK_CHECKSUM to TRUE, the database
computes a checksum for each database block when it is written to disk, including
each redo log block as it is being written to the current log. The checksum is stored in the header of the block.

Oracle Database uses the checksum to detect corruption in a redo log block. The
database verifies the redo log block when the block is read from an archived log
during recovery and when it writes the block to an archive log file. An error is raised
and written to the alert log if corruption is detected.
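
A minimal sketch of checking and enabling the parameter (in 10g and later it also accepts the values TYPICAL and FULL; TRUE is treated as TYPICAL):

SQL> show parameter db_block_checksum

SQL> alter system set db_block_checksum = TRUE scope = both;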

37.Clearing a Redo Log File


A redo log file might become corrupted while the database is open, and ultimately
stop database activity because archiving cannot continue. In this situation the ALTER
DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without
shutting down the database.
The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
This statement overcomes two situations where dropping redo logs is not possible:

+ There are only two log groups

+ The corrupt redo log file belongs to the current group


If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the
statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them. The cleared
redo logs are available for use even though they were not archived.
If you clear a log file that is needed for recovery of a backup, then you can no longer
recover from that backup. The database writes a message in the alert log describing the
backups from which you cannot recover.
Note:
If you clear an unarchived redo log file, you should make
another backup of the database.
If you want to clear an unarchived redo log that is needed to bring an offline
tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER
DATABASE CLEAR LOGFILE statement.

38.Viewing Redo Log Information


V$LOG Displays the redo log file information from the control file

V$LOGFILE Identifies redo log groups and members and member status

V$LOG_HISTORY Contains log history information

39.You can use archived redo logs to:


Recover a database

Update a standby database

Get information about the history of a database using the LogMiner utility

40.Changing the database ARCHIVING mode:


(1) shutdown
(2) startup mount
(3) alter database archivelog;
(4) alter database open;

41.Performing Manual Archiving


ALTER DATABASE ARCHIVELOG MANUAL;
ALTER SYSTEM ARCHIVE LOG ALL;

note:When you use manual archiving mode, you cannot specify any standby databases in
the archiving destinations.

42.Understanding Archive Destination Status


Each archive destination has the following variable characteristics that determine its
status:
Valid/Invalid: indicates whether the disk location or service name information is specified and valid

Enabled/Disabled: indicates the availability state of the location and whether the database can use the destination

Active/Inactive: indicates whether there was a problem accessing the destination

Several combinations of these characteristics are possible. To obtain the current status
and other information about each destination for an instance, query the
V$ARCHIVE_DEST view.

The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter lets you control the availability state of the specified destination (n).

ENABLE indicates that the database can use the destination.

DEFER indicates that the location is temporarily disabled.

ALTERNATE indicates that the destination is an alternate. An alternate destination's availability state is DEFER, unless there is a failure of its parent destination, in which case its state becomes ENABLE.
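
For example (a hedged sketch; the destination number is a placeholder), you can check the configured destinations and change their availability state as follows:

-- Status of every configured archive destination for this instance
SELECT dest_name, status, destination, error
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';

-- Temporarily defer destination 2, then re-enable it
ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;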

43.Viewing Information About the Archived Redo Log


You can display information about the archived redo logs using the following sources:
(1)Dynamic Performance Views

(2)The ARCHIVE LOG LIST Command

Dynamic Performance Views
-------------------------

V$DATABASE Shows if the database is in ARCHIVELOG or NOARCHIVELOG mode and if MANUAL (archiving mode) has been specified.

V$ARCHIVED_LOG Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.

V$ARCHIVE_DEST Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.

V$ARCHIVE_PROCESSES Displays information about the state of the various archive processes for an instance.

V$BACKUP_REDOLOG Contains information about any backups of archived logs. If you use a recovery catalog, the RC_BACKUP_REDOLOG contains similar information.

V$LOG Displays all redo log groups for the database and indicates which need to be archived.

V$LOG_HISTORY Contains log history information such as which logs have been archived and the SCN range for each archived log.

44.Bigfile Tablespaces
A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks)
datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles,
but the files cannot be as large. The benefits of bigfile tablespaces are the following:
A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile
tablespace with 32K blocks can contain a 128 terabyte datafile. The maximum
number of datafiles in an Oracle Database is limited (usually to 64K files).
Therefore, bigfile tablespaces can significantly enhance the storage capacity of an
Oracle Database.

45.Altering a Bigfile Tablespace


Two clauses of the ALTER TABLESPACE statement support datafile transparency
when you are using bigfile tablespaces:

RESIZE: The RESIZE clause lets you resize the single datafile in a bigfile
tablespace to an absolute size, without referring to the datafile. For example:

ALTER TABLESPACE bigtbs RESIZE 80G;

AUTOEXTEND (used outside of the ADD DATAFILE clause):


With a bigfile tablespace, you can use the AUTOEXTEND clause outside of the ADD
DATAFILE clause. For example:
ALTER TABLESPACE bigtbs AUTOEXTEND ON NEXT 20G;

An error is raised if you specify an ADD DATAFILE clause for a bigfile tablespace.

46.Identifying a Bigfile Tablespace


The following views contain a BIGFILE column that identifies a tablespace as a bigfile
tablespace:
DBA_TABLESPACES

USER_TABLESPACES

V$TABLESPACE
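
A simple illustrative check (a sketch; output depends on your database):

-- Which tablespaces are bigfile tablespaces
SELECT tablespace_name, bigfile FROM dba_tablespaces;

SELECT name, bigfile FROM v$tablespace;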

47.Temporary Tablespaces
You can view the allocation and deallocation of space in a temporary tablespace sort
segment using the V$SORT_SEGMENT view. The V$TEMPSEG_USAGE view identifies
the current sort users in those segments.

You also use different views for viewing information about tempfiles than you would
for datafiles. The V$TEMPFILE and DBA_TEMP_FILES views are analogous to the
V$DATAFILE and DBA_DATA_FILES views.

48.Creating a Locally Managed Temporary Tablespace


CREATE TEMPORARY TABLESPACE lmtemp TEMPFILE '/u02/oracle/data/lmtemp01.dbf'
SIZE 20M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

Altering a Locally Managed Temporary Tablespace


-----------------------------------------------
ALTER TABLESPACE lmtemp
ADD TEMPFILE '/u02/oracle/data/lmtemp02.dbf' SIZE 18M REUSE;
(or)
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

ALTER TABLESPACE lmtemp TEMPFILE OFFLINE;


ALTER TABLESPACE lmtemp TEMPFILE ONLINE;

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' OFFLINE;


ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' ONLINE;

Assigning Default Temporary Tablespace:


--------------------------------------
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE lmtemp;

49.Multiple Temporary Tablespaces: Using Tablespace Groups:


A tablespace group enables a user to consume temporary space from multiple
tablespaces. A tablespace group has the following characteristics:

It contains at least one tablespace. There is no explicit limit on the maximum


number of tablespaces that are contained in a group.

It shares the namespace of tablespaces, so its name cannot be the same as any
tablespace.

You can specify a tablespace group name wherever a tablespace name would
appear when you assign a default temporary tablespace for the database or a
temporary tablespace for a user.

You do not explicitly create a tablespace group. Rather, it is created implicitly when
you assign the first temporary tablespace to the group. The group is deleted when the
last temporary tablespace it contains is removed from it.

The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member


tablespaces.
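
For example (a small illustrative query), to see which temporary tablespaces belong to which group:

SELECT group_name, tablespace_name
  FROM dba_tablespace_groups
 ORDER BY group_name, tablespace_name;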

50.Creating a Tablespace Group


------------------------------
CREATE TEMPORARY TABLESPACE lmtemp2 TEMPFILE '/u02/oracle/data/lmtemp201.dbf'
SIZE 50M
TABLESPACE GROUP group1;

ALTER TABLESPACE lmtemp TABLESPACE GROUP group2;

Changing Members of a Tablespace Group


---------------------------------------
You can add a tablespace to an existing tablespace group by specifying the existing
group name in the TABLESPACE GROUP clause of the CREATE TEMPORARY
TABLESPACE or ALTER TABLESPACE statement.
The following statement adds a tablespace to an existing group. It creates and adds
tablespace lmtemp3 to group1, so that group1 contains tablespaces lmtemp2 and
lmtemp3.
CREATE TEMPORARY TABLESPACE lmtemp3 TEMPFILE '/u02/oracle/data/lmtemp301.dbf'
SIZE 25M
TABLESPACE GROUP group1;
The following statement also adds a tablespace to an existing group, but in this case
because tablespace lmtemp2 already belongs to group1, it is in effect moved from
group1 to group2:

ALTER TABLESPACE lmtemp2 TABLESPACE GROUP group2;

Now group2 contains both lmtemp and lmtemp2, while group1 consists of only
lmtemp3.
You can remove a tablespace from a group as shown in the following statement:

ALTER TABLESPACE lmtemp3 TABLESPACE GROUP '';

Tablespace lmtemp3 no longer belongs to any group. Further, since there are no longer
any members of group1, this results in the implicit deletion of group1.

Assigning a Tablespace Group as the Default Temporary Tablespace


----------------------------------------------------------------
ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;
oracle DBA tips (PART-I)
ORACLE DBA TIPS:- PART-1
---------------------------------
1. To dynamically change the default tablespace type after database creation, use the SET
DEFAULT TABLESPACE clause of the ALTER DATABASE statement:
ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE;

2.You can determine the current default tablespace type for the database by querying the
DATABASE_PROPERTIES data dictionary view as follows:
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE';

3.To view the time zone names in the file being used by your database, use the following
query:
SELECT * FROM V$TIMEZONE_NAMES;

4.You can cancel FORCE LOGGING mode using the following SQL statement:
ALTER DATABASE NO FORCE LOGGING;

5.The V$SGA_TARGET_ADVICE view provides information that helps you decide on a value for SGA_TARGET.

6.The fixed views V$SGA_DYNAMIC_COMPONENTS and V$SGAINFO display the current actual size of each SGA component.

7.Checking Your Current Release Number


SELECT * FROM PRODUCT_COMPONENT_VERSION;

SELECT * FROM v$VERSION;

8.Bigfile tablespaces can contain only one file, but that file can have up to 4G blocks. The maximum number of
datafiles in an Oracle Database is limited (usually to 64K files).

9.Specifying a Flash Recovery Area with the following initialization parameters:


DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
In a RAC environment, the settings for these two parameters must be the same on all
instances.

10.DB_BLOCK_SIZE Initialization Parameter


You cannot change the block size after database creation except by re-creating the
database.

11.Nonstandard Block Sizes


Tablespaces of nonstandard block sizes can be created using the CREATE
TABLESPACE statement and specifying the BLOCKSIZE clause. These nonstandard

block sizes can have any of the following power-of-two values: 2K, 4K, 8K, 16K or 32K.
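
A hedged sketch (the cache size, file path and tablespace name are examples): before creating a tablespace with a nonstandard block size, the matching DB_nK_CACHE_SIZE buffer cache must be configured:

-- Allocate a buffer cache for 16K blocks
ALTER SYSTEM SET db_16k_cache_size = 32M;

-- Create a tablespace that uses the nonstandard 16K block size
CREATE TABLESPACE ts_16k
  DATAFILE '/u02/oracle/data/ts_16k_01.dbf' SIZE 100M
  BLOCKSIZE 16K;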

12.All SGA components allocate and deallocate space in units of granules. Oracle Database tracks SGA memory use
in internal numbers of granules for each SGA component.

13.Viewing Information about the SGA:


v$SGA
v$SGAINFO
v$SGASTAT
v$SGA_DYNAMIC_COMPONENTS
v$SGA_DYNAMIC_FREE_MEMORY
v$SGA_RESIZE_OPS
v$SGA_CURRENT_RESIZE_OPS
v$SGA_TARGET_ADVICE

14.An optional COMMENT clause lets you associate a text string with the parameter
update. When you specify SCOPE as SPFILE or BOTH, the comment is written to the
server parameter file.
example:
ALTER SYSTEM
SET LOG_ARCHIVE_DEST_4='LOCATION=/u02/oracle/rbdb1/ MANDATORY REOPEN=2'
COMMENT='Add new destination on Nov 29'
SCOPE=SPFILE;

15.Viewing Parameter Settings


show parameters, v$parameter, v$parameter2, v$spparameter.

16.You can find service information in the following service-specific views:

DBA_SERVICES

ALL_SERVICES or V$SERVICES

V$ACTIVE_SERVICES

V$SERVICE_STATS

V$SERVICE_EVENTS

V$SERVICE_WAIT_CLASSES

V$SERV_MOD_ACT_STATS

V$SERVICE_METRICS

V$SERVICE_METRICS_HISTORY

The following additional views also contain some information about services:

V$SESSION

V$ACTIVE_SESSION_HISTORY

DBA_RSRC_GROUP_MAPPINGS

DBA_SCHEDULER_JOB_CLASSES

DBA_THRESHOLDS

17.Viewing Information About the Database

DATABASE_PROPERTIES

GLOBAL_NAME

V$DATABASE

18.Starting an Instance, Mounting a Database, and Starting Complete Media Recovery


If you know that media recovery is required, you can start an instance, mount a
database to the instance, and have the recovery process automatically start by using
the STARTUP command with the RECOVER clause:
STARTUP OPEN RECOVER
If you attempt to perform recovery when no recovery is required, Oracle Database
issues an error message.

19.Placing a Database into a Quiesced State


To place a database into a quiesced state, issue the following statement:
ALTER SYSTEM QUIESCE RESTRICTED;
Non-DBA active sessions will continue until they become inactive.

20. You can determine the sessions that are blocking the quiesce operation by querying the V$BLOCKING_QUIESCE
view:

select bl.sid, user, osuser, type, program
  from v$blocking_quiesce bl, v$session se
 where bl.sid = se.sid;

21.You cannot perform a cold backup when the database is in


the quiesced state, because Oracle Database background processes
may still perform updates for internal purposes even while the
database is quiesced. In addition, the file headers of online datafiles
continue to appear to be accessible. They do not look the same as if
a clean shutdown had been performed. However, you can still take
online backups while the database is in a quiesced state.

22.Restoring the System to Normal Operation


The following statement restores the database to normal operation:

ALTER SYSTEM UNQUIESCE;

23.Viewing the Quiesce State of an Instance


You can query the ACTIVE_STATE column of the V$INSTANCE view to see the current
state of an instance. The column has one of these values:

NORMAL: Normal unquiesced state.

QUIESCING: Being quiesced, but some non-DBA sessions are still active.

QUIESCED: Quiesced; no non-DBA sessions are active or allowed.

24.Suspending and Resuming a Database


The ALTER SYSTEM SUSPEND statement halts all input and output (I/O) to datafiles (file header and file data) and
control files. The suspended state lets you back up a database without I/O interference. When the database is
suspended all preexisting I/O operations are allowed to complete and any new database accesses are placed in a
queued state.

The following statements illustrate ALTER SYSTEM SUSPEND/RESUME usage. The


V$INSTANCE view is queried to confirm database status.
SQL> ALTER SYSTEM SUSPEND;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;

DATABASE_STATUS
-----------------
SUSPENDED

SQL> ALTER SYSTEM RESUME;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;

DATABASE_STATUS
-----------------
ACTIVE

25.The DB_WRITER_PROCESSES initialization parameter specifies the number of DBWn processes.


Oracle Database allows a maximum of 20 database writer processes
(DBW0-DBW9 and DBWa-DBWj).
obsolete and/or deprecated parameter(s) specified
ORA-32004 obsolete and/or deprecated parameter(s) specified

Cause

One or more obsolete and/or deprecated parameters were specified in the SPFILE or the PFILE on the server side.

Action

See the alert log for a list of parameters that are obsolete or deprecated. Remove them from the SPFILE or the
server-side PFILE.

So somebody, somewhere has put obsolete and/or deprecated parameter(s) in my initDB.ora file. To find out which
one, issue the following statement from SQL*Plus:

SQL> select name, isspecified from v$obsolete_parameter where isspecified='TRUE';

Or, if you are the one who made the changes to initDB.ora, you might already know which one. In my case somebody
had been messing around with the parameter log_archive_start.

One way to remove this is to create a pfile from the spfile, edit it, and convert it back to an spfile. Alternatively, you can simply reset the parameter directly, as shown below.

SQL> startup
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.

Total System Global Area 167772160 bytes


Fixed Size 1247900 bytes
Variable Size 88081764 bytes
Database Buffers 75497472 bytes
Redo Buffers 2945024 bytes
Database mounted.
Database opened.
SQL>
SQL> alter system reset log_archive_start scope=spfile sid='*' ;

System altered.

SQL> shutdown immediate


Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.

Total System Global Area 167772160 bytes


Fixed Size 1247900 bytes
Variable Size 88081764 bytes
Database Buffers 75497472 bytes
Redo Buffers 2945024 bytes
Database mounted.
Database opened.
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL>
BDUMP, UDUMP, ALERT LOG FILES IN ORACLE 11G
The 11g New Features Guide notes important OFA changes, namely the removal of $ORACLE_HOME as an anchor
for diagnostic and alert files:

"The database installation process has been redesigned to be based on the ORACLE_BASE environment variable.
Until now, setting this variable has been optional and the only required variable has been ORACLE_HOME.

With this feature, ORACLE_BASE is the only required input, and the ORACLE_HOME setting will be derived from
ORACLE_BASE."

Let's take a look at changes to the Oracle11g OFA standard.

Enter new admin subdirectories

New in Oracle 11g we see the new ADR (Automatic Diagnostic Repository) and Incident Packaging System, all
designed to allow quick access to alert and diagnostic information.

The new $ADR_HOME directory is located by default at $ORACLE_BASE/diag, with the directories for each instance
at $ORACLE_BASE/diag/$ORACLE_SID, at the same level as the traditional bdump, udump and cdump directories,
and the initialization parameters background_dump_dest and user_dump_dest are deprecated in 11g.

You can use the new initialization parameter diagnostic_dest to specify an alternative location for the diag directory
contents.

In 11g, each $ORACLE_BASE/diag/$ORACLE_SID directory may contain these new directories:

alert - A new alert directory for the plain text and XML versions of the alert log.

incident - A new directory for the incident packaging software.

incpkg - A directory for packaging an incident into a bundle.

trace - A replacement for the ancient background dump (bdump) and user dump (udump) destinations.

cdump - The old core dump directory retains its 10g name.

Let's see how the 11g alert log has changed.

Alert log changes in 11g

Oracle now writes two alert logs, the traditional alert log in plain text plus a new XML formatted alert.log which is
named as log.xml.

"Prior to Oracle 11g, the alert log resided in $ORACLE_HOME/admin/$ORACLE_SID/bdump directory, but it now
resides in the $ORACLE_BASE/diag/$ORACLE_SID directory".

Fortunately, you can reset it to the 10g and earlier location by specifying the BDUMP location for the
diagnostic_dest parameter.

But best of all, you no longer require server access to see your alert log since it is now accessible via standard SQL
using the new v$diag_info view:

select name, value from v$diag_info;

For complete details, see MetaLink Note:438148.1 - "Finding alert.log file in 11g".
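
In addition to the v$diag_info query, the 11g adrci command-line utility can be used to browse the alert log; a brief hedged sketch (the homes and output depend on your diagnostic_dest):

[oracle@rac1 ~]$ adrci
adrci> show homes
adrci> show alert -tail 50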

ENABLE ARCHIVELOG AND FLASHBACK IN RAC DATABASE


Step by step process of putting a RAC database in archive log mode and then enabling the flashback Database
option.

Enabling archive log in RAC Database:

A database must be in archivelog mode before enabling flashback.

In this example database name is test and instances name are test1 and test2.

step 1:

creating recovery_file_dest in asm disk

SQL> alter system set db_recovery_file_dest_size=200m sid='*';

System altered.

SQL> alter system set db_recovery_file_dest='+DATA' sid='*';

System altered.

SQL> archive log list;


Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 12
Current log sequence 14
SQL>

step 2:

set the LOG_ARCHIVE_DEST_1 parameter. since these parameters will be identical for all nodes, we will use
sid='*'. However, you may need to modify this for your situation if the directories are different on each node.

SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';

System altered.

step 3:

set LOG_ARCHIVE_START to TRUE for all instances to enable automatic archiving.

SQL> alter system set log_archive_start=true scope=spfile sid='*';

System altered.
Note that we illustrate the command for backward compatibility purposes, but in oracle database 10g onwards, the
parameter is actually deprecated. Automatic archiving will be enabled by default whenever an oracle database is
placed in archivelog mode.

step 4:

Set CLUSTER_DATABASE to FALSE for the local instance, which you will then mount to put the database into
archivelog mode. By having CLUSTER_DATABASE=FALSE, the subsequent shutdown and startup mount will actually
do a Mount Exclusive by default, which is necessary to put the database in archivelog mode, and also to enable the
flashback database feature:

SQL> alter system set cluster_database=false scope=spfile sid='test1';

System altered.

step 5;
Shut down all instances. Ensure that all instances are shut down cleanly:

SQL> shutdown immediate

step 6:
Mount the database from instance test1 (where CLUSTER_DATABASE was set to FALSE) and then put the database
into archivelog mode.

SQL> startup mount


ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.
Database mounted.

SQL> alter database archivelog;

Database altered.

NOTE:
If you did not shut down all instances cleanly in step 5,
putting the database in archivelog mode will fail
with an ORA-265 Error.

SQL> alter database archivelog;


*
ERROR at line 1:
ORA-00265: instance recovery required, cannot set ARCHIVELOG mode

step 7:
Confirm that the database is in archivelog mode, with the appropriate parameters, by issuing the ARCHIVE LOG
LIST command:

SQL> archive log list;


Database log mode Archive Mode
Automatic archival Enabled

Archive destination USE_DB_RECOVERY_FILE_DEST


Oldest online log sequence 13
Next log sequence to archive 15
Current log sequence 15

step 8
Confirm the location of the RECOVERY_FILE_DEST via a SHOW PARAMETER.

SQL> show parameter recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +DATA
db_recovery_file_dest_size           big integer 200M

Step 9:
Once the database is in archivelog mode, you can enable flashback while the database is still mounted in Exclusive
mode (CLUSTER_DATABASE=FALSE).

SQL> alter database flashback on;

Database altered.

Step 10:
Confirm that Flashback is enabled and verify the retention target:

SQL> select flashback_on,current_scn from v$database;

FLASHBACK_ON CURRENT_SCN
------------------ -----------

YES 0

SQL> show parameter flash

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440

step 11:
Reset the CLUSTER_DATABASE parameter back to TRUE for all instances:

SQL> alter system set cluster_database=true scope=spfile sid='*';

System altered.

step 12:
shutdown the instance and then restart all cluster database instances.
All instances will now be archiving their redo threads.

SQL> shu immediate


ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.

start the database, using srvctl command or normal startup

[root@rac1 bin]# ./srvctl status database -d test


Instance test1 is not running on node rac1
Instance test2 is not running on node rac2

[root@rac1 bin]# ./srvctl start database -d test

[root@rac1 bin]# ./srvctl status database -d test


Instance test1 is running on node rac1
Instance test2 is running on node rac2
[root@rac1 bin]#

on test1 instance:

SQL> archive log list;


Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 14
Next log sequence to archive 16
Current log sequence 16
SQL>

on test2 instance:

SQL> archive log list;


Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 3
Next log sequence to archive 5
Current log sequence 5
SQL>

wow, both are in archive log mode

Convert single instance to RAC instance Database


converting a single instance database to rac instance database:

Oracle provides following methods to convert a single instance database to RAC:

Grid Control
DBCA
Manual
RCONFIG(from 10gR2)

Here is an example of converting a single-instance database that uses ASM files to a RAC database using rconfig.
If your single-instance database uses a normal file system instead of ASM, first convert the non-ASM files to ASM
using the steps shown at the link below, and then follow this procedure.

http://oracleinstance.blogspot.com/2009/12/migrate-from-database-file-system-to.html

After converting the non-ASM files to ASM,

go to $ORACLE_HOME/assistants/rconfig/sampleXMLs,
where you can find the ConvertToRAC.xml file.

It is recommended to copy the file to another location, for example:

cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml /u01/convertdb.xml

Following illustrate how to convert single instance database to RAC using the RCONFIG tool:

The Convert verify option in the ConvertToRAC.xml file has three options:

Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to RAC
conversion have been met before it starts conversion
Convert verify="NO": rconfig does not perform prerequisite checks, and starts conversion
Convert verify="ONLY" : rconfig only performs prerequisite checks; it does not start conversion after completing
prerequisite checks

Modify the convertdb.xml file according to your environment. The following is a sample ConvertToRAC.xml file, edited as shown;

here the database to convert is named "test".


-----------------------------------------------------------------------------------------------------------------

<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="YES">
      <!-- Specify current OracleHome of non-rac database for SourceDBHome (your source database home) -->
      <n:SourceDBHome>/u01/app/oracle/product/10g/db_1</n:SourceDBHome>
      <!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome (your target database home) -->
      <n:TargetDBHome>/u01/app/oracle/product/10g/db_1</n:TargetDBHome>
      <!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion (your database name) -->
      <n:SourceDBInfo SID="test">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage (your ASM instance name and password) -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <!-- Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist (your rac1 and rac2 hostnames) -->
      <n:NodeList>
        <n:Node name="rac1"/>
        <n:Node name="rac2"/>
      </n:NodeList>
      <!-- Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
      <n:InstancePrefix>test</n:InstancePrefix>
      <!-- Specify port for the listener to be configured for rac database. If port="", a listener existing on localhost will be used for rac database. The listener will be extended to all nodes in the nodelist. -->
      <n:Listener port="1551"/>
      <!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
      <n:SharedStorage type="ASM">
        <!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
        <n:TargetDatabaseArea></n:TargetDatabaseArea>
        <!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
        <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>

-----------------------------------------------------------------------------------------------------------------

Once you modify the convert.xml file according to your environment, use the following command to run the tool:

go to $ORACLE_HOME/bin and run


./rconfig /u01/convertdb.xml

Finally, change the SID in /etc/oratab to test1 on the rac1 machine and to test2 on the rac2 machine.
That's it.
Then check:
srvctl config database -d test
srvctl status database -d test
crs_stat -t

hope it will help you.


RAC FILE SYSTEM OPTIONS ( BASIC CONCEPT BEFORE LEARNING RAC)
It is important to know the RAC filesystem options.
RAC Filesystem Options
Submitted by Natalka Roshak on orafaq website.

DBAs wanting to create a 10g Real Applications Cluster face many configuration decisions. One of the more
potentially confusing decisions involves the choice of filesystems. Gone are the days when DBAs simply had to
choose between "raw" and "cooked". DBAs setting up a 10g RAC can still choose raw devices, but they also have
several filesystem options, and these options vary considerably from platform to platform. Further, some storage
options cannot be used for all the files in the RAC setup. This article gives an overview of the RAC storage options
available.
RAC Review

Let's begin by reviewing the structure of a Real Applications Cluster. Physically, a RAC consists of several nodes
(servers), connected to each other by a private interconnect. The database files are kept on a shared storage
subsystem, where they're accessible to all nodes. And each node has a public network connection.

In terms of software and configuration, the RAC has three basic components: cluster software and/or Cluster Ready
Services, database software, and a method of managing the shared storage subsystem.
The cluster software can be vendor-supplied or Oracle-supplied, depending on platform. Cluster Ready Services, or
CRS, is a new feature in 10g. Where vendor clusterware is used, CRS interacts with the vendor clusterware to
coordinate cluster membership information; without vendor clusterware, CRS, which is also known as Oracle OSD
Clusterware, provides complete cluster management.
The database software is Oracle 10g with the RAC option, of course.
Finally, the shared storage subsystem can be managed by one of the following options: raw devices; Automatic
Storage Management (ASM); Vendor-supplied cluster file system (CFS), Oracle Cluster File System (OCFS), or
vendor-supplied logical volume manager (LVM); or Networked File System (NFS) on a certified Network Attached
Storage (NAS) device.
Storage Options

Let me clarify the foregoing alphabet soup with a table:


Table 1. Storage options for the shared storage subsystem.

Storage option    Description
--------------    -------------------------------------------------------
Raw               Raw devices, no filesystem
ASM               Automatic Storage Management
CFS               Cluster File System
OCFS              Oracle Cluster File System
LVM               Logical Volume Manager
NFS               Network File System (must be on certified NAS device)

Before I delve into each of these storage options, a word about file types. A regular single-instance database has
three basic types of files: database software and dump files; datafiles, spfile, control files and log files, often
referred to as "database files"; and it may have recovery files, if using RMAN. A RAC database has an additional
type of file referred to as "CRS files". These consist of the Oracle Cluster Registry (OCR) and the voting disk.

Not all of these files have to be on the shared storage subsystem. The database files and CRS files must be
accessible to all instances, so must be on the shared storage subsystem. The database software can be on the
shared subsystem and shared between nodes; or each node can have its own ORACLE_HOME. The flash recovery
area must be shared by all instances, if used.

Some storage options can't handle all of these file types. To take an obvious example, the database software and
dump files can't be stored on raw devices. This isn't important for the dump files, but it does mean that choosing
raw devices precludes having a shared ORACLE_HOME on the shared storage device.

And to further complicate the picture, no OS platform is certified for all of the shared storage options. For example,
only Linux and SPARC Solaris are supported with NFS, and the NFS must be on a certified NAS device. The
following table spells out which platforms and file types can use each storage option.
Table 2. Platforms and file types able to use each storage option

Storage option        Platforms                            File types supported       File types not supported
--------------------  -----------------------------------  -------------------------  ------------------------------
Raw                   All platforms                        Database, CRS              Software/Dump files, Recovery
ASM                   All platforms                        Database, Recovery         CRS, Software/Dump
Certified Vendor CFS  AIX, HP Tru64 UNIX, SPARC Solaris    All                        None
LVM                   HP-UX, HP Tru64 UNIX, SPARC Solaris  All                        None
OCFS                  Windows, Linux                       Database, CRS, Recovery    Software/Dump files
NFS                   Linux, SPARC Solaris                 All                        None

(Note: Mike Ault and Madhu Tumma have summarized the storage choices by platform in more detail in this
excerpt from their recent book, Oracle 10g Grid Computing with RAC, which I used as one source for this table.)

Now that we have an idea of where we can use these storage options, let's examine each option in a little more
detail. We'll tackle them in order of Oracle's recommendation, starting with Oracle's least preferred, raw devices,
and finishing up with Oracle's top recommendation, ASM.
Raw devices

Raw devices need little explanation. As with single-instance Oracle, each tablespace requires a partition. You will
also need to store your software and dump files elsewhere.

Pros: You won't need to install any vendor or Oracle-supplied clusterware or additional drivers.
Cons: You won't be able to have a shared Oracle home, and if you want to configure a flash recovery area, you'll
need to choose another option for it. Manageability is an issue. Further, raw devices are a terrible choice if you
expect to resize or add tablespaces frequently, as this involves resizing or adding a partition.
NFS

NFS also requires little explanation. It must be used with a certified NAS device; Oracle has certified a number of
NAS filers with its products, including products from EMC, HP, NetApp and others. NFS on NAS can be a cost-effective alternative to a SAN for Linux and Solaris, especially if no SAN hardware is already installed.

Pros: Ease of use and relatively low cost.


Cons: Not suitable for all deployments. Analysts recommend SANs over NAS for large-scale transaction-intensive
applications, although there's disagreement on how big is too big for NAS.
Vendor CFS and LVMs

If you're considering a vendor CFS or LVM, you'll need to check the 10g Real Application Clusters Installation Guide
for your platform and the Certify pages on MetaLink. A discussion of all the certified cluster file systems is beyond
the scope of this article. Pros and cons depend on the specific solution, but some general observations can be
made:

Pros: You can store all types of files associated with the instance on the CFS / logical volumes.
Cons: Depends on CFS / LVM. And you won't be enjoying the manageability advantage of ASM.
OCFS

OCFS is the Oracle-supplied CFS for Linux and Windows. This is the only CFS that can be used with these
platforms. The current version of OCFS was designed specifically to store RAC files, and is not a full-featured CFS.
You can store database, CRS and recovery files on it, but it doesn't fully support generic filesystem operations.
Thus, for example, you cannot install a shared ORACLE_HOME on an OCFS device.

The next version of OCFS, OCFS2, is currently out in beta version and will support generic filesystem operations,
including a shared ORACLE_HOME.

Pros: Provides a CFS option for Linux and Windows.


Cons: Cannot store regular filesystem files such as Oracle software. Easier to manage than raw devices, but not as
manageable as NFS or ASM.
ASM

Oracle recommends ASM for 10g RAC deployments, although CRS files cannot be stored on ASM. In fact, RAC
installations using Oracle Database Standard Edition must use ASM.

ASM is a little bit like a logical volume manager and provides many of the benefits of LVMs. But it also provides
benefits LVMs don't: file-level striping/mirroring, and ease of manageability. Instead of running LVM software, you
run an ASM instance, a new type of "instance" that largely consists of processes and memory and stores its
information in the ASM disks it's managing.

Pros: File-level striping and mirroring; ease of manageability through Oracle syntax and OEM.
Cons: ASM files can only be managed through an Oracle application such as RMAN. This can be a weakness if you
prefer third-party backup software or simple backup scripts. Cannot store CRS files or database software.
Convert RAC instance to SINGLE instance DATABASE
converting RAC instance to SINGLE instance database
----------------------------------------------------
In this article, see how a RAC database is converted into a single-instance database.
step 1: stop instance 2 from any node
step 2: change the parameter cluster_database
step 3: [optional] remove the instance information from clusterware

[root@rac1 bin]# ./srvctl stop instance -i test2 -d test


[root@rac1 bin]# ./srvctl remove instance -i test2 -d test
Remove instance test2 from the database test? (y/[n]) y
[root@rac1 bin]#

[oracle@rac1 ~]$ . oraenv


ORACLE_SID = [oracle] ? test1
The Oracle base for ORACLE_HOME=/u01/new/oracle/product/11.1.0/db_1 is /u01/new/oracle
[oracle@rac1 ~]$ sqlplus '/as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Wed Dec 9 11:10:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production


With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> startup
ORA-01081: cannot start already-running ORACLE - shut it down first
SQL> show parameter cluster_database

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
SQL>
SQL> alter system set cluster_database=false scope=spfile;

System altered.

SQL> shu immediate


Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 481267712 bytes


Fixed Size 1300716 bytes
Variable Size 167773972 bytes
Database Buffers 306184192 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.

SQL> show parameter cluster_database

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
SQL>

removing the database information from clusterware


--------------------------------------------------

[root@rac1 bin]# ./srvctl status database -d test


Instance test1 is not running on node rac1
[root@rac1 bin]# ./srvctl status database -d test
Instance test1 is not running on node rac1
[root@rac1 bin]# ./srvctl stop instance -i test1 -d test
[root@rac1 bin]# ./srvctl remove instance -i test1 -d test
Remove instance test1 from the database test? (y/[n]) y
[root@rac1 bin]# ./srvctl stop database -d test
[root@rac1 bin]#
migrate from database file system to ASM
To migrate the database files from a regular file system disk to ASM, the steps are as follows:
1.configure flash recovery area.
2.Migrate datafiles to ASM.
3.Control file to ASM.
4.Create Temporary tablespace.
5.Migrate Redo logfiles
6.Migrate spfile to ASM.

step 1:Configure flash recovery area.


SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> alter database disable block change tracking;

Database altered.

SQL> alter system set db_recovery_file_dest_size=500m;

System altered.

SQL> alter system set db_recovery_file_dest='+RECOVERYDEST';

System altered

step 2 and 3: Migrate data files and control file


to ASM.

use RMAN to migrate the data files to ASM disk groups.


All data files will be migrated to the newly created disk group, DATA

SQL> alter system set db_create_file_dest='+DATA';

System altered.

SQL> alter system set control_files='+DATA/ctf1.dbf' scope=spfile;

System altered.

SQL> shu immediate

[oracle@rac1 bin]$ ./rman target /

RMAN> startup nomount

Oracle instance started

RMAN> restore controlfile from '/u01/new/oracle/oradata/mydb/control01.ctl';

Starting restore at 08-DEC-09

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=146 device type=DISK

channel ORA_DISK_1: copied control file copy

output file name=+DATA/ctf1.dbf

Finished restore at 08-DEC-09

RMAN> alter database mount;

database mounted

released channel: ORA_DISK_1

RMAN> backup as copy database format '+DATA';


Starting backup at 08-DEC-09

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=146 device type=DISK

channel ORA_DISK_1: starting datafile copy

input datafile file number=00001 name=/u01/new/oracle/oradata/mydb/system01.dbf

output file name=+DATA/mydb/datafile/system.257.705063763 tag=TAG20091208T110241 RECID=1


STAMP=705064274

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:08:39

channel ORA_DISK_1: starting datafile copy

input datafile file number=00002 name=/u01/new/oracle/oradata/mydb/sysaux01.dbf

output file name=+DATA/mydb/datafile/sysaux.258.705064283 tag=TAG20091208T110241 RECID=2


STAMP=705064812

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:08:56

channel ORA_DISK_1: starting datafile copy

input datafile file number=00003 name=/u01/new/oracle/oradata/mydb/undotbs01.dbf

output file name=+DATA/mydb/datafile/undotbs1.259.705064821 tag=TAG20091208T110241 RECID=3


STAMP=705064897

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:25

channel ORA_DISK_1: starting datafile copy

copying current control file

output file name=+DATA/mydb/controlfile/backup.260.705064907 tag=TAG20091208T110241 RECID=4


STAMP=705064912

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07

channel ORA_DISK_1: starting datafile copy

input datafile file number=00004 name=/u01/new/oracle/oradata/mydb/users01.dbf

output file name=+DATA/mydb/datafile/users.261.705064915 tag=TAG20091208T110241 RECID=5


STAMP=705064915

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03

channel ORA_DISK_1: starting full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

including current SPFILE in backup set

channel ORA_DISK_1: starting piece 1 at 08-DEC-09

channel ORA_DISK_1: finished piece 1 at 08-DEC-09

piece handle=+DATA/mydb/backupset/2009_12_08/nnsnf0_tag20091208t110241_0.262.705064919
tag=TAG20091208T110241 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 08-DEC-09

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/mydb/datafile/system.257.705063763"

datafile 2 switched to datafile copy "+DATA/mydb/datafile/sysaux.258.705064283"

datafile 3 switched to datafile copy "+DATA/mydb/datafile/undotbs1.259.705064821"

datafile 4 switched to datafile copy "+DATA/mydb/datafile/users.261.705064915"

RMAN> alter database open;


database opened

RMAN> exit

Recovery Manager complete.

SQL> conn sys/oracle as sysdba

Connected.

SQL> select tablespace_name,file_name from dba_data_files;

TABLESPACE_NAME

FILE_NAME

------------------------------ ---------------------------------------------

USERS

+DATA/mydb/datafile/users.261.705064915

UNDOTBS1

+DATA/mydb/datafile/undotbs1.259.705064821

SYSAUX

+DATA/mydb/datafile/sysaux.258.705064283

SYSTEM

+DATA/mydb/datafile/system.257.705063763

SQL> select name from v$controlfile;

NAME
--------------------------------------------
+DATA/ctf1.dbf

step 4:Migrate temp tablespace to ASM.

SQL> alter tablespace temp add tempfile size 100m;

Tablespace altered.

SQL> select file_name from dba_temp_files;

FILE_NAME

---------------------------------------------

+DATA/mydb/tempfile/temp.263.705065455

otherwise,
Create temporary tablespace in ASM disk group.

SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '+diskgroup1';

SQL> alter database default temporary tablespace temp1;

Database altered.

step 5:Migrate redo logs to ASM.

SQL> select member,group# from v$logfile;

MEMBER

GROUP#

-------------------------------------------------- ----------

/u01/new/oracle/oradata/mydb/redo03.log

/u01/new/oracle/oradata/mydb/redo02.log

/u01/new/oracle/oradata/mydb/redo01.log

SQL> alter database add logfile group 4 size 5m;

Database altered.

SQL> alter database add logfile group 5 size 5m;

Database altered.

SQL> alter database add logfile group 6 size 5m;

Database altered.

SQL> select member,group# from v$logfile;

MEMBER

GROUP#

-------------------------------------------------- ----------

/u01/new/oracle/oradata/mydb/redo03.log

/u01/new/oracle/oradata/mydb/redo02.log

/u01/new/oracle/oradata/mydb/redo01.log

+DATA/mydb/onlinelog/group_4.264.705065691

+DATA/mydb/onlinelog/group_5.265.705065703

+DATA/mydb/onlinelog/group_6.266.705065719

SQL> alter system switch logfile;

System altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter database drop logfile group 4;

Database altered.

SQL> alter database drop logfile group 1;

Database altered.

Add additional control file.

If an additional control file is required for redundancy,

you can create it in ASM as you would on any other filesystem.


SQL> connect sys/sys@prod1 as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.

SQL> alter database backup controlfile to '+DATA/cf2.dbf';

Database altered.

SQL> alter system set control_files='+DATA/cf1.dbf','+DATA/cf2.dbf' scope=spfile;

System altered.

SQL> shutdown immediate;


ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

SQL> select name from v$controlfile;

NAME
---------------------------------------
+DATA/cf1.dbf
+DATA/cf2.dbf

step 6:Migrate spfile to ASM:

Create a copy of the SPFILE in the ASM disk group.


In this example, the SPFILE for the migrated database will be stored as +DISK/spfile.

If the database is using an SPFILE already, then run these commands:

run {
BACKUP AS BACKUPSET SPFILE;
RESTORE SPFILE TO "+DISK/spfile";
}

If you are not using an SPFILE, then use CREATE SPFILE

from SQL*Plus to create the new SPFILE in ASM.


For example, if your parameter file is called /private/init.ora,
use the following command:

SQL> create spfile='+DISK/spfile' from pfile='/private/init.ora';

After successfully migrating all the data files


over to ASM, the old data files are no longer
needed and can be removed. Your single-instance
database is now running on ASM!
