
9i: How To Setup Oracle Streams Replication (Doc ID 224255.1)

APPLIES TO:
Oracle Server - Enterprise Edition - Version 9.2.0.1 and later
Information in this document applies to any platform.

PURPOSE
Oracle Streams enables the sharing of data and events in a data stream, either
within a database or from one database to another.
This article is intended to assist Replication DBAs in setting up and
configuring Oracle Streams Replication.
SCOPE
To be used by Oracle support analysts and replication DBAs to setup replication
using Streams in Oracle 9.2.x.
This article discusses the steps to setup Streams Replication from one Oracle
database to another.
The Global Database Name of the Source Database is V920.IDC.ORACLE.COM
The Global Database Name of the Destination Database is TEST920.IDC.ORACLE.COM
In the example setup, DEPT table belonging to SCOTT schema has been used for
demonstration purpose.
DETAILS

INTRODUCTION
------------
Oracle 9.2 has introduced a more flexible and efficient way of implementing
replication using streams.
In a nutshell, replication using streams is implemented in the following way.
1) A background capture process is configured to capture changes made to
tables,schemas, or the entire database. The capture process captures
changes from the redo log and formats each captured change into a logical
change record (LCR).
The capture process uses logminer to mine the redo/archive logs to format LCRs.
2) The capture process enqueues LCR events into a queue that is specified.
3) This queue is scheduled to Propagate events from one queue to another in a
different database.
4) A background apply process dequeues the events and applies them at the
destination database.
STREAMS SETUP


-------------
The Setup is divided into the following 4 sections:


Section 1 : Initialization Parameters Relevant to Streams
Section 2 : Steps to be carried out at the Destination Database
Section 3 : Steps to be carried out at the Source Database
Section 4 : Export, import and instantiation of tables from
Source to Destination Database

Section 1 - Initialization Parameters Relevant to Streams


----------------------------------------------------------
1.1 COMPATIBLE :
To use Streams, Compatible must be set to 9.2.0.
1.2 GLOBAL_NAMES :
This parameter must be set to TRUE at each database if you want to
use Streams to share information between databases. Streams uses
the GLOBAL_NAME of the database to identify changes from or to a
particular database. Do NOT modify the GLOBAL NAME of a Streams
database after it has been configured. For example, the
system-generated rules for capture, propagation, and apply typically
specify the global name of the source database. In addition, changes
captured by the Streams capture process automatically include the
current global name of the source database. If the global name must
be modified on the database, do it at a time when NO user changes
are possible on the database so that the Streams configuration can be
recreated.
1.3 JOB_QUEUE_PROCESSES :
This parameter specifies the number of processes that can
handle requests created by DBMS_JOB. Ensure that it is set to 2 or higher.
1.4 AQ_TM_PROCESSES :
Setting the parameter to 1 or more starts the specified number of
queue monitor processes.
1.5 LOGMNR_MAX_PERSISTENT_SESSIONS :
This parameter specifies the maximum number of persistent LOGMINER mining
sessions. Streams Capture Process uses LOGMINER to mine the redo logs.
If there is a need to run multiple Streams capture processes on a single
database, then this parameter needs to be set equal to or higher than the
number of planned capture processes.
1.6 LOG_PARALLELISM :
This parameter must be set to 1 at each database that captures events.
1.7 PARALLEL_MAX_SERVERS :
Each capture process and apply process may use multiple parallel execution
servers. The apply process by default needs two parallel servers.
So this parameter needs to set to at least 2 even for a single non-parallel
apply process.
Specify a value for this parameter to ensure that there are enough
parallel execution servers.
1.8 SHARED_POOL_SIZE :
Each capture process needs 10MB of shared pool space, but Streams is
limited to using a maximum of 10% of the shared pool.
So SHARED_POOL_SIZE has to be set to at least 100MB.
If you wish to run multiple capture processes, then this parameter needs
to be set to an even higher value.
1.9 OPEN_LINKS :
Specifies the maximum number of concurrent open connections to remote
databases in one session. Ensure that it is set to 4 or higher.
1.10 The databases involved in Streams must be running in ARCHIVELOG mode.
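
The following is only an illustrative sketch of how these parameters could be
set from SQL*Plus, assuming an spfile is in use; the values shown are examples
and must be sized for your environment, and the static parameters take effect
only after a restart:

ALTER SYSTEM SET GLOBAL_NAMES = TRUE;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 2;
ALTER SYSTEM SET AQ_TM_PROCESSES = 1;
ALTER SYSTEM SET PARALLEL_MAX_SERVERS = 2;
ALTER SYSTEM SET LOGMNR_MAX_PERSISTENT_SESSIONS = 1 SCOPE=SPFILE;
ALTER SYSTEM SET LOG_PARALLELISM = 1 SCOPE=SPFILE;
ALTER SYSTEM SET SHARED_POOL_SIZE = 100M SCOPE=SPFILE;
ALTER SYSTEM SET OPEN_LINKS = 4 SCOPE=SPFILE;

-- Confirm that the database runs in ARCHIVELOG mode:
SELECT LOG_MODE FROM V$DATABASE;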
Section 2 - Steps to be carried out at the Destination Database (TEST920.IDC.ORACLE.COM)
----------------------------------------------------------------------------------------
2.1 Create Streams Administrator :
connect SYS/password as SYSDBA
create user STRMADMIN identified by STRMADMIN;

2.2 Grant the necessary privileges to the Streams Administrator :

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;


GRANT SELECT ANY DICTIONARY TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;

BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
/
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
/
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
/
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
/
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
/
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
/

2.3 Create streams queue :

connect STRMADMIN/STRMADMIN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
/
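
As an optional sanity check (not part of the Streams setup itself), the queue
just created can be confirmed with a query such as:

SELECT NAME, QUEUE_TABLE, ENQUEUE_ENABLED, DEQUEUE_ENABLED
FROM DBA_QUEUES
WHERE OWNER = 'STRMADMIN';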

2.4 Add apply rules for the table at the destination database :

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.DEPT',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'V920.IDC.ORACLE.COM');
END;
/

2.5 Specify an 'APPLY USER' at the destination database:


This is the user who would apply all DML statements and DDL statements.
The user specified in the APPLY_USER parameter must have the necessary
privileges to perform DML and DDL changes on the apply objects.

BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY',
apply_user => 'SCOTT');
END;
/

2.6 If you do not wish the apply process to abort for every error that it
encounters, you can set the parameter below.
The default value is 'Y', which means that the apply process aborts on
any error.
When set to 'N', the apply process does not abort on errors it
encounters, but the error details are logged in DBA_APPLY_ERROR.

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STRMADMIN_APPLY',
parameter => 'DISABLE_ON_ERROR',
value => 'N' );
END;
/
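
With DISABLE_ON_ERROR set to 'N', errored transactions can later be reviewed
with a query along the following lines (column list abbreviated for
illustration):

SELECT APPLY_NAME, SOURCE_DATABASE, LOCAL_TRANSACTION_ID, ERROR_MESSAGE
FROM DBA_APPLY_ERROR;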

2.7 Start the Apply process :


BEGIN
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
END;
/
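
To confirm that the apply process is enabled, a query such as the following
can be used:

SELECT APPLY_NAME, STATUS FROM DBA_APPLY;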

Section 3 - Steps to be carried out at the Source Database (V920.IDC.ORACLE.COM)


--------------------------------------------------------------------------------------------
3.1 Move LogMiner tables from SYSTEM tablespace:
By default, all LogMiner tables are created in the SYSTEM tablespace.
It is a good practice to create an alternate tablespace for the LogMiner
tables.

CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
MAXSIZE UNLIMITED;
BEGIN
DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
/

3.2 Turn on supplemental logging for DEPT table :


connect SYS/password as SYSDBA
ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk
(deptno) ALWAYS;
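
The supplemental log group created above can be verified, for example, with:

SELECT LOG_GROUP_NAME, TABLE_NAME, ALWAYS
FROM DBA_LOG_GROUPS
WHERE OWNER = 'SCOTT' AND TABLE_NAME = 'DEPT';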

3.3 Create Streams Administrator and Grant the necessary privileges :


Repeat steps 2.1 and 2.2 for creating the user and granting the required
privileges.
3.4 Create a database link to the destination database :

connect STRMADMIN/STRMADMIN
CREATE DATABASE LINK TEST920.IDC.ORACLE.COM connect to
STRMADMIN identified by STRMADMIN using 'TEST920.IDC.ORACLE.COM';

Test that the database link is working properly by querying against the
destination database.
Eg : select * from global_name@TEST920.IDC.ORACLE.COM;
3.5 Create streams queue:

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table =>'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
/

3.6 Add capture rules for the table at the source database:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.DEPT',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'V920.IDC.ORACLE.COM');
END;
/

3.7 Add propagation rules for the table at the source database.
This step will also create a propagation job to the destination database.
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.DEPT',
streams_name => 'STRMADMIN_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@TEST920.IDC.ORACLE.COM',
include_dml => true,
include_ddl => true,
source_database => 'V920.IDC.ORACLE.COM');

END;
/
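
The propagation schedule created by this call can be monitored, for example,
through DBA_QUEUE_SCHEDULES (only a few of the available columns are shown):

SELECT QNAME, DESTINATION, SCHEDULE_DISABLED, FAILURES, LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES;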

Section 4 - Export, import and instantiation of tables from Source to Destination Database
-------------------------------------------------------------------------------------------
4.1 If the objects are not present in the destination database, perform an
export of the objects from the source database and import them into the
destination database
Export from the Source Database:
Specify the OBJECT_CONSISTENT=Y clause on the export command.
By doing this, an export is performed that is consistent for each
individual object at a particular system change number (SCN).
exp USERID=SYSTEM@V920.IDC.ORACLE.COM TABLES=SCOTT.DEPT FILE=tables.dmp
GRANTS=Y ROWS=Y LOG=exportTables.log OBJECT_CONSISTENT=Y
INDEXES=Y STATISTICS = NONE

Import into the Destination Database:


Specify STREAMS_INSTANTIATION=Y clause in the import command.
By doing this, the streams metadata is updated with the appropriate
information in the destination database corresponding to the SCN that
is recorded in the export file.
imp USERID=SYSTEM@TEST920.IDC.ORACLE.COM FULL=Y CONSTRAINTS=Y
FILE=tables.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importTables.log
STREAMS_INSTANTIATION=Y

4.2 If the objects are already present in the destination database, check that they are also
consistent at the data level; otherwise the apply process may fail with error ORA-1403 when
applying DML to an inconsistent row. There are 2 ways of instantiating the objects
at the destination site.
1. By means of Metadata-only export/import :
Export from the Source Database by specifying ROWS=N
exp USERID=SYSTEM@V920.IDC.ORACLE.COM TABLES=SCOTT.DEPT FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
Import into the destination database using IGNORE=Y
imp USERID=SYSTEM@TEST920.IDC.ORACLE.COM FULL=Y FILE=tables.dmp IGNORE=Y
LOG=importTables.log STREAMS_INSTANTIATION=Y
2. By manually instantiating the objects

Get the Instantiation SCN at the source database:


connect STRMADMIN/STRMADMIN@source
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
/

Instantiate the objects at the destination database with this SCN value.
The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table
are to be applied by the apply process.
If the commit SCN of an LCR from the source database is less than or
equal to this instantiation SCN, then the apply process discards the LCR.
Otherwise, the apply process applies the LCR.

connect STRMADMIN/STRMADMIN@destination
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.DEPT',
source_database_name => 'V920.IDC.ORACLE.COM',
instantiation_scn => &iscn);
END;
/
Enter value for iscn:
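
The recorded instantiation SCN can be cross-checked at the destination, for
instance with:

SELECT SOURCE_DATABASE, SOURCE_OBJECT_OWNER, SOURCE_OBJECT_NAME, INSTANTIATION_SCN
FROM DBA_APPLY_INSTANTIATED_OBJECTS;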

Finally start the Capture process:

connect STRMADMIN/STRMADMIN@source
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
END;
/

The setup is now ready to replicate data between the two databases using
Oracle Streams.
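
As a final end-to-end sanity check (illustrative only; SCOTT/TIGER is the
default password for the demo schema and may differ in your environment), a
small change made at the source should appear at the destination once the
capture, propagation and apply processes have processed it:

-- At the source, confirm the capture process is enabled:
connect STRMADMIN/STRMADMIN@source
SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

-- Make a test change at the source:
connect SCOTT/TIGER@source
INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (50, 'STREAMS', 'TEST');
COMMIT;

-- After a short delay, the row should be visible at the destination:
connect SCOTT/TIGER@destination
SELECT * FROM DEPT WHERE DEPTNO = 50;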
