Initial load of the data can be done separately to perform the initial sync-up of data
between the source and destination. This can be done using Data Pump or any other
option.
Key Characteristics
Trail files contain data changes in a transportable, platform-independent format
and provide continuous data capture from the source, even
if the target system is unavailable. They can be used for recovering and synchronizing
the data after the system is back online.
Use Oracle GoldenGate Release 12.1.2 or later to take advantage of the increased
functionality and enhanced performance features. With Oracle GoldenGate Release
12.1.2, Replicat can operate in integrated mode for improved scalability within
Oracle target environments. The apply processing functionality within the Oracle
database is leveraged to automatically handle referential integrity and data
definition language (DDL) operations so that they are applied in the correct order.
Extract can also be used in integrated capture mode with an Oracle database,
introduced with Oracle GoldenGate Release 11.2.1. Extract integrates with an Oracle
database log mining server to receive change data from the database in the form of
logical change records (LCRs). Extract can be configured to capture from a local or
downstream mining database.
Database Configuration
This section contains the configuration best practices for the source and target
databases used in an Oracle GoldenGate replicated environment. It is assumed that
the Extract and Data Pump processes are both running on the source environment and
one or more Replicat processes are running on the target database. In an active-
active bi-directional Oracle GoldenGate environment, or when the target database
may be converted to a source database,
combine both target and source database configuration steps.
Source Database
The source database should be configured with the following:
1. Run the database in ARCHIVELOG mode
Oracle GoldenGate Extract mines the Oracle redo for data that can be replicated.
The database must be running in ARCHIVELOG mode. When using Extract in integrated
capture mode, the LogMiner server can seamlessly mine redo from the log buffer,
online redo log files, and archived log files.
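For example, the log mode can be checked and, if necessary, changed with commands similar to the following (the database must be mounted but not open to switch to ARCHIVELOG mode):
SQL> SELECT LOG_MODE FROM V$DATABASE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;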
2. Enable force logging mode
In order to ensure that the required redo information
is contained in the Oracle redo logs for segments being replicated, it is important
to override any NOLOGGING operations which would prevent the required redo
information from being generated. If you are replicating the entire database,
enable database force logging mode. Check the existing force logging status by
executing the following command:
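For example, from SQL*Plus:
SQL> SELECT FORCE_LOGGING FROM V$DATABASE;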
If the database is currently not in force logging mode, enable force logging by
executing the following commands:
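For example (a log switch is commonly issued afterwards so the change takes effect from a fresh redo log):
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER SYSTEM SWITCH LOGFILE;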
There are cases when you do not want to replicate some application data that is
loaded with NOLOGGING operations. In those cases, isolate those tables and indexes
into separate tablespaces, and then enable and disable logging according to
your requirements. You must first disable database force logging mode by executing
the following commands:
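A sketch of this approach, assuming a tablespace dedicated to the replicated objects (the tablespace name is a placeholder):
SQL> ALTER DATABASE NO FORCE LOGGING;
SQL> ALTER TABLESPACE repl_data FORCE LOGGING;
Force logging is then enabled only on the tablespaces that contain replicated objects, while the tablespaces that receive NOLOGGING loads are left alone.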
4. Configure the Streams pool
When using Extract in integrated capture mode, an area of Oracle memory called the
Streams pool must be configured in the System Global Area (SGA) of the database. If
you are using Extract in classic mode (non-integrated capture mode), the Streams
pool is not necessary.
The size requirement of the Streams pool for Extract in integrated capture mode is
based on two integrated capture mode parameters:
•MAX_SGA_SIZE – controls the amount of shared memory used by the LogMiner server.
The default value is 1GB and, in most cases, this is adequate. This is not the same
as the database initialization parameter SGA_MAX_SIZE. Monitor Automatic Workload
Repository (AWR) reports during peak times; if you see a high number of background
process waits on ‘LogMiner preparer: memory’ or ‘LogMiner reader: buffer’ with high
Avg wait (ms) times (>5ms) and high % bg time (>25%), increasing the MAX_SGA_SIZE
parameter by 25% can improve Extract performance.
•PARALLELISM – controls the number of LogMiner preparer (LMP) server processes used
by the LogMiner server. The default value for Oracle Database Enterprise Edition is
2 and is adequate for most workloads. Oracle Database Standard Edition defaults to 1
and cannot be increased. To identify when to increase the parallelism parameter, use
the Oracle Streams Performance Advisor (SPADV) and evaluate whether all LogMiner
preparer (LMP) processes are nearing 100% CPU.
SQL> @$ORACLE_HOME/rdbms/admin/utlspadv.sql
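The Streams pool itself is sized with the STREAMS_POOL_SIZE initialization parameter. As a sketch, assuming a single integrated Extract with the default 1GB MAX_SGA_SIZE plus roughly 25% headroom (verify the sizing against your own workload and the current MAA recommendations):
SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=1280M SCOPE=BOTH;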
Target Database
The target database should be configured with the following:
1. Run the database in ARCHIVELOG mode
Although Oracle GoldenGate does not require the target database to run in
ARCHIVELOG mode, Oracle recommends doing so for high availability and
recoverability. If the target database is configured to fail over or switch over to
a source database, ARCHIVELOG mode is required. The target database should also be
included in a backup strategy that matches the recovery options on the source
database. In the event of a failure on the source environment, if an incomplete
recovery is carried out, the target database also needs recovery to make sure the
replicated objects are not from a point in time ahead of the source.
2. Enable force logging
When replicating bi-directionally, or if the source and target databases need to
switch roles, force logging should be enabled to prevent missing redo data required
by Oracle GoldenGate Extract. Refer to the previous section titled “Source Database”
for instructions on how to enable force logging mode.
GoldenGate supports the following topologies (more details can be found in the
Oracle GoldenGate documentation):
Unidirectional
Bidirectional
Peer-to-peer
Broadcast
Consolidation
Cascading
GoldenGate supports the following databases:
Oracle Database
TimesTen
MySQL
IBM DB2
Microsoft SQL Server
Informix
Teradata
Sybase
Enscribe
SQL/MX
Goldengate supports both DML and DDL Replication from the source to target.
What are the supplemental logging pre-requisites?
When a transaction is committed on the source database, only the new data is written
to the redo log. However, for these transactions to be applied on the destination
database, the before-image key values are required to identify the affected rows.
This data is also placed in the trail file and used to identify the rows on the
destination; using the key values, the transactions are executed against them.
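The prerequisite itself is to enable supplemental logging so that these before-image key values are written to the redo. A typical sequence is shown below; the ggadmin user and the scott schema/table are placeholders:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
GGSCI> DBLOGIN USERID ggadmin, PASSWORD <password>
GGSCI> ADD TRANDATA scott.emp
GGSCI> ADD SCHEMATRANDATA scott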
List important considerations for bi-directional replication?
The customer should consider the following points in an active-active replication
environment: sequences, lag, conflict detection and resolution (CDR), and packaged
applications. These are discussed in more detail later in this document.
Can the Oracle GoldenGate binaries be installed on DBFS?
No, OGG binaries are not supported on DBFS. You can, however, store parameter files,
data files (trail files), and checkpoint files on DBFS.
What is the default location of the GLOBALS file?
The GLOBALS file is located in the Oracle GoldenGate installation directory (OGG HOME).
Where can filtering of data for a column be configured?
Filtering of the columns of a table can be set at the Extract, Pump or Replicat
level.
Is it a requirement to configure a PUMP extract process in OGG replication?
A PUMP Extract is optional, but it is highly recommended to use one to safeguard
against network failures. Normally it is configured when you are setting up OGG
replication across the network.
What are the differences between the Classic and integrated Capture?
Classic Capture:
The Classic Capture mode is the traditional Extract process that accesses the
database redo logs (optionally archive logs) to capture the DML changes occurring
on the objects specified in the parameter files.
At the OS level, the GoldenGate user must be a part of the same database group
which owns the database redo logs.
This capture mode is available for other RDBMS as well.
There are some data types that are not supported in Classic Capture mode.
Classic capture can’t read data from the compressed tables/tablespaces.
Integrated Capture (IC):
In the Integrated Capture mode, GoldenGate works directly with the database log
mining server to receive the data changes in the form of logical change records
(LCRs).
IC mode does not require any special setup for the databases using ASM,
transparent data encryption, or Oracle RAC.
This feature is only available for Oracle databases version 11.2.0.3 or higher.
It also supports various object types which were previously not supported by
Classic Capture.
This Capture mode supports extracting data from source databases using
compression.
Integrated Capture can be configured in an online or downstream mode.
List the minimum parameters that can be used to create the extract process?
The following are the minimum required parameters which must be defined in the
extract parameter file (a minimal example follows the list).
EXTRACT NAME
USERID
EXTTRAIL
TABLE
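A minimal Extract parameter file using just those parameters might look like this (the group name, credentials, trail path and table are illustrative only):
EXTRACT ext1
USERID ggadmin, PASSWORD <password>
EXTTRAIL ./dirdat/lt
TABLE scott.emp;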
What is a macro?
A macro is an easier way to build your parameter file. Once a macro is written, it
can be called from different parameter files. Common parameters like
username/password and other parameters can be included in these macros. A macro can
be defined either in the parameter file itself or in a macro library.
Where can macros be invoked?
Manager
Extract
Replicat
Globals
Sample:
MACRO #macro_name
PARAMS (#param1, #param2, …)
BEGIN
< macro_body >
END;
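As an illustration (the macro name, parameters and invocation are hypothetical), a connection macro could be defined and then invoked later in the same parameter file:
MACRO #dbconnect
PARAMS (#user, #pwd)
BEGIN
USERID #user, PASSWORD #pwd
END;
-- invoked later in the parameter file as:
#dbconnect (ggadmin, <password>)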
I want to configure multiple Extracts to write to the same exttrail file. Is this
possible?
Only one Extract process can write to one exttrail at a time. So you can’t
configure multiple extracts to write to the same exttrail.
What type of Encryption is supported in Goldengate?
Oracle GoldenGate provides three types of encryption: data (trail file) encryption,
password encryption, and network encryption. Passwords and data can be encrypted
using AES with three different key lengths:
a) 128 bit
b) 192 bit
c) 256 bit
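For illustration only (the key name is a placeholder), trail data can be encrypted from the Extract parameter file and an encrypted password can be generated from GGSCI:
ENCRYPTTRAIL AES192
GGSCI> ENCRYPT PASSWORD <password> AES256 ENCRYPTKEY mykey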
Is there a way to check the syntax of the commands in the parameter file without
actually running the GoldenGate process?
Yes, you can place the CHECKPARAMS parameter in the parameter file and start the
process; it verifies the parameter syntax and then stops, and any errors are written
to the report file.
How can you increase the maximum size of the read operation into the buffer that
holds the results of the reads from the transaction log?
If you are using Classic Extract, you can use the TRANLOGOPTIONS ASMBUFSIZE
parameter to control the read size for databases using ASM.
What information can you expect when there is data in the discard file?
The discard file contains the database errors and the details of the operations that
could not be processed.
What command can be used to switch writing the trail data to a new trail file?
You can use the following command to write the trail data to a new trail file.
SEND EXTRACT ext_name, ROLLOVER
How can you determine if the parameters for a process were recently changed?
Whenever a process is started, the parameters in the .prm file for the process are
written to the process REPORT. You can look at the older process reports to view
the parameters which were used to start up the process. By comparing the older and
the current reports, you can identify the changes in the parameters.
What is the Data Pump process in OGG?
Ans:
The Data Pump (not to be confused with the Oracle Export/Import Data Pump) is an
optional secondary Extract group that is created on the source system. When a Data
Pump is not used, the Extract process writes to a remote trail that is located on
the target system using TCP/IP. When a Data Pump is configured, the Extract process
writes to a local trail, and from there the Data Pump reads the trail and writes the
data over the network to the remote trail located on the target system.
The main advantage is protection against network failure: in the absence of a trail
on the local system, the Extract process writes data into memory before sending it
over the network, and any network failure could then cause the Extract process to
abort (abend). With a Data Pump and a local trail, the primary Extract is insulated
from such outages. The Data Pump can also perform complex data transformation or
filtering, and it is useful when consolidating data from several sources into one
central target, where a Data Pump on each individual source system can write to one
common trail file on the target.
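A sketch of a simple pass-through Data Pump parameter file (the group name, target host and trail are illustrative):
EXTRACT pmp1
PASSTHRU
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE scott.*;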
What is the default port for the GoldenGate Manager process?
ANS:
7809
What is a checkpoint in GoldenGate?
ANS:
GoldenGate maintains its own checkpoints, which record a known position in the trail
file from which the Replicat process will start processing after any kind of error
or shutdown. This ensures data integrity, and a record of these checkpoints is
maintained either in files stored on disk or in a table in the database, which is
the preferred option.
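A sketch of creating a checkpoint table and associating it with a Replicat (user, table and trail names are placeholders):
GGSCI> DBLOGIN USERID ggadmin, PASSWORD <password>
GGSCI> ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.ggs_checkpoint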
What are the main components of the Goldengate replication?
The replication configuration consists of the following processes.
Manager
Extract
Pump
Replicat
What are the different password encryption options available with OGG?
You can encrypt a password in OGG using the default encryption key, or a
user-defined key stored in the ENCKEYS lookup file.
A number of system architecture solutions are offered for data replication and
synchronization:
• One-to-one (source to target)
• One-to-many (one source to many targets)
• Many to one (hub and spoke)
• Cascading
• Bi-directional (active active)
• Bi-directional (active passive)
Check to make sure that the Extract has processed all the records in the data
source (online redo/archive logs):
GGSCI> SEND EXTRACT <group_name>, LOGEND
(The above command should print YES.)
What are the basic resources required to configure Oracle GoldenGate high
availability solution with Oracle Clusterware?
There are 3 basic resources required:
Virtual IP
Shared storage
Action script
The following are important considerations in an active-active (bi-directional)
replication environment:
Sequences: These are not supported. The workaround is to use odd/even values,
separate ranges, or concatenated sequence values.
LAG: This should be minimized. If a customer says that there will not be any lag
due to network or heavy load, then we do not need to deploy CDR. But this is not
always the case, as there would usually be some lag, and this can cause conflicts.
CDR (Conflict Detection and Resolution): OGG has built-in CDR for all kinds of DML
that can be used to detect and resolve conflicts.
Packaged Applications: These are not supported, as they may contain data types which
are not supported by OGG, or the application may not allow the modifications
required to work with OGG.
If there isn’t a Guaranteed Restore Point and it’s a primary database, the
Flashback Mode will be automatically turned off for the database, which will have
continued to operate normally.
If there isn’t a Guaranteed Restore Point and it’s a standby database, the database
will hang until the cause of the write failure is fixed.
In RMAN you can use the LIST RESTORE POINT [ALL|restore_point_name] command. If you
use a recovery catalog, you can query the RC_RESTORE_POINT view in the recovery
catalog repository, or the V$RESTORE_POINT view in the target database.
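For example:
RMAN> LIST RESTORE POINT ALL;
SQL> SELECT NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT;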
Yes, you can. During a FLASHBACK DATABASE operation, you can query
V$SESSION_LONGOPS from another session to see the progress of the flashback.
The FLASHBACK DATABASE operation has two distinct phases: the actual flashback and
the media recovery that happens afterwards to bring the database to a consistent
state.
While the actual flashback is running, you’ll see the following message in
V$SESSION_LONGOPS, on Oracle 11gR2:
Flashback Database: Flashback Data Applied : 238 out of 282 Megabytes done
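A query along these lines, run from another session, shows the progress:
SQL> SELECT OPNAME, SOFAR, TOTALWORK, UNITS FROM V$SESSION_LONGOPS WHERE OPNAME LIKE 'Flashback%';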
Use a fast file system for your flash recovery area, preferably one that avoids
operating system file caching, such as ASM.
Configure enough disk spindles for the file system that will hold the flash
recovery area. For large production databases, multiple disk spindles may be needed
to support the required disk throughput for the database to write the flashback
logs effectively.
If the storage system used to hold the flash recovery area does not have non-
volatile RAM, try to configure the file system on top of striped storage volumes,
with a relatively small stripe size such as 128K. This will allow each write to the
flashback logs to be spread across multiple spindles, improving performance.
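For reference, the flash recovery area location and size are controlled by the following initialization parameters (the disk group and size shown are placeholders):
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=200G SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA' SCOPE=BOTH;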
How Does Oracle GoldenGate Work
Oracle GoldenGate consists of decoupled modules that are combined to create the best
possible solution for your business requirements.
On the source system(s):
• Oracle GoldenGate’s Capture (Extract) process reads data transactions as they
occur, by reading the native transaction log, typically the redo log. Oracle
GoldenGate only moves changed, committed transactional data, which is only a
percentage of all transactions – therefore operating with extremely high performance
and very low impact on the data infrastructure.
• Filtering can be performed at the source or target – at table, column and/or row
level.
• Transformations can be applied at the capture or delivery stages.
• Advanced queuing (trail files): To move transactional data efficiently and
accurately across systems, Oracle GoldenGate converts the captured data into an
Oracle GoldenGate data format in “trail” files. With both source and target trail
files, Oracle GoldenGate’s unique architecture eliminates any single point of
failure and ensures data integrity is maintained – even in the event of a system
error or outage.
Routing:
• Data is sent via TCP/IP to the target systems. Data compression and encryption are
supported. Thousands of transactions can be moved per second, without distance
limitations.
On the target system(s):
• A Server Collector process reassembles the transactional data into a target trail.
• The Delivery (Replicat) process applies transactional data to the designated
target systems using native SQL calls (a minimal Replicat parameter file sketch
follows).
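A minimal Replicat parameter file sketch (group name, credentials and schema mapping are illustrative; a checkpoint table or integrated Replicat would normally also be configured):
REPLICAT rep1
USERID ggadmin, PASSWORD <password>
ASSUMETARGETDEFS
MAP scott.*, TARGET scott.*;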
Preventing Data Looping
In a bidirectional configuration, SQL changes that are replicated from one system
to another must be prevented from being replicated back to the first system.
Otherwise, the data moves back and forth in an endless loop. To prevent this, you
may need to:
• prevent the capture of SQL operations that are generated by Replicat, but enable
the capture of SQL operations that are generated by business applications if they
contain objects that are specified in the Extract parameter file.
• identify local Replicat transactions, in order for the Extract process to ignore
them.
Depending on which database you are using, you may or may not need to provide
explicit instructions to prevent the capture of Replicat operations.
Preventing the Capture of Replicat Transactions (Oracle)
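For Oracle, a hedged sketch of the Extract parameters commonly used for this (the user name is a placeholder): with integrated Extract and Replicat, Replicat transactions carry a redo tag and can be excluded with EXCLUDETAG; with classic capture, the Replicat database user can be excluded instead.
TRANLOGOPTIONS EXCLUDETAG 00
-- or, for classic capture:
TRANLOGOPTIONS EXCLUDEUSER ggadmin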
LCR: Logical Change Record. In Streams, LCRs are message payloads that contain
information about changes to a database.
OCI : Oracle Call Interface : Oracle Call Interface (OCI) is the comprehensive,
high performance, native C language interface to Oracle Database. It exposes the
full power of Oracle Database to custom or packaged C applications. Oracle In-
Memory Database Cache also supports access from OCI programs.