
Oracle GoldenGate is a solution for real-time data integration and replication
between heterogeneous systems. It is a software package that enables high
availability, real-time integration, transaction capture, and data replication
between operational and analytical enterprise systems. It is better suited to data
replication requirements than to data migration. It supports bi-directional
replication, keeping systems operational 24/7, and distributes data across the
enterprise to optimize decision-making. It also supports replication from
non-Oracle databases (Oracle MySQL, MS SQL Server, Sybase, and IBM DB2).

Golden Gate Architecture


GoldenGate captures the transactions and DML changes written to the redo and/or
archive logs of the Oracle database. From the redo log files, trail files are
generated, transferred to the target, and then read to load the data into the
target tables/objects.

Oracle GoldenGate architecture comprises three primary components:


1) Capture
Captures the transactional changes and DDL changes from the source system, using
the transaction and redo log(s).
2) Trail Files
Contain details of the operations for data changes in a transportable and
platform-independent format.
3) Delivery
Reads the data from the trail files on the remote/target system and applies the
changes to the target tables.

An initial load of the data can be performed separately to sync up the data
between source and destination before replication begins. This can be done using
Oracle Data Pump (export/import) or any other option.
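As an illustration only (group names, schema, and credentials below are
hypothetical), a minimal Extract and Replicat parameter file pair for the capture
and delivery components might look like this:

-- Extract (capture) parameter file on the source, e.g. ext1.prm
EXTRACT ext1
USERID ogg_admin, PASSWORD ogg_pwd
EXTTRAIL ./dirdat/lt
TABLE scott.emp;

-- Replicat (delivery) parameter file on the target, e.g. rep1.prm
REPLICAT rep1
USERID ogg_admin, PASSWORD ogg_pwd
ASSUMETARGETDEFS
MAP scott.emp, TARGET scott.emp;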

Key Characteristics

Trail files contain data changes in a transportable, platform-independent format
and provide continuous data capture from the source, even if the target system is
unavailable. They can be used to recover and synchronize the data after the system
is back online.

Zero downtime operational capability

Supports transaction integrity to avoid data consistency issues

Real-time data capture and replication

Event-based interaction and capture from source system

Conflict detection and resolution for data replication

Golden Gate Replication Strategies


Oracle Golden Gate provides various replication strategies to ensure a
comprehensive real-time information environment.

Unidirectional: Query Offloading, Zero-Downtime Migration
Bi-Directional: Hot Standby or Active-Active for HA
Peer-to-Peer: Load Balancing, Multi-Master
Broadcast: Data Distribution
Integration/Consolidation: Data Warehouse
Data Distribution: via Messaging

Replication Strategy and Characteristics:
One-to-One
• Real-time feeding of reporting DB, enabling Live Reporting
• Supports DDL replication
One-to-Many
• Dedicated site for backup data
• Dedicated site for Live Reporting, separate from backup
• Minimizes corruption of data, as backup is separate from reporting DB
Many-to-One
• Centralized data center consolidating information from remote sites
• Useful for retail industry, central order processing
• Useful for multiple bank branches serving same customer account
• Data feeds to an operational data store/data warehouse, supporting Operational
Business Intelligence
Cascading
• Data distribution from master to multiple systems to carry out a transaction
Bi-Directional (Active-Active)
• Live standby and high availability
• Load distribution and performance scalability
Bi-Directional (Active-Passive)
• Fastest possible recovery and switchover
• Reverse direction data replication ready

The key Oracle GoldenGate configuration best practices are:


Enable supplemental logging to ensure the correct data is replicated to the target
database.
Configure Oracle GoldenGate Extract in integrated capture mode to take advantage of
the database LogMiner server functionality and simplify management.
Configure Oracle GoldenGate integrated Replicat processes to leverage the apply
processing functionality that is available within the Oracle database.
Configure multiple parallel Replicat processes using batched SQL for improved apply
performance.

Use Oracle GoldenGate Release 12.1.2 or later to take advantage of the increased
functionality and enhanced performance features. With Oracle GoldenGate Release
12.1.2, Replicat can operate in integrated mode for improved scalability within
Oracle target environments. The apply processing functionality within the Oracle
database is leveraged to automatically handle referential integrity and data
definition language (DDL) operations so that they are applied in the correct order.
Extract can also be used in integrated capture mode with an Oracle database,
introduced with Oracle GoldenGate Release 11.2.1. Extract integrates with an Oracle
database log mining server to receive change data from the database in the form of
logical change records (LCRs). Extract can be configured to capture from a local or
downstream mining database.

Database Configuration
This section contains the configuration best practices for the source and target
databases used in an Oracle GoldenGate replicated environment. It is assumed that
the Extract and Data Pump processes are both running on the source environment and
one or more Replicat processes are running on the target database. In an active-
active bi-directional Oracle GoldenGate environment, or when the target database
may be converted to a source database,
combine both target and source database configuration steps.
Source Database
The source database should be configured with the following:
1. Run the database in ARCHIVELOG mode
Oracle GoldenGate Extract mines the Oracle redo for data that can be replicated.
The database must be running in ARCHIVELOG mode. When using Extract in integrated
capture mode, the LogMiner server can seamlessly mine redo from the log buffer,
online and archive log files.
2. Enable force logging mode
To ensure that the required redo information is contained in the Oracle redo logs
for the segments being replicated, it is important to override any NOLOGGING
operations that would prevent the required redo information from being generated.
If you are replicating the entire database, enable database force logging mode.
Check the existing force logging status by executing the following command:

SQL> SELECT FORCE_LOGGING_MODE FROM V$DATABASE;

If the database is currently not in force logging mode, enable force logging by
executing the following commands:

SQL> ALTER DATABASE FORCE LOGGING;


SQL> ALTER SYSTEM SWITCH LOGFILE;

There are cases where you do not want to replicate application data that is loaded
with NOLOGGING operations. In those cases, isolate the tables and indexes into
separate tablespaces so that you can enable and disable logging according to your
requirements. First disable database force logging mode, then set logging at the
tablespace level, by executing the following commands:

SQL> ALTER DATABASE NO FORCE LOGGING;


SQL> ALTER TABLESPACE <tablespaces_replicated> FORCE LOGGING;
SQL> ALTER TABLESPACE <tablespaces_not_replicated> NOLOGGING;
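To verify the result, you can check the force logging status at the tablespace
level (DBA_TABLESPACES exposes a FORCE_LOGGING column):

SQL> SELECT TABLESPACE_NAME, FORCE_LOGGING FROM DBA_TABLESPACES;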

3. Enable supplemental logging


Oracle GoldenGate requires key column values to be logged into redo to allow the
same updated or deleted rows manipulated on the source database to be found on the
target database. Add supplemental logging at the schema level using the Oracle
GoldenGate command ADD SCHEMATRANDATA. For additional information about creating
supplemental log groups, refer to Oracle GoldenGate Installing and Configuring
Oracle GoldenGate for Oracle Database at:
http://docs.oracle.com/goldengate/1212/gg-winux/GIORA.pdf
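As a brief sketch (the schema name scott and the ogg_admin login are placeholders),
schema-level supplemental logging is added from GGSCI after a database login:

GGSCI> DBLOGIN USERID ogg_admin, PASSWORD ogg_pwd
GGSCI> ADD SCHEMATRANDATA scott

Minimal database-level supplemental logging, if not already enabled, can be added
with:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;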

4. Configure the Streams pool
When using Extract in integrated capture mode, an area of Oracle memory called the
Streams pool must be configured in the System Global Area (SGA) of the database.
If you are using Extract in classic (non-integrated) capture mode, the Streams
pool is not necessary.
The size requirement of the Streams pool for Extract in integrated capture mode is
based on two integrated capture mode parameters:
• MAX_SGA_SIZE – controls the amount of shared memory used by the LogMiner server.
The default value is 1GB and, in most cases, this is adequate. This is not the
same as the database initialization parameter SGA_MAX_SIZE. Monitor Automatic
Workload Repository (AWR) reports during peak times: if you see a high number of
background process waits on 'LogMiner preparer: memory' or 'LogMiner reader:
buffer' with high Avg wait (ms) times (>5ms) and high % bg time (>25%), increasing
the MAX_SGA_SIZE parameter by 25% can improve Extract performance.

• PARALLELISM – controls the number of LogMiner preparer (LMP) server processes
used by the LogMiner server. The default value for Oracle Database Enterprise
Edition is 2 and is adequate for most workloads. Oracle Database Standard Edition
defaults to 1 and cannot be increased. To identify when to increase the
PARALLELISM parameter, use the Oracle Streams Performance Advisor (SPADV) and
evaluate whether all LogMiner preparer (LMP) processes are nearing 100% CPU.
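A minimal sketch of how these settings are typically applied (the sizes here are
illustrative, not recommendations): the Streams pool is sized in the database, and
the two capture parameters are passed to the LogMiner server through TRANLOGOPTIONS
INTEGRATEDPARAMS in the Extract parameter file:

SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE = 2G SCOPE=BOTH;

-- In the Extract parameter file (integrated capture mode):
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 1536, PARALLELISM 4)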

5. Install the UTL_SPADV package


The UTL_SPADV PL/SQL package provides subprograms to collect and analyze statistics
for the LogMiner server processes. The statistics help identify any current areas
of contention such as CPU or I/O. To install the UTL_SPADV package, as the Oracle
GoldenGate administrator user on the source database, run the following SQL script:

SQL> @$ORACLE_HOME/rdbms/admin/utlspadv.sql
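Once installed, a minimal usage sketch (relying on the package defaults; see the
PL/SQL Packages reference for the full signatures) is to collect statistics and
then display them:

SQL> SET SERVEROUTPUT ON
SQL> EXEC UTL_SPADV.COLLECT_STATS
SQL> EXEC UTL_SPADV.SHOW_STATS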

Target Database
The target database should be configured with the following:
1. Run the database in ARCHIVELOG mode
Although Oracle GoldenGate does not require the target database to run in
ARCHIVELOG mode, Oracle recommends doing so for high availability and
recoverability. If the target database is configured to fail over or switch over
to a source database, ARCHIVELOG mode is required. The target database should also
be included in a backup strategy that matches the recovery options on the source
database. In the event of a failure on the source environment where an incomplete
recovery is carried out, the target database also needs recovery to make sure the
replicated objects are not from a point in time ahead of the source.
2. Enable force logging
When replicating bi-directionally, or if the source and target databases need to
switch roles, force logging should be enabled to prevent missing redo data
required by Oracle GoldenGate Extract. Refer to the previous section titled
"Source Database" for instructions on how to enable force logging mode.

3. Configure the Streams pool (the same sizing guidance given for the source
database applies)

4. Target SGA parameters


The database parameters controlling the size of the shared memory components in the
System Global Area (SGA) need to be configured similarly to the source database of
the data being replicated. This ensures that no unexpected drop in performance is
seen due to incorrectly sized memory. For example, if the source database is
configured with an 11GB buffer cache, the same performance cannot be expected with
the same workload using a 2GB buffer cache.

List important considerations for bi-directional replication?

The customer should consider the following points in an active-active replication
environment.

Primary Key: Helps to identify conflicts and resolve them.
Sequences: Are not supported. The workaround is to use odd/even sequences,
ranges, or concatenated sequences.
Triggers: These should be disabled or suppressed to avoid uniqueness issues.
Data Looping: This can be easily avoided using OGG itself.
LAG: This should be minimized. If a customer says that there will not be any
LAG due to network or huge load, then we do not need to deploy CDR. But this is
not always the case: there will usually be some LAG, and it can cause conflicts.
CDR (Conflict Detection & Resolution): OGG has built-in CDR for all kinds of
DML that can be used to detect and resolve conflicts.
Packaged Applications: These are not supported, as they may contain data types
which are not supported by OGG, or they might not allow the application
modifications needed to work with OGG.
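As an illustrative CDR sketch (table and column names are hypothetical, and
last_mod_time is assumed to be a timestamp column maintained by the application),
a Replicat MAP statement can resolve update conflicts by keeping the row with the
latest timestamp:

MAP scott.emp, TARGET scott.emp,
COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_mod_time)));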

Where can filtering of data for a column be configured?

Filtering of the columns of a table can be set at the Extract, Pump or Replicat
level.
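For illustration (table and column names are hypothetical), column and row
filtering in an Extract or Pump parameter file can look like this:

TABLE scott.emp, COLSEXCEPT (comments);
TABLE scott.orders, FILTER (amount > 1000);

On the Replicat side, the equivalent filtering is specified on the MAP statement.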

What type of Topology does Goldengate support?

GoldenGate supports the following topologies.

Unidirectional
Bidirectional
Peer-to-peer
Broadcast
Consolidation
Cascading

What are the main components of the Goldengate replication?

The replication configuration consists of the following processes.

Manager
Extract
Pump
Replicat

What database does GoldenGate support for replication?

Oracle Database
TimesTen
MySQL
IBM DB2
Microsoft SQL Server
Informix
Teradata
Sybase
Enscribe
SQL/MX



What transaction types does Goldengate support for Replication?

Goldengate supports both DML and DDL Replication from the source to target.
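As a brief sketch (the schema name is a placeholder), DDL replication is enabled
with the DDL parameter in the Extract and Replicat parameter files, for example:

DDL INCLUDE MAPPED
TABLE scott.*;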
What are the supplemental logging pre-requisites?

The following supplemental logging is required.

Database supplemental logging


Object level logging

Why is Supplemental logging required for Replication?

When a transaction is committed on the source database, only the new data is
written to the redo log. However, for Oracle to apply these transactions on the
destination database, the before-image key values are required to identify the
affected rows. This data is also placed in the trail file and used to identify the
rows on the destination; using the key values, the transactions are executed
against them.

Are OGG binaries supported on ASM Cluster File System (ACFS)?

Yes, you can install and configure OGG on ACFS.


Are OGG binaries supported on the Database File System (DBFS)? What files can be
stored in DBFS?

No, OGG binaries are not supported on DBFS. You can however store parameter files,
data files (trail files), and checkpoint files on DBFS.
What is the default location of the GLOBALS file?

The GLOBALS file is located under the Oracle GoldenGate installation directory
(OGG HOME).
Is it a requirement to configure a PUMP extract process in OGG replication?

A PUMP extract is optional, but it is highly recommended to use one to safeguard
against network failures. Normally it is configured when you are setting up OGG
replication across the network.
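A minimal Data Pump parameter file sketch (host, port, trail, and schema names are
hypothetical) reads the local trail and sends it to the target over TCP/IP:

EXTRACT pmp1
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE scott.*;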
What are the differences between the Classic and integrated Capture?

Classic Capture:

The Classic Capture mode is the traditional Extract process that accesses the
database redo logs (optionally archive logs) to capture the DML changes occurring
on the objects specified in the parameter files.
At the OS level, the GoldenGate user must be a part of the same database group
which owns the database redo logs.
This capture mode is available for other RDBMS as well.
There are some data types that are not supported in Classic Capture mode.
Classic capture can’t read data from the compressed tables/tablespaces.

Integrated Capture (IC):

In the Integrated Capture mode, GoldenGate works directly with the database log
mining server to receive the data changes in the form of logical change records
(LCRs).
IC mode does not require any special setup for the databases using ASM,
transparent data encryption, or Oracle RAC.
This feature is only available for Oracle databases version 11.2.0.3 or
higher.
It also supports various object types which were previously not supported by
Classic Capture.
This Capture mode supports extracting data from source databases using
compression.
Integrated Capture can be configured in an online or downstream mode.

List the minimum parameters that can be used to create the extract process?

The following are the minimum required parameters which must be defined in the
extract parameter file.

EXTRACT NAME
USERID
EXTTRAIL
TABLE

What are macros?

A macro is an easier way to build your parameter files. Once a macro is written,
it can be called from different parameter files. Common parameters like
username/password and other parameters can be included in these macros. A macro
can be defined either in another parameter file or in a macro library.
Where can macros be invoked?

The macros can be called from the following parameter files.

Manager
Extract
Replicat
Globals

How is a macro defined?

A macro statement consists of the following.

Name of the Macro


Parameter list
Macro body

Sample:
MACRO #macro_name
PARAMS (#param1, #param2, …)
BEGIN
< macro_body >
END;
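A hedged end-to-end sketch (the file path and login are hypothetical): define a
login macro in a library file and invoke it from a parameter file:

-- In a macro library file, e.g. ./dirprm/maclib.mac:
MACRO #dblogin
BEGIN
USERID ogg_admin, PASSWORD ogg_pwd
END;

-- In an Extract parameter file:
INCLUDE ./dirprm/maclib.mac
EXTRACT ext1
#dblogin()
EXTTRAIL ./dirdat/lt
TABLE scott.emp;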
I want to configure multiple Extracts to write to the same exttrail file. Is this
possible?

Only one Extract process can write to one exttrail at a time. So you can’t
configure multiple extracts to write to the same exttrail.
What type of Encryption is supported in Goldengate?

Oracle Goldengate provides 3 types of Encryption.

Data Encryption using Blowfish.


Password Encryption.
Network Encryption.
What are the different password encryption options available with OGG?

You can encrypt a password in OGG using

Blowfish algorithm and


Advanced Encryption Standard (AES) algorithm

What are the different encryption levels in AES?

You can encrypt the password/data using AES with three different key lengths:

a) 128 bit
b) 192 bit and
c) 256 bit
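As a quick sketch (the plain-text password is a placeholder), a password can be
encrypted from GGSCI and the resulting ciphertext then used in the parameter file:

GGSCI> ENCRYPT PASSWORD ogg_pwd AES256 ENCRYPTKEY DEFAULT

-- Then, in the parameter file, using the ciphertext produced above:
USERID ogg_admin, PASSWORD <ciphertext>, AES256, ENCRYPTKEY DEFAULT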
Is there a way to check the syntax of the commands in the parameter file without
actually running the GoldenGate process?

Yes, you can place the CHECKPARAMS parameter in the parameter file and start the
process: it verifies the parameter syntax, writes any errors to the report file,
and then stops. (The SHOWSYNTAX parameter, by contrast, causes Replicat to display
each SQL statement before applying it.)
How can you increase the maximum size of the read operation into the buffer that
holds the results of the reads from the transaction log?

If you are using Classic Extract, you may use the TRANLOGOPTIONS ASMBUFSIZE
parameter to control the read size for ASM databases.
What information can you expect when there is data in the discard file?

When data is discarded, the discard file can contain:


1. Discard row details
2. Database Errors
3. Trail file number
What command can be used to switch writing the trail data to a new trail file?

You can use the following command to write the trail data to a new trail file.
SEND EXTRACT ext_name, ROLLOVER
How can you determine if the parameters for a process were recently changed?

Whenever a process is started, the parameters in the .prm file for the process are
written to the process REPORT. You can look at the older process reports to view
the parameters which were used to start the process. By comparing the older and
the current reports, you can identify changes in the parameters.
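For example (the group name is a placeholder), the current report is displayed
with:

GGSCI> VIEW REPORT <group_name>

Older, aged report files (suffixed 0-9) are kept in the dirrpt directory.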

1) What are processes/components in GoldenGate?

Ans:

Manager, Extract, Replicat, Data Pump

2) What is Data Pump process in GoldenGate ?

The Data Pump (not to be confused with the Oracle export/import Data Pump) is an
optional secondary Extract group that is created on the source system. When Data
Pump is not used, the Extract process writes to a remote trail that is located on
the target system, using TCP/IP. When Data Pump is configured, the Extract process
writes to a local trail, and from there the Data Pump reads the trail and writes
the data over the network to the remote trail located on the target system.
The main advantage is protection against network failure: in the absence of a
local trail, the Extract process keeps extracted data in memory until it is sent
over the network, so any failure in the network could cause the Extract process to
abort (abend). Also, if we are doing any complex data transformation or filtering,
it can be performed by the Data Pump. A Data Pump is also useful when
consolidating data from several sources into one central target, where a data pump
on each individual source system can write to one common trail file on the target.

3) What is the command line utility in GoldenGate (or) what is ggsci?

ANS: Golden Gate Command Line Interface essential commands – GGSCI

GGSCI -- (Oracle) GoldenGate Software Command Interpreter

4) What is the default port for GoldenGate Manager process?

ANS:

7809

5) What are the important files in GoldenGate?

GLOBALS, ggserr.log, dirprm, etc ...

6) What is checkpoint table?

ANS:

Create the GoldenGate Checkpoint table

GoldenGate maintains its own checkpoints: a checkpoint is a known position in the
trail file from which the Replicat process will resume processing after any kind
of error or shutdown.
This ensures data integrity, and a record of these checkpoints is maintained
either in files stored on disk or in a table in the database, which is the
preferred option.
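A short sketch of creating and registering the checkpoint table (owner and table
name are hypothetical):

GGSCI> DBLOGIN USERID ogg_admin, PASSWORD ogg_pwd
GGSCI> ADD CHECKPOINTTABLE ogg_admin.ggs_checkpoint

-- In the GLOBALS file, so Replicats use it by default:
CHECKPOINTTABLE ogg_admin.ggs_checkpoint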

7) How can you see GoldenGate errors?

ANS:

ggsci> VIEW GGSEVT


ggserr.log file

Oracle GoldenGate solutions
Oracle GoldenGate provides five data replication solutions:
1. High Availability
• Live Standby for an immediate fail-over solution that can later re-synchronize
with your primary source.
• Active-Active solutions for continuous availability and transaction load
distribution between two or more active systems.
2. Zero-Downtime Upgrades and Migrations
• Eliminates downtime for upgrades and migrations.
3. Live Reporting
• Feeding a reporting database so as not to burden the source production systems
with BI users or tools.
4. Operational Business Intelligence (BI)
• Real-time data feeds to operational data stores or data warehouses, directly or
via Extract, Transform and Load (ETL) tools.
5. Transactional Data Integration
• Real-time data feeds to messaging systems for business activity monitoring,
business process monitoring, and complex event processing.
• Uses event-driven architecture and service-oriented architecture (SOA).

A number of system architecture solutions are offered for data replication and
synchronization:
• One-to-one (source to target)
• One-to-many (one source to many targets)
• Many-to-one (hub and spoke)
• Cascading
• Bi-directional (active-active)
• Bi-directional (active-passive)

Installing and Preparing GoldenGate

Perform the GoldenGate installation in the following order:

1. Download the software from the Oracle website.
2. Unpack the installation zip file.
3. Prepare the source and target systems.
4. Install the software on the source and target systems.
5. Prepare the source database.
6. Configure the Manager process on the source and target systems.
7. Configure the Extract process on the source system.
8. Configure the Data Pump process on the source system.
9. Configure the Replicat process on the target system.
10. Start the Extract process.
11. Start the Data Pump process.
12. Start the Replicat process.

Steps 6 through 12 are sketched in GGSCI below.
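A hedged GGSCI sketch of these steps (group, trail, and checkpoint table names are
hypothetical; each process also needs a parameter file created with EDIT PARAMS):

-- On the source:
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1
GGSCI> ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1
GGSCI> START MANAGER
GGSCI> START EXTRACT ext1
GGSCI> START EXTRACT pmp1

-- On the target:
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ogg_admin.ggs_checkpoint
GGSCI> START MANAGER
GGSCI> START REPLICAT rep1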

Oracle Database Upgrade Interview Questions


1) For database projects, what is the biggest challenge?
Downtime for the upgrade is the biggest challenge.
2) What kinds of upgrade projects have you done so far?
I have worked on almost all types of upgrades: RAC and non-RAC, OLTP and DW, small
databases and large databases.
3) What were the sizes of the databases?
From 1 TB to 15 TB.
4) How long will it take to upgrade a 2-node RAC database of 10 TB size?
The basic upgrade is independent of the volume of the database; it depends mostly
on the number of components.
5) Can you please explain some of the issues you faced during DB upgrades?
Performance issues after the upgrade are a major concern, as are errors related to
time zone data.
6) What are the different methods for upgrading a DB?
Manually, or using DBUA.
7) Which one do you prefer and why?
I generally prefer manual. Reason: to have more control over the steps of the
upgrade.
8) One major difference between upgrading a DB manually vs. using DBUA?
Manual process: we need to set cluster_database=FALSE, but when upgrading via DBUA,
cluster_database=TRUE.
9) Which parameter, once set, cannot be reverted back, so that the database must
instead be recovered from the last backup?
COMPATIBLE
10) In which months does Oracle release CPU patches?
JAN, APR, JUL, OCT
11) In which mode will you start your database for upgrading from 10g to 11g?
STARTUP UPGRADE
12) Why not STARTUP MIGRATE?
STARTUP MIGRATE was used to upgrade a database up to 9i, but from 10g onwards we
use STARTUP UPGRADE to upgrade the database.
13) What are Oracle home, Oracle base, Oracle inventory and Oracle SID?
14) What is oraInventory?
15) Can we apply a PSU on the database without applying the PSU on the grid in
11.2.0.4?
No, unfortunately, you need to apply the PSU patches to both the grid home and the
RDBMS home, as these patches sometimes contain fixes for a bug in both homes.

How can we report on long running transactions?
The WARNLONGTRANS parameter can be specified with a threshold time that a
transaction can be open before Extract writes a warning message to the ggs error
log.
Example: WARNLONGTRANS 1h, CHECKINTERVAL 10m
What command can be used to view the checkpoint information for the extract
process?
Use the following command to view the Extract checkpoint information.
GGSCI> info extract <group_name>, showch
Example: GGSCI> info extract ext_fin, showch
How is the RESTARTCOLLISION parameter different from HANDLECOLLISIONS?
The RESTARTCOLLISION parameter is used to skip ONE transaction only in a situation
when the GoldenGate process crashed and performed an operation (INSERT, UPDATE &
DELETE) in the database but could not checkpoint the process information to the
checkpoint file/table. On recovery it will skip the transaction and AUTOMATICALLY
continue to the next operation in the trail file.
When using HANDLECOLLISIONS, GoldenGate will continue to overwrite and process
transactions until the parameter is removed from the parameter files and the
processes are restarted.
How do you view the data which has been extracted from the redo logs?
The logdump utility is used to open the trail files and look at the actual records
that have been extracted from the redo or the archive log files.
What does the RMAN-08147 warning signify when your environment has a GoldenGate
Capture Processes configured?
This occurs when the V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required
by the GoldenGate Capture process and RMAN is trying to delete the archived logs.
The RMAN-08147 error is raised when RMAN tries to delete these files.
When the database is open, it uses the DBA_CAPTURE values to determine the log
files required for mining. However, if the database is in the mount state, the
V$ARCHIVED_LOG.NEXT_CHANGE# value is used.
See MetaLink note: 1581365.1
How would you look at a trail file using logdump, if the trail file is Encrypted?
You must use the DECRYPT option before viewing the data in the trail file.
List few useful Logdump commands to view and search data stored in OGG trail files.
Below are few logdump commands used on a daily basis for displaying or analyzing
data stored in a trail file.
$ ./logdump – to connect to the logdump prompt
logdump> open /u01/app/oracle/dirdat/et000001 – to open a trail file in logdump
logdump> fileheader on – to view the trail file header
logdump> ghdr on – to view the record header with data
logdump> detail on – to view column information
logdump> detail data – to display HEX and ASCII data values to the column list
logdump> reclen 200 – to control how much record data is displayed
logdump> pos 0 – To go to the first record
logdump> next (or simply n) – to move from one record to another in sequence
logdump> count – counting records in a trail
I have a one-way replication setup. The system administration team wants to apply
an OS patch to both the OGG source host and the target servers. Provide the
sequence of steps that you will carry out before and after applying this patch.
Procedure:

Check to make sure that the Extract has processed all the records in the data
source (online redo/archive logs):
GGSCI> send extract <extract_name>, logend
(The above command should print YES.)

Verify that the extract, pump and replicat have zero lag:

GGSCI> send extract <extract_name>, getlag
GGSCI> send extract <pump_name>, getlag
GGSCI> send replicat <replicat_name>, getlag
(The above commands should print "At EOF, no more records to process.")
Stop all application and database activity.
Make sure that the primary extract is reading the end of the redo log and that
there is no LAG at all for the processes.
Now proceed with stopping the processes:

Source:

Stop the primary extract


Stop the pump extract
Stop the manager process
Make sure all the processes are down.

Target:

Stop replicat process


Stop mgr
Make sure that all the processes are down.
Proceed with the maintenance
After the maintenance, proceed with starting up the processes:

Source:

Start the manager process


Start the primary extract
Start the pump extract
(Or simply start all the Extract processes with GGSCI> start extract *)
Make sure that all the processes are up.

Target:

Start the manager process


Start the replicat process.
Make sure that all the processes are up.

What are the basic resources required to configure Oracle GoldenGate high
availability solution with Oracle Clusterware?
There are 3 basic resources required:

Virtual IP
Shared storage
Action script
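As a rough sketch only (the resource name, script path, and attribute list are
hypothetical; consult the Clusterware documentation for the exact attribute set),
the action script is registered as a Clusterware resource along these lines:

$ crsctl add resource ogg_app -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/ogg/gg_action.scr, CHECK_INTERVAL=30"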

How would you comment out a line in the parameter file?

You can use "--" (two hyphens) at the start of a line to comment it out.


What happens if RVWR cannot write to disk?

It depends on the context where the write error occurs:


If there’s a Guaranteed Restore Point, the database crashes to ensure the restore
point guarantee is not voided.

If there isn’t a Guaranteed Restore Point and it’s a primary database, the
Flashback Mode will be automatically turned off for the database, which will have
continued to operate normally.

If there isn’t a Guaranteed Restore Point and it’s a standby database, the database
will hang until the cause of the write failure is fixed.

How to list restore points in RMAN?

In RMAN you can use the LIST RESTORE POINT [ALL | restore_point_name] command. If
you use a recovery catalog, you can use the view RC_RESTORE_POINT in the recovery
catalog repository, or query V$RESTORE_POINT in the target database.
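For example, a quick sketch of both approaches:

RMAN> LIST RESTORE POINT ALL;

SQL> SELECT NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT;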

Can you see the progress of a FLASHBACK DATABASE operation?

Yes, you can. During a FLASHBACK DATABASE operation, you can query
V$SESSION_LONGOPS from another session to see the progress of the flashback.

The FLASHBACK DATABASE operation has two distinct phases: the actual flashback and
the media recovery that happens afterwards to bring the database to a consistent
state.

While the actual flashback is running, you’ll see the following message in
V$SESSION_LONGOPS, on Oracle 11gR2:

Flashback Database: Flashback Data Applied : 238 out of 282 Megabytes done

During the media recovery, the following messages will be seen:

Media Recovery: Redo Applied: 263 out of 0 Megabytes done

Media Recovery: Average Apply Rate: 1164 out of 0 KB/sec done

Media Recovery: Last Applied Redo: 626540 out of 0 SCN+Time done

Media Recovery: Elapsed Time: 232 out of 0 Seconds done

Media Recovery: Active Time: 116 out of 0 Seconds done

Media Recovery: Active Apply Rate: 1859 out of 0 KB/sec done

Media Recovery: Maximum Apply Rate: 1859 out of 0 KB/sec done

Media Recovery: Log Files: 15 out of 0 Files done

Media Recovery: Apply Time per Log: 7 out of 0 Seconds done
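A hedged query sketch for watching this progress from another session:

SQL> SELECT OPNAME, SOFAR, TOTALWORK, UNITS
     FROM V$SESSION_LONGOPS
     WHERE OPNAME LIKE 'Flashback%' OR OPNAME LIKE 'Media Recovery%';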

How should I set the database to improve Flashback performance?

Oracle’s recommendations are:

Use a fast file system for your flash recovery area, preferably one that avoids
operating system file caching, such as ASM.

Configure enough disk spindles for the file system that will hold the flash
recovery area. For large production databases, multiple disk spindles may be needed
to support the required disk throughput for the database to write the flashback
logs effectively.

If the storage system used to hold the flash recovery area does not have non-
volatile RAM, try to configure the file system on top of striped storage volumes,
with a relatively small stripe size such as 128K. This will allow each write to the
flashback logs to be spread across multiple spindles, improving performance.

For large, production databases, set the init.ora parameter LOG_BUFFER to at least
8MB. This makes sure the database allocates maximum memory (typically 16MB) for
writing flashback database logs.

1.4 How Does Golden Gate Work
Oracle GoldenGate consists of decoupled modules that are combined to create the
best possible solution for your business requirements.
On the source system(s):
• Oracle GoldenGate's Capture (Extract) process reads data transactions as they
occur, by reading the native transaction log, typically the redo log. Oracle
GoldenGate only moves changed, committed transactional data, which is only a
percentage of all transactions, and therefore operates with extremely high
performance and very low impact on the data infrastructure.
• Filtering can be performed at the source or target, at table, column and/or row
level.
• Transformations can be applied at the capture or delivery stages.
• Advanced queuing (trail files): To move transactional data efficiently and
accurately across systems, Oracle GoldenGate converts the captured data into an
Oracle GoldenGate data format in "trail" files. With both source and target trail
files, Oracle GoldenGate's unique architecture eliminates any single point of
failure and ensures data integrity is maintained, even in the event of a system
error or outage.

Routing:
• Data is sent via TCP/IP to the target systems. Data compression and encryption
are supported. Thousands of transactions can be moved per second, without distance
limitations.
On the target system(s):
• A Server Collector process (not shown) reassembles the transactional data into a
target trail.
• The Delivery (Replicat) process applies transactional data to the designated
target systems using native SQL calls.
Preventing Data Looping

In a bidirectional configuration, SQL changes that are replicated from one system
to another must be prevented from being replicated back to the first system.
Otherwise, they move back and forth in an endless loop, as in this example:

A user application updates a row on system A.

Extract extracts the row on system A and sends it to system B.

Replicat updates the row on system B.

Extract extracts the row on system B and sends it back to system A.

The row is applied on system A (for the second time).

This loop continues endlessly.

To prevent data loopback, you may need to provide instructions that:

prevent the capture of SQL operations that are generated by Replicat, but enable
the capture of SQL operations that are generated by business applications if they
contain objects that are specified in the Extract parameter file;

identify local Replicat transactions, so that the Extract process can ignore
them.

9.3.1 Preventing the Capture of Replicat Operations

Depending on which database you are using, you may or may not need to provide
explicit instructions to prevent the capture of Replicat operations.
9.3.1.1 Preventing the Capture of Replicat Transactions (Oracle)

To prevent the capture of SQL that is applied by Replicat to an Oracle database,
there are different options depending on the Extract capture mode:

When Extract is in classic or integrated capture mode, use the TRANLOGOPTIONS
parameter with the EXCLUDETAG tag option. This parameter directs the Extract
process to ignore transactions that are tagged with the specified redo tag. See
Section 9.3.2 to set the tag value.

When Extract is in classic capture mode, use the Extract TRANLOGOPTIONS parameter
with the EXCLUDEUSER or EXCLUDEUSERID option to exclude the user name or ID that
is used by Replicat to apply transactions. Multiple EXCLUDEUSER statements can be
used. The specified user is subject to the rules of the GETREPLICATES or
IGNOREREPLICATES parameter. See Section 9.3.1.3 for more information.
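A brief parameter sketch of the tag-based approach (the tag value 00 is the
documented default; the user name is hypothetical):

-- Replicat parameter file: tag the transactions Replicat applies
DBOPTIONS SETTAG 00

-- Extract parameter file: ignore transactions carrying that tag
TRANLOGOPTIONS EXCLUDETAG 00

-- Classic capture alternative: exclude by the Replicat apply user
TRANLOGOPTIONS EXCLUDEUSER ogg_replicat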

LCR: Logical Change Record. In Streams, LCRs are message payloads that contain
information about changes to a database.
OCI: Oracle Call Interface. OCI is the comprehensive, high-performance, native
C-language interface to Oracle Database. It exposes the full power of Oracle
Database to custom or packaged C applications. Oracle In-Memory Database Cache
also supports access from OCI programs.
