==========================================================
What are the steps to install Oracle on a Linux system? List two kernel parameters that
affect Oracle installation.
First set up the disks and kernel parameters, then create the oracle user and the dba
group, and finally run the installer to start the installation process. SHMMAX and
SHMMNI are two kernel parameters that must be set before the installation.
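The kernel settings above can be sketched as entries in /etc/sysctl.conf; the values below are illustrative assumptions only and must be sized for your server:

```
# /etc/sysctl.conf -- illustrative values only; size these for your RAM and workload
kernel.shmmax = 4294967295   # largest single shared memory segment, in bytes
kernel.shmmni = 4096         # maximum number of shared memory segments system-wide
```

Run sysctl -p as root to load the new values before starting the installer.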
The following parameters can be used by the DBA to adjust the time or interval of
how frequently checkpoints occur in the database:
LOG_CHECKPOINT_TIMEOUT = 3600; # Every one hour
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks.
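Assuming the parameters are being changed on a running instance, a minimal sketch using ALTER SYSTEM (the values shown are the same illustrative ones as above):

```sql
-- Sketch: adjust checkpoint frequency on a running instance
ALTER SYSTEM SET log_checkpoint_timeout  = 3600 SCOPE=BOTH; -- seconds since the last checkpoint
ALTER SYSTEM SET log_checkpoint_interval = 1000 SCOPE=BOTH; -- OS blocks of redo between checkpoints
```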
What is the use of the large pool, and in which cases do you need to set it?
You need to set the large pool if you are using MTS (Multi-Threaded Server) or RMAN
backups. The large pool prevents RMAN and MTS from competing with other subsystems
for the same memory. RMAN uses the large pool for backup and restore when you set
the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate
asynchronous I/O. If neither of these parameters is enabled, Oracle allocates
backup buffers from local process memory rather than shared memory, and the large
pool is not used.
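A minimal sketch of such a setup (the sizes and values below are assumptions, not recommendations):

```sql
-- Sketch: give RMAN I/O slaves shared memory (the large pool) to work from
ALTER SYSTEM SET large_pool_size       = 64M  SCOPE=SPFILE;
ALTER SYSTEM SET dbwr_io_slaves        = 4    SCOPE=SPFILE; -- simulated async I/O to disk
ALTER SYSTEM SET backup_tape_io_slaves = TRUE SCOPE=SPFILE; -- I/O slaves for tape backups
```

Restart the instance for the SPFILE-scoped changes to take effect.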
Oracle instance:
a means to access an Oracle database; always opens one and only one database;
consists of memory structures and background processes.
Oracle server:
a DBMS that provides an open, comprehensive, integrated approach to information
management; consists of an instance and a database.
Oracle database:
a collection of data that is treated as a unit; consists of datafiles, control files and redo
log files (plus, optionally, a parameter file, a password file and archived logs).
Background processes:
Started when an Oracle instance is started, the background processes maintain and
enforce the relationships between physical and memory structures.
There are two types of database processes:
1. Mandatory background processes
2. Optional background processes
Mandatory background processes:
– DBWn, PMON, CKPT, LGWR, SMON
Optional background processes:
– ARCn, LMDn, RECO, CJQ0, LMON, Snnn, Dnnn, Pnnn, LCKn, QMNn
DBWn writes when:
• Checkpoint occurs
• Dirty buffers reach threshold
• There are no free buffers
• Timeout occurs
• RAC ping request is made
• Tablespace OFFLINE
• Tablespace READ ONLY
• Table DROP or TRUNCATE
• Tablespace BEGIN BACKUP
Log Writer (LGWR) writes:
• At commit
• When 1/3rd full
• When there is 1 MB of redo
• Every 3 seconds
• Before DBWn writes
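The background processes actually started in an instance can be listed from the V$BGPROCESS view; a sketch:

```sql
-- List started background processes (PADDR = '00' means defined but not started)
SELECT name, description
FROM   v$bgprocess
WHERE  paddr != '00'
ORDER  BY name;
```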
Why do you run orainstRoot.sh and root.sh once you finalize the installation?
orainstRoot.sh needs to be run to change the permissions of the oraInventory
directory to 770 and its group to dba.
root.sh (in ORACLE_HOME) needs to be run to create the oratab file in /etc/oratab
(/var/opt/oracle/oratab on Solaris) and to copy dbhome, oraenv and coraenv to
/usr/local/bin.
orainstRoot.sh
[root@oracle11g ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete
root.sh
[root@oracle11g ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script…
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.1.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
For an Oracle installation on Unix/Linux, we are prompted to run the script root.sh
from the Oracle inventory directory. This script needs to be run only the first time an
Oracle product is installed on the server.
It creates the additional directories and sets the appropriate ownership and
permissions on files for the root user.
Oracle Database 11g New Feature for DBAs?
1) Automatic Diagnostic Repository [ADR]
2) Database Replay
3) Automatic Memory Tuning
4) Case sensitive password
5) Virtual columns and indexes
6) Interval Partition and System Partition
7) The Result Cache
8) ADDM RAC Enhancements
9) SQL Plan Management and SQL Plan Baselines
10) SQL Access Advisor & Partition Advisor
11) SQL Query Repair Advisor
12) SQL Performance Analyzer (SPA) New
13) DBMS_STATS Enhancements
14) Total Recall (Flashback Data Archive)
Note: the above are only the top new features; other features introduced in 11g will
be covered subsequently.
==========================================================
What is the benefit of running the DB in archivelog mode over noarchivelog mode?
When a database is in noarchivelog mode, some redo log information is lost every
time a log switch overwrites an online redo log. To avoid this, the redo logs must be
archived, which is achieved by configuring the database in archivelog mode.
If an Oracle database crashes, how would you recover a transaction that is not in the
backup? If the database is in archivelog mode we can recover that transaction by
applying the archived redo; otherwise a transaction that is not in the backup cannot
be recovered.
Why does an RMAN incremental backup fail even though a full backup exists? A level
0 backup is physically identical to a full backup taken with the command 'BACKUP
DATABASE'. The only difference is that the level 0 backup is recorded as an
incremental backup in the RMAN repository, so it can be used as the parent for a
level 1 backup. Simply put, a full backup taken without LEVEL 0 cannot be used as
the parent backup from which you take a level 1 backup.
Can we perform an RMAN level 1 backup without a level 0? If no level 0 is available,
the behavior depends on the compatibility setting (Oracle version).
If the compatibility setting is less than 10.0.0, RMAN generates a level 0 backup of
the file contents at the time of the backup.
If the compatibility setting is 10.0.0 or greater, RMAN copies all blocks changed since
the file was created and stores the result as a level 1 backup.
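To illustrate, a level 0 parent followed by a level 1 incremental might be taken as follows (a sketch of standard RMAN syntax):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```

The level 0 backup is physically a full backup, but because it is recorded as an incremental it can act as the parent for the level 1.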
How do you register a manual/user-managed backup in RMAN? If you are using a
recovery catalog, you can register it by using the CATALOG command:
RMAN> CATALOG START WITH '/oracle/backup.ctl';
How do you check the RMAN version in Oracle? If you want to check the RMAN
catalog version, use the query below from SQL*Plus, connected as the catalog owner:
SQL> SELECT * FROM rcver;
==========================================================
When you move Oracle binary files from one ORACLE_HOME/server to another
server, which Oracle utility is used to make the new ORACLE_HOME usable?
relink all
You have a collection of patches (nearly 100 patches) or a patchset. How can you
apply only one patch from it?
With napply itself (by providing the patch location and a specific patch id) you can
apply only one patch from a collection of extracted patches. For more information
check opatch util napply -help; it will give you a clear picture.
For example:
opatch util napply <patch_location> -id 9 -skip_subset -skip_duplicate
This will apply only patch id 9 from the patch location and will skip any duplicates
and subsets of patches already installed in your ORACLE_HOME.
If both a CPU and a PSU are available for a given version, which one will you prefer
to apply?
From the above discussion it is clear that once you apply a PSU, the recommended
way is to apply only subsequent PSUs. In fact there is no need to apply a CPU on top
of a PSU, as the PSU contains the CPU (applying a CPU over a PSU is treated as an
attempt to roll back the PSU and in fact requires more effort). So if you have not yet
decided on or applied either type of patch, I suggest you use PSU patches. For more
details refer to My Oracle Support notes ID 1430923.1 and ID 1446582.1.
If a PSU is a superset of a CPU, why would someone choose to apply a CPU rather
than a PSU?
CPUs are smaller and more focused than PSUs and mostly deal with security issues.
A CPU is theoretically a more conservative approach and can cause less trouble than
a PSU, as it changes less code. Thus for anyone concerned only with security fixes
and not functionality fixes, a CPU may be the better approach.
If you are using the latest support.oracle.com, then after logging in to the Metalink
dashboard:
- Click on the "Patches & Updates" tab.
- On the left sidebar click on "Latest Patchsets" under "Oracle Server/Tools".
- A new window will appear.
- Just mouse over your product on the "Latest Oracle Server/Tools Patchsets" page.
- The corresponding Oracle platform versions will appear; simply choose the
patchset version and click on it.
- You will be taken to the download page, where you can also change your platform
and patchset version.
REFERENCES:
http://docs.oracle.com/cd/E11857_01/em.111/e12255/e_oui_appendix.htm
Oracle® Universal Installer and OPatch User’s Guide
11g Release 2 (11.2) for Windows and UNIX
Part Number E12255-11
What is the most recent patch applied?
What is OPatch?
OPatch is the Oracle-supplied utility for applying and rolling back interim patches. A
typical patching procedure:
1. You MUST read the README.txt file included with the patch; look for any
prerequisite steps, post-installation steps or DB-related changes. Also make sure that
you have the OPatch version required by this patch.
2. Make sure you have a good backup of the database.
3. Make a note of all invalid objects in the database prior to the patch.
4. Shut down all the Oracle processes running from that Oracle Home, including the
listener, the database instance, the management agent, etc.
5. You MUST back up your Oracle Home and inventory:
tar cvf - $ORACLE_HOME $ORACLE_HOME/oraInventory | gzip >
Backup_Software_Version.tar.gz
6. Unzip the patch in $ORACLE_HOME/patches.
7. cd to the patch directory and run opatch apply to apply the patch.
8. Read the output/log file to make sure there were no errors.
1. Download the required Patch from Metalink based on OS Bit Version and DB
Version.
2. Shut down the database before applying the patch.
3. Unzip and apply the patch using the "opatch apply" command. On successful
application of the patch you will see the message "OPatch succeeded."; cross-check
that your patch is applied by using the "opatch lsinventory" command.
4. Each patch has a unique ID; the command to roll back a patch is "opatch rollback
-id <patch no.>". On successful rollback you will again see "OPatch succeeded.";
cross-check with "opatch lsinventory".
5. The patch file name will be of the form "p<patch no.>_<db version>_<os>.zip".
6. We can check the OPatch version using the "opatch -version" command.
7. Applying a patch generally takes only a couple of minutes.
8. To get the latest OPatch version, download patch 6880880 ("latest OPatch tool");
it contains the OPatch directory.
9. The contents of downloaded patches will typically include etc and files
directories and a README file.
10. Log file for Opatch utility can be found at $ORACLE_HOME/cfgtoollogs/opatch
11. OPatch also maintains an index of the commands executed with OPatch and the
log files associated with it in the history.txt file located in the
<ORACLE_HOME>/cfgtoollogs/opatch directory.
12. Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full
installations of the Oracle Database software. This means that you do not need to
install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g
Release 2 (11.2.0.2).
13. Direct upgrade to Oracle 10g is only supported if your database is running one of
the following releases: 8.0.6, 8.1.7, 9.0.1, or 9.2.0. If not, you will have to upgrade the
database to one of these releases or use a different upgrade option (like export/
import).
14. Direct upgrades to 11g are possible from existing databases with versions
9.2.0.4+, 10.1.0.2+ or 10.2.0.1+. Upgrades from other versions are supported only via
intermediate upgrades to a supported upgrade version.
http://avdeo.com/2008/08/19/opatch-utility-oracle-rdbms-patching/
In Oracle version 10.2.0.4.0, what does each number refer to?
The numbers in an Oracle version refer to:
10 – Major database release number
2 – Database Maintenance release number
0 – Application server release number
4 – Component Specific release number
0 – Platform specific release number
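The full version banner of a running instance can be checked from SQL*Plus:

```sql
-- Shows banners such as "Oracle Database 10g ... 10.2.0.4.0"
SELECT banner FROM v$version;
```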
==========================================================
Oracle ASM
##########
ASM offers support for Oracle RAC clusters without the requirement to install 3rd
party software, such as cluster aware volume managers or filesystems.
ASM is shipped as part of the database server software (Enterprise and Standard
editions) and does not cost extra money to run.
Prevents fragmentation of disks, so you don’t need to manually relocate data to tune
I/O performance
Using disk group makes configuration easier, as files are placed into disk groups
ASM provides striping and mirroring (fine- and coarse-grained – see below)
Striping—ASM spreads data evenly across all disks in a disk group to optimize
performance and utilization. This even distribution of database files eliminates the
need for regular monitoring and I/O performance tuning.
For example, if there are six disks in a disk group, pieces of each ASM file are written
to all six disks. These pieces come in 1 MB chunks known as extents. When a
database file is created, it is striped (divided into extents and distributed) across the
six disks, and allocated disk space on all six disks grows evenly. When reading the
file, file extents are read from all six disks in parallel, greatly increasing performance.
ASM supports 2-way mirroring, where each file extent gets one mirrored copy, and 3-
way mirroring, where each file extent gets two mirrored copies.
What are the ASM background processes in Oracle? Both an Oracle ASM instance
and an Oracle Database instance are built on the same technology. Like a database
instance, an Oracle ASM instance has memory structures (a System Global Area) and
background processes. In addition, Oracle ASM has a minimal performance impact
on a server. Rather than mounting a database, Oracle ASM instances mount disk
groups to make Oracle ASM files available to database instances.
There are at least two new background processes added for an ASM instance:
RBAL (Re-balancer): runs in both database and ASM instances. In the database
instance, it performs a global open of the ASM disks; a global open means that more
than one database instance can access the ASM disks at a time. In the ASM instance,
it also coordinates rebalance activity for the disk groups and the disk resources
controlled by ASM.
ASMB: this process contacts CSS using the disk group name and acquires the
associated ASM connect string. The connect string is subsequently used to connect
to the ASM instance.
Failure groups are defined within a disk group to support the required level of
redundancy. For two-way mirroring you would expect a disk group to contain two
failure groups so individual files are written to two locations.
ASM_DISKSTRING – Specifies a value that can be used to limit the disks considered
for discovery. Altering the default value may improve the speed of disk group mount
time and the speed of adding a disk to a disk group. Changing the parameter to a
value which prevents the discovery of already mounted disks results in an error. The
default value is NULL allowing all suitable disks to be considered.
For clustered systems, create one ASM instance per node (called +ASM1, +ASM2,
etc).
Data with different storage characteristics should be stored in different disk groups.
Each disk group can have different redundancy (mirroring) settings (high, normal
and external), different fail-groups, etc. However, it is generally not necessary to
create many disk groups with the same storage characteristics (i.e. +DATA1,
+DATA2, etc. all on the same type of disks).
To get started, create 2 disk groups – one for data and one for recovery files. Here is
an example:
Here is an example how you can enable automatic file management with such a
setup:
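The example commands referred to above did not survive in this copy; a minimal sketch of what they might look like (the disk group names, device paths and sizes are all assumptions):

```sql
-- Sketch: one disk group for data, one for recovery files
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/sdb1'
  FAILGROUP fg2 DISK '/dev/sdc1';

CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  DISK '/dev/sdd1';

-- In the database instance, point Oracle-managed files at the disk groups
ALTER SYSTEM SET db_create_file_dest        = '+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size = 100G    SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest      = '+FRA'  SCOPE=BOTH;
```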
You may also decide to introduce additional disk groups – for example, if you decide
to put historic data on low cost disks, or if you want ASM to mirror critical data
across 2 storage cabinets.
How are Oracle ASM files stored within the Oracle ASM filesystem structure?
Oracle ASM files are stored within the Oracle ASM filesystem structures as objects
that RDBMS/Oracle database instances access. An RDBMS instance treats Oracle
ASM files as standard filesystem files.
What are the Oracle ASM files that are stored within the Oracle ASM file hierarchy?
Files stored in an Oracle ASM diskgroup include:
1) Datafiles
2) Control files
3) Server parameter files (SPFILE)
4) Redo log files
What happens when you create a file/database file in ASM? What commands do you
use to create database files?
Some common commands used for creating database files are:
1) CREATE TABLESPACE
2) ADD DATAFILE
3) ADD LOGFILE
For example:
SQL> CREATE TABLESPACE ts1 DATAFILE '+DATA1' SIZE 10G;
The above command creates a datafile in the DATA1 diskgroup.
ASM's SPFILE resides inside ASM itself. This can be found in a number of ways, for
example by looking at the ASM alert log when ASM starts:
Machine: x86_64
Using parameter settings in server-side spfile
+DATA/asm/asmparameterfile/registry.253.766260991
System parameters with non-default values:
large_pool_size = 12M
instance_type = “asm”
remote_login_passwordfile= “EXCLUSIVE”
asm_diskgroups = “FLASH”
asm_diskgroups = “DATA”
asm_power_limit = 1
diagnostic_dest = “/opt/app/oracle”
Or by using asmcmd's spget command, which shows the spfile location registered
with the GPnP profile:
ASMCMD> spget
+DATA/asm/asmparameterfile/registry.253.766260991
==========================================================
Oracle RAC
##########
What is RAC? What is the benefit of RAC over single instance database?
In Real Application Clusters environments, all nodes concurrently execute
transactions against the same database. Real Application Clusters coordinates each
node’s access to the shared data to provide consistency and integrity.
Benefits:
Improve response time
Improve throughput
High availability
Transparency
Oracle Clusterware has two key components: the Oracle Cluster Registry (OCR) and
the voting disk.
The cluster registry holds all information about nodes, instances, services and, if
used, ASM storage; it also contains state information, i.e. whether they are available
and up.
The voting disk is used to determine whether a node has failed, i.e. become
separated from the majority. If a node is deemed to no longer belong to the majority,
it is forcibly rebooted and will, after the reboot, rejoin the surviving cluster nodes.
A virtual IP address or VIP is an alternate IP address that the client connections use
instead of the standard public IP address. To configure VIP address, we need to
reserve a spare IP address for each node, and the IP addresses must use the same
subnet as the public network.
For high availability, Oracle recommends that you have an odd number (three or
greater) of voting disks.
Voting disk – a file that resides on shared storage and manages cluster membership.
The voting disk reassigns cluster ownership between the nodes in case of failure.
The Voting Disk Files are used by Oracle Clusterware to determine which nodes are
currently members of the cluster. The voting disk files are also used in concert with
other Cluster components such as CRS to maintain the clusters integrity.
Oracle Database 11g Release 2 provides the ability to store the voting disks in ASM
along with the OCR. Oracle Clusterware can access the OCR and the voting disks
present in ASM even if the ASM instance is down. As a result CSS can continue to
maintain the Oracle cluster even if the ASM instance has failed.
How many voting disks are you maintaining ?
http://www.toadworld.com/KNOWLEDGE/KnowledgeXpertforOracle/tabid/648/TopicID/RACR2ARC6/Default.aspx
By default Oracle will create 3 voting disk files in ASM.
Oracle expects that you will configure at least 3 voting disks for redundancy
purposes. You should always configure an odd number of voting disks >= 3. This is
because loss of more than half your voting disks will cause the entire cluster to fail.
You should plan on allocating 280MB for each voting disk file. For example, if you
are using ASM and external redundancy then you will need to allocate 280MB of disk
for the voting disk. If you are using ASM and normal redundancy you will need
560MB.
SCAN provides a single domain name (via DNS), allowing end-users to address a
RAC cluster as if it were a single IP address. SCAN works by replacing a hostname or
IP list with virtual IP addresses (VIPs).
Single client access name (SCAN) is meant to facilitate single name for all Oracle
clients to connect to the cluster database, irrespective of number of nodes and node
location. Until now, we had to keep adding multiple address records in all clients'
tnsnames.ora whenever a new node was added to or deleted from the cluster.
Single Client Access Name (SCAN) eliminates the need to change TNSNAMES entry
when nodes are added to or removed from the Cluster. RAC instances register to
SCAN listeners as remote listeners. Oracle recommends assigning 3 addresses to
SCAN, which will create 3 SCAN listeners even if the cluster has dozens of
nodes. SCAN is a domain name registered to at least one and up to three IP
addresses, either in DNS (Domain Name Service) or GNS (Grid Naming Service). The
SCAN must resolve to at least one address on the public network. For high
availability and scalability, Oracle recommends configuring the SCAN to resolve to
three addresses.
http://www.freeoraclehelp.com/2011/12/scan-setup-for-oracle-11g-release211gr2.html
What are SCAN components in a cluster?
1.SCAN Name
2.SCAN IPs (3)
3.SCAN Listeners (3)
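A client tnsnames.ora entry using SCAN might look like the sketch below (the SCAN name and service name are assumptions):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```

Because only the single SCAN name is referenced, this entry does not change when nodes are added to or removed from the cluster.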
What is FAN?
Fast Application Notification (FAN) relates to events concerning instances, services
and nodes. It is a notification mechanism that Oracle RAC uses to notify other
processes about configuration and service-level information, including service status
changes such as UP or DOWN events. Applications can respond to FAN events and
take immediate action.
What is TAF?
TAF (Transparent Application Failover) is a configuration that allows session fail-
over between different nodes of a RAC database cluster.
Transparent Application Failover (TAF). If a communication link failure occurs after
a connection is established, the connection fails over to another active node. Any
disrupted transactions are rolled back, and session properties and server-side
program variables are lost. In some cases, if the statement executing at the time of
the failover is a Select statement, that statement may be automatically re-executed on
the new connection with the cursor positioned on the row on which it was positioned
prior to the failover.
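TAF is typically enabled on the client side through the FAILOVER_MODE clause in tnsnames.ora; a sketch (the host, service name and retry values are assumptions):

```
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE =
        (TYPE = SELECT)   # re-execute in-flight SELECTs after failover
        (METHOD = BASIC)  # connect to the surviving node only at failover time
        (RETRIES = 20)
        (DELAY = 5)
      )
    )
  )
```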
==========================================================
Data Guard provides a comprehensive set of services that create, maintain, manage,
and monitor one or more standby databases to enable production Oracle databases
to survive disasters and data corruptions. Data Guard maintains these standby
databases as copies of the production database. Data Guard can be used with
traditional backup, restoration, and cluster techniques to provide a high level of data
protection and data availability.
What is DG Broker?
DG Broker is the management and monitoring tool for Data Guard.
The Oracle Data Guard broker is a distributed management framework that
automates and centralizes the creation, maintenance and monitoring of a Data
Guard configuration.
All management operations can be performed either through OEM, which uses the
broker, or through the broker's command-line interface, DGMGRL.
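A quick look at a configuration from the DGMGRL command line (the database names follow the chicago/boston example used later in this document):

```
DGMGRL> CONNECT sys@chicago
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE 'boston';
```

SHOW CONFIGURATION reports the overall state of the broker configuration; SHOW DATABASE reports the state of an individual member.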
Standby Database :
Physical standby database provides a physically identical copy of the primary
database, with on disk database structures that are identical to the primary database
on a block-for-block basis.
Standby capability is available on Standard Edition (user-managed standby only; the
Data Guard feature itself requires Enterprise Edition).
REFERENCE:
http://neeraj-dba.blogspot.in/2011/06/difference-between-dataguard-and.html
What are the differences between Physical/Logical standby databases? How would
you decide which one is best suited for your environment?
Physical standby DB:
As the name suggests, it is a physically identical copy (datafiles, schema and other
physical identity) of the primary database.
It is synchronized with the primary database through Redo Apply, which applies
redo to the standby DB.
Logical standby DB:
As the name suggests, the logical information is the same as in the production
database, but the physical structure can be different.
It is synchronized with the primary database through SQL Apply: redo received from
the primary database is transformed into SQL statements, which are then executed
on the standby DB.
We can open a physical standby DB read-only and make it available to application
users (only SELECT is allowed during this period). We cannot apply redo logs
received from the primary database at that time.
We do not see such issues with a logical standby database. We can open the database
in normal mode and make it available to the users and, at the same time, apply
archived logs received from the primary database.
For a large OLTP transactional database it is better to choose a logical standby
database.
REFERENCE:
http://gavinsoorma.com/2009/07/11g-snapshot-standby-database/
Snapshot Standby Database (UPDATEABLE SNAPSHOT FOR TESTING)
A snapshot standby database is a fully updatable standby database that is created by
converting a physical standby database into a snapshot standby database.
Like a physical or logical standby database, a snapshot standby database receives and
archives redo data from a primary database. Unlike a physical or logical standby
database, a snapshot standby database does not apply the redo data that it receives.
The redo data received by a snapshot standby database is not applied until the
snapshot standby is converted back into a physical standby database, after first
discarding any local updates made to the snapshot standby database.
REFERENCE:
http://docs.oracle.com/cd/B28359_01/server.111/b28294/title.htm
What is the Default mode will the Standby will be, either SYNC or ASYNC?
ASYNC
Data Guard Architecture?
Data Guard Configurations:
A Data Guard configuration consists of one production database and one or more
standby databases. The databases in a Data Guard configuration are connected by
Oracle Net and may be dispersed geographically. There are no restrictions on where
the databases are located, provided they can communicate with each other.
Dataguard Architecture
The Oracle 9i Data Guard architecture incorporates the following items:
Primary Database:
A Data Guard configuration contains one production database, also referred to as the
primary database, that functions in the primary role. This is the database that is
accessed by most of your applications.
Standby Database:
A standby database is a transactionally consistent copy of the primary database.
Using a backup copy of the primary database, you can create up to nine standby
databases and incorporate them in a Data Guard configuration. Once created, Data
Guard automatically maintains each standby database by transmitting redo data
from the primary database and then applying the redo to the standby database.
The types of standby databases are as follows:
What are the services required on the primary and standby database ?
The services required on the primary database are:
• Log Writer Process (LGWR) – Collects redo information and updates the online
redo logs. It can also create local archived redo logs and transmit online redo to
standby databases.
• Archiver Process (ARCn) – One or more archiver processes make copies of online
redo logs either locally or remotely for standby databases.
• Fetch Archive Log (FAL) Server – Services requests for archived redo logs from FAL
clients running on multiple standby databases. Multiple FAL servers can run on a
primary database, one for each FAL request.
The services required on the standby database are:
• Fetch Archive Log (FAL) Client – Pulls archived redo log files from the primary site.
Initiates transfer of archived redo logs when it detects a gap sequence.
• Remote File Server (RFS) – Receives archived and/or standby redo logs from the
primary database.
• Archiver (ARCn) Processes – Archives the standby redo logs applied by the
managed recovery process (MRP).
• Managed Recovery Process (MRP) – Applies archive redo log information to the
standby database.
Maximum Availability
This protection mode provides the highest level of data protection that is possible
without compromising the availability of a primary database. Transactions do not
commit until all redo data needed to recover those transactions has been written to
the online redo log and to at least one synchronized standby database. If the primary
database cannot write its redo stream to at least one synchronized standby database,
it operates as if it were in maximum performance mode to preserve primary database
availability until it is again able to write its redo stream to a synchronized standby
database.
This mode ensures that no data loss will occur if the primary database fails, but only
if a second fault does not prevent a complete set of redo data from being sent from
the primary database to at least one standby database.
Maximum Performance
This protection mode provides the highest level of data protection that is possible
without affecting the performance of a primary database. This is accomplished by
allowing transactions to commit as soon as all redo data generated by those
transactions has been written to the online log. Redo data is also written to one or
more standby databases, but this is done asynchronously with respect to transaction
commitment, so primary database performance is unaffected by delays in writing
redo data to the standby database(s).
This protection mode offers slightly less data protection than maximum availability
mode and has minimal impact on primary database performance.
This is the default protection mode.
Maximum Protection
This protection mode ensures that zero data loss occurs if a primary database fails.
To provide this level of protection, the redo data needed to recover a transaction
must be written to both the online redo log and to at least one synchronized standby
database before the transaction commits. To ensure that data loss cannot occur, the
primary database will shut down, rather than continue processing transactions, if it
cannot write its redo stream to at least one synchronized standby database.
Because this data protection mode prioritizes data protection over primary database
availability, Oracle recommends that a minimum of two standby databases be used
to protect a primary database that runs in maximum protection mode to prevent a
single standby database failure from causing the primary database to shut down.
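The protection mode is set on the primary database with an ALTER DATABASE statement. A sketch follows; note that raising the mode has traditionally required the database to be in the MOUNT state (the exact restrictions depend on the Oracle version):
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> ALTER DATABASE OPEN;
The same statement accepts MAXIMIZE PROTECTION and MAXIMIZE PERFORMANCE for the other two modes.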
A standby database automatically applies redo logs when they arrive from the
primary database. But in some cases, we want to create a time lag between the
archiving of a redo log at the primary site, and the application of the log at the
standby site.
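Such a lag can be configured with the DELAY attribute (in minutes) of the remote archive destination; the 240-minute value below is purely illustrative:
LOG_ARCHIVE_DEST_2=
'SERVICE=boston LGWR ASYNC DELAY=240
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=boston'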
Primary database (chicago) initialization parameters:
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl','/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/chicago/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
'SERVICE=boston LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
Standby database (boston) initialization parameters:
DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl','/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT=
'/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/boston/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
'SERVICE=chicago LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
FAL_CLIENT=boston
==========================================================
===============================================
More:
1. Run the top command in Linux to check CPU usage.
2. Run vmstat, sar, or prstat to get more information on CPU and memory usage and
possible blocking.
3. Enable SQL trace before running your queries, then format the resulting trace file
with tkprof to create a readable output file.
4. Using the execution plan and the elapsed time reported for each query, tune the
expensive statements accordingly.
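Step 3 can be sketched as follows; the trace identifier and file names are illustrative:
SQL> ALTER SESSION SET tracefile_identifier = 'mytrace';
SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> -- run the queries to be analyzed
SQL> ALTER SESSION SET sql_trace = FALSE;
$ tkprof orcl_ora_12345_mytrace.trc tkprof_out.txt sys=no sort=exeela
The sort=exeela option lists statements by elapsed execution time, which is usually the most useful ordering for tuning.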
If you are getting high "buffer busy waits", how can you find the reason behind it?
A buffer busy wait means that a session is waiting for a block that is already in the
buffer cache but is currently busy, for example being read in or held by another
session. The wait can be on an undo block, a data block, or a segment header.
Run the first query below to find the P1, P2, and P3 values of the session causing the
buffer busy wait, then plug those values into the second query.
SQL> SELECT p1 "File #", p2 "Block #", p3 "Reason Code" FROM v$session_wait
WHERE event = 'buffer busy waits';
SQL> SELECT owner, segment_name, segment_type FROM dba_extents
WHERE file_id = &P1 AND &P2 BETWEEN block_id AND block_id + blocks - 1;
• Latch waits
• Top SQL
• Instance activity
• File I/O and segment statistics
• Memory allocation
• Buffer waits
What is the difference between db file sequential read and db file scattered read?
A db file sequential read is associated with index reads, whereas a db file scattered
read is associated with full table scans.
A db file sequential read reads a single block into contiguous memory, while a db file
scattered read reads multiple blocks and scatters them into the buffer cache.
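To see how much an instance suffers from each of these waits, the cumulative counters can be queried, for example:
SQL> SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event IN ('db file sequential read', 'db file scattered read');
A high time_waited for scattered reads often points at unwanted full table scans; for sequential reads, at hot or inefficient indexes.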
Which factors are to be considered for creating an index on a table? How do you
select columns for an index?
Whether to create an index depends on the size of the table and the volume of data.
If the table is large and queries or reports select only a small fraction of its rows, an
index is worthwhile. The basic criteria for choosing a column are its cardinality
(selectivity) and how frequently it appears in the WHERE clause of queries. Business
rules also force index creation: defining a primary key or unique key automatically
creates a unique index.
It is important to note that creating too many indexes degrades DML performance on
the table, because a single transaction may then have to modify several index
segments as well as the table itself.
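As a sketch of the selection process (the table and column names here are hypothetical), you can first estimate a column's selectivity and then create the index if it is a good candidate:
SQL> SELECT COUNT(DISTINCT customer_id) / COUNT(*) AS selectivity FROM orders;
SQL> CREATE INDEX orders_cust_idx ON orders (customer_id);
A selectivity close to 1 (many distinct values) generally makes a B-tree index effective; very low selectivity usually does not.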
How can you track the password change for a user in Oracle?
Oracle only tracks the date on which a password will expire, based on when it was
last changed. So by listing DBA_USERS.EXPIRY_DATE and subtracting
PASSWORD_LIFE_TIME you can determine when the password was last changed.
You can also check the last password change time directly from the PTIME column of
the SYS.USER$ table (on which the DBA_USERS view is based). But if
PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX are set in a profile
assigned to the user account, you can also reference the dictionary table
SYS.USER_HISTORY$ for when the password was changed for this account.
SELECT user$.NAME, user$.PASSWORD, user$.ptime,
user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;
You have more than 3 instances running on a Linux server. How can you determine
which shared memory segments and semaphores are associated with which
instance?
oradebug is an undocumented utility supplied by Oracle; the oradebug help
command lists the available commands. Connect to each instance in turn and run:
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
The ipc command dumps that instance's shared memory and semaphore details into a trace file, and tracefile_name shows where the file was written.
Temp tablespace is 100% full and there is no space available to add datafiles to
increase the temp tablespace. What can you do in that case to free up TEMP space?
Closing some of the idle sessions connected to the database will help you free some
TEMP space. Otherwise, for a dictionary-managed temporary tablespace, you can
also run 'ALTER TABLESPACE ... DEFAULT STORAGE (PCTINCREASE 1)'
followed by 'ALTER TABLESPACE ... DEFAULT STORAGE (PCTINCREASE 0)'
to trigger SMON to coalesce the free space.
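To find which sessions are consuming the temp space, the sort/temp segment usage view can be joined to the session view (v$tempseg_usage in 10g and later; v$sort_usage in older releases):
SQL> SELECT s.sid, s.serial#, s.username, u.tablespace, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr = u.session_addr
ORDER BY u.blocks DESC;
The heaviest consumers at the top of this list are the candidates to investigate or disconnect.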
Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB
block size for your database and you need to insert an 8KB row, Oracle will use
multiple blocks and store the row in pieces.
Some conditions that will cause row chaining are: tables whose row size exceeds the
block size; tables with LONG and LONG RAW columns, which are prone to having
chained rows; and tables with more than 255 columns, since Oracle breaks such wide
rows up into pieces.
So, instead of just having a forwarding address on one block and the data on another
(as with row migration), the data itself is spread across two or more blocks.
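Chained (and migrated) rows can be detected with ANALYZE; the table name here is illustrative, and the CHAINED_ROWS holding table is created by the utlchain.sql script shipped with the database:
SQL> @?/rdbms/admin/utlchain.sql
SQL> ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;
SQL> SELECT COUNT(*) FROM chained_rows WHERE table_name = 'ORDERS';
After a plain ANALYZE TABLE ... COMPUTE STATISTICS, the CHAIN_CNT column of DBA_TABLES gives the same information in aggregate.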