

What is Oracle RAC Node Eviction


One of the most common and complex issues in RAC is node eviction. A node is evicted from the cluster after it kills
itself because it is no longer able to service the applications.
This generally happens during a communication failure between the instances, when an instance is not able to
send heartbeat information to the control file, and for various other reasons.
Oracle Clusterware is designed to perform a node eviction by removing one or more nodes from the cluster if some
critical problem is detected. A
critical problem could be a node not responding via a network heartbeat, a node not responding via a disk heartbeat,
a hung or severely degraded machine, or
a hung ocssd.bin process. The purpose of this node eviction is to maintain the overall health of the cluster by
removing bad members.
During failures, to avoid data corruption, the failing instance evicts itself from the cluster group. The node eviction
process is reported as
Oracle error ORA-29740 in the alert log and LMON trace files.

Consolidated AWR report for RAC


11gR2 consolidated AWR report for RAC
awrgrpt.sql
This is a cluster-wide AWR report, so you can see information from all the nodes in the same section, and
you can also see aggregated statistics from all the instances at the same time (totals, averages, and
standard deviations).
awrgdrpt.sql
This is a cluster-wide AWR diff report (like awrddrpt.sql in 11gR1), comparing the statistics differences between
two different snapshot intervals, across all nodes in the cluster.
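For reference, both scripts ship under $ORACLE_HOME/rdbms/admin and are run from SQL*Plus, where they prompt for the snapshot range and report name:
SQL> @?/rdbms/admin/awrgrpt.sql
SQL> @?/rdbms/admin/awrgdrpt.sql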

basic rman commands


1. All Backup Sets, Backup Pieces, and Proxy Copies

To list all backup sets, backup pieces, and proxy copies:


RMAN> list backup;
2. Expired Backup List
We can also specify the EXPIRED keyword to identify those backups that were not found during a crosscheck:
RMAN> list expired backup;
3. List Backup by File
You can list copies of datafiles, control files, and archived logs. Specify the desired objects with the listObjList or
recordSpec clause. If you do not specify an object, then RMAN displays copies of all database files and archived redo
logs. By default, RMAN lists backups in verbose mode, which means that it provides extensive, multiline information.
RMAN> list backup by file;
4. List all Archived Log Files
You can list all archived log files as follows
RMAN> list archivelog all;
5. Summary Lists
By default the LIST output is highly detailed, but you can also specify that RMAN display the output in summarized
form. Specify the desired objects with the listObjList or recordSpec clause. If you do not specify an object, then
LIST BACKUP displays all backups. By default, RMAN lists backups in verbose mode.
RMAN> list backup summary;


Difference between Raw Device and Block Device


A RAW device reads/writes 0 or more bytes in a stream and can be opened using direct I/O. RAW devices can be
faster for certain applications like databases because they do not contain a file system, and for the same reason they
don't use a cache. You don't mount a RAW device.
A BLOCK device reads/writes bytes in fixed-size blocks, as in disk sectors. The block device is cached: I/O to the
device is read into memory, referred to as the buffer cache, in large blocks. Block devices are used to mount
filesystems. Each disk has a block device interface where the system makes the device byte-addressable, and you
can write a single byte in the middle of the disk.


ORA-00354: corrupt redo log block header



Issue :
Normal users could not connect to the database.
It returned ORA-00257: archiver error. Connect internal only, until freed.
Whenever you try to archive the redo log, it returns the messages:
ORA-16038: log %s sequence# %s cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log %s thread %s: %s
Cause:
In the alert log you will see the ORA-16038, ORA-00354, and ORA-00312 error series.
The errors are produced because the database failed to archive an online redo log due to a corruption in the online redo log file.
Solution to the problem:
Get the database running by clearing the unarchived redo log.
SQL> select * from v$log;
This will show that some log is not archived; it may be the corrupted redo log. We should clear it without archiving it:
SQL> alter database clear unarchived logfile '<logfile name>';
This removes the corruption by discarding the contents of the cleared online redo log file.
Try to switch the log and confirm it is working fine.
If not, then you may have to recreate the redo log for that group only, as sketched below.
Make a complete backup of the database.
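As a rough sketch, assuming group 2 turns out to be the unarchived/corrupt group (the file path and size below are placeholders):
SQL> select group#, status, archived from v$log;
SQL> alter database clear unarchived logfile group 2;
SQL> alter system switch logfile;
If the group still cannot be used, drop and recreate it:
SQL> alter database drop logfile group 2;
SQL> alter database add logfile group 2 ('/u01/oradata/redo02.log') size 50m;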


How to stop and start cluster including ASM and DB


Let us say we have two nodes.
Node 1: operation to stop the CRS
# crsctl stop crs
# crsctl status resource -t
Node 2: operation to stop the CRS
# crsctl stop crs
# crsctl status resource -t

Confirm from the following logs that all services on both nodes have stopped without any problem.

1. $GRID_HOME/log/<hostname>/gipcd/gipcd.log
2. $GRID_HOME/log/<hostname>/agent/ohasd/orarootagent_root/orarootagent_root.log
3. $GRID_HOME/log/<hostname>/alert<hostname>.log
4. $GRID_HOME/log/<hostname>/ohasd/ohasd.log

When starting the services, please make sure you start the CRS of Node 1 first when you run 11g Release 2 (11.2.0.2). If you
start the second node first, you may get errors and may not be able to start ASM (LMON failed status), which leads to a few
more errors.
When you do a deeper analysis you may find that the private interconnect IP gets assigned a 169.x.x.x (link-local) address, which leads to the following:
- Snip from ASM Log
Private Interface eth1:1 configured from GPnP for use as a private interconnect.
[name=eth1:1, type=1, ip=169.254.85.248

This looks like a bug (Oracle Note ID 1374360.1 and Bug 12425730).


Create an SR with Oracle and apply the patch accordingly.
The workaround is to make sure you stop and start the nodes in the same sequence in which you stopped them, as sketched below.
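A minimal sketch of that sequence (same two nodes as above; wait for Node 1's resources, including ASM, to be ONLINE before starting Node 2):
Node 1:
# crsctl start crs
# crsctl status resource -t
Node 2 (only after Node 1 is fully up):
# crsctl start crs
# crsctl status resource -t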


Detecting who's causing excessive redo generation


Solution I: AWR Report
The first solution that comes to mind is to go through the AWR report, or the DBA_HIST_SESSMETRIC_HISTORY view.
Oracle DBA_HIST_SESSMETRIC_HISTORY displays the history of several important session metrics, and we hope
to get a clue to our problem by analyzing the metrics of past sessions. Unfortunately, sometimes we find no rows after
querying the view.
Solution II: Enabling Oracle Session History Metrics in DBCONSOLE
This solution suggests enabling Oracle Session History Metrics in DBCONSOLE. You can set a small threshold value for
Redo Writes per second.
Solution III: Calculating the Metric for Redo Generated
If you are not using Enterprise Manager, then you will have to calculate the metric information manually. You can
calculate the metric for redo generated per second with the formula DeltaRedoSize / Seconds, where DeltaRedoSize
is the difference in select value from v$sysstat where name = 'redo size' between the end and start of the sample period and
Seconds is the number of seconds in the sample period.
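A rough sketch of the manual calculation (sample the statistic twice, a known number of seconds apart):
SQL> select value from v$sysstat where name = 'redo size';  -- at the start of the sample period
-- wait for the sample period, e.g. 60 seconds
SQL> select value from v$sysstat where name = 'redo size';  -- at the end of the sample period
Redo generated per second = (second value - first value) / seconds in the sample period.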

Solution IV: Querying the Oracle V$SESS_IO View


When undo is generated in any transaction then it will automatically generate redo as well. This solution examines
the amount of undo generated in order to find the sessions that are generating lots of redo.
Oracle V$SESS_IO view lists I/O statistics for each user session. The column BLOCK_CHANGES shows the number
of blocks changed by the session. A high value for this column means the session is generating lots of redo. You will
have to run the query below multiple times and examine the delta between each occurrence of BLOCK_CHANGES.
Large deltas indicate high redo generation by the session. Use this solution to check for programs
that are generating lots of redo when those programs run more than one transaction.
SQL> SELECT S1.SID, S1.SERIAL#, S1.USERNAME, S1.PROGRAM, I1.BLOCK_CHANGES
FROM V$SESSION S1, V$SESS_IO I1 WHERE S1.SID = I1.SID ORDER BY 5 DESC, 1, 2, 3, 4;
Solution V: Querying the Oracle V$TRANSACTION View
Oracle V$TRANSACTION is a Data Dictionary view that lists the active transactions in the system. This view can be
used to track undo by session. The USED_UBLK column of this view shows the number of undo blocks used and the
USED_UREC column shows the number of undo records used by the transaction.
The query below can help you find the particular transactions that are generating redo. Running the query multiple
times and analyzing the delta between each occurrence of USED_UBLK and USED_UREC will help you infer that
large deltas indicate high redo generation by the session.
SQL> SELECT S1.SID, S1.SERIAL#, S1.USERNAME, S1.PROGRAM, T1.USED_UBLK, T1.USED_UREC
FROM V$SESSION S1, V$TRANSACTION T1 WHERE S1.TADDR = T1.ADDR ORDER BY 5 DESC, 6 DESC, 1, 2,
3, 4;
Solution VI: Tracking Undo Generated By All Sessions
The following statement displays a record for all sessions that have generated undo. It shows both how many undo
blocks and undo records a session made.
SELECT S1.SID, S1.USERNAME, R1.NAME, T1.START_TIME, T1.USED_UBLK, T1.USED_UREC
FROM V$SESSION S1, V$TRANSACTION T1, V$ROLLNAME R1
WHERE T1.ADDR = S1.TADDR AND R1.USN = T1.XIDUSN;
Solution VII: Collecting Statistics from V$SESSTAT into AWR
The Oracle V$SESSTAT view records statistical data about the sessions that access it. You will have to query the
V$STATNAME view in order to find the name of the statistic associated with each statistic number. In this solution we
collect statistics from the V$SESSTAT view into our own private AWR-like views.
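A sketch of such a query, joining V$SESSTAT to V$STATNAME on the 'redo size' statistic (you would sample its output periodically into your own history tables):
SQL> select s.sid, s.serial#, s.username, s.program, st.value as redo_size
     from v$session s, v$sesstat st, v$statname sn
     where st.sid = s.sid
     and st.statistic# = sn.statistic#
     and sn.name = 'redo size'
     order by st.value desc;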


Which sessions are generating more redo logs in Oracle

SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
FROM v$session s, v$sess_io i
WHERE s.sid = i.sid
ORDER BY 5 DESC;

SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5 DESC, 6 DESC;

Recover Database from ORA-00333: redo log read error


In a development environment, it is a very common scenario to have multiple databases on a single
machine using VMware (i.e., each VM contains one database). Often those machines don't
have a consistent power backup, so we face power failures or VMware hang-ups and are
forced to restart the machine while the databases are still up and running. After restarting the machine, we have
mostly got the following error:

ORA-00333: redo log read error block <number> count <number>

Here are the steps to overcome the error


SQL> startup
ORACLE instance started.
Total System Global Area ***** bytes
Fixed Size               ***** bytes
Variable Size            ***** bytes
Database Buffers         ***** bytes
Redo Buffers             ***** bytes
Database mounted.
ORA-00333: redo log read error block *Number* count *Number*
Step 1: As the Db is in mount mode, We can query v$log & v$logfile to identify the status of log file group
and their member.
SQL> select l.status, member from v$logfile inner join v$log l using (group#);
STATUS   MEMBER
-------- --------------------------------------
CURRENT  /oracle/fast_recovery_area/redo01.log
INACTIVE /oracle/fast_recovery_area/redo02.log
INACTIVE /oracle/fast_recovery_area/redo03.log
Step 2: Recover the database using the backup controlfile.
SQL> recover database using backup controlfile;
ORA-00279: change <SCN> generated at <time> needed for thread 1
ORA-00289: suggestion : /oracle/fast_recovery_area/archivelog/o1_mf_1_634_%u_.arc
ORA-00280: change <SCN> for thread 1 is in sequence #<sequence#>
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

Step 3: Give the 'CURRENT' log file member along with its location as input. If it does not work, give the other log file
members along with their locations at the input prompt. In our case we give
/oracle/fast_recovery_area/redo01.log
Log applied.
Media recovery complete.
Step 4: Open the database with RESETLOGS.
SQL> alter database open resetlogs;
Database altered.


RMAN Show Commands


The SHOW command is used to display the values of current RMAN configuration settings.
RMAN> show all;
Shows all parameters.
RMAN> show archivelog backup copies;
Shows the number of archivelog backup copies.
RMAN> show archivelog deletion policy;
Shows the archivelog deletion policy.
RMAN> show auxname;
Shows the auxiliary database information.
RMAN> show backup optimization;
Shows whether optimization is on or off.
RMAN> show auxiliary channel;
Shows how the normal channel and auxiliary channel are configured.
RMAN> show controlfile autobackup;
Shows whether autobackup is on or off.
RMAN> show controlfile autobackup format;
Shows the format of the autobackup control file.
RMAN> show datafile backup copies;
Shows the number of datafile backup copies being kept.
RMAN> show default device type;

Shows the default device type (disk or tape).


RMAN> show encryption algorithm;
Shows the encryption algorithm currently in use.
RMAN> show encryption for database;
Shows the encryption for the database.
RMAN> show encryption for tablespace;
Shows the encryption for the tablespace.
RMAN> show exclude;
Shows the tablespaces excluded from the backup.
RMAN> show maxsetsize;
Shows the maximum size for backup sets. The default value is unlimited.
RMAN> show retention policy;
Shows the policy for datafile and control file backups and copies that RMAN marks as obsolete.
RMAN> show snapshot controlfile name;
Shows the snapshot control filename.
Note: You can see any nondefault RMAN configured settings in the V$RMAN_CONFIGURATION database view.


How to find out the Master Node of a RAC


Option 1:
# ocrconfig -showbackup
The node that stores the OCR backups is the master node.
Option 2:
$ grep -i "master node" ocssd.log | tail -1   (run in the cssd log directory)
[CSSD]CLSS-3001: local node number 1, master node number 1
Above grep shows the master node in the cluster is node number 1.
Option 3:
$ grep master rac3_diag_4217.trc

I'm the master node


Option 4:
Query V$GES_RESOURCE to identify the master node.
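For example (a rough sketch; MASTER_NODE is a column of V$GES_RESOURCE and node numbering there is 0-based), you can see how resources are mastered across the nodes with:
SQL> select master_node, count(*) from v$ges_resource group by master_node;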

How do I identify the OCR file location


Do a simple search for ocr.loc:
/var/opt/oracle/ocr.loc
or
/etc/ocr.loc
or
# ocrcheck

How to delete all archive logs in ASM


The best option is to use RMAN with nocatalog and remove the old archive logs if they are not required:
$ rman target / nocatalog
RMAN> delete archivelog all completed before 'sysdate-3';

How to recover from a DROP or TRUNCATE table by using RMAN.


There are three options available:
1. Restore and recover the primary database to a point in time before the drop. This is an extreme measure for one
table as the entire database goes back in time.
2. Restore and recover the tablespace to a point in time before the drop. This is a better option, but again, it takes the
entire tablespace back in time.
3. Restore and recover a subset of the database as a DUMMY database to export the table data and import it into the
primary database. This is the best option as only the dropped table goes back in time to before the drop.
So option 3 is best.
Steps for Option 3
1. To recover from a dropped or truncated table, a dummy database (copy of primary) will be restored and recovered
to point in time so the table can be exported.
2. Once the table export is complete, the table can be imported into the primary database. This dummy database can
be a subset of the primary database. However, the dummy database must include the SYSTEM, UNDO (or
ROLLBACK), and the tablespace(s) where the dropped/truncated table resides.
The simplest method to create this dummy database is to use the RMAN DUPLICATE command.
RMAN Duplicate Command

CONNECT TARGET SYS/oracle@trgt
CONNECT AUXILIARY SYS/oracle@dupdb
DUPLICATE TARGET DATABASE TO dupdb
  NOFILENAMECHECK
  UNTIL TIME 'SYSDATE-7';
Assuming the following
The target database trgt and duplicate database dupdb are on different hosts but have exactly the same directory
structure.
You want to name the duplicate database files the same as the target files.
You are not using a recovery catalog.
You are using automatic channels for disk and sbt, which are already configured.
You want to recover the duplicate database to one week ago in order to view the data as it appeared at that
time (and you have the required backups and logs to recover the duplicate to that point in time).
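Once the dummy database is open, the remaining step is to export the table from it and import it into the primary database. A hedged sketch using Data Pump (the directory object, schema, and table names are placeholders; copy the dump file between hosts if the directory is not shared):
$ expdp system/oracle@dupdb directory=DATA_PUMP_DIR dumpfile=dropped_tab.dmp tables=SCOTT.DROPPED_TAB
$ impdp system/oracle@trgt directory=DATA_PUMP_DIR dumpfile=dropped_tab.dmp tables=SCOTT.DROPPED_TAB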

Difference between locks and latches


Locks are used to protect data or resources from simultaneous use by multiple sessions, which might otherwise
leave them in an inconsistent state. Locks are an external mechanism, meaning users can also set locks on objects by using
various Oracle statements.
Latches serve the same purpose but work at an internal level. Latches are used to protect and control access to
internal data structures such as various SGA buffers. They are handled and maintained by Oracle and we cannot access or
set them; this is the main difference.

Flashback Database disabled automatically


Issue 1:
Initially Flashback Database was enabled, but we noticed that Flashback had been disabled automatically some time ago.
Reason:
It could be because the flash recovery area became 100% full.
Once the flash recovery area becomes 100% full, Oracle logs in the alert log that Flashback will be disabled, and it
automatically turns off Flashback Database without user intervention.

ASM Limitation
ASM has the following size limits:
63 disk groups in a storage system
10,000 ASM disks in a storage system
1 million files for each disk group

How can I check if there is anything rolling back?


It depends on how you killed the process.
If you did an ALTER SYSTEM KILL SESSION, you should be able to look at the USED_UBLK column in v$transaction to get an
estimate of the rollback being done.
If you killed the server process at the OS level and PMON is recovering the transaction, you can look at the
V$FAST_START_TRANSACTIONS view to get the estimate.
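A short sketch of each check (the first query is the USED_UBLK estimate for a killed session; the second shows rollback progress for transactions being recovered in the background):
SQL> select s.sid, s.serial#, s.username, t.used_ublk, t.used_urec
     from v$session s, v$transaction t
     where s.taddr = t.addr;
SQL> select usn, state, undoblocksdone, undoblockstotal
     from v$fast_start_transactions;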

RMAN Restore Preview


The PREVIEW option of the RESTORE command allows you to identify the backups required to complete a specific
restore operation. The output generated by the command is in the same format as the LIST command. In addition the
PREVIEW SUMMARY command can be used to produce a summary report with the same format as the LIST
SUMMARY command. The following examples show how these commands are used:
# Spool output to a log file
SPOOL LOG TO 'c:\oracle\rmancmd\restorepreview.lst';
# Show what files will be used to restore the SYSTEM tablespace's datafile
RESTORE DATAFILE 2 PREVIEW;
# Show what files will be used to restore a specific tablespace
RESTORE TABLESPACE users PREVIEW;
# Show a summary for a full database restore
RESTORE DATABASE PREVIEW SUMMARY;
# Close the log file
SPOOL LOG OFF;

How to Create AWR Report Manually


Step 1 Create snapshot manually
exec DBMS_WORKLOAD_REPOSITORY.create_snapshot();
Step 2 Create AWR Report
$cd $ORACLE_HOME
$cd rdbms
$cd admin
$sqlplus /nolog
SQL>connect / as sysdba
SQL>@awrrpt.sql
.
.
Enter value for begin_snap: 1405
.
.

Enter value for end_snap: 1406


.
.
Enter value for report_name: awrrpt_1_1405_1406.html


ADDM(Automatic Database Diagnostic Monitor) in Oracle Database 10g


The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to
identify potential performance bottlenecks. For each of the identified issues it locates the root cause and provides
recommendations for correcting the problem.
An ADDM analysis is performed every time an AWR snapshot is taken and the results are saved in the database
provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL.
The ADDM analysis includes:
CPU bottlenecks
Undersized Memory Structures
I/O capacity issues
High load SQL statements
High load PL/SQL execution and compilation, as well as high load Java usage
RAC specific issues
Sub-optimal use of Oracle by the application
Database configuration issues
Concurrency issues
Hot objects and top SQL for various problem areas
ADDM analysis results are represented as a set of FINDINGs
Example ADDM Report
FINDING 1: 31% impact (7798 seconds)

SQL statements were not shared due to the usage of literals. This resulted in additional hard parses which were
consuming significant database time.
RECOMMENDATION 1: Application Analysis, 31% benefit (7798 seconds)
ACTION: Investigate application logic for possible use of bind variables
instead of literals. Alternatively, you may set the parameter CURSOR_SHARING to FORCE.
RATIONALE: SQL statements with PLAN_HASH_VALUE 3106087033 were found to be using literals. Look in V$SQL
for examples of such SQL statements.
In this example, the finding points to a particular root cause, the usage of literals in SQL statements, which is
estimated to have an impact of about 31% of total DB time in the analysis period.

In addition to problem diagnostics, ADDM recommends possible solutions. When appropriate, ADDM recommends
multiple solutions for the DBA to choose from. ADDM considers a variety of changes to a system while generating its
recommendations.
Recommendations include:
Hardware changes
Database configuration
Schema changes
Application changes
Using other advisors
ADDM Settings
Automatic database diagnostic monitoring is enabled by default and is controlled by the STATISTICS_LEVEL
initialization parameter.
The STATISTICS_LEVEL parameter should be set to the TYPICAL or ALL to enable the automatic database
diagnostic monitoring.
The default setting is TYPICAL.
Setting STATISTICS_LEVEL to BASIC disables many Oracle features, including ADDM, and is strongly discouraged.
ADDM analysis of I/O performance partially depends on a single argument, DBIO_EXPECTED
The value of DBIO_EXPECTED is the average time it takes to read a single database block in microseconds. Oracle
uses the default value of 10 milliseconds
Set the value using
EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 8000);
Diagnosing Database Performance Issues with ADDM
To diagnose database performance issues, ADDM analysis can be performed across any two AWR snapshots as
long as the following requirements are met:
Both the snapshots did not encounter any errors during creation and both have not yet been purged.
There were no shutdown and startup actions between the two snapshots.
Using Enterprise Manager
The obvious place to start viewing ADDM reports is Enterprise Manager. The Performance Analysis section on the
Home page is a list of the top five findings from the last ADDM analysis task.
Specific reports can be produced by clicking on the Advisor Central link, then the ADDM link. The resulting page
allows you to select a start and end snapshot, create an ADDM task and display the resulting report by clicking on a
few links.
Executing addmrpt.sql Script
The addmrpt.sql script can be used to create an ADDM report from SQL*Plus. The script is called as follows:

@/u01/app/oracle/product/10.1.0/db_1/rdbms/admin/addmrpt.sql
It then lists all available snapshots and prompts you to enter the start and end snapshot along with the report name.
Using DBMS_ADVISOR Package
The DBMS_ADVISOR package can be used to create and execute any advisor tasks, including ADDM tasks. The
following example shows how it is used to create, execute and display a typical ADDM report:
BEGIN
  -- Create an ADDM task.
  DBMS_ADVISOR.create_task (
    advisor_name => 'ADDM',
    task_name    => '970_1032_AWR_SNAPSHOT',
    task_desc    => 'Advisor for snapshots 970 to 1032.');
  -- Set the start and end snapshots.
  DBMS_ADVISOR.set_task_parameter (
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'START_SNAPSHOT',
    value     => 970);
  DBMS_ADVISOR.set_task_parameter (
    task_name => '970_1032_AWR_SNAPSHOT',
    parameter => 'END_SNAPSHOT',
    value     => 1032);
  -- Execute the task.
  DBMS_ADVISOR.execute_task(task_name => '970_1032_AWR_SNAPSHOT');
END;
/
-- Display the report.
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_report('970_1032_AWR_SNAPSHOT') AS report
FROM dual;
SET PAGESIZE 24
The value for the SET LONG command should be adjusted to allow the whole report to be displayed.
The relevant AWR snapshots can be identified using the DBA_HIST_SNAPSHOT view.
ADDM Views
DBA_ADVISOR_TASKS
This view provides basic information about existing tasks, such as the task Id, task name, and when created.
DBA_ADVISOR_LOG

This view contains the current task information, such as status, progress, error messages, and execution times.
DBA_ADVISOR_RECOMMENDATIONS
This view displays the results of completed diagnostic tasks with recommendations for the problems identified in each
run. The recommendations should be looked at in the order of the RANK column, as this relays the magnitude of the
problem for the recommendation. The BENEFIT column gives the benefit to the system you can expect after the
recommendation is carried out.
DBA_ADVISOR_FINDINGS
This view displays all the findings and symptoms that the diagnostic monitor encountered, along with the specific
recommendations.
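For example, the findings recorded for the ADDM task created above can be queried directly (using the task name from the earlier example):
SQL> select finding_id, type, impact, message
     from dba_advisor_findings
     where task_name = '970_1032_AWR_SNAPSHOT'
     order by impact desc;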

Bigfile Tablespaces in Oracle 10g


Bigfile tablespaces are tablespaces with a single large datafile.
In contrast normal (smallfile) tablespaces can have several datafiles, but each is limited in size.
The system default is to create a smallfile tablespace. The SYSTEM and SYSAUX tablespaces are always
created using the system default type.
Bigfile tablespaces must be
locally managed
with automatic segment-space management
Exceptions to this rule include
temporary tablespaces and
locally managed undo tablespaces which are all allowed to have manual segment-space management.
Advantages of using Bigfile Tablespaces:
By allowing tablespaces to have a single large datafile, the total capacity of the database is increased. An Oracle
database can have a maximum of roughly 64,000 datafiles, which limits its total capacity. Bigfile tablespaces allow you to
create a tablespace of up to eight exabytes (eight million terabytes) in size and significantly increase the storage capacity of
an Oracle database.
Using fewer larger datafiles allows the DB_FILES and MAXDATAFILES parameters to be reduced, saving SGA and
controlfile space.
It simplifies large database tablespace management by reducing the number of datafiles needed.
The ALTER TABLESPACE syntax has been updated to allow operations at the tablespace level, rather than datafile
level.
Considerations:
Bigfile Tablespace can be used with:

ASM (Automatic Storage Management)


a logical volume manager supporting striping/RAID
Avoid creating bigfile tablespaces on a system that does not support striping because of negative implications for
parallel execution and RMAN backup parallelization.
Avoid using bigfile tablespaces if there could possibly be no free space available on a disk group, and the only way
to extend a tablespace is to add a new datafile on a different disk group.
Syntax to create Bigfile Tablespace
SQL> CREATE BIGFILE TABLESPACE <tablespace_name>
     DATAFILE '/u01/oradata/datafilename.dbf' SIZE 50G;
Views:
The following views contain a BIGFILE column that identifies a tablespace as a bigfile tablespace:
DBA_TABLESPACES
USER_TABLESPACES
V$TABLESPACE
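For example, to check which tablespaces are bigfile:
SQL> select tablespace_name, bigfile from dba_tablespaces;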

Major RAC Wait Events


In a RAC environment the buffer cache is global across all instances in the cluster, and hence the processing
differs. The most common wait events related to this are gc cr request and gc buffer busy.
GC CR request: the time it takes to retrieve the data from the remote cache.
Reason: RAC traffic using a slow connection, or inefficient queries (poorly tuned queries increase the number of data blocks
requested by an Oracle session; the more blocks requested, the more often a block needs to be
read from a remote instance via the interconnect).
GC BUFFER BUSY: the time the remote instance spends locally accessing the requested data block.
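A quick way to look at these waits across all instances is the GV$SYSTEM_EVENT view (a sketch; note that from 11g the gc buffer busy event is split into 'gc buffer busy acquire' and 'gc buffer busy release'):
SQL> select inst_id, event, total_waits, time_waited
     from gv$system_event
     where event in ('gc cr request', 'gc buffer busy')
     order by time_waited desc;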

SRVCTL
srvctl start instance -d db_name -i inst_name_list [-o start_options]
srvctl stop instance -d name -i inst_name_list [-o stop_options]
srvctl stop instance -d orcl -i orcl3,orcl4 -o immediate
srvctl start database -d name [-o start_options]
srvctl stop database -d name [-o stop_options]
srvctl start database -d orcl -o mount

Oracle clusterware tools

OIFCFG allocating and deallocating network interfaces


OCRCONFIG Command-line tool for managing Oracle Cluster Registry
OCRDUMP Command-line tool for dumping the contents of the OCR for viewing
CVU Cluster Verification Utility to verify the cluster and RAC configuration

OCR
Oracle clusterware manages CRS resources based on the configuration information of CRS resources stored in
the OCR (Oracle Cluster Registry).

CRS Resource
Oracle Clusterware is used to manage high-availability operations in a cluster. Anything that Oracle Clusterware
manages is known as a CRS resource. Some examples of CRS resources are a database, an instance, a service, a
listener, a VIP address, an application process, etc.

FAN
Fast Application Notification, abbreviated FAN, relates to events concerning instances, services, and
nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and
service level information, including service status changes such as UP or DOWN events. Applications can respond
to FAN events and take immediate action.
FAN UP and DOWN events:
FAN UP and FAN DOWN events can apply to instances, services, and nodes.
Use of FAN events in case of a cluster configuration change:
During cluster configuration changes, the Oracle RAC high availability framework publishes a FAN event
immediately when a state change occurs in the cluster, so applications can receive FAN events and react
immediately. This prevents applications from polling the database and detecting a problem only after such a state change.

How do we verify that RAC instances are running?


Issue the following query from any one node connecting through SQL*PLUS.
$connect sys/sys as sysdba
SQL>select * from V$ACTIVE_INSTANCES;
The query gives the instance number under the INST_NUMBER column and host_name:instance_name under the INST_NAME
column.


VIP
VIP Virtual IP address in RAC
VIP is mainly used for fast connection failover.
Until 9i, RAC failover used the physical IP address of the other server. When a connection request from a
client failed against the first server's listener, RAC redirected the connection request to the second available
server using its physical IP address. Redirection to the second physical IP address was possible
only after a timeout error from the first physical IP address, so the connection had to wait for the TCP
connection timeout.
From RAC 10g we can use the VIP to avoid the connection-timeout wait, because ONS (Oracle Notification Service)
maintains communication between the nodes and listeners. Once ONS finds any listener or node down, it
notifies the other nodes and listeners. While a new connection is trying to reach the failed node or
listener, the virtual IP of the failed node is automatically diverted to a surviving node. This process does not wait for a TCP/IP timeout
event, so new connections are established faster even when a listener or node has failed.
A virtual IP address or VIP is an alternate IP address that the client connections use instead of the standard public IP
address. To configure VIP address, we need to reserve a spare IP address for each node, and the IP addresses must
use the same subnet as the public network.
If a node fails, then the node's VIP address fails over to another node, on which the VIP address can accept TCP
connections but cannot accept Oracle connections.
Situations under which VIP address failover happens: VIP address failover happens when the node on which the VIP address
runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.
Significance of VIP address failover: When a VIP address failover happens, clients that attempt to connect to the VIP address
receive a rapid connection refused error; they don't have to wait for TCP connection timeout messages.

ORA-13516: AWR Operation failed: INTERVAL Setting is ZERO


The above error message occurs because the snapshot INTERVAL setting is zero. You can change the snapshot
setting by using the following command, which sets the interval to 60 minutes:

EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60);
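You can confirm the current interval and retention afterwards with:
SQL> select snap_interval, retention from dba_hist_wr_control;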

How to create AWR Snapshot Manually


exec DBMS_WORKLOAD_REPOSITORY.create_snapshot();

Views and Usage Related to AWR and ASH


* V$ACTIVE_SESSION_HISTORY Displays the active session history (ASH) sampled every second.
* V$METRIC Displays metric information.
* V$METRICNAME Displays the metrics associated with each metric group.
* V$METRIC_HISTORY Displays historical metrics.
* V$METRICGROUP Displays all metrics groups.
* DBA_HIST_ACTIVE_SESS_HISTORY Displays the history contents of the active session history.
* DBA_HIST_BASELINE Displays baseline information.
* DBA_HIST_DATABASE_INSTANCE Displays database environment information.
* DBA_HIST_SNAPSHOT Displays snapshot information.
* DBA_HIST_SQL_PLAN Displays SQL execution plans.
* DBA_HIST_WR_CONTROL Displays AWR settings.
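For example, to see which snapshots are available for reporting, query the DBA_HIST_SNAPSHOT view listed above:
SQL> select snap_id, begin_interval_time, end_interval_time from dba_hist_snapshot order by snap_id;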

How to Change the Session and Process Value


1. backup the spfile
$cp -p spfile.ora spfile.ora.
2. check the session and parameter value
$ sqlplus /nolog
SQL> connect / as sysdba
SQL>select NAME, VALUE from v$parameter where NAME = 'sessions';
SQL>select NAME, VALUE from v$parameter where NAME = 'processes';
3. Change the Process and Session Values
SQL> alter system set processes=100 scope=spfile;
SQL> alter system set sessions=100 scope=spfile;

4. Restart the Database


SQL> shutdown immediate;
SQL> startup;
5. check the session and parameter value
SQL>select NAME, VALUE from v$parameter where NAME = 'sessions';
SQL>select NAME, VALUE from v$parameter where NAME = 'processes';

How do I change archive log to noarchive log in cluster environment?


Changing the Archiving Mode in Real Application Clusters
After configuring your Real Application Clusters environment for RMAN, you can alter the archiving mode if needed.
For example, if your Real Application Clusters database uses NOARCHIVELOG mode, then follow these steps to
change the archiving mode to ARCHIVELOG mode:
1. Shut down all instances.
2. Reset the CLUSTER_DATABASE parameter to false on one instance. If you are using the server parameter file,
then make a sid-specific entry for this.
3. Add settings in the parameter file for the LOG_ARCHIVE_DEST_n, LOG_ARCHIVE_FORMAT, and
LOG_ARCHIVE_START parameters. You can multiplex the destination to use up to ten locations. The
LOG_ARCHIVE_FORMAT parameter should contain the %t parameter to include the thread number in the archived
log file name. You must configure an archiving scheme before setting these parameter values.
4. Start the instance on which you have set CLUSTER_DATABASE to false.
5. Run the following statement in SQL*Plus:
SQL>ALTER DATABASE ARCHIVELOG;
6. Shut down the instance.
7. Change the value of the CLUSTER_DATABASE parameter to true.
8. Restart your instances.
You can also change the archiving mode from ARCHIVELOG to NOARCHIVELOG. To disable archiving, follow the
preceding steps with the following changes:
1. Delete the archiving settings that you created in step 3.
2. Specify the NOARCHIVELOG keyword in step 5:
ALTER DATABASE NOARCHIVELOG;
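As a rough sketch of the ARCHIVELOG sequence for a two-node RAC database (the database name orcl, instance name orcl1, archive destination and log format below are placeholders, and an spfile is assumed):
On node 1, while the database is still up, make the spfile changes:
SQL> alter system set log_archive_dest_1='LOCATION=/u01/arch' scope=spfile sid='*';
SQL> alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
SQL> alter system set cluster_database=false scope=spfile sid='orcl1';
Stop all instances, then start only instance orcl1 and enable archiving:
$ srvctl stop database -d orcl
SQL> startup mount
SQL> alter database archivelog;
SQL> alter system set cluster_database=true scope=spfile sid='orcl1';
SQL> shutdown immediate
Restart all instances:
$ srvctl start database -d orcl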

RMAN Notes
1.1. Where should the catalog be created?
The recovery catalog to be used by RMAN should be created in a separate database other than the target database,
the reason being that the target database will be shut down while datafiles are restored.
1.2. How do I create a catalog for rman?
First create the user rman.

CREATE USER rman IDENTIFIED BY rman
  TEMPORARY TABLESPACE temp
  DEFAULT TABLESPACE tools
  QUOTA UNLIMITED ON tools;
GRANT connect, resource, recovery_catalog_owner TO rman;
exit
Then create the recovery catalog:
rman catalog=rman/rman
create catalog tablespace tools;
exit
Then register the database
oracle@debian:~$ rman target=/ catalog=rman/rman@newdb
Recovery Manager: Release 10.1.0.2.0 Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: TEST (DBID=1843143191)
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
Note: If you try rman catalog=rman/rman and try to register the database, it will not work.
Note: We have two databases here: newdb, which is solely for the catalog, and TEST, which is the database on
which we want to perform all RMAN operations.
1.3. How many times does Oracle ask before dropping a catalog?
The default is two times: once for the actual command and once for confirmation.
1.4. How to view the current defaults for the database.
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default

CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default


CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u02/app/oracle/product/10.1.0/db_1/dbs/snapcf_test.f'; # default
1.5. Backup the database.
RMAN> run{
configure retention policy to recovery window of 2 days;
backup database plus archivelog;
delete noprompt obsolete;
}
Starting backup at 04-JUL-05
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=256 devtype=DISK
channel ORA_DISK_1: starting archive log backupset

1.6. How to resolve the ora-19804 error


Basically this error is because the flash recovery area is full. One way to solve it is to increase the space available for
the flash recovery area (the size can be set in K, M, or G):
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=5G;
RMAN> backup database;
.
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 04-JUL-05
channel ORA_DISK_1: finished piece 1 at 04-JUL-05
piece
handle=/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_04/o1_mf_ncsnf_TAG20050704T205840_1d
my15cr_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 04-JUL-05

After taking a backup, resync the recovery catalog.


Restoring the whole database.
run {
shutdown immediate;
startup mount;
restore database;
recover database;
alter database open;
}
1.7. What are the various reports available with RMAN?
RMAN> list backup;
RMAN> list archivelog all;
1.8. What does backup incremental level=0 database do?
Backup incremental level=0 is a full backup of the database:
RMAN> backup incremental level=0 database;
You can also use backup full database; which means the same thing as level=0.
1.9. What is the difference between DELETE INPUT and DELETE ALL command in backup?
Generally speaking, LOG_ARCHIVE_DEST_n points to two disk drive locations where we archive the files. When a
command is issued through RMAN to back up archivelogs, it uses one of the locations to back up the data. When we
specify DELETE INPUT, only the copies in the location that was backed up get deleted; if we specify DELETE ALL, the copies
in all log_archive_dest_n locations get deleted.
DELETE ALL applies only to archived logs, e.g. delete expired archivelog all;
Chapter 2. Recovery
Recovery involves placing the datafiles in the appropriate state for the type of recovery you are performing. If
recovering all datafiles, then mount the database; if recovering a single tablespace or datafile, then you can keep the
database open and take the tablespace or datafile offline. Perform the required recovery and put them back online.
Put the commands in an RMAN script (.rcv) file such as myrman.rcv:
run
{
# shutdown immediate; # use abort if this fails
startup mount;
# SET UNTIL TIME 'Nov 15 2001 09:00:00';
# SET UNTIL SCN 1000; # alternatively, you can specify SCN
SET UNTIL SEQUENCE 9923; # alternatively, you can specify log sequence number
restore database;
recover database;
alter database open;
}

Run the myrman.rcv file as: rman target / @myrman.rcv


After successful restore & recovery immediately backup your database, because the database is in a new
incarnation.
The ALTER DATABASE OPEN RESETLOGS; command creates a new incarnation of the database with a new
stream of sequence numbers starting with sequence 1.
Before running RESETLOGS it is good practice to open the database in read-only mode and examine the data
contents.
2.1. Simulating media failure.
2.1.1. How to simulate media failure and recover a tablespace in the database ?
2.1.2. What is the difference between alter database recover and sql*plus recover command?
2.1.1. How to simulate media failure and recover a tablespace in the database ?
Firstly create the table in the required tablespace.
CREATE TABLE mytest ( id number(10));
Then insert into the table: insert into mytest values(100); execute the insert statement a couple of times but do not commit the
results.
Take the tablespace offline; this is possible only if the database is in archivelog mode.
Now commit the transaction by issuing COMMIT.
Now try to bring the tablespace online; at this point you will get the error that datafile 4 needs media recovery.
Issue the following command to recover the tablespace; note that the database itself can remain open.
SQL> recover tablespace users;
Media recovery complete.
Now bring the tablespace online:
SQL> alter tablespace users online;
2.1.2. What is the difference between alter database recover and sql*plus recover command?
ALTER DATABASE RECOVER is useful when you as a user want to control the recovery. The SQL*Plus RECOVER command
is useful when we prefer automated recovery.
Chapter 3. Duplicate database with control file
What are the steps required to duplicate a database with control file?
Copy initSID.ora to the new initXXX.ora file. i.e.,
cp $ORACLE_HOME/dbs/inittest.ora $ORACLE_HOME/dbs/initDUP.ora
Edit the parameters that are specific to location and instance:-

db_name = dup
instance_name = dup
control_files = change the location to point to dup
background_dump_dest = change the location to point to dup/bdump
core_dump_dest = change the location to point to dup/cdump
user_dump_dest = change the location to point to dup/udump
log_archive_dest_1 = dup/archive
db_file_name_convert = (test, dup)
log_file_name_convert = (test, dup)
remote_login_passwordfile = exclusive
Actual settings:
*.background_dump_dest=/u02/app/oracle/admin/DUP/bdump
*.compatible=10.1.0.2.0
*.control_files='/u02/app/oracle/oradata/DUP/control01.ctl','/u02/app/oracle/oradata/DUP/control02.ctl','/u02/app/oracle/oradata/DUP/control03.ctl'
*.core_dump_dest=/u02/app/oracle/admin/DUP/cdump
*.db_block_size=8192
*.db_cache_size=25165824
*.db_domain=
*.db_file_multiblock_read_count=16
*.db_name=DUP
*.db_recovery_file_dest=/u02/app/oracle/flash_recovery_area
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DUPXDB)'
*.java_pool_size=50331648
*.job_queue_processes=10
*.large_pool_size=8388608
*.log_archive_dest_1='LOCATION=/u02/app/oracle/oradata/payroll MANDATORY'
*.open_cursors=300
*.pga_aggregate_target=25165824
*.processes=250
*.shared_pool_size=99614720
*.sort_area_size=65536
*.undo_management=AUTO
*.undo_tablespace=UNDOTBS1
*.user_dump_dest=/u02/app/oracle/admin/DUP/udump
*.remote_login_passwordfile=exclusive
*.db_file_name_convert=('test','dup')
*.log_file_name_convert=('test','dup')
Make the directories for the dump destination:-

oracle@debian:/u02/app/oracle/admin/DUP$ mkdir bdump


oracle@debian:/u02/app/oracle/admin/DUP$ mkdir cdump
oracle@debian:/u02/app/oracle/admin/DUP$ mkdir udump
Make a directory to hold control files, datafiles and such:
oracle@debian:/u02/app/oracle/oradata/PRD$ cd ..
oracle@debian:/u02/app/oracle/oradata$ mkdir DUP
Ensure that the ORACLE_SID is pointing to the right database. Make an Oracle password file so that other users can
connect too.
export ORACLE_SID=DUP
$orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=easypass
sqlplus / as sysdba
sql>startup nomount;
oracle@debian:/u02/app/oracle/product/10.1.0/db_1/dbs$ sqlplus / as sysdba;
SQL*Plus: Release 10.1.0.2.0 Production on Wed Aug 24 21:05:26 2005
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 188743680 bytes
Fixed Size 778036 bytes
Variable Size 162537676 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
SQL>
Check Net8 connectivity with sqlplus sys/easypass@dup; if that goes through successfully then exit. The idea is to check
for SQL*Net connectivity.
If you get ORA-12154: TNS:could not resolve the connect identifier specified, then more work needs to be done:
DUP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = debian)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = dup)
    )
  )
$tnsping dup
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = debian)(PORT = 1521))

(CONNECT_DATA = (SERVICE_NAME = dup)))


OK (0 msec)
Even with this, if you are getting ORA-12528: TNS:listener: all appropriate instances are blocking new connections,
then we have to connect to the auxiliary (the database to be duplicated) as / and to the target database (the source) with
user/pass@test.
Start duplicating the database:
export ORACLE_SID=DUP
rman target sys/easypass@test auxiliary /
run{
allocate auxiliary channel ch1 type disk;
duplicate target database to dup;
}
oracle@debian:/u02/app/oracle/product/10.1.0/db_1/network/admin$ rman target sys/kernel@test auxiliary /
Recovery Manager: Release 10.1.0.2.0 Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04005: error from target database:
ORA-01017: invalid username/password; logon denied
The workaround for this is to create a user with DBA privileges and connect through that user's ID.
$export ORACLE_SID=test
SQL>grant sysdba to mhg;
oracle@debian:/u02/app/oracle/product/10.1.0/db_1/network/admin$ rman target mhg/mhg@test auxiliary /
Recovery Manager: Release 10.1.0.2.0 Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)
oracle@debian:~$ rman target mhg/mhg@test auxiliary / @run.rcv
Recovery Manager: Release 10.1.0.2.0 Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)

RMAN> run{
2> allocate auxiliary channel c1 type disk;
3> duplicate target database to dup;
4> }
5>
using target database controlfile instead of recovery catalog
allocated channel: c1
channel c1: sid=270 devtype=DISK
Starting Duplicate Db at 24-AUG-05
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/24/2005 21:13:09
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename /u02/app/oracle/oradata/test/users01.dbf conflicts with a file used by the target
database
This error is primarily because the files of the test database are already present; this is a bad thing and we have to use
db_file_name_convert and log_file_name_convert to overcome these errors.
This is the final run output:
oracle@debian:~$ rman target mhg/mhg@test auxiliary /
Recovery Manager: Release 10.1.0.2.0 Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)
RMAN> @run.rcv
RMAN> run{
2> allocate auxiliary channel c1 type disk;
3> duplicate target database to dup;
4> }
using target database controlfile instead of recovery catalog
allocated channel: c1
channel c1: sid=270 devtype=DISK
Starting Duplicate Db at 24-AUG-05
contents of Memory Script:
{
set until scn 2150046;

set newname for datafile 1 to


/u02/app/oracle/oradata/DUP/system2.dbf;
set newname for datafile 2 to
/u02/app/oracle/oradata/DUP/undotbs01.dbf;
set newname for datafile 3 to
/u02/app/oracle/oradata/DUP/sysaux01.dbf;
set newname for datafile 4 to
/u02/app/oracle/oradata/DUP/users01.dbf;
restore
check readonly
clone database
;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 24-AUG-05
channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u02/app/oracle/oradata/DUP/system2.dbf
restoring datafile 00002 to /u02/app/oracle/oradata/DUP/undotbs01.dbf
..
datafile copy filename=/u02/app/oracle/oradata/DUP/sysaux01.dbf recid=2 stamp=567206656
cataloged datafile copy
datafile copy filename=/u02/app/oracle/oradata/DUP/users01.dbf recid=3 stamp=567206656
datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/sysaux01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/users01.dbf
contents of Memory Script:
{
Alter clone database open resetlogs;

}
executing Memory Script
database opened
Finished Duplicate Db at 24-AUG-05
RMAN> **end-of-file**
This ends a successful duplication of the database using the target control file instead of a recovery catalog.
Chapter 4. Using RMAN to check logical and physical block corruption
To generate block corruption you can use the dd Unix utility (caution: it will corrupt your block(s)):
$ dd if=/dev/zero of=/u02/oradata/myrac/anyfile.dbf bs=8192 conv=notrunc seek=10 count=1
seek=10 writes at block 10, count=1 writes only that one block.
Now you can run dbv to verify that the blocks are actually corrupt,
and then recover the datafile by using Oracle's BLOCKRECOVER command.
export ORACLE_SID=test
rman target /
run {
allocate channel d1 type disk;
backup check logical validate database;
release channel d1;
}
To validate a datafile (or datafiles):
run {
allocate channel d1 type disk;
backup check logical validate datafile 1,2;
release channel d1;
}
During this command every block is read into memory and then rewritten to another portion of memory; during
this memory-to-memory write every block is checked for corruption.
RMAN's BACKUP command with the VALIDATE and CHECK LOGICAL clauses allows you to quickly validate for both physical and
logical corruption.
Chapter 5. Checking for datafile corruption
A corrupted block normally requires dropping an object. The message identifies the block in error by file number and block
number. The cure has always been to run a query such as:
SELECT owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = <file#>
AND <block#> BETWEEN block_id AND block_id + blocks - 1;
where <file#> and <block#> are the numbers from the error message. This query indicates which object contains the
corrupted block. Then, depending on the object type, recovery is either straightforward (for indexes and temporary
segments), messy (for tables), or very messy (for active rollback segments and parts of the data dictionary).
In Oracle 9i Enterprise Edition, however, a new Recovery Manager (RMAN) command, BLOCKRECOVER, can repair the block
in place without dropping and recreating the object involved. After logging into RMAN and connecting to the target database, type:
BLOCKRECOVER DATAFILE <file#> BLOCK <block#>;
A new view, V$DATABASE_BLOCK_CORRUPTION, gets updated during RMAN backups, and a block must be listed as corrupt
for a BLOCKRECOVER to be performed. To recover all blocks that have been marked corrupt, the following RMAN sequence can be used:
BACKUP VALIDATE DATABASE;
BLOCKRECOVER CORRUPTION LIST;
This approach is efficient if only a few blocks need recovery. For large-scale corruption, it's more efficient to restore a prior
image of the datafile and recover the entire datafile, as before. As with any new feature, test it carefully before using it on a production database.
run {
allocate channel ch1 type <device_type>;
blockrecover datafile <file#> block <block#>;
}
1. What are the steps to start the database from a text control file?
1.1. What are the steps required to start a database from text based control file?
1.2. Give a complete scenario of backup, delete and restore.
1.3. How do I backup archive log?
1.4. How do I do a incremental backup after a base backup?
1.5. What is the ORA-00204 error?
1.6. What Information is Required for RMAN TAR?
1.7. How To turn Debug Feature on in rman?
1.1. What are the steps required to start a database from text based control file?
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/oracle/backup/cf.bak' REUSE; (or to a file name of your choice on the OS). With
this command you will get a text-based version of your control file. The REUSE clause specifies that Oracle may overwrite the
file; if we omit this option, Oracle will not overwrite the file if it is already present in the directory specified.
Start the database in nomount mode. Now run the control file script to create your control files; if you have 3 control file
entries in the pfile/spfile you will get 3 new control files. Then:
SQL> recover database using backup controlfile until cancel;
1.2. Give a complete scenario of backup, delete and restore.

Given that you want to take a base level backup, simulate complete failure by removing controlfile, datafile, redo log,
archive log, these are the steps to be followed.
First take a base level backup of the database.
backup incremental level=0 database;
Simulate media failure by removing the control files and data files:
sqlplus / as sysdba
SQL> shutdown immediate;
SQL> exit
$ rm control* system*
When we don't have a control file the problem becomes quite complex, the reason being that the RMAN backup
information is stored in the control file. So when we don't have the control file we won't have the information about
backups. The first step should be towards restoring the control file. Fortunately we can do a listing in our flash recovery
area and guess which backup piece has the information about our control file. On my box, the following is the listing of the
flash recovery area:
/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/
o1_mf_ncnn0_TAG20050730T130722_1gqn4jy2_.bkp
o1_mf_nnnd0_TAG20050730T130722_1gqmzdjz_.bkp
Now I am assuming the piece with ncnn0 in its name has the control file information in it.
We have to use a nifty pl/sql program to recover our control file, once it is done successfully then we can go on our
merry way using rman to recover the rest of the database.
DECLARE
  v_devtype   VARCHAR2(100);
  v_done      BOOLEAN;
  v_maxPieces NUMBER;
  TYPE t_pieceName IS TABLE OF VARCHAR2(255) INDEX BY BINARY_INTEGER;
  v_pieceName t_pieceName;
BEGIN
  -- Define the backup pieces (names from the RMAN log file)
  v_pieceName(1) :=
    '/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_ncnn0_TAG20050730T130722_1gqn4jy2_.bkp';
  v_maxPieces := 1;
  -- Allocate a channel (use type=>NULL for DISK, type=>'sbt_tape' for TAPE)
  v_devtype := DBMS_BACKUP_RESTORE.deviceAllocate(type=>NULL, ident=>'d1');
  -- Restore the first control file
  DBMS_BACKUP_RESTORE.restoreSetDataFile;
  -- CFNAME must be the exact path and filename of a controlfile that was backed up
  DBMS_BACKUP_RESTORE.restoreControlFileTo(cfname=>'/u02/app/oracle/oradata/test/control01.ctl');
  dbms_output.put_line('Start restoring '||v_maxPieces||' pieces.');
  FOR i IN 1..v_maxPieces LOOP
    dbms_output.put_line('Restoring from piece '||v_pieceName(i));
    DBMS_BACKUP_RESTORE.restoreBackupPiece(handle=>v_pieceName(i), done=>v_done, params=>null);
    EXIT WHEN v_done;
  END LOOP;
  -- Deallocate the channel
  DBMS_BACKUP_RESTORE.deviceDeAllocate('d1');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_BACKUP_RESTORE.deviceDeAllocate;
    RAISE;
END;
/
PL/SQL procedure successfully completed. I had 3 control files; the above block restores only one, so I do an operating-system copy to restore the rest:
$ cp control01.ctl control02.ctl
$ cp control01.ctl control03.ctl
After the control file is restored, launch RMAN and list all the backup information:
$ rman target /
RMAN> sql 'alter database mount';
RMAN> list backup;
BS Key  Type  LV  Size  Device Type  Elapsed Time  Completion Time
------- ----- --- ----- ------------ ------------- ---------------
21      Incr  0   2G    DISK         00:02:39      30-JUL-05
  BP Key: 21  Status: AVAILABLE  Compressed: NO  Tag: TAG20050730T130722
  Piece Name: /u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_nnnd0_TAG20050730T130722_1gqmzdjz_.bkp
  List of Datafiles in backup set 21
  File LV Type Ckp SCN  Ckp Time  Name
  ---- -- ---- -------- --------- ----
  1    0  Incr 1723296  30-JUL-05 /u02/app/oracle/oradata/test/system2.dbf
  2    0  Incr 1723296  30-JUL-05 /u02/app/oracle/oradata/test/undotbs01.dbf
  3    0  Incr 1723296  30-JUL-05 /u02/app/oracle/oradata/test/sysaux01.dbf
  4    0  Incr 1723296  30-JUL-05 /u02/app/oracle/oradata/test/users01.dbf
The output shows that backup set 21, with tag TAG20050730T130722, completed on 30-JUL-05 and contains the datafiles we need.
Connect with rman target / and run:
RMAN> restore database;
RMAN> recover database;
RMAN> exit;
Then, from sqlplus / as sysdba, open the database (because recovery was done with a backup control file, an ALTER DATABASE OPEN RESETLOGS is typically required). The database should now be recovered to the SCN current at the time of the complete media failure.
1.3. How do I backup archive log?
In order to back up the archive logs we have to do the following:
run {
backup (archivelog all delete input);
}
If you want to delete archive logs after the backup while skipping any that are inaccessible, use (archivelog all skip inaccessible delete input); instead.
1.4. How do I do a incremental backup after a base backup?
RMAN> backup incremental level=1 database plus archivelog delete all input;
This takes an incremental level 1 backup of the database, backs up the archive logs, and then deletes the archive logs that were just backed up (delete all input).
1.5. What is ORA-002004 error?
A disk I/O failure was detected on reading the controlfile.
Basically, check that the control file is available, that the permissions on it are correct, and that the spfile/init.ora points to the right location. If all of these checks pass and you are still getting the error, overlay the corrupted control file with one of the multiplexed copies. For example, if you have three control files control01.ctl, control02.ctl and control03.ctl and the errors are on control03.ctl, just cp control01.ctl over control03.ctl and you should be all set.
In order to issue
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
the database must be mounted; if it is not mounted, the only other options are to restore the control file from a backup or to copy a multiplexed control file over the bad one.
1.6. What Information is Required for RMAN TAR?
Hardware Configuration
* The name of the node that hosts the database
* The make and model of the production machine
* The version and patch of the operating system
* The disk capacity of the host
* The number of disks and disk controllers
* The disk capacity and free space
* The media management vendor (if you use a third-party media manager)
* The type and number of media management devices
Software Configuration
* The name of the database instance (SID)
* The database identifier (DBID)
* The version and patch release of the Oracle database server
* The version and patch release of the networking software
* The method (RMAN or user-managed) and frequency of database backups
* The method of restore and recovery (RMAN or user-managed)
* The datafile mount points
You should keep this information in both electronic and hardcopy form. If you only save it in a text file on the network or in an email message, then when the entire system goes down you may not be able to get at it.
1.7. How To turn Debug Feature on in rman?
run {
allocate channel c1 type disk;
debug on;
}
rman>list backup of database;
You will see a output similar to
DBGMISC: ENTERED krmkdftr [18:35:11.291]
DBGSQL: EXEC SQL AT TARGET begin dbms_rcvman . translateDataFile (
:fno ) ; end ; [18:35:11.291]
DBGSQL: sqlcode=0 [18:35:11.300]
DBGSQL: :b1 = 1
DBGMISC: ENTERED krmkgdf [18:35:11.301]
DBGMISC: ENTERED krmkgbh [18:35:11.315]
DBGMISC: EXITED krmkgbh with status Not required no flags
[18:35:11.315] elapsed time [00:00:00:00.000]
DBGMISC: EXITED krmkgdf [18:35:11.315] elapsed time [00:00:00:00.014]
DBGMISC: EXITED krmkdftr [18:35:11.315] elapsed time [00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status DF [18:35:11.315] elapsed time
[00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status DFILE [18:35:11.315] elapsed time
[00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status backup [18:35:11.315] elapsed time
[00:00:00:00.024]
DBGMISC: krmknmtr: the parse tree after name translation is:
[18:35:11.315]
DBGMISC: EXITED krmknmtr with status list [18:35:11.316] elapsed time
[00:00:00:00.078]
DBGMISC: krmkdps: this_reset_scn=1573357 [18:35:11.316]
DBGMISC: krmkdps: this_reset_time=19-AUG-06 [18:35:11.316]
DBGMISC: krmkdps: untilSCN= [18:35:11.317]
You can always turn debug off by issuing
rman>debug off;
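Debug output can also be captured to files by starting RMAN with the debug option on the command line; the file names below are only examples, and the exact trace option may vary by RMAN release:
$ rman target / debug trace=/tmp/rman_debug.trc log=/tmp/rman.log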

To check if flashback is enabled or not

select flashback_on from v$database;

How to rename/move data file in oracle


Method 1 (Easy Method)
1) shutdown
2) COPY the dbf files where you want them
3) startup mount
4) alter database rename file '<original location>' to '<target location>'; for each file, including system (see the sketch after this list).
5) The same method also works for moving the redo log files.
6) alter database open
7) once everything has been verified, you can delete the dbf files from their original location
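As a sketch of step 4, assuming a datafile is being moved from /u01/oradata/test/users01.dbf to /u02/oradata/test/users01.dbf (both paths are hypothetical):
SQL> alter database rename file '/u01/oradata/test/users01.dbf' to '/u02/oradata/test/users01.dbf';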
Method 2
Moving datafiles of a database: The datafiles reside under /home/oracle/OraHome1/databases/ora9 and have go
to /home/oracle/databases/ora9.
SQL> select tablespace_name, substr(file_name,1,70) from dba_data_files;
TABLESPACE_NAME SUBSTR(FILE_NAME,1,70)
SYSTEM /home/oracle/OraHome1/databases/ora9/system.dbf
UNDO /home/oracle/OraHome1/databases/ora9/undo.dbf
DATA /home/oracle/OraHome1/databases/ora9/data.dbf
SQL> select member from v$logfile;
MEMBER

/home/oracle/OraHome1/databases/ora9/redo1.ora
/home/oracle/OraHome1/databases/ora9/redo2.ora
/home/oracle/OraHome1/databases/ora9/redo3.ora
SQL> select name from v$controlfile;
NAME

/home/oracle/OraHome1/databases/ora9/ctl_1.ora
/home/oracle/OraHome1/databases/ora9/ctl_2.ora
/home/oracle/OraHome1/databases/ora9/ctl_3.ora
Now, the files to be moved are known, the database can be shut down:
SQL> shutdown
The files can be copied to their destination:
$cp /home/oracle/OraHome1/databases/ora9/system.dbf /home/oracle/databases/ora9/system.dbf
$cp /home/oracle/OraHome1/databases/ora9/undo.dbf /home/oracle/databases/ora9/undo.dbf
$cp /home/oracle/OraHome1/databases/ora9/data.dbf /home/oracle/databases/ora9/data.dbf
$cp /home/oracle/OraHome1/databases/ora9/redo1.ora /home/oracle/databases/ora9/redo1.ora
$cp /home/oracle/OraHome1/databases/ora9/redo2.ora /home/oracle/databases/ora9/redo2.ora
$cp /home/oracle/OraHome1/databases/ora9/redo3.ora /home/oracle/databases/ora9/redo3.ora
$cp /home/oracle/OraHome1/databases/ora9/ctl_1.ora /home/oracle/databases/ora9/ctl_1.ora
$cp /home/oracle/OraHome1/databases/ora9/ctl_2.ora /home/oracle/databases/ora9/ctl_2.ora
$cp /home/oracle/OraHome1/databases/ora9/ctl_3.ora /home/oracle/databases/ora9/ctl_3.ora
The init.ora file is also copied because it references the control files. I name the copied file just init.ora because it is not in a standard place anymore and it will have to be named explicitly anyway when the database is started up.
$cp /home/oracle/OraHome1/dbs/initORA9.ora /home/oracle/databases/ora9/init.ora
The new location for the control files must be written into the (copied) init.ora file:
/home/oracle/databases/ora9/init.ora
control_files = ('/home/oracle/databases/ora9/ctl_1.ora',
                 '/home/oracle/databases/ora9/ctl_2.ora',
                 '/home/oracle/databases/ora9/ctl_3.ora')
$ sqlplus / as sysdba
SQL> startup exclusive mount pfile=/home/oracle/databases/ora9/init.ora
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/system.dbf' to '/home/oracle/databases/ora9/system.dbf';
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/undo.dbf' to '/home/oracle/databases/ora9/undo.dbf';
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/data.dbf' to '/home/oracle/databases/ora9/data.dbf';
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/redo1.ora' to '/home/oracle/databases/ora9/redo1.ora';
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/redo2.ora' to '/home/oracle/databases/ora9/redo2.ora';
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/redo3.ora' to '/home/oracle/databases/ora9/redo3.ora';
SQL> shutdown
SQL> startup pfile=/home/oracle/databases/ora9/init.ora

How to Increase Size of Redo Log


1. Add new log file groups with the new size:
ALTER DATABASE ADD LOGFILE GROUP n ('<file path>') SIZE <new size>;
2. Switch with ALTER SYSTEM SWITCH LOGFILE until one of the new log file groups is in state CURRENT.
3. Now you can drop the old log file groups:
ALTER DATABASE DROP LOGFILE GROUP n; (or DROP LOGFILE MEMBER '<file>' to remove an individual member)
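A minimal sketch of the whole sequence, assuming the old groups are 1 and 2 and the new 200MB groups are 3 and 4 (group numbers, sizes and file paths are illustrative):
ALTER DATABASE ADD LOGFILE GROUP 3 ('/u02/oradata/test/redo03.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/test/redo04.log') SIZE 200M;
ALTER SYSTEM SWITCH LOGFILE;   -- repeat until one of the new groups is CURRENT
ALTER SYSTEM CHECKPOINT;       -- an old group must be INACTIVE before it can be dropped
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
The dropped log files can then be removed at the operating system level, since DROP LOGFILE GROUP does not delete the files themselves.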
Row chaining and Row Migration


Concepts: There are two circumstances in which the data for a row in a table may not fit into a single data block: row chaining and row migration.
Chaining: Occurs when the row is too large to fit into one data block when it is first inserted. In this case, Oracle
stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most
often occurs with large rows, such as rows that contain a column of datatype LONG, LONG RAW, LOB, etc. Row
chaining in these cases is unavoidable.
Migration: Occurs when a row that originally fitted into one data block is updated so that the overall row length
increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire
row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a
migrated row to point to the new block containing the migrated row: the rowid of a migrated row does not change.
When a row is chained or migrated, performance associated with this row decreases because Oracle must scan
more than one data block to retrieve the information for that row.
o INSERT and UPDATE statements that cause migration and chaining perform poorly, because they perform
additional processing.
o SELECTs that use an index to select migrated or chained rows must perform additional I/Os.
Detection: Migrated and chained rows in a table or cluster can be identified by using the ANALYZE command with
the LIST CHAINED ROWS option. This command collects information about each migrated or chained row and
places this information into a specified output table. To create the table that holds the chained rows,
execute script UTLCHAIN.SQL.
SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SQL> SELECT * FROM chained_rows;
You can also detect migrated and chained rows by checking the table fetch continued row statistic in the v$sysstat
view.
SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
NAME                            VALUE
------------------------------- -----
table fetch continued row         308
Although migration and chaining are two different things, internally they are represented by Oracle as one. When
detecting migration and chaining of rows you should analyze carefully what you are dealing with.
Resolving:
o In most cases chaining is unavoidable, especially when this involves tables with large columns such as LONGS,
LOBs, etc. When you have a lot of chained rows in different tables and the average row length of these tables is not
that large, then you might consider rebuilding the database with a larger blocksize.

e.g.: You have a database with a 2K block size. Different tables have multiple large varchar columns with an average
row length of more than 2K. This means that you will have a lot of chained rows because your block size is too
small. Rebuilding the database with a larger block size can give you a significant performance benefit.
o Migration is caused by PCTFREE being set too low, so there is not enough room in the block for updates. To avoid migration, all tables that are updated should have their PCTFREE set so that there is enough space within the block for row growth.
You need to increase PCTFREE to avoid migrated rows. If you leave more free space available in the block for
updates, then the row will have more room to grow.
SQL Script to eliminate row migration :
-- Get the name of the table with migrated rows
ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '
-- Clean up from the last execution
set echo off
DROP TABLE migrated_rows;
DROP TABLE chained_rows;
-- Create the CHAINED_ROWS table
@?/rdbms/admin/utlchain.sql
set echo on
spool fix_mig
-- List the chained and migrated rows
ANALYZE TABLE &table_name LIST CHAINED ROWS;
-- Copy the chained/migrated rows to another table
CREATE TABLE migrated_rows AS
SELECT orig.*
FROM &table_name orig, chained_rows cr
WHERE orig.rowid = cr.head_rowid
AND cr.table_name = upper('&table_name');
-- Delete the chained/migrated rows from the original table
DELETE FROM &table_name WHERE rowid IN (SELECT head_rowid FROM chained_rows);
-- Copy the chained/migrated rows back into the original table
INSERT INTO &table_name SELECT * FROM migrated_rows;
spool off
Tips
1. Analyze the table and check the chained count for that particular table
8671 Chain Count
analyze table tbl_tmp_transaction_details compute statistics;
select table_name, chain_cnt, pct_free, pct_used from dba_tables
where table_name = 'TBL_TMP_TRANSACTION_DETAILS';

2. Increase PCTFREE to 30:
alter table tbl_tmp_transaction_details pctfree 30;
3. Regenerate the report (rows become chained only when they are updated), e.g. the job that populates tbl_report_generation_status:
begin dbms_job.run(190); end;
/
4. Analyze the table and check the chained count for that particular table
0 Chain Count
analyze table tbl_tmp_transaction_details compute statistics;
select table_name, chain_cnt, pct_free, pct_used from dba_tables
where table_name = 'TBL_TMP_TRANSACTION_DETAILS';
Note:
If we want to follow the procedure of deleting the chained rows from the original table and re-inserting them, we need the CHAINED_ROWS table.
To create it, run utlchain.sql from $ORACLE_HOME/rdbms/admin.
Find the chained rows:
analyze table tbl_tmp_transaction_details list chained rows;
The above command writes the chained rows into the CHAINED_ROWS table.
Based on the rowids recorded in CHAINED_ROWS we can move those records to a temporary table, delete them from the original table, and then insert them back into the original table, for example:
select * from tbl_tmp_transaction_details where rowid = 'AAAG8DAAGAAAGOKABD';

Oracle Database Architectural Overview (In Depth)


Oracle Architectural Overview
The architecture of Oracle is configured in such a way as to ensure that client requests
for data retrieval and modification are satisfied efficiently while maintaining database
integrity. The architecture also ensures that, should parts of the system become
unavailable, mechanisms of the architecture can be used to recover from such failure
and, once again, bring the database to a consistent state, ensuring database integrity.
Furthermore, the architecture of Oracle needs to provide this capability to many clients
at the same time so performance is a consideration when the architecture is configured.

Oracle Instance - The instance is a combination of a memory structure shared by all clients accessing the data, and a number of background processes that perform actions for the instance as a whole.
The shared memory structure is called the SGA, which stands for System Global Area or
Shared Global Area, depending on who you ask. Either term is equally acceptable and
the acronym SGA is the most common way to refer to this memory structure.
Oracle also includes a number of background processes that are started when the
instance is started. These include the database writer (DBW0), system monitor (SMON),
process monitor (PMON), log writer (LGWR), and checkpoint process (CKPT). Depending
on the configuration of your instance and your requirements, others may also be
started. An example of this is the archiver process (ARC0), which will be started if
automatic archiving of log files is turned on.

Oracle Database - The database consists of three types of files. Datafiles, of which there can be many depending on the
requirements of the database, are used to store the data that users query and modify.
The control file is a set of one or more files that keeps information about the status of
the database and the data and log files that make it up. The redo log files are used to
store a chronological record of changes to the data in the datafiles.
User Process - The user process is any application, either on the same computer as
the database or on another computer across a network that can be used to query the
database. For example, one of the standard Oracle query tools, SQL*Plus, is a user
process. Another example of a user process is Microsoft Excel or an Oracle Financials
application. Any application that makes use of the database to query and modify data in

the database is considered a user process. The user process does not have to come
from Oracle Corporation; it only needs to make use of the Oracle database.
Server Process - The server process is a process launched when a user process makes
a connection to the instance. The server process resides on the same computer as the
instance and database and performs all of the work that the user process requests. As
you will find out in more detail later in this chapter, the server process receives requests
from the user process in the form of SQL commands, checks their syntax, executes the
statements and returns the data to the user process. In a typical Oracle configuration,
each user process will have a corresponding server process on the Oracle server to
perform all the work on its behalf.
Oracle Instance - As shown earlier in Figure 1-2, the Oracle instance is made up of a
shared memory structure (the SGA), which is composed of a number of distinct memory
areas. The other part of the instance is the set of background processes, both required
and optional, that perform work on the database.
The instance is always associated with one, and only one, database. This means that when an instance is started, the DB_NAME
parameter in the Oracle parameter (INIT.ORA) file specifies which database the instance
will be connected to, while the INSTANCE_NAME parameter (which defaults to the value
of the DB_NAME parameter) specifies the name of the instance. The configuration of the
instance is always performed through parameters specified in the INIT.ORA file and one
environment variable, ORACLE_SID, which is used to determine which instance to start
and perform configuration operations on when on the same server as the database.
One of the main objectives of the instance is to ensure that connections by multiple
users to access database data are handled as efficiently as possible. One way it
accomplishes this is by holding information in the datafiles in one of its shared memory
structures, the database buffer cache, to allow multiple users reading the same data
to retrieve that data from memory instead of disk since access to memory is about a
thousand times quicker than access to a disk file.

Another reason that the instance is important is that, when multiple users access Oracle
data, allowing more than one to make changes to the same data can cause data
corruption and cause the integrity of the data to become suspect. The instance
facilitates locking and the ability for several users to access data at the same time.
Note: It is important to remember that a user process, when attempting to access data
in the database does not connect to the database but to the instance. When specifying
what to connect to from a user process, you always specify the name of the instance
and not the name of the database. The instance, in this way, is the gatekeeper to the
database. It provides the interface to the database without allowing a user to actually
touch the various files that make up the database.
System Global Area (SGA) - The SGA is a shared memory structure that is accessed
by all processes in order to perform database activity, such as read and write data, log
changes to the log files, and keep track of frequently executed code and data dictionary
objects. The SGA is allocated memory from the operating system on which the instance
is started, but the memory that is allocated to it is managed by various Oracle
processes. The SGA is composed of several specific memory structures, as shown earlier
in Figure 1-2.
These include:
Shared Pool - The Shared Pool is an area of SGA memory whose size is specified by
the INIT.ORA parameter SHARED_POOL_SIZE. The default value for SHARED_POOL_SIZE
is 3,000,000 bytes (just under 3MB) in versions of Oracle prior to 8.1.7, and 8,000KB (just under 8MB) in Oracle 8.1.7. The size of the shared pool remains constant
while the instance is running and can only be changed by shutting down and restarting
the instance, after modifying the value in the INIT.ORA file.
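To see how the current setting relates to actual usage, the parameter value and the free memory in the shared pool can be checked with queries such as these (a sketch; the figures vary from instance to instance):
SQL> show parameter shared_pool_size
SQL> select pool, name, bytes from v$sgastat
     where pool = 'shared pool' and name = 'free memory';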
The shared pool is divided into two main areas of memory the data dictionary
cache (also called the dictionary cache or row cache) and the library cache. The data
dictionary cache is used to store a cached copy of information on frequently accessed
data dictionary objects. The information cached includes the name of the object,
permissions granted on the object, dependency information, and so on. The data
dictionary cache also includes information on the files that make up the database and
what tablespaces they belong to, as well as other important information.
When a server process needs to determine what the name 'Students' refers to, it
queries the data dictionary cache for that information and, if the information cannot be
found, it reads the information from the datafile where the data dictionary is located
and then places it in a cache for others to read. The information in the cache is stored

using a least-recently-used (or LRU) algorithm. This means that information that is
frequently requested remains in the cache while information that is only occasionally
required is brought into the cache and flushed out if space is required to bring other
information in.
You cannot manually size the data dictionary cache; Oracle does this dynamically and
automatically. If more memory is required to cache data dictionary information, which
may be the case in a database with many objects, the cache is made larger to
accommodate the requests. If the memory is needed by the library cache component of
the shared pool, some memory may be freed up and removed from the data dictionary
cache.
The other major component of the shared pool is the library cache. The library cache is
used to store frequently executed SQL statements and PL/SQL program units such as
stored procedures and packages. Storing the parsed statement along with the execution
plan for the commands sent to the server in memory allows other users executing the
same statement (that is, identical in every way including case of statement text, spaces
and punctuation) to reduce the work required by not having to re-parse the code. This
improves performance and allows the code to appear to run quicker.
The library cache is broken up into a series of memory structures called shared SQL areas that store three elements of every command sent to the server: the text of the SQL statement or anonymous PL/SQL block itself, the parse tree or compiled version of the statement, and the execution plan for the statement that outlines the steps to be followed to perform the actions for the SQL statement or PL/SQL block.
Each shared SQL area is assigned a unique value that is based upon the hash calculated
by Oracle from the text of the statement, the letters and case thereof in it, spacing, and
other factors. Identical statements always hash out to the same value whereas different
statements, even returning the same result, hash out to different values. For example,
the following two statements use two shared SQL areas because their case is different,
though they both return the same information:
SELECT * FROM DBA_USERS;
select * from dba_users;
One of the goals for ensuring good performance of the applications accessing data in
the database is to share SQL areas by ensuring that statements returning the same
result are identical, thereby allowing each subsequent execution following the first of
the same SQL statement to use the execution plan created the first time a command is
run. The preceding two statements are considered inefficient because they would need

to allocate two shared SQL areas to return the same results. This would consume more
memory in the shared pool (once for each statement), as well as cause the server
process to build the execution plan each time. Like the data dictionary cache, the library
cache also works on an LRU algorithm that ensures that statements that are frequently
executed by users remain in the cache while those executed infrequently or just once are aged out when space is required. Also like the data dictionary cache, you cannot specifically size the library cache; Oracle sizes it automatically based upon the
requirements of the users and statements sent to the server, as well as memory
allocated to the shared pool with the SHARED_POOL_SIZE parameter.
Database Buffer Cache - The database buffer cache is used to store the most recently
used blocks from the datafiles in memory. Because Oracle does not allow a server
process to read data from the database directly before returning it to the user process,
the server process always checks to see if a block it needs to read is in the database
buffer cache, and, if so, retrieves it from the cache and returns the rows required to the
user. If the block the server process needs to read is not in the database buffer cache, it
reads the block from the datafile and places it in the cache.
The database buffer cache also uses an LRU algorithm to determine which blocks should
be kept in memory and which can be flushed out. The type of access can also have an
impact on how long a block from the datafile is kept in the cache. In the situation where
a block is placed in the cache as a result of an index lookup, the block is placed higher
in the list of blocks to be kept in the cache than if it were retrieved as a result of a full
table scan, where every block of the table being queried is read. Both placing the
datafile blocks in the database buffer cache in the first place, and their importance in
being kept there for a long or short period, is designed to ensure that frequently
accessed blocks remain in the cache.
The database buffer cache is sized by a couple of Oracle initialization parameters. The
INIT.ORA parameter DB_BLOCK_SIZE determines the size, in bytes, of each block in the
database buffer cache and each block in the datafile. The value for this parameter is
determined when the database is created and cannot be changed. Essentially, each
block in the database buffer cache is exactly the same size, in bytes, as each database
block in the datafiles. This makes it easy to bring datafile blocks into the database
buffer cache: they are the same size. The default for DB_BLOCK_SIZE is 2,048 bytes (2KB), which is too small in almost all cases.
The other INIT.ORA parameter that is used to determine the size of the database buffer
cache is DB_BLOCK_BUFFERS. This parameter defaults to 50, which is also its minimum
value. The total amount of memory that will be used for the database buffer cache is
DB_BLOCK_BUFFERS * DB_BLOCK_SIZE. For example, setting DB_BLOCK_BUFFERS to 2,000 when DB_BLOCK_SIZE is 8,192 allocates 2,000 * 8,192 bytes, or about 16MB of RAM, for the
database buffer cache. When sizing the database buffer cache, you need to consider the
amount of physical memory available on the server and what the database is being
used for. If users of the database are going to make use of full table scans, you may be
able to have a smaller database buffer cache than if they frequently accessed the data
with index lookups. The right number is always the balance between a sufficiently high
number so that physical reads of the datafiles are minimized and not too high a number
so that memory problems take place at the operating system level.
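For example, a hypothetical INIT.ORA fragment sizing the cache to roughly 16MB with an 8KB block size (the values are illustrative, not recommendations):
# buffer cache = db_block_buffers * db_block_size
db_block_size    = 8192    # fixed when the database was created
db_block_buffers = 2000    # 2000 * 8192 bytes = roughly 16MB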
Redo Log Buffer - Before a change to a row or the addition or removal of a row from a
table in the database is recorded to the datafiles, it is recorded to the redo log files and,
even before that, to the redo log buffer. The redo log buffer is essentially a temporary
storage area for information to be written to the redo log files.
When a server process needs to insert a row into a table, change a row, or delete a row,
it first records the change in the redo log buffer so that a chronological record of
changes made to the database can be kept. The information and the size of the
information, that will be written to the redo log buffer, and then to the redo log files,
depends upon the type of operation being performed. On an INSERT, the entire row is
written to the redo log buffer because none of the data already exists in the database,
and all of it will be needed in case of recovery. When an UPDATE takes place, only the
changed column values are written to the redo log buffer, not the entire row. If a
DELETE is taking place, then only the ROWID (unique internal identifier for the row) is
written to the redo log buffer, along with the operation being performed. The whole
point of the redo log buffer is to hold this information until it can be written to the redo
log file. The redo log buffer is sized by the INIT.ORA parameter LOG_BUFFER. The default
size depends on the operating system that is being used but is typically four times the
largest operating system block size supported by the host operating system. It is
generally recommended that the LOG_BUFFER parameter be set to 64KB in most
environments since transactions are generally short.
The redo log buffer is a circular buffer, which means that once entries have been written to
the redo log file, the space occupied by those entries can be reused by other
transactions. This is possible because the log writer (LGWR) background process flushes
the contents of the redo log buffer to the redo log files whenever a commit occurs or
whenever any one transaction occupies more than one-third of the space in the buffer.
This essentially means that the redo log files are the most write-intensive files in a
database and that the redo log buffer can also be kept relatively small and still be
efficient.
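As a sketch, the buffer is sized with the LOG_BUFFER parameter, and pressure on it can be checked from v$sysstat (the 64KB value simply follows the recommendation above):
# INIT.ORA
log_buffer = 65536
SQL> select name, value from v$sysstat
     where name = 'redo buffer allocation retries';
A steadily increasing value for that statistic suggests the redo log buffer is too small or that LGWR cannot keep up.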

While the database buffer cache, shared pool, and redo log buffer are a required part of
the SGA, the SGA also may have additional shared memory areas, based upon the
configuration of the instance. Two of the most common additional shared memory
structures are the large pool and the Java pool.
The large pool is sized by the INIT.ORA parameter LARGE_POOL_SIZE whose minimum
value is 0 but may actually allocate a different amount of memory based upon the
values of other INIT.ORA parameters such as PARALLEL_AUTOMATIC_TUNING. The large
pool is used for memory structures that are not directly related to the processing of SQL
statements. An example of this is for holding blocks in memory when a backup or
restore operation through Oracle Recovery Manager (RMAN) is taking place. Another use
of the large pool is for sort space in a multi-threaded server (MTS) environment.
The Java pool is used to store Java code and its execution plan. It is used for Java stored
procedures and functions and other classes that you have created that will be run in the
Java virtual machine (JVM) that resides on the Oracle server. The minimum value for
JAVA_POOL_SIZE is 0, but Oracle will always allocate a minimum of 32,768 bytes in
Oracle 8.1.6 or higher (in Oracle 8.1.5 the minimum value was 1,000,000 bytes and
setting the parameter JAVA_POOL_SIZE to a value less than that would generate an
error). The default value, if not set in the INIT.ORA file, is 20,000KB or 20MB.
Background processes - Aside from the shared memory structures of the SGA, the instance also
includes a number of background processes that perform actions that deal with
database operations for all users. In detailing these, you need to distinguish between
required and optional background processes.Oracle8i has five required background
processes. If any of these processes are killed or terminate for any reason, the instance
is considered to have crashed and instance recovery (that is, stopping and re-starting
the instance), and even database recovery may be needed. The required background
processes, as shown earlier in Figure 1-2, are the following:
SMON - System Monitor (SMON) is a background process that does exactly what you
would expect: it monitors the health of the instance as a whole and ensures that the
data in the database is consistent. To ensure database and instance integrity, when you
start the instance before the database is opened and available to users, SMON rolls
forward any committed transactions found in the redo log files and rolls back any
uncommitted transactions. Because all changes to the data in the database are
recorded to the redo log buffer and then the redo log files, this means that anything that
has taken place before the instance crashed is properly recorded to the datafiles. Once
this is completed, the database will be opened and its data available to users for
querying and modification.

Note: During instance startup SMON only actually performs a roll forward before the
database is opened. Rollback of uncommitted transactions takes place after the
database is opened and before users access any data blocks requiring recovery. This is
known as delayed transaction rollback and is there to ensure that the database can
be opened as quickly as possible so that users can get at the data. Before any data that
was not committed prior to the instance going down is queried or modified, SMON
ensures that a rollback takes place to bring it to a consistent state.
SMON also does some clean-up work by coalescing any free space in the datafiles to
make it contiguous. When a table, index, or other object that requires storage is
dropped or truncated, this frees up the space that was previously used by that object.
Because a single object could be made up of many sets of database blocks called
extents, and these extents could be of different sizes, SMON coalesces (or combines)
these extents into larger chunks of free disk space in the datafiles so that they may be
allocated to other objects, if needed. The reason for this is that if the extents were left
at their original size, a CREATE TABLE statement may fail if it cannot find an extent of
the size it requested, while free space would still exist in the datafile.
As well as coalescing free space, SMON also de-allocates temporary segments in
datafiles that belong to permanent tablespaces to ensure that they do not occupy space
required by tables, indexes, or other permanent objects. Temporary segments are
created when a sort being performed cannot completely be performed in memory and
disk space is needed as a temporary storage area. When this disk space is created on a
tablespace holding other permanent objects such as tables, indexes, and the like, SMON
needs to get rid of the temporary segment as quickly as possible to ensure that a table
or index does not run out of disk space if the space is needed.
PMON - Process Monitor (PMON) is a required background process that does exactly
what the name says: it monitors server and background processes to ensure that they
are operating properly and have not hung or been terminated. If a server process dies
unexpectedly, it is up to PMON to rollback any transaction that the process was in the
middle of, release any locks that the process held, and release any other resources
(such as latches) that the process may have held. When an Oracle server process dies,
these actions are not automatically performed; it is up to PMON to do so.
You should note that PMON might not perform these actions immediately after the
process has terminated. In the first place, how does PMON know for sure that a process
is hung and not just sitting idle? For example, a user could be connected to the instance
through SQL*Plus and then decide to go for lunch while still connected. Should his or her
process be terminated? Perhaps, but generally PMON cannot make that decision. It

waits for a clue that the process is no longer doing even simple things, such as
communicating its presence. The time PMON may wait to determine this can be quite
lengthy and cause others to be locked out from portions of the database. PMON will
eventually safely rollback the transaction, release locks, and clean up other resources.
In other words you may have to wait a while.
DBW0 - The Database Writer (DBW0) is a background process that performs a very
specific task: it writes changed data blocks (also known as dirty buffers) from the
database buffer cache to the datafiles. Whenever a change is made to data on a block
in the database buffer cache, the buffer where the change was made is flagged for
writes to the datafile. The database writer process, of which there could be several,
writes the changed blocks to the datafiles whenever a checkpoint occurs, or at other
pre-defined intervals. DBW0 does not write to the datafiles with the same frequency
that LGWR writes to the redo log files. The main reason for this is that the minimum size
of a write to the datafiles is DB_BLOCK_SIZE, which is at least 2,048 bytes. LGWR writes
to the redo log files no matter how much information needs to be recorded, and it could
be as little as 30 bytes, if only a single change of a small column is made in a
transaction. Therefore DBW0 writes are more expensive, in terms of disk I/O than LGWR
writes. Because DBW0 writes are more expensive, they are bunched and take place
when one of the following takes place:
The number of dirty buffers exceeds a pre-defined threshold. The threshold level is
different for each operating system and dependent on the values of other INIT.ORA
parameters but essentially is designed to ensure that a process can always find a clean
(that is, stable and already written to the hard disk) buffer in the cache to write its
information to. It is also preferable that the clean buffer be cold, not accessed for a
longer period of time and therefore lower on the LRU list of buffers.
A checkpoint has taken place. A checkpoint can be initiated manually by the DBA
issuing the command ALTER SYSTEM CHECKPOINT or automatically by a redo log file
group being filled up and LGWR needing to switch to a different redo log file group. In
either case, a checkpoint forces DBW0 to write all dirty buffers to disk to ensure that at
the time a checkpoint takes place all the datafiles are consistent with each other. A
checkpoint also causes the CKPT process to update all datafile headers and the control
files to ensure that the information is consistent across the database.
A server process cannot find a free buffer. Each time a server process needs to bring
data into the database buffer cache, it scans the LRU list of buffers to determine if a
clean free block exists. If it cannot find one after scanning a pre-defined number of

buffers, this will trigger DBW0 to write all dirty buffers to the datafiles and thereby
create clean buffers that the process can use for its information.
A timeout occurs. Oracle ensures that a write takes place to the datafiles every three
seconds, if needed so that the datafiles do not become too out-of-sync with the
information in the redo log files and the database buffer cache. It also helps to ensure
that changes to the data are written to the database ahead of checkpoints so that less
information needs to be written to the disk at that point in time.
In a typical Oracle8i instance, the number of database writers is set to one and others
are not started. If you have many hard disks and datafiles in your database, you may
want to increase the number of database writer processes. The Oracle initialization
parameter DB_WRITER_PROCESSES is used to configure the number of database
writers. Its default value is 1, which starts a process called DBW0. Increasing the value
of DB_WRITER_PROCESSES starts additional database writers up to a maximum of 10
(DBW1 to DBW9). You should only use more database writers if your system is write-intensive and you have datafiles on several disks.
LGWR - The Log Writer process, as mentioned previously, writes data from the redo log
buffer to the redo log files at defined thresholds. It performs this whenever an implicit or
explicit commit occurs; when more than 1MB of data exists in the redo log buffer; when
the log buffer is more than one-third full with data from a transaction (that is, when a
single transaction occupies a large portion of the redo log buffer); when DBW0 needs to
write to the datafiles, forcing LGWR to write out changes from the redo log buffer to the
redo log files beforehand; or when three seconds have elapsed since the last LGWR
write. For any instance there is only one LGWR process and this number cannot be
changed. Of the times that LGWR writes to the redo log files, the most common reason
for doing so is that a commit has taken place. The most unlikely reason for an LGWR
write is that the redo log buffer contains more than 1MB of changes. Very few databases
have a need for a large redo log buffer and having one that is extremely large can leave
you open to losing a large number of changes and being unable to fully recover the
database in case of instance failure.
One important element to keep in mind is what actually happens when a transaction
commits. When the user issues the COMMIT statement to commit the transaction, LGWR
is instructed to flush data in the redo log buffer to the redo log files. If for any reason
LGWR is unable to write to the redo log files, the transaction cannot commit and will be
rolled back. Oracle requires that a physical record (that is, a write to a disk file) exist in
order for the transaction to be considered committed. If the physical record cannot be
created (that is, a write to the redo log files cannot take place because the redo log file

is unavailable due to a disk crash or other occurrence), the commit cannot complete
and the user is notified that the transaction was aborted.
CKPT - The Checkpoint process is responsible for one thing: updating control files and
datafiles whenever a checkpoint takes place. Checkpoints can take place when a DBA
issues the command ALTER SYSTEM CHECKPOINT, when a redo log file group fills up and
the LGWR initiates the checkpoint, or when the values specified by the INIT.ORA
parameters LOG_CHECKPOINT_INTERVAL, LOG_CHECKPOINT_TIMEOUT, or FAST_START_IO_TARGET are exceeded. Prior to Oracle8 the CKPT process was optional
and not required. Whenever a checkpoint occurred, LGWR would update datafile
headers and control files. However, this would also mean that LGWR could not write to
the redo log files at the same time as it was performing checkpoint updates, which
slowed the system down. For this reason, the CKPT process was made a required
background process in Oracle8.
The required background processes, along with the SGA, provide the basic functionality
required for an Oracle instance to operate. However, depending on the configuration of
your database and what Oracle options you are using, additional background processes
can also be started. Some of the more common optional background processes include:
ARC0 - The Archiver Process is used to write redo log files that have been filled and
switched from to one or more archive log destinations. In Oracle8i, unlike previous
versions, you can configure multiple archiver processes using the INIT.ORA parameter
LOG_ARCHIVE_MAX_PROCESSES, which defaults to one.
If you specify a large number of archive log destinations and have configured archiving
for the instance, having more than one process can improve performance.
Snnn - The Shared Server Process is used in a multi-threaded server (MTS) environment
to process requests from database users. Unlike in a typical dedicated server
configuration where each user process is assigned a dedicated server process to
perform work on its behalf, a multi-threaded server configuration shares the server
processes among all the user processes, thereby making the use of resources on the
computer more efficient. The number of shared server processes is configured by the
INIT.ORA parameters MTS_SERVERS (which defaults to 0) and MTS_MAX_SERVERS (which
defaults to 20). If MTS_SERVERS is set to 0, the multi-threaded server is not configured
on the instance and no shared server processes are launched.
Dnnn - The Dispatcher Process is also used in an MTS configuration. When a request to
initiate an MTS connection to the instance is received from a user process, the
dispatcher is the one that is assigned to the process. The same dispatcher can be used

to service requests from many users and passes those requests to a queue where they
are picked up by a shared server process and executed. The results are then placed in a
queue for the dispatcher that requested it. The dispatcher picks up the results and
transmits them to the client. The configuration of dispatchers is performed by setting
the INIT.ORA parameters MTS_DISPATCHERS (default of 0) and MTS_MAX_DISPATCHERS
(default of 5).
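A hedged INIT.ORA sketch of a small Oracle8i MTS configuration (the protocol and counts are illustrative only):
mts_dispatchers     = "(PROTOCOL=TCP)(DISPATCHERS=2)"
mts_max_dispatchers = 5
mts_servers         = 2
mts_max_servers     = 20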
LCK0 - The Lock Process provides inter-instance locking between nodes in an Oracle
Parallel Server environment. When using Oracle Parallel Server, more than one instance
can access the same database. When users connected to one instance need to allocate
locks to change data in the database, the LCK0 process ensures that a user connected
to another instance does not already have the lock before allowing the requesting user
to get it. This is done to ensure that data in the database remains consistent at all
times, even when accessed by more than one instance.
RECO - The Recoverer Process is used in distributed database environments and only
when the DISTRIBUTED_TRANSACTIONS Oracle initialization parameter is set to a value
higher than zero. In this situation it will be started automatically and will be
responsible for resolving any failed distributed transactions with data residing on other
nodes in the distributed database configuration. Essentially it makes sure that all
databases in a distributed transaction are in a consistent state and that a distributed
transaction does not commit on one node while not on another.
Oracle database - An Oracle instance by itself only provides the mechanism to access
the database. If the database is not created, having the instance started allows you to
create it. However, to get any real use out of the instance, the database must exist. The
database is where the data is stored: both the metadata (data about data, also known
as the data dictionary) and user data, such as the Orders table or Customers table or
LastName index.
An Oracle database is composed of a series of operating system disk files, as shown in
Figure 1-3. The three key types of files that Oracle uses are datafiles, control files, and
redo log files. Each file type is required for a database to operate properly and the loss
of one or more of the files belonging to the database usually requires that recovery be
initiated.

Datafiles - Oracle datafiles contain the actual information that the database stores. When you create a table or index, it is
stored in a datafile (a physical element of storage) that belongs to a tablespace (a
logical element of storage). Datafiles store anything that is considered a segment in
Oracle, such as tables, indexes, clusters, partitions, large objects (LOBs), index
organized tables (IOTs), rollback segments, and temporary segments. Anything that
requires storage in the database, whether created by the user or by Oracle itself, is
stored in datafiles. In fact, even the configuration of the database itself and the
datafiles, redo log files, tables, indexes, stored procedures, and other objects that exist
in the database are stored in a datafile.
Datafiles in Oracle8i have certain characteristics, such as:
A datafile can only be associated with one database. It is not possible to create a
datafile that will be part of two databases at the same time. You can, when using Oracle
Parallel Server, have two instances access the same databases and datafile.
A datafile belongs to a logical storage element called a tablespace. A single datafile
can only be associated with one tablespace and will only store data that is configured to
reside on that tablespace.
Datafiles can be configured to have a fixed size, or they can have an attribute set to
allow them to grow, should no free space within the datafile be found. If you configure a
datafile to autogrow, you can also configure a maximum size for the datafile to grow to,
or set no limit (not recommended).
Datafiles are organized internally into database blocks of the same size as the value
of the DB_BLOCK_SIZE parameter. Each unit of storage inside the datafile is of
DB_BLOCK_SIZE.
Datafiles can be read by any server process in order to place blocks of data in the
database buffer cache. Datafiles are normally written to only by the DBW0 process to
minimize the possibility of corruption.
Redo log files - Redo log files contain a chronological record of all changes that have
been made to the database. They are written to by the LGWR process and operate in a

circular fashion, which means that when one redo log file fills up, it is closed and
another redo log file is opened for writes. When the second one fills up it is closed and
the first, or another redo log file, is opened for writes, and so on. Each Oracle database
must have at least two redo log file groups with one redo log file per group, or the
database will cease to allow changes to the data.
Control files - When an Oracle instance is started, one of the first files opened and
read is the control file. The control file contains information about what files make up the
database and when the last bit of information was written to them. If the information in
the control file and one of the datafiles or redo log files does not match, the instance
cannot be opened because the database is considered suspect. This means that some
sort of recovery may be required and you will need to deal with it. The location and
number of control files is set by the INIT.ORA parameter CONTROL_FILES. A database
must have at least one control file, although two or more are recommended. If a control
file cannot be opened for any reason, the instance cannot start and the database will
not be accessible.
Other key Oracle files - While datafiles, redo log files, and control files make up the
Oracle database, other files are also required to make the instance work properly, to
determine who is allowed to start and stop the instance, and to ensure good
recoverability. The files available for this purpose, as shown in Figure 1-3, are the
parameter (or INIT.ORA) file, the password file, and the archived redo log files.
Parameter file - The Oracle parameter file, also known as the INIT.ORA file, contains
the parameters and values that are used to start and configure an Oracle instance. The
parameter file may also contain settings for parameters that determine how Oracle
behaves in processing a query. An example of the latter is the OPTIMIZER_MODE
parameter, which can determine whether Oracle should use statistics in calculating the
execution plan for a specific query. The parameter file can be stored in any location on
the hard drive of the computer where the instance will be started and can have any
name. The default location for the parameter file is operating system dependent,
although the name always defaults to INITSID.ORA, where SID represents the name of
the instance it will be used for.
Password file - In order for a user to be able to start and stop the instance, special
privileges called SYSDBA or SYSOPER are required. The password file, which is created
by using the Oracle Database Configuration Assistant or the ORAPWD utility, lists the
users that have been granted one or both of the privileges mentioned. When a user
issues the STARTUP or SHUTDOWN command, the password file is checked to ensure
that the user has the appropriate privileges.

In order to add users to the password file, the INIT.ORA parameter REMOTE_LOGIN_PASSWORDFILE must be set to EXCLUSIVE. When the parameter is set to SHARED (or NONE), you cannot add users to the file.
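A sketch of creating a password file with the ORAPWD utility and then granting the privilege (the file name, password and entries count are examples only):
$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=secret entries=5
SQL> grant sysdba to scott;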
Archived redo log files - Archived redo log files are copies of full redo log files that
have been closed by LGWR and are created by the ARC0 process when automatic
archiving is turned on and the database is in ARCHIVELOG mode. An Oracle database by
default runs in NOARCHIVELOG mode, which means that as redo log files become full,
they are not copied or backed up in any way. If the redo log files need to be written to
again because other log files have also become full and a log switch has taken place,
the information in the previously full log is overwritten with no record of the changes
within it ever being recorded. From a recovery standpoint, running a database in
NOARCHIVELOG mode essentially means that if anything goes wrong, you must restore
your last full database backup and re-enter any data that changed since then.
When the database is in ARCHIVELOG mode, Oracle does not allow a redo log file to be
overwritten until it is archived, that is, copied to a different location, by the ARC0
process. Until the redo log file is archived it cannot be written to. If the ARC0 process is
not started (indicating that automatic archiving has not been configured with the LOG_ARCHIVE_START and LOG_ARCHIVE_DEST_n Oracle initialization parameters), users will not be able to make changes to the database until archiving takes place.
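A hedged INIT.ORA sketch that enables automatic archiving in Oracle8i (the destination and format are examples); the database itself is switched to ARCHIVELOG mode with ALTER DATABASE ARCHIVELOG while it is mounted:
log_archive_start  = true
log_archive_dest_1 = "location=/u02/oradata/test/arch"
log_archive_format = arch_%s.arc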
Having an archived copy of the redo log file enables you to perform data recovery up to
the point of failure, when combined with the current redo log file. ARCHIVELOG mode
also allows you to recover the database to a specific point in time or a particular
transaction, which provides flexibility from disastrous user error, such as someone
issuing the command DELETE FROM CUSTOMERS and then committing the transaction.
An archived redo log file is simply an image copy of a redo log file that has been
completely filled by LGWR and a log switch to another log file group has taken place.
However, having a copy of that redo log file ensures that your critical data can be
recovered with more flexibility.
Processing SQL Statements
Connecting to an instance: In order to submit SQL statements for processing to
Oracle, you must first establish a connection to the instance. The process of doing so, as
shown in Figure 1-4, may include a number of components, such as the user process,
Net8, the physical network, the server computer, server process, the instance and,
finally, the database.

Note: While a network connection between the client (user process) and the server is not required, this is the more typical
configuration. It is also possible to have both the user process and server process on
the same computer, as would be the case if you were running SQL*Plus on the same
machine as the database. With the exception of the network connection, all other
elements of the way requests are sent to the server and the results returned to the user
process remain the same.
The process of connecting to an instance is initiated when a user starts an application
on the client computer that makes use of data in an Oracle database. By specifying a
connect string (username, password, and the instance to connect to), the user instructs
the client application to start the process of establishing a connection to the Oracle
instance (not the database).
After the user has provided a username and password, as well as the name of the
instance to connect to, the Net8 client component on the client computer attempts to
resolve the name of the instance in any of a number of ways that have been configured
for the client. One of these methods is to use a file on the local client computer called
TNSNAMES.ORA to look up the instance name and determine which machine the
instance resides on and the network protocol that needs to be used to communicate
with it. Another method is to contact an Oracle Names server to determine the same
information. Whichever way is configured for Net8 to use, the process of determining
the location of the instance will be transparent to the user process, unless an error is
encountered.
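For illustration, a TNSNAMES.ORA entry might look like the following; the alias ORCL, the host name dbserver01, the port, and the service name are placeholders only:
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )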
Once Net8 has determined on which computer the instance to be connected to resides,
it sends a connection request with the username and password along the network using
the protocol configured to communicate with that computer. On the server computer, the
listener process of Net8 receives the request and launches a dedicated server process
on behalf of the user (or connects the client to a dispatcher, in the case of an MTS
connection request) and verifies the username and password. If the username and
password are not correct, the user is returned an error and needs to try again; if
correct, the user is connected to the instance. At this point, the communication between
the user process and the server process is direct and the listener is no longer involved
until another connection attempt is made.
In processing client requests, the server process receives the request from the user
process and, while processing the SQL statement, makes use of an area of memory
allocated explicitly to it: the Process Global Area (or Program Global Area), or PGA for
short. The PGA is private to each server process launched on the server and is used to
allocate memory to perform sorts, store session information (such as the
username and associated privileges for the user), keep track of the state of various
cursors used by the session, and provide stack space for keeping track of the values of variables
and the execution of PL/SQL code for the session. In a multi-threaded server (MTS)
environment, some of this information is kept in the large pool because server
processes are shared in an MTS configuration, but a typical dedicated server
environment allocates a PGA when a server process starts and de-allocates it when the
process terminates.
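As a rough sketch, the PGA and UGA memory consumed by the current session can be observed through the session statistics (statistic names may vary slightly between versions):
SQL> SELECT n.name, s.value
     FROM v$statname n, v$mystat s
     WHERE n.statistic# = s.statistic#
     AND n.name IN ('session pga memory', 'session uga memory');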
Statement and transaction processing: The user and server processes, the PGA,
and the SGA, along with the other processes that make up the Oracle instance, all work
together when a SQL statement is sent to the Oracle server to query or update data.
When a user issues any SELECT, INSERT, UPDATE, or DELETE statement, Oracle must go
through several steps to process these queries. Consider the processing of the following
statement:
UPDATE Courses
SET RetailPrice = 1900
WHERE CourseID = 101;

When this statement is executed, Oracle goes through the following steps. Oracle first
parses the statement to make sure that it is syntactically correct. The parse phase is
typically done once for any SQL statement and will not need to be performed again if
the same statement is executed by any user, even the same one that sent it across in
the first place. Oracle always attempts to minimize the amount of parsing that needs to
be performed because it is quite CPU-intensive, and having to parse many statements
increases the amount of work that needs to be performed by the server process.

During this parse phase, Oracle (that is, the server process) first determines whether
the statement is syntactically correct. If not, an error is returned to the user and no
further work is performed; if so, Oracle next determines whether the objects that are
referenced (in this case, the Courses table) are available for the user and whether the
user has permission to access the object and perform the required task (that is, the
UPDATE). It does this by locating information about the object in the data dictionary
cache or, if this information is not in cache, by reading the information from the
datafiles where the data dictionary resides and placing it in the cache. By placing
information about the object in the cache, it ensures that future requests for the object
are performed more quickly, in case other users are also referencing the Courses table. If the
user does not have permissions or the object does not exist, an error is returned to the
user.
When the object is located and the user has permissions, the next element of the parse
phase is to apply parse locks on the objects being referenced by the statement (the
Courses table) to ensure that no one makes a structural change to the object while it is
being used, or drops the object. The server process next checks whether the statement
has been previously executed by anyone by calculating a unique hash value for the
statement and checking the shared pool to see if the shared SQL areas contain the hash
value calculated. If so, then Oracle does not need to build the execution plan (the series
of tasks to be performed to satisfy the query). It can simply keep the execution plan that
was previously created and use it in the next phase of processing. If it cannot find the
execution plan, indicating this is the first time the statement is being run, or the
statement is no longer in the shared pool and has been aged out, Oracle then builds the
execution plan and places it in the shared SQL area in the shared pool. Oracle then
proceeds to the execute phase of processing.
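To illustrate this reuse, the shared SQL areas can be inspected through V$SQL; a statement that is parsed once and executed repeatedly typically shows EXECUTIONS far higher than LOADS (the WHERE clause below merely assumes the example statement above):
SQL> SELECT sql_text, parse_calls, executions, loads
     FROM v$sql
     WHERE sql_text LIKE 'UPDATE Courses%';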
During the execute phase, Oracle runs the execution plan in the shared pool and
performs whatever tasks are contained therein. This includes locating the relevant
blocks of data in the Database Buffer Cache, or, if they are not in the cache, the server
process reads the datafiles where the data resides and loads the data blocks into the
Database Buffer Cache within the SGA. The server process then places a lock on the
data being modified (in this case, the row containing course 101). This lock prevents
other users from updating the row at the same time you are updating it. Oracle then
updates a rollback segment block and a data segment block in the database buffer
cache, and records these changes in the redo log buffer. It places the data in the row
prior to the update in the rollback block and the new value in the data block.
The rollback segment is used for two purposes:

Read consistency: Until the change is committed, any user who executes a query for
the retail price of course 101 sees the price prior to the update. The new value is not
visible until the update is committed.
Transaction rollback: If the system crashes before the transaction is committed, or if
the user issues an explicit ROLLBACK command, the data in the rollback segment can be
used to return the row to its initial state.
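A simple way to see both behaviours, assuming the Courses table used in the example above, is to run the update in one session and query the row from a second session before committing:
-- Session 1: change the price but do not commit yet
UPDATE Courses SET RetailPrice = 1900 WHERE CourseID = 101;
-- Session 2: still sees the old price (read consistency)
SELECT RetailPrice FROM Courses WHERE CourseID = 101;
-- Session 1: either make the change permanent or undo it
COMMIT;   -- or ROLLBACK;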
The final phase of processing is the fetch phase. For a SELECT statement, the fetch
phase of processing returns the actual data to the user, and it is displayed in SQL*Plus,
or the user process that made the request. For an UPDATE operation, or any data
manipulation language (DML) statement, the fetch phase simply notifies the user that
the requisite number of rows has been updated.
When other statements are part of the same transaction, the same series of steps (that
is, parse, execute, and fetch) takes place for each statement until the user issues a
COMMIT or ROLLBACK statement. When the transaction is committed or rolled back,
Oracle ensures that all information in the redo log buffer pertaining to the transaction is
written to the redo log files, in the case of a COMMIT, or the data blocks are restored to
their previous state, in the case of a ROLLBACK, and removes all locks. Oracle also
erases the values held in the rollback segment. This means that once a transaction is
committed, it is no longer possible to roll it back, except by performing a database
restore and recovery.
***********************************************************

How to find IP address of Unix Server


netstat -ni
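Depending on the platform, the following may also show the configured addresses (command availability and options vary between Unix flavours):
ifconfig -a
hostname -i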

DataGuard Switch Over


DataGuard SWITCH OVER
Phase 1 PRIMARY:
1) SELECT SWITCHOVER_STATUS FROM V$DATABASE;
2) /* ALTER SYSTEM SET remote_archive_enable=RECEIVE SCOPE=SPFILE; */
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY WITH SESSION SHUTDOWN; (needed only
if SWITCHOVER_STATUS in step 1 reports SESSIONS ACTIVE; otherwise not needed)
3) ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
4) SHUTDOWN IMMEDIATE;
5) STARTUP NOMOUNT;
6) ALTER DATABASE MOUNT STANDBY DATABASE;
7) ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

Phase 2 STANDBY:
1) SELECT SWITCHOVER_STATUS FROM V$DATABASE;
2) /* ALTER SYSTEM SET remote_archive_enable=SEND SCOPE=SPFILE; */
3) ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
4) SHUTDOWN IMMEDIATE;
5) STARTUP;
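As an optional sanity check after the switchover, the new roles can be verified on both sites, and a log switch on the new primary confirms that redo is being shipped to the new standby:
SELECT database_role, switchover_status FROM v$database;
ALTER SYSTEM SWITCH LOGFILE;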
Sep252009

RMAN Control File AutoBackup On/Off


$rman target /
RMAN> show controlfile autobackup;
RMAN> show controlfile autobackup format;
To Turn Off Auto Control file Backup
RMAN> configure controlfile autobackup off;
RMAN> show controlfile autobackup;
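To turn the autobackup back on, and optionally to control where the autobackup piece is written, something like the following can be used (the disk path is only an example; the format string must contain %F):
RMAN> configure controlfile autobackup on;
RMAN> configure controlfile autobackup format for device type disk to '/u01/backup/cf_%F';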

How to Calculate Hit Ratio from Tkprof output


Tkprof output will have three relevant fields:
1. Disk
2. Query
3. Current
Generally, the hit ratio is calculated as follows:
Hit Ratio = 1 - (Physical Reads / Logical Reads)
Physical Reads = Sum (Disk)
Logical Reads = Sum (Query) + Sum (Current)
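For example, with hypothetical totals from a tkprof report of disk = 1,200, query = 45,000 and current = 5,000:
Physical Reads = 1,200
Logical Reads  = 45,000 + 5,000 = 50,000
Hit Ratio      = 1 - (1,200 / 50,000) = 0.976 (about 97.6%)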
