
9866465379

Oracle DBA

What is Oracle?
An Oracle database is a collection of related data treated as a unit. The
purpose of a database is to store and retrieve related information. A
database server manages large amounts of data reliably in a multiuser
environment, so that multiple users can access the same data
concurrently with high performance, and it allows efficient recovery
after a failure.
Who is a DBA?
A DBA is a person who is responsible for managing the database server.
A database can sometimes be very large and have a large number of
users, so administration is often not a single person's job but is shared
by a group of DBAs.
DBA's Responsibilities

As an Oracle DBA, you can expect to be involved in the following tasks:


● Installing Oracle software
● Creating Oracle databases
● Performing upgrades of the database and software to new release
levels
● Starting up and shutting down the database
● Managing the database's storage structures
● Managing users and security
● Managing schema objects, such as tables, indexes, and views
● Making database backups and performing recovery when necessary
● Proactively monitoring the database's health and taking
preventive or corrective action as required
● Monitoring and tuning performance
● Configuring Oracle Net Services
● Configuring high-availability features, such as creating standby
databases
● Performing database cloning
● Performing database migrations, etc.
Types of Database Users


Database Administrators
These are the people who install the Oracle software, create the database,
and configure and manage it on the host computer.
Security Administrators or officers
These create users in the database, control and monitor user access
to the database, and look after system security.
Network Administrators
These look after the administration of Oracle network products.
Application Developers
These people design and implement database applications. They design
the database structure, estimate storage requirements for the
application, and communicate these to the database administrator.
Application Administrators
Every application can have its own administrator.
Database Users
These users interact with the database through applications or
utilities and are responsible for entering, modifying, and deleting data where permitted.

The Oracle Product Family

Oracle database 10g


This database product introduces many new features. Its three main goals
are ease of management, enhanced scalability, and performance.
The ease-of-management features include automatic management of the disk
storage allocated to the database, implemented through
Automatic Storage Management (ASM).
Scalability and performance are based on the grid computing model:
with the help of automated performance tuning and monitoring,
the resources allocated to the database can be adjusted dynamically.
There are several editions of Oracle 10g.
Enterprise Edition
This includes all available Oracle 10g components as a bundle.
It is intended for mission-critical applications, such as high-volume
online transaction processing, and provides the performance, availability,
scalability, and security these require.
Standard Edition
Includes the ease-of-use management features, performance, and full
clustering features. It supports servers running as many as four processors.
Standard Edition One
Includes all 10g ease-of-management features for servers and all the
facilities necessary to build business-critical applications. It supports
servers running as many as two processors.
Personal Edition
Includes all available 10g features, whether bundled or extra-cost options,
but for an individual user's database.
Express Edition
This is the entry-level Oracle database edition: quick to download,
install, and manage, and free to develop, deploy, and distribute. It can be
installed on any machine with any number of processors, stores up to
4 GB of user data, and can use 1 GB of RAM.
Lite
This edition is useful for developing, deploying, and managing
applications for mobile and embedded environments.

Oracle database 11g


Oracle database 11g Enterprise
Oracle Database 11g Release 2 Enterprise Edition delivers industry-leading
performance, scalability, security, and reliability on a choice of
clustered or single servers running Windows, Linux, and UNIX. It
provides comprehensive features to easily manage the most demanding
transaction processing, business intelligence, and content management
applications:
● Protects from server failure, site failure, and human error, and reduces
planned downtime
● Secures data and enables compliance with unique row-level
security, fine-grained auditing, transparent data encryption, and
total recall of data
● Delivers high-performance data warehousing, online analytic processing,
and data mining
● Easily manages the entire information lifecycle for the largest of
databases

Oracle database 11g standard


Oracle Database 11g Standard Edition is an affordable, full-featured
database for servers with up to four sockets. It includes Oracle Real
Application Clusters for higher availability, provides enterprise-class
performance and security, is simple to manage, and can easily scale as
demand increases. It is also upwardly compatible with Enterprise Edition
and can easily grow with you, protecting your initial investment.
Oracle Database 11g Standard One
Oracle Database 11g Standard Edition One is an affordable, full-featured
database for servers with up to two sockets. It provides enterprise-class
performance and security, is simple to manage, and can easily scale as
demand increases. It is also upwardly compatible with other database
editions and can easily grow with you, protecting your initial investment.
● Get started with a low entry cost of $180 per user (minimum 5 users)
● Support all business applications with enterprise-class
performance, security, availability, and scalability
● Run on Windows, Linux, and UNIX operating systems, and easily
manage with automated, self-managing capabilities

Oracle Application Server 10g


This is used to deploy web-based applications that, like the database,
are highly reliable and scalable to thousands of users.
It is available in a number of versions; all include full Java
functionality and Oracle's HTTP server, with additional components such as
portals, forms, reports, and wireless connectivity.

Oracle developer Suite


This contains several products that are useful for designing and
developing web-based applications. Some of the products available are
Designer, JDeveloper, Forms and Reports, Discoverer, Warehouse Builder, etc.

Oracle Application 11i


The database, application server, and developer products are collectively
called Oracle Applications 11i. This is composed of a number of modules
used to manage financials, manufacturing, sales, service, etc. for
both business and public sector organizations.

Oracle Collaboration suite


This offers a comprehensive system that integrates all of a business's
communication technologies, from email, voice mail, and faxes to wireless
connectivity. Like Applications 11i, it also uses the database and
application server as core technology.
Apart from the above, Oracle also offers technical and consulting services.
Technical support is delivered through Oracle's MetaLink website and is
available to all customers with a current maintenance agreement. Oracle
also offers consulting services to help customers install and configure
Oracle products.
MetaLink is at http://metalink.oracle.com
A valid Customer Support Identifier (CSI) is required to create a
MetaLink account.
DBAs and developers can freely access http://otn.oracle.com
for services and resources.
Another service offered by Oracle is education.
Connecting to Oracle database as DBA user
For an administrator, basic database operations such as starting up
and shutting down a database are granted through two special system
privileges called SYSDBA and SYSOPER.
A user who is going to work as a DBA must be granted one of these
privileges. They allow access to a database instance even when the
database is not open. A user who has one of these privileges can
connect to the database as follows.
[raju@linux10 ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Mar 15 20:48:53 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> conn as sysdba
Enter user-name: sys
Enter password:
Connected to an idle instance.
SQL> show user;
USER is "SYS"
SQL>
When you connect with the SYSDBA privilege, the default schema is SYS.
Oracle uses the following methods for authenticating DBAs:

1. Operating system authentication
2. Password file authentication

Precedence is always given to operating system authentication: if you
meet the operating system authentication requirements, authentication is
done by the operating system even if you use a password file.
When you connect to the database from a remote system as a
privileged user, authentication is done by the password file.
If the user has an OS account in the OSDBA or OSOPER operating system
group and the database is running on the same machine, the user can
connect to the database as follows.
[raju@linux10 ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Mar 15 20:50:59 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL>

[raju@linux10 ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Mar 15 20:51:28 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> conn / as sysoper;
Connected to an idle instance.
SQL>

[raju@linux10 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Mar 15 20:52:04 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
Under UNIX, the OSDBA group is the “dba” group and the OSOPER group is
the “oper” group.
We can also connect to the database using the SYS and SYSTEM
accounts, which are created automatically at installation and database
creation time and are granted the DBA role. After installation we need
to change the default passwords for the SYS and SYSTEM accounts:
For the SYS account the default password is CHANGE_ON_INSTALL.
For the SYSTEM account the default password is MANAGER.
SYS USER
All the base tables and views of the database data dictionary are stored
in the SYS schema. These are critical to the operation of the Oracle
database and should never be modified by any user or database
administrator, and no one should create tables in the SYS schema.
SYSTEM USER
This account is used to create additional tables and views that display
administrative information, as well as internal tables used by various
Oracle tools.
The DBA role
A predefined DBA role is automatically created with every Oracle
database installation. This role contains many database system
privileges, so it should be granted only to DBAs.
It does not include the SYSDBA and SYSOPER system privileges.
We can use SQL*Plus to connect to Oracle locally.
For this you must ensure that the environment variables are set properly.
Each Oracle instance has a unique System Identifier (SID).
We can set the following variables:


ORACLE_SID
ORACLE_BASE
ORACLE_HOME
LD_LIBRARY_PATH
Under UNIX flavors we can add these variables to the shell profile files
so that they are set automatically at login.
For example, if the default login shell is bash, append the following
lines at the end of a file called .bash_profile:

export ORACLE_SID=prod
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/10.2.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib
export PATH=$PATH:$ORACLE_HOME/bin

Differences between SYSDBA and SYSOPER privileges

SYSDBA :
Allows startup and shutdown of the database
Allows altering the database
Allows creating and dropping a database
Allows creating an spfile
SYSOPER :
Allows startup and shutdown of the database
Allows creating an spfile
Allows altering the database
We can grant the SYSDBA and SYSOPER privileges to any database user,
who thereafter can act as a DBA.
SQL> grant sysdba to ramesh;
The above statement adds the user to the password file and enables the
user to connect as SYSDBA.


Now the user can connect to the database as follows

sqlplus ramesh/ramesh@sales as sysdba

We can use REVOKE to remove the privileges.


SQL > revoke sysdba from ramesh;

To see all the users in the password file, use:


SQL > select * from v$pwfile_users;

Apart from user tables, the Oracle database also contains some system
tables that store data about the database itself.
These system tables include the names of all the tables in the database,
the column names and datatypes of those tables, the number of rows the
tables contain, security information about which users are allowed to
access them, and so on.
This data about the database is referred to as metadata.
These tables have cryptic names such as OBJ$, FILE$, etc.
To make it easier to use SQL to examine the metadata tables, Oracle
builds views on them.
An Oracle 10g database contains two types of metadata views.
Data Dictionary Views
Depending on the features configured and installed, Oracle 10g can
contain more than 1,300 data dictionary views.
These have names that begin with DBA_, ALL_, and USER_.
For example:
The DBA_TABLES view shows information on all the tables in the database.
The ALL_TABLES view shows the tables that a particular user owns plus
those the user has been given access to.
The USER_TABLES view shows only those tables owned by the current user.
Some examples:
DBA_TABLES
Shows the names and physical storage information of all the tables in the
database.
DBA_USERS
Shows information about all the users in the database.
DBA_VIEWS
Shows information about all the views in the database.
DBA_TAB_COLUMNS
Shows the names and datatypes of all the table columns in the database.
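As a quick illustration, the DBA_ views can be queried like any other view; a sketch (the HR schema name is just an example, and the output depends on your database):

```sql
-- List the tables in one schema along with their tablespaces
SELECT owner, table_name, tablespace_name
FROM   dba_tables
WHERE  owner = 'HR'
ORDER  BY table_name;
```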
Dynamic Performance Views
Depending on the features selected, Oracle 10g can contain around 350
dynamic performance views.
Most of these have names that begin with V$.

Some examples :

V$DATABASE
Contains information about the database itself, such as the database name
and when it was created.
V$VERSION
Shows which software version the database is using.
V$OPTION
Shows the optional components that are installed in the database.
V$SQL
Shows information about the SQL statements that database users have been issuing.
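These can be queried directly from SQL*Plus; a sketch (the output will differ on your system):

```sql
-- Database name and creation time
SELECT name, created FROM v$database;

-- Software version banner
SELECT banner FROM v$version;
```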

Differences between data dictionary views and dynamic performance views
Data dictionary views:
These usually have plural names.
They are available only when the database is open and running.
Their data is generally stored in uppercase.
Their data is static and is not cleared when the database is shut down.
Dynamic performance views:
V$ view names are usually singular.
Some views are available even when the database is not fully open and
running.
Their data is usually in lowercase.
They contain dynamic statistical data that is lost each time the database
is shut down.
Whenever we start a database instance, the parameter initialization file
is read first; it contains parameters and their values.
These parameters advise the instance of certain settings when it starts up
the database.
There are two types of parameter initialization files: the parameter file
(pfile) and the server parameter file (spfile).
We can use either one to configure the instance and database options.
Pfile ( parameter file )
This is a text file that can be edited using a text editor.
Its name is init<instancename>.ora.
It can be created from an spfile.
Spfile ( Server parameter file )
This is a binary file that cannot be edited using a text editor.
Changes can be made to the spfile while the instance is open and running
by executing SQL commands from a SQL prompt.
Its name is spfile<instancename>.ora.
It can be created from a pfile.
For example, if ORACLE_SID is prod, then
the pfile name will be initprod.ora
and the spfile name will be spfileprod.ora.
The default location of these files is the $ORACLE_HOME/dbs directory.
We can specify more than 250 configuration parameters in the pfile or
spfile.
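The two formats can be converted into each other from SQL*Plus; a sketch, assuming the files sit in their default $ORACLE_HOME/dbs location (the size shown is illustrative):

```sql
-- Create an spfile from an existing pfile (run as SYSDBA)
CREATE SPFILE FROM PFILE;

-- Or the reverse, to get an editable text copy of the parameters
CREATE PFILE FROM SPFILE;

-- With an spfile in use, parameters can be changed online, e.g.:
ALTER SYSTEM SET sga_target = 600M SCOPE = BOTH;
```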
Oracle 10g divides these parameters into two categories, basic and
advanced.
Oracle recommends setting only about 30 parameters manually; the
remaining parameters can be set as directed by Oracle Support or to meet
the specific needs of an application.


Some Initialization parameters and their description


CLUSTER_DATABASE
Tells the instance whether it is part of a cluster environment.
COMPATIBLE
Specifies the release level and feature set that you want to be active in
the instance.
CONTROL_FILES
Specifies the physical locations of the database control files.
DB_BLOCK_SIZE
Specifies the default database block size.
DB_CREATE_FILE_DEST
Specifies the location where database files are created if the Oracle
Managed Files feature is used.
DB_DOMAIN
Specifies the logical location of the database in the network.
DB_NAME
Specifies the name of the database that is mounted by the instance.
REMOTE_LOGIN_PASSWORDFILE
Tells whether the instance uses a password file, and of which type.
SESSIONS
Specifies the maximum number of sessions that can connect to the
database.
SGA_TARGET
Specifies the maximum size of the SGA; within that space, memory is
automatically allocated to each SGA component when automatic memory
management is used.
SHARED_SERVERS
Specifies the number of shared server processes to start when the instance
is started.
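Putting a few of these parameters together, a minimal pfile might look like the following sketch (all names, paths, and sizes are illustrative assumptions, not from any real system):

```
# initprod.ora -- minimal illustrative pfile (values are assumptions)
db_name = prod
db_block_size = 8192
control_files = ('/oracle/oradata/prod/control01.ctl',
                 '/oracle/oradata/prod/control02.ctl')
sga_target = 600M
remote_login_passwordfile = EXCLUSIVE
sessions = 200
```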


Oracle Architecture
The Oracle server architecture can be described as follows.

● User-related processes
● Oracle instance (memory structures and background processes)
● Database (physical files)
User Related Processes
Two processes are required for a user to interact with the instance and
database: the user process and the server process.
Whenever a user runs an application, Oracle starts a user process to
support the user's connection to the instance. The user process then
initiates a connection to the instance; once the connection is made, the
user establishes a session in the instance. After the session is
established, a server process is started on the host machine for the user.
This server process is responsible for allowing the user to interact with
the database; it communicates with the Oracle instance on behalf of the user.
Server processes generally have a one-to-one relationship with
user processes; these are called dedicated servers.
In some configurations multiple user processes can share a server
process; these are called shared servers.
In addition to the user and server processes associated with each user
connection, an additional memory structure called the Program Global Area
(PGA) is created for each user.
The PGA stores user-specific session information such as bind variables
and session variables.
Generally, PGA memory can be classified into the following areas.
Private SQL Area
This holds bind information and runtime memory structures; every user
that issues a SQL statement has this area.
Many private SQL areas can be associated with the same shared SQL area.
The private SQL area is divided into a persistent area, which is freed
when the cursor is closed, and a runtime area, which is freed when
execution terminates.
Cursors
A cursor is a handle or name given to a private SQL area,
used as a named resource during execution of the program.


The number of private SQL areas a user process can allocate is limited by
the OPEN_CURSORS parameter (the default is 50).
Session Memory
This memory is allocated to hold the session's variables and other
information related to the session.
The PGA also includes a sort area, which is used whenever a user
request requires a sort, bitmap merge, or hash join operation.
As of Oracle 9i, the PGA_AGGREGATE_TARGET parameter, in conjunction
with the WORKAREA_SIZE_POLICY initialization parameter, can ease
system administration by allowing the DBA to choose a total size for all
work areas and let Oracle manage and allocate the memory among all
user processes.
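A sketch of how these two parameters might be set together (the sizes are illustrative):

```sql
-- Let Oracle size individual work areas automatically
ALTER SYSTEM SET workarea_size_policy = AUTO;

-- Target roughly 200 MB for all work areas combined
ALTER SYSTEM SET pga_aggregate_target = 200M;

-- Inspect current PGA usage
SELECT name, value FROM v$pgastat WHERE name = 'total PGA allocated';
```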

The PGA is made up of stack space, session information, and the sort,
hash, and merge work areas.


The Oracle Instance


The Oracle instance is made up of Oracle's main memory structure, called
the System Global Area (SGA), which is shared memory, and many Oracle
background processes.
When a user accesses data in the database, the server process
communicates with the SGA. The SGA contains data and control information
for one Oracle instance. Oracle allocates the SGA when an instance starts
and deallocates it when the instance shuts down. The SGA should be large
enough to increase system performance and reduce disk I/O; it has a
fixed part and a variable part.
The SGA is made up of three required memory structures and three
optional components.
The required SGA components
1. Shared Pool
The shared pool can be subdivided into a number of smaller segments:
control structures, character sets, the dictionary cache, and the library cache.
Library cache : This holds information about SQL and PL/SQL
statements that are run against the database. Because it is shared by all
users, different database users can share the same SQL statement.
Along with the SQL statement, the execution plan and parse tree of the
SQL statement are stored in the library cache. The second time an
identical SQL statement is run, by the same user or a different user, the
execution plan and parse tree are already computed, improving the
execution time of the query or DML statement.
If the library cache is sized too small, execution plans and parse trees
are flushed out of the cache, requiring frequent reloads of SQL
statements into the library cache.
Private SQL Area
Each session that issues a SQL statement has a private SQL area
associated with that statement. The first step for Oracle when it executes
a SQL statement is to establish a runtime area (within the private SQL
area) for the statement.
Data Dictionary Cache : The data dictionary is a collection of database
tables owned by the SYS and SYSTEM schemas that contain the metadata
about the database, its structures, and the privileges and roles of
database users. The data dictionary cache holds a subset of the columns
from the data dictionary tables after they are first read into the buffer cache.


If the data dictionary cache is too small, requests for information from
the data dictionary cause extra I/O to occur. These I/O-bound data
dictionary requests are called recursive calls and should be avoided by
sizing the data dictionary cache correctly.
The shared pool is sized by the SHARED_POOL_SIZE parameter,
which is a dynamic parameter.
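Because the parameter is dynamic, the shared pool can be resized without restarting the instance; a sketch (the size is illustrative, and growing the pool requires free space within the SGA limits):

```sql
-- Check the current setting
SHOW PARAMETER shared_pool_size

-- Grow the shared pool online
ALTER SYSTEM SET shared_pool_size = 300M;
```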
2. Database Buffer Cache
This holds blocks of data read from disk to satisfy a SELECT statement,
or blocks that have been changed or added by a DML statement; the buffer
cache contains both modified and unmodified blocks. As of Oracle 9i the
database buffer cache is dynamic. Tablespaces with block sizes other than
the default block size require their own buffer caches. As processing and
transactional needs change during the day or during the week, the values
of DB_CACHE_SIZE and DB_nK_CACHE_SIZE can be changed dynamically without
restarting the instance (one buffer cache for the default block size and
up to four others).
Oracle can also use two additional caches with the same block size as the
default block size: the KEEP buffer pool and the RECYCLE buffer pool.
As of Oracle 9i, both of these pools allocate memory independently of the
other caches in the SGA.
When a table is created, you can specify the pool where the table's data
blocks will reside by using the BUFFER_POOL KEEP clause or the
BUFFER_POOL RECYCLE clause in the STORAGE clause. For tables that
are used frequently throughout the day, it is advantageous to place the
table in the KEEP buffer pool to minimize the I/O needed to fetch
its blocks.
Oracle uses an LRU algorithm to manage the contents of the shared pool
and the database buffer cache.
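The KEEP pool assignment described above can be sketched as follows (the pool size and the table definition are illustrative assumptions):

```sql
-- Reserve memory for the KEEP pool (dynamic parameter)
ALTER SYSTEM SET db_keep_cache_size = 64M;

-- Keep a frequently read lookup table's blocks in the KEEP pool
CREATE TABLE country_codes (
  code  VARCHAR2(3) PRIMARY KEY,
  name  VARCHAR2(60)
) STORAGE (BUFFER_POOL KEEP);

-- An existing table can be moved with:
-- ALTER TABLE country_codes STORAGE (BUFFER_POOL KEEP);
```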

3. Redo Log Buffer
This holds transaction information for recovery purposes; it is a log of
changes made to the database. The contents of this buffer are written to
an online redo log file. The entries in the redo log buffer, once written
to the redo log files, are critical to database recovery if the instance
crashes before the changed data blocks are written from the buffer cache
to the datafiles. A user's committed transaction is not considered
complete until the redo log entries have been successfully written to the
redo log files.

The optional SGA components

1. Java Pool
Caches the most recently used Java objects and application code when
Oracle's JVM option is used.
2. Large Pool
This is used for transactions that interact with more than one database,
message buffers for processes performing parallel queries, and cached
data for large operations such as Recovery Manager backup and restore
activities and shared server components.
The initialization parameter LARGE_POOL_SIZE controls the size of
the large pool and is a dynamic parameter.
3. Streams Pool
New in Oracle 10g, this holds data and control structures to support
Oracle's Streams feature, which manages the sharing of data and events in
a distributed environment. The initialization parameter
STREAMS_POOL_SIZE controls its size.
If this parameter is not set or is set to zero, the memory used for
Streams operations is allocated from the shared pool and may use up to 10
percent of the shared pool (in ASMM only).
Software Code Areas
Software code areas store the Oracle executable files that run as
part of an Oracle instance. These areas are static in nature and change
only when a new release of the software is installed.
The sizes of the SGA components can be managed in two ways:
manually or automatically.
If we choose to manage them manually, we must specify the size of each
component and increase or decrease it according to requirements.
If they are managed automatically, the instance itself monitors and
adjusts the sizes relative to a predefined maximum allowable SGA size.
Whether the instance operates in manual or automatic mode is determined
by settings in a configuration file called the parameter initialization
file; these parameters advise the instance of certain settings when it
starts up the database.


To see the SGA components and their sizes:

SQL> select * from v$sga;
SQL> select component, current_size from v$sga_dynamic_components;

System Global Area (SGA)

[Diagram: the SGA comprises the database buffer cache (default pool, KEEP
pool, RECYCLE pool, and up to four nK-block-size caches), the shared pool
(library cache with shared and private SQL areas, data dictionary cache,
result cache, reserved pool, PL/SQL procedures and packages, and control
structures), the large pool (request and response queues), the Java pool,
the Streams pool, the redo log buffer, and the fixed SGA, alongside the
software code area.]


Oracle Background Processes


There are many Oracle background processes. A background process is a
block of executable code designed to perform a specific job in managing
the instance. Five Oracle background processes are required and many
others are optional.
Required Background processes
System Monitor (SMON)
This performs crash recovery following an instance crash or a system
crash due to a power outage or CPU failure, by applying the entries in
the online redo log files to the datafiles. Temporary segments in all
tablespaces are purged during system restart. SMON also coalesces free
space in the database and manages space used for sorting.
Process Monitor (PMON)
Cleans up failed user database connections, freeing the database
buffer cache along with any other resources that the user connection was
using.
For example, a user session may be updating some rows in a table, placing
a lock on one or more rows, when a power failure at the user's desk
knocks out the SQL*Plus session.
Within moments PMON detects that the connection no longer exists
and performs the following tasks:
Rolls back the transaction that was in progress when the power went out.
Marks the transaction's blocks as available in the buffer cache.
Removes the locks on the affected rows in the table.
Removes the process ID of the disconnected process from the list of
active processes.
PMON also registers the database with the listener and interacts
with the listener by providing information about the status of the
instance for incoming connection requests.
Database Writer (DBWn)
Is responsible for managing the contents of the database buffer
cache and the dictionary cache. It writes new or changed data blocks
(known as dirty blocks) from the database buffer cache to the datafiles
on disk. Using an LRU algorithm, DBWn writes the oldest, least active
blocks first; as a result, the most commonly requested blocks remain in
memory even if they are dirty blocks.
Checkpoint process(CKPT)
Checkpoint : at specific times, all modified database buffers in the
System Global Area are written to the datafiles by the DBWn process;
this event is called a checkpoint.
Checkpoints help reduce the amount of time required for instance recovery.
During a checkpoint, CKPT updates the headers of the control file and the
datafiles to reflect the last successful SCN. A checkpoint occurs
automatically every time a redo log switch occurs. The DBWn processes
routinely write dirty buffers to advance the checkpoint, the point from
which instance recovery can begin, thus reducing the Mean Time To
Recovery (MTTR). Incremental checkpoints are considered when DBWn times
out, which happens every 3 seconds; they may or may not be performed
every three seconds.

FAST_START_MTTR_TARGET does not fire checkpoints; it uses incremental
checkpoints to incrementally keep the buffer cache "cleaner". The fewer
dirty blocks in the cache, the less time recovery takes.

The control file records information about the last checkpoint and the
archived log sequence, along with other information.
-- shows the CHECKPOINT SCN recorded in the datafiles
SQL> select max(CHECKPOINT_CHANGE#) from v$datafile;

MAX(CHECKPOINT_CHANGE#)
-----------------------
789051
-- gets the current system SCN
SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
789124
You can also query v$transaction to arrive at the SCN for that transaction
Events that trigger a checkpoint
The following events trigger a checkpoint:
A redo log switch
LOG_CHECKPOINT_TIMEOUT has expired
LOG_CHECKPOINT_INTERVAL has been reached
The DBA issues ALTER SYSTEM CHECKPOINT
Additionally, if a tablespace is put into hot backup mode, a checkpoint
for the tablespace in question takes place.
While redo log switches cause a checkpoint, checkpoints do not cause a
log switch.
SCN and checkpoint

The System Change Number (SCN) is represented with SCN_WRAP and
SCN_BASE. Whenever SCN_BASE reaches 4294967290 (roughly 2^32),
SCN_WRAP goes up by one and SCN_BASE is reset to 0. This
allows a maximum SCN of about 1.8E+19.

SCN = (SCN_WRAP * 4294967290) + SCN_BASE

The checkpoint number is the SCN at which all the dirty buffers were
written to disk; there can be a checkpoint at the
object, tablespace, datafile, or database level.

The checkpoint number is never updated for the datafiles of read-only
tablespaces.

SCN numbers are recorded at frequent intervals by SMON in the
SMON_SCN_TIME table.

Set the parameter LOG_CHECKPOINTS_TO_ALERT=TRUE to observe
checkpoint start and end times in the database alert log.
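Enabling that setting is a one-line dynamic change; a sketch:

```sql
-- Record checkpoint start/end times in the alert log (dynamic parameter)
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE;

-- Then force a checkpoint and check the alert log for the messages
ALTER SYSTEM CHECKPOINT;
```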

Log Writer (LGWR)


Writes transaction recovery information from the redo log buffer to the
online redo log files on disk. LGWR is one of the most active
processes in an instance with heavy DML activity. A transaction is not
considered complete until LGWR successfully writes the redo
information, including the commit record, to the redo log files. In
addition, dirty buffers in the buffer cache cannot be written to the
datafiles by DBWn until LGWR has written the corresponding redo information.
Optional oracle background processes
Archiver (ARCn)


Copies the transaction recovery information written by LGWR to the
online redo log files to a secondary location, in case it is needed for
recovery. Nearly all production databases use this optional process.
Recoverer (RECO)
Recovers failed transactions that are distributed across multiple
databases.
Job Queue Monitor (CJQn)
Assigns jobs to the job queue when Oracle's job queue scheduling is used.
Job Queue (Jnnn)
Executes database jobs that have been scheduled using Oracle's job scheduling feature.
Dispatcher (Dnnn)
Assigns users' database requests to a queue where they are then serviced by shared server processes when Oracle's shared server feature is used.
Shared Server (Snnn)
Server processes that are shared among several users when Oracle's shared server feature is used.
Memory Manager (MMAN)
Manages the size of each individual SGA component when Oracle's automatic shared memory management feature is used.
Memory Monitor (MMON)
Gathers and analyzes statistics used by the Automatic Workload Repository.
Recovery Writer (RVWR)
Writes recovery information to disk when Oracle's Flashback Database feature is used.
On Unix systems we can view these background processes from the command line with the following command, where sales is the instance name:

orasrv1 $ ps -ef | grep sales

Oracle Database
An instance is a temporary memory structure, but the Oracle database is made up of a set of physical files that reside on the host computer's hard disk. These are the control files, data files and redo log files.
Apart from these, additional files that are associated with an Oracle database but are not technically part of the database are the password file, pfile, spfile and archived redo log files.
The three types of files that make up a database are
The three types of files that make up a database are
1.Control files
These contain locations of other physical files, database name,
database block size, database character set, and recovery information.
These are required to start the database instance.
The control files are created when the database is created, in the locations specified by the CONTROL_FILES initialization parameter in the parameter file.
Most production databases multiplex control files to multiple locations to
minimize the potential damage due to disk failure.
Oracle uses the CKPT background process to update these files
automatically.

To see all the names and locations of database's control files

SQL > select name from V$CONTROLFILE;


NAME
----------------------------------------
/disk1/sales/control1.ora
/disk1/sales/control2.ora

2. Datafiles
These are the physical files which actually store the data that has been
inserted into each table in the database.
Datafiles are the physical structures behind another database storage area
called tablespace. A tablespace is a logical storage area in the database.
The information for a single table can span many datafiles or many tables
can share a set of datafiles. A tablespace can have more than one datafile.

The maximum number of datafiles is limited by the DB_FILES initialization parameter.
Every Oracle 10g database must have at least three tablespaces:
SYSTEM
Stores the data dictionary tables and PL/SQL code.
SYSAUX (new in 10g)
Stores segments used for database options such as the Automatic Workload Repository, OLAP, etc.
TEMP
Used for performing large sort operations. This is required when the SYSTEM tablespace is created as a locally managed tablespace; otherwise it is optional.

In addition to the required tablespaces, most databases have tablespaces for storing other database segments, such as:
USERS
Used as the default tablespace for database users.
UNDOTBS
Used to store transaction information for read consistency and recovery purposes.
We can use the data dictionary view DBA_TABLESPACES to see a list of all the tablespaces in the database.

SQL > select tablespace_name from dba_tablespaces;


TABLESPACE_NAME
------------------------------
SYSTEM
UNDOTBS
SYSAUX
TEMP
USERS

TS1
TS2
For each tablespace there must be at least one datafile.

SQL > select tablespace_name, file_name from dba_data_files;


TABLESPACE_NAME FILE_NAME
-------------------- ----------------------------------------
SYSTEM /disk1/sales/sys.dbf
TS3 /disk1/sales/ts3.dbf
UNDOTBS /disk1/sales/undo1.dbf
SYSAUX /disk1/sales/sysaux.dbf
USERS /disk1/sales/user1.dbf
TS1 /disk1/sales/ts1.dbf
TS2 /disk1/sales/ts2.dbf

8 rows selected.
Temporary tablespaces are listed in dba_temp_files.
Datafiles are usually the largest files in the database, ranging from megabytes to terabytes in size. The maximum number of database files can be set with the init parameter DB_FILES. The maximum number of datafiles in a smallfile tablespace is 1022, while a bigfile tablespace can contain only one datafile. A datafile that contains a block whose SCN is more recent than the SCN in its header is called a fuzzy datafile.
To see when a file's last checkpoint took place:

SQL > select name, checkpoint_change#, to_char(checkpoint_time,
'DD.MM.YYYY HH24:MI:SS') from v$datafile_header;

A smallfile datafile is still limited to 4,194,304 Oracle blocks; with a block size of 8 KB, that limits the datafile to 32 GB.
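The 32 GB figure follows directly from the block limit quoted above; a quick sketch of the arithmetic:

```python
# Maximum smallfile datafile size = maximum blocks per file * block size.
MAX_BLOCKS_PER_SMALLFILE = 4_194_304  # 2^22 Oracle blocks

def max_datafile_bytes(db_block_size):
    """Upper bound on a smallfile datafile for a given block size."""
    return MAX_BLOCKS_PER_SMALLFILE * db_block_size

# With 8 KB blocks the limit works out to 32 GB per datafile.
limit_gb = max_datafile_bytes(8192) // 2**30
```

The same formula shows why a larger block size (e.g. 16 KB) raises the per-file ceiling proportionally.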
Whenever a user performs a SQL operation on a table, the user's server process copies the affected data from the datafiles into the database buffer cache. If the user performs a transaction that modifies the data, the Database Writer process eventually writes the modified data back to the datafiles.

When does Database Writer write?

DBWn writes to the datafiles whenever one of the following events occurs:
- A user's server process has searched too long for a free buffer when reading into the buffer cache.
- The number of modified but unwritten buffers in the database buffer cache is too large.
- At a database checkpoint event.
- The instance is shut down using any method other than abort.
- A tablespace is placed into backup mode.
- A tablespace is taken offline to make it unavailable or changed to read only.
- A segment is dropped.
3. Redo Log Files
Whenever a user performs a transaction in the database, the information needed to reproduce the transaction in the event of a database failure is automatically recorded in the redo log buffer. The contents of this buffer are eventually written to the redo log files by the LGWR background process.

Because of the important role they play in Oracle's recovery process, redo log files are usually multiplexed, or copied. These sets of redo logs are referred to as redo log groups, and each multiplexed file within a group is called a redo log group member. Each database must have a minimum of two redo log groups.

SQL > select group#, member from V$LOGFILE;

GROUP# MEMBER
---------- ----------------------------------------
1 /disk1/sales/log1a.ora

2 /disk1/sales/log2a.ora

Whenever a user performs a DML operation on the database, the recovery information for this transaction is written to the redo log buffer by the user's server process. LGWR then writes this to the current redo log group until that group is filled. When the current log group is full, LGWR switches to the next redo log group. When the last redo log group is used up, LGWR starts using the first redo log group again.
To see the currently active redo log group:

SQL > select group#, members, status from V$LOG;


GROUP# MEMBERS STATUS
---------- ---------- ----------------
1 1 CURRENT
2 1 INACTIVE
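The circular reuse of log groups described above amounts to a simple round robin; a sketch (group numbering here is illustrative):

```python
# Sketch of LGWR's round-robin switch through redo log groups 1..n:
# after the last group fills, it wraps back to group 1.
def next_log_group(current_group, n_groups):
    """Return the group LGWR switches to when current_group fills."""
    return current_group % n_groups + 1

# With two groups, four consecutive log switches starting from group 1:
sequence = []
group = 1
for _ in range(4):
    group = next_log_group(group, 2)
    sequence.append(group)   # alternates 2, 1, 2, 1
```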
When does Log Writer write?
LGWR writes whenever any one of the following events occurs:
- Every three seconds
- A user commits a transaction
- The redo log buffer is one third full
- The redo log buffer contains 1 MB of redo information
- Before the DBWn process writes, whenever a database checkpoint occurs

Whenever LGWR switches from the last redo log group to the first, any recovery information already in the first redo log group is overwritten and is therefore no longer available for recovery purposes. If the database is running in archivelog mode, however, the contents of these used logs are copied to a secondary location before the log is reused by LGWR.
If this archiving feature is enabled, a background process called the Archiver (ARCn) copies the contents of the redo log file to the archive location.

All production databases run in archive log mode because they need to be
able to recover all transactions since the last backup in
case of hardware failure.

The life of an SQL statement

What happens when Oracle processes a SQL statement? A SQL statement must always be parsed. Then, it can be executed any number of times. If it is a select statement, the result data needs to be fetched for each execution.
Parse

One of the goals of the parse phase is to generate a query execution plan (QEP). Does the statement correspond to an open cursor, i.e., does the statement already exist in the library cache? If yes, the statement need not be parsed again and can be executed directly. If the cursor is not open, it might still be cached in the cursor cache; if so, the statement likewise need not be parsed again and can be executed directly. If not, the statement has to be verified syntactically and semantically:

Syntax

This step checks if the syntax of the statement is correct. For example, a statement like select foo frm bar is syntactically not correct (frm instead of from).

Semantic

A statement might be invalid even if the syntax is correct. For example


select foo from bar is invalid if there is no table or view named bar, or if
there is such a table, but without a column named foo. Also, the table
might exist, but the user trying to execute the query does not have the
necessary object privileges. If the statement is syntactically and
semantically correct, it is placed into the library cache (which is part of
the Shared Pool).

Opening the cursor

A cursor for the statement is opened. The statement is hashed and


compared with the hashed values in the sql area. If it is in the sql area, it
is a soft parse, otherwise, it's a hard parse. Only in the case of a hard
parse, the statement undergoes the following steps (view merging,
statement transformation, optimization):

View merging

If the query contains views, the query might be rewritten to join the
view's base tables instead of the views.

Statement Transformation

Transforms complex statements into simpler ones through subquery unnesting or IN/OR transformations.

Optimization

The cost-based optimizer (CBO) uses gathered statistics to minimize the cost of executing the query. The result of the optimization is the query execution plan (QEP). If bind variable peeking is used, the resulting execution plan might be dependent on the first bound bind value.

Execute

Memory for bind variables is allocated and filled with the actual bind-
values. The execution plan is executed. Oracle checks if the data it needs
for the query are already in the buffer cache. If not, it reads the data off
the disk into the buffer cache. The record(s) that are changed are locked.
No other session must be able to change the record while they're updated.
Also, before and after images describing the changes are written to the
redo log buffer and the rollback segments. The original block receives a
pointer to the rollback segment. Then, the data is changed.

Fetch

Data is fetched from database blocks. Rows that don't match the predicate
are removed. If needed (for example in an order by statement), the data is
sorted. The data is then returned to the application.
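The soft-parse versus hard-parse decision during the parse phase can be sketched with a toy hash-keyed cache (an illustration of the idea, not Oracle's actual library cache):

```python
# Toy library-cache lookup: the statement text is hashed; a hit means a
# soft parse (reuse the cached plan), a miss means a hard parse (syntax
# check, semantic check, optimization) before the plan is cached.
import hashlib

library_cache = {}

def parse(sql_text):
    key = hashlib.sha256(sql_text.encode()).hexdigest()
    if key in library_cache:
        # statement found in the cache: soft parse
        return "soft parse", library_cache[key]
    # stand-in for syntax check, semantic check and optimization
    plan = "QEP for: " + sql_text
    library_cache[key] = plan
    return "hard parse", plan
```

Running the same statement text twice yields a hard parse the first time and a soft parse the second, which is why reusable (bind-variable) SQL keeps parse costs low.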

Creating and maintaining the password file

We create a password file using the orapwd utility.

Arguments for the orapwd utility:

FILE :
Sets the name of the password file being created. We must specify the full path for the file. The contents of this file are encrypted and cannot be read directly.
PASSWORD :
Sets the password for the SYS user. If we use ALTER USER to change the SYS user's password, both the password stored in the data dictionary and the password stored in the password file are updated.
ENTRIES :
Specifies the number of entries allowed in the password file.
FORCE :
If set to Y, allows an existing password file to be overwritten.
To enable password file authentication we must set the initialization parameter REMOTE_LOGIN_PASSWORDFILE in the parameter file. This is a static initialization parameter and cannot be changed without restarting the database.

The values can be:

NONE : Oracle ignores the password file; password file authentication is not done for privileged connections.
EXCLUSIVE : The password file can be used by only one instance of one database.
SHARED : The password file can be used by multiple databases running on the same machine. When this is set, the password file cannot be modified. To add users to the file, first set REMOTE_LOGIN_PASSWORDFILE to EXCLUSIVE, add the users, and then change it back to SHARED.
Creating a password file:
$ orapwd file=$ORACLE_HOME/dbs/orapwsales password=oracle

Adding users to the password file

If we want to grant the SYSDBA or SYSOPER privilege to a database user, the user name and privilege information can be added to the password file.
SQL > grant sysdba to ramesh;
The above statement adds the database user ramesh to the password file, and ramesh can then access the database with the SYSDBA privilege.
SQL > revoke sysdba from ramesh;
To see the password file members:
SQL > select * from V$pwfile_users;

Parameter initialization files


When we set a parameter using ALTER SYSTEM SET with only a pfile, the value remains in effect only until the server is restarted. Oracle 9i introduced the concept of persistent initialization parameters. We have a pfile and an spfile; a pfile is required in order to create an spfile and so enable persistent initialization parameters.
SQL > create spfile from pfile;
similarly we can create pfile from spfile
SQL > create pfile from spfile;
A SPFILE, Server Parameter File, is a server managed binary file that
Oracle uses to hold persistent initialization parameters. If a parameter is
changed using the ALTER SYSTEM SET command Oracle will apply
this parameter change to the current SPFILE. Since the database uses this
file during startup all parameter changes persist between shutdowns.

Using ALTER SYSTEM SET

ALTER SYSTEM SET parameter = value SCOPE = <scope>;

where SCOPE can be

BOTH - The parameter takes effect in the current instance and is stored in the SPFILE.
SPFILE - The parameter is altered in the SPFILE only. It does not affect the current instance.
MEMORY - The parameter takes effect in the current instance, but is not stored in the SPFILE.
A parameter value can be reset to its default using:

SQL > ALTER SYSTEM RESET OPEN_CURSORS SCOPE=SPFILE SID='*';

Some parameters can be dynamically modified to affect the present instance, while others require the instance to be brought down so that changes can take effect. This remains the same whether using a PFILE or an SPFILE.
The ISSYS_MODIFIABLE column in v$parameter tells us whether a parameter is static or dynamic. Static parameters require the instance to be restarted to take effect, while dynamic parameters can take effect immediately on change.
ISSYS_MODIFIABLE values can be:
FALSE – The database needs to be restarted for the change to take effect.
IMMEDIATE – The parameter is dynamic; the change applies to the current instance as well as after future database restarts.
DEFERRED – Also dynamic, but currently active sessions are not affected and retain the old values; the change applies to new sessions.

SQL > select distinct issys_modifiable from v$parameter;


ISSYS_MOD
---------
IMMEDIATE
FALSE
DEFERRED
By default, the STARTUP command looks for an spfile when the PFILE option is not specified explicitly.
The contents of the spfile can be viewed through v$spparameter; the parameters with the ISSPECIFIED column set to TRUE are the ones present in the spfile.
Reverting to a pfile can be done by simply creating a pfile from the spfile and removing the spfile from the default directory.
To see a current parameter value:

SQL > select value, issys_modifiable from v$parameter
where name = 'remote_login_passwordfile';
VALUE ISSYS_MOD
-------------------------------------------------- ---------
EXCLUSIVE FALSE

SQL > show parameter remote_login_passwordfile;


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile string EXCLUSIVE

Logical Structures
Tablespaces and Data Files
Tablespaces are the primary logical storage structures of any Oracle database. The usable data of an Oracle database is logically stored in the tablespaces and physically stored in the data files associated with the corresponding tablespace.
databases and tablespaces
An ORACLE database is comprised of one or more logical
storage units called tablespaces. The database's data is
collectively stored in the database's tablespaces.
tablespaces and data files
Each tablespace in an Oracle database is comprised of one or more operating system files called data files. A tablespace's data files physically store the associated database data on disk.
schema objects, segments, and tablespaces
When a schema object such as a table or index is created,
its segment is created within a designated tablespace in

the database. For example, suppose a table is created in a


specific tablespace using the CREATE TABLE command
with the TABLESPACE option. The space for this table's
data segment is allocated in one or more of the data files
that constitute the specified tablespace. An object's
segment allocates space in only one tablespace of a
database.

A database is divided into one or more logical storage units called tablespaces. A database administrator can use tablespaces to do the following:
- Control disk space allocation for database data.
- Assign specific space quotas for database users.
- Control availability of data by taking individual tablespaces online or offline.
- Perform partial database backup or recovery operations.
- Allocate data storage across devices to improve performance.

Every ORACLE database contains a tablespace named


SYSTEM, which is automatically created when the
database is created. The SYSTEM tablespace always
contains the data dictionary tables for the entire database.
Data files associated with a tablespace store all the
database data in that tablespace. One or more datafiles
form a logical unit of database storage called a
tablespace. A data file can be associated with only one
tablespace, and only one database.
After a data file is initially created, the allocated disk
space does not contain any data; however, the space is
reserved to hold only the data for future segments of the
associated tablespace - it cannot store any other
program's data. As a segment (such as the data segment
for a table) is created and grows in a tablespace, ORACLE
uses the free space in the associated data files to allocate
extents for the segment.
The data in the segments of objects (data segments, index
segments, rollback segments, and so on) in a tablespace
are physically stored in one or more of the data files that
constitute the tablespace. Note that a schema object does
not correspond to a specific data file; rather, a data file is
a repository for the data of any object within a specific

tablespace. The extents of a single segment can be allocated in one or more data files of a tablespace; therefore, an object can "span" one or more data files. The database administrator and end users cannot control which data file stores an object.
Data Blocks, Extents, and Segments
ORACLE allocates database space for all data in a
database. The units of logical database allocations are
data blocks, extents, and segments.
Data Blocks
At the finest level of granularity, Oracle stores data in data
blocks
One data block corresponds to a specific number of bytes
of physical database space on disk.
Extents
The next level of logical database space is called an
extent. An extent is a specific number of contiguous data
blocks that are allocated for storing a specific type of
information
Segments
The level of logical database storage above an extent is
called a segment. A segment is a set of extents which
have been allocated for a specific type of data structure,
and all are stored in the same tablespace. For example,
each table's data is stored in its own data segment, while
each index's data is stored in its own index segment.
ORACLE allocates space for segments in extents.
Therefore, when the existing extents of a segment are full,
ORACLE allocates another extent for that segment.
Because extents are allocated as needed, the extents of a
segment may or may not be contiguous on disk, and may
or may not span files. An extent cannot span files, though.
Overview of Data Blocks
Oracle manages the storage space in the datafiles of a
database in units called data blocks. A data block is the
smallest unit of data used by a database. In contrast, at
the physical, operating system level, all data is stored in
bytes. Each operating system has a block size. Oracle
requests data in multiples of Oracle data blocks, not
operating system blocks.
The standard block size is specified by the DB_BLOCK_SIZE initialization parameter. In addition, you can specify up to five nonstandard block sizes. The data block sizes should be a multiple of the operating system's block size, within the maximum limit, to avoid unnecessary I/O. Oracle data blocks are the smallest units of storage that Oracle can use or allocate.

Data Block Format


The Oracle data block format is similar regardless of
whether the data block contains table, index, or clustered
data

Header (Common and Variable)


The header contains general block information, such as the block address and the segment type (data, index, or rollback). While some block overhead is fixed in size (about 107 bytes), the total block overhead size is variable.
Table Directory
The table directory portion of the block contains
information about the table having rows in this block
Row Directory

This portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).
After the space has been allocated in the row directory of
a data block's overhead, this space is not reclaimed when
the row is deleted. Therefore, a block that is currently
empty but had up to 50 rows at one time continues to
have 100 bytes allocated in the header for the row
directory. Oracle reuses this space only when new rows
are inserted in the block.
The data block header, table directory, and row directory
are referred to collectively as overhead. Some block
overhead is fixed in size; the total block overhead size is
variable. On average, the fixed and variable portions of
data block overhead total 84 to 107 bytes.
block header = fixed header + transaction header + table
directory + row directory.
Specifically, its size is at least 23*initrans bytes. It can
grow up to 23*maxtrans bytes.
The size of the table directory is 4 bytes * number of tables. The number of tables matters for clustered tables; for other tables, it is 1.
The row directory uses 2 bytes per stored row.
For non-clustered tables, the row header is 3 bytes. Each stored row has one row header: one byte is used to store flags, one byte to indicate if the row is locked (for example because it has been updated but not committed), and one byte for the column count.
Each column within a row needs at least 1 byte indicating the size of the data in the column. For varchar2 values longer than 250 bytes, 3 bytes are used.
Row Data
This portion of the data block contains table or index data.
Rows can span blocks
Free Space
Free space is allocated for insertion of new rows and for updates to rows that require additional space (for example, when a trailing null is updated to a non-null value).
Space Used for Transaction Entries
Data blocks allocated for the data segment of a table,
cluster, or the index segment of an index can also use
free space for transaction entries

A transaction entry is required in a block for each INSERT,


UPDATE, DELETE, and SELECT...FOR UPDATE statement
accessing one or more rows in the block. The space
required for transaction entries is operating system
dependent; however, transaction entries in most operating
systems require approximately 23 bytes.

Free Space Management


Free space can be managed automatically or manually.
Free space can be managed automatically inside database segments; the in-segment free/used space is tracked using bitmaps, as opposed to free lists.
Use the SEGMENT SPACE MANAGEMENT clause to specify how free and used space within a segment is to be managed. Once it is established, we cannot modify segment space management for a tablespace.
From Oracle 9i, one can have not only bitmap-managed tablespaces but also bitmap-managed segments, by setting SEGMENT SPACE MANAGEMENT to AUTO for a tablespace.
Manual – This setting uses free lists to manage free space within segments. Free blocks of a segment are recorded in the segment's free list.
We must specify and tune the PCTUSED, FREELISTS and FREELIST GROUPS storage parameters. The default is manual.


Auto – This uses bitmaps to manage free space within segments. A bitmap describes the status of each data block within a segment with regard to the block's ability to have additional rows inserted. Bitmaps allow Oracle to manage free space automatically. Automatic management delivers better space utilization than manual and is self-tuning: it is not necessary to specify and tune the PCTUSED, FREELISTS and FREELIST GROUPS storage parameters for an object created in the tablespace. If specified, these are ignored.
PCTFREE, PCTUSED
If an insert statement is executed, Oracle tries to insert the row into a free block. If it doesn't find a free block, it tries to insert it into an unused block. A block that has never been written to is an unused block.
For manually managed tablespaces, two space
management parameters, PCTFREE and PCTUSED, enable
you to control the use of free space for inserts and
updates to the rows in all the data blocks of a particular
segment. Specify these parameters when you create or
alter a table or cluster (which has its own data segment).
You can also specify the storage parameter PCTFREE when
creating or altering an index
The PCTFREE parameter is used to set the percentage of a
block to be reserved (kept free) for possible updates to
rows that already are contained in that block. For
example, assume that you specify the following parameter
within a CREATE TABLE statement:
pctfree 20
This states that 20% of each data block used for this table's data segment will be kept free and available for possible updates to the existing rows already within each block.
After a data block becomes full, as determined by
PCTFREE, the block is not considered for the insertion of
new rows until the percentage of the block being used

falls below the parameter PCTUSED. Before this value is


achieved, the free space of the data block can only be
used for updates to rows already contained in the data
block. For example, assume that you specify the following
parameter within a CREATE TABLE statement:
pctused 40
In this case, a data block used for this table's data segment is not considered for the insertion of any new rows until the amount of used space in the block falls to 39% or less (assuming that the block's used space has previously reached PCTFREE).
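The PCTFREE/PCTUSED interplay described above amounts to a small state rule, sketched here (percentages are of usable block space; this is a simplification of the real free-list logic):

```python
# Sketch of the PCTFREE/PCTUSED insert rule: a block leaves the free list
# once used space reaches 100 - PCTFREE percent, and rejoins it only when
# usage drops below PCTUSED.
def accepts_inserts(used_pct, pctfree, pctused, reached_pctfree):
    if not reached_pctfree:
        # still on the free list: inserts allowed up to 100 - PCTFREE
        return used_pct < 100 - pctfree
    # off the free list: only rejoin once usage drops below PCTUSED
    return used_pct < pctused

# pctfree 20 / pctused 40, as in the examples above:
assert accepts_inserts(75, 20, 40, reached_pctfree=False)       # below 80% full
assert not accepts_inserts(85, 20, 40, reached_pctfree=False)   # PCTFREE reached
assert not accepts_inserts(45, 20, 40, reached_pctfree=True)    # still above PCTUSED
assert accepts_inserts(39, 20, 40, reached_pctfree=True)        # back on free list
```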
No matter what type, each segment in a database is
created with at least one extent to hold its data. This
extent is called the segment's initial extent.
PCTFREE and PCTUSED work together to optimize the use
of space in the data blocks of the extents within a data
segment.
In a newly allocated data block, the space available for
inserts is the block size minus the sum of the block
overhead and free space (PCTFREE). Updates to existing
data can use any available space in the block. Therefore,
updates can reduce the available space of a block to less
than PCTFREE, the space reserved for updates but not
accessible to inserts.
For each data and index segment, Oracle maintains one or
more free lists—lists of data blocks that have been
allocated for that segment's extents and have free space
greater than PCTFREE. These blocks are available for
inserts. When you issue an INSERT statement, Oracle
checks a free list of the table for the first available data
block and uses it if possible. If the free space in that block
is not large enough to accommodate the INSERT
statement, and the block is at least PCTUSED, then Oracle
takes the block off the free list. Multiple free lists for each
segment can reduce contention for free lists when
concurrent inserts take place
After you issue a DELETE or UPDATE statement, Oracle
processes the statement and checks to see if the space
being used in the block is now less than PCTUSED. If it is,
then the block goes to the beginning of the transaction

free list, and it is the first of the available blocks to be


used in that transaction. When the transaction commits,
free space in the block becomes available for other
transactions.
Overview of Extents
An extent is a logical unit of database storage space
allocation made up of a number of contiguous data blocks.
One or more extents in turn make up a segment. When
the existing space in a segment is completely used, Oracle
allocates a new extent for the segment.
When you create a table, Oracle allocates to the table's
data segment an initial extent of a specified number of
data blocks. Although no rows have been inserted yet, the
Oracle data blocks that correspond to the initial extent are
reserved for that table's rows.
If the data blocks of a segment's initial extent become full
and more space is required to hold new data, Oracle
automatically allocates an incremental extent for that
segment. An incremental extent is a subsequent extent of
the same or greater size than the previously allocated
extent in that segment.
Storage parameters expressed in terms of extents define
every segment. Storage parameters apply to all types of
segments. They control how Oracle allocates free
database space for a given segment. For example, you
can determine how much space is initially reserved for a
table's data segment or you can limit the number of
extents the table can allocate by specifying the storage
parameters of a table in the STORAGE clause of the
CREATE TABLE statement. If you do not specify a table's
storage parameters, then it uses the default storage
parameters of the tablespace.
A tablespace that manages its extents locally can have
either uniform extent sizes or variable extent sizes that
are determined automatically by the system. When you
create the tablespace, the UNIFORM or AUTOALLOCATE
(system-managed) clause specifies the type of allocation
The storage parameters INITIAL, NEXT, PCTINCREASE, and
MINEXTENTS cannot be specified at the tablespace level
for locally managed tablespaces. They can, however, be

specified at the segment level. In this case, INITIAL, NEXT,


PCTINCREASE, and MINEXTENTS are used together to
compute the initial size of the segment. After the segment
size is computed, internal algorithms determine the size of
each extent.
When Extents Are Deallocated
The Oracle Database provides a Segment Advisor that
helps you determine whether an object has space
available for reclamation based on the level of space
fragmentation within the object
In general, the extents of a segment do not return to the
tablespace until you drop the schema object whose data is
stored in the segment (using a DROP TABLE or DROP
CLUSTER statement).
A database administrator (DBA) can deallocate unused
extents using the following SQL syntax:
ALTER TABLE table_name DEALLOCATE UNUSED;

Overview of Segments

A segment is a set of extents that contains all the data for


a specific logical storage structure within a tablespace. For
example, for each table, Oracle allocates one or more
extents to form that table's data segment, and for each
index, Oracle allocates one or more extents to form its
index segment.
The segment header is stored in the first block of the first
extent. It contains:
● The extents table
● Free list descriptors
● The high water mark
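The extents belonging to a segment can be listed from the data dictionary. As an illustration (the owner and segment names here are examples, not objects from this document):

SQL > select extent_id, file_id, block_id, blocks
from dba_extents
where owner = 'SCOTT' and segment_name = 'EMP';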
SQL > create tablespace ts2
datafile '/u01/app/oracle/oradata/sales/s2.dbf' size 50M
extent management dictionary
default storage(
initial 50k
next 50k
minextents 2
maxextents 50
pctincrease 0);
All segments created in the tablespace will inherit the
default storage parameters unless their storage
parameters are specified explicitly to override the default.
Initial – size in bytes of the first extent in a segment.
Next – size in bytes of second and subsequent segment
extents.
Pctincrease – Percent by which each extent grows after
the second.
SMON periodically coalesces free space in a DMT, but only if
the PCTINCREASE setting is not zero.
Minextents – Minimum number of extents allocated to each
segment upon creation.
Maxextents – Maximum number of extents allocated in a
segment. We can also specify UNLIMITED.

Managing Tablespaces and Datafiles


An Oracle database consists of one or more logical storage units called
tablespaces, which collectively store all of the database's data.
Each tablespace in an Oracle database consists of one or more files called
datafiles, which are physical structures that conform to the operating
system in which Oracle is running.
A database's data is collectively stored in the datafiles that constitute each
tablespace of the database. For example, the simplest Oracle database
would have one tablespace and one datafile. Another database can have
three tablespaces, each consisting of two datafiles (for a total of six
datafiles).
Advantages of using multiple tablespaces:
Separate user data from data dictionary data to reduce contention among
dictionary objects and schema objects stored in the same datafiles.
Separate the data of one application from the data of another, so that
taking one tablespace offline does not affect other applications.
Take individual tablespaces offline while others remain online.
Creating New Tablespaces
We can create Locally Managed or Dictionary Managed tablespaces.
In earlier oracle versions only dictionary managed tablespaces are
available. But from oracle 8i we can also create Locally Managed
tablespaces.
When oracle allocates space to a segment, a group of contiguous free
blocks called an extent is added to the segment.
Locally Managed Tablespaces:
A tablespace that manages its own extents maintains a bitmap in each
datafile to keep track of the free or used status of blocks in that datafile.
Each bit in the bitmap corresponds to a group of blocks. When an extent
is allocated or freed for reuse, Oracle changes the bitmap values to show
the new status of the blocks. These changes do not generate rollback
information because they do not update tables (like sys.uet$, sys.fet$) in
the data dictionary (except for special cases such as tablespace quota
information).
When you create a locally managed tablespace, header bitmaps are
created for each datafile. If more datafiles are added, new header bitmaps
are created for each added file.
Local management of extents automatically tracks adjacent free space,
eliminating the need to coalesce free extents. The sizes of extents that are
managed locally can be determined automatically by the system;
alternatively, all extents can have the same size in a locally managed
tablespace.
Dictionary Managed Tablespaces:
In DMT, to keep track of the free or used status of blocks, oracle uses
data dictionary tables. When an extent is allocated or freed for reuse, free
space is recorded in the SYS.FET$ table, and used space in the
SYS.UET$ table. Whenever space is required in one of these tablespaces,
the ST (space transaction) enqueue must be obtained to do inserts
and deletes against these tables. As only one process can acquire the ST
enqueue at a given time, this often leads to contention. These changes
generate rollback information because they update tables (like sys.uet$,
sys.fet$) in the data dictionary.

SQL > select tablespace_name, extent_management, allocation_type
from dba_tablespaces;

TABLESPACE_NAME                EXTENT_MAN ALLOCATIO
------------------------------ ---------- ---------
SYSTEM                         DICTIONARY USER
UNDOTBS                        LOCAL      SYSTEM
SYSAUX                         LOCAL      SYSTEM
TEMP                           LOCAL      UNIFORM
USERS                          LOCAL      SYSTEM

5 rows selected.
To create a locally managed tablespace
Important Points:
1. LMTs can be created as
a) AUTOALLOCATE: specifies that the tablespace is system managed.
Users cannot specify an extent size.
b) UNIFORM: specifies that the tablespace is managed with uniform
extents of SIZE bytes. The default SIZE is 1 megabyte.
2. One cannot create a locally managed SYSTEM tablespace in 8i.
3. This is possible from 9.2.0.X, where the SYSTEM tablespace is created
by DBCA as locally managed by default.
AUTOALLOCATE specifies that extent sizes are system managed.
Oracle will choose "optimal" next extent sizes starting with 64KB. As the
segment grows larger extent sizes will increase to 1MB, 8MB, and
eventually to 64MB. This is the recommended option for a low or
unmanaged environment.
UNIFORM specifies that the tablespace is managed with uniform extents
of SIZE bytes (use K or M to specify the extent size in kilobytes or
megabytes). The default size is 1M. The uniform extent size of a locally
managed tablespace cannot be overridden when a schema object, such as
a table or an index, is created.
Also note, if you specify LOCAL, you cannot specify DEFAULT
STORAGE, MINIMUM EXTENT or TEMPORARY.
SQL > create tablespace test
datafile '/u01/app/oracle/oradata/sales/test.dbf' size 100M
Extent management Local Autoallocate;

Autoallocate causes the tablespace to be system managed
with a minimum extent size of 64K.

An alternative to AUTOALLOCATE is UNIFORM. This tells Oracle that
the tablespace should be managed with extents of uniform size.

SQL > create tablespace ts1


datafile '/u01/app/oracle/oradata/sales/ts1.dbf' size 50M
extent management local uniform size 256K;
SQL > create tablespace ts3
datafile '/u01/app/oracle/oradata/sales/s3.dbf' size 50M
extent management local uniform size 128k
blocksize 4k;
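Note that a tablespace with a nonstandard block size (4k here, when the database default block size differs) can be created only after a matching buffer cache has been configured. A sketch, with an illustrative cache size:

SQL > alter system set db_4k_cache_size = 16M;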

To create DMT
SQL > create tablespace ts2
datafile '/u01/app/oracle/oradata/sales/ts2.dbf' size 50M
extent management Dictionary;
Segment Space Management in LMT:
Use segment space management clause to specify how free and used
space within a segment is to be managed.
Once it is established, we cannot modify segment space management for
a tablespace.

From Oracle 9i, one can not only have bitmap managed tablespaces, but
also bitmap managed segments when setting Segment Space
Management to AUTO for a tablespace.
Segment Space Management eliminates the need to specify and tune the
PCTUSED, FREELISTS, and FREELISTS GROUPS storage parameters
for schema objects. The Automatic Segment Space Management feature
improves the performance of concurrent DML operations significantly
since different parts of the bitmap can be used simultaneously eliminating
serialization for free space lookups against the FREELSITS. This is of
particular importance when using RAC, or if "buffer busy waits" are
deteted.
Manual – This setting uses free lists to manage free space within
segments.

We must specify and tune the PCTUSED, FREELISTS, and FREELIST
GROUPS storage parameters. The default is MANUAL.
Auto – This uses bit maps to manage free space within segments.
A bit map describes the status of each data block within a segment with
regard to the data block ability to have additional rows inserted.
Bitmaps allow oracle to manage free space automatically.
Automatic delivers better space utilization than manual and it is self
tuning. It is not necessary to specify and tune pctused, freelists, and free
list groups storage parameters for an object created in the tablespace. If
specified these are ignored.
Free lists are lists of data blocks that have space available for inserting
rows.
SQL > create tablespace ts1
datafile '/u01/app/oracle/oradata/sales/s1.dbf' size 50M
extent management local segment space management auto;
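To verify the segment space management mode of each tablespace:

SQL > select tablespace_name, segment_space_management
from dba_tablespaces;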

Bigfile Tablespaces (oracle 10g)


Oracle lets you create bigfile tablespaces. This allows Oracle Database to
contain tablespaces made up of single large files rather than numerous
smaller ones. This lets Oracle Database utilize the ability of 64-bit
systems to create and manage ultralarge files. The consequence of this is
that Oracle Database can now scale up to 8 exabytes in size.
The system default is to create a smallfile tablespace, which is the
traditional type of Oracle tablespace. The SYSTEM and SYSAUX tablespaces
are always created using the system default type.
An Oracle database can contain both bigfile and smallfile tablespaces.
Tablespaces of different types are indistinguishable in terms of execution
of SQL statements that do not explicitly refer to datafiles.
SQL > create bigfile tablespace bs1
datafile '/u01/app/oracle/oradata/sales/bs1.dbf' size 10G;
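Because a bigfile tablespace contains exactly one datafile, it can be resized at the tablespace level, without naming the datafile:

SQL > alter tablespace bs1 resize 20G;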

Extending the size of a tablespace


The size of a tablespace is the size of the datafiles that constitute the
tablespace. The size of a database is the collective size of the tablespaces
that constitute the database.
You can enlarge a database in three ways:
1. Add a datafile to a tablespace
2. Add a new tablespace
3. Increase the size of a datafile
When you add another datafile to an existing tablespace, you increase the
amount of disk space allocated for the corresponding tablespace.

First
By extending the size of a datafile
SQL > alter database datafile
'/u01/app/oracle/oradata/sales/ts1.dbf' resize 100M;

Second
We can also extend the size of the tablespace by adding a new
datafile to a tablespace.

SQL > alter tablespace ts1 add datafile


'/u01/app/oracle/oradata/sales/ts3.dbf' size 50M;

Third
We can also use the autoextend feature. Oracle will automatically increase
the size of the datafile whenever space is required. Here we can also
specify by how much the file should grow and its maximum size.

SQL > alter database datafile


'/u01/app/oracle/oardata/sales/ts1.dbf' autoextend on next 5M
maxsize 500M;
We can also make a datafile auto extendable at the time of creating
the tablespace itself.
SQL > create tablespace ts5
datafile '/u01/app/oracle/oradata/sales/ts5.dbf' size 100M
autoextend on next 5M maxsize 200M;
Decreasing size of a tablespace
We can decrease the size of a tablespace by decreasing its datafile size.
A datafile's size can be decreased only up to the free space in it.
SQL > alter database datafile
'/u01/app/oracle/oradata/sales/ts1.dbf' resize 30M;
SQL > select tablespace_name, autoextensible from dba_data_files;

TABLESPACE_NAME                AUT
------------------------------ ---
SYSTEM                         NO
TS3                            NO
UNDOTBS                        NO
SYSAUX                         NO
USERS                          NO
TS1                            NO
TS2                            NO
Coalescing Tablespaces
SQL > alter tablespace ts1 coalesce;

Taking Tablespaces offline or online


We can take an online tablespace offline so that it is temporarily
unavailable for general use.
The rest of the database is available for users to access.
Similarly we can bring an offline tablespace to an online.

SQL > alter tablespace ts1 offline;

SQL > alter tablespace ts1 online;


SQL > alter database datafile
'/u01/app/oracle/oradata/sales/ts1.dbf' offline;
We cannot take the individual datafile offline if the database is
running in NOARCHIVELOG mode.
If the datafile has become corrupt or missing when the database is
running in NOARCHIVELOG mode then you can only drop
it by giving the following.

SQL > alter database datafile


'/u01/app/oracle/oradata/sales/ts1.dbf' offline for drop;

Renaming Tablespaces

SQL > alter tablespace ts1 rename to test1;

Dropping tablespaces
We can drop a tablespace and its contents.

We must have the drop tablespace system privilege.


SQL > drop tablespace ts1;

The above statement drops a tablespace which is empty.
If it is not empty, then use the following:
SQL > drop tablespace ts1 including contents;
But the datafiles will not be deleted; we have to use OS commands
to delete these files. Otherwise, use the following:
SQL > drop tablespace ts1 including contents and datafiles;

To list free space in the database


SQL > select sum(bytes)/1024 "free space" from dba_free_space;
To see used space in the database
SQL > select sum(bytes)/1024 "used space" from dba_segments;
To list default storage parameters
SQL > select tablespace_name,initial_extent,next_extent,
max_extents from dba_tablespaces;
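To list the size of each datafile:

SQL > select file_name, tablespace_name, bytes/1024/1024 "Size MB"
from dba_data_files;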
Advantages of Locally Managed Tablespaces(LMT) over Dictionary
Managed Tablespaces(DMT):

1. Reduced recursive space management


2. Reduced contention on data dictionary tables
3. No rollback generated
4. No coalescing required
From Oracle9i release 9.2 one can change the SYSTEM tablespace to
locally managed. Further, if you create a database with DBCA (Database
Configuration Assistant), it will have a locally managed SYSTEM
tablespace by default. The following restrictions apply:
● No dictionary-managed tablespace in the database can be READ
WRITE.
● You cannot create new dictionary managed tablespaces
● You cannot convert any dictionary managed tablespaces to local
Thus, it is best only to convert the SYSTEM tablespace to LMT after
all other tablespaces are migrated to LMT.

The following procedures are used for the migration:
dbms_space_admin.Tablespace_Migrate_To_Local
dbms_space_admin.Tablespace_Migrate_From_Local
Converting DMT to LMT
SQL > exec dbms_space_admin.Tablespace_Migrate_To_Local('TS1');
Converting LMT to DMT:
SQL> exec
dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');

To convert System tablespace from DMT to LMT.


First list all tablespaces and convert all tablespaces to LMT
before converting SYSTEM tablespace to LMT.
For this, enable restricted session:
SQL > alter system enable restricted session;

Alter all other tablespaces to read only, and take the SYSAUX
tablespace offline:
SQL > alter tablespace users read only;
SQL > alter tablespace index_data read only;
SQL > alter tablespace sysaux offline;
SQL > exec dbms_space_admin.Tablespace_Migrate_To_Local(
'SYSTEM');

SQL > select tablespace_name,extent_management from


dba_tablespaces;
Managing tablespaces with oracle managed files
With OMF, the DB_CREATE_FILE_DEST parameter in the
parameter file specifies the location where datafiles are to be created.
File names are automatically generated by oracle, and the
datafile clause is not used.
Setting the parameter dynamically:
SQL > alter system set DB_CREATE_FILE_DEST='/u01/app/oracle/oradata';

SQL > create tablespace appdat1 datafile size 100M;


When OMF tablespaces are dropped their associated datafiles are
also deleted at the operating system level.
Relocating or Renaming datafiles
1. Take the tablespace offline.
2. Rename or relocate the file with the help of the OS.
3. Issue ALTER TABLESPACE with RENAME DATAFILE.
4. Bring the tablespace online.
SQL > ALTER TABLESPACE USERS OFFLINE;

SQL > alter tablespace users rename datafile


'/u01/app/oracle/oradata/sales/user.dbf' to
'/u02/app/oracle/oradata/sales/u1.dbf';

SQL > alter tablespace users online;


OR
we can also use rman
rman> connect target /
rman> BACKUP AS COPY DATAFILE {file id} FORMAT '{new file
name/new location}';
rman> sql 'ALTER TABLESPACE {tablespace of datafile} OFFLINE';
rman> SWITCH DATAFILE {file id} TO COPY;
rman> RECOVER DATAFILE {file id};
rman> sql 'ALTER TABLESPACE {tablespace of datafile} ONLINE';
rman> host 'rm -f {filename of ORIGINAL file}';

OR
1) SQL> alter tablespace <TS name> begin backup;
2) $ cp <old name> <new name>
3) SQL> alter tablespace <TS name> end backup;
4) SQL> alter database datafile <old name> offline;
5) SQL> alter database rename file <old name> to <new name>;
6) SQL> recover datafile <new name>;
7) SQL> alter database datafile <new name> online;

● Managing Tablespace Alerts

Oracle Database provides proactive help in managing disk space for


tablespaces by alerting you when available space is running low. Two
alert thresholds are defined by default: warning and critical. The warning
threshold is the limit at which space is beginning to run low. The critical
threshold is a serious limit that warrants your immediate attention. The
database issues alerts at both thresholds.

There are two ways to specify alert thresholds for both locally managed
and dictionary managed tablespaces:
● By percent full
For both warning and critical thresholds, when space used becomes
greater than or equal to a percent of total space, an alert is issued.
● By free space remaining (in kilobytes (KB))
For both warning and critical thresholds, when remaining space
falls below an amount in KB, an alert is issued. Free-space-
remaining thresholds are more useful for very large tablespaces.
New tablespaces are assigned alert thresholds as follows:

● Locally managed tablespace—When you create a new locally


managed tablespace, it is assigned the default threshold values
defined for the database. A newly created database has a default of
85% full for the warning threshold and 97% full for the critical
threshold. Defaults for free space remaining thresholds for a new
database are both zero (disabled). You can change these database
defaults, as described later in this section.
● Dictionary managed tablespace—When you create a new
dictionary managed tablespace, it is assigned the threshold values
that Enterprise Manager lists for "All others" in the metrics
categories "Tablespace Free Space (MB) (dictionary managed)"
and "Tablespace Space Used (%) (dictionary managed)." You
change these values on the Metric and Policy Settings page.
To set alert threshold values

For locally managed tablespaces, use Enterprise Manager or the
DBMS_SERVER_ALERT.SET_THRESHOLD package procedure.

For dictionary managed tablespaces, use Enterprise Manager.

Example—Locally Managed Tablespace

The following example sets the free-space-remaining thresholds in the


USERS tablespace to 10 MB (warning) and 2 MB (critical), and disables
the percent-full thresholds.
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_BYT_FREE,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_LE,
    warning_value           => '10240',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_LE,
    critical_value          => '2048',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');

  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GT,
    warning_value           => '0',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GT,
    critical_value          => '0',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/
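The thresholds currently in effect can then be verified from the DBA_THRESHOLDS dictionary view; for example:

SQL > select metrics_name, warning_value, critical_value
from dba_thresholds
where object_name = 'USERS';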

Restoring a Tablespace to Database Default Thresholds

After explicitly setting values for locally managed tablespace alert


thresholds, you can cause the values to revert to the database defaults by
setting them to NULL with

DBMS_SERVER_ALERT.SET_THRESHOLD.
Modifying Database Default Thresholds
To modify database default thresholds for locally managed tablespaces,
invoke DBMS_SERVER_ALERT.SET_THRESHOLD as shown in the
previous example, but set object_name to NULL. All tablespaces that
use the database default are then switched to the new default.
● Viewing Alerts

You view alerts by accessing the home page of Enterprise Manager


Database Control.
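Outstanding alerts can also be queried directly from the data dictionary (cleared alerts move to DBA_ALERT_HISTORY); for example:

SQL > select object_name, reason from dba_outstanding_alerts;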
● Limitations

Threshold-based alerts have the following limitations:


● Alerts are not issued for locally managed tablespaces that are
offline or in read-only mode. However, the database reactivates the
alert system for such tablespaces after they become read/write or
available.
● When you take a tablespace offline or put it in read-only mode, you
should disable the alerts for the tablespace by setting the thresholds
to zero. You can then reenable the alerts by resetting the thresholds
when the tablespace is once again online and in read/write mode.
Temporary Tablespaces
These are used for large sorting operations.
Every database should have one temporary tablespace.
SQL > create temporary tablespace temp tempfile
'/u01/app/oracle/oradata/sales/tmp.dbf' size 100M
extent management local uniform size 5M;
All temporary tablespaces are created with locally managed
extents of a uniform size. We cannot use autoallocate.
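To check which tablespace is the database default temporary tablespace:

SQL > select property_value from database_properties
where property_name = 'DEFAULT_TEMP_TABLESPACE';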

Increasing or decreasing size of a temporary tablespace.


SQL > alter database tempfile
'/u01/app/oracle/oradata/sales/tmp.dbf' resize 50M;

Unlike normal data files, TEMPFILEs are not fully initialised (sparse).
When you create a TEMPFILE, Oracle only writes to the header and last

block of the file. This is why it is much quicker to create a TEMPFILE


than to create a normal database file.
TEMPFILEs are not recorded in the database's control file. This implies
that one can just recreate them whenever you restore the database, or after
deleting them by accident. This opens interesting possibilities like having
different TEMPFILE configurations between permanent and standby
databases, or configure TEMPFILEs to be local instead of shared in a
RAC environment.
One cannot remove datafiles from a tablespace until you drop the entire
tablespace. However, one can remove a TEMPFILE from a database.
Look at this example:

SQL > alter database tempfile


'/u01/app/oracle/oradata/sales/tmp.dbf' drop including
datafiles;
If you remove all tempfiles from a temporary tablespace, you may
encounter error: ORA-25153: Temporary Tablespace is Empty. Use the
following statement to add a TEMPFILE to a temporary tablespace:
SQL> ALTER TABLESPACE temp ADD TEMPFILE
'/oradata/temp03.dbf' SIZE 100M;
One can monitor temporary segments from V$SORT_SEGMENT and
V$SORT_USAGE
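For example, current temporary segment usage can be summarized as follows:

SQL > select tablespace_name, total_blocks, used_blocks, free_blocks
from v$sort_segment;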
DBA_FREE_SPACE does not record free space for temporary
tablespaces. Use V$TEMP_SPACE_HEADER instead:

To view information about free space in tempfiles


SQL> select * from v$temp_space_header;

TABLESPACE_NAME  FILE_ID BYTES_USED BLOCKS_USED BYTES_FREE BLOCKS_FREE RELATIVE_FNO
---------------  ------- ---------- ----------- ---------- ----------- ------------
TEMP                   1   15728640        1920   89128960       10880            1

Tablespace groups

A tablespace group enables a user to consume temporary space from


multiple tablespaces.
It contains at least one tablespace.
Its name cannot be the same as that of any tablespace.
We do not create a tablespace group explicitly.
It is created when we assign the first temporary tablespace to the group. It
is deleted when we delete the last temporary tablespace from the group.
The view dba_tablespace_groups lists tablespace groups and their
member tablespaces.
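To list the groups and their members:

SQL > select group_name, tablespace_name from dba_tablespace_groups;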

● Temporary Tablespace Group Benefits

Temporary tablespace group has the following benefits:


● It allows multiple default temporary tablespaces to be
specified at the database level.
● It allows the user to use multiple temporary
tablespaces in different sessions at the same time.
● It allows a single SQL operation to use multiple
temporary tablespaces for sorting.

SQL > create temporary tablespace temp2 tempfile
'/u01/app/oracle/oradata/sales/tmp2.dbf' size 50M tablespace
group group1;
Making a group the default temporary tablespace:

SQL > alter database default temporary tablespace


group1;

This statement will remove temporary tablespace temp04 from its


original temporary tablespace group:

ALTER TABLESPACE temp04 TABLESPACE GROUP '';

Add a temporary tablespace to a temporary tablespace group.

ALTER TABLESPACE temp03 TABLESPACE GROUP


tempgroup_b;

Renaming temporary files

SQL> alter database tempfile


'/oracle/oradata/prod/temp01.dbf' offline;

Database altered.

SQL>

Move the file to a different location.

SQL > alter database rename file


'/oracle/oradata/prod/temp01.dbf' to

'/oracle/temp01.dbf' ;

SQL> alter database tempfile '/oracle/temp01.dbf' online;

● Managing the Undo Tablespace

● What Is Undo?

Every Oracle Database must have a method of maintaining information


that is used to roll back, or undo, changes to the database. Such
information consists of records of the actions of transactions, primarily
before they are committed. These records are collectively referred to as
undo.
Undo records are used to:
● Roll back transactions when a ROLLBACK statement is issued
● Recover the database
● Provide read consistency
● Analyze data as of an earlier point in time by using Oracle Flashback
Query
● Recover from logical corruptions using Oracle Flashback features
When a ROLLBACK statement is issued, undo records are used to undo
changes that were made to the database by the uncommitted transaction.
During database recovery, undo records are used to undo any
uncommitted changes applied from the redo log to the datafiles. Undo
records provide read consistency by maintaining the before image of the
data for users who are accessing the data at the same time that another

user is changing it.

● Introduction to Automatic Undo Management

● Overview of Automatic Undo Management


Oracle provides a fully automated mechanism, referred to as automatic
undo management, for managing undo information and space. In this
management mode, you create an undo tablespace, and the server
automatically manages undo segments and space among the various
active sessions.
You set the UNDO_MANAGEMENT initialization parameter to AUTO
to enable automatic undo management. A default undo tablespace is then
created at database creation. An undo tablespace can also be created
explicitly.
When the instance starts, the database automatically selects the first
available undo tablespace. If no undo tablespace is available, then the
instance starts without an undo tablespace and stores undo records in the
SYSTEM tablespace. This is not recommended in normal circumstances,
and an alert message is written to the alert log file to warn that the system
is running without an undo tablespace.
If the database contains multiple undo tablespaces, you can optionally
specify at startup that you want to use a specific undo tablespace. This is
done by setting the UNDO_TABLESPACE initialization parameter, as
shown in this example:
UNDO_TABLESPACE = undotbs_01

In this case, if you have not already created the undo tablespace (in this
example, undotbs_01), the STARTUP command fails. The
UNDO_TABLESPACE parameter can be used to assign a specific undo
tablespace to an instance in an Oracle Real Application Clusters
environment.
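If the database contains multiple undo tablespaces, the undo tablespace in use can also be switched dynamically (assuming the target tablespace, here undotbs_02, already exists):

SQL > alter system set undo_tablespace = 'UNDOTBS_02';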
The following is a summary of the initialization parameters for automatic
undo management:

UNDO_MANAGEMENT – If AUTO, use automatic undo management. The
default is MANUAL.
UNDO_TABLESPACE – An optional dynamic parameter specifying the name
of an undo tablespace. This parameter should be used only when the
database has multiple undo tablespaces and you want to direct the
database instance to use a particular undo tablespace.

When automatic undo management is enabled, if the initialization


parameter file contains parameters relating to manual undo management,
they are ignored.

● Undo Retention
After a transaction is committed, undo data is no longer needed for
rollback or transaction recovery purposes. However, for consistent read
purposes, long-running queries may require this old undo information for
producing older images of data blocks. Furthermore, the success of
several Oracle Flashback features can also depend upon the availability of
older undo information. For these reasons, it is desirable to retain the old
undo information for as long as possible.
When automatic undo management is enabled, there is always a current
undo retention period, which is the minimum amount of time that Oracle
Database attempts to retain old undo information before overwriting it.
Old (committed) undo information that is older than the current undo
retention period is said to be expired. Old undo information with an age
that is less than the current undo retention period is said to be unexpired.
Oracle Database automatically tunes the undo retention period based on
undo tablespace size and system activity. You can specify a minimum
undo retention period (in seconds) by setting the UNDO_RETENTION
initialization parameter. The database makes its best effort to honor the
specified minimum undo retention period, provided that the undo
tablespace has space available for new transactions. When available space
for new transactions becomes short, the database begins to overwrite
expired undo. If the undo tablespace has no space for new transactions
after all expired undo is overwritten, the database may begin overwriting
unexpired undo information. If any of this overwritten undo information
is required for consistent read in a current long-running query, the query
could fail with the snapshot too old error message.
The following points explain the exact impact of the
UNDO_RETENTION parameter on undo retention:

1. The UNDO_RETENTION parameter is ignored for a fixed size undo


tablespace. The database may overwrite unexpired undo information
when tablespace space becomes low.
2. For an undo tablespace with the AUTOEXTEND option enabled, the
database attempts to honor the minimum retention period specified by
UNDO_RETENTION. When space is low, instead of overwriting
unexpired undo information, the tablespace auto-extends. If the
MAXSIZE clause is specified for an auto-extending undo tablespace,
when the maximum size is reached, the database may begin to
overwrite unexpired undo information.

Retention Guarantee

To guarantee the success of long-running queries or Oracle Flashback


operations, you can enable retention guarantee. If retention guarantee is
enabled, the specified minimum undo retention is guaranteed; the
database never overwrites unexpired undo data even if it means that
transactions fail due to lack of space in the undo tablespace. If retention
guarantee is not enabled, the database can overwrite unexpired undo
when space is low, thus lowering the undo retention for the system. This
option is disabled by default.
WARNING:
Enabling retention guarantee can cause multiple DML operations to fail.
Use with caution.
You enable retention guarantee by specifying the RETENTION
GUARANTEE clause for the undo tablespace when you create it with
either the CREATE DATABASE or CREATE UNDO TABLESPACE
statement. Or, you can later specify this clause in an ALTER
TABLESPACE statement. You disable retention guarantee with the
RETENTION NOGUARANTEE clause.
You can use the DBA_TABLESPACES view to determine the retention
guarantee setting for the undo tablespace. A column named RETENTION
contains a value of GUARANTEE, NOGUARANTEE, or NOT APPLY
(used for tablespaces other than the undo tablespace).
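For example (the tablespace name and file path are illustrative):

SQL > create undo tablespace undotbs2
datafile '/u01/app/oracle/oradata/sales/undotbs2.dbf' size 500M
retention guarantee;

SQL > select tablespace_name, retention from dba_tablespaces;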

Automatic Tuning of Undo Retention

Oracle Database automatically tunes the undo retention period based on


how the undo tablespace is configured.
1. If the undo tablespace is fixed size, the database tunes the retention
period for the best possible undo retention for that tablespace size

and the current system load. This tuned retention period can be
significantly greater than the specified minimum retention period.
7. If the undo tablespace is configured with the AUTOEXTEND
option, the database tunes the undo retention period to be
somewhat longer than the longest-running query on the system at
that time. Again, this tuned retention period can be greater than the
specified minimum retention period.
Note:
Automatic tuning of undo retention is not supported for LOBs. This is
because undo information for LOBs is stored in the segment itself and not
in the undo tablespace. For LOBs, the database attempts to honor the
minimum undo retention period specified by UNDO_RETENTION.
However, if space becomes low, unexpired LOB undo information may
be overwritten.
You can determine the current retention period by querying the
TUNED_UNDORETENTION column of the V$UNDOSTAT view. This
view contains one row for each 10-minute statistics collection interval
over the last 4 days. (Beyond 4 days, the data is available in the
DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given
in seconds.
select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
to_char(end_time, 'DD-MON-RR HH24:MI') end_time,
tuned_undoretention
from v$undostat order by end_time;

BEGIN_TIME END_TIME TUNED_UNDORETENTION
--------------- --------------- -------------------
04-FEB-05 00:01 04-FEB-05 00:11 12100
...
07-FEB-05 23:21 07-FEB-05 23:31 86700
07-FEB-05 23:31 07-FEB-05 23:41 86700
07-FEB-05 23:41 07-FEB-05 23:51 86700
07-FEB-05 23:51 07-FEB-05 23:52 86700

576 rows selected.

Undo Retention Tuning and Alert Thresholds

For a fixed size undo
tablespace, the database calculates the maximum undo retention period
based on database statistics and on the size of the undo tablespace. For
optimal undo management, rather than tuning based on 100% of the
tablespace size, the database tunes the undo retention period based on
85% of the tablespace size, or on the warning alert threshold percentage
for space used, whichever is lower. (The warning alert threshold defaults
to 85%, but can be changed.) Therefore, if you set the warning alert
threshold of the undo tablespace below 85%, this may reduce the tuned
length of the undo retention period.
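Because the warning alert threshold influences the tuned retention period, it can be adjusted per tablespace. A hedged sketch using the DBMS_SERVER_ALERT package follows; the tablespace name UNDOTBS1 is an assumption, substitute your own:

```sql
-- Sketch: change the warning/critical space thresholds for an undo
-- tablespace. The name UNDOTBS1 is an assumption.
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '90',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '95',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'UNDOTBS1');
END;
/
```

Raising the warning threshold back toward 100% allows the database to tune retention against a larger fraction of the tablespace.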

● Setting the Undo Retention Period

You set the undo retention period by setting the UNDO_RETENTION
initialization parameter. This parameter specifies the desired minimum
undo retention period in seconds. The current undo retention period
may be automatically tuned to be
greater than UNDO_RETENTION, or, unless retention guarantee is
enabled, less than UNDO_RETENTION if space is low.
To set the undo retention period:
● Do one of the following:
● Set UNDO_RETENTION in the initialization parameter file.
UNDO_RETENTION = 1800

● Change UNDO_RETENTION at any time using the ALTER
SYSTEM statement:
ALTER SYSTEM SET UNDO_RETENTION = 2400;

The effect of an UNDO_RETENTION parameter change is immediate,
but it can only be honored if the current undo tablespace has enough
space.
Sizing the Undo Tablespace
You can size the undo tablespace appropriately either by using automatic
extension of the undo tablespace or by using the Undo Advisor for a fixed
sized tablespace.

● Using Auto-Extensible Tablespaces


Oracle Database supports automatic extension of the undo tablespace to
facilitate capacity planning of the undo tablespace in the production
environment. When the system is first running in the production
environment, you may be unsure of the space requirements of the undo
tablespace. In this case, you can enable automatic extension of the undo
tablespace so that it automatically increases in size when more space is
needed. You do so by including the AUTOEXTEND keyword when you
create the undo tablespace.
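If the undo tablespace already exists, autoextension can also be switched on for its datafile afterwards. A minimal sketch; the datafile path is illustrative:

```sql
-- Sketch: enable autoextension on an existing undo datafile.
-- The path is illustrative; use your own datafile name.
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undo0101.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 10G;
```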

● Sizing Fixed-Size Undo Tablespaces


If you have decided on a fixed-size undo tablespace, the Undo Advisor
can help you estimate needed capacity. You can access the Undo Advisor
through Enterprise Manager or through the DBMS_ADVISOR PL/SQL
package. Enterprise Manager is the preferred method of accessing the
advisor.
The Undo Advisor relies for its analysis on data collected in the
Automatic Workload Repository (AWR). It is therefore important that the
AWR have adequate workload statistics available so that the Undo
Advisor can make accurate recommendations. For newly created
databases, adequate statistics may not be available immediately. In such
cases, an auto-extensible undo tablespace can be used.
To use the Undo Advisor, you first estimate these two values:
● The length of your expected longest running query
After the database has been up for a while, you can view the
Longest Running Query field on the Undo Management page of
Enterprise Manager.
● The longest interval that you will require for flashback operations
For example, if you expect to run Flashback Queries for up to 48
hours in the past, your flashback requirement is 48 hours.
You then take the maximum of these two undo retention values and use
that value to look up the required undo tablespace size on the Undo
Advisor graph.

The Undo Advisor PL/SQL Interface

You can activate the Undo Advisor by creating an undo advisor task
through the advisor framework. The following example creates an undo
advisor task to evaluate the undo tablespace. The name of the advisor is
'Undo Advisor'. The analysis is based on Automatic Workload Repository
snapshots, which you must specify by setting parameters
START_SNAPSHOT and END_SNAPSHOT. In the following example,
the START_SNAPSHOT is "1" and END_SNAPSHOT is "2".
DECLARE
tid NUMBER;
tname VARCHAR2(30);
oid NUMBER;
BEGIN
DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo
Advisor Task');
DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null,
null, null, 'null', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname,
'TARGET_OBJECTS', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname,
'START_SNAPSHOT', 1);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname,
'END_SNAPSHOT', 2);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE',
1);
DBMS_ADVISOR.execute_task(tname);
end;
/

After you have created the advisor task, you can view the output and
recommendations in the Automatic Database Diagnostic Monitor in
Enterprise Manager. This information is also available in the
DBA_ADVISOR_* data dictionary views.
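For example, the generated task and its findings can be read directly from those views. A sketch; note that in the block above 'Undo Advisor Task' is the task description, while the task name is returned in tname:

```sql
-- Sketch: locate the undo advisor task, then review its findings.
SELECT task_name, advisor_name, status
  FROM dba_advisor_tasks
 WHERE advisor_name = 'Undo Advisor';

SELECT type, message
  FROM dba_advisor_findings
 WHERE task_name = :task_name;  -- name returned by CREATE_TASK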

● Managing Undo Tablespaces

This section describes the various steps involved in undo tablespace
management and contains the following sections:

● Creating an Undo Tablespace


There are two methods of creating an undo tablespace. The first method
creates the undo tablespace when the CREATE DATABASE statement is
issued. This occurs when you are creating a new database, and the
instance is started in automatic undo management mode
(UNDO_MANAGEMENT = AUTO). The second method is used with
an existing database. It uses the CREATE UNDO TABLESPACE
statement.
You cannot create database objects in an undo tablespace. It is reserved
for system-managed undo data.
Oracle Database also enables you to create a single-file (bigfile) undo
tablespace.

Using CREATE DATABASE to Create an Undo Tablespace

You can create a specific undo tablespace using the UNDO
TABLESPACE clause of the CREATE DATABASE statement.
The following statement illustrates using the UNDO TABLESPACE
clause in a CREATE DATABASE statement. The undo tablespace is
named undotbs_01 and one datafile, /u01/oracle/rbdb1/undo0101.dbf, is
allocated for it.
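The statement itself does not appear above; a minimal sketch of such a command, in which every clause except UNDO TABLESPACE (and the names taken from the text) is illustrative, might look like this:

```sql
-- Sketch only: clauses other than UNDO TABLESPACE are illustrative.
CREATE DATABASE rbdb1
   DATAFILE '/u01/oracle/rbdb1/system01.dbf' SIZE 300M
   SYSAUX DATAFILE '/u01/oracle/rbdb1/sysaux01.dbf' SIZE 100M
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/u01/oracle/rbdb1/temp01.dbf' SIZE 50M
   UNDO TABLESPACE undotbs_01
      DATAFILE '/u01/oracle/rbdb1/undo0101.dbf' SIZE 100M;
```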

Using the CREATE UNDO TABLESPACE Statement

The CREATE UNDO TABLESPACE statement is the same as the
CREATE TABLESPACE statement, but the UNDO keyword is
specified. The database determines most of the attributes of the undo
tablespace, but you can specify the DATAFILE clause.
This example creates the undotbs_02 undo tablespace with the
AUTOEXTEND option:
CREATE UNDO TABLESPACE undotbs_02
DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE
AUTOEXTEND ON;

You can create more than one undo tablespace, but only one of them can
be active at any one time.
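To check which undo tablespace is currently active, the UNDO_TABLESPACE parameter can be inspected:

```sql
-- Show the undo tablespace currently in use by the instance.
SHOW PARAMETER undo_tablespace

-- Or query it:
SELECT value FROM v$parameter WHERE name = 'undo_tablespace';
```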

● Altering an Undo Tablespace


Undo tablespaces are altered using the ALTER TABLESPACE
statement. However, since most aspects of undo tablespaces are system
managed, you need only be concerned with the following actions:
● Adding a datafile
● Renaming a datafile
● Bringing a datafile online or taking it offline
● Beginning or ending an open backup on a datafile
● Enabling and disabling undo retention guarantee
These are also the only attributes you are permitted to alter.
If an undo tablespace runs out of space, or you want to prevent it from
doing so, you can add more files to it or resize existing datafiles.
The following example adds another datafile to undo tablespace
undotbs_01:

ALTER TABLESPACE undotbs_01
ADD DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' AUTOEXTEND
ON NEXT 1M
MAXSIZE UNLIMITED;

You can use the ALTER DATABASE...DATAFILE statement to resize
or extend a datafile.
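For example, the undo datafile added above can be resized in place:

```sql
-- Resize an existing undo datafile directly.
ALTER DATABASE DATAFILE '/u01/oracle/rbdb1/undo0102.dbf' RESIZE 500M;
```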

● Dropping an Undo Tablespace


Use the DROP TABLESPACE statement to drop an undo tablespace. The
following example drops the undo tablespace undotbs_01:
DROP TABLESPACE undotbs_01;

An undo tablespace can only be dropped if it is not currently used by any
instance. If the undo tablespace contains any outstanding transactions (for
example, a transaction died but has not yet been recovered), the DROP
TABLESPACE statement fails. However, since DROP TABLESPACE
drops an undo tablespace even if it contains unexpired undo information
(within retention period), you must be careful not to drop an undo
tablespace if undo information is needed by some existing queries.
DROP TABLESPACE for undo tablespaces behaves like DROP
TABLESPACE...INCLUDING CONTENTS
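Before dropping, the extent status in the candidate tablespace can be checked; ACTIVE or UNEXPIRED extents indicate undo that may still be needed:

```sql
-- Check for active or unexpired undo before dropping the tablespace.
SELECT status, COUNT(*)
  FROM dba_undo_extents
 WHERE tablespace_name = 'UNDOTBS_01'
 GROUP BY status;
```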

● Switching Undo Tablespaces


You can switch from using one undo tablespace to another. Because the
UNDO_TABLESPACE initialization parameter is a dynamic parameter,
the ALTER SYSTEM SET statement can be used to assign a new undo
tablespace.
The following statement switches to a new undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

Assuming undotbs_01 is the current undo tablespace, after this command
successfully executes, the instance uses undotbs_02 in place of
undotbs_01 as its undo tablespace.
If any of the following conditions exist for the tablespace being switched
to, an error is reported and no switching occurs:
● The tablespace does not exist

● The tablespace is not an undo tablespace
● The tablespace is already being used by another instance (in a RAC
environment only)
The database is online while the switch operation is performed, and user
transactions can be executed while this command is being executed.
When the switch operation completes successfully, all transactions started
after the switch operation began are assigned to transaction tables in the
new undo tablespace.
The switch operation does not wait for transactions in the old undo
tablespace to commit. If there are any pending transactions in the old
undo tablespace, the old undo tablespace enters into a PENDING
OFFLINE mode (status). In this mode, existing transactions can continue
to execute, but undo records for new user transactions cannot be stored in
this undo tablespace.
An undo tablespace can exist in this PENDING OFFLINE mode, even
after the switch operation completes successfully. A PENDING
OFFLINE undo tablespace cannot be used by another instance, nor can it
be dropped. Eventually, after all active transactions have committed, the
undo tablespace automatically goes from the PENDING OFFLINE mode
to the OFFLINE mode. From then on, the undo tablespace is available for
other instances (in an Oracle Real Application Cluster environment).
If the parameter value for UNDO_TABLESPACE is set to '' (two single
quotes), then the current undo tablespace is switched out and the next
available undo tablespace is switched in. Use this statement with care
because there may be no undo tablespace available.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';

Establishing User Quotas for Undo Space


The Oracle Database Resource Manager can be used to establish user
quotas for undo space. The Database Resource Manager directive
UNDO_POOL allows DBAs to limit the amount of undo space consumed
by a group of users (resource consumer group).
You can specify an undo pool for each consumer group. An undo pool
controls the amount of total undo that can be generated by a consumer
group. When the total undo generated by a consumer group exceeds its
undo limit, the current UPDATE transaction generating the undo is
terminated. No other members of the consumer group can perform further
updates until undo space is freed from the pool.

When no UNDO_POOL directive is explicitly defined, users are allowed
unlimited undo space.
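A hedged sketch of creating such a directive with the DBMS_RESOURCE_MANAGER package follows; the plan name REPORTS_PLAN and consumer group REPORT_GROUP are assumptions and must already exist:

```sql
-- Sketch: limit undo for a consumer group via UNDO_POOL.
-- Plan REPORTS_PLAN and group REPORT_GROUP are assumptions.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'REPORTS_PLAN',
    group_or_subplan => 'REPORT_GROUP',
    comment          => 'Cap undo for reporting users',
    undo_pool        => 10240);  -- limit in kilobytes
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```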
The following views are useful for monitoring undo:
V$UNDOSTAT – Contains statistics for monitoring and tuning undo
space. Use this view to help estimate the amount of undo space required
for the current workload. The database also uses this information to help
tune undo usage in the system. This view is meaningful only in automatic
undo management mode.
V$ROLLSTAT – For automatic undo management mode, information
reflects behavior of the undo segments in the undo tablespace.
V$TRANSACTION – Contains undo segment information.
DBA_UNDO_EXTENTS – Shows the status and size of each extent in
the undo tablespace.
DBA_HIST_UNDOSTAT – Contains statistical snapshots of
V$UNDOSTAT information.
The V$UNDOSTAT view is useful for monitoring the effects of
transaction execution on undo space in the current instance. Statistics are
available for undo space consumption, transaction concurrency, the
tuning of undo retention, and the length and SQL ID of long-running
queries in the instance.
Each row in the view contains statistics collected in the instance for a ten-
minute interval. The rows are in descending order by the BEGIN_TIME
column value. Each row belongs to the time interval marked by
(BEGIN_TIME, END_TIME). Each column represents the data collected
for the particular statistic in that time interval. The first row of the view
contains statistics for the (partial) current time period. The view contains
a total of 576 rows, spanning a 4 day cycle.
The following example shows the results of a query on the
V$UNDOSTAT view.
SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS')
BEGIN_TIME,
TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS')
END_TIME,
UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY
AS "MAXCON"
FROM v$UNDOSTAT WHERE rownum <= 144;

BEGIN_TIME END_TIME UNDOTSN UNDOBLKS
TXNCOUNT MAXCON
------------------- ------------------- ---------- ---------- ---------- ----------
10/28/2004 14:25:12 10/28/2004 14:32:17 8 74 12071108
3
10/28/2004 14:15:12 10/28/2004 14:25:12 8 49 12070698
2

10/28/2004 14:05:12 10/28/2004 14:15:12 8 125 12070220
1
10/28/2004 13:55:12 10/28/2004 14:05:12 8 99 12066511
3
...
10/27/2004 14:45:12 10/27/2004 14:55:12 8 15 11831676
1
10/27/2004 14:35:12 10/27/2004 14:45:12 8 154 11831165
2

144 rows selected.

The preceding example shows how undo space is consumed in the system
for the previous 24 hours from the time 14:35:12 on 10/27/2004.
Finding the amount of undo generated in the current session
To illustrate the examples in the later sections of this article we need to
devise a small transaction (here, it is a single update statement). We also
need to know the exact amount of UNDO generated by the statement.
Table-1 shows the creation of a table TEMP1, and shows an UPDATE
on table TEMP1. It uses a query on the dynamic data dictionary views to
find the exact amount of UNDO generated by the UPDATE. We will
need this value in subsequent examples. The default block size for the
database is 8K.
UNDO Blocks and Bytes generated in a transaction/statement
SQL> create table temp1 as
2 select * from all_objects where rownum < 5001;

Table created.

SQL> update temp1 set owner = 'stage1';

5000 rows updated.

SQL> select USED_UBLK, USED_UREC, START_SCNB
2 from v$session a, v$transaction b
3 where rawtohex(a.saddr) = rawtohex(b.ses_addr)
4 and a.audsid = sys_context('userenv','sessionid');

USED_UBLK USED_UREC START_SCNB
---------- ---------- ----------

59 5001 687483932
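Because the block size here is 8K, the undo volume in bytes can be derived from USED_UBLK while the transaction is still open. A sketch:

```sql
-- Sketch: undo bytes = undo blocks * block size (run before COMMIT).
SELECT b.used_ublk * TO_NUMBER(p.value) AS undo_bytes
  FROM v$transaction b, v$parameter p
 WHERE p.name = 'db_block_size'
   AND b.ses_addr = (SELECT saddr FROM v$session
                      WHERE audsid = SYS_CONTEXT('userenv','sessionid'));
```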

SQL> commit;

Commit complete.

Managing Redo Log files


Every database must have at least two redo log groups.
Oracle writes all statements, except SELECT statements, to the logfiles.
This is done because Oracle performs deferred batch writes, i.e. it does
not write changes to disk per statement; instead it performs writes in
batches. So if a user updates a row, Oracle changes the row in the
db_buffer_cache, records the statement in the logfile, and gives the
message to the user that the row is updated, even though the row is not
yet written back to the datafile. The row is actually written to the
datafile later, in a batch. This is known as deferred batch writes.

Since Oracle defers writing to the datafile there is chance of power failure
or system crash before the row is written to the disk. That’s why Oracle
writes the statement in redo logfile so that in case of power failure or
system crash oracle can re-execute the statements next time when you
open the database.

Adding a New Redo logfile group


SQL > alter database add logfile group 4
'/u01/app/oracle/oradata/sales/log4.ora' size 10M;
You can add groups to a database up to the MAXLOGFILES setting you
have specified at the time of creating the database. If you want to change
the MAXLOGFILES setting you have to create a new controlfile.
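A group can also be created with several members at once, one per disk, so that the group is multiplexed from the start. The paths here are illustrative:

```sql
-- Add a multiplexed group with two members on different disks.
alter database add logfile group 5
  ('/u01/app/oracle/oradata/sales/log5a.ora',
   '/u02/app/oracle/oradata/sales/log5b.ora') size 10M;
```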

Adding members to an existing group

SQL > alter database add logfile member
'/u01/app/oracle/oradata/sales/log5.ora' to group 1;

You can add members to a group up to the MAXLOGMEMBERS setting
you have specified at the time of creating the database. If you want to
change the MAXLOGMEMBERS setting you have to create a new
controlfile.

Dropping members from a group


You can drop a member from a log group only if the group has more
than one member and it is not the current group. If you want to drop
members from the current group, force a log switch or wait until a log
switch occurs and another group becomes current. To force a log switch
give the following command:
SQL > alter system switch logfile;

SQL > alter database drop logfile member
'/u01/app/oracle/oradata/sales/log5.ora';

When you drop logfiles, the files are not deleted from the disk. You have
to use an O/S command to delete the files from the disk.

Dropping Logfile Group


You can drop a logfile group only if the database has more than
two groups and it is not the current group.

SQL > alter database drop logfile group 3;

We cannot resize logfiles.
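Since logfiles cannot be resized, the usual workaround is to add new groups of the desired size, switch until no old group is current, and then drop the old groups. A sketch using an illustrative path:

```sql
-- Sketch: "resize" redo logs by replacing the groups.
alter database add logfile group 5
  '/u01/app/oracle/oradata/sales/log5.ora' size 50M;
alter system switch logfile;        -- repeat until group 1 is inactive
alter database drop logfile group 1;
```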


Renaming a logfile

SQL > shutdown immediate

Move the logfile from Old location to new location using operating
system command
SQL > startup mount
Change the location in control file.
SQL > alter database rename file
'/u01/app/oracle/oradata/sales/log1.ora' to
'/u02/app/oracle/oradata/sales/log1.ora';
Open the database.
SQL > alter database open;

Clearing Redo logfiles

SQL > alter database sales clear logfile group 3;


A redo log file might become corrupted while the database is open, and
ultimately stop database activity because archiving cannot continue. In
this situation the ALTER DATABASE CLEAR LOGFILE statement can
be used to reinitialize the file without shutting down the database.
If the corrupt redo log file has not been archived, use the
UNARCHIVED keyword in the statement:
SQL > alter database sales clear unarchived logfile group 3;
To view information about logfiles
SQL > select * from v$log;

SQL > select * from v$logfile;


Managing control file
Every database will have a control file which is a small binary
file that records the physical structure of the database.
This includes
- The database name
- Names and locations of associated datafiles and redo log files
- The timestamp of the database creation
- The current log sequence number
- Checkpoint information. etc.
It is strongly recommended that you multiplex control files:
have at least two control files in a database, one on one hard disk and
another located on a different disk.
In this way, if the control file on one disk becomes corrupt, the other
copy will be available and you don’t have to do recovery of the control
file.
You can multiplex the control file at the time of creating a database and
later on also. If you have not multiplexed the control file at the time of
creating a database, you can do it now.
SQL > shutdown immediate
copy the control file from one location to other location
$ cp /u01/app/oracle/oradata/sales/control1.ora
/u02/app/oracle/oradata/sales/control2.ora
Now open the parameter file and add the second control file path
to the CONTROL_FILES parameter in the initialization file.
Now start the database.
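If the instance uses an SPFILE instead of a text parameter file, the same change can be registered with ALTER SYSTEM before the shutdown and copy; the paths match the example above:

```sql
-- Register both control file copies in the SPFILE
-- (takes effect on the next restart).
ALTER SYSTEM SET control_files =
  '/u01/app/oracle/oradata/sales/control1.ora',
  '/u02/app/oracle/oradata/sales/control2.ora'
  SCOPE = SPFILE;
```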

Creating a new control file


If you ever want to change the name of database or want to change the
setting of MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS then
you have to create a new control file
SQL > alter database backup controlfile to trace;
After giving above statement the create controlfile statement is
written to trace files in USER_DUMP_DEST directory.
Go to the above directory and open the last trace file in an editor.
It will have two sets of create controlfile, one is with resetlogs
and another without resetlogs.
Since we are changing the name of the database we have to use
the one with resetlogs.
Copy those lines to another file called ctl.sql
Edit this file and change the database name.
Start and do not mount the database.
SQL > startup nomount
Now execute the ctl.sql file.
SQL > @ctl.sql
Now open the database with resetlogs.

SQL > alter database open resetlogs;


Managing Users
Oracle database will have a list of users who can access the database.
Here as a DBA you are responsible for creating, maintaining and
terminating user accounts, managing their passwords, managing roles
and granting them to users, and granting privileges.
To see user names and their account status
SQL > select username,account_status from dba_users;
USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
RAMESH OPEN
OUTLN OPEN
MGMT_VIEW OPEN
SAMPLE OPEN
DBSNMP OPEN
SCOTT OPEN
SYSMAN OPEN
SYS OPEN
SYSTEM OPEN
TSMSYS EXPIRED & LOCKED
DIP EXPIRED & LOCKED
To lock and unlock an account
SQL > alter user scott account lock;
User altered.
SQL > select username,account_status from dba_users where
username='SCOTT';
USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT LOCKED
Once the account is locked the user cannot connect to the database.
If the user tries to connect, the following message appears.
SQL> connect scott/tiger;
ERROR:
ORA-28000: the account is locked

SQL > alter user scott account unlock;
User altered.
SQL > select username,account_status from dba_users where
username='SCOTT';
USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT OPEN
To change user password we can use the following methods.
SQL > password scott;
Changing password for scott
New password:
Retype new password:
Password changed
SQL > alter user scott identified by tiger;
User altered.
To see password information execute the following query.
Here password is shown in encrypted form and passwords are not
case sensitive.
SQL > select username,password from dba_users where
username='SCOTT';
USERNAME PASSWORD
------------------------------ ------------------------------
SCOTT F894844C34402B67
Every user is given a quota in a tablespace where the
user can create objects. Quota can be given to a user in more than
one tablespace, but only one tablespace can be the default. To see the
default tablespace assigned to a user:
SQL > select username,default_tablespace from dba_users;
USERNAME DEFAULT_TABLESPACE
------------------------------ ------------------------------
RAMESH USERS
OUTLN SYSTEM
MGMT_VIEW SYSTEM

SAMPLE MYSPACE
DBSNMP SYSAUX
SCOTT SYSTEM
SYSMAN SYSAUX
SYS SYSTEM
SYSTEM SYSTEM
TSMSYS SYSTEM
DIP SYSTEM
To see quotas in all other tablespaces assigned to a user.
SQL > select username,tablespace_name,max_bytes from
dba_ts_quotas where username='SCOTT';
USERNAME TABLESPACE_NAME MAX_BYTES
------------------------------ ------------------------------ ----------
SCOTT USERS 5242880
SCOTT MYSPACE 10485760
Here apart from the default tablespace, quotas are given for user
SCOTT in users and myspace tablespaces.

To see user tables


SQL > select table_name,table_type from dba_catalog where
owner = 'SCOTT';
TABLE_NAME TABLE_TYPE
------------------------------ -----------
DEPT TABLE
EMP TABLE
BONUS TABLE
SALGRADE TABLE
SQL > select * from dba_objects where owner='SCOTT';
SQL > select table_owner,table_name,synonym_name
from dba_synonyms where owner='SCOTT';

To create a user
We can create a user by using CREATE USER statement.

The person who is going to create users should have the
CREATE USER system privilege. A user name must be unique
in the database and cannot be the same as an existing role name.
SQL > create user raju identified by rajukb default tablespace users
quota 10M on users temporary tablespace temp;
User created.
After creating the user, the user cannot yet connect to the database.
We must grant the CREATE SESSION privilege to the user. This will
grant the user the minimum privilege to log into the database.
SQL > grant create session,create table to raju;
Grant succeeded.

To alter user quota on a tablespace we can use ALTER USER.


SQL > alter user scott quota unlimited on users;
User altered.
We can prevent a user from creating objects in any tablespace by
simply changing the quota on a tablespace to zero.
Objects which were already created will remain, but the user cannot
create new objects, and new space cannot be allocated to
existing objects.
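For example, revoking further space in the users tablespace:

```sql
-- Prevent scott from allocating any new space in the users tablespace.
alter user scott quota 0 on users;
```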
Finding out objects owned by a user
SQL > select owner,object_name from dba_objects where
owner='SCOTT';
We can execute the following to know in which tablespaces the
tables are created.
SQL > select table_name,tablespace_name from
dba_tables where owner='SCOTT';
TABLE_NAME TABLESPACE_NAME
------------------------------ ------------------------------
DEPT SYSTEM
EMP SYSTEM

BONUS SYSTEM
SALGRADE SYSTEM
Assigning quota to a user in other than default tablespace.
SQL > alter user scott quota 10M on users2;
User altered
Now we can query dba_ts_quotas to see tablespace quota for the
user scott.
By default all the objects created by a user go to the default tablespace.
While creating a table, the user can select the tablespace:
SQL > create table mytable(name varchar(30))
tablespace users2;
The above table created by user scott will be stored in users2
tablespace.

To see all users who are connected to the database


SQL > select username,sid,serial#,status from v$session
where username is not null;
USERNAME SID SERIAL# STATUS
------------------------------ ---------- ---------- --------
SCOTT 35 60 INACTIVE
SYS 48 12 ACTIVE
A user can connect to the database more than once. To see all sessions
for the user scott:
SQL > select sid,serial#,status from v$session
where username='SCOTT';
SID SERIAL# STATUS
---------- ---------- --------
34 9 INACTIVE
35 60 INACTIVE
SQL > select sid,serial#,status,server from v$session
where username='SCOTT';
SID SERIAL# STATUS SERVER
---------- ---------- -------- ---------

34 9 INACTIVE DEDICATED
35 60 INACTIVE DEDICATED
Here the user is connected through a dedicated server process.
But if we list process ids using an operating system command,
it simply shows that the processes running in the host computer
belong to the oracle database server. User scott is not shown.
ram $ ps -e | grep oracle
3702 ? 00:00:00 oracle
3704 ? 00:00:00 oracle
3706 ? 00:00:00 oracle
3708 ? 00:00:00 oracle
3710 ? 00:00:00 oracle
3712 ? 00:00:00 oracle
3714 ? 00:00:01 oracle
3716 ? 00:00:00 oracle
3718 ? 00:00:00 oracle
3720 ? 00:00:00 oracle
3722 ? 00:00:00 oracle
3726 ? 00:00:00 oracle
3734 ? 00:00:00 oracle
3736 ? 00:00:00 oracle
3739 ? 00:00:01 oracle
3903 ? 00:00:00 oracle
3970 ? 00:00:00 oracle
So for any user connected to the database, whatever process is
created for that user in the host computer is not known to the OS
by name; the process simply belongs to oracle.

To see the user process ids


SQL > select a.username,b.spid from v$session a,
v$process b where a.paddr = b.addr and a.username is not null;
USERNAME SPID
------------------------------ ------------

SYS 3739
SCOTT 3903
SCOTT 3970
We can also terminate a user session by using the following. To
terminate, first get the sid and serial# of the user session, then use
ALTER SYSTEM KILL SESSION.
SQL > select sid,serial# from v$session where username='SCOTT';
SID SERIAL#
---------- ----------
34 9
35 60
SQL > alter system kill session '34,9';
System altered.
Now if the user tries to query, the system will give the following
error message.
SQL> select * from tab;
select * from tab
*
ERROR at line 1:
ORA-00028: your session has been killed
Dropping users
Use the DROP USER statement to remove a database user and,
optionally, the user's objects. Oracle does not drop users whose
schemas contain objects unless we specify CASCADE.
SQL > drop user raju;
user dropped.
SQL > drop user raju cascade;

Setting resource limits to users by creating profiles


We can set limits on the various system resources available to each user.
This is very useful in multiuser systems where resources are
shared by all the users.
Types of system resources and limits

Session level – when a user connects to a database, a session is
created. This session consumes cpu time and memory on the host
machine where the oracle database runs.
Call level – whenever a SQL statement is executed, several steps are
taken to process the statement, and during this several calls are made
to the database. To prevent any one call from using the system
excessively, oracle allows us to set call-level resource limits.
CPU time – calls made to the database require some amount of cpu
time to process. To prevent uncontrolled use of the cpu, we can set
these resource limits.
Logical reads - I/O is one of the most expensive operations in a database
system. SQL statements that are I/O-intensive can monopolize memory
and disk use and cause other database operations to compete for these
resources.
Other resources - You can limit the number of concurrent sessions for
each user. Each user can create only up to a predefined number of
concurrent sessions. You can limit the idle time for a session. If the time
between calls in a session reaches the idle time limit, then the current
transaction is rolled back, the session is aborted, and the resources of the
session are returned to the system. The next call receives an error that
indicates that the user is no longer connected to the instance. This limit is
set as a number of elapsed minutes.
To set limits for users, the following parameter must be set.
SQL > show parameter resource_limit;
SQL > alter system set resource_limit = true scope=both;
To see all profiles
SQL > select * from dba_profiles;
PROFILE RESOURCE_NAME RESOURCE LIMIT
DEFAULT COMPOSITE_LIMIT KERNEL UNLIMITED
DEFAULT SESSIONS_PER_USER KERNEL UNLIMITED
DEFAULT CPU_PER_SESSION KERNEL UNLIMITED
DEFAULT CPU_PER_CALL KERNEL UNLIMITED
DEFAULT LOGICAL_READS_PER_SESSION KERNEL UNLIMITED
DEFAULT LOGICAL_READS_PER_CALL KERNEL UNLIMITED
DEFAULT IDLE_TIME KERNEL UNLIMITED
DEFAULT CONNECT_TIME KERNEL UNLIMITED

DEFAULT PRIVATE_SGA KERNEL UNLIMITED
DEFAULT FAILED_LOGIN_ATTEMPTS PASSWORD 10
DEFAULT PASSWORD_LIFE_TIME PASSWORD
UNLIMITED
DEFAULT PASSWORD_REUSE_TIME PASSWORD
UNLIMITED
DEFAULT PASSWORD_REUSE_MAX PASSWORD
UNLIMITED
DEFAULT PASSWORD_VERIFY_FUNCTION PASSWORD NULL
DEFAULT PASSWORD_LOCK_TIME PASSWORD
UNLIMITED
DEFAULT PASSWORD_GRACE_TIME PASSWORD UNLIMITED
Listing all users and their associated profile information.
SQL > select username,profile from dba_users;
USERNAME PROFILE
------------------------------ ------------------------------
SAMPLE DEFAULT
RAMESH DEFAULT
DBSNMP DEFAULT
SYSMAN DEFAULT
OUTLN DEFAULT
MGMT_VIEW DEFAULT
SCOTT DEFAULT
SYS DEFAULT
SYSTEM DEFAULT
TSMSYS DEFAULT
DIP DEFAULT
Resource limits can be assigned to users by using profiles. Now create a
profile
SQL > create profile example_profile limit
sessions_per_user 1
idle_time 1
failed_login_attempts 3;
Profile created.
SQL > select resource_name ,limit from dba_profiles


where profile = 'EXAMPLE_PROFILE';


RESOURCE_NAME LIMIT
-------------------------------- ----------------------------------------
COMPOSITE_LIMIT DEFAULT
SESSIONS_PER_USER 1
CPU_PER_SESSION DEFAULT
CPU_PER_CALL DEFAULT
LOGICAL_READS_PER_SESSION DEFAULT
LOGICAL_READS_PER_CALL DEFAULT
IDLE_TIME 1
CONNECT_TIME DEFAULT
PRIVATE_SGA DEFAULT
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LIFE_TIME DEFAULT
PASSWORD_REUSE_TIME DEFAULT
PASSWORD_REUSE_MAX DEFAULT
PASSWORD_VERIFY_FUNCTION DEFAULT
PASSWORD_LOCK_TIME DEFAULT
PASSWORD_GRACE_TIME DEFAULT
Now we can create a user and assign this profile.
SQL > create user test identified by test default tablespace users
temporary tablespace temp profile example_profile;
SQL > grant create session to test;
We can also assign this profile to an existing user:
SQL > alter user scott profile example_profile;
To alter a profile and its resource limits
SQL > alter profile example_profile limit
idle_time 2;
Dropping profiles
SQL > drop profile example_profile;
If the profile is already assigned to some users then we can drop it by
using the CASCADE option; those users are then assigned the DEFAULT
profile.
SQL > drop profile example_profile cascade;


Password management using profiles


Password aging, expiration, history etc can be managed here.
Account locking - when a user exceeds a specified number of failed login
attempts, Oracle automatically locks the user account. For this we can
use FAILED_LOGIN_ATTEMPTS. Similarly, PASSWORD_LOCK_TIME specifies the
amount of time, in days, that the account will remain locked.
Password aging and expiration - here we can specify a maximum lifetime
for a password. When this time passes and the password expires, the DBA
or the user must change the password.
This can be specified using PASSWORD_LIFE_TIME.
We can also specify a grace period for password expiration. This can be
done by using PASSWORD_GRACE_TIME.
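As a sketch, the password limits above can be combined in a single
profile and assigned to a user. The profile name pass_policy and the
values chosen here are illustrative, not from the original text; time
values are in days.
SQL > create profile pass_policy limit
failed_login_attempts 5
password_lock_time 1
password_life_time 60
password_grace_time 7;
SQL > alter user scott profile pass_policy;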
Password complexity can be verified by using a verification function.
The function should accept three parameters (username, password, old
password) and return a BOOLEAN value; TRUE indicates the password is
valid.
create or replace function pass_verification_function (
  username varchar2,
  password varchar2,
  old_password varchar2)
return boolean as
BEGIN
  if length(password) < 8 then
    return false;
  else
    return true;
  end if;
END pass_verification_function;
/
Compile the above function under the SYS schema and reference it with the


password_verify_function parameter of a profile.


SQL > alter profile myprofile1 limit
password_verify_function pass_verification_function;
Now this profile can be assigned to any user. Whenever the user wants to
change the password, the new password must be at least 8 characters
long; otherwise verification fails and the new password is not accepted.

Privileges
A privilege is a right to execute a particular type of SQL statement or
to access another user's objects.
There are two types of Privileges -
System Privileges :
create session, sysdba, sysoper etc.
Object Privileges :
select, insert, update etc.
The set of privileges is fixed.
We can grant these privileges to users depending on the requirement, in
two ways: grant a privilege directly to a user, or create a role with
the required privileges and then grant the role to users.
To see system privileges
SQL > select name from system_privilege_map;

SQL > select * from dba_sys_privs;


SQL > select privilege from dba_sys_privs
where grantee='SCOTT';
PRIVILEGE
----------------------------------------
UNLIMITED TABLESPACE
System Privileges
This is the right to perform an action on any schema objects.


For example create tablespace, create session etc.


We can use GRANT or REVOKE sql statements to grant or
revoke privileges to users.
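For example, a system privilege can be granted to and revoked from a
user (ramesh, a user appearing earlier in this document) as follows:
SQL > grant create table to ramesh;
SQL > revoke create table from ramesh;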
Object Privileges
This privilege is the right to perform an action on a specific
object.
SQL > grant select on scott.emp to ramesh;

SQL > select grantee,privilege,owner,table_name from


dba_tab_privs where owner='SCOTT';
Table privileges
Object privileges for tables enable table security.
View Privileges
View is a presentation of data from one or more tables.
View contains no actual data.
Data in a view can be updated,deleted or inserted and these
operations directly affect the tables on which view is created.
Procedure Privileges
Execute is the only object privilege for procedures.
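For example, assuming scott owns a procedure named calc_bonus (a
hypothetical name used only for illustration), execute can be granted
and revoked as:
SQL > grant execute on scott.calc_bonus to ramesh;
SQL > revoke execute on scott.calc_bonus from ramesh;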

Working with Roles


A role is a group of privileges that can be granted to users or other roles.
With the help of roles we can manage privileges in an easier way. In the
database each role name must be unique.
Roles reduce privilege administration and allow dynamic privilege
management.
We can grant or revoke roles to users by using the GRANT or REVOKE sql
statements.
Oracle database provides some predefined roles
SQL > select role from dba_roles;
ROLE
------------------------------


CONNECT
RESOURCE
DBA
SELECT_CATALOG_ROLE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
EXP_FULL_DATABASE
IMP_FULL_DATABASE
RECOVERY_CATALOG_OWNER
AQ_ADMINISTRATOR_ROLE
AQ_USER_ROLE
GLOBAL_AQ_USER_ROLE
SCHEDULER_ADMIN
HS_ADMIN_ROLE
OEM_ADVISOR
OEM_MONITOR
MGMT_USER
CONNECT - Includes only the following system privilege: CREATE
SESSION
RESOURCE - Includes the following system privileges: CREATE
CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR, CREATE
PROCEDURE, CREATE SEQUENCE, CREATE TABLE, CREATE
TRIGGER, CREATE TYPE
EXP_FULL_DATABASE - Provides the privileges required to perform
full and incremental database exports and includes: SELECT ANY
TABLE, BACKUP ANY TABLE, EXECUTE ANY PROCEDURE,
EXECUTE ANY TYPE, ADMINISTER RESOURCE MANAGER, and
INSERT, DELETE, and UPDATE on the tables SYS.INCVID,
SYS.INCFIL, and SYS.INCEXP. Also the following roles:
EXECUTE_CATALOG_ROLE and SELECT_CATALOG_ROLE.

SQL > create role myrole1 identified by myrole1;


SQL > grant create session to myrole1;
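The role itself is then granted to a user, and can later be revoked; for
example:
SQL > grant myrole1 to scott;
SQL > revoke myrole1 from scott;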


When a user is created, the default for active roles is set to ALL.
ALL means all the roles granted to the user are active.

User can see all the active roles by using


SQL > select * from session_roles;
ROLE
------------------------------
CONNECT
RESOURCE
A user can enable more roles by using the SET ROLE command.
SQL > set role myrole1;
And if the role has a password then
SQL > set role myrole1 identified by myrole1;
SQL > set role all;
SQL > select * from session_privs;
PRIVILEGE
----------------------------------------
CREATE SESSION
UNLIMITED TABLESPACE
CREATE TABLE
CREATE CLUSTER
CREATE SEQUENCE
CREATE PROCEDURE
CREATE TRIGGER
CREATE TYPE
CREATE OPERATOR
CREATE INDEXTYPE
The DBA can change the default with the ALTER USER command.
SQL > alter user scott default role myrole1;
SQL > select grantee,granted_role from dba_role_privs
where grantee='SCOTT';
GRANTEE GRANTED_ROLE
------------------------------ ------------------------------


SCOTT RESOURCE
SCOTT CONNECT
We can use MAX_ENABLED_ROLES parameter to set the
number of roles allowed to be enabled by a user at any time.
We can see privileges assigned to a role
SQL > select role,privilege from role_sys_privs
where role='RESOURCE';
ROLE PRIVILEGE
------------------------------ ----------------------------------------
RESOURCE CREATE SEQUENCE
RESOURCE CREATE TRIGGER
RESOURCE CREATE CLUSTER
RESOURCE CREATE PROCEDURE
RESOURCE CREATE TYPE
RESOURCE CREATE OPERATOR
RESOURCE CREATE TABLE
RESOURCE CREATE INDEXTYPE
Dropping roles
SQL > drop role myrole1;

Oracle Net Services


provides enterprise wide connectivity solutions in distributed,
heterogeneous computing environments. Oracle Net Services ease the
complexities of network configuration and management, maximize
performance, and improve network diagnostic capabilities.
Oracle Net
a component of Oracle Net Services, enables a network session from a
client application to an Oracle database server. Once a network session is
established, Oracle Net acts as the data courier for both the client
application and the database server. It is responsible for establishing and
maintaining the connection between the client application and database
server, as well as exchanging messages between them. Oracle Net is able
to perform these jobs because it is located on each computer in the
network.
Oracle Net is a software component that resides on both the client and the
database server. Oracle Net is layered on top of a network protocol: rules


that determine how applications access the network and how data is
subdivided into packets for transmission across the network. Oracle Net
communicates with the TCP/IP protocol to enable computer-level
connectivity and data transfer between the client and the database server.

Basic oracle net server side configuration


The Listener process
The one operation unique to the Oracle database server side is the act of
receiving the initial connection through an oracle net listener. The Oracle
Net listener, commonly known as the listener, brokers a client request,
handing off the request to the server. The listener is configured with a
protocol address. Clients configured with the same protocol address can
send connection requests to the listener. Once a connection is established,
the client and Oracle database server communicate directly with one
another.
Once a client request has reached the listener, the listener selects an
appropriate service handler to service the client's request and forwards the
client's request to it. The listener determines if a database service and its
service handlers are available through service registration. During
service registration, the PMON process (an instance background process)
provides the listener with information about the following:
1. Names of the database services provided by the database
2. Name of the instance associated with the services and its current and
maximum load
3. Service handlers (dispatchers and dedicated servers) available for the
instance, including their type, protocol addresses, and current and
maximum load


The listener configuration is stored in a configuration file named


listener.ora. Because all of the configuration parameters have default
values, it is possible to start and use a listener with no configuration. This
default listener has a name of LISTENER, supports no services upon
startup, and listens on the following TCP/IP protocol address
(ADDRESS=(PROTOCOL=tcp)(HOST=oraclesrv1.com)(PORT=1521))
Supported services, that is, the services to which the listener forwards
client requests, can be configured in the listener.ora file, or this
information can be dynamically registered with the listener. This
dynamic registration feature is called service registration.
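If the listener is started after the instance, registration can be
forced immediately, instead of waiting for PMON's next registration
attempt, with the following command:
SQL > alter system register;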

● configuring service registration

● To ensure service registration works properly, the


initialization parameter file should contain the following parameters:

1. SERVICE_NAMES for the database service name
2. INSTANCE_NAME for the instance name
For example:
SERVICE_NAMES=sales.us.acme.com
INSTANCE_NAME=sales

The value for the SERVICE_NAMES parameter defaults to the global


database name, a name comprising the DB_NAME and DB_DOMAIN
parameters in the initialization parameter file, entered during installation
or database creation. The value for the INSTANCE_NAME parameter
defaults to the SID entered during installation or database creation.

● Registering Information with the Default, Local


Listener

By default, the PMON process registers service information with its local
listener on the default local address of TCP/IP, port 1521. As long as the
listener configuration is synchronized with the database configuration,
PMON can register service information with a nondefault local listener or
a remote listener on another node. Synchronization is simply a matter of
specifying the protocol address of the listener in the listener.ora
file and the location of the listener in the initialization parameter file.


● Registering Information with a Nondefault Listener

If you want PMON to register with a local listener that does not use
TCP/IP, port 1521, configure the LOCAL_LISTENER parameter in the
initialization parameter file to locate the local listener.
For a shared server environment, you can alternatively use the
LISTENER attribute of the DISPATCHERS parameter in the
initialization parameter file to register the dispatchers with a nondefault
local listener. Because both the LOCAL_LISTENER parameter and the
LISTENER attribute enable PMON to register dispatcher information
with the listener, it is not necessary to specify both the parameter and the
attribute if the listener values are the same.
Set the LOCAL_LISTENER parameter as follows:
LOCAL_LISTENER=listener_alias

Set the LISTENER attribute as follows:


DISPATCHERS="(PROTOCOL=tcp)
(LISTENER=listener_alias)"

listener_alias is then resolved to the listener protocol addresses


through a naming method, such as a tnsnames.ora file on the
database server.
For example, if the listener is configured to listen on port 1421 rather than
port 1521, you can set the LOCAL_LISTENER parameter in the
initialization parameter file as follows:
LOCAL_LISTENER=listener1

Using the same listener example, you can set the LISTENER attribute as
follows:
DISPATCHERS="(PROTOCOL=tcp)(LISTENER=listener1)"

You can then resolve listener1 in the local tnsnames.ora as


follows:
listener1=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)


(PORT=1421)))

● Starting and Stopping the Listener

STOP Command

To stop the listener from the command line, enter:


lsnrctl STOP [listener_name]

where listener_name is the name of the listener defined in the


listener.ora file. It is not necessary to identify the listener if you are
using the default listener, named LISTENER.

START Command

To start the listener from the command line, enter:


lsnrctl START [listener_name]

where listener_name is the name of the listener defined in the


listener.ora file. It is not necessary to identify the listener if you are
using the default listener, named LISTENER.
In addition to starting the listener, the Listener Control utility verifies
connectivity to the listener.

● Monitoring Runtime Behavior

The STATUS and SERVICES commands provide information about the


listener. When entering these commands, follow the syntax as shown for
the STOP and START commands.

STATUS Command

The STATUS command provides basic status information about a


listener, including a summary of listener configuration settings, the
listening protocol addresses, and a summary of services registered with
the listener.


Table 12-2  Listener Control Utility STATUS Command

Output Section            Description
------------------------  ------------------------------------------------
STATUS of the LISTENER    Specifies the following:
                          ● Name of the listener
                          ● Version of the listener
                          ● Start time and up time
                          ● Tracing level
                          ● Logging and tracing configuration settings
                          ● listener.ora file being used
                          ● Whether a password is set in the
                            listener.ora file
                          ● Whether the listener can respond to queries
                            from an SNMP-based network management system
Listening Endpoints       Lists the protocol addresses the listener is
Summary                   configured to listen on
Services Summary          Displays a summary of the services registered
                          with the listener and the service handlers
                          allocated to each service
Service                   Identifies the registered service


Instance                  Specifies the name of the instance associated
                          with the service, along with its status and the
                          number of service handlers associated with the
                          service.
                          Status can be one of the following:
                          ● A READY status means that the instance can
                            accept connections.
                          ● A BLOCKED status means that the instance
                            cannot accept connections.
                          ● A READY/SECONDARY status means that this is a
                            secondary instance in an Oracle9i Real
                            Application Clusters primary/secondary
                            configuration and is ready to accept
                            connections.
                          ● An UNKNOWN status means that the instance is
                            registered statically in the listener.ora
                            file rather than dynamically with service
                            registration, so the status is not known.

SERVICES Command

The SERVICES command provides detailed information about the


services and instances registered and the service handlers allocated to
each instance.
Listener Control Utility SERVICES Command

Output Section            Description
------------------------  ------------------------------------------------
Service                   Identifies the registered service
Instance                  Specifies the name of the instance associated
                          with the service


The status field indicates whether the instance is able to accept
connections.
● A READY status means that the instance can accept connections.
● A BLOCKED status means that the instance cannot accept connections.
● A READY/SECONDARY status means that this is a secondary instance in an
Oracle9i Real Application Clusters primary/secondary configuration and
is ready to accept connections.
● An UNKNOWN status means that the instance is registered statically in
the listener.ora file rather than dynamically with service registration.
Therefore, the status is not known.
Handlers Identifies the name of the service handler. Dispatchers are
named D000 through D999. Dedicated servers have a
name of DEDICATED.
This section also identifies the following about the service
handler:
● established: The number of client connections
this service handler has established
● refused: The number of client connections it has
refused
● current: The number of client connections it is
handling, that is, its current load
● max: The maximum number of connections for the
service handler, that is, its maximum load
● state: The state of the handler:
- A READY state means that the service handler can
accept new connections.
- A BLOCKED state means that the service handler
cannot accept new connections.
Following this, additional information about the service
handler displays, such as whether the service handler is a
dispatcher, a local dedicated server, or a remote dedicated
server on another node.


To statically configure the listener:


● Access the Net Services Administration page in Oracle Enterprise
Manager.

● Select Listeners from the Administer list, and then select the
Oracle home that contains the location of the configuration files.
● Click Go.
The Listeners page appears.
● Select a listener, and then click Edit.
The Edit Listener page appears.
● Click the Static Database Registration tab, and then click Add.
The Add Database Service page appears. Enter the required
information in the fields.

● Click OK.

The following example shows a listener.ora file statically configured
for a database service called sales:
SID_LIST_listener=
(SID_LIST=
(SID_DESC=
(SID_NAME=sales)
(ORACLE_HOME=/u01/app/oracle/10g)))
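A SID_DESC entry can also carry a GLOBAL_DBNAME attribute to map a
network service name to the SID. The following sketch assumes the
service name sales.us.acme.com used elsewhere in this document; the
Oracle home path is the same illustrative one as above.
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=sales.us.acme.com)
      (SID_NAME=sales)
      (ORACLE_HOME=/u01/app/oracle/10g)))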

Bequeath Session
This enables clients to connect to a database without using network
listener. This protocol internally spawns a server process for each client
application. The bequeath protocol does not use a network listener and
automatically spawns a dedicated server. This is used for local
connections where an oracle database client application communicates
with an oracle database instance running on the same machine. This
works in only dedicated server mode.
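For example, a local bequeath connection needs only the ORACLE_SID
environment variable; no listener or connect string is involved. The SID
prod and the scott/tiger credentials here are illustrative.
$ export ORACLE_SID=prod
$ sqlplus scott/tiger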
Oracle net services client side configuration

Files tnsnames.ora and sqlnet.ora on windows machines.


# TNSNAMES.ORA Network Configuration File:


C:\oracle\ora92\NETWORK\ADMIN\tnsnames.ora

# Generated by Oracle configuration tools.


ORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = orasrv1.com)(PORT =
1521))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)
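With this entry in place, the client resolves the net service name ORCL
through tnsnames.ora. Connectivity to the listener can be checked with
the tnsping utility, and a session opened by appending the net service
name to the credentials (scott/tiger is illustrative):
C:\> tnsping orcl
C:\> sqlplus scott/tiger@orcl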

# SQLNET.ORA Network Configuration File:


C:\oracle\ora92\NETWORK\ADMIN\sqlnet.ora

# Generated by Oracle configuration tools.


SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES,
HOSTNAME, NIS)

Tnsnames.ora file under unix


SALES =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = linuxsrv1.com)(PORT
= 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = sales.linuxsrv1.com)
)


)
Configuration of the shared server
Oracle's shared server architecture increases the scalability of
applications and the number of clients that can be simultaneously
connected to the database. The shared server architecture also enables
existing applications to scale up without making any changes to the
application itself.
When using shared server, clients do not communicate directly with a
database server process (a process that handles a client's requests on
behalf of the database). Instead, client requests are routed to one or
more dispatchers. The dispatchers place the client requests on a
common queue. An idle shared server process from the shared pool of
server processes picks up and processes a request from the queue. This
means a small pool of server processes can serve a large number of
clients.
In the shared server model, a dispatcher can support multiple client
connections concurrently. In the dedicated server model, there is one
server process for each client. Each time a connection request is received,
a server process is started and dedicated to that connection until
completed. This introduces a processing delay.
Shared server is ideal in configurations with a large number of
connections because it reduces the server's memory requirements. Shared
server is well suited for both Internet and intranet environments.
We can query the V$SESSION view to find out whether a session has a
shared server or a dedicated server connection.
SQL > select username,server from v$session
where type='USER' and username is not null;
USERNAME SERVER
------------------------------ ---------
RAMESH DEDICATED
MADHU NONE
SYS DEDICATED


Dedicated Servers

The database is always enabled to allow dedicated server processes. But
to enable shared servers we have to configure them explicitly by setting
initialization parameters.
Shared Server Architecture

Initialization parameters for shared server.


The following parameters will control the shared server operations.


Shared_servers : This will specify the initial number of shared servers
to start and minimum number of shared servers to keep. This is the only
required parameter to configure shared server.
Max_shared_servers : This will specify the maximum number of
shared
servers that can run at any given point of time.
Dispatchers : This can be used to configure dispatcher processes in the
shared server configuration.
Max_dispatchers : This will specify maximum number of dispatcher
processes that can run simultaneously.
Circuits : This will specify the total number of virtual circuits which are
available for inbound and outbound network sessions.
In a shared server environment user processes connect to a dispatcher.
Dispatcher can support multiple client connections concurrently. Each
client connection is bound to a virtual circuit. This is a small piece
of shared memory used by the dispatcher for client database connection
requests and replies.
Dispatcher will place a virtual circuit in a common queue when a request
comes. An idle shared server process takes the virtual circuit from the
common queue and serve the request then relinquishes the virtual circuit
before taking the next virtual circuit from the common queue. This will
enable small number of server processes to serve a large number of
clients.
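The virtual circuits currently in use can be observed through the
V$CIRCUIT view; a sketch (column list as in Oracle 10g):
SQL > select circuit, dispatcher, server, status from v$circuit;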
Enabling shared server
This is enabled by simply setting the shared_servers initialization
parameter to a value greater than zero.
This parameter can be set at the time of starting the database or
dynamically by using ALTER SYSTEM SET statement.
Setting shared_servers equal to value zero will disable shared servers.
The max_shared_servers parameter specifies the maximum number of
shared servers that can be automatically created by PMON. It has no
default value. If no value is specified, then PMON starts as many shared
servers as is required by the load.
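For example, the shared server pool can be resized dynamically, and the
running shared server processes observed through the V$SHARED_SERVER
view (the value 5 here is illustrative):
SQL > alter system set shared_servers = 5;
SQL > select name, status from v$shared_server;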


Dispatchers
Dispatchers initialization parameter configures dispatcher processes in
the
shared server environment. At least one dispatcher process is required for
the shared server to work. If we do not specify this, and shared server is
enabled by using shared_servers then oracle database by default creates
one dispatcher for the TCP protocol.
Dispatcher initialization parameter attributes.
Address : This is used to specify the network protocol address of the
end point on which the dispatchers will listen.
Description : This is used to specify the network description of the end
point on which the dispatchers will listen including the network protocol
address.
Protocol : Specify the network protocol for which the dispatcher
generates a listening end point.
Dispatchers : This is used to specify the initial number of dispatchers to
start.
Connections : This will specify the maximum number of network
connections to allow for each dispatcher.
We can add the following line to initialization parameter file to set the
initial number of dispatchers.
dispatchers = "(protocol=tcp)(dispatchers=2)"
or
dispatchers = "(Address=(protocol=tcp)(host=192.168.100.10))
(dispatchers=2)"
Forcing the port used by dispatchers
To force the dispatchers to use a specific port as the listening
endpoint, add the port attribute as follows
dispatchers = "(address=(protocol=tcp)(port=6000))"
we can alter the number of dispatchers dynamically by using ALTER
SYSTEM statement.
for example
SQL > alter system set dispatchers = '(index=0)(dispatchers=3)';
we can shutdown a specific dispatcher process.


Each dispatcher is uniquely identified by a name. we can query


SQL > select name,network from v$dispatcher;
NAME NETWORK
---- ------------------------------------------------------------
D000 (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.10)(PORT=32789))
D001 (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.10)(PORT=32794))
we can use the following statement to shutdown a dispatcher.
SQL > alter system shutdown immediate 'D001';
The immediate key word stops the dispatcher from accepting new
connections and database immediately terminates all existing
connections through that dispatcher. After all sessions are cleaned up, the
dispatcher process shuts down.

Using Shared Server on Clients

If shared server is configured and a client connection request arrives


when no dispatchers are registered, the requests can be handled by a
dedicated server process (configured in the listener.ora file). If you
want a particular client always to use a dispatcher, configure
(server=shared) in the connect data portion of the connect
descriptor. For example:
sales=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)
(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=sales.us.acme.com)
(SERVER=shared)))
If a dispatcher is not available, the client connection request is rejected.

Overriding Shared Server on Clients

If the database is configured for shared server and a particular client


requires a dedicated server, you can configure the client to use a
dedicated server in one of the following ways:
● A net service name can be configured with a connect descriptor
that contains (server=dedicated) in the CONNECT_DATA
section. For example:
sales=
(DESCRIPTION=


(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)
(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=sales.us.acme.com)
(SERVER=dedicated)))

Oracle Database Cloning


In production, development, and test environments there is often a need
to transport an entire database from one machine to another. The
duplicated database can be used for development, testing etc. As a rule,
testing and development should not be done on your production database.
Or in some situations like:
● Relocating an oracle database to another machine
● Moving an oracle database to new storage media
● Renaming an oracle database
Cloning is a procedure to create a duplicate database (an exact copy) of
an oracle database, without performing a full export and import. These
methods are frequently used by DBAs to create development or testing
environments from production as quickly as possible.

There are many methods available for cloning an oracle database.

Using cold backup (offline)


This is a very simple method to clone a database. Here the production
database needs to be shut down. Then take a backup of the
database-related files (datafiles, control files, redo log files) using
operating system commands (cp), transfer them to the target machine, and
then clone.
Example :
Production server : 192.168.1.201
DBA os User : oracle
DB Name : prod

Cloning Server : 192.168.1.202 (Target machine)


DBA os User : raj
DB Name : test
First login to the production server using the oracle user. If you are
connecting from a Windows system you can use putty; from another Linux
machine you can use ssh.
[oracle@linux10 ~]$
connect to the oracle database as follows
[oracle@linux10 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Feb 1 20:53:03 2009


Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production


With the Partitioning, OLAP and Data Mining options

SQL>
Now we can identify database name
SQL> select name from v$database;

NAME
---------
PROD

SQL>
And get the locations of all datafiles, log files
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oracle/oradata/prod/system01.dbf
/oracle/oradata/prod/undotbs01.dbf
/oracle/oradata/prod/sysaux01.dbf
/oracle/oradata/prod/users01.dbf
/oracle/oradata/prod/example01.dbf
/oracle/oradata/prod/ts1.dbf

6 rows selected.

SQL>
SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/oracle/oradata/prod/redo01.log
/oracle/oradata/prod/redo03.log
/oracle/oradata/prod/redo02.log

SQL>
Create pfile from spfile.
SQL> create pfile from spfile;

File created.

SQL>
Now generate create control file sql statement
SQL> alter database backup controlfile to trace;


Database altered.

SQL>
One trace file is generated in the location given by parameter
user_dump_dest
So identify the location
SQL> show parameter user_dump_dest;

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
user_dump_dest string /oracle/admin/prod/udump
SQL>
So it is in the above location that we can find the latest trace file,
which contains two copies of the create control file statement: one with
NORESETLOGS and another with RESETLOGS.
Now shutdown the database
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release
10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
[oracle@linux10 ~]$
change to the directory which is given by the parameter user_dump_dest
[oracle@linux10 ~]$ cd /oracle/admin/prod/udump/
[oracle@linux10 udump]$
Identify the latest trace file as follows
[oracle@linux10 udump]$ ls -ltr
total 656
-rw-r----- 1 oracle dba 573 Jan 22 01:00 prod_ora_7009.trc
-rw-r----- 1 oracle dba 889 Jan 22 01:01 prod_ora_7044.trc
-rw-r----- 1 oracle dba 573 Jan 22 01:01 prod_ora_7045.trc
-rw-r----- 1 oracle dba 2588 Jan 22 01:01 prod_ora_7070.trc
-rw-r----- 1 oracle dba 794 Jan 22 01:01 prod_ora_7084.trc
.
.
.
-rw-r----- 1 oracle dba 630 Feb 1 20:52 prod_ora_4164.trc
-rw-r----- 1 oracle dba 696 Feb 1 20:52 prod_ora_4165.trc
-rw-r----- 1 oracle dba 7879 Feb 1 21:09 prod_ora_4194.trc


Now open the last file


[oracle@linux10 udump]$ vi prod_ora_4194.trc
Identify the CREATE CONTROLFILE statement with RESETLOGS and copy all of
its lines to a file, here named con.sql (any name will do).
CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 50
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/oracle/oradata/prod/redo01.log' SIZE 50M,
  GROUP 2 '/oracle/oradata/prod/redo02.log' SIZE 50M,
  GROUP 3 '/oracle/oradata/prod/redo03.log' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  '/oracle/oradata/prod/system01.dbf',
  '/oracle/oradata/prod/undotbs01.dbf',
  '/oracle/oradata/prod/sysaux01.dbf',
  '/oracle/oradata/prod/users01.dbf',
  '/oracle/oradata/prod/example01.dbf',
  '/oracle/oradata/prod/ts1.dbf'
CHARACTER SET WE8ISO8859P1
;
All the above lines are copied to con.sql file.
Now log in to the target server, where the clone will be created, using
the oracle DBA OS account.
[raj@server1 ~]$
Then create the directory structure to keep all the datafiles, control
files, and log files, and make directories for the trace files. Assume
that we want to keep all the physical files in the following directory
and that ORACLE_SID is testdb.
[raj@server1 ~]$ mkdir /oracle/testdb
[raj@server1 ~]$ mkdir /oracle/testdb/udump
[raj@server1 ~]$ mkdir /oracle/testdb/cdump
[raj@server1 ~]$ mkdir /oracle/testdb/bdump
[raj@server1 ~]$ mkdir /oracle/testdb/adump
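The same directory tree can be created in a single command with mkdir -p, which also creates any missing parent directories (a small sketch using the paths from this walkthrough; the base path is configurable):

```shell
#!/bin/bash
# Base directory for the clone's files (/oracle/testdb in this walkthrough).
BASE="${BASE:-/oracle/testdb}"
# -p creates missing parent directories and does not fail if they exist.
mkdir -p "$BASE/udump" "$BASE/cdump" "$BASE/bdump" "$BASE/adump"
```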

Set ORACLE_SID
[raj@server1 ~]$ export ORACLE_SID=testdb


[raj@server1 ~]$

Then, from the source database machine, copy all the datafiles, redo
logs, the pfile, and con.sql (which contains the CREATE CONTROLFILE
statement) to the target machine's database file directory, i.e.
/oracle/testdb.
[oracle@linux10 ~]$ cd /oracle/oradata/prod/
[oracle@linux10 prod]$ scp *.dbf *.log raj@192.168.1.202:/oracle/testdb
raj@192.168.1.202's password:
example01.dbf 100% 100MB 9.1MB/s 00:11
sysaux01.dbf 100% 240MB 10.0MB/s 00:24
system01.dbf 100% 480MB 9.8MB/s 00:49
temp01.dbf 100% 20MB 10.0MB/s 00:02
ts1.dbf 100% 100MB 9.1MB/s 00:11
undotbs01.dbf 100% 25MB 12.5MB/s 00:02
users01.dbf 100% 5128KB 5.0MB/s 00:01
redo01.log 100% 50MB 10.0MB/s 00:05
redo02.log 100% 50MB 10.0MB/s 00:05
redo03.log 100% 50MB 10.0MB/s 00:05
[oracle@linux10 prod]$
Then copy the pfile and the con.sql file.
[oracle@linux10 prod]$ scp $ORACLE_HOME/dbs/initprod.ora
raj@192.168.1.202:/oracle/testdb
raj@192.168.1.202's password:
initprod.ora 100% 1033 1.0KB/s 00:00
[oracle@linux10 prod]$ scp
$ORACLE_BASE/admin/prod/udump/con.sql
raj@192.168.1.202:/oracle/testdb
raj@192.168.1.202's password:
con.sql 100% 621 0.6KB/s 00:00
Now we can start the source database prod.
Then, on the target machine, perform the following steps to clone.
[raj@server1 ~]$ cd /oracle/testdb/
[raj@server1 testdb]$ ls
[raj@server1 testdb]$ ls
adump example01.dbf redo03.log ts1.dbf
bdump initprod.ora sysaux01.dbf udump
cdump redo01.log system01.dbf undotbs01.dbf
con.sql redo02.log temp01.dbf users01.dbf
[raj@server1 testdb]$
As listed above, all the files were successfully transferred from the
source database machine.


Then modify the initialization parameter file as per the requirement,
and also modify the con.sql file used to create the control file.
The current con.sql is as follows.
CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 50
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/oracle/oradata/prod/redo01.log' SIZE 50M,
  GROUP 2 '/oracle/oradata/prod/redo02.log' SIZE 50M,
  GROUP 3 '/oracle/oradata/prod/redo03.log' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  '/oracle/oradata/prod/system01.dbf',
  '/oracle/oradata/prod/undotbs01.dbf',
  '/oracle/oradata/prod/sysaux01.dbf',
  '/oracle/oradata/prod/users01.dbf',
  '/oracle/oradata/prod/example01.dbf',
  '/oracle/oradata/prod/ts1.dbf'
CHARACTER SET WE8ISO8859P1
;

Make the changes as per the requirement as follows.
In the first line, change the database name inside the double quotes to
TESTDB and replace REUSE with SET.
Then change the logfile locations from /oracle/oradata/prod to
/oracle/testdb, similarly change the datafile locations from
/oracle/oradata/prod to /oracle/testdb, and save the file.
After these changes the modified con.sql file will be as follows.
CREATE CONTROLFILE SET DATABASE "TESTDB" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 50
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/oracle/testdb/redo01.log' SIZE 50M,
  GROUP 2 '/oracle/testdb/redo02.log' SIZE 50M,
  GROUP 3 '/oracle/testdb/redo03.log' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  '/oracle/testdb/system01.dbf',
  '/oracle/testdb/undotbs01.dbf',
  '/oracle/testdb/sysaux01.dbf',
  '/oracle/testdb/users01.dbf',
  '/oracle/testdb/example01.dbf',
  '/oracle/testdb/ts1.dbf'
CHARACTER SET WE8ISO8859P1
;
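The hand edits described above can also be scripted. A sketch using sed, assuming the file and path names from this example (the .bak suffix keeps a backup of the original):

```shell
#!/bin/bash
# Rewrite con.sql for the clone:
#  - REUSE DATABASE "PROD" becomes SET DATABASE "TESTDB"
#  - every /oracle/oradata/prod path becomes /oracle/testdb
sed -i.bak \
  -e 's/REUSE DATABASE "PROD"/SET DATABASE "TESTDB"/' \
  -e 's|/oracle/oradata/prod|/oracle/testdb|g' \
  con.sql
```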
Similarly modify the parameter values in the pfile. It will be as follows.

db_cache_size=905969664
java_pool_size=16777216
large_pool_size=16777216
shared_pool_size=285212672
streams_pool_size=0
*.audit_file_dest='/oracle/testdb/adump'
*.background_dump_dest='/oracle/testdb/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/oracle/testdb/control01.ctl'
*.core_dump_dest='/oracle/testdb/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_files=31
*.db_name='testdb'
*.db_recovery_file_dest='/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=testdbXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/oracle'
*.open_cursors=300
*.pga_aggregate_target=413138944
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1239416832
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/testdb/udump'
Save and exit, then change the name of the pfile from initprod.ora to
inittestdb.ora.
[raj@server1 testdb]$ mv initprod.ora inittestdb.ora
[raj@server1 testdb]$
If you want, you can copy it to the default location ($ORACLE_HOME/dbs).
Now start the instance in nomount state.
[raj@server1 testdb]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Feb 2 22:59:42 2010


Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile=/oracle/testdb/inittestdb.ora


ORACLE instance started.

Total System Global Area 1241513984 bytes


Fixed Size 1219136 bytes
Variable Size 318768576 bytes
Database Buffers 905969664 bytes
Redo Buffers 15556608 bytes
SQL>
Now create the control file by executing the con.sql script.
SQL> @con.sql;

Control file created.


SQL>
Then open the database using resetlogs.
SQL> alter database open resetlogs;

Database altered.
SQL>
SQL> select name from v$database;

NAME
---------
TESTDB

SQL> select instance_name,status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
testdb OPEN

SQL>


Now check the temporary files.


SQL> select tablespace_name,file_name from dba_temp_files;

no rows selected
SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
EXAMPLE
TS1
7 rows selected.
SQL>
Add a new tempfile to the TEMP tablespace or create a new temporary
tablespace.
SQL> alter tablespace temp add tempfile
2 '/oracle/testdb/temp01.dbf' size 100m reuse;

Tablespace altered.

SQL> select tablespace_name,file_name from dba_temp_files;


TABLESPACE_NAME FILE_NAME
------------------------------ ------------------------------
TEMP /oracle/testdb/temp01.dbf
Then add this instance name (SID) to /etc/oratab file
[raj@server1 testdb]$ vi /etc/oratab
testdb:/oracle/10.2:N
then save and exit.
Finally, configure the listener, configure Enterprise Manager, and
optionally create an SPFILE, a password file, etc.
This completes the procedure for cloning a database by transferring its
physical files.
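As a pointer for the listener step just mentioned, a minimal listener.ora entry for the clone might look like this (the host name, port, and ORACLE_HOME shown are illustrative):

```
# Minimal listener.ora sketch for the clone; host, port, and
# ORACLE_HOME are illustrative
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1521))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = testdb)
      (ORACLE_HOME = /oracle/10.2)
    )
  )
```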

Using Hot backup (online)


In this method an online backup of the database is taken, so the
production database must be running in archive log mode. For this method
we can use RMAN.
A nice feature of RMAN is the ability to duplicate, or clone, a database
from a previous backup. It is possible to create a duplicate database on
a remote server with the same file structure, on a remote server with a
different file structure, or on the local server with a different file
structure.

RMAN has several advantages over OS file copies:
● It is much faster than OS file copies.
● It deals with the backup process at the block level and waits for each
block to become consistent before backing it up, so there are no
"cracked blocks" as with file copying.
● You can easily parallelize backup and, much more importantly,
recovery.
● There is no need to put tablespaces in backup mode and hammer the
redo log system.
● Making standby databases and cloning is far simpler under RMAN
because it is designed to do that for you.

Example
Production server : 192.168.1.201
DBA os User : oracle
DB Name : prod

Cloning Server : 192.168.1.202 (Target machine)


DBA os User : raj
DB Name : test
First log in to the production server using the oracle user. If you are
connecting from a Windows system you can use PuTTY; from another Linux
machine you can use ssh.
The source database prod need not be shut down during the cloning
process, but it must be running in archive log mode.
To check the archive log mode, first log in to the source database host
operating system, then set the SID.
[oracle@linux10 ~]$ export ORACLE_SID=prod
Log in to the database.
[oracle@linux10 ~]$ sqlplus / as sysdba
SQL> select log_mode from v$database;

LOG_MODE
------------


ARCHIVELOG

SQL> archive log list;


Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle
Oldest online log sequence 4
Next log sequence to archive 6
Current log sequence 6
SQL>
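If the LOG_MODE query had instead returned NOARCHIVELOG, archiving would have to be enabled first. The standard sequence, run as SYSDBA, is:

```sql
-- Enable archive log mode; the database must be cleanly restarted
-- and mounted (not open) before ALTER DATABASE ARCHIVELOG.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```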
Then create pfile from spfile.
SQL> create pfile from spfile;

File created.

SQL>
Then perform the backup using RMAN.
[oracle@linux10 ~]$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Tue Feb 2


09:39:25 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: PROD (DBID=120754003)


RMAN> backup database plus archivelog;

Starting backup at 02-FEB-10


current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=145 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=143 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=5 recid=1 stamp=709861860
.
.
channel ORA_DISK_2: starting piece 1 at 02-FEB-10
channel ORA_DISK_2: finished piece 1 at 02-FEB-10
piece handle=/oracle/backup/prod_df709897253_s17_p1
tag=TAG20100202T094052 comment=NONE


channel ORA_DISK_2: backup set complete, elapsed time: 00:00:02


channel ORA_DISK_1: finished piece 1 at 02-FEB-10
piece handle=/oracle/backup/prod_df709897253_s16_p1
tag=TAG20100202T094052 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 02-FEB-10

Starting backup at 02-FEB-10


using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/oracle/oradata/prod/system01.dbf
input datafile fno=00002 name=/oracle/oradata/prod/undotbs01.dbf
input datafile fno=00004 name=/oracle/oradata/prod/users01.dbf
.
.
input datafile fno=00006 name=/oracle/oradata/prod/ts1.dbf
channel ORA_DISK_2: starting piece 1 at 02-FEB-10
channel ORA_DISK_1: finished piece 1 at 02-FEB-10
piece handle=/oracle/backup/prod_df709897256_s18_p1
tag=TAG20100202T094056 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current control file in backupset
channel ORA_DISK_1: starting piece 1 at 02-FEB-10
channel ORA_DISK_2: finished piece 1 at 02-FEB-10
piece handle=/oracle/backup/prod_df709897256_s19_p1
tag=TAG20100202T094056 comment=NONE
.
.
channel ORA_DISK_2: starting piece 1 at 02-FEB-10
channel ORA_DISK_1: finished piece 1 at 02-FEB-10
piece handle=/oracle/backup/prod_df709897271_s20_p1
tag=TAG20100202T094056 comment=NONE
piece handle=/oracle/backup/prod_df709897271_s21_p1
tag=TAG20100202T094056 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 02-FEB-10

Starting backup at 02-FEB-10


current log archived


using channel ORA_DISK_1


using channel ORA_DISK_2
channel ORA_DISK_1: starting archive log backupset
.
.
piece handle=/oracle/backup/prod_df709897272_s22_p1
tag=TAG20100202T094112 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 02-FEB-10

RMAN>
From the above command's output we see that the backup pieces are stored
in the /oracle/backup directory on the source database machine.
Copy all backup pieces to the machine where cloning is to be done, into
the same directory path as on the source database; here it is
/oracle/backup.
Log in to the destination machine and create all the necessary
directories.
[raj@server1 ~]$ mkdir /oracle/backup (same backup-piece location as on
the source machine)
[raj@server1 ~]$ mkdir /oracle/rclone (to keep all database physical files)
[raj@server1 ~]$ mkdir /oracle/rclone/bdump
[raj@server1 ~]$ mkdir /oracle/rclone/adump
[raj@server1 ~]$ mkdir /oracle/rclone/cdump
[raj@server1 ~]$ mkdir /oracle/rclone/udump
[raj@server1 ~]$ export ORACLE_SID=rclone (select an SID)
Then copy the pfile and the backup pieces from the source to the
destination machine.
[oracle@linux10 backup]$ ls
prod_df709897253_s16_p1 prod_df709897271_s20_p1
prod_df709897253_s17_p1 prod_df709897271_s21_p1
prod_df709897256_s18_p1 prod_df709897272_s22_p1
prod_df709897256_s19_p1
[oracle@linux10 backup]$ scp * raj@192.168.1.202:/oracle/backup
raj@192.168.1.202's password:
prod_df709897253_s16_p1 100% 43MB 8.5MB/s 00:05
prod_df709897253_s17_p1 100% 4215KB 4.1MB/s 00:00
prod_df709897256_s18_p1 100% 381MB 9.8MB/s 00:39
prod_df709897256_s19_p1 100% 201MB 9.6MB/s 00:21
prod_df709897271_s20_p1 100% 6368KB 6.2MB/s 00:01
prod_df709897271_s21_p1 100% 96KB 96.0KB/s 00:00
prod_df709897272_s22_p1 100% 4608 4.5KB/s 00:00
[oracle@linux10 backup]$ cd
[oracle@linux10 ~]$ scp $ORACLE_HOME/dbs/initprod.ora
raj@192.168.1.202:/oracle/rclone


raj@192.168.1.202's password:
initprod.ora 100% 1033 1.0KB/s 00:00
[oracle@linux10 ~]$
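A corrupted backup piece only surfaces later, when the duplicate fails, so it is worth comparing checksums after the copy. A generic sketch (run the same command on both machines and diff the listings; the path is the backup directory used here):

```shell
#!/bin/bash
# Print one md5 checksum per backup piece; comparing the source and
# destination listings confirms the transfer was clean.
cd /oracle/backup
md5sum prod_df* | sort
```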
Then, on the destination machine, modify the pfile parameters as per the
requirement, add two additional parameters to change the locations of
the datafiles and logfiles, and rename the file to init$ORACLE_SID.ora.
[raj@server1 rclone]$ ls
initprod.ora
[raj@server1 rclone]$ mv initprod.ora
$ORACLE_HOME/dbs/initrclone.ora
[raj@server1 rclone]$
Open the pfile, make the changes, and save it; finally it looks as
follows.
db_cache_size=905969664
java_pool_size=16777216
large_pool_size=16777216
shared_pool_size=285212672
streams_pool_size=0
*.audit_file_dest='/oracle/rclone/adump'
*.background_dump_dest='/oracle/rclone/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/oracle/rclone/control01.ctl'
*.core_dump_dest='/oracle/rclone/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_files=31
*.db_name='rclone'
*.db_recovery_file_dest='/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.job_queue_processes=10
*.log_archive_dest_1='location=/oracle'
*.open_cursors=300
*.pga_aggregate_target=413138944
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1239416832
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/rclone/udump'
db_file_name_convert=(/oracle/oradata/prod,/oracle/rclone)
log_file_name_convert=(/oracle/oradata/prod,/oracle/rclone)
The last two parameters make RMAN automatically convert the file names
to the new location.
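Conceptually, each of these convert pairs is just a prefix substitution applied to every file name RMAN restores; the effect can be sketched in shell as:

```shell
#!/bin/sh
# Sketch of what db_file_name_convert does: replace the source path
# prefix with the destination prefix in a restored file name.
convert() {
  echo "$1" | sed 's|^/oracle/oradata/prod|/oracle/rclone|'
}
convert /oracle/oradata/prod/system01.dbf   # -> /oracle/rclone/system01.dbf
```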


Configure a local net service name (connect string) on the destination
machine to connect to the production database.
[raj@server1 ]$ netca

Select "Local net service name configuration" and click the Next button.


Select add and click the next button


Type the service name of the production database (here it is prod) and
then click the Next button.


Select TCP then click the next Button


Type either the IP address or the hostname of the production database's
machine, then select the listener's port number and click the Next
button.


Select "do not test" and click the Next button.



Enter the local net service name (the connect string; this can be any
name), then click the Next button.


Select no and click the next button


Simply click the next Button


Finally click the Finish Button


Now test the connection to the production database (also referred to
here as the target).
[raj@server1 ~]$ sqlplus sys@prod as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Feb 2 23:57:16 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL>
Now, on the destination machine, start up the instance in NOMOUNT state.
This is also called the auxiliary instance.
[raj@server1 ~]$ export ORACLE_SID=rclone
[raj@server1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Feb 3 00:01:21 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount


ORACLE instance started.

Total System Global Area 1241513984 bytes


Fixed Size 1219136 bytes
Variable Size 318768576 bytes
Database Buffers 905969664 bytes
Redo Buffers 15556608 bytes
SQL>exit
Then use RMAN to connect to the target database (i.e. the source) and
the auxiliary instance.
[raj@server1 ~]$ rman target sys@prod auxiliary /

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Feb 3


00:03:13 2010


Copyright (c) 1982, 2005, Oracle. All rights reserved.

target database Password:


connected to target database: PROD (DBID=120754003)
connected to auxiliary database: RCLONE (not mounted)

RMAN>
Then execute the DUPLICATE DATABASE command from the RMAN prompt to
clone.
RMAN> duplicate target database to 'rclone';
Starting Duplicate Db at 03-FEB-10
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2

contents of Memory Script:


{
set until scn 566363;
set newname for datafile 1 to
"/oracle/rclone/system01.dbf";
set newname for datafile 2 to
.
.
restoring datafile 00001 to /oracle/rclone/system01.dbf
restoring datafile 00004 to /oracle/rclone/users01.dbf
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from
backup set
restoring datafile 00003 to /oracle/rclone/sysaux01.dbf
restoring datafile 00005 to /oracle/rclone/example01.dbf
restoring datafile 00006 to /oracle/rclone/ts1.dbf
.
.
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 03-FEB-10
RMAN >
Then add the instance entry to the /etc/oratab file. Similarly, create a
password file, configure the listener, configure EM, etc.
Oracle Database cloning using DBCA (Database Configuration
Assistant)


Templates are used in DBCA to create new databases and clone existing
databases. The information in templates includes database options,
initialization parameters, and storage attributes (for datafiles, tablespaces,
control files, and online redo logs).
Templates can be used just like scripts, but they are more powerful than
scripts because you have the option of cloning a database. Cloning saves
time by copying a seed database's files to the correct locations.
Templates are stored in the following directory:
ORACLE_HOME/assistants/dbca/templates

● Advantages of Using Templates

Using templates has the following advantages:


● Time saving. If you use a template you do not have to define the
database.
● Easy Duplication. By creating a template containing your database
settings, you can easily create a duplicate database without
specifying parameters twice.
● Easy editing. You can quickly change database options from the
template settings.
● Easy sharing. Templates can be copied from one machine to
another.

● Types of Templates

Templates are divided into the following types:


1. Seed templates
2. Non-seed templates
Type: Seed; file extension: .dbc; includes datafiles: yes
This type of template contains both the structure and the physical
datafiles of an existing (seed) database. Your database starts as a copy of
the seed database, and requires only the following changes:
1. Name of the database
2. Destination of the datafiles
3. Number of control files
4. Number of redo log groups
5. Initialization parameters


Other changes can be made after database creation using custom scripts
that can be invoked by DBCA
The datafiles and online redo logs for the seed database are stored in a
compressed format in a file with a .dfb extension. The corresponding
.dfb file's location is stored in the .dbc file.
Type: Non-seed; file extension: .dbt; includes datafiles: no
This type of template is used to create a new database from scratch. It
contains the characteristics of the database to be created. Non-seed
templates are more flexible than their seed counterparts because all
datafiles and online redo logs are created to your specification, and
names, sizes, and other attributes can be changed as required.

● Creating Templates Using DBCA

The Template Management window provides you with the option of


creating or deleting a template. The DBCA saves templates as XML files.
To create a database template, select one of the following options:
● From an existing template
Using an existing template, you can create a new template based on
the pre-defined template settings. You can add or change any
template settings such as initialization parameters, storage
parameters, or whether to use custom scripts.
● From an existing database (structure only)
You can create a new template that contains structural information
from an existing database, including database options, tablespaces,
datafiles, and initialization parameters. User-defined schema and
their data will not be part of the created template. The source
database can be either local or remote. Choose this option when
you want the new database to be structurally similar to the source
database, but not contain the same data.
● From an existing database (structure and data)
You can create a new template that has both the structural
information and physical datafiles of an existing database.
Databases created using such a template are identical to the source
database. User-defined schema and their data will be part of the
created template. The source database must be local. Choose the
option when you want to create an exact replica of the source
database.
When creating templates from existing databases, you can choose to
translate file paths into the Optimal Flexible Architecture (OFA) form
or to maintain the existing file paths. Using OFA is recommended if the
machine on which you plan to create the database has a different
directory structure. Standard file paths can be used if the target
machine has a similar directory structure.
Example
On the source system, log in using the Oracle DBA's OS account and start
DBCA by issuing the command dbca.
[rajukb@linux10 ~]$ dbca


On the "Welcome" screen click the "Next" button


On the "Operations" screen select the "Manage Templates" option and
click the "Next" button

On the "Template Management" screen select the "Create a database
template" option, choose the "From an existing database (structure as
well as data)" sub-option, and then click the "Next" button

On the "Source database" screen select the database instance and click
the "Next" button

On the "Template properties" screen enter a suitable name and
description for the template, confirm the location for the template
files, and click the "Next" button

On the "Location of database related files" screen choose either to
maintain the file locations or to convert to the OFA structure
(recommended), and click the "Finish" button.


On the "Confirmation" screen click the "OK" button


Wait while the Database Configuration Assistant progress screen gathers
information about the source database, backs up the database, and
creates the template.


If there is no other operation to perform, click the No button.


Now we have a template created and we can use it to create our new
database. The template location will be
$ORACLE_HOME/assistants/dbca/templates.
Here you can find the files with the name we selected, i.e. cloneprod:

[rajukb@linux10 ~]$ ls
$ORACLE_HOME/assistants/dbca/templates/cloneprod*
/oracle/10g/assistants/dbca/templates/cloneprod.ctl
/oracle/10g/assistants/dbca/templates/cloneprod.dbc
/oracle/10g/assistants/dbca/templates/cloneprod.dfb
[rajukb@linux10 ~]$

Now we can clone the database either on the same machine or on a
different (target) machine. If we want to clone on a different machine,
the Oracle RDBMS software must be installed there. Transfer the above
files to the target machine using the DBA's OS account on the target
machine.

[rajukb@linux10 ~]$ cd $ORACLE_HOME/assistants/dbca/templates


[rajukb@linux10 templates]$ scp cloneprod*
raj@192.168.1.202:/oracle/10.2/assistants/dbca/templates
raj@192.168.1.202's password:
cloneprod.ctl 100% 6224KB 6.1MB/s 00:01
cloneprod.dbc 100% 9478 9.3KB/s 00:00
cloneprod.dfb 100% 108MB 9.8MB/s 00:11
[rajukb@linux10 templates]$

The templates are transferred to the target system's
$ORACLE_HOME/assistants/dbca/templates directory.
Then log in to the target system using the oracle DBA OS user account
and execute dbca on the target system.
[rajukb@linux10 ~]$ ssh -X -l raj 192.168.1.202
raj@192.168.1.202's password:
Last login: Tue Feb 2 00:28:17 2009 from 192.168.1.201
[raj@server1 ~]$ dbca


On the "Welcome" screen click the "Next" button


Select "Create a Database" option and click "Next"


In "Select a template from the following list to create a database",
select the template name which we transferred to the target system and
click "Next".


Provide the new Service Name for the new database. The SID will
automatically be set to the service name entered. Click "Next"


Leave "Configure the Database with Enterprise Manager" checked and "Use
Database Control for Database Management" selected. Click "Next".


Provide the sys password and click "Next"


Leave the "File System" option checked unless you want to use ASM or raw
devices for your new database.


Select the Database File location and click the next button

Leave the default values for the Flash Recovery Area as they are and
click "Next".

Leave "No scripts to run" checked and click "Next".


You can keep the default values for Memory and Sizing here, or change
them as per your needs, then click "Next".

Now we are at the final screen; click the Next button.


Click the Finish button.


Click the OK button and confirm; within a few minutes the database
should be up and running.



Note that all user accounts except the system accounts are locked and
expired, so we need to unlock them to allow users to connect to the new
database. Set passwords for the users and then exit.


Automatic Storage Management (ASM)


This is a new feature introduced in Oracle 10g to simplify the storage
of Oracle datafiles, controlfiles, and logfiles.
Overview of ASM
ASM simplifies the administration of Oracle database-related physical
files by allowing the administrator to reference disk groups rather than
individual disks and files, which are managed by ASM.
The ASM functionality is an extension of the Oracle Managed Files (OMF)
functionality that also includes striping and mirroring to provide
balanced and secure storage.
The ASM functionality is controlled by an ASM instance. This is not a
full database instance, just the memory structures, and as such it is
very small and lightweight; it has its own processes and pfile or
spfile. This special instance does not have any data files, and there is
only one ASM instance per server, which manages all ASM files for every
database on that server. The instance looks after the disk groups and
allows access to the ASM files. Databases access the files directly but
use the ASM instance to locate them. If the ASM instance is shut down,
the databases will either be automatically shut down or crash.
The main components of ASM are disk groups, each of which comprises
several physical disks that are controlled as a single unit. The
physical disks are known as ASM disks, while the files that reside on
the disks are known as ASM files. The locations and names of the files
are controlled by ASM, but user-friendly aliases and directory
structures can be defined for ease of reference.
The level of redundancy and the granularity of the striping can be
controlled using templates. Default templates are provided for each file
type stored by ASM, but additional templates can be defined as needed.
Failure groups are defined within a disk group to support the required
level of redundancy. For two-way mirroring you would expect a disk
group to contain two failure groups so individual files are written to two
locations.
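For example, two-way mirroring corresponds to NORMAL redundancy with two failure groups. A sketch of the SQL (the disk group name and disk paths are hypothetical):

```sql
-- NORMAL redundancy = two-way mirroring: each extent is written to
-- disks in two different failure groups.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  FAILGROUP fg2 DISK '/dev/raw/raw3', '/dev/raw/raw4';
```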
In summary ASM provides the following functionality:
● Manages groups of disks, called disk groups.
● Prevents fragmentation of disks, so you don't need to manually
relocate data to tune I/O performance
● Adding disks is straightforward - ASM automatically performs
online disk reorganization when you add or remove storage


● Manages disk redundancy within a disk group.


● Provides near-optimal I/O balancing without any manual tuning.
● Enables management of database objects without specifying mount
points and filenames.
● Supports large files.
● ASM and non-ASM Oracle files can coexist
ASM Processes
A number of new processes are started when using ASM; both the ASM
instance and the database instance start new processes.
ASM instance:
RBAL (rebalance master): coordinates the rebalancing when a new disk is
added or removed.
ARB1-ARB9 (rebalance): actually do the work requested by the RBAL
process (up to 9 of these).
Database instance:
RBAL: opens and closes the ASM disks.
ASMB: connects to the ASM instance via a session and handles the
communication between ASM and the RDBMS; requests include file creation,
deletion, and resizing, as well as various statistics and status
messages.
ASM registers its name and disks with the RDBMS via the cluster
synchronization service (CSS). This is why the Oracle cluster services
must be running, even if the node and instance are not clustered. The
ASM instance must be in MOUNT mode for an RDBMS to use it, and only the
instance type is required in its parameter file.
Initialization parameters used to create an ASM instance
INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance
type. The default is RDBMS.
DB_UNIQUE_NAME - Specifies a globally unique name for the database.
This defaults to +ASM but must be altered if you intend to run multiple
ASM instances.
ASM_POWER_LIMIT -The maximum power for a rebalancing operation
on an ASM instance. The valid values range from 1 to 11, with 1 being
the default. The higher the limit the more resources are allocated resulting
in faster rebalancing operations. This value is also used as the default
when the POWER clause is omitted from a rebalance operation.
ASM_DISKGROUPS - The list of disk groups that should be mounted by

167
9866465379

an ASM instance during instance startup, or by the ALTER DISKGROUP


ALL MOUNT statement. ASM configuration changes are automatically
reflected in this parameter.
ASM_DISKSTRING - Specifies a value that can be used to limit the disks
considered for discovery. Altering the default value may improve the
speed of disk group mount time and the speed of adding a disk to a disk
group. Changing the parameter to a value which prevents the discovery of
already mounted disks results in an error. The default value is NULL
allowing all suitable disks to be considered.

Creating an ASM instance


First log in to the host machine (Linux) as the root user and execute the
following command to configure the CSS service.
[root@server1 ~]# /oracle/10.2/bin/localconfig add
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized

Adding to inittab
Startup will be queued to init within 90 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
server1
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)
[root@server1 ~]#
To check
[root@server1 ~]# /oracle/10.2/bin/crsctl check css
CSS appears healthy
[root@server1 ~]#
To stop
[root@server1 ~]# /oracle/10.2/bin/crsctl stop crs
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@server1 ~]#
To start
[root@server1 ~]# /oracle/10.2/bin/crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@server1 ~]#


Creating an ASM instance and Disk group using DBCA


Log in as the Oracle DBA OS user, then execute dbca from the shell prompt.

Click the Next button.


Select the Automatic Storage Management option.


Then click the Next button.


Type the SYS password for the ASM instance.


Click the Next button.


Click OK, then click the Next button.


Click the Finish button.


Click No, then exit.

Now the ASM instance is created and running.


By default the ASM instance name is +ASM. To connect, use:
[raju@server1 ~]$ export ORACLE_SID=+ASM
[raju@server1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Feb 5 22:30:22 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production


With the Partitioning, OLAP and Data Mining options


SQL>
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
+ASM
SQL>
By default DBCA will create an spfile for the ASM instance.
Its name will be spfile+ASM.ora, and we can also create a pfile from it.
ASM Disk Group
An ASM disk group is a logical volume that is created from the
underlying physical disks. If storage grows, you simply add disks to the
disk groups; the number of groups can remain the same.
ASM file management has a number of benefits over normal third-party
LVMs:
performance
redundancy
ease of management
security
ASM Striping
ASM stripes files across all the disks within the disk group, thus
increasing performance; each stripe is called an 'allocation unit'. ASM
offers two types of striping, which depend on the type of database
file.
Coarse striping : used for datafiles, archive logs (1MB stripes)
Fine striping : used for online redo logs, control files,
flashback files (128KB stripes)
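The stripe type that ASM will apply to each file type can be inspected from the ASM instance through the template view. As a sketch (the column names here follow the 10g data dictionary):

```sql
-- Run from the ASM instance; STRIPE shows COARSE or FINE per file type
SELECT name, stripe, redundancy
FROM   v$asm_template
WHERE  group_number = 1;
```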
ASM Mirroring
Disk mirroring provides data redundancy; this means that if a disk were
to fail, Oracle will use the other mirrored disk and continue as
normal. Oracle mirrors at the extent level, so you have a primary extent
and a mirrored extent. When a disk fails, ASM rebuilds the failed disk
using mirrored extents from the other disks within the group; this may
have a slight impact on performance while the rebuild takes place.
All disks that share a common controller are in what is called a failure
group. You can ensure redundancy by mirroring disks on separate failure
groups, which in turn are on different controllers; ASM will ensure that
the primary extent and the mirrored extent are not in the same failure
group. When mirroring, you must define failure groups, otherwise the
mirroring will not take place.
There are three forms of mirroring:
External redundancy - doesn't have failure groups and thus is effectively
a no-mirroring strategy.
Normal redundancy - provides two-way mirroring of all extents in a disk
group, which requires two failure groups.
High redundancy - provides three-way mirroring of all extents in a disk
group, which requires three failure groups.
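As a sketch, the redundancy level is chosen when the disk group is created; the disk group and disk names below are hypothetical ASMLib disks, not from the original example:

```sql
-- External redundancy: no ASM mirroring, protection is left to the array
CREATE DISKGROUP dg_ext EXTERNAL REDUNDANCY
  DISK 'ORCL:D1';

-- Normal redundancy: two-way mirroring across two failure groups
CREATE DISKGROUP dg_norm NORMAL REDUNDANCY
  FAILGROUP f1 DISK 'ORCL:D1'
  FAILGROUP f2 DISK 'ORCL:D2';

-- High redundancy: three-way mirroring across three failure groups
CREATE DISKGROUP dg_high HIGH REDUNDANCY
  FAILGROUP f1 DISK 'ORCL:D1'
  FAILGROUP f2 DISK 'ORCL:D2'
  FAILGROUP f3 DISK 'ORCL:D3';
```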
ASM Files
The data files you create under ASM are not like normal database
files. When you create a file you only need to specify the disk group that
the file needs to be created in; Oracle will then create a file striped
across all the disks within the disk group and carry out any redundancy
required. ASM files are OMF files. ASM naming depends on the
type of file being created; here are the different file-naming conventions:
fully qualified ASM filenames - are used when referencing existing ASM
files (+dgroupA/dbs/controlfile/CF.123.456789)
numeric ASM filenames - are also only used when referencing existing
ASM files (+dgroupA.123.456789)
alias ASM filenames - employ a user-friendly name and are used when
creating new files and when you refer to existing files
alias filenames with templates - are strictly for creating new ASM files
incomplete ASM filenames - consist of a disk group only and are used for
creation only.

ASM Disk Group creation


First prepare the partitions so that candidate disks for ASM can be
created.
Log in to the host computer as the root user and create partitions.
Use the fdisk command to create partitions:
[root@server1 ~]# fdisk /dev/sda


The number of cylinders for this disk is set to 38913.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help):


Command (m for help): p

Disk /dev/sda: 320.0 GB, 320072933376 bytes


255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 1 7649 61440561 7 HPFS/NTFS
/dev/sda2 7650 7662 104422+ 83 Linux
/dev/sda3 7663 11486 30716280 83 Linux
/dev/sda4 11487 38913 220307377+ 5 Extended
/dev/sda5 11487 15310 30716248+ 83 Linux
/dev/sda6 15311 15820 4096543+ 82 Linux swap
/dev/sda7 15821 15833 104391 83 Linux
/dev/sda8 15834 22207 51199123+ 83 Linux
/dev/sda9 22208 22717 4096543+ 82 Linux swap
/dev/sda10 22718 22730 104391 83 Linux
/dev/sda11 22731 24005 10241406 83 Linux
/dev/sda12 24006 24515 4096543+ 82 Linux swap

Command (m for help):


Command (m for help): n


First cylinder (24516-38913, default 24516):


Using default value 24516
Last cylinder or +size or +sizeM or +sizeK (24516-38913, default
38913): +3000M
Command (m for help):
Command (m for help): n
First cylinder (24882-38913, default 24882):
Using default value 24882
Last cylinder or +size or +sizeM or +sizeK (24882-38913, default
38913): +3000M
Command (m for help):
Now two partitions have been created. Save the partition table:
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or
resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]# partprobe
[root@server1 ~]#
Already there are 12 partitions; now we have created partitions 13 and 14.
Now install the ASMLib packages. First check the kernel release, then
download the release-specific ASMLib packages.
[root@server1 ~]# uname -r
2.6.9-34.ELsmp
[root@server1 ~]#
[root@server1 ~]# ls


oracleasm-2.6.9-34.ELsmp-2.0.3-1.i686.rpm
oracleasmlib-2.0.4-1.el4.i386.rpm
oracleasm-support-2.1.3-1.el4.i386.rpm
[root@server1 ~]#
[root@server1 ~]# rpm -ivh oracleasm*
Preparing... ########################################### [100%]
1:oracleasm-support
########################################### [ 33%]
2:oracleasm-2.6.9-34.ELsm
########################################### [ 67%]
3:oracleasmlib ###########################################
[100%]
[root@server1 ~]#
After installation, the oracleasm command is created in /etc/init.d;
using it we can configure the ASM library driver and then mark the
required partitions as ASM candidate disks.
[root@server1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: raju


Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]


[root@server1 ~]#
Now we can mark the ASM candidate disks
[root@server1 ~]# /etc/init.d/oracleasm createdisk d1 /dev/sda13
Marking disk "d1" as an ASM disk: [ OK ]
[root@server1 ~]# /etc/init.d/oracleasm createdisk d2 /dev/sda14
Marking disk "d2" as an ASM disk: [ OK ]
[root@server1 ~]#
[root@server1 ~]# /etc/init.d/oracleasm listdisks
D1
D2
[root@server1 ~]#
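Once the disks are marked, they should be visible to the ASM instance through the discovery string. As a sketch (run from the ASM instance; a HEADER_STATUS of PROVISIONED or CANDIDATE indicates a disk not yet in a disk group):

```sql
-- With asm_diskstring set to 'ORCL:D*', the ASMLib disks appear here
SELECT path, header_status, mount_status
FROM   v$asm_disk;
```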
Now log in as the Oracle DBA OS user and create an ASM disk group
using DBCA by selecting the ASM candidate disks.
[raju@server1 ~]$ dbca


Click the Next button.


Select ASM, then click the Next button.


Now click Create New


Select the candidate disks, give a group name, select redundancy and click OK.


Now we can see that the disk group is created and mounted.


Then click the Finish button.

Now the ASM instance is running and the disk group is ready, so we can
create a database on this disk group either by using DBCA or manually.
Creating an ASM instance and Disk group manually

● To create an ASM instance, first create a file called
init+ASM.ora with the following parameters:

Instance_type = asm
large_pool_size=12M
asm_diskstring='ORCL:D*'
asm_diskgroups='DG1'
background_dump_dest='/oracle/admin/+ASM/bdump'
core_dump_dest='/oracle/admin/+ASM/cdump'
user_dump_dest='/oracle/admin/+ASM/udump'

Next, using SQL*Plus, connect to the idle instance.


[raju@server1 ~]$ export ORACLE_SID=+ASM
[raju@server1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sat Feb 6 00:07:06 2010


Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
Create an spfile using the contents of the init+ASM.ora file
SQL> create spfile from pfile;
Finally, start the instance with the NOMOUNT option.
SQL> startup nomount;
ASM instance started
Total System Global Area 83886080 bytes
Fixed Size 1217836 bytes
Variable Size 57502420 bytes
ASM Cache 25165824 bytes
SQL>
Add the instance name at the end of the /etc/oratab file.
Now create the disk group.
SQL> create diskgroup dg1
2 failgroup f1 disk 'ORCL:D1'
3 failgroup f2 disk 'ORCL:D2';
Diskgroup created.
SQL>
SQL> select name, path from v$asm_disk where name is not null;


NAME PATH
------------------------------ ------------------------------
D1 ORCL:D1
D2 ORCL:D2

SQL>
SQL> select name, type, total_mb, free_mb from v$asm_diskgroup;

NAME TYPE TOTAL_MB FREE_MB


------------------------------ ------ ---------- ----------
DG1 NORMAL 5740 5638
SQL>

● Startup and Shutdown of ASM Instances

ASM instances are started and stopped in a similar way to normal database
instances. The options for the STARTUP command are:
● FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.
● MOUNT - Starts the ASM instance and mounts the disk groups
specified by the ASM_DISKGROUPS parameter.
● NOMOUNT - Starts the ASM instance without mounting any disk groups.
● OPEN - This is not a valid option for an ASM instance.
The options for the SHUTDOWN command are:
● NORMAL - The ASM instance waits for all connected ASM
instances and SQL sessions to exit, then shuts down.
● IMMEDIATE - The ASM instance waits for any SQL
transactions to complete, then shuts down. It doesn't wait for
sessions to exit.
● TRANSACTIONAL - Same as IMMEDIATE.
● ABORT - The ASM instance shuts down instantly.
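As a sketch, restarting an ASM instance looks much like restarting a database instance; note that the shutdown is refused while a dependent database instance is still connected to the ASM instance:

```sql
-- Connect with: export ORACLE_SID=+ASM ; sqlplus / as sysdba
SHUTDOWN IMMEDIATE;

STARTUP MOUNT;  -- mounts the disk groups listed in ASM_DISKGROUPS
```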

● Administering ASM Disk Groups

Disk groups can be deleted using the DROP DISKGROUP statement.

SQL> DROP DISKGROUP dg1 INCLUDING CONTENTS;
Disks can be added or removed from disk groups using the ALTER
DISKGROUP statement. Remember that the wildcard "*" can be used to
reference disks so long as the resulting string does not match a disk
already used by an existing disk group.
ALTER DISKGROUP disk_group_1 ADD DISK
'/devices/disk*3',
'/devices/disk*4';
-- Drop a disk.
ALTER DISKGROUP disk_group_1 DROP DISK diska2;
Disk groups can be rebalanced manually using the REBALANCE clause of
the ALTER DISKGROUP statement. If the POWER clause is omitted the
ASM_POWER_LIMIT parameter value is used. Rebalancing is only
needed when the speed of the automatic rebalancing is not appropriate.
ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;
Disk groups are mounted at ASM instance startup and unmounted at
ASM instance shutdown. Manual mounting and dismounting can be
accomplished using the ALTER DISKGROUP statement as seen below.
ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
ALTER DISKGROUP disk_group_1 MOUNT;

● Directories

A directory hierarchy can be defined using the ALTER DISKGROUP
statement to support ASM file aliasing. The following examples show
how ASM directories can be created, modified and deleted.
-- Create a directory.
ALTER DISKGROUP disk_group_1 ADD DIRECTORY
'+disk_group_1/my_dir';
-- Rename a directory.
ALTER DISKGROUP disk_group_1 RENAME DIRECTORY
'+disk_group_1/my_dir' TO '+disk_group_1/my_dir_2';
-- Delete a directory and all its contents.
ALTER DISKGROUP disk_group_1 DROP DIRECTORY
'+disk_group_1/my_dir_2' FORCE;

● Aliases

Aliases allow you to reference ASM files using user-friendly names,
rather than the fully qualified ASM filenames.
-- Create an alias using the fully qualified filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS
'+disk_group_1/my_dir/my_file.dbf'
FOR '+disk_group_1/mydb/datafile/my_ts.342.3';
-- Create an alias using the numeric form filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS
'+disk_group_1/my_dir/my_file.dbf' FOR '+disk_group_1.342.3';
-- Rename an alias.
ALTER DISKGROUP disk_group_1 RENAME ALIAS
'+disk_group_1/my_dir/my_file.dbf'
TO '+disk_group_1/my_dir/my_file2.dbf';
-- Delete an alias.
ALTER DISKGROUP disk_group_1 DELETE ALIAS
'+disk_group_1/my_dir/my_file.dbf';

● Files

Files are not deleted automatically if they are created using aliases, as
they are not Oracle Managed Files (OMF), or if a recovery is done to a
point-in-time before the file was created. For these circumstances it is
necessary to manually delete the files, as shown below.
-- Drop file using an alias.
ALTER DISKGROUP disk_group_1 DROP FILE
'+disk_group_1/my_dir/my_file.dbf';
-- Drop file using a numeric form filename.
ALTER DISKGROUP disk_group_1 DROP FILE
'+disk_group_1.342.3';
-- Drop file using a fully qualified filename.
ALTER DISKGROUP disk_group_1 DROP FILE
'+disk_group_1/mydb/datafile/my_ts.342.3';


Check a disk group's integrity:

alter diskgroup diskgrpA check all;

● ASM Views

The ASM configuration can be viewed using the V$ASM_% views, which
often contain different information depending on whether they are
queried from the ASM instance or a dependent database instance.

V$ASM_ALIAS
ASM instance: Displays a row for each alias present in every disk group
mounted by the ASM instance.
DB instance: Returns no rows.

V$ASM_CLIENT
ASM instance: Displays a row for each database instance using a disk
group managed by the ASM instance.
DB instance: Displays a row for the ASM instance if the database has
open ASM files.

V$ASM_DISK
ASM instance: Displays a row for each disk discovered by the ASM
instance, including disks which are not part of any disk group.
DB instance: Displays a row for each disk in disk groups in use by the
database instance.

V$ASM_DISKGROUP
ASM instance: Displays a row for each disk group discovered by the
ASM instance.
DB instance: Displays a row for each disk group mounted by the local
ASM instance.

V$ASM_FILE
ASM instance: Displays a row for each file for each disk group mounted
by the ASM instance.
DB instance: Displays no rows.

V$ASM_OPERATION
ASM instance: Displays a row for each long running operation executing
in the ASM instance.
DB instance: Displays no rows.

V$ASM_TEMPLATE
ASM instance: Displays a row for each template present in each disk
group mounted by the ASM instance.
DB instance: Displays a row for each template present in each disk
group mounted by the ASM instance with which the database instance
communicates.
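For example, the following queries, run from the ASM instance, show disk group capacity and which database instances are currently connected (a sketch using 10g dictionary columns):

```sql
-- Disk group capacity and redundancy type
SELECT group_number, name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;

-- Database instances currently using ASM disk groups
SELECT group_number, instance_name, db_name, status
FROM   v$asm_client;
```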
Once an ASM instance is present, disk groups can be used for the
following parameters in database instances (INSTANCE_TYPE=RDBMS)
to allow ASM file creation:
● DB_CREATE_FILE_DEST
● DB_CREATE_ONLINE_LOG_DEST_n
● DB_RECOVERY_FILE_DEST
● CONTROL_FILES
● LOG_ARCHIVE_DEST_n
● LOG_ARCHIVE_DEST
● STANDBY_ARCHIVE_DEST
ASM filenames can be used in place of conventional filenames for most
Oracle file types, including controlfiles, datafiles, logfiles etc. For
example, the following command creates a new tablespace with a datafile
in the disk_group_1 disk group.
CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE
100M AUTOEXTEND ON;
Automatic Storage Management is always installed by the Oracle
Universal Installer when you install your database software. The
Database Configuration Assistant (DBCA) determines if an ASM
instance already exists, and if not, then you are given the option of
creating and configuring an ASM instance as part of database creation
and configuration. If an ASM instance already exists, then it is used
instead.
DBCA also configures your instance parameter file and password file.
ASM imposes the following limits:
● 63 disk groups in a storage system
● 10,000 ASM disks in a storage system
● 4 petabyte maximum storage for each ASM disk
● 40 exabyte maximum storage for each storage system
● 1 million files for each disk group
● 2.4 terabyte maximum storage for each file


The recommended method of creating your database is to use the
Database Configuration Assistant (DBCA). However, if you choose to
create your database manually using the CREATE DATABASE statement,
then Automatic Storage Management enables you to create a database
and all of its underlying files with a minimum of input from you.
Assume the following initialization parameter setting:
DB_CREATE_FILE_DEST = '+dgroup2'
The following statement creates the tablespace and its datafile:
CREATE TABLESPACE tspace2;

The following statement creates an undo tablespace with a datafile that
has an alias name and whose attributes are set by the user-defined
template my_undo_temp. It assumes a directory has been created in disk
group dgroup3 to contain the alias name and that the user-defined
template exists. Because an alias is used to create the datafile, the file
is not an Oracle-managed file and will not be automatically deleted when
the tablespace is dropped.
CREATE UNDO TABLESPACE myundo
DATAFILE '+dgroup3(my_undo_temp)/myfiles/my_undo_ts' SIZE
200M;
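The user-defined template referenced above must exist before that statement can run. As a sketch (the attribute values here are illustrative, not from the original example), a template can be added to a disk group like this:

```sql
-- Hypothetical template: mirrored extents with fine-grained striping
ALTER DISKGROUP dgroup3 ADD TEMPLATE my_undo_temp
  ATTRIBUTES (MIRROR FINE);
```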

Adding New Redo Log Files: Example

The following example creates a log file with a member in each of the
disk groups dgroup1 and dgroup2.
The following parameter settings are included in the initialization
parameter file:
DB_CREATE_ONLINE_LOG_DEST_1 = '+dgroup1'
DB_CREATE_ONLINE_LOG_DEST_2 = '+dgroup2'

The following statement is issued at the SQL prompt:


ALTER DATABASE ADD LOGFILE;


Migrating to ASM Using RMAN (with example)

We can use the following method to migrate a database from a regular
file system to an ASM disk group.

Assume that ORACLE_SID=prod.
First configure the ASM instance and create a disk group; assume that
the disk group name is DG1.
Then log in to the production server using the DBA OS user (raju).
[raju@server1 ~]$
Then connect to the database
[raju@server1 ~]$ export ORACLE_SID=prod
[raju@server1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Feb 5 12:43:20 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL>
Then disable change tracking (only available in Enterprise Edition) if it is
currently being used by executing the following command.
SQL> ALTER DATABASE DISABLE BLOCK CHANGE
TRACKING;
Modify the following parameters in the parameter file of the target
database: set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n
parameters to the relevant ASM disk groups.
SQL> alter system set
2 DB_CREATE_FILE_DEST='+DG1'
3 scope=spfile;

System altered.

SQL> alter system set


2 DB_CREATE_ONLINE_LOG_DEST_1='+DG1'
3 scope=spfile;


System altered.

SQL>
Remove the CONTROL_FILES parameter from the spfile so the control
files will be moved to the DB_CREATE_* destination and the spfile gets
updated automatically. If you are using a pfile the CONTROL_FILES
parameter must be set to the appropriate ASM files or aliases.
SQL> alter system reset control_files
2 scope=spfile
3 sid='*';

System altered.
SQL>
Then shutdown the database.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release
10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
[raju@server1 ~]$
Now start the database in nomount mode
[raju@server1 ~]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Fri Feb 5
12:50:48 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database (not started)
RMAN>
RMAN> startup nomount;
Oracle instance started
Total System Global Area 603979776 bytes
Fixed Size 1220796 bytes
Variable Size 197136196 bytes
Database Buffers 398458880 bytes


Redo Buffers 7163904 bytes


RMAN>
Restore the control file into the new location from the old location:
RMAN> RESTORE CONTROLFILE FROM
'/oracle/oradata/prod/control01.ctl';
Starting restore at 05-FEB-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: copied control file copy


output filename=+DG1/prod/controlfile/backup.256.710166757
Finished restore at 05-FEB-10
RMAN>
Mount the database
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN>
Copy the database into the ASM disk group.
RMAN> BACKUP AS COPY DATABASE FORMAT '+DG1';

Starting backup at 05-FEB-10


allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=/oracle/oradata/prod/system01.dbf
output filename=+DG1/prod/datafile/system.257.710166811
tag=TAG20100205T123331 recid=2 stamp=710166831
.
input datafile fno=00005 name=/oracle/oradata/prod/example01.dbf
output filename=+DG1/prod/datafile/example.259.710166851


tag=TAG20100205T123331 recid=4 stamp=710166857


tag=TAG20100205T123331 recid=5 stamp=710166864
.
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=/oracle/oradata/prod/users01.dbf
output filename=+DG1/prod/datafile/users.261.710166865
tag=TAG20100205T123331 recid=6 stamp=710166865
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 05-FEB-10

Starting Control File and SPFILE Autobackup at 05-FEB-10


piece
handle=/oracle/flash_recovery_area/PROD/autobackup/2010_02_05/o1_
mf_s_710166658_5pqjqv8v_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 05-FEB-10
RMAN>
Switch all datafiles to the new ASM location:
RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy


"+DG1/prod/datafile/system.257.710166811"
datafile 2 switched to datafile copy
"+DG1/prod/datafile/undotbs1.260.710166859"
datafile 3 switched to datafile copy
"+DG1/prod/datafile/sysaux.258.710166837"
datafile 4 switched to datafile copy
"+DG1/prod/datafile/users.261.710166865"
datafile 5 switched to datafile copy
"+DG1/prod/datafile/example.259.710166851"
RMAN >
Open the database
RMAN> ALTER DATABASE OPEN;


database opened
RMAN>
Create new redo logs in ASM and delete the old ones.
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
/oracle/oradata/prod/redo03.log
/oracle/oradata/prod/redo02.log
/oracle/oradata/prod/redo01.log
SQL> ALTER DATABASE ADD LOGFILE;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE;
Database altered.
Each ALTER DATABASE ADD LOGFILE creates a new log group in the ASM
disk group because DB_CREATE_ONLINE_LOG_DEST_1 is set. Note that a
log group cannot be dropped while it is the CURRENT group; if necessary,
issue ALTER SYSTEM SWITCH LOGFILE and wait for the checkpoint to
complete before dropping it.
SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;
Enable change tracking if it was being used
SQL> ALTER DATABASE ENABLE BLOCK CHANGE
TRACKING;
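Finally, it is worth verifying that all database files now live in the ASM disk group (a sketch; the file names returned will differ on your system):

```sql
SELECT name   FROM v$datafile
UNION ALL
SELECT member FROM v$logfile
UNION ALL
SELECT name   FROM v$controlfile;
-- Every returned path should now begin with +DG1
```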
