
How to Set an Oracle Env. variable ORACLE_HOME?

What is ORACLE_HOME used for?

* ORACLE_HOME is an environment variable that defines the path of the Oracle home (server) directory.
* The ORACLE_HOME directory contains the subdirectories, binaries, executables, programs, scripts, etc. for the Oracle Database.
* This directory can be used by any user who wants to use that particular database.
* If ORACLE_HOME is defined as an environment variable before installation, the installer uses that directory as the default Oracle home path; if it is not defined, Oracle chooses its own default location. In other words, ORACLE_HOME does not have to be preset as an environment variable; it can be set during the installation process.
* Typically, ORACLE_HOME sits under the ORACLE_BASE directory, for example:
ORACLE_HOME=$ORACLE_BASE/product/10.2.0

What is ORACLE_BASE used for?

* ORACLE_BASE is also an environment variable; it defines the base (root-level) directory that holds the Oracle Database directory tree, with ORACLE_HOME located underneath it.
* In other words, the ORACLE_BASE directory is a higher-level directory than ORACLE_HOME. You can use it to install the various Oracle software products, and the same Oracle base directory can be used for more than one installation.

Note: If you did not set the ORACLE_BASE environment variable before starting OUI, the Oracle home directory is created in an app/username directory on the first existing and writable directory from /u01 through /u09 on UNIX and Linux systems, or on the disk drive with the most available space on Windows systems. If none of /u01 through /u09 exists on the UNIX or Linux system, the default location is user_home_directory/app/username.
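
The directory-selection rule in the note can be sketched in shell. This is only an illustration of the rule as stated, not the actual OUI code; the function name and the root_prefix parameter are inventions so the logic can be exercised in isolation:

```shell
#!/bin/sh
# Sketch of the default Oracle base selection described above:
# pick app/username under the first existing, writable /u01../u09,
# otherwise fall back to user_home_directory/app/username.
find_default_base() {
    root_prefix="$1"   # "" on a real system; a scratch dir when testing
    for n in 1 2 3 4 5 6 7 8 9; do
        d="${root_prefix}/u0${n}"
        if [ -d "$d" ] && [ -w "$d" ]; then
            echo "${d}/app/$(id -un)"
            return 0
        fi
    done
    # No /u01../u09 found: fall back to the user's home directory
    echo "${HOME}/app/$(id -un)"
}
```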

How to check if ORACLE_HOME is set already?

On Unix/Linux Systems:

Typically, during or shortly after the Oracle Database installation, the oracle user's profile (the environment variable file) is prepared, and all the environment variables Oracle requires are set there: ORACLE_BASE, ORACLE_HOME, ORACLE_SID, PATH, LD_LIBRARY_PATH, NLS_LANG, etc.

The user profile file depends on the login shell:

.bash_profile    Bash shell
.profile         Bourne or Korn shell
.login           C shell

Note: This user profile file lives in the user's home directory, e.g. $HOME/.bash_profile

To check specific environment variable set:

$ echo $ORACLE_HOME

To check all the environment variables set:

$ env
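
In scripts it is safer not to assume the variable is set at all. A small guard like the following (a generic sketch of my own, not part of any Oracle tooling) fails early instead of acting on an empty ORACLE_HOME:

```shell
#!/bin/sh
# Abort early if ORACLE_HOME is unset or empty before using it.
check_oracle_home() {
    if [ -z "${ORACLE_HOME:-}" ]; then
        echo "ERROR: ORACLE_HOME is not set" >&2
        return 1
    fi
    echo "ORACLE_HOME=${ORACLE_HOME}"
}
```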

On Windows Systems:

To check specific environment variable set:

C:\> set ORACLE_HOME

OR

C:\> echo %ORACLE_HOME%

To check all the environment variables set:

C:\> set

Another way to check ORACLE_HOME is through the registry:

Start -> Run -> Regedit (Enter) -> HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE

i.e. My Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE

How to check using the sqlplus command:

The ORACLE_HOME path can also be found from inside the Oracle Database.
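
The query itself is missing above. One commonly used approach is the SYS-owned DBMS_SYSTEM package; note that DBMS_SYSTEM.GET_ENV is undocumented, so treat this as a DBA convenience rather than a supported API:

```sql
SQL> VARIABLE ohome VARCHAR2(255)
SQL> EXEC dbms_system.get_env('ORACLE_HOME', :ohome);
SQL> PRINT ohome
```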

How to set the ORACLE_HOME environment variable?

On Unix/Linux Systems:
Define the ORACLE_HOME value in the user profile file (i.e. .bash_profile or .profile):

ORACLE_HOME=$ORACLE_BASE/product/10.2.0
export ORACLE_HOME

Source the user profile as follows:

Bash shell:

$ . ./.bash_profile

Bourne shell or Korn shell:

$ . ./.profile

C shell:

% source ./.login

If the environment variables are not set in a profile file, they can also be set manually in the current shell as follows:

Bourne, Bash, or Korn shell:

$ ORACLE_BASE=/oracle/app
$ export ORACLE_BASE
$ ORACLE_HOME=$ORACLE_BASE/product/10.2.0
$ export ORACLE_HOME

C Shell:

% setenv ORACLE_BASE /oracle/app


% setenv ORACLE_HOME /oracle/app/product/10.2.0

On Windows Systems:

My Computer -> Properties -> Advanced -> Environment Variables -> System
Variables -> New/Edit/Delete (to set the variables)

After setting the environment variables as above, open a fresh CMD window and check whether they are set properly. Do not check in an already-open CMD window; it will not see the new values.

Another way is to set the variables directly at the DOS prompt:

C:\> set ORACLE_HOME=C:\oracle\app\product\10.2.0
C:\> echo %ORACLE_HOME%

Note that variables set this way apply only to the current CMD session; use the Environment Variables dialog (or the setx command) to make them permanent.

Manually Creating an ASM Instance

Contents

1. Overview
2. Configuring Oracle Cluster Synchronization Services (CSS)
3. Creating the ASM Instance
4. Identify RAW Devices
5. Starting the ASM Instance
6. Verify RAW / Logical Disk Are Discovered
7. Creating Disk Groups
8. Using Disk Groups
9. Startup Scripts

Overview

Automatic Storage Management (ASM) is a new feature in Oracle10g that relieves the DBA of having to manually manage and tune the disks used by Oracle databases. ASM provides the DBA with a file system and volume manager that makes use of an Oracle instance (referred to as an ASM instance) and can be managed using either SQL or Oracle Enterprise Manager.

Only one ASM instance is required per node. The same ASM instance can manage ASM storage for all 10g databases running on the node.

When the DBA installs the Oracle10g software and creates a new database, creating an ASM instance is a snap. The DBCA provides a simple check box and an easy wizard to create an ASM instance, as well as an Oracle database that makes use of the new ASM instance for storage. But what happens when the DBA is migrating to Oracle10g, or didn't opt to use ASM when a 10g database was first created? The DBA will need to know how to manually create an ASM instance, and that is what this article provides.
Configuring Oracle Cluster Synchronization Services (CSS)

Automatic Storage Management (ASM) requires the use of Oracle Cluster


Synchronization Services (CSS), and as such, CSS must be configured and running before
attempting to use ASM. The CSS service is required to enable synchronization between
an ASM instance and the database instances that rely on it for database file storage.

In a non-RAC environment, the Oracle Universal Installer will configure and start a
single-node version of the CSS service. For Oracle Real Application Clusters (RAC)
installations, the CSS service is installed with Oracle Cluster Ready Services (CRS) in a
separate Oracle home directory (also called the CRS home directory). For single-node
installations, the CSS service is installed in and runs from the same Oracle home as the
Oracle database.

Because CSS must be running before any ASM instance or database instance starts, Oracle Universal Installer configures it to start automatically when the system starts. On Linux / UNIX platforms, the Oracle Universal Installer writes the CSS configuration tasks to the root.sh script, which is run by the DBA after the installation process.

With Oracle10g R1, CSS was always configured, regardless of whether you chose to configure ASM. On the Linux / UNIX platform, CSS was installed and configured via the root.sh script. This caused a lot of problems: many DBAs did not know what the process was, and most of those not using ASM did not want it running at all.

Oracle listened carefully to the concerns (and strongly worded complaints) about the CSS process, and in Oracle10g R2 it is configured only when absolutely necessary. In Oracle10g R2, for example, if you don't choose to configure a stand-alone ASM instance and don't configure a database that uses ASM storage, Oracle will not automatically configure CSS in the root.sh script.

In the case where the CSS process is not configured to run on the node (see above), you
can make use of the $ORACLE_HOME/bin/localconfig script in Linux / UNIX or
%ORACLE_HOME%\bin\localconfig.bat batch file in Windows. For example in Linux,
run the following command as root to configure CSS outside of the root.sh script after
the fact:

$ su
# $ORACLE_HOME/bin/localconfig all

/etc/oracle does not exist. Creating it now.


Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized

Adding to inittab
Startup will be queued to init within 90 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.


linux3
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)
Note that if you attempt to configure ASM after the fact, the Database
Configuration Assistant (DBCA) detects whether CSS is configured on the
node. If it does not detect CSS as configured (and running), the installer
prompts the user to run 'localconfig add' as necessary.
When performing an Oracle10g Custom install, the issue becomes a bit more confusing. During a custom install, Oracle does not ask the DB configuration questions during the install itself; it invokes the DBCA at the end of the install in "custom" mode, where the DBCA asks all the questions. As such, at the time Oracle prompts the user to run the root.sh script for a custom install, it does not know whether they will choose to configure ASM or not. Oracle errs on the side of what the majority of people would do: it will not configure CSS at all in the root.sh script for a custom install, since the majority of users will not be using ASM anyway. Here, Oracle relies on the fact that if CSS is not configured, the DBCA will prompt the user to run 'localconfig add' as root. Once this is done, CSS will be configured and the DBCA will allow the user to proceed with the configuration of ASM.

Creating the ASM Instance

The following steps can be used to create a fully functional ASM instance named +ASM.
The node I am using in this example also has a regular 10g database running named
TESTDB. These steps should all be carried out by the oracle UNIX user account:
1. Create Admin Directories

We start by creating the admin directories from the ORACLE_BASE. The admin
directories for the existing database on this node, (TESTDB), is located at
$ORACLE_BASE/admin/TESTDB. The new +ASM admin directories will be created
alongside the TESTDB database:

UNIX

mkdir -p $ORACLE_BASE/admin/+ASM/bdump
mkdir -p $ORACLE_BASE/admin/+ASM/cdump
mkdir -p $ORACLE_BASE/admin/+ASM/hdump
mkdir -p $ORACLE_BASE/admin/+ASM/pfile
mkdir -p $ORACLE_BASE/admin/+ASM/udump

Microsoft Windows

mkdir %ORACLE_BASE%\admin\+ASM\bdump
mkdir %ORACLE_BASE%\admin\+ASM\cdump
mkdir %ORACLE_BASE%\admin\+ASM\hdump
mkdir %ORACLE_BASE%\admin\+ASM\pfile
mkdir %ORACLE_BASE%\admin\+ASM\udump
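
On UNIX, the five mkdir calls above can be collapsed into a single loop. A small sketch (the function name is mine, for illustration):

```shell
#!/bin/sh
# Create the standard admin subdirectories for an instance in one loop.
make_admin_dirs() {
    base="$1"; sid="$2"
    for d in bdump cdump hdump pfile udump; do
        mkdir -p "${base}/admin/${sid}/${d}"
    done
}
# e.g. make_admin_dirs "$ORACLE_BASE" +ASM
```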
2. Create Instance Parameter File

In this step, we will manually create an instance parameter file for the ASM instance. This is actually an easy task, as most of the parameters used for a normal instance are not used for an ASM instance. Note that you should be fine accepting the default sizes for the database buffer cache, shared pool, and many of the other SGA memory structures. The only exception is the large pool; I like to manually set this value to at least 12MB. In most cases, the SGA memory footprint is less than 100MB. Let's start by creating the file init.ora and placing it in $ORACLE_BASE/admin/+ASM/pfile. The initial parameters to use for the file are:

UNIX

$ORACLE_BASE/admin/+ASM/pfile/init.ora
###########################################
# Automatic Storage Management
###########################################
# _asm_allow_only_raw_disks=false
# asm_diskgroups='TESTDB_DATA1'

# Default asm_diskstring values for supported platforms:
#
#   Solaris (32/64 bit)   /dev/rdsk/*
#   Windows NT/XP         \\.\orcldisk*
#   Linux (32/64 bit)     /dev/raw/*
#   HP-UX                 /dev/rdsk/*
#   Tru64 UNIX            /dev/rdisk/*
#   AIX                   /dev/rhdisk/*
# asm_diskstring=''

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=/u01/app/oracle/admin/+ASM/bdump
core_dump_dest=/u01/app/oracle/admin/+ASM/cdump
user_dump_dest=/u01/app/oracle/admin/+ASM/udump

###########################################
# Miscellaneous
###########################################
instance_type=asm
compatible=10.1.0.4.0
###########################################
# Pools
###########################################
large_pool_size=12M

###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive

Microsoft Windows

%ORACLE_BASE%\admin\+ASM\pfile\init.ora
###########################################
# Automatic Storage Management
###########################################
# _asm_allow_only_raw_disks=false
# asm_diskgroups='TESTDB_DATA1'

# Default asm_diskstring values for supported platforms:
#
#   Solaris (32/64 bit)   /dev/rdsk/*
#   Windows NT/XP         \\.\orcldisk*
#   Linux (32/64 bit)     /dev/raw/*
#   HP-UX                 /dev/rdsk/*
#   Tru64 UNIX            /dev/rdisk/*
#   AIX                   /dev/rhdisk/*
# asm_diskstring=''

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\bdump
core_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\cdump
user_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\udump

###########################################
# Miscellaneous
###########################################
instance_type=asm
compatible=10.1.0.4.0

###########################################
# Pools
###########################################
large_pool_size=12M

###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive
After creating the $ORACLE_BASE/admin/+ASM/pfile/init.ora file, UNIX users should create the following symbolic link:

$ ln -s $ORACLE_BASE/admin/+ASM/pfile/init.ora $ORACLE_HOME/dbs/init+ASM.ora


Identify RAW Devices

Before starting the ASM instance, we should identify the RAW device(s) (UNIX) or
logical drives (Windows) that will be used as ASM disks. For the purpose of this article, I
have four RAW devices setup on Linux:
# ls -l /dev/raw/raw[1234]
crw-rw---- 1 oracle dba 162, 1 Jun 2 22:04 /dev/raw/raw1
crw-rw---- 1 oracle dba 162, 2 Jun 2 22:04 /dev/raw/raw2
crw-rw---- 1 oracle dba 162, 3 Jun 2 22:04 /dev/raw/raw3
crw-rw---- 1 oracle dba 162, 4 Jun 2 22:04 /dev/raw/raw4

Attention Linux Users!

This article does not use Oracle's ASMLib I/O libraries. If you plan on using Oracle's ASMLib, you will need to install and configure ASMLib, as well as mark all disks using:

/etc/init.d/oracleasm createdisk <ASM_VOLUME_NAME> <LINUX_DEV_DEVICE>

For more information on using Oracle ASMLib, see "Installing Oracle10g Release 1 (10.1.0) on Linux - (RHEL 4)".

Attention Windows Users!

A task that must be performed by Microsoft Windows users is to tag the logical drives that you want to use for ASM storage. This is done using a new utility included with Oracle10g called asmtool, which can be run either before or after creating the ASM instance. asmtool initializes the drive headers and marks the drives for use by ASM, which greatly reduces the risk of overwriting a drive that is being used for normal operating system files.

Starting the ASM Instance

Once the instance parameter file is in place, it is time to start the ASM instance. It is
important to note that an ASM instance never mounts an actual database. The ASM
instance is responsible for mounting and managing disk groups.
Attention Windows Users!

If you are running in Microsoft Windows, you will need to manually create a new
Windows service to run the new instance. This is done using the ORADIM utility which
allows you to create both the instance and the service in one command.

UNIX

# su - oracle
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> startup
ASM instance started

Total System Global Area 75497472 bytes


Fixed Size 777852 bytes
Variable Size 74719620 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
ORA-15110: no diskgroups mounted

SQL> create spfile from pfile='/u01/app/oracle/admin/+ASM/pfile/init.ora';

SQL> shutdown
ASM instance shutdown

SQL> startup
ASM instance started

Microsoft Windows

C:\> oradim -new -asmsid +ASM -syspwd change_on_install -pfile C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora -spfile -startmode manual -shutmode immediate

Instance created.

C:\> oradim -edit -asmsid +ASM -startmode a

C:\> set oracle_sid=+ASM


C:\> sqlplus "/ as sysdba"

SQL> startup pfile='C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora';
ASM instance started

Total System Global Area 125829120 bytes


Fixed Size 769268 bytes
Variable Size 125059852 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
ORA-15110: no diskgroups mounted

SQL> create spfile from pfile='C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora';

File created.

SQL> shutdown
ASM instance shutdown

SQL> startup
ASM instance started
You will notice when starting the ASM instance, we received the error:
ORA-15110: no diskgroups mounted
This error can be safely ignored.

Notice also that we created a server parameter file (SPFILE) for the ASM instance. This
allows Oracle to automatically record new disk group names in the asm_diskgroups
instance parameter, so that those disk groups can be automatically mounted whenever the
ASM instance is started.

Now that the ASM instance is started, all other Oracle database instances running on the
same node will be able to find it.

Verify RAW / Logical Disk Are Discovered

At this point, we have an ASM instance running but no disk groups to speak of. ASM disk groups are created from RAW (or logical) disks.

Available (candidate) disks for ASM are discovered by means of the asm_diskstring instance parameter. This parameter contains the path(s) Oracle will use to discover (or see) candidate disks. In most cases you shouldn't have to set this value, as the default is appropriate for the supported platform. The following table lists the default values for asm_diskstring on supported platforms when the parameter is NULL (not set):

Operating System        Default Search String

Solaris (32/64 bit)     /dev/rdsk/*
Windows NT/XP           \\.\orcldisk*
Linux (32/64 bit)       /dev/raw/*
HP-UX                   /dev/rdsk/*
Tru64 UNIX              /dev/rdisk/*
AIX                     /dev/rhdisk/*
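
If your candidate disks live somewhere other than the platform default, asm_diskstring can be set explicitly in the ASM instance. For example (the path shown is illustrative):

```sql
SQL> ALTER SYSTEM SET asm_diskstring = '/dev/raw/raw*' SCOPE = SPFILE;
```

Since this stores the value in the SPFILE, it takes effect the next time the ASM instance is started.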

For the purpose of this article, I have four RAW devices setup on Linux:

# ls -l /dev/raw/raw[1234]
crw-rw---- 1 oracle dba 162, 1 Jun 2 22:04 /dev/raw/raw1
crw-rw---- 1 oracle dba 162, 2 Jun 2 22:04 /dev/raw/raw2
crw-rw---- 1 oracle dba 162, 3 Jun 2 22:04 /dev/raw/raw3
crw-rw---- 1 oracle dba 162, 4 Jun 2 22:04 /dev/raw/raw4
I now need to determine whether Oracle can find these four disks. The view V$ASM_DISK can be queried from the ASM instance to determine which disks are being used, or may potentially be used, as ASM disks. Note that you must log in to the ASM instance with SYSDBA privileges. Here is the query I ran from the ASM instance:
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> SELECT group_number, disk_number, mount_status, header_status, state, path
  2  FROM v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE PATH


------------ ----------- ------- ------------ -------- ---------------
0 0 CLOSED CANDIDATE NORMAL /dev/raw/raw1
0 1 CLOSED CANDIDATE NORMAL /dev/raw/raw2
0 2 CLOSED CANDIDATE NORMAL /dev/raw/raw3
0 3 CLOSED CANDIDATE NORMAL /dev/raw/raw4
Note the value of zero in the GROUP_NUMBER column for all four disks. This
indicates that a disk is available but hasn't yet been assigned to a disk group. The next
section details the steps for creating a disk group.

Creating Disk Groups

In this section, I will create a new disk group named TESTDB_DATA1 and assign all four discovered disks to it. The disk group will be configured for NORMAL REDUNDANCY, which results in two-way mirroring of all files within the disk group. Within the disk group, I will configure two failure groups, which define two independent sets of disks that should never contain more than one copy of mirrored data (mirrored extents).

For the purpose of this article, it is assumed that /dev/raw/raw1 and /dev/raw/raw2 are on one controller while /dev/raw/raw3 and /dev/raw/raw4 are on another. I want the ASM disk configuration such that any data written to /dev/raw/raw1 and /dev/raw/raw2 is mirrored to /dev/raw/raw3 and /dev/raw/raw4. I want ASM to guarantee that data on /dev/raw/raw1 is never mirrored to /dev/raw/raw2, and that data on /dev/raw/raw3 is never mirrored to /dev/raw/raw4. With this type of configuration, I can lose an entire controller and still have access to all of my data. When configuring failure groups, you should put all disks that share a controller (or any resource, for that matter) into their own failure group. If that resource were to fail, you would still have access to the data, as ASM guarantees that no mirrored data will exist in the same failure group.

The new disk group should be created from the ASM instance using the following SQL:

SQL> CREATE DISKGROUP testdb_data1 NORMAL REDUNDANCY
  2  FAILGROUP controller1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  3  FAILGROUP controller2 DISK '/dev/raw/raw3', '/dev/raw/raw4';

Diskgroup created.

Now, let's take a look at the new disk group and disk details:

SQL> select group_number, name, total_mb, free_mb, state, type
  2  from v$asm_diskgroup;

GROUP_NUMBER NAME TOTAL_MB FREE_MB STATE TYPE


------------ -------------- ---------- ---------- ----------- ------
1 TESTDB_DATA1 388 282 MOUNTED NORMAL

SQL> select group_number, disk_number, mount_status, header_status, state, path, failgroup
  2  from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE PATH


FAILGROUP
------------ ----------- ------- ------------ -------- ---------------
------------
1 0 CACHED MEMBER NORMAL /dev/raw/raw1
CONTROLLER1
1 1 CACHED MEMBER NORMAL /dev/raw/raw2
CONTROLLER1
1 2 CACHED MEMBER NORMAL /dev/raw/raw3
CONTROLLER2
1 3 CACHED MEMBER NORMAL /dev/raw/raw4
CONTROLLER2

Using Disk Groups


Finally, let's start making use of the new disk group! Disk groups can be used in place of
actual file names when creating database files, redo log members, control files, etc.

Let's now log in to the database instance running on the node that will be making use of the new ASM instance. For this article, I already have a database instance created and running on the node, named TESTDB. The database was created using the local file system for all database files, redo log members, and control files:

$ ORACLE_SID=TESTDB; export ORACLE_SID


$ sqlplus "/ as sysdba"

SQL> @dba_files_all

Tablespace Name
File Class Filename
File Size
--------------------
----------------------------------------------------------
--------------
SYSAUX
/u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf
241,172,480
SYSTEM
/u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf
471,859,200
TEMP
/u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp
24,117,248
UNDOTBS1
/u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf
214,958,080
USERS
/u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf
5,242,880
[ CONTROL FILE ] /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE ] /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE ] /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ] /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log
10,485,760
[ ONLINE REDO LOG ] /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log
10,485,760
[ ONLINE REDO LOG ] /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log
10,485,760
[ ONLINE REDO LOG ] /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log
10,485,760
[ ONLINE REDO LOG ] /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log
10,485,760
[ ONLINE REDO LOG ] /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log
10,485,760
[ ONLINE REDO LOG ] /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log
10,485,760
[ ONLINE REDO LOG ] /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log
10,485,760
[ ONLINE REDO LOG ] /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log
10,485,760
--------------
sum
1,051,721,728

Let's now create a new tablespace that makes use of the new disk group:

SQL> create tablespace users2 datafile '+TESTDB_DATA1' size 100m;

Tablespace created.

And that's it! The CREATE TABLESPACE command (above) uses a datafile named
+TESTDB_DATA1. Note that the plus sign (+) in front of the name TESTDB_DATA1 indicates
to Oracle that this name is a disk group name, and not an operating system file name. In
this example, the TESTDB instance queries the ASM instance for a new file in that disk
group and uses that file for the tablespace data. Let's take a look at that new file name:

SQL> @dba_files_all

Tablespace Name
File Class Filename
File Size
--------------------
----------------------------------------------------------
--------------
SYSAUX
/u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf
241,172,480
SYSTEM
/u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf
471,859,200
TEMP
/u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp
24,117,248
UNDOTBS1
/u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf
214,958,080
USERS
/u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf
5,242,880
USERS2 +TESTDB_DATA1/testdb/datafile/users2.256.560031579
104,857,600
[ CONTROL FILE ] /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE ] /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE ] /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ] /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log
10,485,760
[ ONLINE REDO LOG ] /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log
10,485,760
[ ONLINE REDO LOG ] /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log
10,485,760
[ ONLINE REDO LOG ] /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log
10,485,760
[ ONLINE REDO LOG ] /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log
10,485,760
[ ONLINE REDO LOG ] /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log
10,485,760
[ ONLINE REDO LOG ] /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log
10,485,760
[ ONLINE REDO LOG ] /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log
10,485,760
[ ONLINE REDO LOG ] /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log
10,485,760

--------------
sum
1,156,579,328

Startup Scripts

Most Linux / UNIX users have a script used to start and stop Oracle services on system restart. On UNIX platforms, the convention is to put all start / stop commands in a single shell script named dbora. The dbora script may differ slightly on each database server, as each has different requirements for handling Apache, the TNS listener, and other services. The dbora script should be placed in /etc/init.d.

In this section, I provide a dbora shell script that can be used to start all required Oracle services, including Oracle Cluster Synchronization Services (CSS), the ASM instance, the database server(s), and the Oracle TNS listener process. The script uses the Oracle-supplied scripts $ORACLE_HOME/bin/dbstart and $ORACLE_HOME/bin/dbshut to handle starting and stopping the Oracle database(s). The dbora script is run by the UNIX init process and reads the /etc/oratab file to dynamically determine which database(s) to start and stop.

Create dbora File

The first step is to create the dbora shell script and place it in the /etc/init.d directory:

/etc/init.d/dbora
# +------------------------------------------------------------------------+
# | FILE : dbora |
# | DATE : 09-AUG-2006 |
# | HOSTNAME : linux3.idevelopment.info |
# +------------------------------------------------------------------------+
# +---------------------------------+
# | FORCE THIS SCRIPT TO BE IGNORED |
# +---------------------------------+
# exit

# +---------------------------------+
# | PRINT HEADER INFORMATION |
# +---------------------------------+
echo " "
echo "+----------------------------------+"
echo "| Starting Oracle Database Script. |"
echo "| 0 : $0 |"
echo "| 1 : $1 |"
echo "+----------------------------------+"
echo " "

# +-----------------------------------------------------+
# | ALTER THE FOLLOWING TO REFLECT THIS SERVER SETUP |
# +-----------------------------------------------------+

HOSTNAME=linux3.idevelopment.info
ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1
SLEEP_TIME=120
ORACLE_OWNER=oracle
DATE=`date "+%m/%d/%Y %H:%M"`

export HOSTNAME ORACLE_HOME SLEEP_TIME ORACLE_OWNER DATE

# +----------------------------------------------+
# | VERIFY THAT ALL NEEDED SCRIPTS ARE AVAILABLE |
# | BEFORE CONTINUING. |
# +----------------------------------------------+
if [ ! -f $ORACLE_HOME/bin/dbstart -o ! -d $ORACLE_HOME ]; then
echo " "
echo "+-------------------------------------+"
echo "| ERROR: |"
echo "| Oracle startup: cannot start |"
echo "| cannot find dbstart |"
echo "+-------------------------------------+"
echo " "
exit
fi

# +---------------------------+
# | START/STOP CASE STATEMENT |
# +---------------------------+
case "$1" in

start)

echo " "


echo "+----------------------------------------+"
echo "| ************************************** |"
echo "| >>>>>>>>> START PROCESS <<<<<<<<<< |"
echo "| ************************************** |"
echo "+----------------------------------------+"
echo " "

echo "Going to sleep for $SLEEP_TIME seconds..."


sleep $SLEEP_TIME
echo " "
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart"

echo " "


echo "+---------------------------------------------------+"
echo "| About to start the listener process in |"
echo "| $ORACLE_HOME |"
echo "+---------------------------------------------------+"
echo " "

su - $ORACLE_OWNER -c "lsnrctl start listener"

touch /var/lock/subsys/dbora

;;

stop)

echo " "


echo "+----------------------------------------+"
echo "| ************************************** |"
echo "| >>>>>>>>>> STOP PROCESS <<<<<<<<<< |"
echo "| ************************************** |"
echo "+----------------------------------------+"
echo " "

echo " "


echo "+-------------------------------------------------------+"
echo "| About to stop the listener process in |"
echo "| $ORACLE_HOME |"
echo "+-------------------------------------------------------+"
echo " "

su - $ORACLE_OWNER -c "lsnrctl stop listener"

echo " "


echo "+-------------------------------------------------------+"
echo "| About to stop all Oracle databases |"
echo "| running. |"
echo "+-------------------------------------------------------+"
echo " "

su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut"

rm -f /var/lock/subsys/dbora

;;

*)

echo $"Usage: $0 {start|stop}"


exit 1

esac

echo " "


echo "+----------------------+"
echo "| ENDING ORACLE SCRIPT |"
echo "+----------------------+"
echo " "

exit

After the dbora shell script is in place, perform the following tasks as the root user:

# chmod 755 dbora
# chown root:root dbora
# ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
# ln -s /etc/init.d/dbora /etc/rc6.d/K10dbora
# exit

Modify oratab File

The next step is to edit the /etc/oratab file to allow the dbora script to automatically start and stop the databases. Simply change the final field of the +ASM and TESTDB entries from N to Y.

Ensure that the ASM instance is started BEFORE any databases that make use of disk groups contained in it, by listing the +ASM entry first:
...
+ASM:/u01/app/oracle/product/10.1.0/db_1:Y
TESTDB:/u01/app/oracle/product/10.1.0/db_1:Y
...
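
The dbstart script decides what to start by reading /etc/oratab. Its selection of the entries flagged Y can be sketched as follows. This is a simplified illustration only; the real script does more, and in these releases it has no special ASM ordering logic, which is why the +ASM line must come first in the file:

```shell
#!/bin/sh
# List the instances flagged for automatic startup (third field = Y)
# from an oratab-format file, in file order. Comments and blank
# lines are skipped.
sids_to_start() {
    while IFS=: read -r sid home flag; do
        case "$sid" in
            \#*|'') continue ;;
        esac
        if [ "$flag" = "Y" ]; then
            echo "$sid"
        fi
    done < "$1"
}
```

With the /etc/oratab entries shown above, sids_to_start would print +ASM and then TESTDB, matching the required start order.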

Modify /etc/inittab File

The final step is to manually edit /etc/inittab so that the entry that respawns init.cssd comes before the entry that runs runlevel 3.

Original /etc/inittab file:


(...)
# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
(...)
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
Modified /etc/inittab file:
(...)
# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
(...)
For Solaris users, you will need to manually edit /etc/inittab so that the
entry for init.cssd comes before the runlevel 3 entry. As explained in Metalink Note
ID: 264235.1, the fix is as follows:
Original /etc/inittab file:
(...)
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5 >/dev/msglog 2<>/dev/msglog </dev/console
(...)
h1:3:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
Modified /etc/inittab file:
(...)
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
h1:3:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5 >/dev/msglog 2<>/dev/msglog </dev/console
(...)

Bug: 3458327 - Automatic Startup On Reboot Fails When Database Uses ASM

This bug is NOT fixed in the 10.1.0.4.0 patch set!

If you have been following this article and applied the 10.1.0.4 patchset (and
modified the /etc/inittab file to force init.cssd to run (actually to respawn)
before runlevel 3), this bug should not affect you. If you are using 10.1.0.3
(and below), however, this bug may not allow the Oracle ASM instance to start,
which will also prevent any other instances that have disk groups within that ASM
instance from starting. As they exist, the dbstart and dbshut scripts are not ASM-aware
in 10.1.0.3 and below. Even with patchset 10.1.0.4.0, we had to manually modify
the /etc/inittab file. When the dbora script attempts to start the ASM
database, even after the ocssd.bin is up and running, you will receive the error:

ORA-29701: unable to connect to Cluster Manager

The problem is simply a matter of ordering of when services are started and that is
why we needed to modify the /etc/inittab file. Upon entering a certain runlevel
(e.g. runlevel 3), init starts all the 'respawn' lines AFTER the 'wait' lines have
finished. It is important to understand that the S96init.cssd line does not
actually start the CSSD; it merely removes the 'NORUN' line. Then S99dbora tries
to start the instances (and fails). Then, finally, init starts the CSSD.

Note that I used /etc/rc5.d/S99 to start the dbora script. You should make note
that the dbora script MUST run after the /etc/init.d/init.cssd if you are
starting an ASM instance. For Linux, the OUI (and manually running localconfig
all) places the start for init.cssd as /etc/rc3.d/S96init.cssd.

You will also notice that I had to put a sleep 120 in the dbora script before
starting any databases/instances. The dbora script sleeps for 120 seconds to
help ensure that the ocssd.bin daemon is running before any ASM instance is started.
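A fixed sleep is the simple approach; the wait could also be written as a small polling helper. This is a sketch of the idea, not the article's script, and the pgrep probe shown in the comment is my assumption about how the check would be done:

```shell
# Sketch: poll for a condition instead of sleeping a fixed time.
# wait_for runs the probe command repeatedly until it succeeds or the
# limit (in seconds) is reached. In dbora the call might look like:
#   wait_for 120 pgrep -f ocssd.bin
wait_for() {
    limit=$1; shift
    waited=0
    until "$@"
    do
        [ "$waited" -ge "$limit" ] && return 1
        sleep 1
        waited=$(( waited + 1 ))
    done
    return 0
}

wait_for 3 true  && echo "probe succeeded immediately"
wait_for 2 false || echo "probe never succeeded; timed out"
```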

NOTE: Creating an ASM instance on Red Hat AS 3 / AS 2.1 / CentOS 4 / CentOS 3 works
the same way as on Red Hat EL 4.

Implementing Automatic Storage Management involves allocating partitioned disks
for Oracle Database with preferences for striping and mirroring. Automatic Storage
Management manages the disk space for you. This helps avoid the need for traditional
disk management tools such as Logical Volume Managers (LVM), file systems, and the
numerous commands necessary to manage both. The synchronization between Automatic
Storage Management and the database instance is handled by Oracle Cluster
Synchronization Services (CSS).
Tasks covered:

Pre-Creation Task:
Partitioning Disks

ASM Creation/Implementation Using UNIX IO:

Binding Raw Devices and Setting Permissions
Creating ASM instance and diskgroups using dbca
Creating ASM instance and diskgroup manually without dbca
Use of ASM with an existing database

ASM Creation/Configuration Using Oracle's ASMLib IO:

Download and Install the appropriate ASMLib software
Stamping physical devices as ASM disks
Binding Partitions with the Raw Devices
Creating ASM instance and diskgroups using dbca
Creating ASM instance and diskgroup manually without dbca

Creating a database through dbca that uses the ASM storage option.

Pre-creation task:

To include devices in a diskgroup, you can specify either whole-drive device names
or partition device names. Based on the redundancy level, you need more devices
(or partitions). I have two extra disks attached to my machine: one is an internal
hard drive (IDE) and one is an external (SCSI) hard drive.

NOTE: Oracle recommends that you create a single whole-disk partition on each disk
that you want to use.
[root@shree ~]# fdisk -l

Disk /dev/hda: 60.0 GB, 60022480896 bytes


255 heads, 63 sectors/track, 7297 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/hda1 * 1 1567 12586896 c W95 FAT32 (LBA)
/dev/hda2 1568 1632 522112+ 83 Linux
/dev/hda3 1633 2154 4192965 82 Linux swap
/dev/hda4 2155 7297 41311147+ 5 Extended
/dev/hda5 2155 7297 41311116 83 Linux

Disk /dev/hdb: 122.9 GB, 122942324736 bytes


255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

Disk /dev/sda: 122.9 GB, 122942324736 bytes


255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


[root@shree ~]#
The device name varies based on the type of the disk.

Disk type, device name format, and description:

IDE disk - /dev/hdxn. Here x is a letter that identifies the IDE disk and n is the
partition number. For example, /dev/hda is the first disk on the first IDE bus.

SCSI disk - /dev/sdxn. Here x is a letter that identifies the SCSI disk and n is the
partition number. For example, /dev/sda is the first disk on the first SCSI bus.

RAID disk - /dev/rd/cxdypz or /dev/ida/cxdypz. Depending on the RAID controller,
RAID devices can have different device names. In the examples shown, x is a number
that identifies the controller, y is a number that identifies the disk, and z is a
number that identifies the partition. For example, /dev/ida/c0d1 is the second
logical drive on the first controller.
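These naming patterns can be expressed as a small shell classifier (a sketch; the classify_dev helper is hypothetical, not part of any Oracle tooling):

```shell
# Sketch: classify a partition device name using the patterns above.
classify_dev() {
    case "$1" in
        /dev/hd[a-z][0-9]*)             echo "IDE disk partition" ;;
        /dev/sd[a-z][0-9]*)             echo "SCSI disk partition" ;;
        /dev/rd/c*d*p*|/dev/ida/c*d*p*) echo "RAID device partition" ;;
        *)                              echo "unknown" ;;
    esac
}

classify_dev /dev/hdb4          # IDE
classify_dev /dev/sda1          # SCSI
classify_dev /dev/ida/c0d1p2    # RAID
```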

Partitioning Disks:

I have created 4 physical partitions on /dev/sda and 4 on /dev/hdb, just so that it
seems I have more disks available for experiments. If you are going to create ASM
disks on a production server, then it is highly recommended that you create a single
partition on the whole device. One of the reasons is that you then have one controller
per disk, and therefore faster IO.

[root@shree ~]# fdisk /dev/hdb

The number of cylinders for this disk is set to 14946.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/hdb: 122.9 GB, 122942324736 bytes


255 heads, 63 sectors/track, 14946 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes


Device Boot Start End Blocks Id System

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-14946, default 1):<RETURN>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-14946,
default 14946): +10000M

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (1218-14946, default 1218):<RETURN>
Using default value 1218
Last cylinder or +size or +sizeM or +sizeK (1218-14946,
default 14946): +40000M

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (6082-14946, default 6082):<RETURN>
Using default value 6082
Last cylinder or +size or +sizeM or +sizeK (6082-14946,
default 14946): +40000M

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 4
First cylinder (10946-14946, default 10946):<RETURN>
Using default value 10946
Last cylinder or +size or +sizeM or +sizeK (10946-14946,
default 14946): +40000M

Command (m for help): p


Disk /dev/hdb: 122.9 GB, 122942324736 bytes
255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hdb1 * 1 1217 9775521 83 Linux
/dev/hdb2 1218 6081 39070080 83 Linux
/dev/hdb3 6082 10945 39070080 83 Linux
/dev/hdb4 10946 14946 32138032+ 83 Linux

Command (m for help): w


The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
The kernel still uses the old table.

The new table will be used at the next reboot.

NOTE: You do not need to reboot the machine just to make the newly created partition
tables available to the kernel. You can use the command below instead of rebooting
the machine:

[root@shree ~]# partprobe

The same way, I partitioned the /dev/sda and the final partition table
looks like below:
[root@shree ~]# fdisk -l

Disk /dev/hda: 60.0 GB, 60022480896 bytes


255 heads, 63 sectors/track, 7297 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/hda1 * 1 1567 12586896 c W95 FAT32 (LBA)
/dev/hda2 1568 1632 522112+ 83 Linux
/dev/hda3 1633 2154 4192965 82 Linux swap
/dev/hda4 2155 7297 41311147+ 5 Extended
/dev/hda5 2155 7297 41311116 83 Linux

Disk /dev/hdb: 122.9 GB, 122942324736 bytes


255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/hdb1 * 1 1217 9775521 83 Linux
/dev/hdb2 1218 6081 39070080 83 Linux
/dev/hdb3 6082 10945 39070080 83 Linux
/dev/hdb4 10946 14946 32138032+ 83 Linux

Disk /dev/sda: 122.9 GB, 122942324736 bytes


255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 1 3648 29302528+ 83 Linux
/dev/sda2 3649 7296 29302560 83 Linux
/dev/sda3 7297 10944 29302560 83 Linux
/dev/sda4 10945 14946 32146065 83 Linux
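As a cross-check on these listings, the Blocks column can be reproduced from the cylinder counts: each cylinder is 255 heads x 63 sectors/track = 16065 sectors of 512 bytes, and fdisk reports 1K blocks (2 sectors each). A quick shell sketch for /dev/hdb2, which spans cylinders 1218 to 6081:

```shell
# Reproduce fdisk's Blocks figure for /dev/hdb2 from its cylinder span.
start=1218
end=6081
cylinders=$(( end - start + 1 ))      # 4864 cylinders
sectors=$(( cylinders * 16065 ))      # 16065 sectors per cylinder
blocks=$(( sectors / 2 ))             # fdisk blocks are 1K (2 sectors)
echo "blocks: $blocks"                # matches 39070080 in the table
```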

The ASM feature supports two different types of IO:

1. Standard UNIX IO.
2. ASMLib IO.

This document covers both IO types.

ASM Creation/Implementation Using UNIX IO:

Binding raw devices and setting permissions:

I have used two of the newly created partitions, /dev/hdb4 and /dev/sda4, to create a
diskgroup called DATA_GRP. You need to bind these partitions to raw devices on the
Linux system. I have added the lines below to /etc/sysconfig/rawdevices and restarted
the rawdevices service.

[root@shree ~]# cat /etc/sysconfig/rawdevices


# raw device bindings
# format: <rawdev> <major> <minor>
# <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
# /dev/raw/raw2 8 5

/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sda2
/dev/raw/raw3 /dev/sda3
/dev/raw/raw4 /dev/sda4
/dev/raw/raw5 /dev/hdb4
[root@shree ~]# service rawdevices restart
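The five bindings follow a simple numbering pattern, so a small loop can generate them (a sketch; the device list is the one used in this article and would differ on your system):

```shell
# Sketch: generate the rawdevices bindings shown above from a list of
# partitions, numbering the raw devices sequentially from raw1.
bindings=""
i=1
for dev in /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4 /dev/hdb4
do
    line="/dev/raw/raw$i $dev"
    echo "$line"
    bindings="$bindings$line
"
    i=$(( i + 1 ))
done
```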

Also, you need to change the ownership of these devices to the oracle user. Raw
devices are refreshed with the default permissions and ownership every time you
reboot your system. For this reason, I add these lines to /etc/rc.local so that
every time the machine reboots, these devices are assigned the correct ownership
and permissions.

[root@shree ~]# chown oracle.dba /dev/raw/raw1


[root@shree ~]# chown oracle.dba /dev/raw/raw2
[root@shree ~]# chown oracle.dba /dev/raw/raw3
[root@shree ~]# chown oracle.dba /dev/raw/raw4
[root@shree ~]# chown oracle.dba /dev/raw/raw5
[root@shree ~]# chmod 660 /dev/raw/raw1
[root@shree ~]# chmod 660 /dev/raw/raw2
[root@shree ~]# chmod 660 /dev/raw/raw3
[root@shree ~]# chmod 660 /dev/raw/raw4
[root@shree ~]# chmod 660 /dev/raw/raw5

Please add the below lines to the /etc/rc.local

for i in `seq 1 5`
do
chown oracle.dba /dev/raw/raw$i
chmod 660 /dev/raw/raw$i
done

Creating ASM Instance and diskgroups using dbca:

To create an ASM instance using dbca, please connect as the oracle user and type
dbca. Follow these steps to create an ASM instance and diskgroups.

Creating ASM Instance and diskgroups manually without dbca:

create the password file:


[oracle@shree ~]$ orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=changeIt
entries=5

Create required directories:


[oracle@shree ~]$ mkdir -p $ORACLE_BASE/admin/+ASM
[oracle@shree ~]$ cd $ORACLE_BASE/admin/+ASM
[oracle@shree +ASM]$ mkdir bdump
[oracle@shree +ASM]$ mkdir udump
[oracle@shree +ASM]$ mkdir cdump
[oracle@shree +ASM]$ mkdir pfile

Create the init+ASM.ora file:


Using the vi editor, or any other editor you like, create the init+ASM.ora file
under the $ORACLE_HOME/dbs directory and add the lines below into this file.

background_dump_dest='/u01/app/admin/+ASM/bdump'
core_dump_dest='/u01/app/admin/+ASM/cdump'
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile='SHARED'
user_dump_dest='/u01/app/admin/+ASM/udump'

[oracle@shree ~]$ cat $ORACLE_HOME/dbs/init+ASM.ora


background_dump_dest='/u01/app/admin/+ASM/bdump'
core_dump_dest='/u01/app/admin/+ASM/cdump'
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile='SHARED'
user_dump_dest='/u01/app/admin/+ASM/udump'
[oracle@shree ~]$

Create spfile+ASM.ora and start the instance using that file:

[oracle@shree ~]$ export ORACLE_SID=+ASM


[oracle@shree ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Dec 1


14:06:35 2005

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile;

File created.

SQL> startup mount

ASM instance started


Total System Global Area 83886080 bytes
Fixed Size 1217836 bytes
Variable Size 57502420 bytes
ASM Cache 25165824 bytes
ORA-15110: no diskgroups mounted

SQL> alter system set asm_diskstring =


'/dev/raw/raw1', '/dev/raw/raw2', '/dev/raw/raw3',
'/dev/raw/raw4', '/dev/raw/raw5';

System altered.

SQL> alter system set asm_diskgroups = 'DATA_GRP';

System altered.
SQL> create diskgroup data_grp
2 failgroup data_grp_f1 disk '/dev/raw/raw4'
3 failgroup data_grp_f2 disk '/dev/raw/raw5';

Diskgroup created.

SQL> set linesize 100


SQL> col path format a15
SQL> select name, path from v$asm_disk where name is
not null;

NAME PATH
--------------- ---------------
DATA_GRP_0001 /dev/raw/raw5
DATA_GRP_0000 /dev/raw/raw4

SQL> select name, type, total_mb, free_mb from


v$asm_diskgroup;

NAME TYPE TOTAL_MB FREE_MB


--------------- ------ ---------- ----------
DATA_GRP NORMAL 62776 62701

Open the /etc/oratab file and add the following line at the end:
+ASM:/u01/app/oracle/product/10.2.0/db_1:Y

Using the ASM storage option with an existing database that currently uses the
filesystem option:

SQL> set linesize 100


SQL> col path format a15
SQL> col name format a50
SQL> select name from v$datafile;

NAME
--------------------------------------------------
/u01/app/oradata/db102/system01.dbf
/u01/app/oradata/db102/undotbs01.dbf
/u01/app/oradata/db102/sysaux01.dbf
/u01/app/oradata/db102/users01.dbf

SQL> select name, path from v$asm_disk where name is


not null;

no rows selected

SQL> create tablespace indx01 datafile '+DATA_GRP';

Tablespace created.

SQL> drop tablespace indx01;

Tablespace dropped.

SQL> create tablespace indx01 datafile '+DATA_GRP'


SIZE 100m extent management local uniform size 1m;

Tablespace created.
SQL> drop tablespace indx01;

Tablespace dropped.

SQL> create tablespace indx01


2 datafile '+DATA_GRP' SIZE 100m
3 extent management local
4 segment space management auto
5 uniform size 1m;

Tablespace created.

SQL> select name from v$datafile;

NAME
--------------------------------------------------
/u01/app/oradata/db102/system01.dbf
/u01/app/oradata/db102/undotbs01.dbf
/u01/app/oradata/db102/sysaux01.dbf
/u01/app/oradata/db102/users01.dbf
+DATA_GRP/db102/datafile/indx01.258.576105687

ASM Creation/Implementation Using Oracle's ASMLib:

Configure Disks that will be used as ASM using ASMLib:

Current Partition table look like this:


[root@shree ~]# fdisk -l

Disk /dev/hda: 60.0 GB, 60022480896 bytes


255 heads, 63 sectors/track, 7297 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/hda1 * 1 1567 12586896 c W95 FAT32 (LBA)
/dev/hda2 1568 1632 522112+ 83 Linux
/dev/hda3 1633 2154 4192965 82 Linux swap
/dev/hda4 2155 7297 41311147+ 5 Extended
/dev/hda5 2155 7297 41311116 83 Linux

Disk /dev/hdb: 122.9 GB, 122942324736 bytes


255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/hdb1 * 1 1217 9775521 83 Linux
/dev/hdb2 1218 6081 39070080 83 Linux
/dev/hdb3 6082 10945 39070080 83 Linux
/dev/hdb4 10946 14946 32138032+ 83 Linux

Disk /dev/sda: 122.9 GB, 122942324736 bytes


255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 1 3648 29302528+ 83 Linux
/dev/sda3 4866 7905 24418800 83 Linux
/dev/sda4 7906 14946 32146065+ 83 Linux

I decided to use the /dev/sda1 and /dev/hdb4 devices to be configured using the
ASM library drivers.

Download and Install the appropriate ASM Library Driver Software:

Please download the appropriate drivers from the Oracle Technology Network that
best suit your Linux kernel and architecture. You can run the command below to see
which drivers are best suited for your machine.

[root@shree ~]# uname -a


Linux shree 2.6.9-11.0.0.10.3.EL #1 Tue Jul 5
12:20:09 PDT 2005 i686 athlon i386 GNU/Linux
[root@shree ~]# uname -mi
i686 i386

You must install the following packages, where version is the version of the ASM
library driver, arch is the system architecture, and kernel is the version of the
kernel that you are using:

oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm

I downloaded the rpms below and installed them as the root user
(su - root if not logged in as root).

[root@shree asmlib]# rpm -Uvh oracleasm-support-


2.0.1-1.i386.rpm \
> oracleasm-2.6.9-22.EL-
2.0.0-1.i686.rpm \
> oracleasmlib-2.0.1-
1.i386.rpm

Preparing...
###########################################
[100%]
1:oracleasm-support
###########################################
[ 33%]
2:oracleasm-2.6.9-22.EL
###########################################
[ 67%]
3:oracleasmlib
###########################################
[100%]
[root@shree asmlib]#
I downloaded the rpms below and installed them as the root user for my FireWire
project on Red Hat EL 3.6.

[root@shree rhel3]# rpm -e oracleasm-support-


2.0.0-1
[root@shree rhel3]# rpm -Uvh
oracleasm_support_2.0.0_1.i386.rpm \
> oracleasm-2.4.21-27.0.2.ELorafw1-1.0.4-
1.i686.rpm \
> oracleasmlib_2.0.0_1.i386.rpm
Preparing...
###########################################
[100%]
1:oracleasm-support
###########################################
[ 33%]
2:oracleasm-2.4.21-
27.0.2########################################
### [ 67%]
3:oracleasmlib
###########################################
[100%]
[root@shree rhel3]#

Enter the following command to run the oracleasm init script with the configure
option.

[root@shree rhel3]# /etc/init.d/oracleasm configure


Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the


Oracle ASM library
driver. The following questions will determine
whether the driver is
loaded on boot and what permissions it will have.
The current values
will be shown in brackets ('[]'). Hitting <ENTER>
without typing an
answer will keep that current value. Ctrl-C will
abort.

Default user to own the driver interface []: oracle


Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n)
[y]:
Writing Oracle ASM library driver
configuration: [ OK ]
Creating /dev/oracleasm mount
point: [ OK ]
Loading module
"oracleasm": [ OK ]
Mounting ASMlib driver
filesystem: [ OK ]
Scanning system for ASM
disks: [ OK ]
[root@shree rhel3]#

Configure the disk device(s) that will be used in the ASM diskgroup (stamping
devices as ASM disks):
[root@shree root]# /etc/init.d/oracleasm createdisk
DSK1 /dev/sda1
Marking disk "/dev/sda1" as an ASM
disk: [ OK ]
[root@shree root]# /etc/init.d/oracleasm createdisk
DSK2 /dev/hdb4
Marking disk "/dev/hdb4" as an ASM
disk: [ OK ]
[root@shree root]#
[root@shree root]# /etc/init.d/oracleasm listdisks
DSK1
DSK2
[root@shree root]#

NOTE: The disk names (DSK1 and DSK2 in our example) must have these
characteristics: they MUST start with an uppercase letter, and they can contain
only uppercase letters, numbers and underscore characters.
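Those rules can be checked with a small shell helper before running createdisk (a sketch; valid_asm_disk_name is a hypothetical function, not part of the oracleasm tooling):

```shell
# Sketch: check a proposed ASMLib disk name against the rules above
# (must start with an uppercase letter; only uppercase letters,
# digits and underscores allowed).
valid_asm_disk_name() {
    echo "$1" | grep -Eq '^[A-Z][A-Z0-9_]*$'
}

valid_asm_disk_name DSK1 && echo "DSK1 is a valid ASM disk name"
valid_asm_disk_name dsk1 || echo "dsk1 is rejected"
```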

Binding the partitions with the raw devices:

Add the lines below to /etc/sysconfig/rawdevices and restart the rawdevices
service.

[root@shree ~]# cat /etc/sysconfig/rawdevices


# raw device bindings
# format: <rawdev> <major> <minor>
# <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
# /dev/raw/raw2 8 5

/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sda2
/dev/raw/raw3 /dev/sda3
/dev/raw/raw4 /dev/sda4
/dev/raw/raw5 /dev/hdb4

[root@shree ~]# service rawdevices restart

Also, you need to change the ownership of these devices to the oracle user.

[root@shree ~]# chown oracle.dba /dev/raw/raw1


[root@shree ~]# chown oracle.dba /dev/raw/raw2
[root@shree ~]# chown oracle.dba /dev/raw/raw3
[root@shree ~]# chown oracle.dba /dev/raw/raw4
[root@shree ~]# chown oracle.dba /dev/raw/raw5
[root@shree ~]# chmod 660 /dev/raw/raw1
[root@shree ~]# chmod 660 /dev/raw/raw2
[root@shree ~]# chmod 660 /dev/raw/raw3
[root@shree ~]# chmod 660 /dev/raw/raw4
[root@shree ~]# chmod 660 /dev/raw/raw5

Please add the below lines to the /etc/rc.local so that these are
set at every boot.
for i in `seq 1 5`
do
chown oracle.dba /dev/raw/raw$i
chmod 660 /dev/raw/raw$i
done

Creating ASM Instance and Diskgroups using dbca:

To create an ASM instance using dbca, please connect as the oracle user and type
dbca. Follow these steps to create an ASM instance and diskgroups.

Creating ASM Instance and Diskgroup manually without dbca:

Configure the disk device(s) that will be used in the ASM diskgroup (stamping
devices as ASM disks):

[root@shree root]# /etc/init.d/oracleasm createdisk


DSK1 /dev/sda1
Marking disk "/dev/sda1" as an ASM
disk: [ OK ]
[root@shree root]# /etc/init.d/oracleasm createdisk
DSK2 /dev/hdb4
Marking disk "/dev/hdb4" as an ASM
disk: [ OK ]
[root@shree root]#
[root@shree root]# /etc/init.d/oracleasm listdisks
DSK1
DSK2
[root@shree root]#

create the password file:


[oracle@shree ~]$ orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=changeIt
entries=5

Create required directories:

[oracle@shree ~]$ mkdir -p $ORACLE_BASE/admin/+ASM


[oracle@shree ~]$ cd $ORACLE_BASE/admin/+ASM
[oracle@shree +ASM]$ mkdir bdump
[oracle@shree +ASM]$ mkdir udump
[oracle@shree +ASM]$ mkdir cdump
[oracle@shree +ASM]$ mkdir pfile

Create the init+ASM.ora file:

Using the vi editor, or any other editor you like, create the init+ASM.ora file
under the $ORACLE_HOME/dbs directory and add the lines below into this file.
asm_diskgroups='PROD_DB_GRP'
asm_diskstring='ORCL:*'
background_dump_dest='/u01/app/admin/+ASM/bdump'
core_dump_dest='/u01/app/admin/+ASM/cdump'
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile='SHARED'
user_dump_dest='/u01/app/admin/+ASM/udump'

[oracle@shree ~]$ cat $ORACLE_HOME/dbs/init+ASM.ora


asm_diskgroups='PROD_DB_GRP'
asm_diskstring='ORCL:*'
background_dump_dest='/u01/app/admin/+ASM/bdump'
core_dump_dest='/u01/app/admin/+ASM/cdump'
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile='SHARED'
user_dump_dest='/u01/app/admin/+ASM/udump'
[oracle@shree ~]$
Create spfile+ASM.ora and start the instance using that file:

[oracle@shree ~]$ export ORACLE_SID=+ASM


[oracle@shree ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Dec 4


21:17:35 2005

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile;

File created.

SQL> startup mount

ASM instance started


Total System Global Area 83886080 bytes
Fixed Size 1217836 bytes
Variable Size 57502420 bytes
ASM Cache 25165824 bytes
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of
disks for diskgroup
"PROD_DB_GRP"

SQL> show parameter disk

NAME TYPE VALUE


------------------------------------ -----------
------------------------------
asm_diskgroups string
PROD_DB_GRP
asm_diskstring string
ORCL:*
disk_asynch_io boolean TRUE
SQL> create diskgroup data_grp
2 failgroup f1 disk 'ORCL:DSK1'
3 failgroup f2 disk 'ORCL:DSK2';

Diskgroup created.

SQL> set linesize 100


SQL> col name format a15
SQL> col path format a15
SQL> select name, path from v$asm_disk where name is
not null;

NAME PATH
--------------- ---------------
DSK1 ORCL:DSK1
DSK2 ORCL:DSK2

SQL> select name, type, total_mb, free_mb from


v$asm_diskgroup;

NAME TYPE TOTAL_MB FREE_MB


--------------- ------ ---------- ----------
DATA_GRP NORMAL 59999 59897

Open the /etc/oratab file and add the following line at the end:
+ASM:/u01/app/oracle/product/10.2.0/db_1:Y

Connect as the oracle user and type dbca. Follow the steps below to create a
database with ASM storage using dbca.

Click Next.
Select General Purpose and then click Next. You can select another option that best
suits your application.
Enter the database and instance name: db102.
Enter the password for SYS, SYSTEM, DBSNMP and SYSMAN.
Select the ASM option.
Enter the password of the SYS schema of the ASM instance.
This screen shows all the diskgroups that are mounted by the +ASM instance. Select
whichever one you want these database files to reside on.
Click Next.
Click Next.
You can select the sample schemas to be created. If you do not have any schema
(data) to work or practice on, you can go for this option.
Click OK.
Click Next.
Click Next.
Verify the locations of the datafiles, controlfiles and logfiles to make sure that
they will be created under the right location (diskgroup).
Click Finish.
Click OK.
Click OK.
Automatic Storage Management (ASM)

Automatic Storage Management (ASM) is Oracle's logical volume manager; it uses OMF
(Oracle Managed Files) to name and locate the database files. It can use raw disks,
filesystems, or files which can be made to look like disks, as long as the device is
raw. ASM uses its own database instance to manage the disks; it has its own processes
and pfile or spfile, and it uses ASM disk groups to manage disks as one logical unit.

The benefits of ASM are:

Provides automatic load balancing over all the available disks, thus reducing hot
spots in the file system
Prevents fragmentation of disks, so you don't need to manually relocate data to
tune I/O performance
Adding disks is straightforward - ASM automatically performs online disk
reorganization when you add or remove storage
Uses redundancy features available in intelligent storage arrays
The storage system can store all types of database files
Using disk groups makes configuration easier, as files are placed into disk groups
ASM provides striping and mirroring (fine and coarse grain - see below)
ASM and non-ASM Oracle files can coexist
ASM is free!

The three components of ASM are:

ASM Instance - a special instance that does not have any data files; there is only
one ASM instance per server, and it manages all ASM files for each database. The
ASM instance looks after the disk groups and allows access to the ASM files.
Databases access the files directly but use the ASM instance to locate them.
If the ASM instance is shut down, then the database will either be automatically
shut down or crash.

ASM Disk Groups - disks are grouped together via disk groups; these are very much
like logical volumes.

ASM Files - files are stored in the disk groups and benefit from the disk group
features, i.e. striping and mirroring.

Summary:

A database is allowed to have multiple disk groups
You can store all of your database files as ASM files
A disk group comprises a set of disk drives
ASM disk groups are permitted to contain files from more than one database
Files are always spread over every disk in an ASM disk group and belong to one
disk group only

ASM allocates disk space in allocation units of 1MB.

Not managed by ASM - Oracle binaries, alert log, trace files, init.ora or password
file
Managed by ASM - Datafiles, SPFILEs, redo log files, archived log files, RMAN
backup sets / image copies, flash recovery area.

ASM Processes

There are a number of new processes that are started when using ASM; both the ASM
instance and the database will start new processes.

ASM Instance:

RBAL (rebalance master) - coordinates the rebalancing when a new disk is added or
removed
ARB[1-9] (rebalance) - actually does the work requested by the RBAL process (up to
9 of these)

Database Instance:

RBAL - opens and closes the ASM disks
ASMB - connects to the ASM instance via a session and is the communication channel
between ASM and the RDBMS; requests could be file creation, deletion or resizing,
and also various statistics and status messages

ASM registers its name and disks with the RDBMS via the Cluster Synchronization
Service (CSS). This is why the Oracle cluster services must be running, even if the
node and instance are not clustered. The ASM instance must be in mount mode in order
for an RDBMS to use it, and you only require the instance type in the parameter file.

ASM Disk Groups

An ASM disk group is a logical volume that is created from the underlying physical
disks. If storage grows you simply add disks to the disks groups, the number of groups
can remain the same.

ASM file management has a number of good benefits over normal 3rd party LVM's

performance
redundancy
ease of management
security

ASM Striping

ASM stripes files across all the disks within the disk group, thus increasing
performance; each stripe is called an allocation unit. ASM offers two types of
striping, which depends on the type of database file:

Coarse striping - used for datafiles and archive logs (1MB stripes)

Fine striping - used for online redo logs, the controlfile and flashback files
(128KB stripes)
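To make the difference concrete, here is a quick shell sketch of how many stripes a hypothetical 100 MB file would be split into under each scheme (the 100 MB figure is my own example; the stripe sizes are the ones quoted above):

```shell
# Stripe counts for a hypothetical 100 MB file under the two schemes.
file_kb=$(( 100 * 1024 ))              # 100 MB expressed in KB
coarse_stripes=$(( file_kb / 1024 ))   # 1 MB coarse stripes
fine_stripes=$(( file_kb / 128 ))      # 128 KB fine stripes
echo "coarse: $coarse_stripes stripes"
echo "fine: $fine_stripes stripes"
```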

ASM Mirroring

Disk mirroring provides data redundancy, this means that if a disk were to fail Oracle will
use the other mirrored disk and would continue as normal. Oracle mirrors at the extent
level, so you have a primary extent and a mirrored extent. When a disk fails, ASM
rebuilds the failed disk using mirrored extents from the other disks within the group, this
may have a slight impact on performance as the rebuild takes place.

All disks that share a common controller are in what is called a failure group, you can
ensure redundancy by mirroring disks on separate failure groups which in turn are on
different controllers, ASM will ensure that the primary extent and the mirrored extent are
not in the same failure group. When mirroring you must define failure groups otherwise
the mirroring will not take place.

There are three forms of Mirroring

External redundancy - doesn't use failure groups and thus is effectively a
no-mirroring strategy
Normal redundancy - provides two-way mirroring of all extents in a disk group,
which requires two failure groups
High redundancy - provides three-way mirroring of all extents in a disk group,
which requires three failure groups
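Since every extent is copied once per mirror, usable space is roughly the raw disk group size divided by the number of extent copies. A quick sketch using the DATA_GRP total from the earlier listing (the arithmetic is my own rough illustration, not from the source):

```shell
# Approximate usable space per redundancy level for a 62776 MB group
# (the TOTAL_MB reported for DATA_GRP earlier in this article).
raw_mb=62776
external_mb=$raw_mb              # no mirroring
normal_mb=$(( raw_mb / 2 ))      # two-way mirrored extents
high_mb=$(( raw_mb / 3 ))        # three-way mirrored extents
echo "external: ${external_mb} MB usable"
echo "normal:   ${normal_mb} MB usable"
echo "high:     ${high_mb} MB usable"
```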

ASM Files

The data files you create under ASM are not like normal database files: when you
create a file, you only need to specify the disk group that the file needs to be
created in; Oracle will then create a striped file across all the disks within the
disk group and carry out any redundancy required. ASM files are OMF files. ASM
naming depends on the type of file being created; here are the different
file-naming conventions:

fully qualified ASM filenames - used when referencing existing ASM files (+dgroupA/dbs/controlfile/CF.123.456789)
numeric ASM filenames - also only used when referencing existing ASM files (+dgroupA.123.456789)
alias ASM filenames - employ a user-friendly name; used when creating new files and when referring to existing files
alias filenames with templates - strictly for creating new ASM files
incomplete ASM filenames - consist of a disk group only and are used for creation only
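To make the naming conventions concrete, here is a short sketch; the disk group, directory and file names are hypothetical:

```sql
-- Incomplete name: only the disk group is given; ASM generates an
-- OMF filename for the new data file.
CREATE TABLESPACE users2 DATAFILE '+dgroupA' SIZE 100M;

-- Alias name: attach a user-friendly alias to an existing ASM file so it
-- can be referenced without the system-generated numeric suffixes.
ALTER DISKGROUP dgroupA
  ADD ALIAS '+dgroupA/mydb/users2_01.dbf'
  FOR '+dgroupA/mydb/datafile/users2.263.612345678';
```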

Creating ASM Instance

Creating an ASM instance is like creating a normal instance but the parameter file will be smaller; ASM does not mount any data files, it only maintains ASM metadata. ASM normally needs only about 100MB of disk space and consumes about 25MB of memory for the SGA. ASM does not have a data dictionary like a normal database, so you must connect to the instance using either O/S authentication as SYSDBA or SYSOPER, or using a password file.
The main parameters in the instance parameter file will be

instance_type - either RDBMS or ASM
instance_name - the name of the ASM instance
asm_power_limit - maximum speed of rebalancing disks; the default is 1 and the range is 1 - 11 (11 being the fastest)
asm_diskstring - the locations where Oracle will look during disk discovery
asm_diskgroups - disk groups that will be mounted automatically when the ASM instance is started

You can start an ASM instance with nomount or mount, but not open. When you shut down an ASM instance, the same shutdown mode (normal, immediate, etc.) is applied to the database instances that depend on it.

ASM Configuration

Parameter file (init+asm.ora):
  instance_type=asm
  instance_name=+asm
  asm_power_limit=2
  asm_diskstring='\\.\f:','\\.\g:','\\.\h:'
  asm_diskgroups=dgroupA, dgroupB

Note: the file should be created in $ORACLE_HOME/database


Create the service (Windows only):
  c:\> oradim -new -asmsid +ASM -startmode manual

Set the ORACLE_SID environment variable:
  c:\> set ORACLE_SID=+ASM     (Windows)
  $ export ORACLE_SID=+ASM     (Unix)

Log in to the ASM instance and start it:
  c:\> sqlplus /nolog
  SQL> connect / as sysdba
  SQL> startup pfile=init+asm.ora

Note: sometimes you get an ORA-15110, which means the disk groups have not been created yet.
ASM Operations

Instance name:
  select instance_name from v$instance;

Create disk group:
  create diskgroup diskgrpA high redundancy
    failgroup failgrpA disk '\\.\f:' name disk1
    failgroup failgrpB disk '\\.\g:' name disk2 force
    failgroup failgrpC disk '\\.\h:' name disk3;

  create diskgroup diskgrpA external redundancy

  Note: force is used if the disk has been in a previous disk group;
  external redundancy relies on third-party mirroring, i.e. a SAN.

Add disks to a group:
  alter diskgroup diskgrpA add disk
    '\\.\i:' name disk4,
    '\\.\j:' name disk5;

Remove disks from a group:
  alter diskgroup diskgrpA drop disk disk6;

Remove disk group:
  drop diskgroup diskgrpA including contents;

Resize disk group:
  alter diskgroup diskgrpA resize disk 'disk3' size 500M;

Undo remove disk:
  alter diskgroup diskgrpA undrop disks;

Display diskgroup info:
  select name, group_number, type, state, total_mb, free_mb
  from v$asm_diskgroup;

  select group_number, disk_number, name, failgroup, create_date, path, total_mb
  from v$asm_disk;

  select group_number, operation, state, power, actual, sofar, est_work, est_rate, est_minutes
  from v$asm_operation;

Rebalance a disk group (after a failed disk has been replaced):
  alter diskgroup diskgrpA rebalance power 8;

  Note: to speed up rebalancing, increase the power level up to 11; remember that this will also decrease performance. You can also use the wait option, which holds the command line until the rebalance has finished.

Dismount or mount a disk group:
  alter diskgroup diskgrpA dismount;
  alter diskgroup diskgrpA mount;

Check a disk group's integrity:
  alter diskgroup diskgrpA check all;

Disk group directory:
  alter diskgroup diskgrpA add directory '+diskgrpA/dir1';

  Note: this is required if you use aliases when creating database files, i.e. '+diskgrpA/dir/control_file1'

Add and drop aliases:
  alter diskgroup diskgrpA add alias '+diskgrpA/dir/second.dbf'
    for '+diskgrpB/datafile/table.763.1';
  alter diskgroup diskgrpA drop alias '+diskgrpA/dir/second.dbf';

Drop files from a disk group:
  alter diskgroup diskgrpA drop file '+diskgrpA/payroll/payroll.dbf';
Using ASM Disks

Examples of using ASM disks:
  create tablespace test datafile '+diskgrpA' size 100m;
  alter tablespace test add datafile '+diskgrpA' size 100m;
  alter database add logfile group 4 ('+dg_log1','+dg_log2') size 100m;
  alter system set log_archive_dest_1='location=+dg_arch1';
  alter system set db_recovery_file_dest='+dg_flash';

Display performance:
  select path, reads, writes, read_time, write_time,
         read_time/decode(reads,0,1,reads) "AVGRDTIME",
         write_time/decode(writes,0,1,writes) "AVGWRTIME"
  from v$asm_disk_stat;

RMAN backup

RMAN is the only way to back up files stored on ASM disks.

Backup:
  backup as copy database format '+dgroup1';

Recovery Manager (RMAN)

RMAN can do everything a normal backup can do; in addition, RMAN has its own backup catalog to record the backups that have taken place. The database can run in one of two modes: archivelog or noarchivelog.

archivelog - Oracle saves the filled redo log files, which means you can recover the database to any point in time using the archived logs
noarchivelog - the redo logs are overwritten and not saved, so the database can only be recovered from the last backup

There are several types of backup:

whole backup - you back up the database as a whole, which includes the controlfiles and spfile
partial backup - you back up only part of the database, such as a tablespace or one data file
consistent - a consistent backup does not need to go through recovery when it is restored; normally associated with a closed backup
inconsistent - an inconsistent backup always needs to be recovered
open - a backup taken while the database is running, also known as a hot, warm or online backup
closed - a backup taken while the database is shut down, also known as a cold or offline backup

The benefits of using RMAN are:

Human error is minimized as RMAN keeps track of all the backups
Simple command interface
Unused block compression lets you skip unused data blocks, thus saving space and time
RMAN can be fully automated
Can perform error checking when backing up or during recovery
Can perform image copies, which are similar to operating system backups
Can be used with 3rd-party backup management software like Veritas NetBackup
It is well integrated with OEM, so you can make use of Oracle's scheduler

RMAN Architecture

RMAN operates via a server session connecting to the target database; the metadata it keeps about the target is called the RMAN repository. The repository contains information on:

Data file backup sets and copies
Archived redo log copies and backup sets
Tablespace and data file information
Stored scripts (can only be used with a recovery catalog)
RMAN configuration settings

The Recovery Catalog

RMAN uses the controlfile on the target database to store repository information about any backups for that server. This information can also be stored in a recovery catalog (optional), which resides in its own database (default size about 115MB) that should be dedicated to RMAN; information is still written to the controlfile even when a recovery catalog is used.
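A recovery catalog is typically set up along these lines; the user, password, tablespace and connect-string names below are examples, not fixed values:

```sql
-- In the dedicated catalog database: create an owner for the catalog
CREATE TABLESPACE rman_ts DATAFILE 'rman_ts01.dbf' SIZE 115M;
CREATE USER rman IDENTIFIED BY rman
  DEFAULT TABLESPACE rman_ts QUOTA UNLIMITED ON rman_ts;
GRANT RECOVERY_CATALOG_OWNER TO rman;

-- Then from the RMAN client:
--   rman> connect catalog rman/rman@catdb
--   rman> create catalog;
--   rman> connect target /
--   rman> register database;
```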

The information stored in the controlfile is kept in reusable sections called circular reuse records and non-circular reuse records. The circular reuse records hold non-critical information that can be overwritten if needed; the non-circular reuse sections include data file and redo log information. RMAN can back up archive logs, controlfiles, data files, spfiles and tablespaces; it does not back up temporary tablespaces, online redo logs, the password file or the init.ora.

The controlfile-based repository retains data only for the time specified by the instance parameter CONTROL_FILE_RECORD_KEEP_TIME, which defaults to seven days.

Useful Views
  V$CONTROLFILE_RECORD_SECTION - displays information about the control file record sections

Media Management Layer

If you back up to tape you require additional software called the MML (media management layer), or media manager. The MML is an API that interfaces with different vendors' tape libraries.

RMAN terminology

backup piece - an operating system file containing the backup of a data file, controlfile, etc.
backup set - a logical structure that contains one or more backup pieces; all related backup pieces are contained in a backup set
image copy - similar to operating system copies like cp or dd; contains all blocks, even unused ones (disk only)
proxy copy - the media manager is given control of the copying process
channel - channel allocation is the method of connecting RMAN and the target database while also specifying the type of backup device, i.e. disk or tape; channels can be created manually or automatically

Connecting to RMAN

There are a number of ways to connect to RMAN, depending on where the recovery catalog is:

Set the Oracle SID:
  c:\> set ORACLE_SID=D01               (Windows)
  $ ORACLE_SID=D01; export ORACLE_SID   (Unix)

Connect to the target server using the ORACLE_SID and local controlfile:
  c:\> rman
  rman> connect target /

  c:\> rman target=sys/<password>@d01

Connect to the recovery catalog:
  rman> connect catalog rman_user/password@d01

Connect to the target and the recovery catalog:
  rman> connect target orcl catalog rman_user/password@d01
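Putting these pieces together, a minimal session against a local target with a catalog might look like this; the SID, credentials and connect string are placeholders:

```
$ export ORACLE_SID=D01
$ rman
rman> connect target /
rman> connect catalog rman_user/password@catdb
rman> backup database;
rman> list backup summary;
rman> exit
```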
Configuring RMAN persistent settings

RMAN's persistent settings are stored in the controlfile of the target database (which is why the database must be at least mounted), or in a recovery catalog if one is used ("# default" means the parameter is at its default setting):

rman> show all

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'z:/orabackup/%U';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\SNCFD01.ORA'; # default
Set the default device to a tape drive:
  configure default device type to sbt;

Set the default device to a disk drive:
  configure default device type to disk;

Set the default backup to an image copy:
  configure device type disk backup type to copy;

Default disk backup to a compressed backupset (up to 20% ratio):
  configure device type disk backup type to compressed backupset;

Default tape backup to a compressed backupset (up to 20% ratio):
  configure device type sbt backup type to compressed backupset;

Set the degree of parallelism:
  configure device type disk parallelism 4;
  configure device type sbt parallelism 4;

Backup optimization:
  configure backup optimization on;

  Note: this ensures that RMAN doesn't perform a backup if it has already backed up identical versions of the files.

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30


DAYS
CONFIGURE CONTROLFILE AUTOBACKUP ON
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET
PARALLELISM 1
My basic
rman config CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE
TYPE DISK TO 's:\ora_backup\controlfile_%F'

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT


's:\ora_backup\ora_dev_f%t_s%s_s%p' SET CONTROLFILE
AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
's:\ora_backup\controlfile_%F';

Channel Parameters/Options

The parameters/options are used to control the resources used by RMAN; there are many options, so it is probably best to consult the Oracle documentation.

channel device type - sets the default location of backups; can be disk or sbt
channel rate - limits i/o bandwidth in KB, MB or GB
channel maxpiecesize - limits the size of the backup pieces
channel maxsetsize - limits the size of the backup sets
channel connect - instructs a specific instance to perform an operation
duration - controls the time allowed for a backup job (hours:minutes)
parms - sends specific instructions to the tape library

examples:
  rman> configure channel device type disk format 's:\ora_backup\ora_dev_f%t_s%s_s%p';
  rman> configure channel device type disk rate = 5m;
  rman> configure channel device type disk maxpiecesize = 2g;
  rman> configure channel device type disk maxsetsize = 10g;

Backup Retention

The default is redundancy 1, which means always attempt to keep one backup image or backupset of every data file, archive log and controlfile.

keep backups for 30 days:
  rman> configure retention policy to recovery window of 30 days;
keep at least 2 copies:
  rman> configure retention policy to redundancy 2;
reset back to the default:
  rman> configure retention policy clear;
extend the retention period:
  rman> change backupset tag monthly_backup keep until time '01-dec-07' logs;

Backup Tagging

examples:
  rman> backup database tag monthly_backup;
  rman> backup as copy database tag monthly_backup;

You can use format options with backup commands to specify a location and name for the backup pieces:

  %F - combines the database identifier (DBID), day, month, year and sequence number
  %U - a system-generated unique filename (default)
  %u - an 8-character system-generated name
  %d - name of the database
  %s - backup set number
  %t - backup set timestamp
  %p - piece number within the backup set
Controlfile Backup

examples:
  rman> backup current controlfile;
  rman> configure controlfile autobackup on;   (default location is the flash recovery area)
  rman> configure controlfile autobackup format for device type disk to 'z:\orabackup\controlfile_%F';

Tablespace Excludes

examples:
  rman> configure exclude for tablespace test;         (exclude the test tablespace from backups)
  rman> configure exclude for tablespace test clear;   (remove the exclusion of the test tablespace)
  rman> backup database noexclude;                     (ignore any exclude settings)

Creating Backups

Backup Sets:
  rman> run {
    allocate channel c1 type disk;
    backup database format 'db_%u_%d_%s';   (the backup set names for the data files)
    backup format 'log_t%t_s%s_p%p'         (the backup set names for the archive logs)
      (archivelog all);
  }

  rman> run {
    allocate channel c1 type disk;
    allocate channel c2 type disk;
    backup
      (datafile 1,2,3 channel c1)
      (archivelog all channel c2);
  }

  rman> backup as compressed backupset database;
  rman> backup incremental level 0 database;   (baseline incremental backup - a full copy of the database)
  rman> backup incremental level 1 database;   (differential incremental backup - requires a level 0 baseline)
  rman> backup incremental level 1 cumulative database;   (backs up all changes since the last level 0 backup, not a full backup)
  rman> backup as backupset copy of tablespace sysaux;   (create a backup set from an image copy)

  ## make sure that all redo logs are archived - see redo
  rman> backup database plus archivelog;
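A common incremental strategy built from the commands above is a weekly level 0 with daily level 1 backups; the schedule itself is only a suggested sketch:

```
# Sunday - baseline
rman> backup incremental level 0 database;

# Monday-Saturday - differential level 1: only blocks changed since the
# most recent level 0 or level 1 backup
rman> backup incremental level 1 database;

# Alternative for faster restores: cumulative level 1 copies all blocks
# changed since the last level 0 (larger backups, fewer sets to apply)
rman> backup incremental level 1 cumulative database;
```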

Backup Images:
  rman> run {
    allocate channel c1 type disk;
    copy datafile 1 to 'z:\orabackup\system01.dbf',
         current controlfile to 'z:\orabackup\control01.ctl';
  }

  rman> backup as copy database;
  rman> backup as copy copy of database;
  rman> backup as copy copy of tablespace sysaux;
  rman> backup as copy datafile 2;

Parallel Streams:
  rman> configure device type disk parallelism 3;   (must have 3 channels)

  Note: only configure the number of streams up to the number of channels; if you configure more, they will not start. Remember that you need multiple channels configured to use the streams.

Backup the controlfile and spfile to the flash recovery area:
  # you need to clear the 'controlfile autobackup format'; the flash recovery area will then be used
  rman> configure controlfile autobackup format for device type disk clear;
  rman> backup current controlfile;

Other examples:
  rman> backup device type disk copies 2 datafile 1 format '/disk1/df1_%U', '/disk2/df1_%U';
  rman> backup as copy copy of database from tag 'test' check logical tag 'duptest';
  rman> backup database plus archivelog;
  rman> backup as copy duration 04:00 minimize time database;
  rman> backup as compressed backupset database plus archivelog;

  Note:
    check logical - perform a logical check of the backup files
    duration - time limit for the backup (hours:minutes)
    minimize time - perform the backup as fast as possible
    compressed - compress the backup set; remember that recovery will take longer as it needs to uncompress
Validating/Cross Checking Backups

You can validate a backup set before you restore, which ensures that the backup files exist in the proper locations and that they are readable and free from any logical or physical corruption. You can also crosscheck backup sets to make sure they are still available and have not been deleted (backup pieces can be deleted at the operating system level).

Validate a backup:
  rman> validate backupset 1;
Crosscheck:
  rman> crosscheck backupset 1;

Viewing backups

The backup information in the V$ views is always located in the target database's controlfile.

The list commands are used to determine the files impacted by the change, crosscheck and delete commands. The report command is accurate when the controlfile and the RMAN repository are synchronized, which is performed by the change, crosscheck and delete commands.

list all image copies:
  rman> list copy;
  rman> list archivelog all;
list all backups:
  rman> list backup;
  rman> list backupset by backup [summary|verbose];
list backed-up files:
  rman> list backupset by file;
list backed-up databases:
  rman> list backup of database;
list all backups of datafile 1:
  rman> list backup of datafile 1;
list backed-up controlfiles:
  rman> list backup of controlfile;
list backup scripts:
  rman> list script names;
  rman> list global script names;
list all backups no longer required according to the retention policy:
  rman> report obsolete;
list all the physical datafiles:
  rman> report schema;
list files that require backing up:
  rman> report need backup;
Useful Views
  v$controlfile_record_section - displays information about the control file record sections
  v$backup_files - lists each file backed up; also has a compressed option
  v$backup_set - lists backup sets
  v$backup_piece - lists backup pieces
  v$backup_redolog - lists backed-up archive logs
  v$backup_spfile - lists backed-up spfiles
  v$backup_device - names of SBT devices that have been linked to RMAN
  v$rman_configuration - lists all changed configuration settings (a good place to check the config)
  v$rman_status - status of all completed RMAN jobs
  v$backup_corruption - provides important corruption information
  v$copy_corruption - provides important corruption information

Deleting Backups

To remove old archive logs use the "delete ... all" option; if all is omitted, only the archive logs in the primary destination will be deleted.

Examples:
  rman> delete backupset 12;
  rman> delete backupset tag=monthly_backup;
  rman> delete copy of datafile 6;
  rman> delete copy of archivelog all;
  rman> delete obsolete;

  Note: obsolete deletes all backups that are no longer needed according to the retention policy.

Catalog commands

The catalog command helps you identify and catalog any files that aren't recorded in RMAN's repository, so that they become known to RMAN.

catalog a data file copy:
  rman> catalog datafilecopy 'c:\oracle\backup\users01.dbf';
catalog a backup piece:
  rman> catalog backuppiece 'c:\oracle\backup\backup_20.bkp';
search for uncataloged files in a directory:
  rman> catalog start with 'c:\oracle\backup';
delete a discrepancy in the catalog:
  rman> delete force noprompt archivelog sequence 40;

Block change tracking

Block change tracking speeds up incremental backups of very large databases; when you enable block change tracking a new process (CTWR) is started:

Enabling:
  alter database enable block change tracking using file 'c:\oracle\tracking\block_tracking.log';
Viewing:
  select filename, status, bytes from v$block_change_tracking;
Disabling:
  alter database disable block change tracking;

Redo

All the changes Oracle makes to the database are recorded in the redo log files; these files, along with any archived redo logs, enable a DBA to recover the database to any point in the past. Oracle writes all committed changes to the redo logs before applying them to the data files, so the redo logs guarantee that no committed changes are ever lost. Redo log files consist of redo records, which are groups of change vectors, each referring to a specific change made to a data block in the database. The changes are first kept in the redo buffer but are quickly written to the redo log files.

There are two types of redo log files: online and archived. Oracle uses the concept of groups, and a minimum of 2 groups is required, each group having at least one file. They are used in a circular fashion: when one group fills up, Oracle switches to the next log group.

The LGWR process writes redo information from the redo buffer to the online redo logs when:

a user commits a transaction
the redo log buffer becomes 1/3 full
the redo buffer contains 1MB of changed records
a log switch occurs
The log group can be in one of four states:

Current - the log group that is actively being written to
Active - the files in the log group are required for instance recovery
Inactive - the files in the log group are not required for instance recovery and can be overwritten
Unused - the log group has never been written to (a new group)

A log file can be in one of four states:

Invalid - the file is corrupt or inaccessible
Stale - the contents of the file are incomplete (for example, a new member that has never been used)
Deleted - the file is no longer being used
<blank> - the file is currently being used
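The group and member states above can be checked with a quick query (the output naturally depends on your database):

```sql
-- Group-level states (CURRENT / ACTIVE / INACTIVE / UNUSED)
SELECT group#, sequence#, members, archived, status FROM v$log;

-- Member-level states (INVALID / STALE / DELETED / blank) and file locations
SELECT group#, status, member FROM v$logfile ORDER BY group#;
```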

Log group and log files commands

Configuration

Creating a new log group:
  alter database add logfile group 3 ('c:\oracle\redo3a.log','c:\oracle\redo3b.log') size 10M;

Adding a new log file to an existing group:
  alter database add logfile member 'c:\oracle\redo3c.log' to group 3;

Renaming a log file in an existing group:
  shutdown the database
  rename the file at the operating system level
  startup the database in mount mode
  alter database rename file 'old name' to 'new name';
  open the database
  backup the controlfile

Drop a log group:
  alter database drop logfile group 3;

Drop a log file from an existing group:
  alter database drop logfile member 'c:\oracle\redo3c.log';

Maintaining

Clearing log groups:
  alter database clear logfile group 3;
  alter database clear unarchived logfile group 3;

  Note: use the unarchived option when a log group has not been archived

Log switches and checkpointing:
  alter system checkpoint;
  alter system switch logfile;
  alter system archive log current;
  alter system archive log all;

# The differences between them are:

switch logfile - switches the logfile and returns the prompt immediately; archiving takes place in the background
archive log current - switches the logfile and returns the prompt only when the logfile has been successfully archived
archive log all - archives all full, unarchived log files

Note: checkpoints are discussed elsewhere in these notes.


Display the redo usage:
  select le.leseq "Current log sequence No",
         100*cp.cpodr_bno/le.lesiz "Percent Full",
         cp.cpodr_bno "Current Block No",
         le.lesiz "Size of Log in Blocks"
  from x$kcccp cp, x$kccle le
  where le.leseq = cp.cpodr_seq
  and bitand(le.leflg,24) = 8;
Useful Views
  V$LOG - displays log file information from the control file
  V$LOGFILE - contains information about redo log files

Archived Logs

When a redo log file fills up, and before it is reused, the file is archived for safe keeping; these archived files, together with the redo log files, can recover a database to any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the archiving automatically.

The log files can be written to a number of destinations (up to 10 locations), even to a standby database; using the parameters log_archive_dest_n and log_archive_min_succeed_dest you can control how Oracle writes its archived log files.

Configuration

Enabling:
  alter system set log_archive_dest_1='location=c:\oracle\archive' scope=spfile;
  alter system set log_archive_format='arch_%d_%t_%r_%s.log' scope=spfile;
  shutdown the database
  startup the database in mount mode
  alter database archivelog;
  open the database

Archive format options:
  %r - resetlogs ID (required)
  %s - log sequence number (required)
  %t - thread number (required)
  %d - database ID (not required)

Disabling:
  alter database noarchivelog;

Displaying:
  archive log list;
  select name, log_mode from v$database;
  select archiver from v$instance;

Maintenance

Display system parameters:
  show parameter log_archive_dest
  show parameter log_archive_format
  show parameter log_archive_min_succeed_dest
Useful Views
  V$ARCHIVED_LOG - displays the archived log files
  V$INSTANCE - displays whether the database is in archive mode
  V$DATABASE - displays whether the database is in archive mode

Oracle Database Architecture overview

There are two terms that are used with Oracle:

Database - a collection of physical operating system files
Instance - a set of Oracle processes and an SGA (an allocation of memory)

These two are very closely related, but a database can be mounted and opened by many instances (as in RAC). An instance may mount and open only a single database at any one point in time.

The File Structure

There are a number of different file types that make up a database:

Parameter file - tells Oracle where to find the control files and details how big the memory areas will be, etc.
Data files - hold the tables, indexes and all other segments
Temp files - used for disk-based sorting and temporary storage
Redo log files - the transaction logs
Undo files - allow a user to roll back a transaction and provide read consistency
Archive log files - redo log files that have been archived
Control file - details the location of the data and log files and other relevant information about their state
Password file - used to authenticate users logging in to the database
Log files - the alert.log contains database changes and events, including startup information
Trace files - debugging files

Parameter Files

In order to start, Oracle needs some basic information, which is supplied by a parameter file. The parameter file can be either a pfile or an spfile:

pfile - a very simple plain-text file which can be manually edited with vi or notepad
spfile - a binary file which cannot be manually edited (Oracle 9i or higher required)

The parameter file for Oracle is the commonly known file init.ora or init<oracle sid>.ora; the file contains key/value pairs of information that Oracle uses when starting the database, such as the database name, cache sizes, the location of the control files, etc.
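A minimal pfile might look like the following sketch; the file names, paths and sizes are illustrative only:

```
# initD10.ora - minimal example pfile (values are illustrative)
db_name=D10
control_files=('c:\oracle\oradata\d10\control01.ctl',
               'c:\oracle\oradata\d10\control02.ctl')
db_block_size=8192
sga_target=600M
pga_aggregate_target=200M
undo_management=auto
undo_tablespace=undotbs1
```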

By default the location of the parameter file is:

Windows - $ORACLE_HOME\database
Unix - $ORACLE_HOME/dbs

The main difference between the spfile and the pfile is that instance parameters can be changed dynamically and persisted when using an spfile, whereas pfile changes require an instance restart to take effect.

To convert from one format to the other you can do the following:

create a pfile from an spfile:
  create pfile='c:\oracle\pfile\initD10.ora' from spfile;
startup the db using a pfile:
  startup pfile='c:\oracle\pfile\initD10.ora';
create an spfile from a pfile:
  create spfile from pfile;
display the spfile location:
  show parameter spfile

Data Files
By default Oracle will create at least two data files: the system data file, which holds the data dictionary, and the sysaux data file, in which non-dictionary objects are stored. However, there will usually be many more, holding various types of data; a data file belongs to one tablespace only (see tablespaces for further details).

Data files can be stored on a number of different filesystem types

Cooked - normal filesystems whose files can be listed with "ls" commands in Unix
Raw - raw disk partitions which cannot be viewed directly, normally used to avoid filesystem buffering
ASM - Automatic Storage Management, Oracle's own database filesystem (see asm for further details)
Clustered FS - a special filesystem used in Oracle RAC environments

Data files contain the following

Segments - database objects: a table, an index, rollback segments. Every object that consumes space is a segment. Segments themselves consist of one or more extents.
Extents - a contiguous allocation of space in a file. Extents, in turn, consist of data blocks.
Blocks - the smallest unit of space allocation in Oracle. Blocks normally are 2KB, 4KB, 8KB, 16KB or 32KB in size but can be larger.

A segment is therefore made up of extents, which in turn are made up of contiguous blocks.

The parameter DB_BLOCK_SIZE determines the default block size of the database. The right block size depends on what you are going to do with the database: if you are using small rows then use a small block size (Oracle recommends 8KB); if you are using LOBs then the block size should be larger.

  2KB or 4KB - OLTP (online transaction processing) databases benefit from a small block size
  8KB (default) - most databases are fine using the default size
  16KB or 32KB - DW (data warehouse) and media databases benefit from a larger block size
Notes

You can have different block sizes within the database, with each tablespace having a different block size depending on what is stored in it. For example, the system tablespace could use the default 8KB while an OLTP tablespace uses a block size of 4KB.

There are a few parameters that cannot be changed after installing Oracle, and DB_BLOCK_SIZE is one of them, so make sure to select the correct value when installing Oracle.
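As a sketch, using a non-default block size for one tablespace requires a matching buffer cache to be configured first; the names, paths and sizes here are examples:

```sql
-- A buffer cache for the non-default block size must exist first
ALTER SYSTEM SET db_4k_cache_size = 64M;

-- Now a tablespace can be created with the 4KB block size;
-- the database-wide default remains DB_BLOCK_SIZE
CREATE TABLESPACE oltp_data
  DATAFILE 'c:\oracle\oradata\d10\oltp01.dbf' SIZE 500M
  BLOCKSIZE 4K;
```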

A data block is made up of the following; the two main areas are the free space and the data area.

Header - contains information regarding the type of block (a table block, index block, etc.), transaction information regarding active and past transactions on the block, and the address (location) of the block on disk
Table directory - contains information about the tables that store rows in this block
Row directory - contains information describing the rows on the block; this is an array of pointers to where the rows are to be found in the data portion of the block
Block overhead - the three pieces above are known as the block overhead and are used by Oracle to manage the block itself
Free space - available space within the block
Data - row data within the block
Tablespaces

A tablespace is a container which holds segments. Each and every segment belongs to
exactly one tablespace. Segments never cross tablespace boundaries. A tablespace itself
has one or more files associated with it. An extent will be contained entirely within one
data file.

So in summary the Oracle hierarchy is as follows:

A database is made up of one or more tablespaces
A tablespace is made up of one or more data files; a tablespace contains segments
A segment (table, index, etc.) is made up of one or more extents. A segment exists in a tablespace but may have data in many data files within that tablespace.
An extent is a contiguous set of blocks on disk. An extent is in a single tablespace and is always in a single file within that tablespace.
A block is the smallest unit of allocation in the database and the smallest unit of i/o used by the database.

The minimum tablespaces required are the system and sysaux tablespaces. Tablespaces are used for the following reasons:

Tablespaces make it easier to allocate space quotas to users in the database
Tablespaces enable you to perform partial backups and recoveries using the tablespace as a unit
Tablespaces can be allocated to different disks and controllers to improve performance
You can take tablespaces offline without affecting the entire database
You can import and export specific application data by using the import and export utilities at the tablespace level

There are a number of types that a tablespace can be

Bigfile tablespaces, will have only one file which can range from 8-128 terabytes.
Smallfile tablespaces (default), can have multiple files but the files are smaller
than a bigfile tablespace.
Temporary tablespaces, contain data that only persists for the duration a users
session, used for sorting
Permanent tablespaces, any tablespace that is not temporary one.
Undo tablespaces, Oracle uses this to rollback or undo changes to the db.
Read-only, no write operations are allowed.
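For illustration, the main tablespace types can be created along these lines; the file names and sizes are hypothetical:

```sql
-- Smallfile permanent tablespace (the default type)
CREATE TABLESPACE app_data DATAFILE 'app_data01.dbf' SIZE 500M;

-- Bigfile tablespace: exactly one, potentially huge, data file
CREATE BIGFILE TABLESPACE big_data DATAFILE 'big_data01.dbf' SIZE 8G;

-- Temporary tablespace for sorting
CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'temp201.dbf' SIZE 1G;

-- Undo tablespace
CREATE UNDO TABLESPACE undotbs2 DATAFILE 'undotbs201.dbf' SIZE 1G;

-- Make a tablespace read-only
ALTER TABLESPACE app_data READ ONLY;
```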

See tablespaces for detailed information regarding creating, resizing, etc

Temp Files

Oracle uses temporary files to store the results of large sort operations when there is insufficient memory to hold all of the data in RAM. Temporary files never have redo information generated for them, although they do have undo information generated, which in turn creates a small amount of redo. Temporary data files never need to be backed up: they cannot be restored, only recreated.

Redo log files

All changes made to the database are recorded in the redo log files; these files, along
with any archived redo logs, enable a DBA to recover the database to any point in the past.
Oracle writes all committed changes to the redo logs before applying them to the
data files, so the redo logs guarantee that no committed changes are ever lost. Redo log files
consist of redo records, which are groups of change vectors, each referring to specific
changes made to a data block in the database. The changes are first kept in the redo buffer but
are quickly written to the redo log files.

There are two types of redo log files: online and archived. Oracle uses the concept of
groups, and a minimum of 2 groups is required, each group having at least one file. They
are used in a circular fashion: when one group fills up, Oracle switches to the next log
group.
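The current log groups, their members and their status can be inspected with the following queries (run as a privileged user; the group numbers and member paths are installation-specific):

```sql
-- list the online redo log groups and their state
select group#, sequence#, bytes, members, status
from   v$log;

-- list the physical members of each group
select group#, member
from   v$logfile
order  by group#;

-- force a switch to the next log group (requires ALTER SYSTEM privilege)
alter system switch logfile;
```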

See redo on how to configure and maintain the log files.

Archive Redo log

When a redo log file fills up, and before it is reused, the file is archived for safe
keeping; these archive files together with the online redo logs can recover a database to
any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the
archiving automatically.

See redo on how to enable archiving and maintain the archive log files.

Undo File
When you change data you should be able to either roll back that change or provide a
read-consistent view of the original data. Oracle uses undo data (undo records) to store
the original values; this allows a user to roll back the data to its original state if required.
This undo data is stored in the undo tablespace. See undo for further information.

Control file

The control file is one of the most important files within Oracle: it contains the data file
and redo log locations, current log sequence numbers, RMAN backup set details
and the SCN (system change number - see below for more details). This file should have
multiple copies due to its importance. It is used in recovery, as the control file records
all checkpoint information, which allows Oracle to recover data from the redo logs.
It is the first file that Oracle consults when starting up.

The view V$CONTROLFILE can be used to list the controlfiles, you can also use the
V$CONTROLFILE_RECORD_SECTION to view the controlfile's record structure.

You can also log any checkpoints while the system is running by setting the
LOG_CHECKPOINTS_TO_ALERT to true.
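For example, the multiplexed control file copies, the record structure, and the checkpoint logging setting can be handled as follows (output paths will differ per installation):

```sql
-- list all multiplexed control file copies
select name from v$controlfile;

-- examine how the control file's record sections are used
select type, record_size, records_total, records_used
from   v$controlfile_record_section;

-- write checkpoint activity to the alert log
alter system set log_checkpoints_to_alert = true scope = both;
```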

See recovering critical files for more information.

Password file

This file is optional and contains the names of the database users who have been granted
the special SYSDBA and SYSOPER admin privileges.

Log files

The alert.log file contains important startup information, major database changes and
system events; it will probably be the first file you look at when you have
database issues. The file records log switches, database errors, warnings and other messages.
If this file is removed, Oracle creates another one automatically.
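The directory holding the alert log can be found from the instance parameters; in 10g the file lives under the background dump destination:

```sql
-- directory containing the alert_<SID>.log file
select value
from   v$parameter
where  name = 'background_dump_dest';
```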

Trace Files

Trace files are debugging files which can contain background process information
(LGWR, DBWn, etc), core dump information (ORA-600 errors, etc) and user process
information (SQL).

Oracle Managed Files

The OMF feature aims to set a standard way of laying out Oracle files: there is no need to
worry about file names and the physical location of the files themselves. The method is
suited to small to medium environments; OMF simplifies initial db creation as well as
ongoing file management.

System Change (Commit) Number (SCN)

The SCN is an important quantifier that Oracle uses to keep track of its state at any given
point in time. The SCN tracks all changes within the database; it is a
logical timestamp used by Oracle to order events that have occurred within the
database. SCNs are ever-increasing sequence numbers and are used in redo logs to confirm
that transactions have been committed; all SCNs are unique. SCNs are used in crash
recovery: the control file maintains an SCN for each data file, and if the data files are out of
sync after a crash Oracle can reapply the redo log information to bring the database back up
to the point of the crash. You can even take the database back in time to a specific SCN
(or point in time).
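The current SCN can be obtained in a couple of ways, for example:

```sql
-- current SCN from the database (10g and later)
select current_scn from v$database;

-- the same value via the flashback package
select dbms_flashback.get_system_change_number from dual;

-- convert a timestamp to an approximate SCN
select timestamp_to_scn(systimestamp) from dual;
```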

Checkpoints

Checkpoints are important events that synchronize the database buffer cache and the
datafiles; they are used in recovery. A checkpoint is the starting point for a
recovery: it is a framework that enables the writing of dirty blocks to disk, based on a
System Change (or Commit) Number (SCN - see above) and a Redo Byte Address
(RBA) validation algorithm, and it limits the number of blocks to recover.

The checkpoint collects all the dirty buffers and writes them to disk; the SCN is
associated with a specific RBA in the log, which is used to determine when all the buffers
have been written.
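A checkpoint can also be forced manually, and the last checkpoint SCN inspected, for example:

```sql
-- force an immediate full checkpoint
alter system checkpoint;

-- SCN of the most recent full checkpoint recorded in the control file
select checkpoint_change#
from   v$database;
```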

Tablespaces

Tablespaces are used to organize tables and indexes into manageable groups; tablespaces
themselves are made up of one or more data/temp files.

Oracle has 4 different types of tablespace

Permanent - uses data files and normally contains the system (data dictionary)
and users' data
Temporary - used to store objects for the duration of a user's session; temp files
are used to create temporary tablespaces
Undo - a permanent type of tablespace used to store undo data which, if
required, can undo changes made to data by users
Read only - a permanent tablespace that can only be read; no writes can take
place, but the tablespace can be made read/write again.

Every oracle database has at least two tablespaces


System - is a permanent tablespace and contains the vital data dictionary
(metadata about the database)
Sysaux - is an auxiliary tablespaces and contains performance statistics collected
by the database.

Tablespace Management

There are two ways to manage a tablespace:

Locally managed (default)

Extents are the basic unit of a tablespace and are managed in bitmaps that
are kept within the data file header for all the blocks within that data file.
For example, if a tablespace is made up of 128KB extents, each 128KB
extent is represented by a bit in the extent bitmap for this file; the bitmap
values indicate if the extent is used or free. The bitmap is updated when an
extent changes; there is no updating of any data dictionary tables, thus
increasing performance. Extents are tracked via bitmaps, not via recursive
SQL, which means a performance improvement.

Locally managed tablespaces cannot be converted into dictionary managed
ones. The benefits of using a locally managed tablespace:

relieves contention on the system tablespace
free extents are not managed by the data dictionary
no need to specify storage parameters

Dictionary managed

The extent allocation is managed by the data dictionary, and thus updating
the extent information requires access to the data dictionary; on heavily
used systems this can cause a performance drop. Extents are tracked via
FET$ and UET$ using recursive SQL.

Dictionary managed tablespaces can be converted to locally managed ones.
There are a number of things that you should know about tablespaces.

Locally managed tablespaces are the default in Oracle 10g
A dictionary managed tablespace can be changed into a locally managed one, but a
locally managed tablespace cannot be changed into a dictionary managed one
If the system tablespace is locally managed then you can only create locally
managed tablespaces; trying to create a dictionary managed one will fail
Locally managed tablespaces perform better than dictionary managed tablespaces,
because with dictionary management the data dictionary has to be consulted
constantly during extent management (called recursive SQL).
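How each tablespace is managed can be verified in the data dictionary, for example:

```sql
-- extent and segment space management settings per tablespace
select tablespace_name, extent_management,
       allocation_type, segment_space_management
from   dba_tablespaces;
```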
Extent Management

Anytime an object needs to grow, space is added to that object in extents. When
you are using locally managed tablespaces there are two options for how the extent size
can be managed:

Autoallocate (default)

The extents will vary in size: the first extents start at 64KB and are
progressively increased up to 64MB by the database. The database
automatically decides what size each new extent will be based on segment
growth patterns. Autoallocate is useful if you aren't sure about the growth
rate of an object and want to let Oracle decide.

Uniform

Creates the extents all the same size by specifying the size when creating the
tablespace. This is the default for temporary tablespaces but is not available
for undo tablespaces. Be careful with uniform as it can waste space; use this
option when you know what the growth rate of the objects is going to be.

Segment Space Management

Segment space management is how Oracle deals with free space within an Oracle data
block. The segment space management you specify at tablespace creation time applies to
all segments you later create in the tablespace.

Oracle uses two methods to deal with free space:

Manual

Oracle manages the free space in the data blocks by using free lists and a
pair of storage parameters, PCTFREE and PCTUSED. When the free space
in a block falls below the PCTFREE threshold the block is removed from the
freelist; when the used space in the block drops below PCTUSED the block
is placed back on the freelist. Oracle has to perform a lot of hard work
maintaining these lists, and a slowdown in performance can occur when you
are making lots of changes to blocks, as Oracle needs to keep checking the
block thresholds.

Automatic (default)

Oracle does not use freelists in automatic mode; instead Oracle uses
bitmaps. A bitmap, contained in a bitmap block, indicates whether free
space in a data block is below 25%, between 25%-50%, between 50%-75%
or above 75%. For an index block the bitmaps can tell you whether the
blocks are empty or formatted. Bitmaps do use additional space, but this is
less than 1% for most large objects.

The performance gain from using automatic segment space management can
be quite striking.

Permanent Tablespaces

Permanent tablespaces can be either smallfile or bigfile tablespaces

Smallfile tablespace - can be made up of a number of data files, each of which
can be quite large in size
Bigfile tablespace - made up of only one data file, which can get extremely
large.

Tablespace commands

Creating:
    create tablespace test datafile 'c:\oracle\test.dbf' size 2G;
    create tablespace test datafile 'c:\oracle\test.dbf' size 2G
        extent management local uniform size 1M;
    create bigfile tablespace test datafile 'c:\oracle\bigfile.dbf' size 2G;

Creating with a non-standard block size:
    create tablespace test datafile 'c:\oracle\test.dbf' size 2G blocksize 8K;

Removing:
    drop tablespace test;
    drop tablespace test including contents and datafiles;
    (removes the contents and the physical data files)

Modifying:
    alter tablespace test rename to test99;
    alter tablespace test [offline|online];
    alter tablespace test [read only|read write];
    alter tablespace test [begin backup|end backup];
    Note: use v$backup to see if a tablespace is in backup mode (see below)

Adding data files:
    alter tablespace test add datafile 'c:\oracle\test02.dbf' size 2G;

Dropping data files:
    alter tablespace test drop datafile 'c:\oracle\test02.dbf';

Autoextending:
    See Datafile commands below

Renaming a data file:
    alter tablespace test rename datafile 'c:\oracle\test.dbf' to
    'c:\oracle\test99.dbf';

Tablespace (dictionary) management:
    create tablespace test datafile 'c:\oracle\test.dbf' size 2G
        extent management dictionary;

Extent management:
    create tablespace test datafile 'c:\oracle\test.dbf' size 2G
        extent management local uniform size 1M;

Segment space management:
    create tablespace test datafile 'c:\oracle\test.dbf' size 2G
        segment space management manual;

Display default tablespace:
    select property_value from database_properties
    where property_name = 'DEFAULT_PERMANENT_TABLESPACE';

Set default tablespace:
    alter database default tablespace users;

Display default tablespace type:
    select property_value from database_properties
    where property_name = 'DEFAULT_TBS_TYPE';

Set default tablespace type:
    alter database set default bigfile tablespace;
    alter database set default smallfile tablespace;

Get properties of an existing tablespace:
    set long 1000000
    select DBMS_METADATA.GET_DDL('TABLESPACE','USERS') from dual;

Free space:
    select tablespace_name, round(sum(bytes/1024/1024),1) "FREE MB"
    from dba_free_space group by tablespace_name;

Display backup mode:
    select tablespace_name, b.status
    from dba_data_files a, v$backup b
    where a.file_id = b.file#;
Useful Views

DBA_TABLESPACES - describes all tablespaces in the database
DBA_DATA_FILES - describes database files
DBA_TABLESPACE_GROUPS - describes all tablespace groups in the database
DBA_SEGMENTS - describes the storage allocated for all segments in the database
DBA_FREE_SPACE - describes the free extents in all tablespaces in the database
V$TABLESPACE - displays tablespace information from the control file
V$BACKUP - displays the backup status of all online datafiles
DATABASE_PROPERTIES - lists permanent database properties

Datafile Commands

Resizing:
    alter database datafile 'c:\oracle\test.dbf' resize 3G;

Offlining:
    alter database datafile 'c:\oracle\test.dbf' offline;
    Note: you must offline the tablespace first

Onlining:
    alter database datafile 'c:\oracle\test.dbf' online;

Renaming:
    alter database rename file 'c:\oracle\test.dbf' to 'c:\oracle\test99.dbf';

Autoextending:
    alter database datafile 'c:\oracle\test.dbf' autoextend on;
    alter database datafile 'c:\oracle\test.dbf' autoextend off;
    select file_name, autoextensible from dba_data_files;

If you create tablespaces with non-standard block sizes you must set the corresponding
DB_nK_CACHE_SIZE parameter; the possible sizes are 2K, 4K, 8K, 16K and 32K. The
DB_BLOCK_SIZE parameter defines the standard block size, which is used for any new
tablespace if the blocksize option is omitted.

Temporary tablespaces

Temporary tablespaces are used for order by, group by and create index operations. A
default temporary tablespace is required when the system tablespace is locally managed.
In Oracle 10g you can now create temporary tablespace groups, which means you can
use multiple temporary tablespaces simultaneously.

The benefits of using a temporary tablespace group are

SQL queries are less likely to run out of temp space
You can specify multiple default temporary tablespaces at the database level
Parallel execution can utilize multiple temporary tablespaces
A single user can simultaneously use multiple temp tablespaces in different sessions.

Temporary tablespace commands

Creating (not in a temp group):
    create temporary tablespace temp tempfile 'c:\oracle\temp.dbf' size 2G
        autoextend on;

Creating (in a temp group):
    create temporary tablespace temp tempfile 'c:\oracle\temp.dbf' size 2G
        tablespace group tempgrp;

Adding to a temp group:
    alter tablespace temp02 tablespace group tempgrp;
    Note: if the group does not exist Oracle will create it

Removing from a temp group:
    alter tablespace temp02 tablespace group '';

Displaying temp groups:
    select group_name, tablespace_name from dba_tablespace_groups;

Making a user use a temp group:
    alter user vallep temporary tablespace tempgrp;

Display default temp tablespace:
    select property_value from database_properties
    where property_name = 'DEFAULT_TEMPORARY_TABLESPACE';

Set default temp tablespace:
    alter database default temporary tablespace temp02;

Display free temp space:
    select tablespace_name, sum(bytes_used), sum(bytes_free)
    from v$temp_space_header group by tablespace_name;

Who is using temp segments:
    SELECT b.tablespace,
           ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
           a.sid||','||a.serial# SID_SERIAL,
           a.username,
           a.program
    FROM   sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
    WHERE  p.name = 'db_block_size'
    AND    a.saddr = b.session_addr
    ORDER  BY b.tablespace, b.blocks;

Useful Views

DBA_TEMP_FILES - describes database temporary files
DBA_TABLESPACE_GROUPS - describes all tablespace groups in the database
V$SORT_SEGMENT - contains information about every sort segment in a given
instance; the view is only updated when the tablespace is of the temporary type
V$TEMPSEG_USAGE - describes temporary segment usage

See tables for more information on temporary tables.

Undo Tablespaces

Undo tablespaces are used to store the original data after it has been changed; if a user
decides to roll back a change, the information in the undo tablespace is used to put the
data back in its original state.

Undo tablespaces are used for the following

Rolling back transactions explicitly with a ROLLBACK command
Rolling back transactions implicitly (automatic instance recovery)
Reconstructing a read-consistent image of data
Recovering from logical corruptions

Creating:
    create undo tablespace undotbs02 datafile 'c:\oracle\undo01.dbf' size 2G;

Set default:
    alter system set undo_tablespace='undotbs02';

See undo for more information.

Tablespace quotas

You can assign a user a tablespace quota, thus limiting the user to a certain amount of
storage space within the tablespace. By default a user has no quota when the account is
first created; see users for more information on tablespace quotas.
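As a sketch, a quota can be granted and then inspected as follows (the user and tablespace names are illustrative):

```sql
-- give vallep 100MB of space in the users tablespace
alter user vallep quota 100M on users;

-- or unlimited space in that tablespace
alter user vallep quota unlimited on users;

-- show current quotas (max_bytes = -1 means unlimited)
select username, tablespace_name, bytes, max_bytes
from   dba_ts_quotas;
```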

Tablespace Alerts

The MMON process checks tablespace usage every 10 minutes to see if any thresholds
have been exceeded and raises any alerts. There are two types of alert: warning (low
space warning) and critical (action should be taken immediately). Both thresholds can be
changed via OEM or the DBMS_SERVER_ALERT package.
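A sketch of changing the thresholds with DBMS_SERVER_ALERT (the 85/97 percentages and the USERS tablespace are illustrative values, not defaults):

```sql
begin
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => dbms_server_alert.operator_ge,
    warning_value           => '85',   -- warn at 85% full
    critical_operator       => dbms_server_alert.operator_ge,
    critical_value          => '97',   -- critical at 97% full
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => null,
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => 'USERS');
end;
/
```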

Oracle Managed Files

Oracle can make file handling a lot easier by managing the files itself; there are
three parameters that can be set so that Oracle will manage the data, temp, redo, control
and flashback files for you

DB_CREATE_FILE_DEST - sets the default location of the data/temp files
DB_CREATE_ONLINE_LOG_DEST_n - sets the default location of the online
redo log files and control files
DB_RECOVERY_FILE_DEST - sets the location of the flash recovery area
(flashback logs, archived logs and RMAN backups).

Setting db_create_file_dest:
    alter system set db_create_file_dest='c:\oracle\data' scope=both;

Setting db_create_online_log_dest_n:
    alter system set db_create_online_log_dest_1='c:\oracle\archive' scope=both;

Creating:
    create tablespace user01;

Removing:
    drop tablespace user01;

Adding a datafile:
    alter tablespace user01 add datafile size 1G;

Tablespace Logging

Tablespace logging can be overridden by a logging specification at the table level.


Recovering Critical Files

Recovering critical files would include control files

Recover/Re-create a controlfile

There are a number of ways to recover a controlfile: restore a backup (RMAN or user
managed) or re-create the controlfile. If you were to lose a controlfile while the database
is up, just re-create it; no data should be lost.

RMAN:
    sql> startup nomount; (as far as you can go with a damaged controlfile)
    rman> connect target / (because the db name is in the controlfile you must
    connect like this)
    rman> set dbid 2615281366; (must be supplied)
    rman> set controlfile autobackup format for device type disk to
    's:\ora_backup\controlfile_%F'; (must point to where the controlfile
    autobackup is)
    rman> restore controlfile from autobackup;
    rman> alter database mount; (must be run before starting the recovery)
    rman> recover database;
    rman> alter database open resetlogs; (explained later)

    Note: you must use the set command within RMAN when you have lost the
    controlfile: set = memory, configure = controlfile.

Re-create the control file:
    sql> alter database backup controlfile to trace; (file located in
    USER_DUMP_DEST)
    c:\> edit the trace file and extract the create controlfile statement into
    c:\restore_controlfile.txt
    sql> @c:\restore_controlfile.txt

Resetlogs

The resetlogs clause is required in most incomplete recoveries to open the database. It
resets the redo log sequence for the Oracle database. For recovery through a resetlogs to
work, it is vital that the names generated for the archive logs let Oracle distinguish
between logs produced by different incarnations. This is why you use %r (the
incarnation) in the log_archive_format parameter; otherwise archive logs could be
overwritten.

After a resetlogs there will be a new database incarnation number and the log switch
number will be reset. In previous versions all old backups and archive logs would have
become useless, but this is no longer the case in Oracle 10g.
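The incarnation history, including the resetlogs SCN of each incarnation, can be listed from the data dictionary, for example:

```sql
-- one row per incarnation; resetlogs_change# is the SCN at the resetlogs
select incarnation#, resetlogs_change#, resetlogs_time, status
from   v$database_incarnation
order  by incarnation#;
```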

Oracle Scheduler
Oracle has a built-in scheduler that helps you automate jobs from within the Oracle
database. The dbms_scheduler package contains various functions and
procedures that manage the scheduler, although this can also be achieved via OEM.
The scheduler is like cron: it will schedule jobs at a particular time and run them. All
scheduler tasks can be viewed through the dba_scheduler_jobs view. Unlike the older
dbms_job package, the scheduler can also run operating system jobs (scripts or
binaries), not just PL/SQL.

The scheduler uses a modular approach to managing tasks which enables the reuse of
similar jobs.

Basic scheduler components

The scheduler has 5 basic components

Jobs - instruct the scheduler to run a specific program at a specific date/time; a
job can execute PL/SQL code, a native binary executable, a Java application or a
shell script.

Schedules - define when and how frequently a job should run (start date, optional
end date, repeat interval); you can also run a job when a specific database event
occurs.

Programs - contain the metadata about a scheduler job. A program includes the
program name, the program type (PL/SQL, shell script) and the program action,
which is the actual name of the program or script to run.

Events - the scheduler uses the Oracle Streams Advanced Queuing feature to raise
events and start database jobs based on them. An event is a message sent by an
application or process when it notices some action or occurrence.

Chains - you can use a scheduler chain to link related programs together; running
a specific program can thus be made contingent on the successful running of
certain other programs.

Advanced scheduler components

Job Classes (groups) - associate one or more jobs with a resource manager
consumer group and also control logging levels; you can use classes to assign
priority levels for individual jobs (a higher-priority job always starts before a
lower-priority one) and to specify common attributes for a set of jobs.

Windows - a date/time window specifying an interval of time in which a job can
run.

Window Groups - a logical method of grouping windows.

Scheduler Architecture

The architecture consists of the job table, job coordinator and the job workers (slaves),
the job table contains information about jobs (job name, program name and job owner).
The job coordinator regularly looks in the job table to find out what jobs to execute, the
job coordinator creates and manages the job worker processes which actually execute the
job.

Job table - houses all the active jobs
Job coordinator - ensures that jobs are run on time; spawns and removes job
slaves, writes/reads job info to/from the cache, queries the job table and passes
job information to the job slaves
Job slaves - processes that carry out the jobs; a slave updates the job log when its
job completes.

Processes

CJQ0 - (job coordinator) monitors the dba_scheduler_jobs table for jobs, then
launches job slaves.
Jnnn - (job slaves) processes that carry out the tasks

Note: the number of Jnnn processes is limited by JOB_QUEUE_PROCESSES
(default 10); if it is zero the scheduler will not run (this is the only requirement to
start the scheduler)
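The parameter can be checked and changed dynamically, for example:

```sql
-- current setting
select value from v$parameter where name = 'job_queue_processes';

-- allow up to 10 job slaves (0 would disable the scheduler)
alter system set job_queue_processes = 10 scope = both;
```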

The scheduler_admin role contains all scheduler system privileges, with the
admin_option clause, it will allow you to

create, drop or alter job classes, windows and window groups


stop any job
start and stop windows prematurely

There are a number of privileges regarding the scheduler

Create job
Create any job
Execute any program
Execute any class
Manage scheduler
Execute on <job, program or class>
Alter on <job, program or class>
All on <job, program or class>
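These privileges are granted in the usual way, for example (the user name vallep is illustrative):

```sql
-- minimum needed to create jobs in your own schema
grant create job to vallep;

-- broader scheduler administration rights
grant manage scheduler to vallep;
grant scheduler_admin to vallep with admin option;
```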

Enabling/Disabling

When enabling a job all sub-jobs are enabled; when enabling a window only that window
gets enabled, not its sub-windows. When referencing a window always prefix it with SYS.

Enabling:
    dbms_scheduler.enable('backup_job');
    dbms_scheduler.enable('backup_job, backup_program, SYS.window_group_1');
    (enable multiple objects)

Disabling:
    dbms_scheduler.disable('backup_job');

Attributes

These are the only way to alter a scheduler object. By default objects are set to false
(disabled) when created.

dbms_scheduler.set_attribute - <name>, <attribute>, <value>
dbms_scheduler.set_attribute_null - <name>, <attribute> (sets the value to NULL)

Altering a schedule:
    dbms_scheduler.set_attribute_null(name => 'test_job', attribute => 'end_date');
    Note: sets the end date to NULL

Creating a job

A schedule defined within a job object is known as an inline schedule, whereas an
independent schedule object is referred to as a stored schedule. Inline schedules cannot
be referenced by any other objects.

When a job exceeds its END_DATE attribute it will be dropped only if the auto_drop
attribute is set to true, otherwise it will be disabled. In either case the state column will be
set to completed in the job table.

Job_name - the job name
Job_type - can be any of the following: plsql_block, stored_procedure, executable
Job_action - PL/SQL code, a stored procedure or an executable
Number_of_arguments - the number of arguments the job accepts; range is 0
(default) to 255
Program_name - the program associated with this job
Start_date - the start date
Repeat_interval - states how often the job should be run (see intervals)
Schedule_name - the schedule (or window/window group) associated with the job
End_date - the end date (the job will be set to completed and the enable flag set to
false)
Job_class - the class the job is assigned to
Comments - comments
Enabled - true = job enabled, false (default) = job disabled
Auto_drop - the job is dropped once completed (run-once jobs); default is
true.

The create_job procedure is overloaded based on the number of arguments supplied.

Create job:
    dbms_scheduler.create_job (
        job_name        => 'cola_job',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'update employees set salary = salary * 1.5;',
        start_date      => '10-oct-2007 06:00:00 am',
        repeat_interval => 'FREQ=YEARLY',
        comments        => 'Cost of living adjustments'
    );

Display jobs:
    select job_name, enabled, run_count from user_scheduler_jobs;
    Note: a job is disabled by default (enabled = FALSE)

Copying:
    dbms_scheduler.copy_job('cola_job', 'raise_job');
    Note: the copied job is disabled by default

Stopping:
    dbms_scheduler.stop_job(job_name => 'cola_job', force => true);
    Note: using force stops the job faster

Deleting:
    exec dbms_scheduler.drop_job('cola_job');
    Note: removes the job permanently

Running:
    dbms_scheduler.run_job('cola_job', true);
    dbms_scheduler.run_job('cola_job', false);

    Note:
    true - runs immediately and synchronously in the current session; control does
    not return to the user until it finishes, and the run count is not updated
    false - runs immediately and asynchronously via a job slave; control returns to
    the user, and the run count is updated

Priority:
    dbms_scheduler.set_attribute(
        name      => 'test_job',
        attribute => 'job_priority',
        value     => 1
    );

    Note: priorities are between 1-5, 1 being the highest (default is 3)


Job Classes

Group larger jobs together, characteristics can be inherited by all jobs within the
group
Classes can be assigned to a resource consumer group
Jobs can prioritize with the class

All jobs must belong to exactly one class; the default is DEFAULT_JOB_CLASS

Job_class_name - unique name within the sys schema


Resource_consumer_group - resource group to which job belongs
Service - service which the jobs belongs to, used in RAC
Logging_level - (see below)
Log_history - how long log history is kept default is 30 days
Comments - comments

Logging levels

DBMS_SCHEDULER.LOGGING_OFF - no logging for any jobs in this class
DBMS_SCHEDULER.LOGGING_RUNS - info is written to the job log (start
time, success, etc)
DBMS_SCHEDULER.LOGGING_FULL - also records management operations
on the class, such as creating new jobs and disabling/enabling them

Creating:
    dbms_scheduler.create_job_class (
        job_class_name          => 'low_priority_class',
        resource_consumer_group => 'low_group',
        logging_level           => DBMS_SCHEDULER.LOGGING_FULL,
        log_history             => 60,
        comments                => 'low priority job class'
    );

Dropping:
    dbms_scheduler.drop_job_class('low_priority_class, high_priority_class');

Assigning a job to a class:
    dbms_scheduler.set_attribute(
        name      => 'reports_jobs',
        attribute => 'job_class',
        value     => 'low_priority_class'
    );

Prioritizing within a class:
    dbms_scheduler.set_attribute(name => 'reports_jobs',
        attribute => 'job_priority', value => 2);

Altering attributes:
    dbms_scheduler.set_attribute(
        name      => 'reports_jobs',
        attribute => 'start_date',
        value     => '15-JAN-08 08:00:00'
    );

Programs

You use the SET_JOB_ARGUMENT_VALUE or SET_JOB_ANYDATA_VALUE
procedures to set the program arguments.

Program_name - the program name
Program_type - can be any of the following: plsql_block, stored_procedure,
executable
Program_action - PL/SQL code, a stored procedure or an executable
Number_of_arguments - the number of arguments the program accepts; range is 0
(default) to 255
Enabled - true = program enabled, false (default) = program disabled
Comments - comments

Creating programs:
    dbms_scheduler.create_program(
        program_name        => 'stats_program',
        program_type        => 'stored_procedure',
        program_action      => 'dbms_stats.gather_schema_stats',
        number_of_arguments => 1,
        comments            => 'Gather stats for a schema'
    );

Defining a program argument:
    dbms_scheduler.define_program_argument(
        program_name      => 'stats_program',
        argument_position => 1,
        argument_type     => 'varchar2'
    );

Dropping a program argument:
    dbms_scheduler.drop_program_argument(
        program_name      => 'stats_program',
        argument_position => 1
    );

Dropping a program:
    dbms_scheduler.drop_program(
        program_name => 'stats_program',
        force        => true
    );

Enable/Disable:
    dbms_scheduler.enable('stats_program');
    dbms_scheduler.disable('stats_program');

Schedules

Schedule_name - name of the schedule must be unique


Start_date - the start_date
End_date - the end_date
Repeat_interval - states how often the job should be run
Comments - comments

Creating:
    dbms_scheduler.create_schedule(
        schedule_name   => 'nightly_8_schedule',
        start_date      => systimestamp,
        repeat_interval => 'FREQ=DAILY; BYHOUR=20',
        comments        => 'run nightly at 8:00pm'
    );

Removing:
    dbms_scheduler.drop_schedule('nightly_8_schedule');

Intervals

Interval elements

FREQ - required; values are yearly, monthly, weekly, daily, hourly, minutely,
secondly
INTERVAL - how often it repeats; the default 1 means every occurrence, 2 would
be every other occurrence
BYMONTH - can use (1-12) or (JAN-DEC) or (1,3,12), etc
BYWEEKNO - the week number
BYYEARDAY - the day of the year as a number
BYMONTHDAY - (1-31); -1 means the last day of the month
BYDAY - (MON-SUN)
BYHOUR - (0-23)
BYMINUTE - (0-59)
BYSECOND - (0-59)

Interval rules

Frequency (FREQ) must be the first element.

Elements must be separated by semi-colons, and each element can appear only
once.
Element values must be separated by commas.
Elements are case-insensitive and whitespace is allowed.
To use BYWEEKNO, the frequency must be set to yearly.
Negative numbers can be used with the BY elements (BYMONTHDAY=-1 returns the
last day of the month).
When BYDAY is used with yearly or monthly, you can use -1SAT for the last
Saturday of the month, -2SAT for the second-last Saturday, etc.
Monday is always the first day of the week.
Calendaring does not use time zones or daylight saving time (DST).
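To make the BYDAY negative-offset rule concrete, here is a small Python sketch (not part of Oracle; the function name is illustrative) of the date arithmetic behind freq=monthly; byday=-1sat, i.e. the last Saturday of a month:

```python
import calendar
from datetime import date

def last_saturday(year, month):
    """Compute the last Saturday of a month (what BYDAY=-1SAT selects)."""
    last_dom = calendar.monthrange(year, month)[1]   # number of days in the month
    d = date(year, month, last_dom)
    # Monday is weekday() == 0, Saturday is 5; walk back from the month end
    offset = (d.weekday() - 5) % 7
    return d.replace(day=last_dom - offset)

print(last_saturday(2008, 1))   # -> 2008-01-26
```

Replacing 5 with another weekday number (or using a larger offset for -2SAT and so on) generalizes the same idea.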

Interval examples

Every Monday - freq=weekly; byday=mon;
Every other Monday - freq=weekly; byday=mon; interval=2;
Last day of each month - freq=monthly; bymonthday=-1;
Every 7th of January - freq=yearly; bymonth=jan; bymonthday=7;
2nd Wednesday of each month - freq=monthly; byday=2wed;
Every hour - freq=hourly;
Every 4 hours - freq=hourly; interval=4;
Hourly on the 1st day of the month - freq=hourly; bymonthday=1;
15th day of every other month - freq=monthly; bymonthday=15; interval=2;
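The last example can be mirrored in plain Python to show the dates that freq=monthly; bymonthday=15; interval=2 produces. This is a toy sketch, not the scheduler's parser; the function name is an assumption for illustration:

```python
from datetime import datetime

def monthly_runs(start, interval, bymonthday, count):
    """Toy evaluation of freq=monthly; interval=N; bymonthday=D from `start`."""
    year, month, runs = start.year, start.month, []
    while len(runs) < count:
        # (a real implementation would also skip months shorter than `bymonthday`)
        candidate = datetime(year, month, bymonthday,
                             start.hour, start.minute, start.second)
        if candidate > start:          # only dates strictly after the start date
            runs.append(candidate)
        # advance `interval` months; months are counted from the start month
        month += interval
        year += (month - 1) // 12
        month = (month - 1) % 12 + 1
    return runs

start = datetime(2007, 10, 10, 10, 0, 0)
for d in monthly_runs(start, interval=2, bymonthday=15, count=3):
    print(d.strftime('%d-%b-%Y %H:%M:%S'))
# 15-Oct-2007 10:00:00, 15-Dec-2007 10:00:00, 15-Feb-2008 10:00:00
```

These are the same dates the evaluate_calendar_string test block in the next section steps through.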

Testing an interval

dbms_scheduler.evaluate_calendar_string(
<calendar_string>, <start_date>, <return_date_after>, <next_run_date>);

declare
start_date timestamp;
return_date_after timestamp;
next_run_date timestamp;
begin
start_date := to_timestamp_tz('10-oct-2007 10:00:00', 'DD-MON-YYYY HH24:MI:SS');
return_date_after := start_date;
for i in 1..10 loop
dbms_scheduler.evaluate_calendar_string('freq=monthly; interval=2; bymonthday=15',
start_date, return_date_after, next_run_date);
dbms_output.put_line('next_run_date: ' || next_run_date);
return_date_after := next_run_date;
end loop;
end;
/

Managing Chains

In order to manage chains you need both the create job and rules engine privileges; there
are many other options that allow you to drop a chain, drop rules from a chain, disable a
chain, alter a chain and so on (see the Oracle docs for more information).

Privileges

dbms_rule_adm.grant_system_privilege(dbms_rule_adm.create_rule_obj, 'vallep');
dbms_rule_adm.grant_system_privilege(dbms_rule_adm.create_rule_set_obj, 'vallep');
dbms_rule_adm.grant_system_privilege(dbms_rule_adm.create_evaluation_context_obj, 'vallep');
Create

dbms_scheduler.create_chain(
chain_name => 'test_chain',
rule_set_name => NULL,
evaluation_interval => NULL,
comments => NULL
);

Define chain steps

dbms_scheduler.define_chain_step('test_chain', 'step1', 'program1');
dbms_scheduler.define_chain_step('test_chain', 'step2', 'program2');
dbms_scheduler.define_chain_step('test_chain', 'step3', 'program3');

Note: a chain step can point to a program, an event or another chain


Define chain rules

dbms_scheduler.define_chain_rule('test_chain', 'TRUE', 'START step1');
dbms_scheduler.define_chain_rule('test_chain', 'step1 completed', 'START step2, step3');
dbms_scheduler.define_chain_rule('test_chain', 'step2 completed and step3 completed', 'END');

Note:
the 1st rule states that step1 should be run, which means the scheduler will
start program1
the 2nd rule states that step2 and step3 should run if step1 has completed
successfully
the final rule states that when step2 and step3 finish the chain will end
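The three rules above can be sketched as condition/action pairs in a few lines of Python. This is illustrative only (the real scheduler evaluates rules through its rules engine, and steps run asynchronously); the function and variable names are assumptions:

```python
def run_chain(rules):
    """Repeatedly fire any rule whose condition holds; stop at the END action."""
    completed, started, order = set(), set(), []
    ended = False
    while not ended:
        progressed = False
        for cond, actions in rules:
            if not cond(completed):
                continue
            for step in actions:
                if step == 'END':
                    ended = True
                elif step not in started:
                    started.add(step)
                    # the real scheduler would run the step's program here;
                    # this sketch marks the step completed immediately
                    completed.add(step)
                    order.append(step)
                    progressed = True
        if not progressed and not ended:
            break   # no rule can fire -> the chain is stalled
    return order

# the chain above: TRUE -> step1; step1 done -> step2 + step3; both done -> END
rules = [
    (lambda done: True,                       ['step1']),
    (lambda done: 'step1' in done,            ['step2', 'step3']),
    (lambda done: {'step2', 'step3'} <= done, ['END']),
]
print(run_chain(rules))  # ['step1', 'step2', 'step3']
```

The key point the sketch shows: rules are declarative, so execution order falls out of the conditions rather than from the order the steps were defined.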
Embedding jobs in chains

BEGIN
dbms_scheduler.create_job(
job_name => 'test_chain_job',
job_type => 'CHAIN',
job_action => 'test_chain',
repeat_interval => 'freq=daily;byhour=13;byminute=0;bysecond=0',
enabled => true
);
END;
/

OR

BEGIN
dbms_scheduler.run_chain(
chain_name => 'my_chain1',
job_name => 'quick_chain_job',
start_steps => 'my_step1, my_step2');
END;
/

Note: the first option creates a job which runs the chain; you also have the
option of using run_chain to run a chain without creating a job first.

Managing Events
You can create both jobs and schedules that are based strictly on events rather than
calendar time. There are two attributes that need highlighting:

event_condition - a conditional expression that takes its values from the event source
queue table and uses Oracle Streams Advanced Queuing rules. You specify object
attributes in this expression and prefix them with tab.user_data. Review the
dbms_aqadm package to learn about advanced queuing and related rules.
queue_spec - determines the queue into which the job-triggering event will be queued.

There are many more options than below please refer to the Oracle documentation for a
full listing.

Create event based job

BEGIN
dbms_scheduler.create_job(
job_name => 'test_job',
program_name => 'test_program',
start_date => '15-JAN-08 08:00:00',
event_condition => 'tab.user_data.event_name = ''FILE_ARRIVAL''',
queue_spec => 'test_events_q',
enabled => true,
comments => 'An event based job');
END;
/

Note: the job will run when the event indicates that a file has arrived.
Create event based schedule

BEGIN
dbms_scheduler.create_event_schedule(
schedule_name => 'appowner.file_arrival',
start_date => systimestamp,
event_condition => 'tab.user_data.object_owner = ''APPOWNER''
and tab.user_data.event_name = ''FILE_ARRIVAL''
and extract(hour from tab.user_data.event_timestamp) < 12',
queue_spec => 'test_events_q');
END;
/

Note: the schedule will start the job when the event indicates that a
file has arrived before noon.

Windows

Window_name - name of the window, in the SYS schema

Resource_plan - the resource plan used by the window
Start_date - the start date
Duration - how long the window stays open
Schedule_name - schedule name associated with the window
Repeat_interval - how often the window repeats
End_date - the end date
Window_priority - only relevant when 2 windows overlap; values are LOW
(default) or HIGH

Creating a window using a schedule (so the schedule will open the window)

dbms_scheduler.create_window(
window_name => 'work_hours_window',
duration => interval '10' hour,
resource_plan => 'day_plan',
schedule_name => 'work_hours_schedule',
window_priority => 'high',
comments => 'Work Hours Window'
);

Opening a window manually

dbms_scheduler.open_window(
window_name => 'work_hours_window',
duration => interval '20' minute,
force => true
);

Closing a window manually

dbms_scheduler.close_window(window_name => 'work_hours_window');

Disabling a window

dbms_scheduler.disable(name => 'work_hours_window');

Displaying window logs

select log_id, trunc(log_date) log_date, window_name, operation
from dba_scheduler_window_log;

select log_id, trunc(log_date) log_date, window_name, actual_duration
from dba_scheduler_window_details;

Purging logs

Purge logs

dbms_scheduler.purge_log(log_history => 14, which_log => 'JOB_LOG');

Set scheduler log parameter

dbms_scheduler.set_scheduler_attribute('log_history', '60');
Display information

select job_name, status, error# from dba_scheduler_job_run_details
where job_name = 'FAIL_JOB';

select job_name, state, run_count from dba_scheduler_jobs;

select job_name, state, run_count from user_scheduler_jobs;

select window_name, next_start_date from dba_scheduler_windows;

select log_id, trunc(log_date) log_date, owner, job_name, operation
from dba_scheduler_job_log order by log_id;
Useful Views

*_scheduler_schedules - all defined schedules
*_scheduler_programs - all defined programs
*_scheduler_program_arguments - all registered program arguments and default
values, if any exist
*_scheduler_jobs - all defined jobs, both enabled and disabled, and whether they
are currently running
*_scheduler_global_attribute - current values of all scheduler attributes
*_scheduler_job_classes - all defined job classes
*_scheduler_windows - all defined windows
*_scheduler_job_run_details - status and duration of execution for all completed
(successful or failed) jobs
*_scheduler_window_groups - all window groups
*_scheduler_wingroup_members - all members of all window groups
*_scheduler_running_jobs - state information on all jobs that are currently
running
*_scheduler_job_log - enables you to audit job-management activities
Default Jobs

PURGE_LOG - deletes entries from the scheduler job log that are more than 30 days old
GATHER_STATS_JOB - gathers optimiser statistics; runs in two windows,
weeknight_window and weekend_window.