* ORACLE_HOME is an environment variable that defines the path of the Oracle Home (server) directory.
* The ORACLE_HOME directory contains the subdirectories, binaries, executables, programs, scripts, etc. for the Oracle Database.
* This directory can be used by any user who wants to use that particular database.
* If the ORACLE_HOME variable is defined as an environment variable, then during the installation process the Oracle Home path defaults to the directory it names. If the variable is not defined, then Oracle uses its own default location. In other words, the ORACLE_HOME variable does not have to be preset as an environment variable; it can be set during the installation process.
* Typically, the ORACLE_HOME directory is located under the ORACLE_BASE directory:
ORACLE_HOME=$ORACLE_BASE/product/10.2.0
Note: If you did not set the ORACLE_BASE environment variable before starting OUI, the Oracle home directory is created under an app/username/ directory on the first existing and writable directory from /u01 through /u09 on UNIX and Linux systems, or on the disk drive with the most available space on Windows systems. If none of /u01 through /u09 exists on the UNIX or Linux system, then the default location is user_home_directory/app/username.
On Unix/Linux Systems:
Basically, before or after the Oracle Database is installed, the oracle user profile (the environment variable file) is prepared, in which all the required environment variables for Oracle are set, i.e. ORACLE_BASE, ORACLE_HOME, ORACLE_SID, PATH, LD_LIBRARY_PATH, NLS_LANG, etc.
Note: This user profile file is under the user's home directory, i.e. $HOME/.bash_profile
$ echo $ORACLE_HOME
$ env
On Windows Systems:
C:\> echo %ORACLE_HOME%
Or
C:\> set
Start -> Run -> Regedit (Enter) -> HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE
i.e. My Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE
On Unix/Linux Systems:
Define the ORACLE_HOME value in the user profile file i.e. .bash_profile or .profile
ORACLE_HOME=$ORACLE_BASE/product/10.2.0
export ORACLE_HOME
Bash shell:
$ . ./.bash_profile
$ . ./.profile
C shell:
% source ./.login
If no profile file is set up with the environment variables, then they can also be set manually, as follows:
$ ORACLE_BASE=/oracle/app
$ export ORACLE_BASE
$ ORACLE_HOME=$ORACLE_BASE/product/10.2.0
$ export ORACLE_HOME
C Shell:
% setenv ORACLE_BASE /oracle/app
% setenv ORACLE_HOME ${ORACLE_BASE}/product/10.2.0
On Windows Systems:
My Computer -> Properties -> Advanced -> Environment Variables -> System
Variables -> New/Edit/Delete (to set the variables)
After setting the environment variables as above, open a fresh CMD window and check whether they are set properly. Do not check in an already-open CMD window, as it will not see the new values.
Another way to set the variables manually is at the DOS prompt (the path shown is only an example):
C:\> set ORACLE_HOME=C:\oracle\product\10.2.0\db_1
Contents
1. Overview
2. Configuring Oracle Cluster Synchronization Services (CSS)
3. Creating the ASM Instance
4. Identify RAW Devices
5. Starting the ASM Instance
6. Verify RAW / Logical Disk Are Discovered
7. Creating Disk Groups
8. Using Disk Groups
9. Startup Scripts
Overview
Automatic Storage Management (ASM) is a new feature in Oracle10g that relieves the DBA of having to manually manage and tune the disks used by Oracle databases. ASM provides the DBA with a file system and volume manager that makes use of an Oracle instance (referred to as an ASM instance) and can be managed using either SQL or Oracle Enterprise Manager.
Only one ASM instance is required per node. The same ASM instance can manage ASM
storage for all 10g databases running on the node.
When the DBA installs the Oracle10g software and creates a new database, creating an
ASM instance is a snap. The DBCA provides a simple check box and an easy wizard to
create an ASM instance as well as an Oracle database that makes use of the new ASM
instance for ASM storage. But what happens when the DBA is migrating to Oracle10g, or didn't opt to use ASM when a 10g database was first created? The DBA will need to know how to manually create an ASM instance, and that is what this article provides.
Configuring Oracle Cluster Synchronization Services (CSS)
In a non-RAC environment, the Oracle Universal Installer will configure and start a
single-node version of the CSS service. For Oracle Real Application Clusters (RAC)
installations, the CSS service is installed with Oracle Cluster Ready Services (CRS) in a
separate Oracle home directory (also called the CRS home directory). For single-node
installations, the CSS service is installed in and runs from the same Oracle home as the
Oracle database.
Because CSS must be running before any ASM instance or database instance starts,
Oracle Universal Installer configures it to start automatically when the system starts. For
Linux / UNIX platforms, the Oracle Universal Installer writes the CSS configuration tasks to the root.sh script, which is run by the DBA after the installation process.
With Oracle10g R1, CSS was always configured, regardless of whether you chose to configure ASM. On the Linux / UNIX platform, CSS was installed and configured via the root.sh script. This caused a lot of problems, since many DBAs did not know what this process was and, because they were not using ASM, did not want the CSS process running.
Oracle listened carefully to the concerns (and strongly worded complaints) about the CSS process, and Oracle10g R2 will only configure it when absolutely necessary. In Oracle10g R2, for example, if you don't choose to configure a stand-alone ASM instance, and don't configure a database that uses ASM storage, Oracle will not automatically configure CSS in the root.sh script.
In the case where the CSS process is not configured to run on the node (see above), you
can make use of the $ORACLE_HOME/bin/localconfig script in Linux / UNIX or
%ORACLE_HOME%\bin\localconfig.bat batch file in Windows. For example in Linux,
run the following command as root to configure CSS outside of the root.sh script after
the fact:
$ su
# $ORACLE_HOME/bin/localconfig all
Adding to inittab
Startup will be queued to init within 90 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
Creating the ASM Instance
The following steps can be used to create a fully functional ASM instance named +ASM.
The node I am using in this example also has a regular 10g database running named
TESTDB. These steps should all be carried out by the oracle UNIX user account:
1. Create Admin Directories
We start by creating the admin directories from the ORACLE_BASE. The admin
directories for the existing database on this node, (TESTDB), is located at
$ORACLE_BASE/admin/TESTDB. The new +ASM admin directories will be created
alongside the TESTDB database:
UNIX
mkdir -p $ORACLE_BASE/admin/+ASM/bdump
mkdir -p $ORACLE_BASE/admin/+ASM/cdump
mkdir -p $ORACLE_BASE/admin/+ASM/hdump
mkdir -p $ORACLE_BASE/admin/+ASM/pfile
mkdir -p $ORACLE_BASE/admin/+ASM/udump
Microsoft Windows
mkdir %ORACLE_BASE%\admin\+ASM\bdump
mkdir %ORACLE_BASE%\admin\+ASM\cdump
mkdir %ORACLE_BASE%\admin\+ASM\hdump
mkdir %ORACLE_BASE%\admin\+ASM\pfile
mkdir %ORACLE_BASE%\admin\+ASM\udump
2. Create Instance Parameter File
In this step, we will manually create an instance parameter file for the ASM
instance. This is actually an easy task, as most of the parameters used for a
normal instance are not used for an ASM instance. Note that you should be fine
accepting the default sizes for the database buffer cache, shared pool, and many
of the other SGA memory structures. The only exception is the large pool; I like to
manually set this value to at least 12MB. In most cases, the SGA memory
footprint is less than 100MB. Let's start by creating the file init.ora and placing
it in $ORACLE_BASE/admin/+ASM/pfile. The initial parameters to use for
the file are:
UNIX
$ORACLE_BASE/admin/+ASM/pfile/init.ora
###########################################
# Automatic Storage Management
###########################################
# _asm_allow_only_raw_disks=false
# asm_diskgroups='TESTDB_DATA1'
###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=/u01/app/oracle/admin/+ASM/bdump
core_dump_dest=/u01/app/oracle/admin/+ASM/cdump
user_dump_dest=/u01/app/oracle/admin/+ASM/udump
###########################################
# Miscellaneous
###########################################
instance_type=asm
compatible=10.1.0.4.0
###########################################
# Pools
###########################################
large_pool_size=12M
###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive
Microsoft Windows
%ORACLE_BASE%\admin\+ASM\pfile\init.ora
###########################################
# Automatic Storage Management
###########################################
# _asm_allow_only_raw_disks=false
# asm_diskgroups='TESTDB_DATA1'
###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\bdump
core_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\cdump
user_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\udump
###########################################
# Miscellaneous
###########################################
instance_type=asm
compatible=10.1.0.4.0
###########################################
# Pools
###########################################
large_pool_size=12M
###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive
After creating the $ORACLE_BASE/admin/+ASM/pfile/init.ora file, UNIX users should create the following symbolic link:
$ ln -s $ORACLE_BASE/admin/+ASM/pfile/init.ora $ORACLE_HOME/dbs/init+ASM.ora
Identify RAW Devices
Before starting the ASM instance, we should identify the RAW device(s) (UNIX) or logical drives (Windows) that will be used as ASM disks. For the purpose of this article, I have four RAW devices set up on Linux:
# ls -l /dev/raw/raw[1234]
crw-rw---- 1 oracle dba 162, 1 Jun 2 22:04 /dev/raw/raw1
crw-rw---- 1 oracle dba 162, 2 Jun 2 22:04 /dev/raw/raw2
crw-rw---- 1 oracle dba 162, 3 Jun 2 22:04 /dev/raw/raw3
crw-rw---- 1 oracle dba 162, 4 Jun 2 22:04 /dev/raw/raw4
This article does not use Oracle's ASMLib I/O libraries. If you plan on using Oracle's ASMLib, you will need to install and configure ASMLib, as well as mark all disks with the oracleasm createdisk script (demonstrated later in this article).
A task that must be performed by Microsoft Windows users is to tag the logical drives that will be used for ASM storage. This is done using a new utility included with Oracle10g called asmtool. The tool can be run either before or after creating the ASM instance. asmtool initializes the drive headers and marks the drives for use by ASM, which greatly reduces the risk of overwriting a drive that is being used for normal operating system files.
Starting the ASM Instance
Once the instance parameter file is in place, it is time to start the ASM instance. It is
important to note that an ASM instance never mounts an actual database. The ASM
instance is responsible for mounting and managing disk groups.
Attention Windows Users!
If you are running on Microsoft Windows, you will need to manually create a new Windows service to run the new instance. This is done using the ORADIM utility, which allows you to create both the instance and the service in one command.
UNIX
# su - oracle
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"
SQL> startup
ASM instance started
SQL> shutdown
ASM instance shutdown
SQL> startup
ASM instance started
Microsoft Windows
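The "Instance created." message below is the output of ORADIM. The exact flags vary by release; a command of this general form (the options shown are illustrative) creates the ASM instance and its Windows service:

```
C:\> oradim -NEW -ASMSID +ASM -STARTMODE auto
```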
Instance created.
SQL> shutdown
ASM instance shutdown
SQL> startup
ASM instance started
You will notice that when starting the ASM instance, we received the error:
ORA-15110: no diskgroups mounted
This error can be safely ignored.
Notice also that we created a server parameter file (SPFILE) for the ASM instance. This
allows Oracle to automatically record new disk group names in the asm_diskgroups
instance parameter, so that those disk groups can be automatically mounted whenever the
ASM instance is started.
Now that the ASM instance is started, all other Oracle database instances running on the
same node will be able to find it.
At this point, we have an ASM instance running, but no disk groups to speak of. ASM disk groups are created from RAW (or logical) disks.
Verify RAW / Logical Disk Are Discovered
Available (candidate) disks for ASM are discovered through the asm_diskstring instance parameter. This parameter contains the path(s) that Oracle will use to discover (or see) candidate disks. In most cases you shouldn't have to set this value, as the default is appropriate for the platform. The following are the default values of asm_diskstring on supported platforms when the parameter is left at NULL (not set):
HP-UX   /dev/rdsk/*
AIX     /dev/rhdisk/*
I now need to determine whether Oracle can find these four disks. The view V$ASM_DISK can be queried from the ASM instance to determine which disks are being used, or may potentially be used, as ASM disks. Note that you must log in to the ASM instance with SYSDBA privileges. Here is the query that I ran from the ASM instance:
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"
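The query was of the following form (the column list here is an assumption; all columns shown are standard V$ASM_DISK columns):

```
SQL> SELECT group_number, disk_number, mount_status,
  2         header_status, state, path
  3  FROM v$asm_disk;
```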
Creating Disk Groups
In this section, I will create a new disk group named TESTDB_DATA1 and assign all four discovered disks to it. The disk group will be configured for NORMAL REDUNDANCY, which results in two-way mirroring of all files within the disk group. Within the disk group, I will configure two failure groups, which define two independent sets of disks that should never contain more than one copy of mirrored data (mirrored extents).
For the purpose of this article, it is assumed that /dev/raw/raw1 and /dev/raw/raw2 are on one controller, while /dev/raw/raw3 and /dev/raw/raw4 are on another controller. I want to configure the ASM disks so that any data files written to /dev/raw/raw1 and /dev/raw/raw2 are mirrored to /dev/raw/raw3 and /dev/raw/raw4. I want ASM to guarantee that data on /dev/raw/raw1 is never mirrored to /dev/raw/raw2, and that data on /dev/raw/raw3 is never mirrored to /dev/raw/raw4. With this type of configuration, I can lose an entire controller and still have access to all of my data.
When configuring failure groups, you should put all disks that share a controller (or any
resource for that matter) into their own failure group. If that resource were to fail, you
would still have access to the data as ASM guarantees that no mirrored data will exist in
the same failure group.
The new disk group should be created from the ASM instance using the following SQL:
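A statement consistent with the NORMAL REDUNDANCY, two-failure-group layout described above (the failure group names are illustrative):

```
SQL> CREATE DISKGROUP testdb_data1 NORMAL REDUNDANCY
  2    FAILGROUP controller1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  3    FAILGROUP controller2 DISK '/dev/raw/raw3', '/dev/raw/raw4';
```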
Diskgroup created.
Now, let's take a look at the new disk group and disk details:
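Queries of this form (column lists assumed) show the group and member disk details from the ASM instance:

```
SQL> SELECT name, type, total_mb, free_mb FROM v$asm_diskgroup;

SQL> SELECT name, path, failgroup FROM v$asm_disk;
```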
Using Disk Groups
Let's now log in to the database instance running on the node that will be making use of the new ASM instance. For this article, I have a database instance named TESTDB already created and running on the node. The database was created using the local file system for all database files, redo log members, and control files:
SQL> @dba_files_all

Tablespace Name      Filename                                                    File Size
-------------------- ----------------------------------------------------------- --------------
SYSAUX               /u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf        241,172,480
SYSTEM               /u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf        471,859,200
TEMP                 /u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp           24,117,248
UNDOTBS1             /u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf      214,958,080
USERS                /u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf           5,242,880
[ CONTROL FILE ]     /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE ]     /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE ]     /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log             10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log             10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log             10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log             10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log             10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log             10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log             10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log             10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log             10,485,760
                                                                                 --------------
sum                                                                               1,051,721,728
Let's now create a new tablespace that makes use of the new disk group:
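A statement of this form produces the 100MB USERS2 datafile shown in the listing further below (the size is inferred from that listing; 104,857,600 bytes = 100MB):

```
SQL> CREATE TABLESPACE users2 DATAFILE '+TESTDB_DATA1' SIZE 100M;
```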
Tablespace created.
And that's it! The CREATE TABLESPACE command (above) uses a datafile named
+TESTDB_DATA1. Note that the plus sign (+) in front of the name TESTDB_DATA1 indicates
to Oracle that this name is a disk group name, and not an operating system file name. In
this example, the TESTDB instance queries the ASM instance for a new file in that disk
group and uses that file for the tablespace data. Let's take a look at that new file name:
SQL> @dba_files_all

Tablespace Name      Filename                                                    File Size
-------------------- ----------------------------------------------------------- --------------
SYSAUX               /u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf        241,172,480
SYSTEM               /u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf        471,859,200
TEMP                 /u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp           24,117,248
UNDOTBS1             /u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf      214,958,080
USERS                /u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf           5,242,880
USERS2               +TESTDB_DATA1/testdb/datafile/users2.256.560031579             104,857,600
[ CONTROL FILE ]     /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE ]     /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE ]     /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log             10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log             10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log             10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log             10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log             10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log             10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log             10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log             10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log             10,485,760
                                                                                 --------------
sum                                                                               1,156,579,328
Startup Scripts
Most Linux / UNIX users have a script used to start and stop Oracle services on system restart. On UNIX platforms, the convention is to put all start / stop commands in a single shell script named dbora. The dbora script may differ slightly from server to server, as each database server has different requirements for handling Apache, the TNS listener, and other services. The dbora script should be placed in /etc/init.d.
In this section, I will provide a dbora shell script that can be used to start all required Oracle services, including Oracle Cluster Synchronization Services (CSS), the ASM instance, the database server(s), and the Oracle TNS listener process. This script utilizes the Oracle-supplied scripts $ORACLE_HOME/bin/dbstart and $ORACLE_HOME/bin/dbshut to handle starting and stopping the Oracle database(s). The dbora script is run by the UNIX init process and reads the /etc/oratab file to dynamically determine which database(s) to start and stop.
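The oratab-driven selection that dbstart performs can be illustrated with a small sketch (this is only an illustration, not the actual dbstart code; the entry shown matches the /etc/oratab lines used later in this section):

```shell
# Sketch: how an /etc/oratab entry is parsed. Each entry has the form
# SID:ORACLE_HOME:STARTUP_FLAG; a final field of Y means
# "start this instance at boot".
oratab_line="+ASM:/u01/app/oracle/product/10.1.0/db_1:Y"

sid=$(echo "$oratab_line" | cut -d: -f1)
home=$(echo "$oratab_line" | cut -d: -f2)
flag=$(echo "$oratab_line" | cut -d: -f3)

# Only entries flagged Y are started at boot
if [ "$flag" = "Y" ]; then
    echo "would start $sid from $home"
fi
```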
The first step is to create the dbora shell script and place it in the /etc/init.d directory:
/etc/init.d/dbora
#!/bin/bash
# +------------------------------------------------------------------------+
# | FILE : dbora |
# | DATE : 09-AUG-2006 |
# | HOSTNAME : linux3.idevelopment.info |
# +------------------------------------------------------------------------+
# +---------------------------------+
# | FORCE THIS SCRIPT TO BE IGNORED |
# +---------------------------------+
# exit
# +---------------------------------+
# | PRINT HEADER INFORMATION |
# +---------------------------------+
echo " "
echo "+----------------------------------+"
echo "| Starting Oracle Database Script. |"
echo "| 0 : $0 |"
echo "| 1 : $1 |"
echo "+----------------------------------+"
echo " "
# +-----------------------------------------------------+
# | ALTER THE FOLLOWING TO REFLECT THIS SERVER SETUP |
# +-----------------------------------------------------+
HOSTNAME=linux3.idevelopment.info
ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1
SLEEP_TIME=120
ORACLE_OWNER=oracle
DATE=`date "+%m/%d/%Y %H:%M"`
# +----------------------------------------------+
# | VERIFY THAT ALL NEEDED SCRIPTS ARE AVAILABLE |
# | BEFORE CONTINUING. |
# +----------------------------------------------+
if [ ! -f $ORACLE_HOME/bin/dbstart -o ! -d $ORACLE_HOME ]; then
echo " "
echo "+-------------------------------------+"
echo "| ERROR: |"
echo "| Oracle startup: cannot start |"
echo "| cannot find dbstart |"
echo "+-------------------------------------+"
echo " "
exit
fi
# +---------------------------+
# | START/STOP CASE STATEMENT |
# +---------------------------+
case "$1" in
start)
# WAIT FOR THE CSS DAEMON (ocssd.bin) TO COME UP
# BEFORE STARTING ANY ASM / DATABASE INSTANCES
sleep $SLEEP_TIME
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart"
touch /var/lock/subsys/dbora
;;
stop)
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut"
rm -f /var/lock/subsys/dbora
;;
*)
echo "Usage: $0 {start|stop}"
;;
esac
exit
After the dbora shell script is in place, perform the following tasks as the root user:
# ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
# ln -s /etc/init.d/dbora /etc/rc6.d/K10dbora
# exit
The next step is to edit the /etc/oratab file to allow the dbora script to automatically start and stop the databases. Simply alter the final field in the +ASM and TESTDB entries from N to Y.
Ensure that the ASM instance is started BEFORE any databases that are making use
of disk groups contained in it.
...
+ASM:/u01/app/oracle/product/10.1.0/db_1:Y
TESTDB:/u01/app/oracle/product/10.1.0/db_1:Y
...
The final step is to manually edit /etc/inittab so that the entry to respawn init.cssd comes before the runlevel 3 entry:
l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
(...)
For Solaris users, you will need to manually edit /etc/inittab so that the entry for init.cssd comes before the runlevel 3 entry. As explained in Metalink Note ID 264235.1, the fix is as follows:
Original /etc/inittab file:
(...)
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5 >/dev/msglog 2<>/dev/msglog </dev/console
(...)
h1:3:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
Modified /etc/inittab file:
(...)
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
h1:3:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5 >/dev/msglog 2<>/dev/msglog </dev/console
(...)
Bug: 3458327 - Automatic Startup On Reboot Fails When Database Uses ASM
If you have been following this article and applied the 10.1.0.4 patchset (and modified the /etc/inittab file to force init.cssd to run, actually to respawn, before running runlevel 3), this bug should not affect you. If you are using 10.1.0.3 (or below), however, this bug may not allow the Oracle ASM instance to start, which will also prevent any other instances that have disk groups within that ASM instance from starting. As they exist, the dbstart and dbshut scripts are not ASM-aware in 10.1.0.3 and below. Even with the 10.1.0.4.0 patchset, we had to manually modify the /etc/inittab file. When the dbora script attempts to start the ASM instance, even after ocssd.bin is up and running, you will receive an error.
The problem is simply a matter of the order in which services are started, and that is why we needed to modify the /etc/inittab file. Upon entering a given runlevel (e.g. runlevel 3), init starts all of the 'respawn' lines AFTER the 'wait' lines have finished. It is important to understand that the S96init.cssd line does not actually start CSSD; it merely removes the 'NORUN' line. Then S99dbora tries to start the instances (and fails). Then, finally, init starts CSSD.
Note that I used /etc/rc5.d/S99dbora to start the dbora script. The dbora script MUST run after /etc/init.d/init.cssd if you are starting an ASM instance. For Linux, the OUI (and manually running 'localconfig all') places the startup link for init.cssd at /etc/rc3.d/S96init.cssd.
You will also notice that I put a 'sleep 120' in the dbora script before starting any databases/instances. The dbora script sleeps for 120 seconds to ensure that the ocssd.bin daemon is running before any ASM instance starts.
Pre-Creation Task: Partitioning Disks
NOTE: You do not need to reboot the machine just to make the newly created partition tables available to the kernel. Instead of rebooting, you can run a command such as partprobe (from the parted package) to have the kernel re-read the partition tables.
In the same way, I partitioned /dev/sda, and the final partition table looks like this:
[root@shree ~]# fdisk -l
I have used two of the newly created partitions, /dev/hdb4 and /dev/sda4, to create a disk group called DATA_GRP.
You need to bind these partitions to raw devices on the Linux system. I have added the lines below to /etc/sysconfig/rawdevices and restarted the rawdevices service.
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sda2
/dev/raw/raw3 /dev/sda3
/dev/raw/raw4 /dev/sda4
/dev/raw/raw5 /dev/hdb4
[root@shree ~]# service rawdevices restart
Then set the ownership and permissions on the raw devices:
for i in `seq 1 5`
do
chown oracle.dba /dev/raw/raw$i
chmod 660 /dev/raw/raw$i
done
Create the ASM instance parameter file with the following parameters:
background_dump_dest='/u01/app/admin/+ASM/bdump'
core_dump_dest='/u01/app/admin/+ASM/cdump'
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile='SHARED'
user_dump_dest='/u01/app/admin/+ASM/udump'
File created.
System altered.
System altered.
SQL> create diskgroup data_grp
2 failgroup data_grp_f1 disk '/dev/raw/raw4'
3 failgroup data_grp_f2 disk '/dev/raw/raw5';
Diskgroup created.
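The disk listing below can be produced from the ASM instance with a query of this form (column formatting assumed):

```
SQL> SELECT name, path FROM v$asm_disk;
```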
NAME PATH
--------------- ---------------
DATA_GRP_0001 /dev/raw/raw5
DATA_GRP_0000 /dev/raw/raw4
Open the /etc/oratab file and add the following line at the end:
+ASM:/u01/app/oracle/product/10.2.0/db_1:Y
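The datafile listings in this walkthrough, such as the one below, were presumably produced from the database instance with a query of this form:

```
SQL> SELECT name FROM v$datafile;
```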
NAME
--------------------------------------------------
/u01/app/oradata/db102/system01.dbf
/u01/app/oradata/db102/undotbs01.dbf
/u01/app/oradata/db102/sysaux01.dbf
/u01/app/oradata/db102/users01.dbf
no rows selected
Tablespace created.
Tablespace dropped.
Tablespace created.
SQL> drop tablespace indx01;
Tablespace dropped.
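The final "Tablespace created." corresponds to re-creating INDX01 in the new disk group; a statement of this form (size assumed) yields the +DATA_GRP file shown in the listing that follows:

```
SQL> CREATE TABLESPACE indx01 DATAFILE '+DATA_GRP' SIZE 100M;
```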
Tablespace created.
NAME
--------------------------------------------------
/u01/app/oradata/db102/system01.dbf
/u01/app/oradata/db102/undotbs01.dbf
/u01/app/oradata/db102/sysaux01.dbf
/u01/app/oradata/db102/users01.dbf
+DATA_GRP/db102/datafile/indx01.258.576105687
I decided to configure the /dev/sda1 and /dev/hdb4 devices using the ASM library drivers.
Please download the appropriate drivers from Oracle Technology Network that best suit your Linux kernel and architecture. You can run a command such as 'uname -rm' to see which drivers are suited to your machine. The packages are:
oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.9-22.EL  ########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]
[root@shree asmlib]#
I downloaded the above RPMs and installed them as the root user for my FireWire project on Red Hat EL 3.6.
Configure the disk device(s) that will be used in the ASM disk group (stamping the devices as ASM disks):
[root@shree root]# /etc/init.d/oracleasm createdisk DSK1 /dev/sda1
Marking disk "/dev/sda1" as an ASM disk: [ OK ]
[root@shree root]# /etc/init.d/oracleasm createdisk DSK2 /dev/hdb4
Marking disk "/dev/hdb4" as an ASM disk: [ OK ]
[root@shree root]#
[root@shree root]# /etc/init.d/oracleasm listdisks
DSK1
DSK2
[root@shree root]#
NOTE: The disk names (DSK1 and DSK2 in our example) must have these characteristics: they MUST start with an uppercase letter, and they may contain uppercase letters, numbers and underscore characters.
Add the lines below to /etc/sysconfig/rawdevices and restart the rawdevices service.
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sda2
/dev/raw/raw3 /dev/sda3
/dev/raw/raw4 /dev/sda4
/dev/raw/raw5 /dev/hdb4
Please add the lines below to /etc/rc.local so that they are applied at every boot.
for i in `seq 1 5`
do
chown oracle.dba /dev/raw/raw$i
chmod 660 /dev/raw/raw$i
done
Create the ASM disk group using the stamped ASMLib disks:
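A disk group creation statement consistent with the NAME/PATH output below (the failure group layout mirrors the earlier DATA_GRP example and is an assumption):

```
SQL> CREATE DISKGROUP data_grp
  2    FAILGROUP data_grp_f1 DISK 'ORCL:DSK1'
  3    FAILGROUP data_grp_f2 DISK 'ORCL:DSK2';
```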
File created.
Diskgroup created.
NAME PATH
--------------- ---------------
DSK1 ORCL:DSK1
DSK2 ORCL:DSK2
Open the /etc/oratab file and add the following line at the end:
+ASM:/u01/app/oracle/product/10.2.0/db_1:Y
Connect as the oracle user and type dbca. Follow the steps below to create a database with ASM storage using dbca.
Click Next.
Select General Purpose, then click Next. You can select another option if it better suits your application.
Enter the database and instance name: db102.
Enter the passwords for SYS, SYSTEM, DBSNMP and SYSMAN.
Select the ASM option.
Enter the password of the SYS schema of the ASM instance.
This screen shows all the disk groups that are mounted by the +ASM instance. Select whichever one you want the database files to reside on.
Click Next.
Click Next.
You can select the sample schemas to be created. If you do not have any schema (data) to work or practice on, you can go for this option.
Click OK.
Click Next.
Click Next.
Verify the locations of the datafiles, control files and log files to make sure they will be created under the right location (disk group).
Click Finish.
Click OK.
Click OK.
Automatic Storage Management (ASM)
Automatic Storage Management (ASM) is Oracle's logical volume manager; it uses OMF (Oracle Managed Files) to name and locate the database files. It can use raw disks, filesystems, or files that can be made to look like disks, as long as the device is raw. ASM uses its own database instance to manage the disks; it has its own processes and pfile or spfile, and it uses ASM disk groups to manage disks as one logical unit. Its benefits include:
* Provides automatic load balancing over all the available disks, thus reducing hot spots in the file system
* Prevents fragmentation of disks, so you don't need to manually relocate data to tune I/O performance
* Adding disks is straightforward - ASM automatically performs online disk reorganization when you add or remove storage
* Uses redundancy features available in intelligent storage arrays
* The storage system can store all types of database files
* Using disk groups makes configuration easier, as files are placed into disk groups
* ASM provides striping and mirroring (fine- and coarse-grained - see below)
* ASM and non-ASM Oracle files can coexist
* ASM is free!
ASM Instance
The ASM instance is a special instance that does not have any data files; there is only one ASM instance per server, which manages all ASM files for each database. The ASM instance looks after the disk groups and allows access to the ASM files. Databases access the files directly but use the ASM instance to locate them. If the ASM instance is shut down, then the database will either be automatically shut down or crash.
ASM Disk Groups
Disks are grouped together via disk groups; these are very much like logical volumes. A disk group comprises a set of disk drives, and a database is allowed to have multiple disk groups. ASM disk groups are permitted to contain files from more than one database.
ASM Files
Files are stored in the disk groups and benefit from the disk group features, i.e. striping and mirroring. Files are always spread over every disk in an ASM disk group and belong to one disk group only. You can store all of your database files as ASM files.
ASM Processes
There are a number of new processes that are started when using ASM; both the ASM instance and the database will start new processes.
ASM Instance
RBAL (rebalance master): coordinates the rebalancing when a new disk is added or removed
ARB[1-9] (rebalance): actually does the work requested by the RBAL process (up to 9 of these)
Database Instance
RBAL: opens and closes the ASM disks
ASMB: connects to the ASM instance via a session and is the communication link between ASM and the RDBMS; requests can be file creation, deletion and resizing, plus various statistics and status messages
ASM registers its name and disks with the RDBMS via the cluster synchronization service (CSS). This is why the Oracle cluster services must be running, even if the node and instance are not clustered. The ASM instance must be in mount mode for an RDBMS to use it, and you only require the INSTANCE_TYPE parameter in the parameter file.
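As a quick sanity check (a sketch only; the views are standard but the output depends on your setup), you can connect to the ASM instance and confirm that it is mounted and that the disk groups are available:

```sql
-- connect to the ASM instance with O/S authentication, e.g.
--   ORACLE_SID=+ASM  then  sqlplus / as sysdba
select instance_name, status from v$instance;

-- list the disk groups and their mount state
select name, state, type, total_mb, free_mb from v$asm_diskgroup;
```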
An ASM disk group is a logical volume that is created from the underlying physical disks. If storage grows you simply add disks to the disk groups; the number of groups can remain the same.
ASM file management has a number of benefits over normal third-party LVMs:
performance
redundancy
ease of management
security
ASM Stripping
ASM stripes files across all the disks within the disk group, thus increasing performance; each stripe is called an allocation unit. ASM offers two types of striping, which is dependent on the type of database file: coarse striping (one allocation unit) for datafiles, and fine striping (128KB) for control files, online redo logs and flashback logs.
ASM Mirroring
Disk mirroring provides data redundancy; this means that if a disk were to fail, Oracle will use the other mirrored disk and continue as normal. Oracle mirrors at the extent level, so you have a primary extent and a mirrored extent. When a disk fails, ASM rebuilds the failed disk using mirrored extents from the other disks within the group; this may have a slight impact on performance while the rebuild takes place.
All disks that share a common controller are in what is called a failure group; you can ensure redundancy by mirroring disks on separate failure groups, which in turn are on different controllers. ASM will ensure that the primary extent and the mirrored extent are not in the same failure group. If you do not define failure groups explicitly, ASM places each disk in its own failure group.
External redundancy - doesn't have failure groups and thus is effectively a no-mirroring strategy
Normal redundancy - provides two-way mirroring of all extents in a disk group, which requires at least two failure groups
High redundancy - provides three-way mirroring of all extents in a disk group, which requires at least three failure groups
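The redundancy levels above are chosen when the disk group is created. A hedged sketch of a normal-redundancy disk group with one failure group per controller (the group name, failgroup names and disk paths are illustrative assumptions):

```sql
-- two failure groups so each extent's mirror lands on the other controller
create diskgroup dgroupA normal redundancy
  failgroup ctrl1 disk '/dev/rdsk/c1t0d0s2', '/dev/rdsk/c1t1d0s2'
  failgroup ctrl2 disk '/dev/rdsk/c2t0d0s2', '/dev/rdsk/c2t1d0s2';
```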
ASM Files
The data files you create under ASM are not like normal database files: when you create a file you only need to specify the disk group that the file is to be created in, and Oracle will then create a striped file across all the disks within the disk group and carry out any redundancy required; ASM files are OMF files. ASM naming is dependent on the type of file being created; here are the different file-naming conventions:
fully qualified ASM filenames - are used when referencing existing ASM files
(+dgroupA/dbs/controlfile/CF.123.456789)
numeric ASM filenames - are also only used when referencing existing ASM files
(+dgroupA.123.456789)
alias ASM filenames - employ a user friendly name and are used when creating
new files and when you refer to existing files
alias filenames with templates - are strictly for creating new ASM files
incomplete ASM filenames - consist of a disk group only and are used for creation
only.
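For example, the incomplete and alias forms above are the ones you normally type when creating files (a sketch; the tablespace names are made up, the disk group name follows the notes):

```sql
-- incomplete name: just the disk group, Oracle generates the OMF file name
create tablespace user_data datafile '+diskgrpA' size 100m;

-- alias name: user-friendly path under a disk group directory
create tablespace user_idx datafile '+diskgrpA/dir1/user_idx01.dbf' size 50m;
```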
Creating an ASM instance is like creating a normal instance, but the parameter file will be smaller; ASM does not mount any data files, it only maintains ASM metadata. ASM normally only needs about 100MB of disk space and will consume about 25MB of memory for the SGA. ASM does not have a data dictionary like a normal database, so you must connect to the instance using either O/S authentication as SYSDBA or SYSOPER, or using a password file.
The main parameters in the instance parameter file are shown below. You can start an ASM instance in nomount or mount mode, but not open. Shutting down an ASM instance passes the shutdown command (normal, immediate, etc.) on to the dependent database instances.
ASM Configuration
Parameter file (init+asm.ora):
instance_type=asm
instance_name=+asm
asm_power_limit=2
asm_diskstring=\\.\f:,\\.\g:,\\.\h:
asm_diskgroups=dgroupA, dgroupB

Dismount or mount a disk group:
alter diskgroup diskgrpA dismount;
alter diskgroup diskgrpA mount;

Check a disk group's integrity:
alter diskgroup diskgrpA check all;

Disk group directory:
alter diskgroup diskgrpA add directory '+diskgrpA/dir1';
Note: this is required if you use aliases when creating database files, i.e. '+diskgrpA/dir/control_file1'

Adding and dropping aliases:
alter diskgroup diskgrpA add alias '+diskgrpA/dir/second.dbf' for '+diskgrpB/datafile/table.763.1';
alter diskgroup diskgrpA drop alias '+diskgrpA/dir/second.dbf';

Drop files from a disk group:
alter diskgroup diskgrpA drop file '+diskgrpA/payroll/payroll.dbf';
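Growing or shrinking a disk group follows the same alter diskgroup pattern; a hedged sketch (the disk path and disk name are illustrative, power ranges 0-11 in 10g):

```sql
-- add a disk; ASM rebalances online, higher power = faster rebalance
alter diskgroup diskgrpA add disk '\\.\i:' rebalance power 5;

-- drop a disk; data is migrated off before the disk is released
alter diskgroup diskgrpA drop disk disk_0003;

-- watch the rebalance progress
select operation, state, power, est_minutes from v$asm_operation;
```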
Using ASM Disks
RMAN backup
RMAN can do everything a normal backup can do; however, RMAN has its own backup catalog to record the backups that took place. The database can be in one of two modes, archivelog mode or noarchivelog mode:
archivelog - Oracle saves the filled redo log files, which means you can recover the database to any point in time using the archived logs
noarchivelog - the redo logs are overwritten and not saved, so the database can only be recovered from the last backup
whole backup - you backup the database as a whole which includes the
controlfiles and spfile
partial backup - you back up only a part of the database, such as a tablespace or a single data file
consistent - a consistent backup does not need to go through recovery when being restored; normally associated with a closed backup
inconsistent - an inconsistent backup always needs to be recovered
open - a backup taken when the database is running, also known as a hot, warm or online backup
closed - a backup taken when the database is shut down, also known as a cold or offline backup
RMAN Architecture
RMAN operates via a server session connecting to the target database; it gets the metadata from the target, and this is called the RMAN repository.
RMAN will use the controlfile on the target database to store repository information regarding any backups for that server. This information can also be stored in a recovery catalog (optional), which resides in its own database (default size should be about 115MB) that should be dedicated to RMAN; information is still written to the controlfile even if a recovery catalog is used.
The information stored in the controlfile is held in reusable sections called circular reuse records and non-circular reuse records. The circular reuse records hold non-critical information that can be overwritten if needed; the non-circular reuse sections consist of data file and redo log information. RMAN can back up archive logs, controlfiles, data files, the spfile and tablespaces; it does not back up temporary tablespaces, redo logs, the password file or init.ora.
The controlfile-based repository will retain data only for the time specified by the instance parameter CONTROL_FILE_RECORD_KEEP_TIME, which defaults to seven days.
Useful View
V$CONTROLFILE_RECORD_SECTION - displays information about the control file record sections
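A quick way to see how the repository is using the controlfile (a sketch; the record types shown are a subset chosen for illustration):

```sql
-- how much of each controlfile record section is in use
select type, record_size, records_total, records_used
from   v$controlfile_record_section
where  type in ('BACKUP SET', 'BACKUP PIECE', 'ARCHIVED LOG');
```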
If you back up to tapes you require additional software called MML (media management layer) or a media manager. MML is an API that interfaces with different vendors' tape libraries.
RMAN terminology
backup piece - operating system file containing the backup of a data file,
controlfile, etc
backup set - logical structure that contains one or more backup pieces, all relevant
backup pieces are contained in a backup set
image copy - similar to operating system copies like cp or dd; they will contain all blocks even if unused (disk only)
proxy copy - the media manager is given control of the copying process
channel - channel allocation is a method of connecting RMAN and the target database while also specifying the type of backup, i.e. disk or tape; channels can be created manually or automatically
Connecting to RMAN
There are a number of ways to connect to RMAN and it depends on where the recovery
catalog is
RMAN's persistent settings are stored in the controlfile of the target database (the reason why it must be in mount mode), and in the recovery catalog if one is used (#default means that the parameter is at its default setting).
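The connection itself depends on whether a recovery catalog is in play; a hedged sketch (the connect strings orcl, catdb and the rman/rman credentials are made-up placeholders):

```sql
-- no recovery catalog: connect to the target only
rman target /

-- with a recovery catalog in its own database
rman target sys/password@orcl catalog rman/rman@catdb

-- display the persistent settings held for this target
rman> show all;
```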
Channel Parameters/Options
The parameters/options are used to control the resources used by RMAN; there are many options, so it is probably best to consult the Oracle documentation.
channel device type - set default location of backups can be disk or sbt
channel rate - limits I/O bandwidth (KB, MB or GB)
channel maxpiecesize - limits the size of the backup pieces
channel maxsetsize - limits the size of the backup sets
channel connect - instructs a specific instance to perform an operation
duration - controls time for backup job (hours/mins)
parms - send specific instructions to tape library
examples
rman> configure channel device type disk format 's:\ora_backup\ora_dev_f%t_s%s_s%p';
rman> configure channel device type disk rate = 5m;
rman> configure channel device type disk maxpiecesize = 2g;
rman> configure channel device type disk maxsetsize = 10g;
Backup Retention
The default is redundancy 1, which means RMAN always attempts to keep one backup image or backup set of every data file, archive log and controlfile.
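The retention policy is changed with configure commands; a sketch of the common forms (the values 2 and 7 are illustrative):

```sql
rman> configure retention policy to redundancy 2;
rman> configure retention policy to recovery window of 7 days;
rman> configure retention policy clear;  # back to the default of redundancy 1
```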
Backup Tagging
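The notes leave this heading without examples; a hedged sketch of tagging a backup so it can be referenced later (the tag names are made up):

```sql
rman> backup database tag 'weekly_full';
rman> backup tablespace users tag 'users_monday';
rman> list backup tag 'weekly_full';
```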
Tablespace Excludes
examples
rman> configure exclude from tablespace test; (exclude the test tablespace from backups)
rman> configure exclude from tablespace test clear; (remove the exclusion of the test tablespace)
rman> backup database noexclude; (ignore any exclude settings)
Creating Backups
rman> run {
  allocate channel c1 type disk;
  backup database format 'db_%u_%d_%s'; (the backup set name for the data files)
  backup format 'log_t%t_s%s_p%p' (archivelog all); (the backup set name for the archive logs)
}

Backup Sets
rman> run {
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  backup
    (datafile 1,2,3 channel c1)
    (archivelog all channel c2);
}

rman> run {
  allocate channel c1 type disk;
  copy datafile 1 to 'z:\orabackup\system01.dbf', current controlfile to 'z:\orabackup\control01.ctl';
}
Backup Images
rman> backup as copy database;
rman> backup as copy copy of database;
rman> backup as copy tablespace sysaux;
rman> backup as copy datafile 2;
Parallel Streams
rman> configure device type disk parallelism 3; (must have 3 channels)
Note: you only configure the number of streams up to the number of channels; if you configure more they will not start. Remember that you need multiple channels configured to use the streams.
Backup controlfile and spfile to flash recovery area
# need to clear the 'controlfile autobackup format' so that the flash recovery area will be used
rman> configure controlfile autobackup format for device type disk clear;
rman> backup current controlfile;
Other examples
rman> backup device type disk copies 2 datafile 1 format '/disk1/df1_%U', '/disk2/df1_%U';
rman> backup as copy copy of database from tag 'test' check logical tag 'duptest';
rman> backup database plus archivelog;
rman> backup as copy duration 04:00 minimize time database;
rman> backup as compressed backupset database plus archivelog;
Note:
logical - perform a logical check of the backup files
duration - time limit to perform the backup
minimize time - perform the backup as fast as it can
compressed - compress the backup set; remember it will take longer to recover as it needs to uncompress
Validating/Cross Checking Backups
You can validate a backup set before you restore, which ensures that the backup files exist in the proper locations and that they are readable and free from any logical and physical corruption. You can also crosscheck backup sets to make sure they are available and have not been deleted (backup sets can be deleted at the operating system level).
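The validate and crosscheck operations described above look like this in practice (a sketch; the inline notes follow the style of these notes rather than runnable RMAN comment syntax):

```sql
rman> restore database validate;   (reads the backups but restores nothing)
rman> backup validate database;    (checks the live data files for corruption)
rman> crosscheck backup;           (marks missing backup pieces as EXPIRED)
rman> crosscheck archivelog all;
```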
Viewing backups
The v$ view information regarding backups is always located in the target database's controlfile. The list commands are used to determine the files impacted by the change, crosscheck and delete commands. The report command is accurate when the controlfile and the RMAN repository are synchronized, which can be performed by the change, crosscheck and delete commands.
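A few common list/report invocations as a sketch:

```sql
rman> list backup of database;     (backup sets recorded in the repository)
rman> list copy;                   (image copies)
rman> report obsolete;             (backups outside the retention policy)
rman> report need backup;          (files needing backup under the policy)
```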
Deleting Backups
To remove old archive logs use the "delete ... all" option; if "all" is omitted, only the archive logs in the primary destination will be deleted.
Note:
obsolete - delete all backups no longer needed due to retention levels
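Typical delete commands as a sketch (the seven-day window is an illustrative choice):

```sql
rman> delete obsolete;             (remove backups outside the retention policy)
rman> delete expired backup;       (remove records marked EXPIRED by crosscheck)
rman> delete archivelog all completed before 'sysdate-7';
```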
Catalog commands
The catalog command helps you identify and catalog any files that aren't recorded in RMAN's repository and thus are not known to RMAN.
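For example (a sketch; the paths are illustrative, and catalog start with is 10g syntax):

```sql
rman> catalog datafilecopy 'z:\orabackup\system01.dbf';
rman> catalog start with 'z:\orabackup\';   (catalog everything under a directory)
```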
Block change tracking is used to speed up incremental backups of very large databases; when you enable block change tracking, a new process (CTWR) is started.
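Enabling and checking it is straightforward (a sketch; the tracking file path is an illustrative assumption):

```sql
alter database enable block change tracking
  using file 'c:\oracle\rman_change_track.f';

-- confirm it is enabled
select status, filename from v$block_change_tracking;

alter database disable block change tracking;
```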
Redo
All the Oracle changes made to the db are recorded in the redo log files; these files, along with any archived redo logs, enable a DBA to recover the database to any point in the past. Oracle will write all committed changes to the redo logs first before applying them to the data files; the redo logs guarantee that no committed changes are ever lost. Redo log files consist of redo records, which are groups of change vectors, each referring to specific changes made to a data block in the db. The changes are first kept in the redo buffer but are quickly written to the redo log files.
There are two types of redo log files, online and archived. Oracle uses the concept of groups, and a minimum of 2 groups is required, each group having at least one file. They are used in a circular fashion; when one group fills up, Oracle switches to the next log group.
The LGWR process writes redo information from the redo buffer to the online redo logs when a commit occurs, when the redo buffer is one-third full, every three seconds, or before DBWn writes out dirty buffers.
Configuration
Creating a new log group:
alter database add logfile group 3 ('c:\oracle\redo3a.log','c:\oracle\redo3b.log') size 10M;

Adding a new log file to an existing group:
alter database add logfile member 'c:\oracle\redo3c.log' to group 3;

Renaming a log file in an existing group:
shutdown database
rename the file at O/S level
startup database in mount mode
alter database rename file 'old name' to 'new name';
open database
backup controlfile

Drop a log group:
alter database drop logfile group 3;

Drop a log file from an existing group:
alter database drop logfile member 'c:\oracle\redoc.log';

Maintaining

Clearing log groups:
alter database clear logfile group 3;
alter database clear unarchived logfile group 3;
Note: use the unarchived option when a log group has not been archived

Log switch and checkpointing:
alter system checkpoint;
alter system switch logfile;
alter system archive log current;
alter system archive log all;
Archived Logs
When a redo log file fills up, and before it is used again, the file is archived for safe keeping; this archive file together with the other redo log files can recover a database to any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the archiving automatically.
The log files can be written to a number of destinations (up to 10 locations), even to a standby database; using the parameters log_archive_dest_n and log_archive_min_succeed_dest you can control how Oracle writes its log files.
Configuration
alter system set log_archive_dest_1 = 'location=c:\oracle\archive' scope=spfile;
alter system set log_archive_format = 'arch_%d_%t_%r_%s.log' scope=spfile;
Enabling
shutdown database
startup database in mount mode
alter database archivelog;
startup database in open mode
An instance and a database are very closely related, but a database can be mounted and opened by many instances. An instance may mount and open only a single database at any one point in time.
Parameter File - These files tell Oracle where to find the control files. They also detail how big the memory areas will be, etc.
Data Files - These hold the tables, indexes and all other segments
Temp Files - used for disk-based sorting and temporary storage
Redo Log Files - Our transaction logs
Undo log files - allows a user to rollback a transaction and provides read
consistency.
Archive Log Files - Redo log files which have been archived
Control File - Details the location of data and log files and other relevant
information about their state.
Password File - Used to authenticate users logging in into the database.
Log files - alert.log contains database changes and events including startup
information.
trace files - are debugging files.
Parameter Files
In order for Oracle to start it needs some basic information; this information is supplied by using a parameter file. The parameter file can be either a pfile or an spfile:
pfile - a very simple plain text file which can be manually edited via vi or notepad
spfile - a binary which cannot be manually edited (Oracle 9i or higher required)
The parameter file for Oracle is the commonly known file init.ora or init<oracle sid>.ora; the file contains key/value pairs of information that Oracle uses when starting the database, such as the database name, cache sizes and the location of the control files.
The main difference between the spfile and the pfile is that instance parameters can be changed dynamically using an spfile, whereas you require an instance restart to load changed pfile parameters. To convert from one format to the other you can perform the following:
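A sketch of the conversion commands (run as SYSDBA; the explicit pfile path is an illustrative assumption, the default locations also work):

```sql
-- build an spfile from an existing pfile
create spfile from pfile;

-- or the reverse, useful when you need to edit a parameter by hand
create pfile='c:\oracle\initdb102.ora' from spfile;

-- check which file the instance started with (empty value means a pfile was used)
select value from v$parameter where name = 'spfile';
```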
Data Files
By default Oracle will create at least two data files: the system data file, which holds the data dictionary, and the sysaux data file, in which non-dictionary objects are stored. However there will usually be many more, holding various types of data; a data file will belong to one tablespace only (see tablespaces for further details).
Cooked - these are normally filesystems that can be accessed using "ls"
commands in unix
Raw - these are raw disk partitions which cannot be viewed, normally used to
avoid filesystem buffering.
ASM - Automatic Storage Management is Oracle's own database filesystem (see asm for further details).
Clustered FS - this is a special filesystem used in Oracle RAC environments.
Segments - are database objects: a table, an index, rollback segments. Every object that consumes space is a segment. Segments themselves consist of one or more extents.
Extents - are a contiguous allocation of space in a file. Extents, in turn, consist of
data blocks
Blocks - are the smallest unit of space allocation in Oracle. Blocks normally are
2KB, 4KB, 8KB, 16KB or 32KB in size but can be larger.
The relationship between segments, extents and blocks looks like this
The parameter DB_BLOCK_SIZE determines the default block size of the database.
Determining the block size depends on what you are going to do with the database: if you are using small rows then use a small block size (Oracle recommends 8KB); if you are using LOBs then the block size should be larger.
System tablespace could use the default 8KB and the OLTP tablespace could use a block
size of 4KB.
There are few parameters that cannot be changed after installing Oracle and the
DB_BLOCK_SIZE is one of them, so make sure to select the correct choice when
installing Oracle.
A data block is made up of the following; the two main areas are the free space and the data area.
Header: contains information regarding the type of block (a table block, index block, etc), transaction information regarding active and past transactions on the block, and the address (location) of the block on the disk
Table Directory: contains information about the tables that store rows in this block
Row Directory: contains information describing the rows that are to be found on the block; this is an array of pointers to where the rows are to be found in the data portion of the block
Block overhead: the three pieces above are known as the block overhead and are used by Oracle to manage the block itself
Free space: available space within the block
Data: data within the block
Tablespaces
A tablespace is a container which holds segments. Each and every segment belongs to
exactly one tablespace. Segments never cross tablespace boundaries. A tablespace itself
has one or more files associated with it. An extent will be contained entirely within one
data file.
The minimum tablespaces required are the system and sysaux tablespaces; the different kinds of tablespace are listed below.
Bigfile tablespaces, will have only one file which can range from 8-128 terabytes.
Smallfile tablespaces (default), can have multiple files but the files are smaller
than a bigfile tablespace.
Temporary tablespaces, contain data that only persists for the duration of a user's session; used for sorting
Permanent tablespaces, any tablespace that is not a temporary one.
Undo tablespaces, Oracle uses this to rollback or undo changes to the db.
Read-only, no write operations are allowed.
Temp Files
Oracle will use temporary files to store the results of large sort operations when there is insufficient memory to hold all of it in RAM. Temporary files never have redo information (see below) generated for them, although they have undo information generated, which in turn creates a small amount of redo. Temporary data files never need to be backed up, as they cannot be restored.
All the Oracle changes made to the db are recorded in the redo log files; these files, along with any archived redo logs, enable a DBA to recover the database to any point in the past. Oracle will write all committed changes to the redo logs first before applying them to the data files; the redo logs guarantee that no committed changes are ever lost. Redo log files consist of redo records, which are groups of change vectors, each referring to specific changes made to a data block in the db. The changes are first kept in the redo buffer but are quickly written to the redo log files.
There are two types of redo log files, online and archived. Oracle uses the concept of groups, and a minimum of 2 groups is required, each group having at least one file; they are used in a circular fashion, and when one group fills up Oracle switches to the next log group.
When a redo log file fills up, and before it is used again, the file is archived for safe keeping; this archive file together with the other redo log files can recover a database to any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the archiving automatically.
See redo on how to enable archiving and maintain the archive log files.
Undo File
When you change data you should be able to either rollback that change or to provide a
read consistent view of the original data. Oracle uses undo data (change vectors) to store
the original data, this allows a user to rollback the data to its original state if required.
This undo data is stored in the undo tablespace. See undo for further information.
Control file
The control file is one of the most important files within Oracle; the file contains data file and redo log location information, current log sequence numbers, RMAN backup set details and the SCN (system change number - see below for more details). This file should have multiple copies due to its importance. The file is used in recovery, as the control file notes all checkpoint information, which allows Oracle to recover data from the redo logs. This file is the first file that Oracle consults when starting up.
The view V$CONTROLFILE can be used to list the controlfiles, you can also use the
V$CONTROLFILE_RECORD_SECTION to view the controlfile's record structure.
You can also log any checkpoints while the system is running by setting the
LOG_CHECKPOINTS_TO_ALERT to true.
Password file
This file is optional and contains the names of the database users who have been granted the special SYSDBA and SYSOPER admin privileges.
Log files
The alert.log file contains important startup information, major database changes and
system events, this will probably be the first file that will be looked at when you have
database issues. The file contains log switches, db errors, warnings and other messages. If
this file is removed Oracle creates another one automatically.
Trace Files
Trace files are debugging files which can contain background process information (LGWR, DBWn, etc), core dump information (ORA-600 errors, etc) and user process information (SQL).
The OMF feature aims to set a standard way of laying out Oracle files; there is no need to worry about file names and the physical location of the files themselves. The method is suited to small to medium environments; OMF simplifies the initial db creation as well as ongoing file management.
The SCN is an important quantifier that Oracle uses to keep track of its state at any given point in time. The SCN is used to keep track of all changes within the database; it's a logical timestamp that is used by Oracle to order events that have occurred within the database. SCNs are increasing sequence numbers and are used in redo logs to confirm that transactions have been committed; all SCNs are unique. SCNs are used in crash recovery, as the control file maintains an SCN for each data file; if the data files are out of sync after a crash, Oracle can reapply the redo log information to bring the database back to the point of the crash. You can even take the database back in time to a specific SCN number (or point in time).
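You can read the current SCN at any time; a sketch of two equivalent 10g queries:

```sql
-- current SCN of the database
select current_scn from v$database;

-- or via the flashback package
select dbms_flashback.get_system_change_number from dual;
```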
Checkpoints
Checkpoints are important events that synchronize the database buffer cache and the
datafiles, they are used with recovery. Checkpoints are used as a starting point for a
recovery, it is a framework that enables the writing of dirty blocks to disk based on a
System Change or Commit Number (for SCN see above) and a Redo Byte Address
(RBA) validation algorithm and limits the number of blocks to recover.
The checkpoint collects all the dirty buffers and writes them to disk, the SCN is
associated with a specific RBA in the log, which is used to determine when all the buffers
have been written.
Tablespaces
Tablespaces are used to organize tables and indexes into manageable groups; tablespaces themselves are made up of one or more data/temp files.
Permanent - uses data files and normally contains the system (data dictionary)
and users data
Temporary - is used to store objects for the duration of a users session, temp files
are used to create temporary tablespaces
Undo - a permanent type of tablespace that is used to store undo data which, if required, would undo changes made to data by users
Read only - is a permanent tablespace that can only be read, no writes can take
place, but the tablespace can be made read/write.
Tablespace Management
Locally managed (default): extents are the basic unit of a tablespace and are managed in bitmaps that are kept within the data file header for all the blocks within that data file. For example, if a tablespace is made up of 128KB extents, each 128KB extent is represented by a bit in the extent bitmap for this file; the bitmap values indicate whether the extent is used or free. The bitmap is updated when the extent changes; there is no updating of any data dictionary tables, thus increasing performance. Extents are tracked via bitmaps, not via recursive SQL, which means a performance improvement.
There are a number of things that you should know about tablespaces.
Anytime an object needs to grow in size, space is added to that object in extents. When you are using locally managed tablespaces there are two options by which the extent size can be managed:
Autoallocate (default): the extents will vary in size; the first extent starts at 64KB and is progressively increased up to 64MB by the database. The database automatically decides what size the new extent will be based on segment growth patterns. Autoallocate is useful if you aren't sure about the growth rate of an object and want to let Oracle decide.
Uniform: creates the extents all the same size, specified when creating the tablespace. This is the default for temporary tablespaces but is not available for undo tablespaces. Be careful with uniform as it can waste space; use this option when you know what the growth rate of the objects is going to be.
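The two options above appear in the create statement; a sketch (tablespace and file names are illustrative):

```sql
-- let Oracle size the extents
create tablespace tbs_auto datafile 'c:\oracle\tbs_auto01.dbf' size 100m
  extent management local autoallocate;

-- fixed 1MB extents
create tablespace tbs_uni datafile 'c:\oracle\tbs_uni01.dbf' size 100m
  extent management local uniform size 1m;
```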
Segment space management is how Oracle deals with free space within an Oracle data block. The segment space management you specify at tablespace creation time applies to all segments you later create in the tablespace.
Manual: Oracle manages the free space in the data blocks by using free lists and a pair of storage parameters, PCTFREE and PCTUSED. When the free space in a block falls below the PCTFREE threshold the block is removed from the freelist; when the used space falls below the PCTUSED percentage the block is placed back on the freelist. Oracle has to perform a lot of hard work maintaining these lists; a slow-down in performance can occur when you are making lots of changes to the blocks, as Oracle needs to keep checking the block thresholds.
Automatic (default): Oracle does not use freelists when using automatic mode; instead Oracle uses bitmaps. A bitmap, which is contained in a bitmap block, indicates whether the free space in a data block is below 25%, between 25%-50%, between 50%-75% or above 75%. For an index block the bitmaps can tell you whether the blocks are empty or formatted. Bitmaps do use additional space, but this is less than 1% for most large objects.
Permanent Tablespaces
Smallfile tablespace - the tablespace can be made up of a number of data files, each of which can be quite large in size
Bigfile tablespace - the tablespace will only be made up of one data file, and this can get extremely large.
Tablespace commands
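These notes leave this heading empty; a sketch of common tablespace commands (all names and paths are illustrative):

```sql
create tablespace users2 datafile 'c:\oracle\users2_01.dbf' size 100m
  autoextend on next 10m maxsize 1g;

alter tablespace users2 read only;
alter tablespace users2 read write;
alter tablespace users2 offline;
alter tablespace users2 online;

drop tablespace users2 including contents and datafiles;
```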
Datafile Commands
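Likewise for the datafile commands heading, a sketch (names and paths illustrative):

```sql
alter database datafile 'c:\oracle\users2_01.dbf' resize 200m;

alter database datafile 'c:\oracle\users2_01.dbf'
  autoextend on next 10m maxsize unlimited;

alter tablespace users2 add datafile 'c:\oracle\users2_02.dbf' size 100m;
```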
If you create tablespaces with non-standard block sizes you must set the matching DB_nK_CACHE_SIZE parameter; there are 5 possible sizes (2k, 4k, 8k, 16k and 32k), and any size other than the standard DB_BLOCK_SIZE counts as non-standard. Tablespaces created without a blocksize option use the standard block size, whose buffer cache is sized by DB_CACHE_SIZE.
Temporary tablespaces
Temporary tablespaces are used for order by, group by and create index operations. A temporary tablespace is required when the system tablespace is locally managed. In Oracle 10g you can now create temporary tablespace groups, which means you can use multiple temporary tablespaces simultaneously.
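A sketch of a temporary tablespace placed straight into a group (names and paths illustrative):

```sql
-- create the temp tablespace and put it in a group (10g)
create temporary tablespace temp2 tempfile 'c:\oracle\temp2_01.dbf' size 500m
  tablespace group temp_grp;

-- point a user at the group instead of a single temp tablespace
alter user scott temporary tablespace temp_grp;
```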
Undo Tablespaces
Undo tablespaces are used to store original data after it has been changed, if a user
decides to rollback a change the information in the undo tablespace is used to put back
the data in its original state.
Creating: create undo tablespace undotbs02 datafile 'c:\oracle\undo01.dbf' size 2G;
Set default: alter system set undo_tablespace='undotbs02';
Tablespace quotas
You can assign a user a tablespace quota, thus limiting them to a certain amount of storage space within the tablespace. By default a user has no quota when the account is first created; see users for information on tablespace quotas.
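A sketch of setting and checking a quota (the user and tablespace names are illustrative):

```sql
alter user scott quota 100m on users;
alter user scott quota unlimited on users;

-- check current quotas (max_bytes of -1 means unlimited)
select tablespace_name, bytes, max_bytes
from   dba_ts_quotas
where  username = 'SCOTT';
```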
Tablespace Alerts
The MMON daemon checks tablespace usage every 10 minutes to see if any thresholds have been exceeded and raises any alerts. There are two types of alert: warning (low space warning) and critical (action should be taken immediately). Both thresholds can be changed via OEM or the DBMS_SERVER_ALERT package.
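A hedged sketch of changing the thresholds with DBMS_SERVER_ALERT; the parameter names and constants are from memory, so verify them against the package documentation before relying on this (the 85/97 values are illustrative):

```sql
begin
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => dbms_server_alert.operator_ge,
    warning_value           => '85',
    critical_operator       => dbms_server_alert.operator_ge,
    critical_value          => '97',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => null,
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => 'USERS');
end;
/
```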
Oracle can make file handling a lot easier by managing the Oracle files itself; there are three parameters that can be set so that Oracle will manage the data, temp, redo, archive and flash logs for you: DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n and DB_RECOVERY_FILE_DEST.
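With OMF enabled, file clauses become optional; a sketch (the destination path and tablespace name are illustrative):

```sql
alter system set db_create_file_dest = 'c:\oracle\oradata' scope=both;

-- no datafile clause needed: Oracle names and places the file itself
create tablespace omf_test;
```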
Tablespace Logging
Recover/Re-create a controlfile
RMAN
rman> connect target / (because the db name is in the controlfile you must connect like this)
rman> set dbid 2615281366; (must be supplied)
rman> set controlfile autobackup format for device type disk to 's:\ora_backup\controlfile_%F'; (must point to where the controlfile autobackup is)
rman> restore controlfile from autobackup;
rman> alter database mount; (must be run before starting the recovery)
rman> recover database;
rman> alter database open resetlogs; (explained later)
note: you must use the set command within RMAN when you have lost the controlfile; set = in memory only, configure = stored in the controlfile.
Re-create the control file
sql> alter database backup controlfile to trace; (file located in USER_DUMP_DEST)
c:\> edit the trace file and extract the CREATE CONTROLFILE section
sql> @c:\restore_controlfile.txt (script built from the information in the trace file)
Resetlogs
The resetlogs clause is required after most incomplete recoveries to open the database. It
resets the redo log sequence for the Oracle database. For recovery through a resetlogs to
work, it is vital that the names generated for the archive logs let Oracle distinguish
between logs produced by different incarnations. This is why you use %r in the
log_archive_format parameter; %r is the resetlogs (incarnation) id, and without it archive
logs could be overwritten. After a resetlogs there will be a new database incarnation
number and the log sequence number will be reset. In previous versions all old backups
and archive logs would have been useless, but not any more in Oracle 10g.
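For example, an archive log format that includes the resetlogs id (%r) so logs from different incarnations cannot overwrite each other (format string illustrative):
sql> alter system set log_archive_format = 'arch_%t_%s_%r.arc' scope=spfile;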
Oracle Scheduler
Oracle has a built-in scheduler that helps you automate jobs from within the Oracle
database. The dbms_scheduler package contains various functions and
procedures that manage the scheduler, although this can also be achieved via OEM.
The scheduler is like cron: it will schedule jobs at particular times and run them. All
scheduler jobs can be viewed through the dba_scheduler_jobs view. Operating system
jobs (scripts or binaries) can also be run by the scheduler by using the EXECUTABLE
job type.
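For example, to see what the scheduler is managing and how past runs went:
sql> select job_name, owner, state, run_count from dba_scheduler_jobs;
sql> select job_name, status, actual_start_date from dba_scheduler_job_run_details;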
The scheduler uses a modular approach to managing tasks which enables the reuse of
similar jobs.
Schedules - specify when and how frequently a job should run (start date, optional end
date, repeat interval); you can also run a job when a specific database event occurs.
Programs - contain the metadata about a scheduler job: the program name, the program
type (PL/SQL, shell script) and the program action, which is the actual name of the
program or script to run.
Events - the scheduler uses the Oracle Streams Advanced Queuing feature to raise events
and start database jobs based on the events. An event is a message sent by an application
or process when it notices some action or occurrence.
Chains - you can use the concept of a scheduler chain to link related programs together;
running a specific program can thus be made contingent on the successful running of
certain other programs.
Job Classes (groups) - associate one or more jobs with a resource manager consumer
group and also control logging levels; you can use classes to assign priority levels for
individual jobs, with higher-priority jobs always starting before lower-priority jobs.
Windows - a window defines a date/time interval during which a job can run.
Window Groups - a logical method of grouping windows.
Scheduler Architecture
The architecture consists of the job table, job coordinator and the job workers (slaves),
the job table contains information about jobs (job name, program name and job owner).
The job coordinator regularly looks in the job table to find out what jobs to execute, the
job coordinator creates and manages the job worker processes which actually execute the
job.
Processes
The scheduler_admin role contains all scheduler system privileges with the admin
option; the scheduler privileges are:
Create job
Create any job
Execute any program
Execute any class
Manage scheduler
Execute on <job, program or class>
Alter on <job, program or class>
All on <job, program or class>
Enabling/Disabling
When enabling a job all sub-jobs are enabled; when enabling a window only that window
gets enabled, not sub-windows; when referencing a window, always prefix it with SYS.
Enabling
dbms_scheduler.enable('backup_job');
dbms_scheduler.enable('backup_job, backup_program, sys.window_group_1'); (enable multiple objects with one comma-separated string)
Disabling
dbms_scheduler.disable('backup_job');
Attributes
The set_attribute procedures are the only way to alter a scheduler object. By default
objects are created disabled (enabled = false).
dbms_scheduler.set_attribute - <name>,<attribute>,<value>
dbms_scheduler.set_attribute_null - <name>,<attribute> (set value to NULL)
Creating a job
When a job exceeds its END_DATE attribute it will be dropped only if the auto_drop
attribute is set to true, otherwise it will be disabled. In either case the state column will be
set to completed in the job table.
Create Job
dbms_scheduler.create_job (
  job_name => 'cola_job',
  job_type => 'PLSQL_BLOCK',
  job_action => 'update employees set salary = salary * 1.5;',
  start_date => '10-oct-2007 06:00:00 am',
  repeat_interval => 'FREQ=YEARLY',
  comments => 'Cost of living adjustments'
);
Display Jobs
select job_name, enabled, run_count from user_scheduler_jobs;
Note: a job is disabled by default (false)
Copying dbms_scheduler.copy_job('cola_job', 'raise_job');
Stopping
dbms_scheduler.stop_job(job_name => 'cola_job', force => true);
Note: using force stops the job faster
Deleting
exec dbms_scheduler.drop_job('cola_job');
Note: removes the job permanently
Displaying
select job_name, enabled, run_count from user_scheduler_jobs;
Note: a copied job is disabled by default (false)
Running
dbms_scheduler.run_job('cola_job', true);
dbms_scheduler.run_job('cola_job', false);
Note:
true - runs immediately and synchronously in the current session; control does not return
to the user until it completes, and the run count is not updated
false - runs immediately but asynchronously via a job slave; control returns to the user
and the run count is updated
Priority
dbms_scheduler.set_attribute(
  name => 'test_job',
  attribute => 'job_priority',
  value => 1
);
Group similar jobs together; characteristics can be inherited by all jobs within the
group:
Classes can be assigned to a resource consumer group
Jobs can be prioritized within the class
Logging levels can be set for the class
Creating
dbms_scheduler.create_job_class (
  job_class_name => 'low_priority_class',
  resource_consumer_group => 'low_group',
  logging_level => DBMS_SCHEDULER.LOGGING_FULL,
  log_history => 60,
  comments => 'low priority job class'
);
Dropping
dbms_scheduler.drop_job_class('low_priority_class, high_priority_class');
Assigning
dbms_scheduler.set_attribute(
  name => 'reports_jobs',
  attribute => 'job_class',
  value => 'low_priority_class'
);
Prioritizing
dbms_scheduler.set_attribute(name => 'reports_jobs', attribute => 'job_priority', value => 2);
Alter attributes
dbms_scheduler.set_attribute (
  name => 'reports_jobs',
  attribute => 'start_date',
  value => '15-JAN-08 08:00:00'
);
Scheduler programs
Creating the program
dbms_scheduler.create_program (
  program_name => 'stats_program',
  program_type => 'stored_procedure',
  program_action => 'dbms_stats.gather_schema_stats',
  number_of_arguments => 1,
  comments => 'gather stats for a schema'
);
Creating the argument
dbms_scheduler.define_program_argument(
  program_name => 'stats_program',
  argument_position => 1,
  argument_type => 'varchar2'
);
Dropping the argument
dbms_scheduler.drop_program_argument(
  program_name => 'stats_program',
  argument_position => 1
);
Dropping the program
dbms_scheduler.drop_program(
  program_name => 'stats_program',
  force => true
);
Enable/Disable
dbms_scheduler.enable('stats_program');
dbms_scheduler.disable('stats_program');
Schedules
Create
dbms_scheduler.create_schedule(
  schedule_name => 'nightly_8_schedule',
  start_date => systimestamp,
  repeat_interval => 'FREQ=DAILY; BYHOUR=20',
  comments => 'run nightly at 8:00pm'
);
Remove dbms_scheduler.drop_schedule('nightly_8_schedule');
Intervals
Interval elements
FREQ - required; values are yearly, monthly, weekly, daily, hourly, minutely,
secondly
INTERVAL - how often the frequency repeats; the default 1 with FREQ=DAILY means
every day, 2 would be every other day
BYMONTH - can use (1-12) or (JAN-DEC) or (1,3,12), etc
BYWEEKNO - the week number
BYYEARDAY - the day of the year as a number
BYMONTHDAY - (1-31), -1 means last day of the month
BYDAY - (MON-SUN)
BYHOUR - (0-23)
BYMINUTE - (0-59)
BYSECOND - (0-59)
Interval rules
Interval examples
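Some illustrative calendar strings (assumed examples, worth verifying against the documentation):
FREQ=DAILY; INTERVAL=2 - every other day
FREQ=WEEKLY; BYDAY=MON,WED,FRI; BYHOUR=18 - 6pm every Monday, Wednesday and Friday
FREQ=MONTHLY; BYMONTHDAY=-1 - the last day of every month
FREQ=YEARLY; BYMONTH=MAR; BYMONTHDAY=15 - every 15th of March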
dbms_scheduler.evaluate_calendar_string:
<calendar_string>, <start_date>, <return_date_after>, <next_run_date>
Testing Interval
declare
  start_date timestamp;
  return_date_after timestamp;
  next_run_date timestamp;
BEGIN
  start_date := to_timestamp_tz('10-oct-2007 10:00:00', 'DD-MON-YYYY HH24:MI:SS');
  return_date_after := start_date;
  for i in 1..10 loop
    dbms_scheduler.evaluate_calendar_string('freq=monthly; interval=2; bymonthday=15',
      start_date, return_date_after, next_run_date);
    dbms_output.put_line('next_run_date: ' || next_run_date);
    return_date_after := next_run_date;
  end loop;
END;
/
Managing Chains
In order to manage chains you need both the create job and rules engine privileges; there
are many other options that allow you to drop a chain, drop rules from a chain, disable a
chain, alter a chain and so on (see the Oracle docs for more information)
Privilege
dbms_rule_adm.grant_system_privilege(dbms_rule_adm.create_rule_set_obj, 'vallep');
dbms_rule_adm.grant_system_privilege(dbms_rule_adm.create_rule_obj, 'vallep');
dbms_rule_adm.grant_system_privilege(dbms_rule_adm.create_evaluation_context_obj, 'vallep');
Create
dbms_scheduler.create_chain(
  chain_name => 'test_chain',
  rule_set_name => NULL,
  evaluation_interval => NULL,
  comments => NULL
);
Define chain
dbms_scheduler.define_chain_step('test_chain', 'step1', 'program1');
dbms_scheduler.define_chain_step('test_chain', 'step2', 'program2');
dbms_scheduler.define_chain_step('test_chain', 'step3', 'program3');
Note: to run a chain you can create a job of type CHAIN, or use run_chain to run the
chain without creating a job first.
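A chain also needs rules and must be enabled before it can run; a sketch continuing the example above (step conditions and the job name are illustrative):
dbms_scheduler.define_chain_rule('test_chain', 'TRUE', 'START step1');
dbms_scheduler.define_chain_rule('test_chain', 'step1 completed', 'START step2, step3');
dbms_scheduler.define_chain_rule('test_chain', 'step2 completed and step3 completed', 'END');
dbms_scheduler.enable('test_chain');
dbms_scheduler.run_chain('test_chain', NULL, 'chain_run_1'); (run without creating a job first)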
Managing Events
You can create both jobs and schedules that are based strictly on events and not calendar
time. There are two attributes that need highlighting
event_condition - a conditional expression that takes its values from the event source
queue table and uses Oracle Streams Advanced Queuing rules. You specify object
attributes in this expression and prefix them with tab.user_data. Review the
dbms_aqadm package to learn about advanced queuing and related rules.
queue_spec - determines the queue into which the job-triggering event will be queued.
There are many more options than shown below; please refer to the Oracle
documentation for a full listing.
Create event based job
BEGIN
  dbms_scheduler.create_job(
    job_name => 'test_job',
    program_name => 'test_program',
    start_date => '15-JAN-08 08:00:00',
    event_condition => 'tab.user_data.event_name = ''FILE_ARRIVAL''',
    queue_spec => 'test_events_q',
    enabled => true,
    comments => 'An event based job');
END;
Note: the job will run when the event indicates that a file has arrived.
Create event based schedule
BEGIN
  dbms_scheduler.create_event_schedule(
    schedule_name => 'appowner.file_arrival',
    start_date => systimestamp,
    event_condition => 'tab.user_data.object_owner = ''APPOWNER''
      and tab.user_data.event_name = ''FILE_ARRIVAL''
      and extract(hour from tab.user_data.event_timestamp) < 12',
    queue_spec => 'test_events_q');
END;
Note: the schedule will start the job when the event indicates that a file has arrived
before noon.
Windows
Creating a window using a schedule (so the schedule will open the window)
dbms_scheduler.create_window(
  window_name => 'work_hours_window',
  resource_plan => 'day_plan',
  schedule_name => 'work_hours_schedule',
  duration => interval '10' hour,
  window_priority => 'high',
  comments => 'Work Hours Window'
);
Opening a window manually
dbms_scheduler.open_window(
  window_name => 'work_hours_window',
  duration => interval '20' minute,
  force => true
);
Closing a window manually
dbms_scheduler.close_window(window_name => 'work_hours_window');
Disable window
dbms_scheduler.disable('SYS.work_hours_window');
Purging logs
PURGE_LOG - an automatic scheduler job that deletes log entries older than 30 days
GATHER_STATS_JOB - gathers optimiser statistics; it runs in two maintenance
windows, weeknight_window and weekend_window.
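You can also purge the scheduler log manually (arguments illustrative):
exec dbms_scheduler.purge_log(log_history => 7, which_log => 'JOB_LOG');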