
ASM manages only database files (OMF), including datafiles, tempfiles, control files, online
redo logs, archive logs, RMAN backup sets, server parameter files, change tracking files,
and flashback logs.
An ASM instance is required on each node.
An ASM instance has no datafiles, controlfile, redo logs or dictionary.
Processes dedicated to ASM are RBAL and ARBn.
Stripe size is 128K for controlfiles and logs (fine striping), 1M for all others (coarse).

Failure Group and Redundancy

A Failure Group is a group of disks that depend on a single point of failure, such as a
controller; each mirror copy should be located in a different Failure Group.
Normal redundancy = 2-way mirroring, requires 2 failure groups
External redundancy = no ASM mirroring, protection is provided externally (e.g. hardware RAID)
High redundancy = 3-way mirroring, requires 3 failure groups

When configured for RAC, the disks must be physically shared.

ASM startup
ASM startup is configured using:
chkconfig --list oracleasm
The corresponding script used to start it up is:
/etc/init.d/oracleasm

Migrate a database to ASM

#RMAN must be used to migrate data files to ASM storage
alter system set db_create_file_dest='+DATA' scope=spfile;
alter system set control_files='' scope=spfile;
shutdown immediate;
startup nomount;
restore controlfile from '/u01/ORCL/control1.ctl';
alter database mount;
backup as copy database format '+DATA';
switch database to copy;
recover database;
alter database open;
alter tablespace temp add tempfile;
alter database tempfile '/u01/ORCL/temp1.dbf' drop;
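Once the switch to the copy completes, a quick check confirms the files now live in ASM. A sketch (run from RMAN and SQL*Plus respectively; output paths should all start with '+DATA'):

```sql
-- From RMAN: list the current datafile locations after the switch
REPORT SCHEMA;

-- From SQL*Plus: the same check against the dictionary
SELECT name FROM v$datafile;
```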

#OUI and DBCA may be used for installation from any node in RAC

#The packages are located in the OS DVD media

rpm -Uvh oracle-support-1.0.3-1.i386.rpm
rpm -Uvh oracleasm-2.4.21-EL-1.0.3-1.i686.rpm
#this one is not on the DVD, you have to download it
rpm -Uvh oracleasmlib-1.0.0-1.i386.rpm

To create an entry for each ASM instance in the OCR, use the following:
srvctl add asm -n london1 -i +ASM1 -o $ORACLE_HOME
srvctl add asm -n london2 -i +ASM2 -o $ORACLE_HOME
#other srvctl verbs: enable, disable, config, status
ASM parameters
*.asm_diskstring='/u03/asmdisks/*','/u04/asmdisks/*' #helps ASM find disks; default is null, which means ORCL:*
*.asm_power_limit=1 #determines how aggressively the rebalancing operation is performed, 1 (slowest) to 11 (fastest), default 1

#RAC required
*.cluster_database=true #set to true for RAC

*.large_pool_size=41943040 #12M-16M is good
Note: it may be worth trying to tune *.db_cache_size=50000000 (default is 24M)

diskgroup creation
#from the ASM instance
#Normal redundancy means defining two failure groups, i.e. two-way mirroring
CREATE DISKGROUP dgroupA NORMAL REDUNDANCY
FAILGROUP ctlr1 DISK '/dev/raw/raw1','/dev/raw/raw2'
FAILGROUP ctlr2 DISK '/dev/raw/raw3','/dev/raw/raw4';

#add one disk; discovers disks in ASM_DISKSTRING

ALTER DISKGROUP diskgroup1 ADD DISK '/dev/sdi1';
#add many disks; discovers disks in ASM_DISKSTRING
ALTER DISKGROUP diskgroup1 ADD DISK '/dev/sdi*';
#resize a disk
alter diskgroup d1 resize disk ...
#If you do not include a FAILGROUP clause, a new failure group is created for
the new disk.
#You can specify a failure group for the new disk as follows:
ALTER DISKGROUP diskgroup1 ADD FAILGROUP failgroup1 DISK '/dev/sdi1';

Various commands
#View disk info, read from the disk headers
ASMCMD> lsdsk -kpIt
#If a disk was part of a dropped disk group but its contents have not been removed,
#use dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=4 to make it a CANDIDATE again

#10g: to manually drop a disk group you must also manually delete the headers
drop diskgroup dgroup1;
dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=4
#11g, drop a disk group
drop diskgroup dgroup1 force including contents;
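The dd header wipe simply writes zeros over the first blocks of the device. Its effect can be demonstrated safely on a scratch file standing in for the raw device (the file name is illustrative; never run this against a disk that is in use):

```shell
# Stand-in for /dev/raw/raw1: an 8K scratch file full of 'A' bytes
head -c 8192 /dev/zero | tr '\0' 'A' > fake_disk.img

# Same command the notes use to clear an ASM disk header: zero the first
# four 1K blocks. conv=notrunc mimics device semantics (a block device
# cannot be truncated, so only the first 4K is overwritten).
dd if=/dev/zero of=fake_disk.img bs=1024 count=4 conv=notrunc 2>/dev/null

# The first 4K is now all NUL bytes, the remaining 4K is untouched
head -c 4096 fake_disk.img | tr -d '\0' | wc -c
tail -c 4096 fake_disk.img | tr -d '\0' | wc -c
```

The first count is 0 (header region fully zeroed) and the second 4096 (data beyond the header intact), which is why a wiped disk shows up again as a CANDIDATE.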

alter diskgroup diskgroup1 rebalance power 4

chkconfig --list oracleasm

Configure the driver

Configuration is stored in [/etc/sysconfig/oracleasm]
#Run on each node
/etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y

/etc/init.d/oracleasm start
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm status
#add a disk to ASM; the volume name must be upper case
/etc/init.d/oracleasm createdisk VOL1 /dev/sdb1

#scandisks needs to be run on the other ASM nodes to make them aware of the new disk

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm querydisk VOL1
/etc/init.d/oracleasm deletedisk VOL1

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan

ASM tools
srvctl start/stop asm -n node_name : start/stop ASM instance
srvctl remove asm -n node_name : delete ASM instance
srvctl config asm -n node_name : Verify ASM instance

tablespace creation
#from client database
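From the client database, a tablespace on ASM needs only the disk group name; a minimal sketch (tablespace names and size are illustrative):

```sql
-- OMF-style: ASM picks the file name inside the disk group
CREATE TABLESPACE ts1 DATAFILE '+DATA' SIZE 100M;

-- With db_create_file_dest='+DATA' set, even the disk group can be omitted
CREATE TABLESPACE ts2;
```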

Prevent database from accessing ASM


ASM Dynamic Views: ASM Instance Information

V$ASM_ALIAS: shows every alias for every disk group mounted by the ASM instance
V$ASM_CLIENT: shows which database instance(s) are using any ASM disk groups mounted by
this ASM instance
V$ASM_DISK: discovers and lists disks in ASM_DISKSTRING, including disks that are not
part of any ASM disk group
V$ASM_DISKGROUP: describes the ASM disk groups mounted by the ASM instance
[name, allocation_unit_size, compatibility, database_compatibility]. Discovers disks
V$ASM_FILE: lists each ASM file in every ASM disk group mounted by the ASM instance
V$ASM_OPERATION: like its counterpart V$SESSION_LONGOPS, shows each long-running ASM
operation in the ASM instance
V$ASM_TEMPLATE: lists each template present in every ASM disk group mounted by the ASM instance
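Typical interactive checks against these views, run from the ASM instance (a sketch):

```sql
-- State and free space of each mounted disk group
SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;

-- Which database instances are connected to this ASM instance
SELECT group_number, instance_name, db_name, status
FROM   v$asm_client;
```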

Fixed views
X$KFALS: ASM aliases
X$KFDSK: ASM disks
X$KFFIL: ASM files
X$KFGRP: ASM disk groups
X$KFGMG: ASM operations
X$KFKID: ASM disk performance
X$KFNCL: ASM clients
X$KFMTA: ASM templates

11g requires compatible.asm=11.1


Background processes
RBAL: coordinates rebalance activity
ARBn: performs the data extent movements of a rebalance
GMON: disk group monitor
PSP0: process spawner, starts and stops ARBn

File Names

Fully Qualified File Names
+group/dbname/file_type/file_type_tag.file.incarnation

Numeric File Names
+group.file.incarnation

Alias File Names
+group/your_directory/your_alias

ALTER DISKGROUP diskgroup1 ADD DIRECTORY '+diskgroup1/directory1';

ALTER DISKGROUP diskgroup1 ADD ALIAS '+diskgroup1/directory1/file1' FOR '<fully_qualified_file_name>';

PARAMETERFILE Parameter file
DUMPSET Dump set
CONTROLFILE Control file
ARCHIVELOG Archived redo log
ONLINELOG Online redo log
AUTOBACKUP Autobackup control file
XTRANSPORT Transportable tablespace
CHANGETRACKING Change tracking file
FLASHBACK Flashback log
DATAGUARDCONFIG Data Guard configuration

CREATE TABLESPACE ts1 DATAFILE '+diskgroup1(datafile)/ts1.dbf';


ASM administration using FTP

Needs XML DB (installed by default with DBCA)
@$ORACLE_HOME/rdbms/admin/catxdbdbca.sql 7777 8888 #args: FTP port, HTTP port
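With XML DB configured, ASM files appear under the virtual /sys/asm folder over FTP; a sketch of a session (host, port 7777, and file names are illustrative):

```
ftp localhost 7777
ftp> cd /sys/asm/DATA/ORCL/DATAFILE
ftp> bin
ftp> get USERS.259.615115925
```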

ASM administration using ASMCMD

asmcmd -p


ASM 11g New Features

Disk Group: striping uses 128K stripes, automatic rebalance
+ Failure group, only for Normal or High Redundancy disk groups
+ Disk
+ ASM File, max size 140 petabytes (11g), 35 terabytes (10g)
+ File Extent (1, 4, 16, 32, 64 MB): resides on a single disk; a small extent grows to
the next size before a new extent is created, reducing shared pool usage
+ AU Allocation Unit
#basic allocation unit
#[compatible.rdbms=11.1: size may be 1, 2, 4, 8, 16, 32 or 64 MB]
#[compatible.rdbms=10.2: size may be 1, 2, 4 or 8 MB]
The database automatically defragments when it has problems allocating or extending
extents. A manual disk group rebalance avoids external fragmentation.

Fast Mirror Resync

Requires compatible.rdbms=11.1
DISK_REPAIR_TIME (default 3.6h) enables Fast Mirror Resync: a disk that goes offline is
not dropped until the specified time has elapsed. Default unit is hours.
alter diskgroup dgroupA set attribute 'disk_repair_time'='2D6H30M';
#the change is effective only for disks of the group that are currently online
If no disk content is damaged or modified (temporary failure), only the changed
extents are resynchronized.
1) Take the disk offline:
alter diskgroup dgroup1 offline disk data_00001 drop after 0h; #overrides disk_repair_time
2) Wipe out the disk headers:
dd if=/dev/zero of=asm_disk1 bs=1024 count=100
3) Add the disk back to the group:
alter diskgroup dgroup1 add disk '/dev/raw/raw1' size 100M;
alter diskgroup dA online disks all;
#takes online all disks of the given disk group
alter diskgroup dgroupA online;
#the disk is opened write-only, only stale extents are copied, then the disk goes read/write
alter diskgroup dA offline disks in failgroup f1 DROP AFTER 4h;
#overrides disk_repair_time: keeps the disks offline without dropping them until the
time specified. Remember that a disk group may have several failure groups
alter diskgroup dA online disks in failgroup f1 POWER 2 WAIT;
#POWER sets the rebalance power for the resync; WAIT makes the command return only
when the operation completes
alter diskgroup dA drop disks in failgroup f1 FORCE;
#use to drop disks that you are unable to repair. Remember that a disk group may
have several failure groups

* Mirroring *
2-way, 3-way, external redundancy
Normal redundancy: 2 failure groups, 2-way mirroring; all local disks belong to the same
failure group; only 1 preferred failure group per group
High redundancy: 3 failure groups, 3-way mirroring; maximum of 2 failure groups per
site with local disks; up to 2 preferred failure groups per group
External redundancy: no failure groups
* ASM Preferred Mirror Read *
Requires compatible.rdbms=11.1
Once you configure preferred mirror reads (see asm_preferred_read_failure_groups),
every node reads only from its local disks
create diskgroup dg6 external redundancy disk '/dev/raw/raw1'
attribute 'au_size'='8M', 'compatible.asm'='11.1';
* OS User *
SYSASM instead of SYSDBA, member of OSASM group
grant sysasm to aldo;

Variable Size Extents

The extent size is automatically increased based on file size. Extent size can vary
within a single file and across files. No manual configuration is required.
Performance improves when opening files: less memory is needed to manage the extent
map and fewer extent pointers are required.

Compatibility Params
Compatibility can only be advanced, never lowered
ASM 11g supports both 11g and 10g databases; compatible.asm and compatible.rdbms must
be manually advanced since their default values are 10.1
* Attributes *
compatible.rdbms #Default 10.1. The minimum database version allowed to mount the disk
group; once increased it cannot be lowered. Must be advanced after advancing compatible.asm
#11.1 enables ASM Preferred Mirror Read, Fast Mirror Resync, Variable
Size Extents and different Allocation Unit sizes (see AU Allocation Unit)
compatible.asm #Default 10.1. Controls the ASM metadata structures; once increased it
cannot be lowered. Must be advanced before advancing compatible.rdbms
template.tname.redundancy: unprotected, mirror, high
template.tname.striping: coarse, fine
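These template attributes can be combined into a per-disk-group template; a sketch of adding and using a custom one (the template, disk group, and tablespace names are illustrative):

```sql
-- Custom template: files created with it are unprotected and fine-striped
ALTER DISKGROUP dgroup1 ADD TEMPLATE hot_files ATTRIBUTES (UNPROTECTED FINE);

-- Reference the template in parentheses when creating the file
CREATE TABLESPACE ts_hot DATAFILE '+dgroup1(hot_files)' SIZE 100M;
```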
* Check command *
Verifies ASM disk group metadata directories, cross-checks file extent maps and
allocation tables, checks the link between the metadata directory and the file
directory, checks the links of the alias directory, checks for unreachable blocks
in the metadata directories; repair [default]/norepair; disk consistency is verified
10g: check all, file, disk, disks in failgroup; -> 11g: check;

alter diskgroup t dismount;
alter diskgroup t mount RESTRICTED; #or: startup restrict;
#clients won't be able to access the disk group; if you add a disk, the rebalance
runs faster because no client locking is needed
alter diskgroup t dismount;
alter diskgroup t mount [NOFORCE(def.) | FORCE];
#NOFORCE won't mount an incomplete disk group
#FORCE: you must restore the missing disks before disk_repair_time expires; FORCE
requires at least one disk offline and fails if all disks are online
drop diskgroup g1 force including contents;
#the command fails if the disk group is in use; FORCE must be specified together
with INCLUDING CONTENTS

cp +DATA/.../TBSFV.223.333 +DATA/.../pippo.bak #copy a file locally
cp +DATA/.../TBSFV.223.333 /home/.../pippo.bak #copy a file to the OS, and vice versa
cp +DATA/.../TBSFV.223.333 sys@mydb.+ASM2:+D2/jj/.../pippo.dbf
#copy to a remote ASM instance
lsdsk <-d><-I><-[l]k><-[l]s><-p><-t>;
#list visible disks. In connected mode (default) it reads the V$... and GV$... views;
in non-connected mode it scans the disk headers after a warning message
<-I> force non-connected mode
<-k> detailed info
<-s> shows I/O stats
<-p> status
<-t> repair-related info
<-d> limits output to one disk group
remap dg5 d1 5000-7500;
#remaps a range of unreadable bad disk sectors with correct content; repairs blocks
that have I/O errors. EM may also be used
md_backup [-b backup_file(def. ambr_backup_intermediate_file)] [-g
#backs up disk group metadata into a text file
mkdir +DGROUP1/abc
mkalias TBSF.23.1222 +DGROUP1/abc/users.dbf

MD_RESTORE command
Recreates disk groups and restores their metadata only, from the previously backed-up
file. Cannot recover corrupted data
md_restore [-b backup_file(def. ambr_backup_intermediate_file)]
<-t [FULL(create disk groups and restore their metadata), NODG(restore metadata into an
existing disk group), NEWDG(create a new disk group and restore metadata)]>;
<-f> write the commands to a file
<-g> select disk groups, all if undefined
<-o> rename a disk group
<-i> ignore errors
md_restore -t newdg -o 'DGNAME=dg3:dg4' -b your_file
#restores dg3 giving it a different name, dg4