Oracle 10g RAC

CONTENTS
1. REQUIREMENTS
2. OPERATING SYSTEM INSTALLATION
3. CONFIGURING OPERATING SYSTEM
4. CONFIGURING USER and SHARED DISK
5. CONFIGURING OCFS2 (Oracle Clustered File System)
6. INSTALL AND CONFIGURE ORACLE CLUSTERWARE
7. INSTALL ORACLE DATABASE SOFTWARE
8. CONFIGURING DISKS FOR ASM
9. CREATE ORACLE DATABASE USING OCFS2 AND ORACLE ASM
10. COMMAND LINE UTILITIES
11. HOW TO CHANGE RAC DB NOARCHIVELOG TO ARCHIVELOG MODE
12. UPGRADING CLUSTERWARE AND ORACLE DATABASE
13. CLUSTERWARE AND ASM ADMINISTRATION
----------------------------------------------------------------------------------------------
1. REQUIREMENTS
----------------------------------------------------------------------------------------------
Hardware:
========
Servers/Nodes : Min 2 nodes
Processor     : PIV and above
RAM           : Min 1 GB
Hard Disk     : 15 GB for Operating System and 10 GB for Oracle Clusterware and Oracle Database Software
Network Cards : 2 NIC cards in each node (1 for Public IP, 1 for Private IP)
Shared Disk   : iSCSI Shared Storage
Software:
========
Operating System
Oracle Clusterware Software
Oracle Database Software
OCFS2 rpm : http://oss.oracle.com/projects/ocfs/
ASM rpm   : http://oss.oracle.com/projects/asm/

Partition layout for each node:
/        - 10 GB        - Fixed Size - for root
/boot    - 200 MB       - Fixed Size - for Boot Loader
/usr     - 8 GB         - Fixed Size - for selected packages [select all packages while installation]
swap     - Twice of RAM - Fixed Size - swap / virtual memory
/tmp     - 1 GB         - Fixed Size - temporary files area
/var     - 2 GB         - Fixed Size - O/S log files
/opt     - 500 MB       - Fixed Size - for optional packages
/oracle  - 10 GB        - for oracle database software
/crs     - 5 GB         - for oracle clusterware software
/home    - 10 GB        - for storing user files

Also configure during installation:
Host Name
Public IP
Firewall - disabled
SE Linux - disabled
Packages - select all packages
Public IP  - Mask: 255.255.255.0
Private IP - Mask: 255.0.0.0
# ifconfig
# hostname
# vi /etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost

#PUBLIC IP
192.168.10.1    rac1
192.168.10.2    rac2

#PRIVATE IP
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv

#VIRTUAL IP
192.9.200.226   rac1-vip
192.9.200.227   rac2-vip
Note: Ensure that the node names are not included in the loopback address entry in the /etc/hosts file. If the machine name is listed in the loopback address entry, remove it so that the 127.0.0.1 line contains only localhost.localdomain and localhost.
vi /etc/sysctl.conf
kernel.shmmax = 4294967295
kernel.shmall = 268435456
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=1048576
net.core.rmem_max=1048576
net.core.wmem_default=262144
net.core.wmem_max=262144
Load the sysctl settings (without reboot) on all nodes
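The loading command itself is not shown above; the standard way to re-read /etc/sysctl.conf without a reboot is:

```shell
# Re-read /etc/sysctl.conf and apply the kernel parameters immediately
# (run as root on every node)
sysctl -p
```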
vi /etc/sysconfig/network
HOSTNAME=rac1
d. Check Firewall is disabled
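The group and user creation commands for this step are not shown; a typical sequence for creating the oracle software owner (group names follow the oinstall/dba convention used later in this guide; exact options are an assumption, not the original commands):

```shell
groupadd oinstall                      # Oracle inventory group
groupadd dba                           # OSDBA group
useradd -g oinstall -G dba -m oracle   # software owner for Clusterware and Database
```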
# passwd oracle
partition    size      mountpoint
/dev/sdb1    500 MB    /ocr
/dev/sdb2    500 MB    /vote
/dev/sdb3    10 GB     /oradata
/dev/sdb5    10 GB     +ORCL_DATA1

# fdisk /dev/sdb
   Start    End      Blocks   Id   System
> Currently there are no partitions
   Start    End      Blocks   Id   System
       1     13      104391   83   Linux
      14     26     104422+   83   Linux
      27     33      56227+   83   Linux
      34  16709   133949970    5   Extended
> Type 'p' to print all partitions

   Start    End      Blocks   Id   System
       1    123      987966   83   Linux
     124   2556   19543072+   83   Linux
    2557   4989   19543072+   83   Linux
    4990  16709    94140900    5   Extended
    4990   9853   39070048+   83   Linux
    9854  14717   39070048+   83   Linux
> Type 'w' to save and quit

After creating all required partitions, you should now update the kernel with the partition changes using the following command as the root user account:
[root@rac1 ~]# partprobe                 > issue this command from all nodes
[root@rac1 root]# fdisk -l /dev/sdb      > to check the updated list of partitions on the remaining nodes:
   Start    End      Blocks   Id   System
       1    123      987966   83   Linux
     124   2556   19543072+   83   Linux
    2557   4989   19543072+   83   Linux
    4990  16709    94140900    5   Extended
    4990   9853   39070048+   83   Linux
    9854  14717   39070048+   83   Linux
Ensure that the date and time of all the nodes are the same (unless you are using Network Time Protocol).
To set the date and time now, you can execute the following commands:
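The original commands did not survive here; on Linux the clock can be set by hand with date (the timestamp below is only a placeholder, use the current time):

```shell
# Set the system clock on each node; replace the value with the actual date/time
date -s "2007-05-29 11:40:00"
```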
e. Configure the hangcheck-timer kernel module (all nodes). The module is located at:
/lib/modules/2.6.9-22.EL/kernel/drivers/char/hangcheck-timer.ko

hangcheck_margin: This parameter defines the maximum hang delay that should be tolerated before hangcheck-timer resets the RAC node. It defines the margin of error in seconds. The default value is 180 seconds; Oracle recommends setting it to 180 seconds.
Set the hangcheck-timer settings in /etc/modprobe.conf (all nodes).
Add the hangcheck-timer module in /etc/rc.local to probe it at every startup:
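The exact lines for the two files are not shown above; the conventional 10g settings matching the tick/margin values reported in the log message below are:

```
# /etc/modprobe.conf (all nodes)
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

# /etc/rc.local (all nodes) - load the module at every startup
modprobe hangcheck-timer
```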
After loading, the kernel log (/var/log/messages) reports:
May 29 11:40:35 rac1 kernel: Hangcheck: starting hangcheck timer 0.5.0 (tick is 30 seconds, margin is 180 seconds).
f. Configure racnodes for remote access:
Before you can install Oracle RAC 10g, you must configure secure shell (SSH) for the UNIX user account you
plan to use to install Oracle Clusterware 10g and the Oracle Database 10g software. The installation and
configuration tasks described in this section will need to be performed on both Oracle RAC nodes. As
configured earlier in this article, the software owner for Oracle Clusterware 10g and the Oracle Database 10g
software will be "oracle".
The goal here is to setup user equivalence for the oracle UNIX user account. User equivalence enables the
oracle UNIX user account to access all other nodes in the cluster (running commands and copying files)
without the need for a password. Oracle added support in 10g Release 1 for using the SSH tool suite for setting
up user equivalence. Before Oracle Database 10g, user equivalence had to be configured using remote shell
(RSH).
Note: The first step in configuring SSH is to create an RSA public/private key pair on both Oracle RAC nodes
in the cluster. The command to do this will create a public and private key for RSA (for a total of two keys per
node). The content of the RSA public keys will then need to be copied into an authorized key file which is then
distributed to both Oracle RAC nodes in the cluster.
Perform all the below steps (1 to 3) on all the nodes.
1. Log on as the oracle UNIX user account.
# su - oracle
2. If necessary, create the .ssh directory in the oracle user's home directory and set the correct
permissions to ensure that only the oracle user has read and write permissions:
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
3. Enter the following command to generate an RSA key pair (public and private key) for the SSH protocol:
$ /usr/bin/ssh-keygen -t rsa
At the prompts:
Accept the default location for the key files (press [ENTER]).
Perform the following steps from the first node only.
First, determine if an authorized key file already exists on the node (~/.ssh/authorized_keys). In most
cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist,
create it now:
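If ~/.ssh/authorized_keys does not exist yet, it can be created empty first (a minimal sketch):

```shell
# Create an empty authorized key file, with safe permissions on the .ssh directory
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
```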
In this step, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the content of the
~/.ssh/id_rsa.pub public key from both Oracle RAC nodes in the cluster to the authorized key file
just created (~/.ssh/authorized_keys). Again, this will be done from rac1. You will be
prompted for the oracle UNIX user account password for both Oracle RAC nodes accessed.
The following example is being run from rac1 and assumes a two-node cluster, with nodes rac1
and rac2:
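The copy commands themselves are not shown; from rac1 the public key of each node can be appended to the authorized key file roughly as follows (node names as configured earlier; you will be prompted for the oracle password of each node):

```shell
# Gather both nodes' RSA public keys into the local authorized key file
ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Then distribute the completed file to the other node
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
```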
Change the permission of the authorized key file for both Oracle RAC nodes in the cluster by
logging into the node and running the following:
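The permission-change command is not shown above; restricting the file to the owner looks like this:

```shell
mkdir -p ~/.ssh                    # ensure the directory exists
touch ~/.ssh/authorized_keys       # ensure the file exists
chmod 600 ~/.ssh/authorized_keys   # only the oracle user may read or modify the key file
```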
In previous editions of this article, this would be the time where you would need to download the OCFS2
software from http://oss.oracle.com/. This is no longer necessary since the OCFS2 software is included with
Oracle Enterprise Linux. The OCFS2 software stack includes the following packages:
Install the OCFS2 rpms (all nodes):
[root@rac1 ocfs]# ls
ocfs2-2.6.9-78.EL-1.2.3-1.i686.rpm
ocfs2-tools-1.2.2-1.i386.rpm
ocfs2console-1.2.2-1.i386.rpm
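The install command itself is missing here; with the three rpms in the current directory it is typically a single rpm transaction (a sketch):

```shell
# Install kernel module, tools and console packages in one transaction
# so their inter-dependencies resolve
rpm -Uvh *.rpm
```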
/etc/init.d/o2cb status          > check status
/etc/init.d/o2cb load            > load all ocfs modules
/etc/init.d/o2cb online ocfs2    > online the cluster we created: ocfs2
/etc/init.d/o2cb offline ocfs2   > offline the cluster we created: ocfs2
/etc/init.d/o2cb unload          > unload all ocfs modules

After installing the OCFS2 rpms, check the OCFS2 cluster status:
/etc/init.d/o2cb status
Format the drive with the OCFS2 Filesystem (from only ONE node):
Note: OCFS2 supports formatting and mounting the OCFS2 file system using either the GUI tool ocfs2console or
the command-line tool mkfs.ocfs2. From the ocfs2console utility, use the menu [Tasks] -> [Format].
The instructions below demonstrate how to create the OCFS2 file system using the command-line tool
mkfs.ocfs2.
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocr /dev/sdb1
where -b > block size
      -C > cluster size
      -N > number of node slots
      -L > volume label
Note: You need to format every disk that will use the OCFS2 file system:
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L vote /dev/sdb2
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradata /dev/sdb3
Mount the OCFS2 Filesystem
Now that the file system is created, we can mount it. Let's first do it using the command line, then I'll show how
to include it in /etc/fstab to have it mounted on each boot.
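The mount commands were not preserved here; based on the device-to-mountpoint mapping used when partitioning, mounting by hand looks like this (run as root on every node):

```shell
# Create the mount points once, then mount each OCFS2 volume
mkdir -p /ocr /vote /oradata
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocr
mount -t ocfs2 -o datavolume,nointr /dev/sdb2 /vote
mount -t ocfs2 -o datavolume,nointr /dev/sdb3 /oradata
```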
Add these entries to /etc/fstab on all nodes so the volumes are mounted at every boot:
/dev/sdb1   /ocr       ocfs2   _netdev,datavolume,nointr   0 0
/dev/sdb2   /vote      ocfs2   _netdev,datavolume,nointr   0 0
/dev/sdb3   /oradata   ocfs2   _netdev,datavolume,nointr   0 0
drwxrwxr-x 3 oracle dba 4096 May 25 21:29 /ocr
drwxrwxr-x 3 oracle dba 4096 May 25 21:29 /vote
drwxrwxr-x 3 oracle dba 4096 May 25 21:29 /oradata
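The listing above implies oracle:dba ownership with 775 permissions; the commands that produce it would be along these lines (run as root on every node):

```shell
# Give the oracle software owner control of the clustered file systems
chown -R oracle:dba /ocr /vote /oradata
chmod -R 775 /ocr /vote /oradata
```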
Log out from the root user (GUI) and log in as the oracle user.
a. Download and extract the Oracle 10g Release 2 Clusterware (only Primary node)
[oracle10g@rac1 10gsw]$ unzip 10201_clusterware_linux32.zip
[oracle10g@rac1 10gsw]$ ls
clusterware
[oracle10g@rac1 10gsw]$ cd clusterware/
[oracle10g@rac1 clusterware]$ ls
cluvfy doc install response rpm runInstaller stage upgrade welcome.html
b. Verify the node connectivity (only ONE node):
[oracle10g@rac1 cluvfy]$ ./runcluvfy.sh comp nodecon -n rac1,rac2 -verbose
Verify the access to shared storage (only ONE node):
[oracle10g@rac1 cluvfy]$ ./runcluvfy.sh comp ssa -n rac1,rac2 -verbose
Verify the prerequisites for CRS installation (only ONE node):
[oracle10g@rac1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
If all the above verifications complete successfully, then you can proceed with the CRS
installation.
c. Invoke the Oracle Universal Installer (only ONE node)
[oracle10g@rac1 clusterware]$ ./runInstaller
1. Click Next
2. Choose path for OraInventory /home/oracle/oraInventory, choose Oracle group as oinstall
3. Home Name: OraCrs10g_home
   Path: /crs/app/product/10.2.0/crs
4. Verify requirements and click Next
5. Specify cluster configuration details:
   Cluster Name: crs
   Cluster nodes: rac1   rac1-priv   rac1-vip
                  rac2   rac2-priv   rac2-vip
6. Verify the network interface usage:
   eth0   192.168.10.0   Public
   eth1   10.0.0.0       Private
7. Specify Oracle Cluster Registry Location: (need to select external redundancy only)
   Location: /ocr/ocr1.dbf
8. Specify Voting Disk Location: (need to select external redundancy only)
   Location: /vote/vote1.dbf
9. Click Install to start installation
10. Execute Configuration Scripts:
    Execute orainstRoot.sh on all nodes as root user only
    [root@rac1 ~]# /home/oracle/oraInventory/orainstRoot.sh
    Changing permissions of /home/oracle/oraInventory to 770.
    Changing groupname of /home/oracle/oraInventory to oinstall
    The execution of the script is complete
    [Do not execute simultaneously on all nodes]
    [root@rac1 ~]# /crs/app/oracle/crs/root.sh
    While executing root.sh on any of the remote nodes, if you get a message "eth0 is not public" or any
    similar error, you need to execute the VIPCA (Virtual IP Configuration Assistant) manually.
    Running vipca manually:
    [root@rac2 ~]# sh /crs/app/oracle/crs/bin/vipca
    Enter the proper IP address of your VIP and its alias names, then click Finish
    to complete the configuration.
    You can verify the pinging of the Virtual IP address now:
    [oracle@rac2 ~]$ ping rac1-vip
    [oracle@rac1 ~]$ ping rac2-vip
    Return to the Execute Configuration Scripts screen and click OK
11. Once the configuration scripts run successfully, click Exit to exit the installation
d. Post-install verification (all nodes)
List the cluster nodes:
[oracle@rac1 ~]$ /crs/app/product/crs/bin/olsnodes -n
rac1    1
rac2    2
Check oracle cluster auto-startup scripts:
[oracle@rac1 ~]$ ls -l /etc/init.d/init.*
-r-xr-xr-x  1 root root  1951 May 29 21:30 /etc/init.d/init.crs
-r-xr-xr-x  1 root root  4716 May 29 21:30 /etc/init.d/init.crsd
-r-xr-xr-x  1 root root 35396 May 29 21:30 /etc/init.d/init.cssd
-r-xr-xr-x  1 root root  3192 May 29 21:30 /etc/init.d/init.evmd
Check cluster ready services:
[oracle@rac1 ~]$ ps -ef | grep crs
Check cluster synchronization services:
[oracle@rac1 ~]$ ps -ef | grep css
Check the pinging of the Virtual IPs:
[oracle@rac1 ~]$ ping rac1-vip
[oracle@rac2 ~]$ ping rac2-vip
Verify the prerequisites for RDBMS installation (only ONE node):
[oracle@rac1 cluvfy]$ cd clusterware/cluvfy
[oracle@rac1 cluvfy]$ ./runcluvfy.sh
If all the above verifications complete successfully, then you can proceed with the database
software installation.
a. Download and extract the Oracle 10g Release 2 Database Software (one NODE only)
[oracle10g@rac1 10gRAC]$ unzip Ora10gSetup.zip
[oracle10g@rac1 10gRAC]$ cd database/
[oracle10g@rac1 database]$ ls
doc install response runInstaller stage welcome.html
b. Invoke the Oracle Universal Installer (one NODE only)
[oracle10g@rac1 database]$ ./runInstaller
1. You can verify the cluster installation by clicking Installed Products. Click Next
2. Choose Enterprise Edition
3. Choose Home details:
   Name: OraDb10g_home1
   Path: /oracle10g/oracle/product/10.2.0/db_1
4. Click Select All for installing on all clustered nodes
5. Verify the requirements and click Next
6. Choose Install database Software only
7. Click Install to start installation
8. Execute Configuration Scripts
   Execute root.sh on all nodes (one at a time) as root user only
   [root@rac1 ~]# /oracle/product/10.2.0/db_1/root.sh
   Once the scripts are run successfully, return to the Execute Configuration
   Scripts window and click OK
9. Click Exit to exit the installation
c. Set the Oracle Environment
Edit the .bash_profile of the oracle user (all nodes):
export ORACLE_BASE=/oracle/product/
export ORACLE_HOME=/oracle/product/10.2.0/db_1
export ORA_CRS_HOME=/crs/oracle/product/10.2.0/crs
export ORACLE_SID=orcl1        --> change the SID on the other nodes (orcl2, ...)
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORA_CRS_HOME/lib
Execute the .bash_profile (all nodes):
[oracle@rac1 ~]$ . .bash_profile
Verify the environment:
[oracle@rac1 ~]$ echo $ORACLE_HOME
/oracle/product/10.2.0/db_1
Edit /etc/rc.local to add permission details to be assigned at every startup (all nodes):
[root@rac1 ~]# vi /etc/rc.local
chown oracle:dba /dev/raw/raw4; chmod 600 /dev/raw/raw4
Parameters that may differ per instance include fast_start_mttr_target and instance_groups (for parallel query operations).
Spfile Features:
Oracle recommends using an spfile
Easier to use and manage
Single, central location
Parameters are applicable to all instances
It permits dynamic changes
Persistent
Can specify common values and instance-specific values:
*.open_cursors=300
racins1.open_cursors=200
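The sid-qualified entry overrides the '*' default, so instance racins1 runs with open_cursors=200 while every other instance uses 300. Such an instance-specific value can also be set at runtime with ALTER SYSTEM and its SID clause (an illustrative example; the parameter and instance name mirror the lines above):

```
SQL> ALTER SYSTEM SET open_cursors=200 SCOPE=BOTH SID='racins1';
```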