
Oracle 10g Real Application Cluster

Topic    : Real Application Clusters
Version  : Oracle 10g Release 2
Platform : RHEL AS 4 Update 4

CONTENTS
Oracle 10g RAC
1. REQUIREMENTS
2. OPERATING SYSTEM INSTALLATION
3. CONFIGURING OPERATING SYSTEM
4. CONFIGURING USER and SHARED DISK
5. CONFIGURING OCFS2 (Oracle Clustered File System)
6. INSTALL AND CONFIGURE ORACLE CLUSTERWARE
7. INSTALL ORACLE DATABASE SOFTWARE
8. CONFIGURING DISKS FOR ASM
9. CREATE ORACLE DATABASE USING OCFS2 AND ORACLE ASM
10. COMMAND LINE UTILITIES
11. HOW TO CHANGE RAC DB FROM NOARCHIVELOG TO ARCHIVELOG MODE
12. UPGRADING CLUSTERWARE AND ORACLE DATABASE
13. CLUSTERWARE AND ASM ADMINISTRATION

----------------------------------------------------------------------------------------------
1. REQUIREMENTS
----------------------------------------------------------------------------------------------
Hardware:
========
Servers/Nodes  : Min 2 nodes
Processor      : PIV and above
RAM            : Min 1 GB
Hard Disk      : 15 GB for the Operating System and 10 GB for the Oracle Clusterware
                 and Oracle Database software
Network Cards  : 2 NIC cards in each node (1 for Public IP, 1 for Private IP)
Shared Disk    : iSCSI Shared Storage

Software:
========
Operating System         : RedHat Linux AS4 Update 4
Oracle Cluster Software  : Oracle 10g Release 2 Clusterware
Oracle Database Software : Oracle 10g Release 2 Enterprise Edition for Linux
OCFS2 rpm                : Download the OCFS2 RPM matching your operating system kernel from
                           http://oss.oracle.com/projects/ocfs/
ASM RPM                  : Download the ASM RPM matching your operating system kernel from
                           http://oss.oracle.com/projects/asm/

----------------------------------------------------------------------------------------------
2. OPERATING SYSTEM INSTALLATION
----------------------------------------------------------------------------------------------
Partitions (select manual disk partitioning):
/          - 10 GB          - Fixed Size - for root
/boot      - 200 MB         - Fixed Size - for Boot Loader
/usr       - 8 GB           - Fixed Size - for selected packages [select all packages during installation]
swap       - Twice of RAM   - Fixed Size - swap / virtual memory
/tmp       - 1 GB           - Fixed Size - temporary files area
/var       - 2 GB           - Fixed Size - O/S log files
/opt       - 500 MB         - Fixed Size - for optional packages
/oracle    - 10 GB          - for Oracle database software
/crs       - 5 GB           - for Oracle clusterware software
/home      - 10 GB          - for storing user files

Host Name  : rac1 and rac2 respectively for both nodes
Public IP  : Configure now or Configure using DHCP
Firewall   : Select no firewall
SE Linux   : Select Disable
Packages   : Select Customize and select Everything

------------------------------------------------------------------------------------------------------
3. CONFIGURING OPERATING SYSTEM
------------------------------------------------------------------------------------------------------
a. Hostname and IP address
Use the 'neat' command to assign the IP addresses and hostnames:
# neat

             RAC1            RAC2
Public IP  : 192.168.10.1    192.168.10.2    Mask: 255.255.255.0
Private IP : 10.10.0.1       10.10.0.2       Mask: 255.0.0.0

[Activate both network cards]

Verify your configuration (on both nodes):

# ifconfig      --> for the IP addresses
# hostname      --> for the hostname

b. Setup the Hosts file (all nodes)

# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost

#PUBLIC IP
192.168.10.1    rac1
192.168.10.2    rac2

#PRIVATE IP
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv

#VIRTUAL IP
192.9.200.226   rac1-vip
192.9.200.227   rac2-vip

Note: Ensure that the node names are not included for the loopback address in the
/etc/hosts file. If the machine name is listed in the loopback address entry as
below:

127.0.0.1 rac1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error
during the RAC installation:
ORA-00603: ORACLE server session terminated by fatal error

Ping each other node to check the connectivity:

[rac1]# ping rac2


[rac1]# ping rac2-priv
[rac2]# ping rac1
[rac2]# ping rac1-priv
Note: Virtual IP will not ping until the clusterware is installed
c. Setup the kernel parameters (all nodes)

vi /etc/sysctl.conf
kernel.shmmax = 4294967295
kernel.shmall = 268435456
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=1048576
net.core.rmem_max=1048576
net.core.wmem_default=262144
net.core.wmem_max=262144
Load the sysctl settings (without reboot) on all nodes

[root@rac1 ~]# sysctl -p
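
As an optional sanity check, you can read a few of the values back and confirm they match the
entries added to /etc/sysctl.conf above (sysctl accepts multiple parameter names in one call):

[root@rac1 ~]# sysctl kernel.shmmax kernel.sem fs.file-max      --> should print the values set above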

Set the hostname and domain name (on all nodes):


Edit the file below:

# vi /etc/sysconfig/network
HOSTNAME=rac1        --> use rac2 on the second node
d. Check that the firewall is disabled

[root@rac1 ~]# /etc/rc.d/init.d/iptables status


Firewall is stopped.
e. Disable SELinux (if enabled) on all nodes:

[root@rac1 ~]# /usr/bin/system-config-securitylevel &


f. Enable/Disable services (both nodes)
# chkconfig sendmail off        --> turn off the sendmail service
# chkconfig cups off            --> turn off the printer service (optional)
# chkconfig xinetd on           --> for the telnet service
# chkconfig telnet on           --> enable telnet
# chkconfig vsftpd on           --> for the ftp service
# service xinetd restart        --> restart the services
# service vsftpd restart

----------------------------------------------------------------------------------------------
4. CONFIGURING USER and SHARED DISK
----------------------------------------------------------------------------------------------
a. Create Oracle user and Directories
You will be using OCFS2 to store the files required to be shared for the Oracle Clusterware
software. When using OCFS2, the UID of the UNIX user oracle and GID of the UNIX
group dba should be identical on all machines in the cluster. If either the UID or GID are
different, the files on the OCFS file system may show up as "unowned" or may even be
owned by a different user.
Execute the following commands on all nodes:
# groupadd -g 501 oinstall
# groupadd -g 502 dba
# groupadd -g 503 oper
# useradd -m -u 501 -g oinstall -G dba -d /home/oracle -s
/bin/bash -c "Oracle Software Owner" oracle
Set the password for the oracle account (change the password on all nodes):

# passwd oracle
Changing password for user oracle.
New UNIX password: oracle
Retype new UNIX password: oracle
passwd: all authentication tokens updated successfully.
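
Because the UID and GID must be identical on every node (as noted above), it is worth confirming
them after creating the user. Run the following on each node and compare the output; it should be
the same everywhere:

# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)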

Create mount points for the cluster files on all nodes:

# mkdir -p /ocr          --> mount point for OCFS2
# mkdir -p /vote         --> mount point for OCFS2
# mkdir -p /oradata      --> mount point for OCFS2

b. Create partitions on the shared disk [FROM ONE NODE ONLY]

Note: If any old partitions exist on the disk, delete them first using the fdisk utility.

FileSystem   Partition    Size     Mount point    Used for
ocfs2        /dev/sdb1    500 MB   /ocr
ocfs2        /dev/sdb2    500 MB   /vote
ocfs2        /dev/sdb3    10 GB    /oradata       all databases
ASM          /dev/sdb5    10 GB    +ORCL_DATA1    Oracle database files

# fdisk /dev/sdb          --> give the name of the detected device [/dev/sdb]

> Type 'p' to print the partitions

Command (m for help): p

Disk /dev/sdb: 40.4 GB, 137438952960 bytes
255 heads, 63 sectors/track, 16709 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
                                                                   > Currently there are no partitions

Command (m for help): n                                            > Type 'n' to create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-16709, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-16709, default 16709): +500M

Create 3 primary partitions and keep the remaining space as an extended partition.

Command (m for help): p
Disk /dev/sdb: 40.4 GB, 137438952960 bytes
255 heads, 63 sectors/track, 16709 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sdb1            1          13      104391   83  Linux
   /dev/sdb2           14          26      104422+  83  Linux
   /dev/sdb3           27          33       56227+  83  Linux
   /dev/sdb4           34       16709   133949970    5  Extended

Command (m for help): n                                            > Type 'n' to create the partitions inside the extended partition
First cylinder (34-16709, default 34):
Using default value 34
Last cylinder or +size or +sizeM or +sizeK (34-16709, default 16709): +40G
Command (m for help): n
First cylinder (47-16709, default 47):
Using default value 47
Last cylinder or +size or +sizeM or +sizeK (47-16709, default 16709): +40G
....
....
Command (m for help): p                                            > Type 'p' to print all partitions

Disk /dev/sdb: 40 GB, 137438952960 bytes
255 heads, 63 sectors/track, 16709 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sdb1            1         123      987966   83  Linux
   /dev/sdb2          124        2556    19543072+  83  Linux
   /dev/sdb3         2557        4989    19543072+  83  Linux
   /dev/sdb4         4990       16709    94140900    5  Extended
   /dev/sdb5         4990        9853    39070048+  83  Linux
   /dev/sdb6         9854       14717    39070048+  83  Linux

Command (m for help): w                                            > Type 'w' to save and quit

The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

After creating all the required partitions, you should now update the kernel with the partition
changes using the following command as the root user account:

[root@rac1 ~]# partprobe                          > issue this command from all nodes

[root@rac1 root]# fdisk -l /dev/sdb               > to check the updated list of partitions on the remaining nodes

Disk /dev/sdb: 40.4 GB, 137438952960 bytes
255 heads, 63 sectors/track, 16709 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sdb1            1         123      987966   83  Linux
   /dev/sdb2          124        2556    19543072+  83  Linux
   /dev/sdb3         2557        4989    19543072+  83  Linux
   /dev/sdb4         4990       16709    94140900    5  Extended
   /dev/sdb5         4990        9853    39070048+  83  Linux
   /dev/sdb6         9854       14717    39070048+  83  Linux

c. Setting shell limits for the oracle user

To improve the performance of the software on Linux systems, Oracle recommends that you
increase the following shell limits for the oracle user:

Maximum number of open file descriptors                  -> nofile  -> 65536 (hard limit)
Maximum number of processes available to a single user   -> nproc   -> 16384 (hard limit)

Execute the following from all nodes:

[root@rac1 ~]# cat >> /etc/security/limits.conf <<EOF


oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
cat >> /etc/pam.d/login <<EOF
session required /lib/security/pam_limits.so
EOF
For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by
running the following command:

cat >> /etc/profile <<EOF


if [ \$USER = "oracle" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
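
To confirm that the new limits are picked up by a fresh oracle login shell (a quick optional check;
the expected values come from the limits.conf entries above):

[root@rac1 ~]# su - oracle -c "ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu"
1024
65536
2047
16384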
d. Setting the correct date and time
During the installation of Oracle Clusterware, the Database, and the Companion CD, the
Oracle Universal Installer (OUI) first installs the software to the local node running the
installer (i.e. rac1). The software is then copied remotely to all of the remaining nodes in
the cluster (i.e. rac2). During the remote copy process, the OUI executes the UNIX
"tar" command on each of the remote nodes to extract the files that were archived and
copied over. If the date and time on the node performing the install is greater than that of the
node it is copying to, the OUI will throw an error from the "tar" command indicating it is
attempting to extract files stamped with a time in the future.

Ensure that the date and time of all nodes are the same (unless you are using Network Time).
To set the date and time now, you can execute the following commands:

rac1# date -s "9/13/2010 23:00:00"
rac2# date -s "9/13/2010 23:00:30"        > node2 is set slightly ahead of node1 for safety
e. Configuring hangcheck-timer
Starting with Oracle9i Release 2 (9.2.0.2), the watchdog daemon has been deprecated in favor of a
Linux kernel module named hangcheck-timer, which addresses availability and
reliability problems much better. The hangcheck-timer module is loaded into the Linux kernel
and checks whether the system hangs. It sets a timer and checks the timer after a certain amount
of time. There is a configurable threshold that, if exceeded, will reboot the
machine.
The hangcheck-timer was normally shipped only by Oracle; however, this module is now
included with Red Hat Linux AS starting with kernel versions 2.4.9-e.12 and higher.

[root@rac1 ~]# find /lib/modules -name "hangcheck-timer.ko"      --> check the module presence
/lib/modules/2.6.9-22.EL/kernel/drivers/char/hangcheck-timer.ko


hangcheck_tick: This parameter defines the period of time between checks of system
health. The default value is 60 seconds; Oracle recommends setting it to 30 seconds.

hangcheck_margin: This parameter defines the maximum hang delay that should
be tolerated before hangcheck-timer resets the RAC node. It defines the margin of
error in seconds. The default value is 180 seconds; Oracle recommends setting it to
180 seconds.

Set the hangcheck-timer settings in /etc/modprobe.conf (all nodes):

[root@rac1 ~]# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modprobe.conf

Add the hangcheck-timer module in /etc/rc.local so it is probed at every startup:

[root@rac1 ~]# vi /etc/rc.local


/sbin/modprobe hangcheck-timer

To test the hangcheck-timer module manually (before reboot):


[root@rac1 ~]# modprobe hangcheck-timer
[root@rac1 ~]# grep Hangcheck /var/log/messages | tail -2

May 29 11:40:35 rac1 kernel: Hangcheck: starting hangcheck timer 0.5.0 (tick is 30
seconds, margin is 180 seconds).
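
You can also confirm that the module options were written correctly, so they will be applied the
next time the module is loaded (optional check; the expected line is the one appended above):

[root@rac1 ~]# grep hangcheck-timer /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180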
f. Configure racnodes for remote access:
Before you can install Oracle RAC 10g, you must configure secure shell (SSH) for the UNIX user account you
plan to use to install Oracle Clusterware 10g and the Oracle Database 10g software. The installation and
configuration tasks described in this section will need to be performed on both Oracle RAC nodes. As
configured earlier in this article, the software owner for Oracle Clusterware 10g and the Oracle Database 10g
software will be "oracle".
The goal here is to setup user equivalence for the oracle UNIX user account. User equivalence enables the
oracle UNIX user account to access all other nodes in the cluster (running commands and copying files)
without the need for a password. Oracle added support in 10g Release 1 for using the SSH tool suite for setting
up user equivalence. Before Oracle Database 10g, user equivalence had to be configured using remote shell
(RSH).
Note: The first step in configuring SSH is to create an RSA public/private key pair on both Oracle RAC nodes
in the cluster. The command to do this will create a public and private key for RSA (for a total of two keys per
node). The content of the RSA public keys will then need to be copied into an authorized key file which is then
distributed to both Oracle RAC nodes in the cluster.

Perform all of the steps below (1 to 3) on all the nodes.
1. Log on as the oracle UNIX user account.

# su - oracle
2. If necessary, create the .ssh directory in the oracle user's home directory and set the correct
permissions to ensure that only the oracle user has read and write permissions:

$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
3. Enter the following command to generate an RSA key pair (public and private key) for the SSH protocol:

$ /usr/bin/ssh-keygen -t rsa
At the prompts:
Accept the default location for the key files (press [ENTER]).

Perform the following steps from the first node only.
First, determine if an authorized key file already exists on the node (~/.ssh/authorized_keys). In most
cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist,
create it now:

4.$ touch ~/.ssh/authorized_keys


5.$ cd ~/.ssh
6.$ ls -l *.pub
-rw-r--r-- 1 oracle oinstall 395 Jul 30 18:51 id_rsa.pub
The listing above should show the id_rsa.pub public key created in the previous section.
7. In this step, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the content of the
~/.ssh/id_rsa.pub public key from both Oracle RAC nodes in the cluster to the authorized key file
just created (~/.ssh/authorized_keys). Again, this will be done from rac1. You will be
prompted for the oracle UNIX user account password for both Oracle RAC nodes accessed.
The following example is being run from rac1 and assumes a two-node cluster, with nodes rac1
and rac2:

$ ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys


The authenticity of host 'rac1 (192.168.10.1)' can't be established.
RSA key fingerprint is 46:0f:1b:ac:a6:9d:86:4d:38:45:85:76:ad:3b:e7:c9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.10.1' (RSA) to the list of
known hosts.
oracle@rac1's password: xxxxx

$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

The authenticity of host 'rac2 (192.168.10.2)' can't be established.
RSA key fingerprint is eb:39:98:a7:d9:61:02:a2:80:3b:75:71:70:42:d0:07.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.10.2' (RSA) to the list of
known hosts.
oracle@rac2's password: xxxxx

$ scp ~/.ssh/authorized_keys rac2:.ssh/authorized_keys

oracle@rac2's password: xxxxx

Change the permission of the authorized key file for both Oracle RAC nodes in the cluster by
logging into the node and running the following:

$ chmod 600 ~/.ssh/authorized_keys


$ ssh rac2
$ ssh rac1

Now ssh should no longer prompt for a password between the nodes.
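
A simple way to confirm that user equivalence is working is to run a remote command against every
node name without being prompted for a password (a minimal check; run it as oracle from each node,
after accepting any first-time host key prompts):

$ for host in rac1 rac2 rac1-priv rac2-priv; do ssh $host date; done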

----------------------------------------------------------------------------------------------
5. CONFIGURING OCFS2 (Oracle Clustered File System)
----------------------------------------------------------------------------------------------
Install and Configure OCFS2

In previous editions of this article, this would be the time where you would need to download the OCFS2
software from http://oss.oracle.com/. This is no longer necessary since the OCFS2 software is included with
Oracle Enterprise Linux. The OCFS2 software stack includes the following packages:

Install the OCFS2 rpms (all nodes):
[root@rac1 ocfs]# ls
ocfs2-2.6.9-78.EL-1.2.3-1.i686.rpm
ocfs2-tools-1.2.2-1.i386.rpm
ocfs2console-1.2.2-1.i386.rpm

[root@rac1 ocfs]# rpm -ivh *.rpm
Preparing...              ########################################### [100%]
   1:ocfs2-tools          ########################################### [ 33%]
   2:ocfs2-2.6.9-78.EL    ########################################### [ 67%]
   3:ocfs2console         ########################################### [100%]

Configure the cluster nodes for OCFS2 (on all nodes):
[root@rac1 ~]# ocfs2console &
Cluster > Configure Nodes > Node Configuration window > Add > Enter the hostname
and IP address of all nodes (keeping the port number unchanged) > Apply > Close > File > Quit

[root@rac1 ~]# cat /etc/ocfs2/cluster.conf        --> to verify the configuration
Understanding O2CB Service:
Before we can do anything with OCFS2 like formatting or mounting the file system,
we need to first have OCFS2's cluster stack, O2CB, running (which it will be as a
result of the configuration process performed above). The stack includes the
following services:
NM: Node Manager that keeps track of all the nodes in cluster.conf
HB: Heartbeat service that issues up/down notifications when nodes join or
leave the cluster
TCP: Handles communication between the nodes
DLM: Distributed lock manager that keeps track of all locks, their owners and
status
CONFIGFS: User space driven configuration file system mounted at /config
DLMFS: User space interface to the kernel space DLM
All of the above cluster services have been packaged in the o2cb system service
(/etc/init.d/o2cb)
You can use the following commands to manage the o2cb services:

/etc/init.d/o2cb status             > check the status
/etc/init.d/o2cb load               > load all OCFS2 modules
/etc/init.d/o2cb online ocfs2       > online the cluster we created: ocfs2
/etc/init.d/o2cb offline ocfs2      > offline the cluster we created: ocfs2
/etc/init.d/o2cb unload             > unload all OCFS2 modules

After installing the OCFS2 rpms, check the OCFS2 file system status:

/etc/init.d/o2cb status

Module "configfs": Loaded


Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

Oracle recommends setting the heartbeat dead threshold to 61 seconds.

1. First, take the ocfs2 service offline and unload the modules
# /etc/init.d/o2cb offline
Stopping O2CB cluster ocfs2: OK
The above command will offline the cluster we created, ocfs2.
# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
The above command will unload all OCFS2 modules.
Configure O2CB to Start on Boot and Adjust O2CB Heartbeat Threshold
You now need to configure the on-boot properties of the O2CB driver so that the cluster stack services will start
on each boot. You will also be adjusting the OCFS2 Heartbeat Threshold from its default setting of 31 to 61.
Perform the following on both Oracle RAC nodes in the cluster:
# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting

<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y


Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 30000
Specify network keepalive delay in ms (>=1000) [2000]: 2000
Specify network reconnect delay in ms (>=2000) [2000]: 2000
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Format the drives with the OCFS2 file system (from only ONE node):
Note: OCFS2 supports formatting and mounting the file system using either the GUI tool ocfs2console or
the command-line tool mkfs.ocfs2. From the ocfs2console utility, use the menu [Tasks] - [Format].
The instructions below demonstrate how to create the OCFS2 file system using the command-line tool
mkfs.ocfs2.

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocr /dev/sdb1
where -b > block size
      -C > cluster size
      -N > number of node slots
      -L > label
Note: You need to format every disk that will use the OCFS2 file system:

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L vote /dev/sdb2
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradata /dev/sdb3
Mount the OCFS2 file system
Now that the file systems are created, we can mount them. Let's first do it using the command line, then I'll show how
to include them in /etc/fstab to have them mounted on each boot.

# mount -t ocfs2 -o datavolume,nointr -L "ocr" /ocr
# mount -t ocfs2 -o datavolume,nointr -L "vote" /vote
# mount -t ocfs2 -o datavolume,nointr -L "oradata" /oradata

Add entries in /etc/fstab for automatic mounting at startup (all nodes):

/dev/sdb1    /ocr        ocfs2    _netdev,datavolume,nointr    0 0
/dev/sdb2    /vote       ocfs2    _netdev,datavolume,nointr    0 0
/dev/sdb3    /oradata    ocfs2    _netdev,datavolume,nointr    0 0
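
Before continuing, you can double-check that all three volumes are visible and mounted as OCFS2
(an optional check; mounted.ocfs2 comes from the ocfs2-tools package installed earlier):

# mount -t ocfs2            --> lists the currently mounted OCFS2 file systems and their options
# mounted.ocfs2 -d          --> lists the OCFS2 volumes detected on the shared disk with their labels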

Check permissions on the new file systems (all nodes):

# ls -ld /ocr /vote /oradata
drwxr-xr-x 3 root root 4096 May 25 21:29 /ocr

Change ownership to the Oracle user (all nodes):

# chown oracle:oinstall /ocr /vote /oradata
# chmod 775 /ocr /vote /oradata
# ls -ld /ocr /vote /oradata
drwxrwxr-x 3 oracle dba 4096 May 25 21:29 /ocr
drwxrwxr-x 3 oracle dba 4096 May 25 21:29 /vote
drwxrwxr-x 3 oracle dba 4096 May 25 21:29 /oradata

----------------------------------------------------------------------------------------------------------------
6. INSTALL AND CONFIGURE ORACLE CLUSTERWARE
----------------------------------------------------------------------------------------------------------------
Log out from the root user (GUI) and log in as the oracle user.
a. Download and extract the Oracle 10g Release 2 Clusterware (on the primary node only)
[oracle10g@rac1 10gsw]$ unzip 10201_clusterware_linux32.zip
[oracle10g@rac1 10gsw]$ ls
clusterware
[oracle10g@rac1 10gsw]$ cd clusterware/
[oracle10g@rac1 clusterware]$ ls
cluvfy doc install response rpm runInstaller stage upgrade welcome.html
b. Verify the cluster prerequisites with cluvfy
Verify the node connectivity (from only ONE node):
[oracle10g@rac1 cluvfy]$ ./runcluvfy.sh comp nodecon -n rac1,rac2 -verbose
Verify the access to shared storage (from only ONE node):

[oracle10g@rac1 cluvfy]$ ./runcluvfy.sh comp ssa -n rac1,rac2 -verbose

Verify the prerequisites for the CRS installation (from only ONE node):
[oracle10g@rac1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
If all the above verifications complete successfully, then you can proceed with the CRS
installation.
c. Invoke the Oracle Universal Installer (from only ONE node)
[oracle10g@rac1 clusterware]$ ./runInstaller
1. Click Next
2. Choose the path for oraInventory: /home/oracle/oraInventory; choose the Oracle group
   as oinstall
3. Home Name: OraCrs10g_home
   Path: /crs/app/product/10.2.0/crs
4. Verify the requirements and click Next
5. Specify the cluster configuration details:
   Cluster Name: crs
   Cluster nodes: rac1   rac1-priv   rac1-vip
                  rac2   rac2-priv   rac2-vip
6. Verify the network interface usage:
   eth0   192.168.10.0   Public
   eth1   10.10.0.0      Private
7. Specify the Oracle Cluster Registry Location (select external redundancy only):
   Location: /ocr/ocr1.dbf
8. Specify the Voting Disk Location (select external redundancy only):
   Location: /vote/vote1.dbf
9. Click Install to start the installation
10. Execute Configuration Scripts:
    Execute orainstRoot.sh on all nodes as the root user only
    [root@rac1 ~]# /home/oracle/oraInventory/orainstRoot.sh
    Changing permissions of /home/oracle/oraInventory to 770.
    Changing groupname of /home/oracle/oraInventory to oinstall.
    The execution of the script is complete

    Execute root.sh on all nodes as the root user [do not execute simultaneously on all nodes]
    [root@rac1 ~]# /crs/app/oracle/crs/root.sh

While executing root.sh on any of the remote nodes, if you get a message that eth0 is not
public or any similar error, you need to execute VIPCA (Virtual IP Configuration Assistant)
manually.
Running vipca manually:

[root@rac2 ~]# sh /crs/app/oracle/crs/bin/vipca
Enter the proper IP address of your VIP and its alias names, then click Finish
to complete the configuration.
You can verify the pinging of the Virtual IP addresses now:
[oracle@rac2 ~]$ ping rac1-vip
[oracle@rac1 ~]$ ping rac2-vip
Return to the Execute Configuration Scripts screen and click OK.
11. Once the configuration scripts run successfully, click Exit to exit the installation.
d. Post-installation verification (all nodes)
List the cluster nodes:
[oracle@rac1 ~]$ /crs/app/product/crs/bin/olsnodes -n
rac1    1
rac2    2
Check the Oracle cluster auto-startup scripts:
[oracle@rac1 ~]$ ls -l /etc/init.d/init.*
-r-xr-xr-x  1 root root  1951 May 29 21:30 /etc/init.d/init.crs
-r-xr-xr-x  1 root root  4716 May 29 21:30 /etc/init.d/init.crsd
-r-xr-xr-x  1 root root 35396 May 29 21:30 /etc/init.d/init.cssd
-r-xr-xr-x  1 root root  3192 May 29 21:30 /etc/init.d/init.evmd
Check the cluster ready services:
[oracle@rac1 ~]$ ps -ef | grep crs
Check the cluster synchronization services:
[oracle@rac1 ~]$ ps -ef | grep css
Check the pinging of the Virtual IPs:
[oracle@rac1 ~]$ ping rac1-vip
[oracle@rac2 ~]$ ping rac2-vip

----------------------------------------------------------------------------------------------
7. INSTALL ORACLE DATABASE SOFTWARE
----------------------------------------------------------------------------------------------

Verify the prerequisites for the RDBMS installation (from only ONE node):
[oracle@rac1 ~]$ cd clusterware/cluvfy
[oracle@rac1 cluvfy]$ ./runcluvfy.sh stage -pre dbinst -n rac1,rac2 -verbose

If all the above verifications complete successfully, then you can proceed with the database
software installation.
a. Download and extract the Oracle 10g Release 2 Database Software (on ONE node only)
[oracle10g@rac1 10gRAC]$ unzip Ora10gSetup.zip
[oracle10g@rac1 10gRAC]$ cd database/
[oracle10g@rac1 database]$ ls
doc install response runInstaller stage welcome.html
b. Invoke the Oracle Universal Installer (on ONE node only)
[oracle10g@rac1 database]$ ./runInstaller
1. You can verify the clusterware installation by clicking Installed Products. Click Next
2. Choose Enterprise Edition
3. Choose Home details:
   Name: OraDb10g_home1
   Path: /oracle10g/oracle/product/10.2.0/db_1
4. Click Select All to install on all clustered nodes
5. Verify the requirements and click Next
6. Choose Install database Software only
7. Click Install to start the installation
8. Execute Configuration Scripts:
   Execute root.sh on all nodes (one at a time) as the root user only
   [root@rac1 ~]# /oracle/product/10.2.0/db_1/root.sh
   Once the scripts are run successfully, return to the Execute Configuration Scripts window
   and click OK
9. Click Exit to exit the installation

c. Set the Oracle environment
Edit the .bash_profile of the oracle user (all nodes):
export ORACLE_BASE=/oracle/product/
export ORACLE_HOME=/oracle/product/10.2.0/db_1
export ORA_CRS_HOME=/crs/oracle/product/10.2.0/crs
export ORACLE_SID=orcl1                 --> change the SID on the other nodes
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORA_CRS_HOME/lib

Execute the .bash_profile (all nodes):
[oracle@rac1 ~]$ . .bash_profile
Verify the environment:
[oracle@rac1 ~]$ echo $ORACLE_HOME
/oracle/product/10.2.0/db_1

----------------------------------------------------------------------------------------------
9. CREATE ORACLE DATABASE
----------------------------------------------------------------------------------------------
a. Configure Listeners on all nodes (they will be required for creating the database with
DBCA)
Use netca to configure the listeners and start them (from only ONE node)
[oracle10g@rac1 ~]$ netca
1. Choose RAC option
2. Choose Listener configuration
3. Choose Add
4. Add the name, protocol, and port details
5. Wait for listener to start in all nodes
If listener does not start, you can manually start the listeners by using LSNRCTL
command line utility
6. Click finish to exit

b. Create database on the OCFS2 file system using DBCA

Invoke the Database Configuration Assistant (DBCA) - from only ONE node
[oracle10g@rac1 ~]$ dbca
1. Choose RAC Options
2. Choose Create database
3. Click Select All to select all the nodes


4. Choose the type of database General Purpose
5. Specify the Global Database Name as orcl
6. Choose Configure database with enterprise manager
7. Specify the passwords for user accounts
8. Choose Cluster File System as the storage mechanism
9. Select the OCFS2 file system (eg: /oradata) where the database has to be created
10. Choose Use Oracle Managed Files and edit the OCFS file system (/oradata)
name if you want
11. Choose whether to use Flash Recovery Area and Archiving to your database
12. Select whether to create sample schemas
13. Review the services
14. Choose Automatic or custom for memory management. Ensure that you have
enough space for shared pool.
15. Review the files
16. Click Finish
17. Review the Initialization variables and click OK to start the database creation
18. Please wait until the database is created successfully
19. A summary is shown at the end for your information. Click exit
(you can note the SID, SPFile path and OEM address)
20. The database gets restarted after clicking exit button
You can visit OEM Home page by using the address
(Like: http://rac1:1158/em)

----------------------------------------------------------------------------------------------
8. CONFIGURING DISKS FOR ASM
----------------------------------------------------------------------------------------------
a. Configure disks for ASM with standard Linux I/O
If you want to use ASM for keeping the database files, install the ASM rpms:

# rpm -ivh oracleasm*.rpm

Edit /etc/sysconfig/rawdevices to map the devices to Linux raw devices (all nodes):
[root@rac1 ~]# vi /etc/sysconfig/rawdevices
/dev/raw/raw4 /dev/sdb5
--> /dev/sdb1, /dev/sdb2 and /dev/sdb3 are not mapped as they are used for OCFS2
--> /dev/sdb4 is not mapped as it is the extended partition

Restart the rawdevices service (all nodes):


[root@rac1 ~]# service rawdevices restart
Change permissions to raw devices, so that the Oracle user has read and write access
(all nodes):
[root@rac1 ~]# chown oracle:dba /dev/raw/raw4; chmod 600 /dev/raw/raw4

Edit /etc/rc.local to add permission details to be assigned at every startup (all nodes):
[root@rac1 ~]# vi /etc/rc.local
chown oracle:dba /dev/raw/raw4; chmod 600 /dev/raw/raw4
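
You can verify the raw device binding and its permissions before configuring ASM (a quick optional
check; raw4 and /dev/sdb5 follow the mapping used above):

[root@rac1 ~]# raw -qa                     --> shows all current raw device bindings
[root@rac1 ~]# ls -l /dev/raw/raw4         --> should show oracle:dba ownership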

b. Configure ASM instance (using DBCA)


Invoke Database Configuration Assistant (DBCA) - from only ONE node
[oracle@rac1 ~]$ dbca
1. Choose RAC Options
2. Choose Configure Automatic Storage Management
3. Click Select All to select all the nodes to be configured
4. Specify the password for SYS user of ASM instance.
Choose Spfile for creating parameter file. Specify the location of OCFS2 file
system:
/oradata/spfile+ASM.ora
5. Click Ok to create ASM instance
6. Initially, you'll not have any diskgroups created. Click Create New to create
diskgroups.
7. Give the diskgroup name, select the disk paths required, specify the failure group name,
and click OK
8. Please wait until the disk group is created.
9. Now, you can see the list of diskgroups created.
10. Similarly, you can create many diskgroups with the existing disks
11. Finally, click finish to exit ASM configuration
You can verify the asm instance (all nodes):
[oracle10g@rac1 admin]$ ps -ef | grep asm
You can login to asm instance:
[oracle@rac1 admin]$ export ORACLE_SID=+ASM1
[oracle@rac1 admin]$ sqlplus
SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 30 09:47:57 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: /as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> select name from v$asm_diskgroup;
SQL> select group_number, disk_number, mount_status, header_status, state, name,
failgroup from v$asm_disk;
SQL> select group_number, name, state, free_mb, offline_disks from v$asm_diskgroup;
c. Create database using DBCA
Invoke Database Configuration Assistant (DBCA) - from only ONE node

[oracle10g@rac1 ~]$ dbca


1. Choose RAC Options
2. Choose Create database
3. Click Select All to select all the nodes
4. Choose the type of database General Purpose
5. Specify the Global Database Name as orcl
6. Choose Configure database with enterprise manager
7. Specify the passwords for user accounts
8. Choose Automatic storage management
9. Select the Disk groups where the database has to be created
10. Choose Use Oracle Managed Files and edit the diskgroup name if you want
11. Choose whether to use Flash Recovery Area and Archiving to your database
12. Select whether to create sample schemas
13. Review the services
14. Choose Automatic or custom for memory management. Ensure that you have
enough space for the shared pool.
15. Review the files
16. Click Finish
17. Review the Initialization variables and click OK to start the database creation
18. Please wait until the database is created successfully
19. A summary is shown at the end for your information. Click exit
(you can note the SID, SPFile path and OEM address)
20. The database gets restarted after clicking exit button
You can visit OEM Home page by using the address
(Like: http://rac1:1158/em)

------------------------------------------------------------------------------------------------------
10. COMMAND LINE UTILITIES
------------------------------------------------------------------------------------------------------
Using command line utilities:
Manual configuration of OEM dbconsole:
1. Ensure that you have listeners and password files created for your database and asm
instances in all instances
2. Configure the repository for Enterprise Manager

$ emca -repos create


3. Configure the EM dbconsole
$ emca -config dbcontrol db
4. For any help with the syntax, you can use the following command:
$ emca help=y

5. Start EM DBConsole during next startup:


$ emctl start dbconsole
Manual configuration of database services with srvctl:
Add ASM details:

$ srvctl add asm -n rac1 -i +ASM1 -o $ORACLE_HOME
$ srvctl add asm -n rac2 -i +ASM2 -o $ORACLE_HOME
$ srvctl enable asm -n rac1
$ srvctl enable asm -n rac2
Add database details:
$ srvctl add database -d orcl -o /oracle/product/10.2.0/db_1
$ srvctl add instance -d orcl -i orcl1 -n rac1
$ srvctl add instance -d orcl -i orcl2 -n rac2
Check the configuration:
$ srvctl config database -d orcl
Start or stop the database:
$ srvctl { start | stop } database -d orcl [ -o normal ]
Start or stop an instance:
$ srvctl { start | stop } instance -d orcl -i orcl1
Check the configuration in the OCR:
$ crs_stat -t
Follow these steps (manually) to shut down your servers (a scripted version of the same sequence is
sketched below):

$ emctl stop dbconsole                    --> to stop the Database Console
$ srvctl stop database -d orcl            --> to stop the database on all instances
$ srvctl status database -d orcl          --> to check the status
$ srvctl stop asm -n rac1                 --> to stop ASM on each node
$ srvctl stop asm -n rac2                 --> to stop ASM on each node
$ srvctl stop nodeapps -n rac1            --> to stop the other utilities on each node
# crsctl stop crs                         --> to stop cluster ready services
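
The same sequence can be kept in a small script so the shutdown is always run in the right order
(a sketch only, using the database, instance and node names from this guide; adjust them for your
environment):

#!/bin/bash
# stop_rac.sh - run as the oracle user on rac1
emctl stop dbconsole                  # stop the Enterprise Manager DB Console
srvctl stop database -d orcl          # stop the database on all instances
srvctl stop asm -n rac1               # stop ASM on each node
srvctl stop asm -n rac2
srvctl stop nodeapps -n rac1          # stop VIP, GSD, ONS and listener on each node
srvctl stop nodeapps -n rac2
# finally, as root on every node:
# crsctl stop crs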

Checklist at next startup:


Switch on the shared disks
Switch on the Servers
Check for CSS and CRS Processes (all nodes)

[oracle@rac1 ~]$ crsctl check crs


Check the Pinging between nodes
[oracle@rac2 ~]$ ping rac1
[oracle@rac2 ~]$ ping rac1-priv
[oracle@rac2 ~]$ ping rac1-vip
Check whether OCFS is mounted
[oracle@rac2 ~]$ mount | grep /oradata
Start the Node applications in all nodes
[oracle@rac1 ~]$ srvctl status nodeapps -n rac1
VIP is running on node: rac1
GSD is not running on node: rac1
Listener is not running on node: rac1
ONS daemon is running on node: rac1
[oracle@rac1 ~]$ srvctl start nodeapps -n rac1
[oracle@rac1 ~]$ srvctl start nodeapps -n rac2

Start the ASM instance in all nodes


[oracle@rac1 ~]$ srvctl start asm -n rac1
[oracle@rac1 ~]$ srvctl start asm -n rac2

Start the Database instance from one node


[oracle@rac1 ~]$ srvctl start database -d orcl

Start the Enterprise Manager DB Console on all nodes


[oracle@rac1 ~]$ emctl start dbconsole
[oracle@rac2 ~]$ emctl start dbconsole

Configure Network interfaces:

$ oifcfg getif -node rac1
$ oifcfg getif -global
$ oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public>
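
For example, with the interface names and subnets used earlier in this guide (eth0 public on
192.168.10.0, eth1 private on 10.10.0.0), the registration would look like the sketch below; confirm
your own interface names with oifcfg iflist first:

$ oifcfg iflist                                          --> lists the interfaces visible to the cluster
$ oifcfg setif -global eth0/192.168.10.0:public
$ oifcfg setif -global eth1/10.10.0.0:cluster_interconnect
$ oifcfg getif -global                                   --> verify the stored configuration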
Note on parameter files:
Parameter types:
Identical across instances:
    db_name
    compatible
    cluster_database
    control_files
    db_block_size
Unique across instances:
    instance_name
    instance_number
    rollback_segments
    thread
    undo_tablespace
Multi-valued parameters:
    fast_start_mttr_target
    instance_groups (for parallel query operations)
SPFILE features:
    Oracle recommends using an spfile
    Easier to use and manage
    Single, central location
    Parameters are applicable to all instances
    It permits dynamic changes
    Persistent
    Can specify common values and instance-specific values:
    *.open_cursors=300
    racins1.open_cursors=200
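
With an spfile, these values can also be changed online from SQL*Plus. A minimal sketch of setting a
common value and an instance-specific value (the instance name racins1 follows the example above;
substitute your own instance name, e.g. orcl1):

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET open_cursors=300 SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET open_cursors=200 SCOPE=BOTH SID='racins1';
SQL> SHOW PARAMETER open_cursors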
