
RAC Database High Level Steps

Sr. No.   High Level Steps for RAC Setup
1 Verify that all IPs and hostnames are properly configured in the /etc/hosts file and in the ifconfig -a output.

2 Verify the /etc/hosts entries for the public, private, and VIP hostnames (a sample layout is sketched below).
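
A minimal sketch of the /etc/hosts layout these two checks expect; every hostname and IP address below is a placeholder, not a value taken from this environment:

# public
192.168.10.11   racnode1.example.com   racnode1
192.168.10.12   racnode2.example.com   racnode2
# private interconnect
10.10.10.11     racnode1-priv
10.10.10.12     racnode2-priv
# virtual IPs
192.168.10.21   racnode1-vip
192.168.10.22   racnode2-vip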

3 Verify that the SCAN name provided by the sysadmin resolves as expected in a round-robin fashion (see the check below).
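
One way to confirm round-robin resolution; the SCAN name here is a placeholder for the one supplied by the sysadmin:

# repeat a few times; the order of the (typically three) SCAN IPs should rotate on each run
nslookup rac-scan.example.com
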
4 Verify that the physical memory and swap space are configured as required by Oracle RAC standards (a quick check is sketched below).
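
A quick check of the current memory and swap sizing, to be compared against the Oracle RAC minimums for this server class:

free -g                          # physical memory and swap in GB
grep SwapTotal /proc/meminfo     # swap in KB
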
5 Verify that the storage requirements are properly met and configured correctly, as communicated earlier by mail.
6 Set the below-mentioned kernel parameters on the server as per the Oracle RAC requirements (see the sketch after this list).
a) SHARED MEMORY
b) SEMAPHORES
c) FILE HANDLES
d) LOCAL IP RANGE
e) HUGE PAGES
f) NETWORKING
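
A minimal sketch of the corresponding /etc/sysctl.conf entries, using the generic 11gR2 starting values from Oracle's documentation; the shared-memory and huge-page values in particular depend on the installed RAM and SGA sizing and are placeholders here:

# shared memory
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 68719476736          # placeholder (64 GB); size to at least half of the installed RAM
# semaphores
kernel.sem = 250 32000 100 128
# file handles
fs.file-max = 6815744
fs.aio-max-nr = 1048576
# local IP port range
net.ipv4.ip_local_port_range = 9000 65500
# networking buffers
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
# huge pages (number of 2 MB pages, sized to cover the SGAs; placeholder value)
vm.nr_hugepages = 2048

# reload after editing
sysctl -p
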
7 Configure shell limits (see the sketch below).
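
A minimal sketch of the usual /etc/security/limits.conf entries for the oracle user; these are the generic 11gR2 starting values, not necessarily the exact ones used on these servers:

oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   1024
oracle   hard   nofile   65536
oracle   soft   stack    10240
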
8 Restart both servers to check that they come back online without any issues after all the above changes.
9 Verify that a separate /u01 mount point is provided on both nodes and is local to each node. Create the below directories as a prerequisite for the Oracle Grid software installation (see the sketch after this list):
a) Oracle Base directory /u01/app/oracle/
b) Oracle Inventory directory
c) Oracle Grid Infrastructure home directory /u01/app/11.2.0.4/grid/
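
A minimal sketch of the directory creation on both nodes; ownership follows the single oracle/oinstall owner created in step 14, and the oraInventory path is an assumption:

mkdir -p /u01/app/oracle
mkdir -p /u01/app/oraInventory
mkdir -p /u01/app/11.2.0.4/grid
chown -R oracle:oinstall /u01/app
chmod -R 775 /u01/app
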
10 Verify whether any required RPM packages are missing.

11 Verify that SELinux is enabled.

12 Verify that NTP is disabled (checks for steps 11 and 12 are sketched below).
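
Quick checks for the SELinux and NTP states; on this EL7 build chronyd may be present instead of ntpd, so both are checked:

getenforce                         # current SELinux mode
sestatus                           # SELinux status detail
systemctl status ntpd chronyd      # confirm no time-sync daemon is running if NTP must stay disabled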


13 Install the ASM libraries (see the sketch after this list):
a) kmod-oracleasm.x86_64 0:2.0.8-15.el7
b) oracleasmlib.x86_64 0:2.0.12-1.el7
c) oracleasm-support.x86_64 0:2.1.8-3.el7
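
A minimal installation sketch; kmod-oracleasm and oracleasm-support typically come from the configured yum repositories, while oracleasmlib is usually downloaded separately from Oracle, so the local RPM path below is an assumption:

yum install kmod-oracleasm oracleasm-support
rpm -ivh oracleasmlib-2.0.12-1.el7.x86_64.rpm
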
14 Verify that the following users and groups are created on both nodes, and make sure the group IDs and user IDs are the same on both nodes.

Add Groups
==========
/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 503 oper
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/groupadd -g 507 asmoper

Add User
========
cat /etc/passwd
/usr/sbin/useradd -u 601 -c "Oracle Grid-RDBMS Owner" -g oinstall -G dba,oper,asmadmin,asmdba,asmoper -d /home/oracle -s /bin/ksh oracle
15 Verify that passwordless SSH is configured and enabled between both nodes for the oracle user, and disable any login banners (see the sketch below).
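
A minimal sketch of the passwordless SSH setup as the oracle user; the node names are placeholders, and the OUI/cluvfy SSH setup scripts can be used instead:

ssh-keygen -t rsa                      # on each node, accept defaults, no passphrase
ssh-copy-id oracle@racnode1
ssh-copy-id oracle@racnode2
ssh racnode1 date; ssh racnode2 date   # must return the date with no prompt and no banner text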

16 Configure the ASM libraries (oracleasm; see the sketch below).
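
A minimal sketch of the ASMLib driver configuration, run as root on both nodes; the owner/group answers (oracle/asmadmin, matching the groups from step 14) are assumptions:

/usr/sbin/oracleasm configure -i   # interface owner: oracle, group: asmadmin, start on boot: y, scan on boot: y
/usr/sbin/oracleasm init           # load the module and mount the /dev/oracleasm filesystem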


17 Create and configure (/etc/sysconfig/oracleasm) the ASM disks listed below (see the sketch after this list).
a) OCRVOTE1
b) OCRVOTE2
c) OCRVOTE3
d) DATADG_DISK01
e) FRADG_DISK01
f) REDODG_DISK01
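
A minimal disk-labelling sketch, run as root on the first node; the /dev/sd*1 partition names are placeholders for the LUNs presented in step 5:

/usr/sbin/oracleasm createdisk OCRVOTE1 /dev/sdb1
/usr/sbin/oracleasm createdisk OCRVOTE2 /dev/sdc1
/usr/sbin/oracleasm createdisk OCRVOTE3 /dev/sdd1
/usr/sbin/oracleasm createdisk DATADG_DISK01 /dev/sde1
/usr/sbin/oracleasm createdisk FRADG_DISK01 /dev/sdf1
/usr/sbin/oracleasm createdisk REDODG_DISK01 /dev/sdg1
/usr/sbin/oracleasm scandisks      # on the second node, to pick up the new labels
/usr/sbin/oracleasm listdisks      # should list all six disks on both nodes
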
18 Upload the Grid and Database software to the server (node one).
19 Install the CVU RPM on both nodes as the root user (see the sketch below):
a) cvuqdisk-1.0.9-1.rpm
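
A minimal sketch, assuming the RPM is taken from the rpm/ directory of the unzipped Grid software:

export CVUQDISK_GRP=oinstall
rpm -ivh cvuqdisk-1.0.9-1.rpm
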
20 After downloading the Oracle Database/RAC software, and before attempting any installation, download Patch 19404309 from My Oracle Support and apply the patch using the instructions in the patch README.

21 Run CLUVFY for system readiness and have any reported issues fixed by the sysadmin (see the sketch below).
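
A minimal sketch of the pre-Grid-install CLUVFY run as the oracle user; the staging path and node names are placeholders:

/u01/stage/grid/runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose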

22 Install RAC Grid Software


23 Apply the below Oracle bug patches for Grid and RDBMS respectively (see the sketch after this list).
1. 18370031
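
A minimal sketch of one patch application; the staging path is a placeholder, and the actual apply method (opatch apply vs. opatch auto as root) must follow the patch README:

export ORACLE_HOME=/u01/app/11.2.0.4/grid
cd /u01/stage/18370031
$ORACLE_HOME/OPatch/opatch apply
$ORACLE_HOME/OPatch/opatch lsinventory   # confirm the patch is now listed
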
24 Create ASM disk groups using the ASMCA utility.
a) DATADG
b) REDODG
c) FRADG
25 Run CLUVFY to ensure that the required system/cluster configuration and pre-installation steps have been completed for the Oracle RDBMS (Oracle RAC) installation.
26 Install Database Software on both nodes as dbhome_1

27 Apply the Bug Patch 19692824 before running root.sh


28 Repeat the above RDBMS software installation for the two other DB homes (dbhome_2 and dbhome_3) and apply the bug fix patches.
29 Create three databases (LGNP, MISP, TMSP), each in a different ORACLE_HOME that was installed. Following are the ORACLE_HOME directories for all three databases:
1) TMSP = /u01/app/oracle/product/11.2.0.4/dbhome_1
2) MISP = /u01/app/oracle/product/11.2.0.4/dbhome_2
3) LGNP = /u01/app/oracle/product/11.2.0.4/dbhome_3

30 Configure the three custom listeners as below on each node (see the sketch after this list).
1. LISTENER1510 with port 1510 (TMSP)
2. LISTENER1520 with port 1520 (MISP)
3. LISTENER1530 with port 1530 (LGNP)
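
One possible way to register the custom listeners with the cluster, run as the Grid owner; whether srvctl or netca was used here is not stated in the source:

srvctl add listener -l LISTENER1510 -p 1510
srvctl add listener -l LISTENER1520 -p 1520
srvctl add listener -l LISTENER1530 -p 1530
srvctl start listener -l LISTENER1510    # repeat for the other two listeners
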
31 Configure the LOCAL_LISTENER parameter on both nodes for each DB, with the respective port numbers as below (see the sketch after this list):
DB : TMSP1 & 2 -> local_listener=1510
DB : MISP1 & 2 -> local_listener=1520
DB : LGNP1 & 2 -> local_listener=1530
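
A minimal sketch for one database; the VIP hostnames are placeholders, and the same pattern is repeated for MISP (port 1520) and LGNP (port 1530):

sqlplus / as sysdba <<'EOF'
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=racnode1-vip)(PORT=1510))' scope=both sid='TMSP1';
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=racnode2-vip)(PORT=1510))' scope=both sid='TMSP2';
EOF
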
32 Bounce the server on each node, one at a time, to test DB availability; everything worked fine without any downtime for the databases.
33 Verify the health checks on the below items; all were found to be working fine (see the command sketch after this list).
1. Cluster health check
2. Voting disk health check
3. OCR health check
4. ASM disk group health check
5. SCAN working in round-robin fashion
6. SCAN listener health check
7. SCAN IPs registration with the cluster
8. Verify the remote listener points to the SCAN name and the default port of 1521.
9. Verify the local listener points to each host's VIP and the respective custom port.
10. All 3 DBs' health check with HTML reports
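
A minimal sketch of commands that cover most of the checks above; the SCAN name is a placeholder, and the HTML reports in item 10 are assumed to come from a separate health-check script not shown here:

crsctl check cluster -all          # item 1: cluster stack status on all nodes
crsctl query css votedisk          # item 2: voting disk locations
ocrcheck                           # item 3: OCR integrity (run as root)
asmcmd lsdg                        # item 4: ASM disk group state and free space
nslookup rac-scan.example.com      # item 5: repeat to confirm the round-robin order
srvctl status scan_listener        # item 6: SCAN listener status
srvctl config scan                 # item 7: SCAN VIPs registered with the cluster
srvctl config scan_listener        # item 8: SCAN listener port (should be the default 1521)
lsnrctl status LISTENER1510        # item 9: local listener endpoints on the VIP and custom port (repeat per listener)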
