Issue 02
Date 2019-01-30
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Purpose
This document describes the infrastructure and application scenarios of the KunLun Oracle Database 11g (RAC) solution and provides technical and configuration suggestions for deploying Oracle Database 11g on KunLun servers.
Intended Audience
This document is intended for pre-sales and sales personnel who promote the KunLun Oracle 11g solution to customers and provide technical solution suggestions. It is also applicable to customers and deployment personnel who configure Oracle Database 11g on KunLun servers.
The engineers must:
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
Changes between document issues are cumulative. The latest document issue contains all
changes made in previous issues.
Issue 02 (2019-01-30)
This is the second official release. Section 3.6.2 "Configuring OSWatcher" has been updated.
Issue 01 (2018-11-30)
This is the first official release.
Contents
Mount Point | File System | Size
swap | swap | 20 GB
/boot | Ext4 | 1 GB
11.2.0.X | 11.2.0.4 | None | January 2015 | January 2018
Extended support services in the first year (January 2015 to January 2016) are provided for free.
Notes:
- The baseline version is 11.2.0.1.
- 11.2.0.4 is the final patch set version for 11.2.
- Each patch set for 11.2 is a complete installation package.
- No new patches are provided for 11.2.0.1 after September 13, 2011.
- No new patches are provided for 11.2.0.2 after October 31, 2013.
- No new patches are provided for 11.2.0.3 after August 27, 2015.
asmdba 1011
oper 1002
asmdba 1011
asmoper 1012
oper 1002
data2 300 GB
vdisk2 30 GB
vdisk3 30 GB
NOTE
You can change the environment variables and installation directories based on the customer's requirements. In this case, change them accordingly during the Oracle RAC installation to ensure consistency.
Table 1-4 Oracle RAC environment variable and installation directories plan
User | Environment Variable | Variable Value
The software packages selected during OS installation may differ. If an RPM package is missing, manually configure the YUM source and install the package. The YUM source must be configured on both nodes.
Step 3 Run the following command to mount the image. Change sr0 to the actual device name.
mount /dev/sr0 /mnt
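With the image mounted on /mnt, the YUM source can point at it directly. The following is a minimal sketch of such a repository definition; the repository id "local-dvd" and the file name are illustrative examples, not values from this guide.

```shell
# Sketch: define a local YUM repository backed by the ISO mounted on /mnt.
# REPO_DIR defaults to the standard location; override it to try the
# fragment in a scratch directory first.
REPO_DIR=${REPO_DIR:-/etc/yum.repos.d}
mkdir -p "$REPO_DIR"
cat > "$REPO_DIR/local-dvd.repo" <<'EOF'
[local-dvd]
name=Local RHEL DVD
baseurl=file:///mnt
enabled=1
gpgcheck=0
EOF
# Refresh the metadata so the new repository is visible:
# yum clean all && yum repolist
```

With gpgcheck disabled, the packages come unverified from the mounted ISO, which is acceptable only for a trusted installation medium.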
After the storage system is connected, install the following RPM package and run the rescan-scsi-bus.sh script to scan for the mapping information required by the multipathing software (if Huawei UltraPath is used, skip this step).
yum install sg3_utils*
Install the following packages to provide a GUI for the installation of Oracle Grid
Infrastructure and Oracle Database.
yum install java*
yum install xdpy*
During the installation check of Oracle Grid Infrastructure, the following packages need to be
installed. If other packages need to be installed during the installation check, mount the
images to install them.
yum install compat*
yum install libaio*
yum install ksh*
yum install gcc*
Step 5 If the database node OS is RHEL 7, the system displays an alarm indicating that the compat-libstdc++-33-3.2.3 package is missing during the installation of Oracle Grid Infrastructure and Oracle Database. Download the 64-bit compat-libstdc++-33-3.2.3 package from the official Red Hat website and install it.
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
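After installing the packages above, a scripted check can confirm that nothing required by the Grid Infrastructure prerequisite check is still missing. The helper below is a sketch: it compares a required list against installed package names. On a database node the second argument would come from `rpm -qa --qf '%{NAME}\n'`; sample data is used here for illustration.

```shell
# missing_pkgs "REQUIRED..." "INSTALLED..." prints each required package
# name that is absent from the installed list (one installed name per line).
missing_pkgs() {
    for pkg in $1; do
        printf '%s\n' "$2" | grep -qx "$pkg" || echo "missing: $pkg"
    done
}

# Sample installed list for illustration; on a real node use:
#   missing_pkgs "compat-libstdc++-33 libaio ksh gcc" "$(rpm -qa --qf '%{NAME}\n')"
missing_pkgs "compat-libstdc++-33 libaio ksh gcc" "libaio
ksh
gcc"
```

Running the check on both nodes before starting the installer avoids a failed prerequisite check midway through the GUI wizard.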
----End
- The following uses two database nodes as an example to describe how to install Oracle RAC.
- Perform the following operations on each database node.
- In a test environment, you can set only one SCAN IP address.
Procedure
Step 1 In this example, the /etc/hosts file is configured based on the following table. Change the values according to the actual situation.
Step 2 Run the vi /etc/hosts command on each database node and modify the following parameters (modify the public, private, virtual, and SCAN IP addresses based on the network plan). If no DNS server is available, configure a SCAN IP address.
[root@dbn01 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost.localdomain localhost
#Public
192.168.35.41 dbn01
192.168.35.42 dbn02
#Private
10.10.0.1 dbn01-priv
10.10.0.2 dbn02-priv
#Virtual
192.168.35.125 dbn01-vip
192.168.35.126 dbn02-vip
#scan
192.168.35.124 dbn-scan
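A host name accidentally repeated in /etc/hosts can make the RAC name resolution and installation checks fail in confusing ways. The helper below is a small sketch that flags duplicated names (comment lines and loopback entries are ignored); the function name is illustrative.

```shell
# dup_hostnames FILE: print each non-loopback host name that appears
# more than once in an /etc/hosts-style file.
dup_hostnames() {
    awk '!/^#/ && $1 !~ /^(127\.|::1)/ {
             for (i = 2; i <= NF; i++) seen[$i]++
         }
         END { for (h in seen) if (seen[h] > 1) print h }' "$1"
}
# e.g. dup_hostnames /etc/hosts   (no output means no duplicates)
```

Run it on each database node after editing the file; any printed name must be corrected before continuing.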
Step 3 Change the value of HOSTNAME to the planned host name based on the OS type.
NOTE
Change the host name of each database node. The host name cannot contain special characters such as underscores (_) and at signs (@). It is recommended that host names contain only lowercase letters.
- If the database node OS is RHEL 6, run the following command to configure the host name. Then disconnect and reestablish the remote SSH connection for the configuration to take effect.
[root@dbn01 ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=dbn01
- If the database node OS is RHEL 7, run the following command to configure the host name. Then disconnect and reestablish the remote SSH connection for the configuration to take effect.
[root@dbn01 ~]# hostnamectl set-hostname dbn01
----End
The commands for disabling the firewall are different on different OSs. For details about how
to disable the firewall, visit the official website of each OS.
- Disabling THP is a risky operation. Verify that the parameters are correctly modified before restarting the node.
- Perform this operation on each node.
- If the database node OS is RHEL 6, run the following command to perform a check:
[root@dbn01~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
If "always" is displayed in the command output, disable THP by adding "transparent_hugepage=never" to the end of the "kernel..." line in the /boot/efi/EFI/redhat/grub.conf file. The following uses the RHEL 6.8 default kernel as an example.
[root@dbn01~]# vi /boot/efi/EFI/redhat/grub.conf
title Red Hat Enterprise Linux 6 (2.6.32-642.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-642.el6.x86_64 ro root=UUID=7b3b7562-
f254-4572-8e25-5e49c7a42e37 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us
rd_NO_DM quiet nowatchdog nosoftlockup rcupdate.rcu_cpu_stall_timeout=600
rcupdate.rcu_cpu_stall_suppress=1 console=tty0 console=ttyS0,115200
transparent_hugepage=never
initrd /initramfs-2.6.32-642.el6.x86_64.img
Restart the node and check whether "never" is displayed in the command output.
[root@dbn01 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
- If the database node OS is RHEL 7, run the following command to perform a check:
[root@dbn01~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
If "always" is displayed in the command output, disable THP by adding "transparent_hugepage=never" to the end of the "GRUB_CMDLINE_LINUX..." line in the /etc/default/grub file:
[root@dbn01 ~]# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet efi=old_map nowatchdog
nosoftlockup rcupdate.rcu_cpu_stall_timeout=600
rcupdate.rcu_cpu_stall_suppress=1 hpet=disable pmtmr=0 console=tty0
console=ttyS0,115200 transparent_hugepage=never"
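After the restart, the THP check can be scripted so it is easy to repeat on every node. This is a small sketch; the optional file argument exists only so the logic can be exercised against a saved copy of the sysfs file.

```shell
# thp_disabled [FILE]: succeed only if the kernel reports THP disabled,
# i.e. the enabled file shows "[never]".
thp_disabled() {
    grep -q '\[never\]' "${1:-/sys/kernel/mm/transparent_hugepage/enabled}"
}
# e.g. thp_disabled && echo "THP is disabled"
```

The function exits nonzero when "[always]" or "[madvise]" is active, so it can gate further configuration steps in a node-preparation script.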
Procedure
Step 1 Run the following commands:
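The command listing for this step is not reproduced in this issue. A typical Oracle 11g RAC user and group layout is sketched below; the GIDs and UIDs are assumptions loosely based on the plan fragments above (asmdba 1011, asmoper 1012, oper 1002), not values confirmed by this guide, so adjust them to the site plan.

```shell
# Hypothetical user/group creation for an Oracle 11g RAC node; the IDs
# are illustrative (-f makes groupadd tolerate pre-existing groups).
groupadd -f -g 1000 oinstall    # Oracle inventory group
groupadd -f -g 1001 dba         # database administrators
groupadd -f -g 1002 oper        # database operators
groupadd -f -g 1010 asmadmin    # ASM administrators
groupadd -f -g 1011 asmdba      # ASM database access
groupadd -f -g 1012 asmoper     # ASM operators
useradd -u 1100 -g oinstall -G dba,oper,asmdba oracle
useradd -u 1101 -g oinstall -G asmadmin,asmdba,asmoper grid
```

The same IDs must be used on every database node; mismatched UIDs or GIDs across nodes break the shared Oracle inventory and ASM disk permissions.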
Step 2 Change the passwords of users grid and oracle. Ensure that each user has the same password on all database nodes.
[root@dbn01 ~]# passwd oracle
Changing password for user oracle.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
----End
Procedure
Log in as users oracle and grid separately and run the following commands to configure the
password-free interconnection service (the following uses user oracle as an example):
NOTE
If an error is reported during the following operations, check whether the command contains format mistakes such as redundant spaces or Chinese characters.
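The key generation steps that precede Step 3 are not shown above. A typical sequence, run as user oracle and repeated as user grid on every node, might look like the following sketch; the empty passphrase mirrors the password-free goal of this procedure.

```shell
# Generate an RSA key pair if one does not exist yet (no passphrase,
# matching the password-free SSH goal of this procedure).
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa" -q
# Oracle 11g also exchanges DSA keys; on OSs whose OpenSSH still supports
# DSA, repeat with: ssh-keygen -t dsa -N '' -f "$HOME/.ssh/id_dsa"
```

Once both users on both nodes have key pairs, the copy commands in Step 3 append the public keys to each node's authorized_keys file.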
Step 3 Copy the public key from the local node to the peer node.
Before performing this operation, you must modify the /etc/hosts file to enable the host name
and IP address resolution.
The following uses dbn01 and dbn02 as an example.
Run the following commands on RAC node 1:
ssh dbn01 "echo $(cat /home/oracle/.ssh/id_dsa.pub) >> /home/oracle/.ssh/authorized_keys"
ssh dbn02 "echo $(cat /home/oracle/.ssh/id_dsa.pub) >> /home/oracle/.ssh/authorized_keys"
ssh dbn01 "echo $(cat /home/oracle/.ssh/id_rsa.pub) >> /home/oracle/.ssh/authorized_keys"
ssh dbn02 "echo $(cat /home/oracle/.ssh/id_rsa.pub) >> /home/oracle/.ssh/authorized_keys"
Run the following commands and enter the password of user oracle when prompted:
dbn01: # ssh dbn01 "echo $(cat /home/oracle/.ssh/id_dsa.pub) >> /home/oracle/.ssh/authorized_keys"
The authenticity of host 'dbn01 (192.168.35.41)' can't be established.
ECDSA key fingerprint is ee:4c:78:4b:d8:5f:8d:44:85:c5:46:9c:90:9d:13:bd [MD5].
Are you sure you want to continue connecting (yes/no)? // Enter yes.
Warning: Permanently added 'dbn01,192.168.35.41' (ECDSA) to the list of known
hosts.
Password: // Enter the password.
On the two nodes, use SSH to log in to each other. If you can log in to the peer node without
entering the password, the trust relationship is established.
On node 1, run the ssh dbn02 command to log in to node 2 without entering the password.
On node 2, run the ssh dbn01 command to log in to node 1 without entering the password.
----End
- Configure the environment variables for users oracle, grid, and root by referring to 1.2.6 Oracle RAC Environment Variable Plan.
- Perform the following operations on each database node.
Procedure
Step 1 Before configuring Oracle environment variables for each Oracle RAC node, configure ORACLE_UNQNAME for the database (in this example, the value is dbn). Ensure that a unique ORACLE_SID is assigned to each RAC node (such as dbn01 and dbn02).
dbn01: ORACLE_SID=dbn01
dbn02: ORACLE_SID=dbn02
Step 2 Open the .bash_profile file as user oracle and add the following content. When adding the
content for the other node, you need to change the value of ORACLE_SID. (The following
uses dbn01 as an example.)
NOTE
When adding the content for the other node, change the value of ORACLE_SID to dbn02. If the
system has more database nodes, change the values accordingly (such as dbn03 and dbn04).
[root@dbn01 ~]# su - oracle
[oracle@dbn01 ~]$ vi .bash_profile
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=dbn01
export ORACLE_UNQNAME=dbn
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db
export ORACLE_SID=dbn01
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Step 3 Open the .bash_profile file as user grid and add the following content. When adding the content for the other node, you need to change the value of ORACLE_SID. (The following uses dbn01 as an example.)
NOTE
To add the content for the other node, change the value of ORACLE_SID to +ASM2. If the system has
more database nodes, change the values accordingly (such as +ASM3 and +ASM4).
[root@dbn01 ~]# su - grid
[grid@dbn01 ~]$ vi .bash_profile
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=dbn01
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Step 4 Set environment variables for user root by adding the following content (the following uses
dbn01 as an example):
[root@dbn01 ~]# vi .bash_profile
export GRID_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db
export PATH=$GRID_HOME/bin:$GRID_HOME/OPatch:/sbin:/bin:/usr/sbin:/usr/bin
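Because a typo in any of the three profiles only surfaces much later during installation, a quick grep-based check of each file can help. A minimal sketch follows; the function name and the example variable list are illustrative.

```shell
# check_profile FILE VAR...: print each VAR that FILE does not export.
check_profile() {
    file=$1; shift
    for var in "$@"; do
        grep -q "^export $var=" "$file" || echo "not set: $var"
    done
}
# e.g. check_profile /home/oracle/.bash_profile ORACLE_BASE ORACLE_HOME ORACLE_SID
```

Run it against each user's .bash_profile on every node; empty output means all listed variables are exported.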
----End
Procedure
Step 1 Set the system user login limit parameter.
[root@dbn01 ~]# vi /etc/pam.d/login
session required pam_limits.so #Add this line to the end of the configuration file.
After the parameter is set, the user login process loads the pam_limits.so module and sets the
login limits according to the /etc/security/limits.conf file.
Step 2 Set SELINUX to disabled.
[root@dbn01~]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
NOTE
- Add new parameters to the end of the configuration file instead of the middle to prevent the parameters from being overwritten.
- The value of memlock is 90% of the physical memory (in KB).
- If the OS is RHEL 6, add the following content:
[root@dbn01 ~]# vi /etc/security/limits.conf
#ORACLE SETTING
grid soft nproc 65536
grid hard nproc 65536
grid soft nofile 65536
grid hard nofile 65536
oracle soft nproc 65536
oracle hard nproc 65536
oracle soft nofile 65536
oracle hard nofile 65536
oracle soft memlock 1425011166
oracle hard memlock 1425011166
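Because the memlock value depends on the node's physical memory, it can be derived rather than hard-coded. The sketch below implements the 90%-of-memory rule from the note above; the optional file argument exists only so the logic can be tested against a saved copy of /proc/meminfo.

```shell
# memlock_kb [MEMINFO]: print 90% of MemTotal, in KB, as an integer.
memlock_kb() {
    awk '/^MemTotal:/ { printf "%d\n", $2 * 0.9 }' "${1:-/proc/meminfo}"
}
# e.g. use the result for the oracle soft/hard memlock lines above:
#   memlock_kb
```

Recompute the value on each node; nodes with different memory sizes need different memlock limits.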
- Add new parameters to the end of the configuration file instead of the middle to prevent the parameters from being overwritten.
- kernel.shmmax: The value is calculated based on the physical memory capacity, in bytes. In this example, the physical memory of the node is 512 GB, and it is recommended that kernel.shmmax be 70% of the physical memory. The value in this example is calculated as follows: 512 x 70% x 1024 x 1024 x 1024 = 384829069721.6 (rounded up to 384829069722).
- kernel.shmall: calculated based on SGA/PAGE_SIZE.
- kernel.sem: If the number of processes supported by a single database exceeds 12000, set this parameter based on the actual situation.
[root@dbn01~]# vi /etc/sysctl.conf
#ORACLE SETTING
kernel.shmall = 4294967296
kernel.shmmax = 384829069722
kernel.shmmni = 4096
kernel.sem = 12000 1536000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range = 9000 65500
net.ipv4.ipfrag_high_thresh = 16777216
net.ipv4.ipfrag_low_thresh = 15728640
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
vm.min_free_kbytes= 1048576
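The kernel.shmmax arithmetic from the note above (70% of physical memory, in bytes) can be reproduced for any memory size with a one-line helper. This is only a sketch of the calculation, not part of the original procedure.

```shell
# shmmax_bytes GB: print 70% of GB gigabytes, in bytes, rounded to the
# nearest integer (matches the 512 GB example in the note above).
shmmax_bytes() {
    awk -v gb="$1" 'BEGIN { printf "%.0f\n", gb * 0.70 * 1024 * 1024 * 1024 }'
}
shmmax_bytes 512    # 384829069722, the value used in sysctl.conf above
```

For a node with a different memory size, substitute its capacity in GB and update kernel.shmmax accordingly.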
Step 6 Run the following command as user root to activate the newly configured system parameters:
[root@dbn01~]# sysctl -p
----End
After the HugePages parameters are applied, the memory space is immediately allocated and
occupied. Ensure that the parameters are correct. If the number is too large, all system
memory will be occupied and the node OS will become abnormal.
Perform the following operations on each database node.
Procedure
Step 1 Check whether the huge page memory size in the current system is 2048 KB (2 MB).
[root@dbn01~]# cat /proc/meminfo |grep Hugepagesize
Hugepagesize: 2048 kB
Step 2 Calculate the value of vm.nr_hugepages.
vm.nr_hugepages = (node physical memory x 0.65 x 0.75 + 2) x 1024 / 2 //Number of 2 MB huge pages
If the physical memory size is 512 GB, the calculation method is as follows: (512 x 0.65 x 0.75 + 2) x 1024 / 2 = 128819.2, rounded down to 128819.
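The arithmetic above can be scripted so the value is recomputed for other memory sizes; the division reflects the 2 MB huge page size verified in Step 1. A sketch:

```shell
# nr_hugepages GB PAGE_MB: number of huge pages for a node with GB
# gigabytes of memory and PAGE_MB-sized huge pages (result truncated).
nr_hugepages() {
    awk -v gb="$1" -v pg="$2" \
        'BEGIN { printf "%d\n", (gb * 0.65 * 0.75 + 2) * 1024 / pg }'
}
nr_hugepages 512 2    # 128819 for the 512 GB example in this guide
```

Pass the actual node memory and the Hugepagesize reported by /proc/meminfo to get the value for vm.nr_hugepages.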
Step 3 Add the following content to the end of the /etc/sysctl.conf file:
[root@dbn01~]# vi /etc/sysctl.conf
vm.nr_hugepages= 128819
Step 4 Save the configuration, run the following commands for the configuration to take effect, and
check whether the configuration is effective:
[root@dbn01~]# sysctl -p
[root@dbn01~]# sysctl -a|grep nr_hugepages
vm.nr_hugepages = 128819
vm.nr_hugepages_mempolicy = 128819
----End
NOTE
Ignore the NTP alarms generated when installing Oracle Grid Infrastructure. The chrony package may
cause errors during the installation check of Oracle Grid Infrastructure. Before configuring the NTP
service, run the rpm -e chrony-2.1.1-3.el7.x86_64 command to uninstall the chrony package (modify
the file name based on the actual situation).
- If the OS is RHEL 6, perform the following steps:
a. Run the service ntpd start and chkconfig ntpd on commands to automatically
start the NTP service at system startup.
[root@dbn01 ~]# service ntpd start
[root@dbn01 ~]# chkconfig ntpd on
b. Run the vi /etc/ntp.conf command to open the NTP configuration file, and add
"server 192.168.35.31" to the file.
[root@dbn01 ~]# vi /etc/ntp.conf
Press I to edit the ntp.conf file and add the NTP server IP address to the file.
Format: server NTP server IP address
The following uses 192.168.35.31 as an example.
server 192.168.35.31
Press Esc to switch the vi editor to the CLI mode. Press the colon (:) key to go to
the bottom line. Type wq and press Enter to save the modification and exit the vi
editor.
c. Run the service ntpd restart command to restart the NTP service.
[root@dbn01 ~]# service ntpd restart
- If the database node OS is RHEL 7, perform the following steps:
a. Run the systemctl enable ntpd.service command to automatically start the NTP
service at system startup.
[root@dbn01~]# systemctl enable ntpd.service
b. Run the vi /etc/ntp.conf command to open the NTP configuration file, and add the NTP server IP address to the file.
[root@dbn01~]# vi /etc/ntp.conf
Press I to edit the ntp.conf file and add the NTP server IP address to the file.
Format: server NTP server IP address
The following uses 192.168.35.31 as an example.
server 192.168.35.31
Press Esc to switch the vi editor to the CLI mode, press the colon (:) key to go to
the bottom line, and type wq and press Enter to save the modification and exit the
vi editor.
c. Run the systemctl restart ntpd.service command to restart the NTP service.
[root@dbn01~]# systemctl restart ntpd.service
The following method applies only to Huawei storage devices. For non-Huawei storage devices, see the
official configuration guide.
For details about the storage mapping, see the configuration guide on the official Huawei website:
http://support.huawei.com/hedex/hdx.do?docid=EDOC1000084404&lang=en.
Step 1 After the storage configuration is complete, install the multipathing software on the server.
Download UltraPath from official Huawei website:
https://support.huawei.com/enterprise/en/cloud-storage/ultrapath-pid-8576127
Step 2 Log in to the server as user root, navigate to the directory of the software package, and run the following command to grant execute permission on the install.sh script.
chmod 775 *
Step 3 Run the sh install.sh -f unattend_install.conf command to start the installation. After the
installation is complete, restart the system for the installation to take effect.
----End
Use the Linux device manager udev to configure the ASM disks. The udev commands may vary on different OSs.
Procedure
Step 1 Query the SCSI IDs of all shared logical volumes.
NOTE
All volumes are shared volumes. Therefore, you can run the query commands on only one database
node.
- If the database node OS is RHEL 6, run the following commands (the following uses sdb and sdc as an example):
[root@dbn01~]# /sbin/scsi_id -g -u -d /dev/sdb
3648fd8e10027e6d80550a12100000027
[root@dbn01~]# /sbin/scsi_id -g -u -d /dev/sdc
3648fd8e10027e6d80550a13d00000028
- If the database node OS is RHEL 7, run the following commands (the following uses sdb and sdc as an example):
[root@dbn01~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
3648fd8e10027e6d80550a12100000027
[root@dbn01~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
3648fd8e10027e6d80550a13d00000028
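The per-device queries can be wrapped in a small loop. The helper below is a sketch; it takes the scsi_id path as an argument because the path differs between RHEL 6 (/sbin/scsi_id) and RHEL 7 (/usr/lib/udev/scsi_id).

```shell
# print_scsi_ids BIN DEV...: print "device scsi-id" for each device,
# using BIN as the scsi_id binary.
print_scsi_ids() {
    bin=$1; shift
    for dev in "$@"; do
        printf '%s %s\n' "$dev" "$("$bin" -g -u -d "$dev")"
    done
}
# e.g. print_scsi_ids /usr/lib/udev/scsi_id /dev/sdb /dev/sdc
```

Because all volumes are shared, one pass on a single node collects every SCSI ID needed for the udev rules in the next step.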
Step 2 Log in to each database node as user root and edit the udev rule file.
NOTE
- If no file is found, run the vi command to create one. Each device has a configuration item, and each configuration item contains multiple parameters. The SCSI ID of each volume is unique.
- Configure the following parameters based on the actual situation:
- RESULT: The format is RESULT=="3688860300000000ae036568967094421". Each device has a unique SCSI ID. You can query the SCSI ID by running the scsi_id -g -u /dev/sdb command.
- SYMLINK: The format is SYMLINK+="asmdisk/OCRDISK01". asmdisk indicates the folder name of the ASM disk group under /dev/, and OCRDISK01 indicates the ASM disk name. Set this parameter based on the actual situation.
- OWNER and GROUP: The parameter formats are OWNER="grid" and GROUP="asmadmin". Set the parameters based on the actual situation. The following uses user grid and user group asmadmin as an example.
Name the volumes in SYMLINK mode. Do not use the NAME mode.
- If the database node OS is RHEL 6, add the following rules for the disks to be expanded to the rule file (the following uses sdb and sdc as an example):
[root@dbn01 u01]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="3688860300000000ae036568967094421", SYMLINK+="asmdisk/OCRDISK01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36888603000000009e036568967094421", SYMLINK+="asmdisk/OCRDISK02", OWNER="grid", GROUP="asmadmin", MODE="0660"
- If the database node OS is RHEL 7, add the following rules for the disks to be expanded to the rule file (the following uses sdb and sdc as an example):
[root@dbn01 u01]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="3648fd8e1005e339f62e1bfb90000000c", SYMLINK+="asmdisk/OCRDISK01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="3648fd8e1005e339f760c521700000004", SYMLINK+="asmdisk/OCRDISK02", OWNER="grid", GROUP="asmadmin", MODE="0660"
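Because every rule line differs only in the SCSI ID and the ASM disk name, the lines can be generated rather than typed. The following is a sketch for the RHEL 7 form; owner grid and group asmadmin are the example values from the note above.

```shell
# asm_rule SCSI_ID DISK_NAME: print one RHEL 7-style udev rule line for
# an ASM disk, in the same form as the rules above.
asm_rule() {
    printf 'KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", SYMLINK+="asmdisk/%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$1" "$2"
}
# e.g. asm_rule 3648fd8e1005e339f62e1bfb90000000c OCRDISK01 \
#        >> /etc/udev/rules.d/99-oracle-asmdevices.rules
```

Generating the lines avoids the transcription errors that long SCSI IDs invite, especially when many disks are mapped.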
Step 3 Save the file and run the following commands to apply the rules:
[root@dbn01 u01]# /sbin/udevadm control --reload-rules
[root@dbn01 u01]# /sbin/udevadm trigger --type=devices --action=change
Do not run the /sbin/start_udev command to apply the rules when the services are running.
----End
- You can upload and decompress the packages on only one node (any directory except /u01).
- Oracle 11g R2 requires additional patches on RHEL 7. Therefore, if the database node OS is RHEL 7, you need to download and install the 19404309, 18370031, and 19692824 patches for Oracle Grid Infrastructure and Oracle Database. For details, see the official Oracle document (Doc ID 1951613.1). The required patches are as follows:
p19404309_112040_Linux-x86-64.zip
p18370031_112040_Linux-x86-64.zip
p19692824_112040_Linux-x86-64.zip
p13390677_112040_Linux-x86-64_1of7.zip
p13390677_112040_Linux-x86-64_2of7.zip
p13390677_112040_Linux-x86-64_3of7.zip
p13390677_112040_Linux-x86-64_4of7.zip
p13390677_112040_Linux-x86-64_5of7.zip
p13390677_112040_Linux-x86-64_6of7.zip
p13390677_112040_Linux-x86-64_7of7.zip
NOTE
Six folders are generated after the packages are decompressed. The installation requires the grid folder
and database folder. If the client is also installed, the client folder is also required.
[root@dbn01 racinstall]# ls -l
total 5224040
drwxr-xr-x. 6 root root 4096 Sep 4 16:22 client
drwxr-xr-x. 8 root root 4096 Aug 24 23:41 database
drwxr-xr-x. 20 root root 4096 Aug 24 23:40 deinstall
drwxr-xr-x. 6 root root 4096 Aug 24 23:40 examples
drwxr-xr-x. 7 root root 4096 Aug 24 23:41 gateways
drwxr-xr-x. 8 root root 4096 Aug 24 23:42 grid
Step 4 Modify the user and user group of the installation files.
[root@dbn01 racinstall]# chown -R grid:oinstall /opt/racinstall/grid/
[root@dbn01 racinstall]# chown -R oracle:oinstall /opt/racinstall/database/
Step 5 Copy the cvuqdisk .rpm package from the /opt/racinstall/grid/rpm/ directory to the other database nodes and install it on each node.
[root@dbn01 racinstall]# rpm -ivh /opt/racinstall/grid/rpm/cvuqdisk-1.0.9-1.rpm
----End
Procedure
If the number of CPU cores exceeds 288, ASM fails to start during the Grid Infrastructure installation. To prevent this problem, disable some cores to ensure that the number of active CPUs does not exceed 288 before installing Grid Infrastructure. For more details, see Doc ID 1416083.1 on the official Oracle website and A.1 ASM Instance Fails to Start Due to Too Many CPU Cores.
Step 1 Log in to dbn01 as user grid and select a proper graphical display scheme.
Step 2 Navigate to the directory of the Grid Infrastructure installation package and run the following
commands to perform a check:
[grid@dbn01 ~]$ cd /opt/racinstall/grid/
[grid@dbn01 grid]$ ./runcluvfy.sh stage -pre crsinst -n dbn01,dbn02 -verbose
NOTE
Step 3 Navigate to the directory of the Grid Infrastructure installation package and run the
runInstaller script.
[grid@dbn01 ~]$ cd /opt/racinstall/grid/
[grid@dbn01 grid]$ ./runInstaller
Step 4 In the Grid Infrastructure installation wizard window, select Skip software updates and click
Next.
Step 5 Select Install and Configure Oracle Grid Infrastructure for a Cluster and click Next.
Step 8 Deselect Configure GNS and change the value of SCAN Name. Ensure that the value is consistent with the SCAN name corresponding to the SCAN IP address in the /etc/hosts file. Click Next.
Step 9 Click Add to add the information of the other node, and click OK.
Add information of all nodes in sequence.
Step 10 Select SSH Connectivity, enter the password of user grid, and click Setup to establish the
SSH trust relationship between the two nodes.
Step 11 Click Test to verify that the SSH trust relationship is established successfully.
On RHEL, the system may display the following error information. You need to enter yes on each node during the first interaction between the nodes. Log in to each database node, manually establish the SSH connection, and enter yes. Then click Test again.
Step 12 When the following dialog box is displayed, click OK. Then click Next.
Step 13 Confirm the network port information. Set Interface Type corresponding to the VIP network port to Public, and set Interface Type corresponding to the priv network port to Private. If other management IP addresses exist, set their Interface Type to Do Not Use. Then click Next.
Step 14 Select Oracle Automatic Storage Management (Oracle ASM) and click Next.
Step 15 Configure the parameters to create an ASM disk group and click Next.
- Disk Group Name: Enter an ASM disk group name (such as OCR).
- Redundancy: Select Normal.
- AU Size: Retain the default value 1.
Step 16 Select Use same passwords for these accounts, configure a password for users SYS and
ASMSNMP, and click Next.
Step 17 Select Do not use Intelligent Platform Management Interface (IPMI) and click Next.
Step 18 Retain the default ASM user groups, and click Next.
Step 19 Specify the installation path of Oracle Grid Infrastructure. Set Oracle Base and Software
Location to the directories that have been configured. The value of Software Location
cannot be a subdirectory of the value of Oracle Base. After the paths are specified, click
Next.
Step 20 Retain the default value of Inventory Directory and click Next.
Step 21 The system may display alarms during the check. The recommended handling methods are as follows:
- "Device Checks for ASM": You can ignore this alarm.
- "Task resolv.conf Integrity": This is a parsing timeout alarm and you can ignore it.
- "Package pdksh-5.2.14": ksh and mksh have been installed by default, and you do not need to install pdksh. You can ignore this alarm. If pdksh is required, you can install it by referring to 2.1 Installing RPM Packages.
- "OS Kernel Parameters": Click Fix & Check Again, log in to all listed database nodes as user root, run the runfixup.sh script to configure the parameters, and click OK to check again.
Step 23 When the following dialog box is displayed, run the scripts on the two database nodes in
sequence as user root to complete the installation.
Step 24 Switch to user root and run the /u01/app/oraInventory/orainstRoot.sh script. After the
execution of the script is complete on the local node, run the script on the other node.
NOTE
To load the complete environment settings, run the su - or su - root command to switch to user root. Do not run the sudo, pbrun, su root, or su command. You must run the script on the nodes in sequence: the script can be run on the next node only after its execution is complete on the local node.
[root@dbn01 racinstall]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
- When installing the patch, view the README file in the .zip package.
- Perform operations based on Case 5 in section 2.3 "Patch Installation" in the README file.
- If the database node OS is RHEL 6, run the /u01/app/11.2.0/grid/root.sh script after the /u01/app/oraInventory/orainstRoot.sh script is complete on all database nodes.
NOTE
Run the /u01/app/11.2.0/grid/root.sh script on the other node only after the execution of the script is
complete on the local node. Run the script on all nodes based on the SID sequence of user oracle.
[root@dbn01~]# sh /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: #Press Enter
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/
crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'dbn01'
CRS-2676: Start of 'ora.mdnsd' on 'dbn01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'dbn01'
CRS-2676: Start of 'ora.gpnpd' on 'dbn01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dbn01'
CRS-2672: Attempting to start 'ora.gipcd' on 'dbn01'
CRS-2676: Start of 'ora.cssdmonitor' on 'dbn01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'dbn01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'dbn01'
CRS-2672: Attempting to start 'ora.diskmon' on 'dbn01'
CRS-2676: Start of 'ora.diskmon' on 'dbn01' succeeded
CRS-2676: Start of 'ora.cssd' on 'dbn01' succeeded
ASM created and started successfully.
Disk Group OCR created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 1265aa2a11a74fe6bf15911462aff6e1.
Successful addition of voting disk 5d28aa696f794f1fbf1ed78107ca91d1.
Successful addition of voting disk 207beb7ab88f4f77bf5d7b64cc1d033c.
Successfully replaced voting disk group with +OCR.
Step 25 After the execution of the script is complete, click OK. Ignore "[INS-20802] Oracle Cluster
Verification Utility failed" displayed when the progress reaches 100%.
Step 26 In the displayed dialog box, click Yes, and then click Next.
----End
Step 2 Run the /opt/racinstall/database/runInstaller command to start the Oracle Universal
Installer (OUI).
Step 3 Determine whether to receive security updates (in this example, this option is not selected).
Click Next and then click Yes in the displayed dialog box.
Step 6 Select Oracle Real Application Clusters database installation and select both nodes.
Step 7 Click SSH Connectivity, enter the password of user oracle, and click Setup to configure the
SSH trust relationship.
Step 8 After the configuration is complete, click OK. Click Test to check whether the SSH
connection is configured successfully. If the OS is RHEL, the system may display the
following error information because the first SSH interaction on each node requires you to
enter yes. Log in to each database node, manually establish the SSH connection, and enter
yes at the prompt. Then click Test again.
Step 9 When the following dialog box is displayed, click OK. Then click Next.
Step 12 Specify the installation path of Oracle Database. Set Oracle Base and Software Location to
the directories that have been configured and click Next. For details, see 2.8 Configuring
Environment Variables.
Step 13 Retain the default Oracle user groups and click Next.
Step 14 Perform the prerequisite checks before the installation. You are advised to handle the possible
alarms by referring to the following methods:
l "Task resolv.conf Integrity": This is a parsing timeout alarm and you can ignore it.
l "Clock Synchronization": Check whether the NTP clock source is configured
successfully. If NTP clock synchronization is normal, ignore this alarm.
l "Single Client Access Name (SCAN)": Ignore this alarm. This alarm is displayed if only
one SCAN IP address is configured.
l Package pdksh-5.2.14: ksh and mksh are installed by default, so this package is not
required and the alarm can be ignored. If you need to install pdksh, follow the
instructions in 2.1 Installing RPM Packages.
Step 16 If the database node OS is RHEL 7, the following error message is displayed when the
installation progress reaches 56%. Click Continue. After Oracle Database is installed, install
patch 19692824 by referring to the official Oracle document (Doc ID 1951613.1).
When installing the patch, view the README file in the .zip package.
Step 17 Run the script on each node as user root (the script must be executed on the local node first
and then on the remote node). After the script is successfully executed on both nodes, click
Next.
[root@dbn01 racinstall]# /u01/app/oracle/product/11.2.0/db/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/db
Enter the full pathname of the local bin directory: [/usr/local/bin]: #Press Enter.
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
----End
Procedure
Step 1 Log in to the GUI as user grid (or run the su - grid command to switch to user grid), and run
the asmca command to create disk groups.
Step 2 In the ASM Configuration Assistant window, click the Disk Groups tab, and click Create.
The Create Disk Group window is displayed.
Step 3 Enter the disk group name DATA, set the redundancy policy to Normal, and select the
required disks. Click OK.
Step 4 In the displayed dialog box, click OK. The DATA disk group is created.
Step 5 Repeat the preceding steps to create other disk groups. After the disk groups are created, click
OK.
----End
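The asmca steps above can also be performed from SQL*Plus as user grid (sqlplus / as sysasm). The following is a sketch only; the disk paths are placeholders (assumptions), so substitute the devices actually selected in asmca:

```sql
-- Sketch: command-line equivalent of the asmca "Create Disk Group" steps.
-- Disk paths below are placeholders (assumptions), not real devices.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  DISK '/dev/asm-disk1', '/dev/asm-disk2', '/dev/asm-disk3';
```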
Procedure
Step 1 Log in to the system as user oracle and run the dbca command to start Database
Configuration Assistant:
Step 3 Select Oracle Real Application Clusters (RAC) database to create a RAC database, and
click Next.
Step 5 Select a database template and click Next (the following uses Custom Database as an
example, which allows you to customize the parameters such as the block size).
Step 6 Set Configuration Type to Admin-Managed, set Global Database Name (such as dbn), set
SID Prefix (such as dbn0), select all nodes, and click Next. The value of Global Database
Name must be the same as the value of ORACLE_UNQNAME set in 2.8 Configuring
Environment Variables. If the values of ORACLE_SID set in 2.8 Configuring Environment
Variables are dbn01 and dbn02, set SID Prefix to dbn0 so that the values of ORACLE_SID
can be used to access the database.
Step 7 Select Configure Enterprise Manager and Configure Database Control for local
management (selected by default), and click Next.
Step 8 Select Use the Same Administrative Password for All Accounts, set a password, and click
Next.
Step 9 Set Storage Type to Automatic Storage Management (ASM), select Use Oracle-Managed
Files, set Database Area to the created DATA disk group, and click Next.
Step 11 Select Specify Fast Recovery Area, set Fast Recovery Area to the created FRA disk group,
and set Fast Recovery Area Size according to the size of the created FRA partition. The
size cannot exceed the total available capacity of the partition. In the test environment, you do
not need to select Enable Archiving. However, in the production environment, you need to
select Enable Archiving to enable data backup. Then click Next.
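The sizing constraint in this step can be sketched numerically. The database size and disk group capacity below are assumptions for illustration only:

```shell
# Rough FRA sizing sketch. Rule of thumb (assumption): reserve at least
# twice the database size for backups plus archived logs.
db_size_gb=500                 # assumed database size
fra_disk_group_gb=2048         # assumed usable capacity of the FRA disk group
fra_size_gb=$((db_size_gb * 2))

# Fast Recovery Area Size must never exceed the disk group's capacity.
if [ "$fra_size_gb" -gt "$fra_disk_group_gb" ]; then
    fra_size_gb=$fra_disk_group_gb
fi
echo "Fast Recovery Area Size: ${fra_size_gb} GB"
```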
Install only the database components that the service requires, because unnecessary components can
introduce database bugs, and installing too many components prolongs database upgrades.
Step 13 Select Custom to customize the parameters. The HugePages feature is enabled before the
configuration. Therefore, you need to set a small SGA size and PGA size to continue the
installation, and modify the SGA size and PGA size after the installation is complete.
NOTE
1. When you configure the SGA size and PGA size based on the values calculated in 1.2.2 Oracle
RAC Node Memory Plan, the system displays a message indicating that the memory is insufficient.
In this case, you need to set a small SGA size and PGA size to continue the installation, and change
the SGA size and PGA size by referring to 3.5 Configuring the Database.
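As a rough cross-check of the values from 1.2.2 Oracle RAC Node Memory Plan, the split can be sketched as follows. The percentages and total RAM are assumptions, not figures from this document:

```shell
# Rule-of-thumb memory split (assumption): leave ~20% of RAM to the OS,
# then give ~80% of the remainder to the SGA and ~20% to the PGA.
total_ram_gb=512
db_ram_gb=$((total_ram_gb * 80 / 100))   # RAM left for the instance
sga_gb=$((db_ram_gb * 80 / 100))
pga_gb=$((db_ram_gb * 20 / 100))
echo "sga_target=${sga_gb}G pga_aggregate_target=${pga_gb}G"
```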
Step 14 Click the Sizing tab and set the parameters. The default value of Block Size is 8 KB. In this
example, the maximum number of OS user processes that can be simultaneously connected to
the database is set to 2000. You can change this value based on the customer's requirements.
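For reference, Oracle Database 11g derives SESSIONS from PROCESSES as (1.1 x PROCESSES) + 5, and TRANSACTIONS as 1.1 x SESSIONS. A sketch of the resulting limits for the example value of 2000:

```shell
# Derived session limits for PROCESSES=2000 (11g default formulas).
processes=2000
sessions=$((processes * 11 / 10 + 5))   # (1.1 * PROCESSES) + 5
transactions=$((sessions * 11 / 10))    # 1.1 * SESSIONS
echo "processes=${processes} sessions=${sessions} transactions=${transactions}"
```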
Step 15 Click the Character Sets tab, select a proper database character set based on the service
requirements, and click Next. If the character set is incorrect, garbled characters may occur.
Changing the character set is difficult after the database is created. Confirm with the customer
about the character set before the installation.
Step 16 Confirm the default database storage information, and click Next.
Step 17 To avoid the "checkpoint not complete" error and performance deterioration caused by
frequent log switching, adjust the redo log size to ensure that log switchover occurs every 15
to 30 minutes.
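The target can be turned into a size with simple arithmetic. The redo generation rate below is an assumption; measure the real peak rate from AWR or V$SYSSTAT before sizing:

```shell
# Redo log sizing sketch: size each group so that, at the peak redo rate,
# a log switch happens only every 15 to 30 minutes.
redo_rate_mb_per_min=100   # assumed peak redo generation rate
target_switch_min=20       # target interval between log switches
redo_log_mb=$((redo_rate_mb_per_min * target_switch_min))
echo "Suggested redo log size: ${redo_log_mb} MB per group"
```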
Click Finish to start creating the database.
Click OK.
Step 18 Wait until the process is complete. This takes some time.
----End
After the database is installed, you must set the SGA size and PGA size to proper values. The
following example describes how to modify the SGA size and PGA size.
l Set the SGA size and PGA size to the values calculated in 1.2.2 Oracle RAC Node
Memory Plan.
The value of sga_max_size must be greater than or equal to that of sga_target. Log in to the database
and run the following SQL commands to modify the SGA size.
SQL> alter system set sga_max_size=300G scope=spfile;
System altered.
SQL> alter system set sga_target=300G scope=spfile;
System altered.
SQL> alter system set pga_aggregate_target=25G scope=spfile;
System altered.
SQL> shutdown immediate
SQL> startup
SQL> show parameter pga_aggregate_target
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
pga_aggregate_target                 big integer 25G
----End
Oracle technical support and development personnel can use this tool to quickly handle the
customer's service requests.
Procedure
Step 1 Log in to each database node and create a /u01/app/archive directory for the tool.
[root@dbn01 u01]# mkdir -p /u01/app/archive
[root@dbn01 u01]# cd /u01/app/archive
Step 2 Upload the downloaded installation package (oswbbXXX.tar) to the directory on each
database node, and decompress the package.
[root@dbn01 archive]# tar xvf oswbb812.tar
[root@dbn01 archive]# chmod 744 *
Step 3 Open the /etc/rc.local file and add a configuration line to the end of the file to automatically
start OSWatcher at system startup.
[root@dbn01 oswbb]# vi /etc/rc.local
nohup ./startOSWbb.sh 2 72 /u01/app/archive &
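A slightly more robust form of this rc.local entry (a sketch, assuming the tar file extracts to /u01/app/archive/oswbb, as the prompts above indicate) changes into the tool directory first so that the relative script path resolves at boot:

```shell
# /etc/rc.local fragment (sketch): cd into the OSWatcher directory before
# launching so "./startOSWbb.sh" resolves at boot. The arguments are the
# same as in the step above.
cd /u01/app/archive/oswbb && nohup ./startOSWbb.sh 2 72 /u01/app/archive &
```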
----End
The Master Note for Database Proactive Patch Program (Doc ID 756671.1) describes the database
proactive patch project and provides the PSU (Patch Set Update) information corresponding to each
database version.
The following uses Oracle Database 11.2.0.4 as an example. Find Oracle document
2285559.1 on the MOS (My Oracle Support), which describes the proactive PSU for 11.2.0.4.
The GI PSU package contains the patches for Oracle Database. Therefore, you can use the GI
PSU package to install patches for Oracle Grid Infrastructure and Oracle Database.
Download the PSU corresponding to the current database platform, view the readme file, and
install the patches according to the readme file.
Decompress the downloaded package and replace the original OPatch tool of Oracle RAC
Database Home and GI Home.
$ unzip p6880880_112000_Linux-x86-64.zip -d <ORACLE_HOME>
$ <ORACLE_HOME>/OPatch/opatch version
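After replacing OPatch, the reported version should meet the minimum stated in the PSU readme. A small sketch of the comparison; the version strings are examples (assumptions), so substitute the real output of opatch version and the readme's requirement:

```shell
# Compare an installed OPatch version against a required minimum using
# version-aware sorting: if the required version sorts first (or equal),
# the installed OPatch is new enough.
installed="11.2.0.3.35"   # example output of "opatch version" (assumption)
required="11.2.0.3.6"     # example minimum from the PSU readme (assumption)
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n 1)
if [ "$lowest" = "$required" ]; then
    echo "OPatch ${installed} satisfies the minimum ${required}"
else
    echo "OPatch ${installed} is older than the required ${required}"
fi
```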
Step 2 Navigate to the $ORACLE_HOME/OPatch/ocm/bin directory and run the ./emocmrsp
-no_banner -output <location of the ocm.rsp file> command to generate the ocm.rsp file (run
the command as user root on each node in the Oracle RAC environment).
----End
Run the following command as user grid and user oracle to view the information about the
installed patches:
$ORACLE_HOME/OPatch/opatch lsinventory
Example:
User grid:
[root@dbn01~]#su - grid
[grid@dbn01~]#sqlplus / as sysasm
SQL> select ksppinm,ksppstvl,ksppdesc from x$ksppi x,x$ksppcv y where x.indx =
y.indx and ksppinm='_asm_hbeatiowait'; //Verify that the ASM heartbeat value is
modified to 120s.
KSPPINM
--------------------------------------------------------------------------------
KSPPSTVL
--------------------------------------------------------------------------------
KSPPDESC
--------------------------------------------------------------------------------
_asm_hbeatiowait
120
number of secs to wait for PST Async Hbeat IO return
Log in to the other database node and run the following commands:
[grid@dbn01~]#echo $ORACLE_HOME
/u01/app/11.2.0/grid
[grid@dbn01~]#exit
[root@dbn01~]#/u01/app/11.2.0/grid/bin/crsctl stop crs
[root@dbn01~]#/u01/app/11.2.0/grid/bin/crsctl start crs
l OS kernel parameters
l OS packages or patches
l RAC-related settings on the OS
l CRS/Grid Infrastructure
l RDBMS
l ASM
l Database Parameters
l Settings that greatly affect RAC databases
l Checks for 11.2.0.3, 11.2.0.4, and 12c upgrades
l MAA (Maximum Availability Architecture) check
l Database log analysis
CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME
to /u01/app/11.2.0.4/grid?[y/n][y]y //Verify that the target value of CRS_HOME is
correct.
Checking ssh user equivalency settings on all nodes in cluster
Node dbn02 is configured for ssh user equivalency for oracle user
Searching for running databases . . . . .
. .
List of running databases registered in OCR
1. RACDB
2. None of above
Select databases from list for checking best practices. For multiple databases,
select 1 for All or comma separated number
like 1,2 etc [1-2][1].1 //Select the databases to be checked.
. .
Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
----------------------------------------------------------------------------------
---------------------
Oracle Stack Status
----------------------------------------------------------------------------------
---------------------
Host Name CRS Installed ASM HOME RDBMS Installed CRS UP ASM UP RDBMS UP DB
Instance Name
----------------------------------------------------------------------------------
---------------------
dbn01 Yes Yes Yes Yes Yes Yes RACDB1
dbn02 Yes Yes Yes Yes Yes Yes RACDB2
----------------------------------------------------------------------------------
---------------------
Copying plug-ins
. . . . . . . . . . . . . . . . . .
. . . . . .
113 of the included audit checks require root privileged data collection . If
sudo is not configured or the root password is
not available, audit checks which require root privileged data collection can be
skipped.
1. Enter 1 if you will enter root password for each host when prompted //Enter
the root password when prompted.
2. Enter 2 if you have sudo configured for oracle user to execute
root_raccheck.sh script //sudo has been configured for user oracle.
3. Enter 3 to skip the root privileged collections //Skip checks that require
the root permission (not recommended).
4. Enter 4 to exit and work with the SA to configure sudo or to arrange for root
access and run the tool later. //Exit ORAchk to configure the root permission.
Please indicate your selection from one of the above options for root access[1-4]
[1]:1 //Specify how to grant the root permission for the checks (enter 1 to
manually enter the root password).
Is root password same on all nodes?[y/n][y]y //Specify whether the root password
is the same on all nodes.
Enter root password: //Enter the password of user root.
Verifying root password.
After the check is complete, the following information (or similar information) is displayed:
Detailed report (html) - /home/oracle/orachk/orachk_dbn01_RACDB_111013_185118/
orachk_dbn01_RACDB_111013_185118.html
UPLOAD(if required) - /home/oracle/orachk/orachk_dbn01_RACDB_111013_185118.zip
Pay attention to CRITICAL, FAIL or WARNING messages. To view error details and
troubleshooting suggestions, click View in the Details column.
After the faults are rectified, run the ORAchk tool again to verify that the incorrect
parameters have been corrected.
All RAC maintenance commands can be run as user root or user grid.
When the second script is being executed, an ASM startup failure is reported, as shown in the
following figure:
In the log recording the ASM startup failure, you can find the reported error and prompt
message, as shown in the following figure:
Cause
The ASM instance cannot start when the number of CPU cores exceeds 288 and
memory_target is too small. For details, see Doc ID 1416083.1 on the Oracle official website.
Solution
Disable some CPU cores, install the database, change the value of memory_target, and then
re-enable the CPU cores. The detailed procedure is as follows:
Step 2 Restart the OS and go to BIOS. On BIOS, set Cores Enabled to 18 or lower for each CPU, as
shown in the following figures:
Step 3
Step 4 Press F10 and select yes to save the configuration and exit.
Step 5 Reinstall Grid Infrastructure. After the installation is complete, set memory_target to
8182 MB. Note that you must change the value of memory_max_target to 8182 MB before changing
the value of memory_target. Then repeat steps 2 and 3 to re-enable all CPU cores on the BIOS.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
System altered.
System altered
SQL> startup
ORACLE instance started.
Total System Global Area 1.0737E+11 bytes
Fixed Size 30041600 bytes
Variable Size 9.5295E+10 bytes
Database Buffers 1.0737E+10 bytes
Redo Buffers 1312133120 bytes
Database mounted.
Database opened.
----End
All RAC maintenance commands can be run as user root or user grid.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE dbn02
ora.cvu
1 OFFLINE OFFLINE
ora.dbn.db
1 ONLINE ONLINE dbn01 Open
2 ONLINE ONLINE dbn02 Open
ora.dbn01.vip
1 ONLINE ONLINE dbn01
ora.dbn02.vip
1 ONLINE ONLINE dbn02
ora.oc4j
1 OFFLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE dbn02