THIS TECHNICAL TIP IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN
TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS,
WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
Nimble Storage: All rights reserved. Reproduction of this material in any manner whatsoever
without the express written permission of Nimble is strictly prohibited.
Introduction
Audience
Pre-requisites
Oracle ASM Disks Setup
Install Grid Infrastructure
Install Oracle Software
Oracle performance tuning is beyond the scope of this paper. Please visit www.oracle.com for the Oracle Database Performance Tuning Guide for more information on tuning your database.
Audience
This guide is intended for Oracle database solution architects, storage engineers, system administrators, and IT managers who analyze, design, and maintain a robust database environment on Nimble Storage. It is assumed that the reader has a working knowledge of iSCSI SAN network design and basic Nimble Storage operations. Knowledge of the Oracle Linux operating system, Oracle Clusterware, and the Oracle database is also required.
/boot partition
SWAP partition
/ partition
[root@racnode1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 32998256 22234944 9087076 71% /
tmpfs 6291456 2607808 3683648 42% /dev/shm
/dev/sda1 126931 50862 69516 43% /boot
After the installation of the operating system, the following services need to be disabled on each node (example commands are shown after the list).
o Disable SELinux
o Disable NetworkManager
o Disable iptables; if iptables is required, make sure it is set up correctly
o Disable ip6tables
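A minimal sketch of how these services could be disabled on Oracle Linux 6 (assuming chkconfig-style init scripts); adjust to your environment:

# Disable SELinux at the next boot by editing its configuration file
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Stop and disable NetworkManager, iptables, and ip6tables
for svc in NetworkManager iptables ip6tables; do
    service $svc stop
    chkconfig $svc off
done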
If not already installed, the following packages are required for Oracle 11gR2 and multipathing (an example installation command follows the list).
compat-db42-4.2.52-15.el6.x86_64.rpm
compat-db43-4.3.29-15.el6.x86_64.rpm
compat-db-4.6.21-15.el6.x86_64.rpm
cvuqdisk-1.0.9-1.rpm
device-mapper-multipath-0.4.9-64.0.1.el6.x86_64.rpm
device-mapper-multipath-libs-0.4.9-64.0.1.el6.x86_64.rpm
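As an illustration, the packages can be installed with yum or rpm; the exact package names and versions depend on your repository and the Oracle media, so treat this as a sketch:

# Install the multipath and compatibility packages from the configured repositories
yum install -y device-mapper-multipath device-mapper-multipath-libs compat-db compat-db42 compat-db43

# cvuqdisk ships on the Grid Infrastructure media and is installed from the local rpm
CVUQDISK_GRP=oinstall rpm -ivh cvuqdisk-1.0.9-1.rpm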
Add the settings below to the /etc/sysctl.conf file on each node. Run sysctl -p when done.
# Nimble Recommended
net.core.wmem_max = 16780000
net.core.rmem_max = 16780000
net.ipv4.tcp_rmem = 10240 87380 16780000
net.ipv4.tcp_wmem = 10240 87380 16780000
Each node should have four NICs: one is assigned a public IP address, one is assigned a private IP address, and the other two are assigned to the iSCSI networks. After assigning the public and private networks, generate SSH keys on each node and exchange the keys between the two nodes for the root, grid, and oracle users (a sketch of the key exchange follows the /etc/hosts entries below). After the keys are exchanged, ensure that these users can log in to either node without entering a password; this is required for the RAC installation. SCAN IP addresses are also required for 11gR2 RAC, so make sure they are set up in DNS. Also ensure that the /etc/hosts file contains entries for the Public, Private, and VIP networks of both nodes.
# Public
136.18.127.213 racnode1.sedemo.lab racnode1
136.18.127.214 racnode2.sedemo.lab racnode2
# Private
10.18.127.73 racnode1-priv.sedemo.lab racnode1-priv
10.18.127.75 racnode2-priv.sedemo.lab racnode2-priv
# VIP
136.18.127.215 racnode1-vip.sedemo.lab racnode1-vip
136.18.127.216 racnode2-vip.sedemo.lab racnode2-vip
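A minimal sketch of the key exchange for one user (repeat for the root, grid, and oracle users on both nodes); the host names follow the /etc/hosts entries above:

# On racnode1, generate a key pair for the current user (accept the defaults)
ssh-keygen -t rsa

# Copy the public key to the other node so password-less login works
ssh-copy-id racnode2

# Verify that no password prompt appears
ssh racnode2 hostname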
iscsi-initiator-utils-6.2.0.872-41.0.1.el6.x86_64
DEVICE="eth1"
BOOTPROTO=static
HWADDR="00:50:56:AE:30:71"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="4705a3e9-671a-45a0-8477-8b8b98d56c09"
IPADDR=172.18.128.71
NETMASK=255.255.255.0
MTU=9000
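Assuming a similar ifcfg file exists for the second iSCSI interface (for example eth2), the new settings can be activated by restarting the interfaces:

# Bring the iSCSI interfaces up with the new IP and MTU settings
ifdown eth1; ifup eth1
ifdown eth2; ifup eth2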
Multipath Setup
Change directory to /etc and use your favorite editor to create a new file called multipath.conf with the content below.
defaults {
user_friendly_names yes
find_multipaths yes
}
devices {
device {
vendor "Nimble"
product "Server"
path_selector "round-robin 0"
path_checker tur
rr_min_io_rq 10
rr_weight priorities
failback immediate
path_grouping_policy group_by_serial
features "1 queue_if_no_path"
}
}
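Once the file is in place, the multipath daemon must be enabled and started on both nodes; a minimal sketch for Oracle Linux 6:

# Start multipathd and make it start automatically at boot
service multipathd start
chkconfig multipathd on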
Click on the Add Initiator button. Another window will appear. Enter the Initiator Group Name (rac-cluster in the example below) and the Initiator Name (iqn ID) of both nodes and click OK. You can find the iqn ID of the Oracle RAC nodes by running the following command on the Oracle Linux server:
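On an Open-iSCSI setup this is normally read from the initiator name file; a minimal example, assuming the default iscsi-initiator-utils location:

# Display the iSCSI initiator IQN of this node
cat /etc/iscsi/initiatorname.iscsi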
Click on the New Volume button. Enter a name for the volume and a short description. Select the Limit access radio button, then select the Limit access to iSCSI initiator group button and click on the drop-down arrow to select the initiator group for this cluster (rac-cluster from the earlier example). Also select the Allow multiple initiator access button, then click Next.
Once all volumes have been created on the Nimble array, scan for the new volumes on both Oracle RAC
nodes and modify the /etc/multipath.conf on your Oracle Linux servers to use the aliases for the Oracle
volumes. You can modify the multipath.conf file on the first Oracle RAC node only and copy (using scp) the
file to the second Oracle RAC node.
Here is how you can set up multipath for the Oracle volumes.
Scan for disks on the Oracle RAC Linux servers. Note that you need to scan both iSCSI networks. If there is a single discovery IP address, you can use that IP to run the discovery. With either method, please ensure that you can see the same volume across both networks.
Note: If you want to log in to all volumes that this Linux server has access to, run the above login command. If you want to log in to a specific volume, use this command: iscsiadm -m node -T <Target Volume IQN> --login.
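A sketch of the discovery and login sequence, assuming a single Nimble discovery IP (replace the address with the discovery IP of your array):

# Discover the Nimble targets through the discovery IP
iscsiadm -m discovery -t sendtargets -p 172.18.128.100:3260

# Log in to every discovered target ...
iscsiadm -m node --login

# ... or log in to a single volume by its target IQN
iscsiadm -m node -T <Target Volume IQN> --login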
After the scan completes, you can run the following command to find out which disk serial number belongs to which LUN so you can alias the volumes properly. You can run this command on the first Oracle RAC node only. In the output below, sdf and sdg are the same disk (same serial number), which I aliased as ocr; sdc and sde are the same disk, which I aliased as racproddata1; and so forth.
[root@racnode1 ~]# for a in `cat /proc/partitions | awk '{print $4}' | grep sd`; do echo "### $a: `scsi_id -u
-g /dev/$a`" ; done
### sda:
### sda1:
### sda2:
### sda3:
### sdb: 2bfdb64e06cf5ee696c9ce900abdb3466
### sdd: 2bfdb64e06cf5ee696c9ce900abdb3466
### sdc: 2e4d6149bf39b0c626c9ce900abdb3466
### sde: 2e4d6149bf39b0c626c9ce900abdb3466
### sdg: 2c6c2a2e042cc67d96c9ce900abdb3466
### sdf: 2c6c2a2e042cc67d96c9ce900abdb3466
### sdh: 261dccce8c92d80256c9ce900abdb3466
### sdi: 261dccce8c92d80256c9ce900abdb3466
### sdj: 242d6ac9296637ba66c9ce900abdb3466
### sdk: 242d6ac9296637ba66c9ce900abdb3466
The new /etc/multipath.conf would look something like this. Please refer to Oracle Linux
documentation for specific parameter settings for other storage devices.
defaults {
user_friendly_names yes
find_multipaths yes
}
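The alias definitions themselves are not shown above; a sketch of what a multipaths section could look like, using the serial numbers from the scsi_id output and the ocr / racproddata1 aliases mentioned earlier (treat the wwid values as examples from this particular environment):

multipaths {
    multipath {
        wwid  2c6c2a2e042cc67d96c9ce900abdb3466
        alias ocr
    }
    multipath {
        wwid  2e4d6149bf39b0c626c9ce900abdb3466
        alias racproddata1
    }
}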
Copy the /etc/multipath.conf file to the second Oracle RAC node and reload multipathd daemon on
both Oracle RAC nodes.
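For example, assuming the file was edited on racnode1:

# Push the multipath configuration to the second node
scp /etc/multipath.conf racnode2:/etc/multipath.conf

# Reload the multipath daemon on each node to pick up the aliases
service multipathd reload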
To set at boot time, add the elevator option at the kernel line in the /etc/grub.conf file:
elevator=noop
[root@mktg04 ~]# multipath -ll | grep sd | awk -F":" '{print $4}' | awk '{print $2}' | while read LUN; do echo
noop > /sys/block/${LUN}/queue/scheduler ; done
multipath -ll | grep sd | awk -F":" '{print $4}' | awk '{print $2}' | while read LUN
do
echo 1024 > /sys/block/${LUN}/queue/max_sectors_kb
done
Note: To make this change persistent after reboot, add the commands to the /etc/rc.local file.
Use the performance setting for all available CPUs on the host when possible.
[root@racnode1 ~]# for a in $(ls -ld /sys/devices/system/cpu/cpu[0-9]* | awk '{print $NF}') ; do echo
performance > $a/cpufreq/scaling_governor ; done
Note: To make this change persistent after reboot, add the commands to the /etc/rc.local file.
After a snapshot has been taken using the Volume Collection feature, the snapshot(s) will be available for
recovery based on your retention policy setting. The snapshot(s) can be deleted manually if you wish to do
so. Please note that you cannot delete a snapshot if there is a clone volume associated with that snapshot.
This particular feature is very useful in an Oracle environment. It allows you to quickly and easily create a clone of your production database for a Test, Development, or Staging environment.
Below are the steps to create a Volume Collection and assign the Oracle volumes to it.
A Create a volume collection window appears. Enter a name for this volume collection, select Blank schedule, and click Next.
A Schedule window appears. You must have at least one protection (snapshot) schedule defined. Enter a name and your desired schedule/retention and click Next.
Pre-requisites
Create a local directory on the boot LUN or a separate Nimble LUN for the Grid Infrastructure and
Oracle software on both Oracle RAC nodes. The directories must be exactly the same on both nodes.
o Create group ID of oinstall on both servers
o Create group ID of dba on both servers
o Create user ID of grid on both servers
o Create user ID of oracle on both servers
o Make sure the ASM module is loaded on both nodes; if not, load it (see the example commands below)
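A sketch of these steps, assuming conventional Oracle user and group IDs and the oracleasm kernel module shipped with Oracle Linux; adjust the IDs and group memberships to your own standards:

# Create the Oracle groups and users with identical IDs on both nodes (example IDs)
groupadd -g 54321 oinstall
groupadd -g 54322 dba
useradd -u 54331 -g oinstall -G dba grid
useradd -u 54332 -g oinstall -G dba oracle

# Check whether the ASM kernel module is loaded, and load it if it is not
lsmod | grep -i oracleasm || modprobe oracleasm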