Oracle RAC OVM Templates: OneCommand & the New DeployCluster Tool
Saar Maoz, RAC Development, Oracle
Updated: 18-JUL-2012
Agenda
Oracle RAC & Oracle VM Overview
Oracle RAC Oracle VM Templates
NEW: DeployCluster tool
Demo / Discussion

Standard 2-node Cluster Build
OVM3: N-node Cluster Build using the DeployCluster tool
OVM2: Dom0 N-node Cluster Build
Add / Remove Nodes / Instances
Live Migration
[Diagram: Standard RAC cluster. Node 1 through Node n each run their own operating system and share the redo/archive logs of all instances, the database/control files, and the OCR and voting disks.]
Oracle VM Templates - Rapid Application Deployment
[Diagram: RAC templates downloaded from E-Delivery and deployed as multiple guest VMs.]
RAC OVM Templates - Availability
Currently available for 11.2.0.3.2, 11.2.0.2.6, 11.2.0.3.0, 11.2.0.2.2, 11.2.0.2.0, 11.2.0.1.4, 11.2.0.1.2, 11.1.0.7.6, 11.1.0.7.2 on Oracle Linux, 32 and 64 bit
Download from E-Delivery or My Oracle Support Note 1185244.1:
https://support.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id= 1185244.1
The entire install is automated; the user provides only minimal input parameters.
RAC OVM Templates - Delivery Mechanism
32-bit and 64-bit versions
Image files are built with sparse-file support
Inside each zip are tgz archives with the following files:
VM config file (text file)
OPatch lsinventory sample output
README / PDFs for installing
Disk 1 image file: operating system
Disk 2 image file: Oracle software (includes Clusterware, ASM and RAC)
Follow the instructions in the provided PDF files, or in the document for the new DeployCluster tool
Using the RAC OVM Templates
Step 1: Download the template (32/64 bit; 11.1 / 11.2.0.1 / 11.2.0.2 / 11.2.0.3)
Step 2: Register the template with Oracle VM Manager
Step 3: Create 2+ VMs
Step 4: Shared storage
  For non-production: create a shared disk using OVM Manager, then assign the shared disk to all guest VMs using OVM Manager
  For production deployment: identify a physical shared disk; on OVM2, update all guest VMs' vm.cfg files with the location of the shared disk; on OVM3, attach the physical disks using Oracle VM Manager
Step 5: Boot all VMs, or use the DeployCluster tool and skip Step 6
Step 6: Run OneCommand to configure and build the RAC database
The templates support both configurations; each has its own PDF to help with the steps
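For the OVM2 production path in Step 4, updating a guest's vm.cfg means appending the physical shared disk to its disk list. A minimal sketch with hypothetical image and device paths; the phy: prefix and the w! shared-writable mode are standard Xen vm.cfg syntax, not values taken from the templates:

```
# vm.cfg fragment (hypothetical paths)
disk = ['file:/OVS/running_pool/racnode1/System.img,xvda,w',
        'file:/OVS/running_pool/racnode1/Oracle11g.img,xvdb,w',
        'phy:/dev/mapper/shared_lun1,xvdc,w!',   # same LUN on every guest
       ]
```

The w! mode allows multiple guests to open the device read-write, which shared ASM storage requires.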
[Diagram: Oracle VM architecture. Dom0 operating systems run on the hypervisor, which sits on an x86/64 bare-metal server; guest I/O is routed through Dom0.]
[Diagram: shared disk attached to the guests via a phy: device entry in shared-writable (w!) mode.]
[Diagram: Guest network layout. Each guest's eth0 (public) and eth1 (private network) connect through the Domain-0 bridges xenbr0 and xenbr1 to bond0 and bond1, which bond the physical NICs eth0 through eth3 and attach to redundant public and private network switches (Public Network Switch 1/2, Private Network Switch 1/2).]
2-node Test RAC Minimum Requirements
Two or more cores
4GB of memory or more
30GB of disk or more
[Diagram: Test deployment on a single Oracle VM server. Dom0 and the guest VMs share the CPUs of an x86/64 bare-metal server under the hypervisor, with I/O routed through Dom0.]
NEW: DeployCluster Tool Features
Allows fully automated end-to-end deployment of N-node clusters
Assuming the VMs are pre-created with NICs & shared disks
No Dom0 access or login to the VMs is needed
All previously released templates are fully compatible
As long as the OVMAPI-enabled OS disk is used
NEW: DeployCluster Tool Features (Contd)
Allows ANY build configuration (SID name, user name, passwords, ports, etc.) to be modified from outside the guests at deploy time
Supply a custom params.ini using the --params (-P) flag
All exceptions are trapped, with 4 lines of detail shown by default
An automated logfile is written on each invocation
Easy to re-attempt a failed deployment: only fix what failed, and leave the other VMs running
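A custom params.ini passed with -P might look like the fragment below. The key names and values here are illustrative assumptions; consult the params-sample.ini shipped with the template for the authoritative list of keys:

```ini
# params.ini fragment -- illustrative key names, hypothetical values;
# see the params-sample.ini shipped with the template for the real keys
DBNAME=ORCL
SIDNAME=ORCL
LISTENERPORT=1521
RACPASSWORD=Welcome1
```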
DeployCluster Examples
List all VMs with a simple name of racnode.? on the Manager:
$ deploycluster.py -u admin -N netconf.ini -M racnode.? -L
Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation
(com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3
server1.us.oracle.com (x86_64) Invoked as root at Sun Jun 3 23:28:53 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012)
Using: ./deploycluster.py -u admin -p **** -M racnode.? -P params-sample.ini -N netconfig30-3nodes.ini
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.212) protocol (1.8) IP (141.22.242.50) UUID (0004fb0000010000f2b5b95576e2c301)
INFO: Inspecting /home/oracle/netconfig30-3nodes.ini for number of nodes defined....
INFO: Detected 3 nodes in: /home/oracle/netconfig30-3nodes.ini
INFO: Located a total of (3) VMs; 3 VMs with a simple name of: ['racnode.0', 'racnode.2', 'racnode.1']
INFO: Verifying all (3) VMs are in Running state
INFO: VM with a simple name of "racnode.0" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racnode.2" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racnode.1" is in a Stopped state, attempting to start it...OK.
INFO: Detected that all (3) VMs specified on command have (3) common shared disks between them (ASM_MIN_DISKS=3)
INFO: The (3) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
  netconfig.ini (Network setup): /home/oracle/netconfig30-3nodes.ini
  params.ini (Overall build options): /home/oracle/params-sample.ini
  buildcluster: yes
INFO: Starting to send cluster details to all (3) VM(s)......
INFO: Sending to VM with a simple name of "racnode.0"..............
INFO: Sending to VM with a simple name of "racnode.2".....
INFO: Sending to VM with a simple name of "racnode.1"......
INFO: Cluster details sent to (3) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racnode.0)...
INFO: deploycluster.py completed successfully at 23:30:26 in 92.6 seconds (01m:32s)
Logfile at: /home/oracle/deploycluster/deploycluster1.log
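The "Detected 3 nodes" line above comes from deploycluster inspecting the netconfig file for node entries. A rough sketch of that count, assuming nodes are declared as NODE1=, NODE2=, ... lines as in the netconfig.ini example later in this deck (an illustration only, not the tool's actual code):

```python
import re

def count_nodes(text):
    """Count distinct NODEn= host entries in a netconfig.ini body."""
    # Match 'NODE1=test170' style lines, but not 'NODE1IP=' or 'NODE1VIP='
    return len({m.group(1) for m in re.finditer(r'^NODE(\d+)=', text, re.M)})

sample = """\
# Node specific information
NODE1=test170
NODE1IP=192.168.1.170
NODE2=test171
NODE2IP=192.168.1.171
NODE3=test172
"""
print(count_nodes(sample))  # 3
```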
Customer supplies an initialization file (netconfig.ini)
Stamp the file into shared storage
Repeat the node section, identifying the 6 attributes for each node
Power on the new nodes and pass the command on the grub boot-up line
netconfig.ini
# Node specific information
NODE1=test170
NODE1IP=192.168.1.170
NODE1PRIV=test170-priv
NODE1PRIVIP=10.10.10.170
NODE1VIP=test170-vip
NODE1VIPIP=192.168.1.172
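Repeating the section means each additional node gets the same six attributes with the next index. A hypothetical second node, with invented values for illustration:

```ini
# Node 2 -- same six attributes, next index (hypothetical values)
NODE2=test171
NODE2IP=192.168.1.171
NODE2PRIV=test171-priv
NODE2PRIVIP=10.10.10.171
NODE2VIP=test171-vip
NODE2VIPIP=192.168.1.173
```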
If using a filesystem disk, use losetup -vf to loop-mount the disk, then stamp the loop device.
The above will automatically configure the network on the new VMs and build a 2-node cluster.
Most steps can be run globally or locally (add local to the step name)
Any failure of any step stops execution
Combinations of common steps are also available as special steps (e.g. buildcluster) or command-line flags (e.g. -c)
To clean up, run:
/u01/racovm/racovm.sh -S clean
Adding or Removing Node(s) / Instance(s)
Fully automated addition and removal of nodes or instances. Simply run one of:
./racovm.sh -S addnodes -N node2,node3
./racovm.sh -S removenodes -N node2,node3
./racovm.sh -S addinstances -N node2,node3
./racovm.sh -S deleteinstances -N node2,node3
diskconfig.sh
Configures disks in the VMs
Verifies disks are not held on any node by: ASM, ASMLib, a RAID device, PowerPath, Device Mapper, a user application, a filesystem, or a swap device
Stamps and discovers disks on all nodes (verifies sharedness)
Auto-partitions & aligns data to a 1MB offset (default)
Supports MSDOS or GPT partition tables
Merges needed udev rules into /etc/udev/
Supports EL4, EL5 & SLES10, SLES11
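The 1MB alignment default above corresponds to a fixed first sector once the sector size is known. A quick sketch of the arithmetic, assuming 512-byte logical sectors (the sector size is an assumption; diskconfig.sh's internals are not shown here):

```python
SECTOR_BYTES = 512          # assumed logical sector size
ALIGN_BYTES = 1024 * 1024   # 1 MiB alignment target

# First usable sector so that partition data starts on a 1 MiB boundary
first_sector = ALIGN_BYTES // SECTOR_BYTES
print(first_sector)  # 2048
```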
netconfig.sh
Configures the network in the VMs
Full validation of user input: NIC names, IPs/subnet masks
Checks for duplicate IPs on the subnet (arping)
Writes /etc/hosts and the related ifcfg-*, resolv.conf, etc. files to fully configure the network
Allows stamping of netconfig.ini to shared storage, which helps in N-node network configuration (from dom0 or inside the guests)
Supports and configures bonding (not needed inside guests)
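The IP and subnet-mask validation mentioned above can be sketched with Python's standard ipaddress module; netconfig.sh itself is a shell script, so this is only an analogous illustration, not its actual logic:

```python
import ipaddress

def valid_host(ip, netmask):
    """Return True if ip/netmask parses and ip is a usable host address."""
    try:
        ipaddress.ip_network(f"0.0.0.0/{netmask}")         # mask sanity check
        iface = ipaddress.ip_interface(f"{ip}/{netmask}")  # combined parse
    except ValueError:
        return False
    # Reject the network and broadcast addresses themselves
    return iface.ip not in (iface.network.network_address,
                            iface.network.broadcast_address)

print(valid_host("192.168.1.170", "255.255.255.0"))  # True
print(valid_host("192.168.1.255", "255.255.255.0"))  # False (broadcast)
```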
Examples:
./doall.sh -L last reboot
./doall.sh -ps /u01/app/11.2.0/grid/bin/diagcollection.sh
Questions & Answers