
Creating zones under VCS on a ZFS file system with VxDMP enabled

Make sure MPxIO is disabled (check with stmsboot -L).
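A quick pre-check sketch (emc0_1804 is the example LUN used below):
stmsboot -L (check the MPxIO/STMS device-name mapping state)
vxdisk list | grep emc0_1804 (confirm the LUN is visible through VxVM/VxDMP)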


Create the zpool on the Veritas DMP device:
zpool create uz1332_zone /dev/vx/dmp/emc0_1804
bash-3.2# zfs list
NAME               USED   AVAIL  REFER  MOUNTPOINT
uz1332_zone        8.31G  16.2G  31K    legacy
uz1332_zone/root   8.31G  16.2G  8.31G  /export/zones/usa0300uz1332
Create the zone with your required configuration (IP addresses only), boot the zone, and complete the initial system configuration with zlogin -C zonename.
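A minimal zone configuration sketch matching the layout above (zone name and zonepath are from this example; adjust to your environment):
zonecfg -z usa0300uz1332 <<EOF
create -b
set zonepath=/export/zones/usa0300uz1332
set autoboot=false
commit
EOF
zoneadm -z usa0300uz1332 install
zoneadm -z usa0300uz1332 boot
zlogin -C usa0300uz1332
Add the zone's IP configuration with "add net" inside zonecfg only if you are not handling the IP address through VCS (see the IPMP/IPMultiNICB section later in this document).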
Make the VCS configuration writable: haconf -makerw

1 - Create a new service group named "yourSG" (name it as you prefer):
hagrp -add yourSG
hagrp -modify yourSG SystemList server1 0 server2 1
hagrp -modify yourSG AutoStartList server1 server2
hagrp -modify yourSG Parallel 0 (here we declare the SG as failover; 1 would make it parallel)

2 - Create a new Zpool resource named "vcspool" (name it as you prefer):
hares -add vcspool Zpool yourSG
hares -modify vcspool Critical 1 (declared critical because it holds the zone root file system)
hares -modify vcspool ChkZFSMounts 1
hares -modify vcspool FailMode continue
hares -modify vcspool ForceOpt 1
hares -modify vcspool ForceRecoverOpt 0
hares -modify vcspool PoolName yourZPOOLname
hares -modify vcspool AltRootPath /
hares -modify vcspool ZoneResName vcszone
hares -modify vcspool DeviceDir -delete -keys
hares -modify vcspool Enabled 1
hazonesetup -g yourSG -r vcszone -z yourZONE -p abc123 -a -s server1,server2
hares -link vcszone vcspool
haconf -dump -makero
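A quick way to verify the result at this point (a sketch using standard VCS query commands and the names above):
hagrp -resources yourSG (should list vcspool and vcszone)
hares -dep vcszone (vcszone should require vcspool)
hares -display vcspool (check PoolName, AltRootPath and ZoneResName)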
3 - Copy the /etc/zones/index and /etc/zones/yourZONE.xml files from server1 to server2.
4 - On server2, edit /etc/zones/index and put yourZONE in the "configured" state, as follows:
yourZONE:configured:/yourZPOOLname/yourZFS:
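A sketch of step 3 (if server2 already hosts other zones, merge only the relevant line into its /etc/zones/index rather than overwriting the whole file):
scp /etc/zones/yourZONE.xml server2:/etc/zones/
scp /etc/zones/index server2:/etc/zones/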
5 - Probe yourSG on both servers.
6 - Halt yourZONE on server1 and export yourZPOOLname:
zoneadm -z yourZONE halt
zpool export yourZPOOLname
7 - Import the zpool with an alternate root on both nodes:
server1:
zpool import -R / yourZPOOLname
zpool export yourZPOOLname
server2:
zpool import -R / yourZPOOLname
zpool export yourZPOOLname

8 - Bring the service group online:
hagrp -online yourSG -sys server1
9 - Check that vcspool and vcszone are online and yourZONE is running.
10 - Try to switch yourSG to the other node.
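A sketch of the checks in steps 9 and 10 (standard VCS and zones commands, names as above):
hares -state vcspool
hares -state vcszone
zoneadm list -cv (yourZONE should show as "running" on the active node)
hagrp -switch yourSG -to server2 (test failover to the other node)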

Enabling debug mode for the Zone and Zpool agents


haconf -makerw
hatype -modify Zone LogDbg DBG_1 DBG_2 DBG_3 DBG_4 DBG_5
hatype -modify Zpool LogDbg DBG_1 DBG_2 DBG_5
haconf -dump -makero
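With debug enabled, the messages go to the standard VCS log files under /var/VRTSvcs/log; a quick way to watch them (a sketch):
tail -f /var/VRTSvcs/log/engine_A.log
tail -f /var/VRTSvcs/log/Zone_A.log
tail -f /var/VRTSvcs/log/Zpool_A.log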

Delete the debug keys:


haconf -makerw
hatype -modify Zone LogDbg -delete DBG_1 DBG_2 DBG_3 DBG_4 DBG_5
hatype -modify Zpool LogDbg -delete DBG_1 DBG_2 DBG_5
haconf -dump -makero

hagrp -add zone_uz1332


hagrp -modify zone_uz1332 SystemList usa0300ux592 0 usa0300ux593 1
hagrp -modify zone_uz1332 AutoStartList usa0300ux593 usa0300ux592
hagrp -modify zone_uz1332 Parallel 0 (failover SGs need Parallel set to 0)
hares -add zone_uz1333_root Zpool zone_uz1333
hares -modify zone_uz1333_root Critical 1 (declared critical because it holds the zone root file system)
hares -modify zone_uz1333_root Enabled 1
hares -modify zone_uz1333_root ChkZFSMounts 1
hares -modify zone_uz1333_root FailMode continue
hares -modify zone_uz1333_root ForceOpt 1
hares -modify zone_uz1333_root ForceRecoverOpt 0
hares -modify zone_uz1333_root PoolName uz1333_zone
hares -modify zone_uz1333_root AltRootPath / (mandatory: if zpool.cache is missing or corrupted, VCS uses the alternate root path to import the pool, so set it on every Zpool resource under VCS)
hares -modify zone_uz1333_root DeviceDir -delete -keys
hazonesetup -g zone_uz1333 -r usa0300uz1333 -z usa0300uz1333 -p hclr00t -a -s usa0300ux592,usa0300ux593
This command creates the zone VCS user on usa0300ux592 only, even though ux593 is listed as well; you have to log in to the other server (usa0300ux593) and run the same command again with ux593 only:

hazonesetup -g zone_uz1333 -r usa0300uz1333 -z usa0300uz1333 -p hclr00t -a -s usa0300ux593
This creates the zone VCS user on usa0300ux593. Also note that there is no need to create a Zone resource under VCS separately: the hazonesetup command above creates the Zone-type resource (-r usa0300uz1333 is the resource name), and the zone VCS users it creates are used internally for zlogin into the zone.

You will see output like this (the zone resource is named usa0300uz1333):
bash-3.2# hagrp -resources zone_uz1333
usa0300uz1333
zone_uz1333_root

cluster ux592_ux593 (
        UserNames = { admin = dKLdKFkHLgLLjTLfKI,
                z_usa0300uz1329_usa0300ux593 = JKKlKHjQLfHEgELh,
                z_usa0300uz1329_usa0300ux592 = gJJkJGiPKeGDfDKg,
                z_usa0300uz1330_usa0300ux592 = chhIheGniCebDbiE,
                z_usa0300uz1330_usa0300ux593 = aLLmLIkRMgIFhFMi,
                z_usa0300uz1331_usa0300ux592 = gmmNmjLsnHjgIgnJ,
                z_usa0300uz1331_usa0300ux593 = gJJkJGiPKeGDfDKg,
                z_usa0300uz1332_usa0300ux593 = bkkLkhJqlFheGelH,
                z_usa0300uz1332_usa0300ux592 = aLLmLIkRMgIFhFMi,
                z_usa0300uz1333_usa0300ux593 = hllMliKrmGifHfmI,
                z_usa0300uz1333_usa0300ux592 = fQQrQNpWRlNKmKRn }
        Administrators = { admin }
        )
You can delete these users with hauser -delete z_<zonename>_<hostname> once the SG is deleted from VCS.
Once the zone resource is created, you need to link the Zpool resource with the Zone resource. There is no need to create any Mount resource for the zone root file system; VCS picks up the associated mount entry from the zone XML file.

#hares -link usa0300uz1333 zone_uz1333_root (here the zone depends on the zone root pool)
Make sure the link is correct, otherwise the zone will not boot and VCS throws errors. If you have a problem of this kind, check the resource dependency tree in main.cf; if you still face the issue, enable the debug commands above and check the logs for clarity.

#hares -modify usa0300uz1332 BootState multi-user-server (to boot the zone into multi-user-server mode)

If required, probe the resources on both nodes.
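For example (a sketch with the resource names used above):
hares -probe zone_uz1333_root -sys usa0300ux592
hares -probe zone_uz1333_root -sys usa0300ux593
hares -probe usa0300uz1333 -sys usa0300ux592
hares -probe usa0300uz1333 -sys usa0300ux593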


Resource dependency tree example in main.cf. You can troubleshoot by bringing each resource online through VCS; if all of them come online properly individually but the group still fails, check your resource link dependency tree.

usa0300uz1330 requires zone_uz1330_root

// resource dependency tree
//
//      group zone_uz1330
//      {
//      Zone usa0300uz1330
//          {
//          Zpool zone_uz1330_root
//          }
//      }

zoneadm -z zonename halt
Unmount the zone root file system, then:
zpool export uz1333_zone
zpool import -R / uz1333_zone
zpool export uz1333_zone
Do the same on the other cluster node. Once you have imported with -R you will see output like this:

bash-3.2# zpool list
NAME          SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
uz1333_zone   24.9G  8.95G  15.9G  35%  ONLINE  /

Now copy the /etc/zones/index file to the other node and put the zone in the configured state on the node where you did not create the zone.

It is better to set autoboot to false for each zone on both servers (it does not strictly matter, but false is the safer setting when the zones are under VCS control).
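A sketch of setting this on each node (zone name from this example):
zonecfg -z usa0300uz1333 <<EOF
set autoboot=false
commit
EOF
zonecfg -z usa0300uz1333 info autoboot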

Since the zone is needed for failover, the zone XML must exist on the other node as well, and the zone root path must be created there with 700 permissions.
Before bringing the SG online, once you have the zone XML file on both nodes, you can export and import the zone root zpool on the other node and test whether the zone boots outside of VCS.
Failover works by detaching the zone from one node and attaching it on the other with the -F option, so it does not check for patch or package mismatches; it simply attaches. If you want to bring the zone's patches in line with the global zone, you can detach and attach with the -U option where required; normally we do not.
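As a sketch, the manual equivalent of what VCS does during failover (zone and pool names from this example):
zoneadm -z usa0300uz1333 detach (on the node giving up the zone)
zpool export uz1333_zone
zpool import -R / uz1333_zone (on the target node)
zoneadm -z usa0300uz1333 attach -F (force attach, skips the patch/package validation)
zoneadm -z usa0300uz1333 boot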

Check hastatus -sum:

bash-3.2# hastatus -sum

-- SYSTEM STATE
-- System           State     Frozen

A  usa0300ux592     RUNNING
A  usa0300ux593     RUNNING

-- GROUP STATE
-- Group          System          Probed  AutoDisabled  State

B  zone_uz1329    usa0300ux592                          ONLINE
B  zone_uz1329    usa0300ux593                          OFFLINE
B  zone_uz1330    usa0300ux592                          ONLINE
B  zone_uz1330    usa0300ux593                          OFFLINE
B  zone_uz1331    usa0300ux592                          ONLINE
B  zone_uz1331    usa0300ux593                          OFFLINE
B  zone_uz1332    usa0300ux592                          OFFLINE
B  zone_uz1332    usa0300ux593                          ONLINE
B  zone_uz1333    usa0300ux592                          ONLINE
B  zone_uz1333    usa0300ux593                          OFFLINE

bash-3.2#
Once the zone is booted on the physical server, you need to enable passwordless authentication between the physical server and the zone you created at the VCS level; otherwise VCS keeps logging messages asking you to run hazonesetup.

zlogin usa0300uz1333
export VCS_HOST=usa0300ux593 (the physical server the zone is currently on)
# /opt/VRTSvcs/bin/halogin z_yourZONE_server2 abc123
Here the user to provide is z_usa0300uz1333_usa0300ux593, which was created by the hazonesetup command earlier (you can see it in the main.cf file), and the password is hclr00t, the same one given to hazonesetup.

VCS_HOST must be set to the name of the physical server the zone is currently on (for example ux593), and the user you provide must match it (z_<zonename>_usa0300ux593). If my zone is on physical server usa0300ux592, I zlogin into the zone and run the same commands with the other user and the other host:

#export VCS_HOST=usa0300ux592
# /opt/VRTSvcs/bin/halogin z_usa0300uz1333_usa0300ux592 hclr00t
halogin generates .vcspwd in the root account's home directory, but this file needs to be copied to / as well; otherwise the engine logs will show hazonesetup/passwordless-authentication messages during failover.
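A sketch of that copy, run inside the zone, assuming root's home directory is /root as in the example below:
cp -p /root/.vcspwd /.vcspwd
ls -l /root/.vcspwd /.vcspwd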

Example: root's home directory is /root, so when I ran the halogin command inside the zone it created the .vcspwd file under /root:
Login name: root                        In real life: Super-User
Directory: /root                        Shell: /sbin/sh
On since Feb 18 08:35:52 on pts/3 from zone:global
No unread mail
No Plan.
bash-3.2#
bash-3.2# ls -alrt
total 37
-rw-------   1 root  root   126 Feb 18 04:57 .vcspwd
-rw-------   1 root  root  7452 Feb 18 04:58 .bash_history
drwxr-xr-x  22 root  root    25 Feb 18 05:08 ..
bash-3.2# more .vcspwd
100 usa0300ux593 z_usa0300uz1332_usa0300ux593 bkkLkhJqlFheGelH
100 usa0300ux592 z_usa0300uz1332_usa0300ux592 IppQpmOvqKmjLjqM
bash-3.2#
(The first entry was added while the zone was on physical server ux593, after running halogin with VCS_HOST set there; the second entry was added in the same way while the zone was on ux592.)
So I copied this file to / and tested the failover again; it completed without messages, because VCS looks for this file under / for the zone's root account.
If halogin is not set up on both physical servers it does not cause a functional problem, but the messages will keep appearing during failover.
# ls -l /.vcspwd (does the file exist?)
# cat /.vcspwd (if it exists, check how many lines it contains; it should be 2)
# ls -l /etc/VRTSvcs/.vcshost (does the file exist?)
# VCS_HOST=server2
# export VCS_HOST
# /opt/VRTSvcs/bin/halogin z_yourZONE_server2 <password-you-set-before>

In the example above, that would be:
# /opt/VRTSvcs/bin/halogin z_yourZONE_server2 abc123

Below is another example of a zone SG, this time with mount points inside the zone, for which you also need to create Mount resources.

If you have IPMP at the OS level, you should create an IPMP group for the public network under VCS, so you can put the zone's public IP address under VCS instead of placing the IP address in the zone configuration. I followed the BUR IPMP setup under VCS and did not put it in the zone config; the public network can be handled in the same way.

hagrp -add zone_uz1334


hagrp -modify zone_uz1334 SystemList usa0300ux592 0 usa0300ux593 1
hagrp -modify zone_uz1334 AutoStartList usa0300ux593 usa0300ux592
hagrp -modify zone_uz1334 Parallel 0
hares -add zone_uz1334_root Zpool zone_uz1334
hares -modify zone_uz1334_root Critical 1
hares -modify zone_uz1334_root Enabled 1
hares -modify zone_uz1334_root ChkZFSMounts 1
hares -modify zone_uz1334_root FailMode continue
hares -modify zone_uz1334_root ForceOpt 1
hares -modify zone_uz1334_root ForceRecoverOpt 0
hares -modify zone_uz1334_root PoolName uz1334_zone
hares -modify zone_uz1334_root AltRootPath /
hares -modify zone_uz1334_root DeviceDir -delete -keys
hazonesetup -g zone_uz1334 -r usa0300uz1334 -z usa0300uz1334 -p
hclr00t -a -s usa0300ux592,usa0300ux593
Again, we need to run the command below on the ux593 server:
hazonesetup -g zone_uz1334 -r usa0300uz1334 -z usa0300uz1334 -p
hclr00t -a -s usa0300ux593
hares -modify usa0300uz1334 BootState multi-user-server
hares -link usa0300uz1334 zone_uz1334_root

Log in to the other physical server and run:
hazonesetup -g zone_uz1334 -r usa0300uz1334 -z usa0300uz1334 -p hclr00t -a -s server2 (ux592)
Then run halogin inside the zone on both physical servers with the corresponding username and password, and copy .vcspwd to / as described earlier.

Before adding the external mount point to the zone, make sure the corresponding mount point directory exists inside the zone (/apps/oracle), and run export / import -R / export on the associated zpool (the oracle pool) on both nodes.
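A preparation sketch for this step (zone and pool names from this example):
zlogin usa0300uz1334 mkdir -p /apps/oracle
zpool export uz1334_oracle
zpool import -R / uz1334_oracle
zpool export uz1334_oracle (repeat the import/export on the other node)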

hares -add ora_uz1334_pool Zpool zone_uz1334


hares -modify ora_uz1334_pool Critical 0
hares -modify ora_uz1334_pool PoolName uz1334_oracle
hares -modify ora_uz1334_pool AltRootPath "/"
hares -modify ora_uz1334_pool ChkZFSMounts 1
hares -modify ora_uz1334_pool FailMode continue
hares -modify ora_uz1334_pool ForceOpt 1
hares -modify ora_uz1334_pool ForceRecoverOpt 0
hares -modify ora_uz1334_pool Enabled 1
hares -add uz1334_apps_ora_mnt Mount zone_uz1334
hares -modify uz1334_apps_ora_mnt Critical 0
hares -modify uz1334_apps_ora_mnt MountPoint
"/export/zones/usa0300uz1334/root/apps/oracle"
hares -modify uz1334_apps_ora_mnt BlockDevice "uz1334_oracle/apps"
hares -modify uz1334_apps_ora_mnt FSType zfs
hares -modify uz1334_apps_ora_mnt CkptUmount 1
hares -modify uz1334_apps_ora_mnt RecursiveMnt 0
hares -modify uz1334_apps_ora_mnt VxFSMountLock 1
hares -modify uz1334_apps_ora_mnt Enabled 1
hares -link uz1334_apps_ora_mnt ora_uz1334_pool
hares -link ora_uz1334_pool usa0300uz1334
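After these links, the dependency entries in main.cf should read roughly as follows (same "requires" notation as the earlier dependency tree example):
uz1334_apps_ora_mnt requires ora_uz1334_pool
ora_uz1334_pool requires usa0300uz1334
usa0300uz1334 requires zone_uz1334_root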

The Symantec case that helped with this setup (all details can also be found in the case): Symantec Case 05890758.

MultiNICB (IPMP) setup
hagrp -add Network_Adapters
hagrp -modify Network_Adapters SystemList usa0300ux594 0
usa0300ux595 1
hagrp -modify Network_Adapters Parallel 1
hagrp -modify Network_Adapters AutoStartList usa0300ux594
usa0300ux595
hares -add IPMP_BUR MultiNICB Network_Adapters
hares -add IPMP_BUR_Phantom Phantom Network_Adapters

hares -modify IPMP_BUR Critical 0


hares -modify IPMP_BUR UseMpathd 1
hares -modify IPMP_BUR ConfigCheck 0
hares -modify IPMP_BUR Device igb0 0 igb2 1
hares -modify IPMP_BUR GroupName BUR
hares -modify IPMP_BUR MpathdCommand "/usr/lib/inet/in.mpathd"
hares -modify IPMP_BUR MpathdRestart 1
hares -modify IPMP_BUR LinkTestRatio 1
hares -modify IPMP_BUR IgnoreLinkStatus 1
hares -modify IPMP_BUR NetworkTimeout 100
hares -modify IPMP_BUR OnlineTestRepeatCount 3

hares -modify IPMP_BUR OfflineTestRepeatCount 3


hares -modify IPMP_BUR DefaultRouter "0.0.0.0"
hares -modify IPMP_BUR Protocol IPv4
hares -modify IPMP_BUR Enabled 1

hares -modify IPMP_BUR_Phantom Critical 0


hares -modify IPMP_BUR_Phantom Enabled 1
hares -link IPMP_BUR_Phantom IPMP_BUR

IPMP resource assignment for BUR under the zone


hares -add zone_uz1336_bur IPMultiNICB zone_uz1336
hares -modify zone_uz1336_bur Critical 0
hares -modify zone_uz1336_bur BaseResName IPMP_BUR
hares -modify zone_uz1336_bur Address "10.1.124.84"
hares -modify zone_uz1336_bur NetMask "255.255.0.0"
hares -modify zone_uz1336_bur DeviceChoice 0
hares -modify zone_uz1336_bur Enabled 1
hares -link zone_uz1336_bur usa0300uz1336

It is better to set ToleranceLimit to 1 for the MultiNICB (IPMP) resource type to avoid an SG failover during an IPMP failover. Use hatype to change it (shown further below).

Recommendation:

The reason for the mpathd failure was that the network was unavailable for 13 seconds. In a perfect world the network should not be unavailable, but since we have seen this happen more than once, we could give mpathd more time to recover.
Current attributes:
-----------------------
MultiNICB  MonitorInterval  10
MultiNICB  ToleranceLimit    0
-----------------------

- We can increase the ToleranceLimit to 2 so that it gives more time before it declares the resource faulted.
The side effect of increasing it too much is that it will wait longer before it can detect an actual failure.
ToleranceLimit prolongs the monitoring cycles before declaring a failure.
We may also increase the ToleranceLimit for IPMultiNICB.
Current attributes:
-----------------------
IPMultiNICB  MonitorInterval  30
IPMultiNICB  ToleranceLimit    1
-----------------------

Please feel free to let me know if you need further information/clarification on this.

MonitorInterval means the agent monitors the resource every 30 seconds, i.e. twice a minute.
ToleranceLimit means that when the NIC faults, the agent waits for that many additional monitor cycles before declaring the resource faulted.
To change these values you need to use hatype.
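As a rough rule of thumb (an approximation, not a vendor formula), the time before a fault is declared is about (ToleranceLimit + 1) x MonitorInterval:
MultiNICB with MonitorInterval 10 and ToleranceLimit 2: about (2 + 1) x 10s = 30s
IPMultiNICB with MonitorInterval 30 and ToleranceLimit 1: about (1 + 1) x 30s = 60s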

bash-3.2$ sudo hatype -display MultiNICB|grep -i limit
MultiNICB  CleanRetryLimit   0
MultiNICB  OfflineWaitLimit  0
MultiNICB  OnlineRetryLimit  0
MultiNICB  OnlineWaitLimit   2
MultiNICB  RestartLimit      0
MultiNICB  ToleranceLimit    0

bash-3.2$ sudo hatype -display MultiNICB|grep -i interval
MultiNICB  ConfInterval            600
MultiNICB  InfoInterval            0
MultiNICB  MonitorInterval         10
MultiNICB  OfflineMonitorInterval  60


bash-3.2$ sudo hatype -display IPMultiNICB|grep -i interval
IPMultiNICB  ConfInterval            600
IPMultiNICB  InfoInterval            0
IPMultiNICB  MonitorInterval         30
IPMultiNICB  OfflineMonitorInterval  300


bash-3.2$ uname -a
SunOS usa0300ux593 5.10 Generic_150400-07 sun4v sparc sun4v
bash-3.2$
bash-3.2$
bash-3.2$ sudo hatype -display MultiNICB|grep -i limit
bash-3.2$ sudo hatype -display IPMultiNICB|grep -i limit
IPMultiNICB  CleanRetryLimit   0
IPMultiNICB  OfflineWaitLimit  0
IPMultiNICB  OnlineRetryLimit
IPMultiNICB  OnlineWaitLimit
IPMultiNICB  RestartLimit
IPMultiNICB  ToleranceLimit

bash-3.2$ sudo hatype -display MultiNICB|grep -i limit
Password:
MultiNICB  CleanRetryLimit   0
MultiNICB  OfflineWaitLimit  0
MultiNICB  OnlineRetryLimit
MultiNICB  OnlineWaitLimit
MultiNICB  RestartLimit
MultiNICB  ToleranceLimit
bash-3.2$ sudo hatype -modify MultiNICB ToleranceLimit 1


VCS WARNING V-16-1-11335 Configuration must be ReadWrite : Use haconf -makerw
bash-3.2$ sudo haconf -makerw
bash-3.2$ sudo hatype -modify MultiNICB ToleranceLimit 1
bash-3.2$ sudo hatype -display MultiNICB|grep -i limit
MultiNICB  CleanRetryLimit
MultiNICB  OfflineWaitLimit
MultiNICB  OnlineRetryLimit
MultiNICB  OnlineWaitLimit
MultiNICB  RestartLimit
MultiNICB  ToleranceLimit    1

Adding the other SGs

hagrp -add zone_uz1338


hagrp -modify zone_uz1338 SystemList usa0300ux594 0 usa0300ux595 1
hagrp -modify zone_uz1338 AutoStartList usa0300ux594 usa0300ux595
hagrp -modify zone_uz1338 Parallel 0
hares -add zone_uz1338_root Zpool zone_uz1338
hares -modify zone_uz1338_root Critical 1
hares -modify zone_uz1338_root Enabled 1
hares -modify zone_uz1338_root ChkZFSMounts 1
hares -modify zone_uz1338_root FailMode continue
hares -modify zone_uz1338_root ForceOpt 1
hares -modify zone_uz1338_root ForceRecoverOpt 0
hares -modify zone_uz1338_root PoolName uz1338_zone
hares -modify zone_uz1338_root AltRootPath /
hares -modify zone_uz1338_root DeviceDir -delete -keys
hazonesetup -g zone_uz1338 -r usa0300uz1338 -z usa0300uz1338 -p
hclr00t -a -s usa0300ux594
hazonesetup -g zone_uz1338 -r usa0300uz1338 -z usa0300uz1338 -p
hclr00t -a -s usa0300ux595
hares -modify usa0300uz1338 BootState multi-user-server
hares -link usa0300uz1338 zone_uz1338_root
hares -add ora_uz1338_pool Zpool zone_uz1338
hares -modify ora_uz1338_pool Critical 0
hares -modify ora_uz1338_pool Enabled 1
hares -modify ora_uz1338_pool PoolName uz1338_oracle

hares -modify ora_uz1338_pool AltRootPath "/"


hares -modify ora_uz1338_pool ChkZFSMounts 1
hares -modify ora_uz1338_pool FailMode continue
hares -modify ora_uz1338_pool ForceOpt 1
hares -modify ora_uz1338_pool ForceRecoverOpt 0
hares -add uz1338_apps_ora_mnt Mount zone_uz1338
hares -modify uz1338_apps_ora_mnt Critical 0
hares -modify uz1338_apps_ora_mnt MountPoint
"/export/zones/usa0300uz1338/root/apps/oracle"
hares -modify uz1338_apps_ora_mnt BlockDevice "uz1338_oracle/apps"
hares -modify uz1338_apps_ora_mnt FSType zfs
hares -modify uz1338_apps_ora_mnt CkptUmount 1
hares -modify uz1338_apps_ora_mnt RecursiveMnt 0
hares -modify uz1338_apps_ora_mnt Enabled 1
hares -link uz1338_apps_ora_mnt ora_uz1338_pool
hares -link ora_uz1338_pool usa0300uz1338

hares -add zone_uz1338_bur IPMultiNICB zone_uz1338


hares -modify zone_uz1338_bur Critical 0
hares -modify zone_uz1338_bur BaseResName IPMP_BUR
hares -modify zone_uz1338_bur Address "10.1.126.84"
hares -modify zone_uz1338_bur NetMask "255.255.0.0"
hares -modify zone_uz1338_bur DeviceChoice 0
hares -modify zone_uz1338_bur Enabled 1
hares -link zone_uz1338_bur usa0300uz1338

hagrp -add zone_uz1339


hagrp -modify zone_uz1339 SystemList usa0300ux594 0 usa0300ux595 1
hagrp -modify zone_uz1339 AutoStartList usa0300ux594 usa0300ux595
hagrp -modify zone_uz1339 Parallel 0
hares -add zone_uz1339_root Zpool zone_uz1339
hares -modify zone_uz1339_root Critical 1
hares -modify zone_uz1339_root Enabled 1
hares -modify zone_uz1339_root ChkZFSMounts 1
hares -modify zone_uz1339_root FailMode continue
hares -modify zone_uz1339_root ForceOpt 1
hares -modify zone_uz1339_root ForceRecoverOpt 0
hares -modify zone_uz1339_root PoolName uz1339_zone
hares -modify zone_uz1339_root AltRootPath /
hares -modify zone_uz1339_root DeviceDir -delete -keys
hazonesetup -g zone_uz1339 -r usa0300uz1339 -z usa0300uz1339 -p
hclr00t -a -s usa0300ux594
hazonesetup -g zone_uz1339 -r usa0300uz1339 -z usa0300uz1339 -p
hclr00t -a -s usa0300ux595
hares -modify usa0300uz1339 BootState multi-user-server
hares -link usa0300uz1339 zone_uz1339_root
hares -add ora_uz1339_pool Zpool zone_uz1339
hares -modify ora_uz1339_pool Critical 0
hares -modify ora_uz1339_pool Enabled 1
hares -modify ora_uz1339_pool PoolName uz1339_oracle
hares -modify ora_uz1339_pool AltRootPath "/"
hares -modify ora_uz1339_pool ChkZFSMounts 1

hares -modify ora_uz1339_pool FailMode continue


hares -modify ora_uz1339_pool ForceOpt 1
hares -modify ora_uz1339_pool ForceRecoverOpt 0
hares -add uz1339_apps_ora_mnt Mount zone_uz1339
hares -modify uz1339_apps_ora_mnt Critical 0
hares -modify uz1339_apps_ora_mnt MountPoint
"/export/zones/usa0300uz1339/root/apps/oracle"
hares -modify uz1339_apps_ora_mnt BlockDevice "uz1339_oracle/apps"
hares -modify uz1339_apps_ora_mnt FSType zfs
hares -modify uz1339_apps_ora_mnt CkptUmount 1
hares -modify uz1339_apps_ora_mnt RecursiveMnt 0
hares -modify uz1339_apps_ora_mnt Enabled 1
hares -link uz1339_apps_ora_mnt ora_uz1339_pool
hares -link ora_uz1339_pool usa0300uz1339

hares -add zone_uz1339_bur IPMultiNICB zone_uz1339


hares -modify zone_uz1339_bur Critical 0
hares -modify zone_uz1339_bur BaseResName IPMP_BUR
hares -modify zone_uz1339_bur Address "10.1.127.84"
hares -modify zone_uz1339_bur NetMask "255.255.0.0"
hares -modify zone_uz1339_bur DeviceChoice 0
hares -modify zone_uz1339_bur Enabled 1
hares -link zone_uz1339_bur usa0300uz1339

hagrp -add zone_uz1340


hagrp -modify zone_uz1340 SystemList usa0300ux594 0 usa0300ux595 1

hagrp -modify zone_uz1340 AutoStartList usa0300ux594 usa0300ux595


hagrp -modify zone_uz1340 Parallel 0
hares -add zone_uz1340_root Zpool zone_uz1340
hares -modify zone_uz1340_root Critical 1
hares -modify zone_uz1340_root Enabled 1
hares -modify zone_uz1340_root ChkZFSMounts 1
hares -modify zone_uz1340_root FailMode continue
hares -modify zone_uz1340_root ForceOpt 1
hares -modify zone_uz1340_root ForceRecoverOpt 0
hares -modify zone_uz1340_root PoolName uz1340_zone
hares -modify zone_uz1340_root AltRootPath /
hares -modify zone_uz1340_root DeviceDir -delete -keys
hazonesetup -g zone_uz1340 -r usa0300uz1340 -z usa0300uz1340 -p
hclr00t -a -s usa0300ux594
hazonesetup -g zone_uz1340 -r usa0300uz1340 -z usa0300uz1340 -p
hclr00t -a -s usa0300ux595
hares -modify usa0300uz1340 BootState multi-user-server
hares -link usa0300uz1340 zone_uz1340_root
hares -add ora_uz1340_pool Zpool zone_uz1340
hares -modify ora_uz1340_pool Critical 0
hares -modify ora_uz1340_pool Enabled 1
hares -modify ora_uz1340_pool PoolName uz1340_oracle
hares -modify ora_uz1340_pool AltRootPath "/"
hares -modify ora_uz1340_pool ChkZFSMounts 1
hares -modify ora_uz1340_pool FailMode continue
hares -modify ora_uz1340_pool ForceOpt 1
hares -modify ora_uz1340_pool ForceRecoverOpt 0

hares -add uz1340_apps_ora_mnt Mount zone_uz1340


hares -modify uz1340_apps_ora_mnt Critical 0
hares -modify uz1340_apps_ora_mnt MountPoint
"/export/zones/usa0300uz1340/root/apps/oracle"
hares -modify uz1340_apps_ora_mnt BlockDevice "uz1340_oracle/apps"
hares -modify uz1340_apps_ora_mnt FSType zfs
hares -modify uz1340_apps_ora_mnt CkptUmount 1
hares -modify uz1340_apps_ora_mnt RecursiveMnt 0
hares -modify uz1340_apps_ora_mnt Enabled 1
hares -link uz1340_apps_ora_mnt ora_uz1340_pool
hares -link ora_uz1340_pool usa0300uz1340

hares -add zone_uz1340_bur IPMultiNICB zone_uz1340


hares -modify zone_uz1340_bur Critical 0
hares -modify zone_uz1340_bur BaseResName IPMP_BUR
hares -modify zone_uz1340_bur Address "10.1.128.84"
hares -modify zone_uz1340_bur NetMask "255.255.0.0"
hares -modify zone_uz1340_bur DeviceChoice 0
hares -modify zone_uz1340_bur Enabled 1
hares -link zone_uz1340_bur usa0300uz1340
