
Zone creation

zonecfg -z <zonename>
> create -b                         (creates a whole-root zone)
> set zonepath=<path>
> set autoboot=<true|false>
> add net
net> set address=192.168.1.1
net> set physical=eri0
net> end
> verify
> commit

--------------- adding lofs devices
> add fs
fs> set dir=/opt/sfw
fs> set special=/opt/websphere
fs> set type=lofs
fs> set options=rw
fs> end
> verify
> commit

------------------- adding UFS/VxFS
> add fs
fs> set dir=/opt/test
fs> set special=/dev/dsk/c0t0d0s3      (VxFS: /dev/vx/dsk/dgname/vol)
fs> set raw=/dev/rdsk/c0t0d0s3         (VxFS: /dev/vx/rdsk/dgname/vol)
fs> set type=ufs                       (or vxfs)
fs> set options=logging
fs> end
> verify
> commit

------------------------- adding a physical device
> add device
device> set match=/dev/dsk/c0t0d0s5
device> end
> add device
device> set match=/dev/rdsk/c0t0d0s5
device> end

-------------------------- lofiadm devices
global# mkfile 1g /mystuff/myfile
global# lofiadm -a /mystuff/myfile
global# zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/rlofi/1
zonecfg:my-zone:device> end

------------------------------- inherited package directory
zonecfg:app1> add inherit-pkg-dir
zonecfg:app1:inherit-pkg-dir> set dir=/opt/sfw
zonecfg:app1:inherit-pkg-dir> end

------------------------------------- Zone daemons
Daemon      Function
zoneadmd    Responsible for booting and shutting down zones
zsched      Keeps track of kernel threads belonging to zones

------------------------------------------------ Zone software package parameters

SUNW_PKG_ALLZONES    Determines the type of zone in which a package can be installed
SUNW_PKG_HOLLOW      Determines the visibility of the package in a zone
SUNW_PKG_THISZONE    Determines if the package must be installed in the current zone only

--------------------------------------------------- Zone deletion steps
zoneadm -z zonename halt
zoneadm -z zonename uninstall -F
zonecfg -z zonename delete -F

capped-memory resource usage example
Specify the memory limits for the zone my-zone. Each limit is optional, but at least one must be set.
zonecfg:my-zone> add capped-memory
zonecfg:my-zone:capped-memory> set physical=50m
zonecfg:my-zone:capped-memory> set swap=100m
zonecfg:my-zone:capped-memory> set locked=30m
zonecfg:my-zone:capped-memory> end

dedicated-cpu resource usage example
The dedicated-cpu resource sets limits on the number of CPUs and, optionally, the relative importance of the pool. The following example specifies a CPU range for use by the zone my-zone.
zonecfg:my-zone> add dedicated-cpu
zonecfg:my-zone:dedicated-cpu> set ncpus=1-3
zonecfg:my-zone:dedicated-cpu> set importance=2
zonecfg:my-zone:dedicated-cpu> end

****************************** migrating a zone to a different host
solaris2# zonecfg -z vicky export > /tmp/export.config
solaris2# cat /tmp/export.config
create -b
set zonepath=/export/vicky
set autoboot=false
set ip-type=shared
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=192.168.1.15
set physical=pcn0
end

zoneadm -z zonename halt
zoneadm -z zonename detach -n        (-n generates the manifest without detaching; omit it to detach for real)

Re-create the zone configuration on the target host, then attach:
zonecfg -z zonename -f /tmp/export.config
zoneadm -z zonename attach -n        (-n = dry run; omit it to perform the attach)
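Before the attach on the target host, the zonepath from the exported configuration must already exist. A minimal sketch, assuming an export file shaped like the one above (the awk helper and file name here are illustrative, not part of the zoneadm toolset):

```shell
# Sketch: pull the zonepath out of an exported zone config so it can be
# pre-created on the target host before "zoneadm -z <zone> attach".
# The sample config below mirrors the export shown above.
cat > export.config <<'EOF'
create -b
set zonepath=/export/vicky
set autoboot=false
set ip-type=shared
EOF

# Everything after the "=" on the "set zonepath" line is the path.
zonepath=$(awk -F= '/^set zonepath=/ {print $2}' export.config)
echo "zonepath is $zonepath"

# On the real target host you would then run (as root):
#   mkdir -p "$zonepath" && chmod 700 "$zonepath"
#   zonecfg -z vicky -f export.config
#   zoneadm -z vicky attach -n      # dry run first, then attach for real
```

The commented commands at the end are Solaris-only and are left as comments so the extraction step itself can be tested anywhere.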

Solaris Live Upgrade is a mechanism to copy the currently running system onto new slices. When non-global zones are installed, they can be copied to the inactive boot environment along with the global zone's file systems.

In this example of a system with a single disk, the root (/) file system is copied to c0t0d0s4. All non-global zones that are associated with the file system are also copied to s4. The /export and /swap file systems are shared between the current boot environment, bootenv1, and the inactive boot environment, bootenv2. The lucreate command is:

# lucreate -c bootenv1 -m /:/dev/dsk/c0t0d0s4:ufs -n bootenv2

In this example of a system with two disks, the root (/) file system is copied to c0t1d0s0. All non-global zones that are associated with the file system are also copied to s0. The /export and /swap file systems are shared between bootenv1 and bootenv2. The lucreate command is:

# lucreate -c bootenv1 -m /:/dev/dsk/c0t1d0s0:ufs -n bootenv2

In this example of a system with a single disk, the root (/) file system is copied to c0t0d0s4, and all non-global zones associated with it are copied to s4. The non-global zone zone1 has a separate file system that was created by the zonecfg add fs command; the zone path is /zone1/root/export. To prevent this file system from being shared by the inactive boot environment, it is placed on a separate slice, c0t0d0s6. The /export and /swap file systems are shared between bootenv1 and bootenv2. The lucreate command is:

# lucreate -c bootenv1 -m /:/dev/dsk/c0t0d0s4:ufs \
  -m /export:/dev/dsk/c0t0d0s6:ufs:zone1 -n bootenv2

In this example of a system with two disks, the root (/) file system is copied to c0t1d0s0, and all non-global zones associated with it are copied to s0. The non-global zone zone1 has a separate file system created by the zonecfg add fs command; the zone path is /zone1/root/export. To prevent this file system from being shared by the inactive boot environment, it is placed on a separate slice, c0t1d0s4. The /export and /swap file systems are shared between bootenv1 and bootenv2. The lucreate command is:

# lucreate -c bootenv1 -m /:/dev/dsk/c0t1d0s0:ufs \
  -m /export:/dev/dsk/c0t1d0s4:ufs:zone1 -n bootenv2

Create Alternate Boot Device - SVM

Note that when a file system is not specified in the lucreate command (including an SVM metadevice file system), it is assumed to be shared. Make sure that the alternate boot disk has the same partition layout and has been labeled.

1. Make sure that the partition layout is the same:
# prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c0d1s2

2. Create the OS image with the same FS layout; have lucreate split the mirror for you:
# lucreate -n abe \
  -m /:/dev/md/dsk/d200:ufs,mirror \
  -m /:/dev/dsk/c0d1s0:detach,attach,preserve \
  -m /var:/dev/md/dsk/d210:ufs,mirror \
  -m /var:/dev/dsk/c0d1s3:detach,attach,preserve \
  -m /zones:/dev/md/dsk/d220:ufs,mirror \
  -m /zones:/dev/dsk/c0d1s5:detach,attach,preserve \
  -m /export:/dev/md/dsk/d230:ufs,mirror \
  -m /export:/dev/dsk/c0d1s4:detach,attach,preserve

Patching, adding packages, setting the boot environment, and installation examples

Warning: when adding patches to an ABE, bad patch-script permissions can prevent a patch from being added; look for permission errors mentioning files such as:
/var/sadm/spool/lu/120273-25/postpatch
A simple chmod will fix this and allow the patch installation; it is recommended to script a check before adding patches.

1. PATCHING - for Solaris 10, '*' works out the patch order; otherwise a patch_order file can be passed:
# luupgrade -t -n abe -s /var/tmp/patches '*'

2. PATCHING - for pre-Solaris 10 systems needing a patch order file:
# luupgrade -t -n abe -s /path/to/patches -O "-M /path/to/patch patch_order_list"

3. Adding additional packages to the alternate boot environment:
# luupgrade -p -n abe -s /export/packages MYpkg

4. Removing packages from the ABE:
# luupgrade -P -n abe MYpkg

5. Mounting the alternate boot environment for modifications:
# lumount abe /mnt

6. Unmounting the alternate boot environment:
# luumount abe

7. Enabling the ABE:
# luactivate abe

8. Showing boot environment status:
# lustatus
Boot Environment   Is        Active  Active     Can     Copy
Name               Complete  Now     On Reboot  Delete  Status
-----------------  --------  ------  ---------  ------  ---------
disk_a_S7          yes       yes     yes        no      -
disk_b_S7db        yes       no      no         no      UPGRADING
disk_b_S8          no        no      no         no      -
S9testbed          yes       no      no         yes     -

9. File system merger example. Instead of using the preceding command to create the alternate boot environment so it matches the current boot environment, the following command joins / and /usr, assuming that c0t3d0s0 is partitioned with sufficient space:
# lucreate -c "Solaris_8" -m /:/dev/dsk/c0t3d0s0:ufs \
  -m /usr:merged:ufs -m /var:/dev/dsk/c0t3d0s4:ufs \
  -n "Solaris_9"

10. Example with a patch order file:
# luupgrade -t -n "Solaris_9" \
  -s /install/data/patches/SunOS-5.9-sparc/recommended -O \
  "-M /install/data/patches/SunOS-5.9-sparc/recommended patch_order"

11. Example with a split-off. This next example would instead split /opt off of /, assuming that c0t3d0s5 is partitioned with sufficient space:
# lucreate -c "Solaris_8" -m /:/dev/dsk/c0t3d0s0:ufs \
  -m /usr:/dev/dsk/c0t3d0s3:ufs -m /var:/dev/dsk/c0t3d0s4:ufs \
  -m /opt:/dev/dsk/c0t3d0s5:ufs -n "Solaris_9"

12. Using luupgrade to upgrade from a JumpStart server. This example upgrades the existing Solaris 8 alternate boot environment to Solaris 9 by means of an NFS-mounted JumpStart installation. First create a JumpStart installation from CD-ROM, DVD, or an ISO image as covered in the Solaris 9 Installation Guide. The JumpStart installation in this example resides in /install on the server js-server; the OS image itself resides in /install/cdrom/SunOS-5.9-sparc. The profiles dwell in /install/jumpstart/profiles/ in a subdirectory called liveupgrade. Within this directory, the file js-upgrade contains the JumpStart profile to upgrade the OS and additionally install the package SUNWxwice:

install_type upgrade
package SUNWxwice add

On the target machine, mount the /install partition from js-server and run luupgrade, specifying the alternate boot environment as the target, the OS image location, and the JumpStart profile:
# mkdir /install
# mount -o ro js-server:/install /install
# luupgrade -u -n "Sol_9" -s /install/cdrom/SunOS-5.9-sparc \
  -j /install/jumpstart/profiles/liveupgrade/js-upgrade
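The pre-patch permission check recommended in the warning above can be scripted. A hedged sketch: the scratch directory and patch ID below are stand-ins so the logic can be exercised anywhere; on a live Solaris system you would point SPOOL at /var/sadm/spool/lu and review the report before letting it chmod anything.

```shell
# Sketch: find pre/post-patch scripts under a Live Upgrade spool directory
# that are missing their execute bit, report them, and restore the bit.
SPOOL=./spool-lu                           # stand-in for /var/sadm/spool/lu
mkdir -p "$SPOOL/120273-25"
touch "$SPOOL/120273-25/postpatch"
chmod 600 "$SPOOL/120273-25/postpatch"     # simulate the bad permissions

# Print each offending script, then chmod it so patching can proceed.
find "$SPOOL" -type f \( -name prepatch -o -name postpatch \) \
    ! -perm -u+x -print -exec chmod u+x {} \;
```

Run with -print alone first if you only want the report without the repair.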
