First, on the physical system (phys in the examples below), create a ZFS snapshot of the whole root pool.

1. Share space on an NFS server. It can be Linux, another Solaris, or even my new friend MobaXterm (16 MB of concentrated Unix technology, worth a try).

2. Mount that space to make it available on phys:

      phys# mount nfsserver:/backup /mnt

3. Create a snapshot of the root pool. Recursive snapshots work on releases after 2008, but not here, so we prepare each snapshot individually.
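As an aside: on a release recent enough to support recursive snapshots, the whole per-dataset preparation below collapses into a single command. A minimal sketch, assuming such a release and the dataset names of our example system:

```shell
# Assumption: a Solaris release whose zfs(1M) supports recursive snapshots.
# One command atomically snapshots rpool and every descendant dataset:
zfs snapshot -r rpool@fred

# And one command cleans them all up afterwards:
# zfs destroy -r rpool@fred
```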
4. We prepare all the snapshots. Generate the commands with awk:

      phys# zfs list | awk '{print "zfs snapshot "$1"@fred"}'
      zfs snapshot rpool@fred
      zfs snapshot rpool/ROOT@fred
      zfs snapshot rpool/ROOT/s10@fred
      zfs snapshot rpool/dump@fred
      zfs snapshot rpool/export@fred
      zfs snapshot rpool/export/home@fred
      zfs snapshot rpool/export/home/admin@fred
      zfs snapshot rpool/swap@fred

5. We cut and paste those lines (skipping the line built from the zfs list header) and check creation:

      phys# zfs list
      NAME                           USED  AVAIL  REFER  MOUNTPOINT
      rpool                         17.6G   116G    67K  /rpool
      rpool@fred                        0      -    67K  -
      rpool/ROOT                    5.43G   116G    21K  legacy
      rpool/ROOT@fred                   0      -    21K  -
      rpool/ROOT/s10                5.43G   116G  4.68G  /
      rpool/ROOT/s10@fred            290K      -  4.68G  -
      rpool/dump                    4.00G   116G  4.00G  -
      rpool/dump@fred                   0      -  4.00G  -
      rpool/export                  69.5K   116G    23K  /export
      rpool/export@fred                 0      -    23K  -
      rpool/export/home             46.5K   116G    23K  /export/home
      rpool/export/home@fred            0      -    23K  -
      rpool/export/home/admin       23.5K   116G  23.5K  /export/home/admin
      rpool/export/home/admin@fred      0      -  23.5K  -
      rpool/swap                    8.20G   124G  15.2M  -
      rpool/swap@fred                   0      -  15.2M  -

6. We can now prepare the backup of the file systems:

      phys# zfs list | awk '{print "zfs send "$1"@fred > /mnt/"$1}' > run.sh
      phys# vi run.sh

   Before running it, we need to replace / with _ in the target file names (do not compress the streams: a minimal Solaris setup will not be able to restore them), and remove the useless dump and swap file systems, which will be recreated at restore time. The edited script:

      zfs send rpool@fred > /mnt/rpool
      zfs send rpool/ROOT@fred > /mnt/rpool_ROOT
      zfs send rpool/ROOT/s10@fred > /mnt/rpool_ROOT_s10
      zfs send rpool/export@fred > /mnt/rpool_export
      zfs send rpool/export/home@fred > /mnt/rpool_export_home
      zfs send rpool/export/home/admin@fred > /mnt/rpool_export_home_admin

7. We make the script runnable and run it:

      phys# chmod 755 run.sh
      phys# ./run.sh

8. We can also prepare the restore to come.
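The manual edit of run.sh can also be scripted. A minimal sketch, assuming the same dataset names: the awk program replaces / with _ in the target file name and drops the dump and swap volumes. The demo feeds it a fixed list; on the real system you would pipe in `zfs list -H -o name` instead (which also avoids the header line):

```shell
# awk program: skip dump/swap, turn each dataset name into a "zfs send"
# line whose target file name has every / replaced by _.
mangle='
  /rpool\/(dump|swap)$/ { next }
  { file = $1; gsub("/", "_", file)
    print "zfs send " $1 "@fred > /mnt/" file }'

# Demo with the dataset list of our example system:
printf 'rpool\nrpool/ROOT\nrpool/ROOT/s10\nrpool/dump\nrpool/swap\n' \
  | awk "$mangle"
# prints, e.g.: zfs send rpool/ROOT/s10@fred > /mnt/rpool_ROOT_s10
```

On phys, `zfs list -H -o name | awk "$mangle" > run.sh` would build the script directly, with no hand editing.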
9. Generate and edit the restore script:

      phys# zfs list | awk '{print "cat /mnt/"$1" | zfs receive "$1}' > restore.sh
      phys# vi restore.sh

      cat /mnt/rpool | zfs receive -F rpool
      cat /mnt/rpool_ROOT | zfs receive rpool/ROOT
      cat /mnt/rpool_ROOT_s10 | zfs receive rpool/ROOT/s10
      cat /mnt/rpool_export | zfs receive rpool/export
      cat /mnt/rpool_export_home | zfs receive rpool/export/home
      cat /mnt/rpool_export_home_admin | zfs receive rpool/export/home/admin

   Make it executable and store it with the backup:

      phys# chmod 755 restore.sh
      phys# mv restore.sh /mnt

10. To simplify the restore, we complete restore.sh with:

   A. Recreate the root pool (before the zfs receive commands). For example:

         # zpool create -f -o failmode=continue -R /a -m legacy rpool c1t0d0s0

   B. Install GRUB (after the zfs receive commands):

         # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

   C. Set the bootfs property on the root pool:

         # zpool set bootfs=rpool/ROOT/s10 rpool

   D. Recreate the dump device:

         # zfs create -V 2G rpool/dump

   E. Recreate the swap device:

         # zfs create -V 2G -b 4k rpool/swap

11. We are done, so we can clean up the snapshots:

      phys# zfs list | awk '{print "zfs destroy "$1"@fred"}'
      zfs destroy rpool@fred
      zfs destroy rpool/ROOT@fred
      zfs destroy rpool/ROOT/s10@fred
      zfs destroy rpool/dump@fred
      zfs destroy rpool/export@fred
      zfs destroy rpool/export/home@fred
      zfs destroy rpool/export/home/admin@fred
      zfs destroy rpool/swap@fred

   OK, after a cut and paste of those lines, we can now prepare the virtual version.

12. Connect to vCenter (or Workstation, or VirtualBox, you choose) and create a new Solaris 10 VM. Use an installation CD of the same Solaris version, or be ready to patch the system, or note the zpool version so you can create the pool at the proper version.

   A. During boot, use option 6 to boot the CD to a single user shell. Once on the shell, first restore the delete key (the system uses the US keyboard layout; be careful if that is not your native keyboard, and you will soon understand why I prepared the restore script):

         # stty erase ^H    (press the delete key)

   B. Then allow / to be modified:

         # mount -o rw,remount /
   C. Now connect to the network:

         # ifconfig -a plumb
         # ifconfig e1000g0 10.0.0.3/24 up

   D. And connect to the NFS server (10.0.0.2 in our example):

         # mount -F nfs 10.0.0.2:/backup /mnt

   E. We now need to create a Solaris fdisk partition that can be used for booting, by selecting 1=SOLARIS2. You can create one with the fdisk -B option, which creates a single Solaris partition that uses the whole disk. Beware that the following command uses the whole disk:

         # fdisk -B /dev/rdsk/c1t0d0p0

   F. Display the newly created Solaris partition. For example:

                   Total disk size is 8924 cylinders
                   Cylinder size is 16065 (512 byte) blocks

                                                Cylinders
            Partition   Status    Type        Start   End   Length    %
            =========   ======    ========    =====   ===   ======   ===
                1       Active    Solaris2        1  8923     8923   100
            .
            .
            .
         Enter Selection: 6

   G. Create a slice in the Solaris partition for the root pool. Creating the slice is similar on x86 and SPARC systems, except that an x86 system has a slice 8. In the example below, a slice 0 is created and all the disk space is allocated to it on an x86 system. On a SPARC system, just ignore the slice 8 line.

         # format
         Specify disk (enter its number): 1
         selecting c1t0d0
         [disk formatted]

         FORMAT MENU:
                 disk       - select a disk
                 type       - select (define) a disk type
                 partition  - select (define) a partition table
                 current    - describe the current disk
                 format     - format and analyze the disk
                 fdisk      - run the fdisk program
                 .
                 .
                 .
         format> p

         PARTITION MENU:
                 0      - change `0' partition
                 1      - change `1' partition
                 2      - change `2' partition
                 3      - change `3' partition
                 4      - change `4' partition
                 5      - change `5' partition
                 6      - change `6' partition
                 7      - change `7' partition
                 select - select a predefined table
                 modify - modify a predefined partition table
                 name   - name the current table
                 print  - display the current table
                 label  - write partition map and label to the disk
                 !<cmd> - execute <cmd>, then return
                 quit
         partition> p
         Current partition table (original):
         Total disk cylinders available: 8921 + 2 (reserved cylinders)

         Part      Tag    Flag     Cylinders        Size            Blocks
           0 unassigned    wm       0               0         (0/0/0)           0
           1 unassigned    wm       0               0         (0/0/0)           0
           2     backup    wu       0 - 8920       68.34GB    (8921/0/0) 143315865
           3 unassigned    wm       0               0         (0/0/0)           0
           4 unassigned    wm       0               0         (0/0/0)           0
           5 unassigned    wm       0               0         (0/0/0)           0
           6 unassigned    wm       0               0         (0/0/0)           0
           7 unassigned    wm       0               0         (0/0/0)           0
           8       boot    wu       0 - 0           7.84MB    (1/0/0)       16065
           9 unassigned    wm       0               0         (0/0/0)           0

         partition> modify
         Select partitioning base:
                 0. Current partition table (original)
                 1. All Free Hog
         Choose base (enter number) [0]? 1

         Part      Tag    Flag     Cylinders        Size            Blocks
           0       root    wm       0               0         (0/0/0)           0
           1       swap    wu       0               0         (0/0/0)           0
           2     backup    wu       0 - 8920       68.34GB    (8921/0/0) 143315865
           3 unassigned    wm       0               0         (0/0/0)           0
           4 unassigned    wm       0               0         (0/0/0)           0
           5 unassigned    wm       0               0         (0/0/0)           0
           6        usr    wm       0               0         (0/0/0)           0
           7 unassigned    wm       0               0         (0/0/0)           0
           8       boot    wu       0 - 0           7.84MB    (1/0/0)       16065
           9 alternates    wm       0               0         (0/0/0)           0

         Do you wish to continue creating a new partition
         table based on above table[yes]?
         Free Hog partition[6]? 0
         Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
         Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
         Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]:
         Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]:
         Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]:
         Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]:

         Part      Tag    Flag     Cylinders        Size            Blocks
           0       root    wm       1 - 8920       68.33GB    (8920/0/0) 143299800
           1       swap    wu       0               0         (0/0/0)           0
           2     backup    wu       0 - 8920       68.34GB    (8921/0/0) 143315865
           3 unassigned    wm       0               0         (0/0/0)           0
           4 unassigned    wm       0               0         (0/0/0)           0
           5 unassigned    wm       0               0         (0/0/0)           0
           6        usr    wm       0               0         (0/0/0)           0
           7 unassigned    wm       0               0         (0/0/0)           0
           8       boot    wu       0 - 0           7.84MB    (1/0/0)       16065
           9 alternates    wm       0               0         (0/0/0)           0

         Okay to make this the current partition table[yes]?
         Enter table name (remember quotes): "disk1"

         Ready to label disk, continue? yes

         partition> q
         format> q

   H. The disk is ready! We can go to the NFS mount and run the restore:

         # cd /mnt
         # ./restore.sh

   I. After a while we can reboot the system, AFTER moving the VM to another VLAN than the physical server:

         # init 6

   J. After the reboot, you will be in maintenance mode. You need to edit the /etc/path_to_inst file to remove the e1000g references if your physical server was running with that driver. Do not try vi yet:

         # grep -v e1000 /etc/path_to_inst > backup
         # cp /etc/path_to_inst /etc/path_to_inst.bak
         # cp backup /etc/path_to_inst

      Also run devfsadm to update all the device addresses:

         # devfsadm -Cv

   K. After a final reboot, if your system was not a minimal install, you will be able to install vmware-tools (or the equivalent) and do a last reboot. Last? Help yourself: make a VM snapshot, upgrade the system to the latest maintenance release of Solaris 10 (if you still have access to the Solaris support page), and upgrade the zpool and zfs versions to the latest.
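To recap the restore side: once the additions of step 10 are folded in, the completed restore.sh looks roughly like this. A sketch only, written as a here-document so you can regenerate the file; it assumes the example disk c1t0d0 and the 2 GB dump and swap sizes shown earlier, so adjust both to your system:

```shell
# Write the completed restore script; nothing here touches the system,
# the commands only run later, from the install CD shell on the new VM.
cat > restore.sh <<'EOF'
#!/bin/sh
# A. Recreate the root pool on the freshly sliced disk (before the receives)
zpool create -f -o failmode=continue -R /a -m legacy rpool c1t0d0s0

# Receive the saved streams, parents before children
cat /mnt/rpool | zfs receive -F rpool
cat /mnt/rpool_ROOT | zfs receive rpool/ROOT
cat /mnt/rpool_ROOT_s10 | zfs receive rpool/ROOT/s10
cat /mnt/rpool_export | zfs receive rpool/export
cat /mnt/rpool_export_home | zfs receive rpool/export/home
cat /mnt/rpool_export_home_admin | zfs receive rpool/export/home/admin

# B. Install GRUB on the new disk (after the receives)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

# C. Set the bootfs property on the root pool
zpool set bootfs=rpool/ROOT/s10 rpool

# D/E. Recreate the dump and swap devices, which were not backed up
zfs create -V 2G rpool/dump
zfs create -V 2G -b 4k rpool/swap
EOF
chmod 755 restore.sh
```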