How to Live Install from Oracle Solaris 10 to Oracle Solaris 11 11/11
by Harold Shaw
How to migrate a system that runs Oracle Solaris 10 to Oracle Solaris 11 11/11 with minimal downtime.
Published April 2012
This live install procedure describes the steps you can perform to migrate from Oracle Solaris 10 to Oracle Solaris 11 11/11 with minimal downtime.
First, you create a set of ZFS send archives (a golden image) on an Oracle Solaris 11 11/11 system that is the
same model as your Oracle Solaris 10 system. Then you install this golden image on an unused disk of the
system running Oracle Solaris 10 to enable it to be rebooted into Oracle Solaris 11 11/11. The basic system
configuration parameters from the Oracle Solaris 10 image are stored and applied to the Oracle Solaris 11 11/11
image.
Note: Migrating the installed software to a system of a different model is not supported. For example, an image
created on a SPARC M-Series system from Oracle cannot be deployed on a SPARC T-Series system from
Oracle. Also, at this time, this procedure applies only to migrating to Oracle Solaris 11 11/11, not to other
releases of Oracle Solaris 11.
Overview of the Process and Requirements
This live install procedure has the following four phases:
Phase 1: Creating the Oracle Solaris 11 11/11 Archive
Phase 2: Preparing to Configure the Oracle Solaris 11 11/11 System
Phase 3: Migrating the Oracle Solaris 11 11/11 Archive
Phase 4: Configuring the Oracle Solaris 11 11/11 System
This article refers to two systems:
The archive system is a system on which an Oracle Solaris 11 11/11 archive is created.
The migration system is a system that is currently running Oracle Solaris 10 and is being migrated to Oracle Solaris 11 11/11.
At a high level, the entire process proceeds as follows:
A ZFS archive is created for the root pool and its associated data sets from a freshly installed Oracle Solaris 11 11/11 system (the archive system). When
the archive is created, it may be saved on local removable media, such as a USB drive, or sent across the network to a file server from which it can later
be retrieved. When it is time to make use of the archive, you perform the following high-level steps:
1. You start a superuser-privileged shell on the Oracle Solaris 10 system that is to be migrated to Oracle Solaris 11 11/11 (the migration system).
2. You select and configure a boot disk device and you create the new ZFS root pool.
3. You restore the archived ZFS data sets in the new pool.
4. You perform the final configuration and then reboot the migration system.
The migration system must be a host that is running Oracle Solaris 10 and has a ZFS version compatible with that of the archive system. For the ZFS archive to
be migrated to a new disk or system, ensure that the following requirements are met:
The archive system and the migration system are the same model (for example, SPARC T-Series) and they meet the Oracle Solaris 11 11/11
minimum requirements.
The migration system is running Oracle Solaris 10 8/11 or later, which is necessary in order to have a version of ZFS that is compatible with
Oracle Solaris 11 11/11.
If the migration system is running Oracle Solaris 10 8/11, apply the following ZFS patch before attempting to restore the archive. Without this
patch, any attempt to restore the archive will fail. The patch is not necessary with any later release of Oracle Solaris 10. (A quick way to check for the patch is shown after this list.)
Patch 147440-11 or later for SPARC-based systems
Patch 147441-11 or later for x86-based systems
Note: The migration system must be rebooted after applying the patch.
Ensure that the disks that will house the new ZFS pool are at least as large in total capacity as the space allocated in the archived pools. This is
discussed in more detail in the Preparation section.
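For example, on a SPARC-based migration system running Oracle Solaris 10 8/11, you can confirm that the required patch is installed by querying the patch list with showrev(1M); substitute 147441 on an x86-based system:
# showrev -p | grep 147440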
You must have root privileges on both the archive system and the migration system. The archives will carry with them all the software and configuration
information that resides in the ZFS data sets that are archived. Note that the migration of zones via a system archive is not supported. After the migration
is complete, you can migrate Oracle Solaris 10 zones into solaris10 branded zones using separate procedures that are outside the scope of this
document. Also note that it is not possible to run Oracle Solaris 8 or Oracle Solaris 9 zones on an Oracle Solaris 11 system. For more information about
migrating Oracle Solaris 10 zones into solaris10 branded zones, refer to Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones,
and Resource Management.
The archive that is created will not have the desired system configuration, since it will be created on a different host than the host on which it will eventually
be run. Configuration of the archive (after migration) is covered in Phase 4. It will be necessary to reconfigure each boot environment in the archive after
the migration is complete and before Oracle Solaris 11 11/11 is booted. For this reason, the archive should contain only a single boot environment (BE).
For more information on system configuration, refer to Chapter 6, "Unconfiguring or Reconfiguring an Oracle Solaris Instance," in Installing Oracle
Solaris 11 Systems.
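Before creating the archive in Phase 1, you can confirm on the archive system that only a single BE exists, and see which pool it resides on, with the following command:
# beadm list -d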
No hardware-specific configuration data is carried in the archive image. Hardware-specific system characteristics that will not transfer with the backup
include, but are not limited to, the following:
Disk capacity and configuration (including ZFS pool configurations)
Hardware Ethernet address
Installed hardware peripherals
Phase 1: Creating the Oracle Solaris 11 11/11 Archive
Figure 1 depicts what happens when you create the Oracle Solaris 11 11/11 archive.
Figure 1. Creating the Oracle Solaris 11 11/11 Archive
Preparation
To prepare for migration, note the disk topology and ZFS pool configuration for the root pool on the migration system. Configure the target disk on the
migration system similarly to the disks on the archive system, and size the new ZFS pool appropriately. At a minimum, the allocated amount of the pool
(the ALLOC column in the zpool list output shown below) is required to ensure there is enough room to restore the data sets on the migrating system.
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 68G 51.6G 16.4G 75% 1.00x ONLINE -
If any archived pool's capacity (as shown by the CAP column) exceeds 80%, best practices dictate that the migration pool be sized larger to allow for
growth. Increasing the headroom in the pool can also benefit performance, depending upon other configuration elements and the workload.
For further information about managing ZFS file systems and related performance, please refer to the Oracle Solaris Administration: ZFS File Systems
guide.
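For a quick scripted check of utilization across all pools, the parsable output mode of zpool list is convenient (-H suppresses headers and -o restricts the columns):
# zpool list -H -o name,capacity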
To prepare for later migration, save the output from various commands to a file that is kept with the archive for reference during migration. Listing 1
shows the commands that are recommended as a bare minimum, but other configuration information might be useful, depending upon the system
configuration. The commands shown in Listing 1 with example output are for rpool only.
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 68G 51.6G 16.4G 75% 1.00x ONLINE -
# zpool get all rpool
NAME PROPERTY VALUE SOURCE
rpool size 68G -
rpool capacity 75% -
rpool altroot - default
rpool health ONLINE -
rpool guid 18397928369184079239 -
rpool version 33 default
rpool bootfs rpool/ROOT/snv_175a local
rpool delegation on default
rpool autoreplace off default
rpool cachefile - default
rpool failmode wait default
rpool listsnapshots off default
rpool autoexpand off default
rpool dedupditto 0 default
rpool dedupratio 1.00x -
rpool free 16.4G -
rpool allocated 51.6G -
rpool readonly off -
# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c5t0d0s0 ONLINE 0 0 0
errors: No known data errors
# format c5t0d0s0
selecting c5t0d0s0
[disk formatted]
/dev/dsk/c5t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show disk ID
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 1 - 14086 68.35GB (14086/0/0) 143339136
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
partition> ^D
#
Listing 1. Output from Various Commands
Place the information shown in Listing 1 from the system being archived, along with anything else that might be useful during migration, in a file, and
store the file in the same location as the archive files for use later during the migration.
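One way to gather all of the output shown in Listing 1 into a single file is a compound command such as the following (a sketch; the file name is arbitrary, and format reads from /dev/null so that it simply lists the disks and exits):
# (zpool list; zpool get all rpool; zpool status; format < /dev/null) > /path/to/sysinfo_$(hostname)_$(date +%Y%m%d).txt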
Alternatively, you can use the Oracle Explorer Data Collector to gather all system configuration information for later reference. Information about Oracle
Explorer Data Collector and its related documentation can be found at the Oracle Services Tools Bundle for Sun Systems Website.
For additional information about ZFS administration and capacity planning, please refer to the Oracle Solaris Administration: ZFS File Systems guide.
Archive Creation
To archive the root pool and include all snapshots, you need to create a ZFS replication stream. To do this, you first create a recursive snapshot from the
top level of the pool, as described below. In the same manner, you can archive other pools that need to be archived and carried over to a migrated host.
Note that rpool is the default root pool name, but the root pool might be named differently on any given system. Use beadm list -d to determine on
which pool the BE resides. In the remainder of this article, the default name rpool is used to reference the root pool.
Use the following command to create a recursive snapshot of the root pool. The snapshot name (archive, in this example) can be based upon the date
or whatever descriptive labels you desire.
# zfs snapshot -r rpool@archive
Next, delete the swap and dump device snapshots because they likely do not contain any relevant data, and deleting them typically reduces the size of
the archive significantly.
Note: Regarding the dump device, it is possible, though unlikely, that the dump device has data that has not yet been extracted to the /var data set (in
the form of a core archive). If this is the case, and the contents of the dump device should be preserved, dump the contents out to the file system prior to
deleting the dump device snapshot. See dumpadm(1M) for further details.
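If the dump device does hold an unretrieved crash dump, something like the following extracts it to the file system before the snapshot is deleted (a sketch; the target directory is an assumption, and dumpadm(1M) reports the configured savecore directory):
# savecore -v /var/crash/`hostname`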
The following commands delete the default-named swap and dump device snapshots, though there might be more deployed on a host. To determine
whether there are more than the default-named devices in place, use swap(1M) and dumpadm(1M) to list the names of swap and dump devices,
respectively.
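For example, to list the currently configured swap and dump devices:
# swap -l
# dumpadm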
# zfs destroy rpool/swap@archive
# zfs destroy rpool/dump@archive
Now that the snapshot has been prepared, the next step is to send it to a file for archival. If you are archiving more than one ZFS pool, each pool will
have a snapshot, and each snapshot needs to be sent to its own archive file. The following steps focus on creating the archive for the root pool.
However, you can archive any other pools on the system in the same manner.
To send the snapshot to a file, you pipe the zfs send command into a gzip command, as shown below, which results in a compressed file that contains
the pool snapshot archive. When creating this archive file, it is a good idea to use a unique naming scheme that reflects the host name, the date, or other
descriptive terms that will be useful in determining the contents of the archive at a later date.
You can save the archive file locally for later relocation or you can create it on removable media. The location where you store the archive file should be
a file system that is backed up regularly. Also, although compression is used, enough storage space should be available on the file system. A good rule
of thumb is to have enough capacity for the sum of the ALLOC amounts reported by zpool list.
Use the following command to create the archive file locally. The archive file name can be any string that helps identify this archive for later use. A
common choice might be using the host name plus the date, as shown in the following example.
# zfs send -Rv rpool@archive | gzip > /path/to/archive_$(hostname)_$(date +%Y%m%d).zfs.gz
Now, move the archive file to a file server for later retrieval, as shown in Figure 2.
Figure 2. Ensuring Accessibility of the Oracle Solaris 11 11/11 Archive
Optionally, you can write the archive file directly to an NFS-mounted path, as shown below:
# zfs send -Rv rpool@archive | gzip > /net/FILESERVER/path/to/archive_$(hostname)_$(date +%Y%m%d).zfs.gz
Similarly, you can stream the archive file to a file server via ssh:
# zfs send -Rv rpool@archive | gzip | ssh USER@FILESERVER "cat > /path/to/archive_$(hostname)_$(date +%Y%m%d).zfs.gz"
Note that if you stream the archive across the network, the ssh transfer does not support any sort of suspend and resume functionality. Therefore, if the
network connection is interrupted, you will need to restart the entire command.
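Because an interrupted transfer can silently truncate the archive, it can be worth recording a checksum at creation time and comparing it after the copy; a sketch using the digest(1) utility:
# digest -a sha256 /path/to/archive_myhost_20111011.zfs.gz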
Now that the migration archive file has been created, destroy the local snapshots using the following command:
# zfs destroy -r rpool@archive
Phase 2: Preparing to Configure the Oracle Solaris 11 11/11 System
Before you boot the migrated Oracle Solaris 11 11/11 instance, prepare for the migration by gathering all the relevant system configuration parameters
from the migration system that is running Oracle Solaris 10. The system configuration items you need to gather include, but are not limited to, the
following:
Host name
Time zone
Locale
Root password
Administrative user information
Primary network interface (if it is not auto-configured)
Name service information
Please see Chapter 6, "Unconfiguring or Reconfiguring an Oracle Solaris Instance," in Installing Oracle Solaris 11 Systems for information on the data
required.
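One way to capture most of these items from the running Oracle Solaris 10 system is to save the relevant command output and configuration files to a single reference file (a sketch; the exact set of commands depends on your configuration, and the output file name is arbitrary):
# (hostname; cat /etc/default/init; locale; ifconfig -a; cat /etc/nsswitch.conf /etc/resolv.conf) > /path/to/s10_config_$(date +%Y%m%d).txt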
The system configuration information must be regenerated when the archive is restored. This is done by generating the System Configuration (SC)
profile on a running Oracle Solaris 11 11/11 system and then copying that profile to the restored Oracle Solaris 11 11/11 archive so that it is automatically
applied upon first boot.
The create-profile subcommand of sysconfig invokes the SCI Tool interface, queries you for the system configuration information, and then generates
an SC profile you can use later to configure the system.
Use the following command to create an SC profile locally. The profile name can be any string that helps identify the profile for later use. The following
example uses config with date information appended.
# sysconfig create-profile -o /path/to/config_$(date +%Y%m%d).xml
Then move the SC profile to a file server for later retrieval.
Optionally, you can create the SC profile and write it directly to an NFS-mounted path, as shown below.
# sysconfig create-profile -o /net/FILESERVER/path/to/config_$(date +%Y%m%d).xml

Phase 3: Migrating the Oracle Solaris 11 11/11 Archive
Figure 3 depicts what happens when you migrate the Oracle Solaris 11 11/11 archive.
Figure 3. Migrating the Oracle Solaris 11 11/11 Archive
Boot Device and Root Pool Preparation
The first step is to configure the new boot disk device. For information on how to manage disk devices, determine the boot device, and change the
default boot device (if necessary), see the following guides:
Oracle Solaris Administration: Devices and File Systems
Oracle Solaris Administration: Booting and Shutting Down Oracle Solaris on SPARC Platforms
Oracle Solaris Administration: Booting and Shutting Down Oracle Solaris on x86 Platforms
As previously mentioned, you can replicate the original disk layout or you can use a different layout as long as the following steps are taken and space at
the beginning of the disk is reserved for boot data. The root pool does not need to be the same size as the original. However, the new pools must be
large enough to contain all the data in the respective archive file (for example, as large as the ALLOC section in the zpool list output, as described
previously).
Decide how to configure the boot device based upon the initial disk configuration on the archive system. To reiterate, the ZFS pools you create must
ultimately be large enough to store the archived data sets, as described by the ALLOC amounts in the output of zpool list.
Use the format(1M) command to configure the disk partitions and/or slices, as desired. For boot devices, a VTOC label should be used, and the default
configuration is a full-device slice 0 starting at cylinder 1. The files that were saved as part of the archive creation can provide guidance on how to best
set up the boot device.
The example in Listing 2 shows how to select the desired boot device from the format utility's menu.
# format
Searching for disks...done
c3t3d0: configured with capacity of 68.35GB
AVAILABLE DISK SELECTIONS:
0. c3t2d0 <SEAGATE-ST973401LSUN72G-0556 cyl 8921 alt 2 hd 255 sec 63>
/pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@2,0
1. c3t3d0 <FUJITSU-MAY2073RCSUN72G-0401 cyl 14087 alt 2 hd 24 sec 424>
/pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@3,0
Specify disk (enter its number): 1
selecting c3t3d0
[disk formatted]
Listing 2. Selecting the Boot Disk
On an x86 system, if you see the message No Solaris fdisk partition found, then you need to create an fdisk partition:
format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format>
Now configure the slices as needed. Listing 3 is an example of setting up a full-capacity slice 0, which is the default configuration. The slice starts at
cylinder 1 to leave room for boot software at the beginning of the disk. Note that the partition table might look different based upon your system
architecture, disk geometry, and other variables.
format> partition
partition> print
Current partition table (default):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 8920 68.34GB (8921/0/0) 143315865
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0
partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[1]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: $
partition>
Listing 3. Setting Up a Full-Capacity Slice 0
Once the slices are configured as needed, label the disk, as shown in Listing 4. Confirm the overall layout prior to moving on to the next step.
partition> label
Ready to label disk, continue?
y
partition> print
Current partition table (unnamed):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 1 - 8920 68.34GB (8920/0/0) 143299800
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 8920 68.34GB (8921/0/0) 143315865
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0
partition> ^D
Listing 4. Labeling the Disk
For more information regarding managing disk devices, please see the Oracle Solaris Administration: Devices and File Systems guide.
ZFS Pool Creation and Archive Restoration
Now that you have configured the disk, create the new root pool on slice 0 using the following command:
# zpool create rpool cXtXdXs0
Note that if the archive system's root pool did not use the default name, rpool, use its name instead of rpool. The migration procedure can
complete successfully when a different pool name is used, but the resulting ZFS file system will have a different mount point.
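A quick sanity check at this point confirms the pool name, size, and underlying device before any data is restored:
# zpool list rpool
# zpool status rpool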
The next step is to restore the ZFS data sets from the archive file. If the archive is stored on removable media, attach and configure that media now so
that the file can be accessed. For information on configuring removable media, please see the Oracle Solaris Administration: Devices and File Systems
guide.
Once the file is accessible locally, restore the data sets using the following command:
# gzcat /path/to/archive_myhost_20111011.zfs.gz | zfs receive -vF rpool
Alternatively, if the files are stored on a networked file server, you can use the following command to stream the archive file and restore the data sets.
# ssh USER@FILESERVER "cat /path/to/archive_myhost_20111011.zfs.gz" | gzip -d | zfs receive -vF rpool
Note: The receive command might generate error messages of the following form: cannot receive $share2 property on rpool: invalid property
value. This is expected and will not affect the operation of the restored data sets.
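Once the receive completes, you can confirm that the data sets were restored as expected by listing the contents of the pool:
# zfs list -r rpool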
If other pools were archived for restoration on this host, you can restore them at this point using the same ZFS operation shown above. For additional
information on migrating ZFS data sets, please see Oracle Solaris Administration: ZFS File Systems guide.
The data migration portion of the procedure is now complete. Some final steps must be performed now to ensure that the migration system will boot as
expected.
Hardware Configuration and Test
Next, you need to create swap and dump devices for use with the migration system. Note that the default-named devices are being used in this article.
Therefore, no further administrative tasks are required (for example, adding the swap device using swap(1M)), since the devices were already in use
and are configured to run with this system at boot time. If the migration system has a memory configuration that varies from the system that was
archived, the swap and dump devices might require a different size, but the names are still the same as in the previous configuration and, thus, they will
be configured properly on the first boot of the migration system.
The swap and dump devices should be sized according to the advice in the Oracle Solaris Administration: Devices and File Systems and Oracle Solaris
Administration: ZFS File Systems guides, which is roughly as shown in Table 1.
Table 1. Swap and Dump Device Sizes

Physical Memory                                          Swap Size               Dump Size
System with up to 4 GB of physical memory                1 GB                    2 GB
Mid-range server with 4 GB to 8 GB of physical memory    2 GB                    4 GB
High-end server with 16 GB to 32 GB of physical memory   4 GB                    8 GB+
System with more than 32 GB of physical memory           1/4 total memory size   1/2 total memory size
You can determine the amount of physical memory as follows:
$ prtconf | grep Memory
Memory size: 130560 Megabytes
Note that once the system is booted, you can add additional swap devices if needed. Please see the above-referenced documentation for further
information regarding management of these devices.
Use the following commands to recreate swap and dump devices with appropriate capacities. Note that in this example, the migration system has 8 GB
of memory installed.
# zfs create -b 128k -V 2GB rpool/swap
# zfs set primarycache=metadata rpool/swap
# zfs create -b 128k -V 4GB rpool/dump
The BE that is to be activated needs to be mounted now so that it can be accessed and modified in the following steps. To identify the BE to mount, use
the zfs list command:
# zfs list -r rpool/ROOT
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT 3.32G 443G 31K legacy
rpool/ROOT/solaris_11 3.32G 443G 3.02G /
rpool/ROOT/solaris_11/var 226M 443G 220M /var
BEs are located in the root pool in the rpool/ROOT data set. Each BE has at least two entries: the root data set and a /var data set. The BE in the
example above is solaris_11.
The BE that will be active when the system reboots needs to be identified by setting the appropriate property on the root pool. To do this, use the zpool
command:
# zpool set bootfs=rpool/ROOT/solaris_11 rpool
To mount the active BE data set, it is first necessary to change the mount point. Change the mount point and then mount the active BE data set using the
following commands:
# zfs set mountpoint=/tmp/mnt rpool/ROOT/solaris_11
# zfs mount rpool/ROOT/solaris_11
The BE's root file system can now be accessed via the /tmp/mnt mount point. The first step is to install the boot software that will allow the host to boot
the new root pool. The steps are different depending upon architecture, as shown below. Both examples use the /tmp/mnt BE mount point.
To install the boot software on an x86-based host, use this command:
# installgrub /tmp/mnt/boot/grub/stage1 /tmp/mnt/boot/grub/stage2 /dev/rdsk/cXtXdXs0
To install the boot software on a SPARC-based host, use this command:
# installboot -F zfs /tmp/mnt/usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/cXtXdXs0
It is possible that the same devices will not be in use or that they will be configured in a different manner on the new system. Therefore, use the following
command to clear the device file system:
# devfsadm -Cn -r /tmp/mnt
Next, you need to direct the system to perform a reconfiguration boot on first boot, which will configure any new device hardware (as related to the
archive system versus the migration system). To force a reconfiguration boot, you place a file named reconfigure at the top level of the BE's root file
system. This action is not persistent, because the file is removed and, thus, the reconfiguration occurs only on the first boot after the file is placed.
Use the following command to set up a reconfiguration boot by creating the reconfigure file in the active BE's mounted file system:
# touch /tmp/mnt/reconfigure
If you are doing a live install on an x86 machine, the hostid file needs to be regenerated. If the file doesn't exist at boot time, it will be generated, so
delete the file, as follows:
# rm /tmp/mnt/etc/hostid
Phase 4: Configuring the Oracle Solaris 11 11/11 System
The SC profile created in Phase 2 will now be applied to the migration system. If an SC profile already exists on that system, remove it using the
following command:
# rm /tmp/mnt/etc/svc/profile/site/profile*.xml
Next, copy the two Oracle Solaris Service Management Facility (SMF) profiles that are included in the Appendix (disable_sci.xml and unconfig.xml) to
/tmp/mnt/etc/svc/profile/site. These profiles cause the system to perform an unconfigure before applying the SC profile generated earlier.
Create /tmp/disable_sci.xml and /tmp/unconfig.xml by copying the XML from Listing 5 and Listing 6 in the Appendix.
Now, copy the generated SC profile to the appropriate location, which is /tmp/mnt/etc/svc/profile/sc. This directory might not exist, so it might be
necessary to create it first, as shown below.
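If the directory does not exist, create it before copying (mkdir -p avoids an error if it is already present):
# mkdir -p /tmp/mnt/etc/svc/profile/sc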
# cp /tmp/disable_sci.xml /tmp/mnt/etc/svc/profile/site
# cp /tmp/unconfig.xml /tmp/mnt/etc/svc/profile/site
# cp /path/to/config_20111011.xml /tmp/mnt/etc/svc/profile/sc/
Next, unmount the BE and reset the mount point:
# zfs umount rpool/ROOT/solaris_11
# zfs set mountpoint=/ rpool/ROOT/solaris_11
Then reboot the migration system.
As depicted in Figure 4, the migration system should now be in the same state the archive system was in, barring any changes in the system
configuration, physical topology, peripheral devices, or other hardware. Please see the Oracle Solaris administration guides for further information
regarding configuration.
Figure 4. Rebooting from the New Boot Disk
Appendix
Listing 5 shows the contents of the disable_sci.xml profile.
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type="profile" name="config_profile">
<service name="milestone/config" version="1" type="service">
<instance name="default" enabled="true">
<property_group name="sysconfig" type="application">
<propval name="interactive_config" type="boolean" value="false"/>
<propval name="config_groups" type="astring" value="system "/>
<propval name="configure" type="boolean" value="true"/>
</property_group>
</instance>
</service>
</service_bundle>
Listing 5. Contents of the disable_sci.xml Profile
Listing 6 shows the contents of the unconfig.xml profile.
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type="profile" name="unconfig_profile">
<service name="milestone/unconfig" version="1" type="service">
<property_group name="sysconfig" type="application">
<propval name="shutdown" type="boolean" value="false"/>
<propval name="destructive_unconfig" type="boolean" value="false"/>
<propval name="unconfig_groups" type="astring" value="system "/>
<propval name="unconfigure" type="boolean" value="true"/>
</property_group>
</service>
</service_bundle>
Listing 6. Contents of the unconfig.xml Profile
See Also
Download Oracle Solaris 11
Access Oracle Solaris 11 product documentation
Access the Simplified Installation and Cloud Provisioning with Oracle Solaris 11 page
Access all Oracle Solaris 11 how-to articles
Learn more with Oracle Solaris 11 training and support
See the official Oracle Solaris blog
Check out The Observatory Blog for Oracle Solaris tips and tricks
Follow Oracle Solaris on Facebook and Twitter
About the Author
Harold Shaw is a Principal Software Engineer for Oracle Solaris and focuses on software installation and deployment technologies. Harold joined Oracle
in 2010.
Revision 1.0, 04/23/2012