Chapter 1
Introduction
To support mission-critical business applications, IT departments face the challenging task of
delivering continuous access to strategic corporate information assets, often on a 24x7 basis. Yet
data center managers must operate within tight budget constraints, maintaining high service
levels while containing personnel and equipment costs. In short, IT managers must deliver high
levels of service while simultaneously lowering the total cost of ownership.
Sun Microsystems, an industry leader in supplying solutions for mission-critical business
computing, understands this challenge. In recent years, Sun has focused on delivering products
that can help to lower downtime, improve service levels, and reduce the total cost of ownership
(TCO). Today, Sun offers a fully scalable product line with built-in availability features and a
reliable, mature operating environment — the Solaris™ Operating Environment (OE) — which
is proven around the world in numerous mission-critical computing environments.
With the introduction of the Solaris 9 OE, Sun is integrating key technologies that can help
manage computing resources and enhance service levels, including a robust storage
management solution, Solaris Volume Manager software. Solaris Volume Manager can be
used to configure multiple storage components into storage volumes, with redundancy and
failover capabilities that help provide continuous data access — even in the event of multiple
device failures. With easy-to-use graphical and command-line interfaces, Solaris Volume Manager
greatly simplifies storage administration, and allows many operations — such as recovering
volumes or expanding the size of a file system — to occur online, minimizing the need for costly
downtime.
P2 White Paper Transitioning to Solaris™ Volume Manager © 2003 Sun Microsystems, Inc.
Key features of Solaris Volume Manager and their benefits include:
• Support for RAID 1 mirrored volumes and RAID 5 volumes (striping with parity): provides continuous data availability even when a disk device within the volume fails
• RAID 0 striped volumes: distributes the I/O workload over several devices, which can improve I/O performance
• Disk concatenation and online expansion of volumes and file systems: increases file system capacity without interruption or downtime
• Support for disk sets: enables more effective namespace management, supporting clustered and fabric-connected storage
• Graphical user interface (GUI) integrated with Solaris Management Console: provides an easy-to-use, consistent GUI for remote and local storage administration
• Testing with Sun StorEdge™ storage products: offers proven solutions that can reduce deployment risk
A Sun white paper, Comprehensive Data Management Using Solaris Volume Manager,
describes these features and benefits in more detail. For additional information on other
functionality integrated in the Solaris 9 Operating Environment, see the Better by Design
— The Solaris 9 Operating Environment white paper.
• Single vendor support: With a single maintenance contract, 24x7 support can be available for
all software layers between the application and storage resources — covering the Solaris 9 OE,
UFS file systems, the Sun StorEdge™ Traffic Manager software, and Solaris Volume Manager.
Together, these components can provide a powerful, integrated solution, offering journaled file
systems created on flexible, RAID-protected volumes. These volumes can, in turn, be built on
Sun StorEdge Traffic Manager multipathed disk storage, which can help to improve data
protection and availability and increase server-to-storage bandwidth.
• Upgradable, volume-managed system disks: A unique feature of Solaris Volume Manager is the
ability to upgrade the operating system with an active, volume-managed root disk. With VxVM-
managed boot disks, upgrading the Solaris OE, or indeed VxVM itself, can be a complex process.
In essence, it requires unmounting file systems, deporting disk groups, unencapsulating the root
disk, booting from the underlying devices, and removing the VxVM packages. On completion of the Solaris
software upgrade, VxVM must be reinstalled and boot disks must be reencapsulated and mirrored.
The upgrade process can involve a significant amount of planning and there are many potential
pitfalls. With Solaris Volume Manager managing the root disk, the upgrade process can be
significantly less complex. Customers can seamlessly migrate from previous Solaris OE releases
and even earlier versions of Solstice DiskSuite software. Sun performs extensive testing for common
upgrade scenarios, which helps minimize deployment risks, now and into the future.
These advantages, and the robust storage management features of Solaris Volume Manager,
are encouraging many data center managers to seriously consider using Solaris Volume Manager
as an alternative to VxVM. Chapter 2 provides a more detailed comparison of the two products,
and discusses how they can be used to create RAID-protected volumes.
Chapter 2
Product Functionality Comparison
[Figure: A VxVM-managed disk, with the private region (slice s3) and public region (slice s4), from which subdisks sd1 and sd2 are allocated]
VxVM takes ownership of the entire disk, changing the disk volume table of contents (vtoc) to
present private and public regions (usually slices 3 and 4 respectively), and placing the disk into a
disk group. The private region is used to store VxVM configuration information, and is comprised
of up to three structures:
• Disk header that uniquely identifies the disk
• Optional configuration copy (config copy) that defines the volume configuration
• Optional log copy that tracks volume state information
VxVM decides on the appropriate distribution of active config and log copies depending on
the number of disks and their physical location. The algorithms used to distribute config and log
copies appear to have changed in the 3.0.4, 3.1, and 3.2 releases of VxVM, so it is good practice to
check the active config/log copy distribution for single points of failure.
The underlying building blocks of any VERITAS volume are subdisks, which are specified by
reserving an area of the public region (defined by an offset into the public region) and a size.
Subdisks can either be concatenated or striped together to form a plex, which is the level where
mirroring occurs. In the above example, plex1 and plex2 are both attached to the volume v_data.
VERITAS volumes are presented under the /dev/vx device path (for example, the file system /data
is mounted on /dev/vx/dsk/datadg/v_data).
Volumes are usually created from the top down using the vxassist command. This example
might use the command:
vxassist -g datadg make v_data layout=mirror-concat alloc="disk01,disk02"
When this command is run, VERITAS creates the volume and then begins to synchronize one
plex to the other. If a file system is to be placed on the volume, it may not be necessary to perform
mirror synchronization — in this case, the “init=active” argument on the vxassist command line
can initialize the volume without synchronization of the plexes, which may otherwise take a
considerable amount of time for large volumes. The VxVM command vxprint can be used to list
the configuration and status of managed volumes.
Note – Mirror creation without full synchronization should be used with care, regardless of the
volume manager.
[Figure: Solaris Volume Manager mirrored volume d20, with submirrors d21 and d22 built on slice s5 of each disk; slice s7 is reserved for state database replicas]
In this example, d21 and d22 can be created with the following commands:
# metainit d21 1 1 c1t0d0s5
# metainit d22 1 1 c2t0d0s5
The first digit after the volume name indicates the number of stripes that are concatenated
together to form the volume. The second digit indicates the number of slices that make up the
following stripe. For example, a striped volume can be created with the command metainit d50 1 3 c1t0d0s3 c1t1d0s3 c1t2d0s3,
which builds the volume d50 as a single stripe over three hard partitions (slices).
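The way these numeric arguments describe a layout can be modeled with a short Python sketch (illustrative only; the parser and its names are hypothetical and not part of Solaris Volume Manager):

```python
# Hypothetical model of metainit's concat/stripe arguments:
#   metainit <volume> <num_stripes> <width> <slices...> [<width> <slices...>]
# Each stripe of <width> slices is concatenated after the previous one.

def parse_metainit(args):
    """Parse a simplified metainit argument list into a layout description."""
    volume, rest = args[0], args[1:]
    num_stripes = int(rest[0])
    pos = 1
    stripes = []
    for _ in range(num_stripes):
        width = int(rest[pos])                    # slices striped together
        stripes.append(rest[pos + 1 : pos + 1 + width])
        pos += 1 + width
    return {"volume": volume, "stripes": stripes}

# A simple one-slice concatenation: metainit d21 1 1 c1t0d0s5
print(parse_metainit(["d21", "1", "1", "c1t0d0s5"]))

# One stripe over three slices: metainit d50 1 3 c1t0d0s3 c1t1d0s3 c1t2d0s3
print(parse_metainit(["d50", "1", "3", "c1t0d0s3", "c1t1d0s3", "c1t2d0s3"]))
```

Concatenating two single-slice stripes would simply repeat the width/slice groups, for example metainit d60 2 1 c1t0d0s3 1 c2t0d0s3.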
The RAID 1 volume d20 is then created by using the -m option of metainit to attach d21 as
a submirror. The second submirror, d22, is in turn attached to form a two-way mirror. The
commands necessary to achieve this are:
# metainit d20 -m d21
# metattach d20 d22
As d22 is attached to d20, a full mirror synchronization occurs from submirror d21 to
submirror d22. If the assumption is that the contents of both submirrors are already the same,
or the intention is to create a new file system on d20, it is possible to create the volume without
having to perform submirror synchronization. Instead of a metainit followed by a metattach,
a single metainit can be supplied:
# metainit d20 -m d21 d22
This effectively gives the same functionality as VxVM does using the init=active argument
with vxassist.
Note – Again, mirror creation without full synchronization should be used with care, regardless
of the volume manager.
Volumes are presented to the Solaris Operating Environment under the /dev/md device path;
for example, a file system might be mounted on /dev/md/dsk/d10. The Solaris Volume Manager
command metastat can be used to list the configuration and status of a volume.
Solaris Volume Manager Example — Creating a Mirror Using Soft Partitions on a Physical Slice
It is also possible to create a mirror using soft partitions on a physical slice. The example shown in
Figure 2-3 creates the mirrored volume d10 using disks that have been configured for use with
Solaris Volume Manager soft partitions. Slice s7 has been reserved for a state database, while
slice s0 maps to the rest of the disk. Solaris Volume Manager initializes the soft partition d13
by allocating one gigabyte from the space available in slice s0.
Figure 2-3: Solaris Volume Manager mirrored volume with soft partitions on a physical slice
[Figure: file system /data on the mirrored volume d10; concat volumes/submirrors d11 and d12 are built on soft partitions d13 and d14, carved from slice s0 of each disk, with slice s7 reserved for state database replicas]
For example, the soft partition d13 can be created with the command "metainit d13 -p -e c1t0d0 1g".
The -p option creates a soft partition by allocating one gigabyte of the space available on
c1t0d0. The -e argument, a once-per-disk option, takes the disk c1t0d0 and repartitions it for soft
partitioning, creating slice s0 for the data and slice s7 for state database replicas. This command
also removes the entry in the vtoc for slice s2. As an alternative to using the -e option, the physical
drive can be manually partitioned, with a hard partition allocated for use by soft partitions. For
example, if c4t0d0s5 is created as a four-gigabyte physical partition, the d13 soft partition can
be created by issuing "metainit d13 -p c4t0d0s5 1g", which allocates one gigabyte of the space
within the physical partition c4t0d0s5 to create the volume.
For soft partitioning, two basic rules must be followed:
• Soft partitions have to be layered on traditional volumes or partitions
• A mirrored volume cannot be created directly using soft partitions as submirrors (the soft
partitions must first be made into a concatenation/stripe), as in the following commands:
# metainit d11 1 1 d13
# metainit d12 1 1 d14
The concatenated volume d11 is created as a single stripe, and that stripe is constructed
from a single component, the logical volume d13. As before, the first digit after the volume name
indicates the number of stripes to be concatenated together, and the second digit indicates the
number of components within each stripe. Similarly, d12 is created out of soft partition d14. The volumes d11 and d12 are
effectively submirrors. The top level volume d10 is then created using these commands:
# metainit d10 -m d11
# metattach d10 d12
The RAID 1 volume d10 is created by using the -m option of metainit to attach d11 as a
submirror. The second submirror d12 is, in turn, attached to form a two-way mirror.
The Solaris Volume Manager approach to disk manipulation and volume creation is
analogous to the VERITAS approach of disk initialization and creation of subdisks (d13 and d14),
plexes (d11 and d12), and volumes (d10). If the total requirement is to create 10 volumes out of
the same two disks, then the number of configuration objects has potentially increased from five
to 50, regardless of whether Solaris Volume Manager or VxVM is used for storage management.
[Figure: Solaris Volume Manager mirrored volume d30, with submirrors d31 and d32 built on slice s5 of disks c1t0d0 and c2t0d0; slice s7 is reserved for state database replicas]
The concatenated volumes d31 and d32 are created out of the 36-gigabyte drives c1t0d0s5
and c2t0d0s5, respectively. (The slice s7 has been created at 15 megabytes, and reserved for state
database replicas.) The commands metainit and metattach are used to create the mirrored volume
d30 out of submirrors d31 and d32. At this point, the 36-gigabyte, RAID-protected container can
be sliced into a large number of soft partitions. The volume d100 is constructed by allocating one
gigabyte of the mirrored volume d30:
# metainit d100 -p d30 1g
If 10 volumes are required from this storage, it is now necessary to create only a total of 15
objects: five objects to create the RAID container d30, and one object for each soft partition
allocated from the container. In this way, soft partitioning in Solaris Volume Manager can help
to minimize the number of objects and simplify configuration complexity.
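The object-count arithmetic above can be made explicit (a short Python sketch using the counts quoted in the text):

```python
# Object counts for building N mirrored volumes from the same two disks,
# using the per-volume counts quoted in the text.

def objects_without_soft_partitions(n_volumes):
    # Each volume needs its own five objects: two slices/subdisks,
    # two submirrors/plexes, and one mirror/volume.
    return 5 * n_volumes

def objects_with_soft_partitions(n_volumes):
    # Five objects build the RAID 1 container (d30) once; then each
    # volume is a single soft partition allocated from the container.
    return 5 + n_volumes

n = 10
print(objects_without_soft_partitions(n))  # 50 objects
print(objects_with_soft_partitions(n))     # 15 objects
```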
For soft partitioning, each component or extent that is used to create a soft partition is
defined by an offset and size parameter, which describes the physical boundaries of the extent in
the nominated partition or volume. 512-byte watermarks are also used to identify and record the
boundaries of each soft partition.
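A simplified allocator illustrates how extents and 512-byte watermarks divide up the underlying slice (a hypothetical Python model; the real watermark format is internal to Solaris Volume Manager):

```python
WATERMARK = 512  # bytes; each extent is preceded by a watermark recording its boundaries

def allocate_soft_partitions(slice_bytes, request_bytes_list):
    """Allocate extents from a slice; each extent carries a 512-byte watermark."""
    allocations = []
    offset = 0
    for size in request_bytes_list:
        needed = WATERMARK + size
        if offset + needed > slice_bytes:
            raise ValueError("slice exhausted")
        # the extent's data begins just past its watermark
        allocations.append({"offset": offset + WATERMARK, "size": size})
        offset += needed
    return allocations

# Carve three one-gigabyte soft partitions out of a four-gigabyte slice
GB = 1 << 30
parts = allocate_soft_partitions(4 * GB, [GB, GB, GB])
for p in parts:
    print(p["offset"], p["size"])
```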
Component Failure
To provide increased data availability for RAID 1 and RAID 5 volumes, both Solaris Volume Manager
and VxVM provide features to automate the replacement and resynchronization of failed volume
components.
VxVM provides the choice of two daemons for disk sparing:
• vxrelocd, which responds to failures at the subdisk level
• vxsparecheck, which responds to failures at the disk level
The use of disks and free space within a disk group by these daemons can be influenced by
setting the nohotuse and spare flags on individual disks.
Solaris Volume Manager uses the concept of hot spare pools. These are pools of dedicated
disk slices that can be used to replace failed components. A spare slice can belong to more than
one pool, and the order in which slices are added to the pool determines the order in which they
may be used to replace failed components. RAID 1 submirrors can be assigned to different hot
spare pools, providing a high level of control over the disk sparing strategy.
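The replacement-order rule can be sketched as follows (a simplified Python model of a hot spare pool; the class and slice names are illustrative, not Solaris Volume Manager code):

```python
# Simplified model of a Solaris Volume Manager hot spare pool:
# a slice may belong to several pools, and slices are tried for
# replacement in the order they were added to the pool.

class HotSparePool:
    def __init__(self):
        self.slices = []          # insertion order is replacement order
        self.in_use = set()

    def add(self, slice_name):
        self.slices.append(slice_name)

    def take_spare(self):
        for s in self.slices:     # first suitable unused slice wins
            if s not in self.in_use:
                self.in_use.add(s)
                return s
        return None               # pool exhausted

pool = HotSparePool()
pool.add("c3t0d0s0")
pool.add("c3t1d0s0")
print(pool.take_spare())  # the first-added slice is used first
print(pool.take_spare())
```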
Note – Solaris Volume Manager cannot always provide RAID 1+0 functionality. However, in a
best practices environment, where both submirrors are identical and are made up of disk slices
(not soft partitions), RAID 1+0 volumes can be configured.
For example, with a pure RAID 0+1 implementation and a two-way mirror that consists of three
striped slices, a single slice failure can fail one side of the mirror. Assuming that no hot spares are
in use, a second slice failure can fail the mirror. Using Solaris Volume Manager, up to three slices
can potentially fail without failing the mirror because each of the three striped slices are individually
mirrored to their counterparts on the other half of the mirror.
In Figure 2-5, a RAID 1 volume consists of two submirrors, each of which consists of three
identical physical disks striped with the same interlace value. A failure of the three disks A, B, and F can
be tolerated because the entire logical block range of the mirror is still contained on at least one
good disk.
Figure 2-5: With a Solaris Volume Manager RAID 1+0 volume, access will succeed to portions of the disk where data is still available
[Figure: mirrored volume with Submirror 1 (disks A, B, C) and Submirror 2 (disks D, E, F)]
If, however, disks A and D fail, a portion of the mirror’s data is no longer available on any disk
and access to these logical blocks will fail. When a portion of a mirror’s data is unavailable due to
multiple slice errors, access to portions of the mirror where data is still available will still succeed.
Under this situation, the mirror can act like a single disk that has developed bad blocks. The damaged
portions are unavailable, but the rest of the blocks can be accessed by the application.
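The failure scenarios described above can be checked with a small simulation (the disk labels A through F and their pairing follow the figure; this is an illustrative Python model, not Solaris Volume Manager code):

```python
# RAID 1+0: each striped position is mirrored individually.
# Submirror 1 = disks A, B, C; Submirror 2 = disks D, E, F,
# with A mirrored to D, B to E, and C to F.

MIRROR_PAIRS = [("A", "D"), ("B", "E"), ("C", "F")]

def mirror_survives(failed):
    """Data survives if every mirrored pair retains at least one good disk."""
    return all(a not in failed or b not in failed for a, b in MIRROR_PAIRS)

print(mirror_survives({"A", "B", "F"}))  # True: every pair keeps a good disk
print(mirror_survives({"A", "D"}))       # False: the A/D pair is entirely lost
```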
VERITAS introduced RAID 1+0 functionality with the release of layered volumes in version 3.0.x
of VxVM. One reason for the introduction of layered volumes in VxVM was to enable the creation
of a striped volume in which each of the components of the stripe is a mirrored subvolume. This
provides the same functionality as Solaris Volume Manager, but the configuration can often
appear overly complex, and the number of VERITAS objects necessary to define the volume can
increase dramatically.
Volume Logging
All mirrored volumes in Solaris Volume Manager automatically benefit from volume logging.
Volume logging limits the amount of block copy activity necessary to keep the mirrored volumes
in sync. Volume logging is the default behavior of Solaris Volume Manager, and uses bitmaps
held in the state databases to track changes to submirrors. Consequently, all mirrored volumes
can be protected against the need to perform a full-mirror resynchronization in the event of a
system failure.
In VxVM, Dirty Region Logs (DRLs) are attached to volumes to track the recent changes to a
mirror, and are effectively bitmaps implemented as a logging plex. Following a system crash, if the
synchronization state of a volume cannot be determined, only the recently modified blocks are
copied between the plexes to synchronize the volume. DRLs must be manually attached to
the volume, and are only used in a system crash scenario. Within Solaris Volume Manager,
the same bitmap technology can be used to speed up the resynchronization process when
performing online backups after splitting a submirror. This functionality is not enabled by default
within VxVM — it can be activated only by purchasing a VERITAS VxVM Fast Mirror Resync license.
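The dirty-region idea common to both products can be sketched generically (an illustrative Python model of region-level change tracking, not the on-disk format of either product):

```python
# Generic dirty-region tracking: the volume is divided into regions, and a
# bitmap records which regions have been written since the last sync point.
# After a crash, only dirty regions need to be copied between mirror sides.

class DirtyRegionLog:
    def __init__(self, volume_blocks, region_blocks):
        self.region_blocks = region_blocks
        n_regions = -(-volume_blocks // region_blocks)  # ceiling division
        self.dirty = [False] * n_regions

    def mark_write(self, block):
        self.dirty[block // self.region_blocks] = True

    def regions_to_resync(self):
        return [i for i, d in enumerate(self.dirty) if d]

    def clear(self):
        self.dirty = [False] * len(self.dirty)

drl = DirtyRegionLog(volume_blocks=10_000, region_blocks=1_000)
drl.mark_write(42)      # write lands in region 0
drl.mark_write(9_500)   # write lands in region 9
print(drl.regions_to_resync())  # only these regions are recopied after a crash
```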
Management
To make it easier for administrators to accomplish routine tasks, and to simplify the configuration
and management of storage resources, both Solaris Volume Manager and VxVM support command-
line and graphical user interfaces (CLI and GUI).
The Solaris Volume Manager GUI, also called the Enhanced Storage Tool, is integrated
with Solaris Management Console — a Java™ technology-based interface that administrators use
to access other administrative tools for the Solaris OE. The look and feel of Solaris Management
Console is customizable, providing a consistent, intuitive interface for all Solaris management
tools. Using the Enhanced Storage Tool, wizards guide administrators through easy-to-follow,
step-by-step instructions, automating many of the common storage management operations.
In addition, as volumes are defined, the actual commands used to create the volumes can be
captured and saved. These commands can then be reused in scripts or as an aid to learning Solaris
Volume Manager commands.
Solaris Volume Manager provides mdmonitord, which monitors storage configurations and
identifies failed volume components. Solaris Volume Manager also supports SNMP reporting,
allowing volume status information to be propagated to SNMP-based management frameworks.
Within VxVM, volume error notification can be handled by either the hot relocation or hot spare
daemons, vxrelocd and vxsparecheck. VxVM does not provide SNMP support as part of the core
product.
Solaris Volume Manager includes support for an application programming interface (API)
that allows standards-based management of storage resources. This API adheres to the Web-Based
Enterprise Management (WBEM) infrastructure and uses the Common Information Model (CIM)
object model — standards that are specified by the Distributed Management Task Force (DMTF).
For more information about DMTF, see www.dmtf.org.
CIM defines the data model, or schema, that describes:
• The attributes of Solaris Volume Manager devices and operations against them
• The relationships among various Solaris Volume Manager devices
• The relationships among Solaris Volume Manager devices and other aspects of the operating
system, such as file systems
The CIM model is made available through the Solaris WBEM SDK, which is a set of Java
technology-based APIs that allow access to system management capabilities represented by CIM.
The CIM/WBEM API provides a public, standards-based programmatic interface to monitor and
configure storage resources with Solaris Volume Manager.
VxVM does not provide CIM/WBEM functionality as part of the core product.
With VxVM, disk groups can be exported from one node and imported on another. VxVM disk
groups and the volumes defined within them are device independent. Multihost-attached storage
may appear on different controllers on each attached node. VxVM masks this difference and
establishes a mapping between physical devices and VERITAS objects on import of the disk group.
Solaris Volume Manager disk sets support similar functionality, where ownership of the disk
set can be released by one node and taken by another. When a disk is placed into a disk set, its vtoc
is changed to include a slice s7, sized to include a state replica database for the disk set. By
default, up to four disk sets, each with 128 volume names (d0 to d127), can be created, which
is usually adequate for most implementations. The kernel configuration file /kernel/drv/md.conf
can be tuned to increase the number of volume names per disk set and the number of disk sets
supported by the system.
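As a sketch of the tuning involved, the relevant md.conf entry looks like the following (the values shown are the defaults described above; the exact file layout may vary by Solaris release, and changes typically require a reboot to take effect):

```
# /kernel/drv/md.conf (excerpt)
# nmd      = number of volume names per disk set (d0-d127 by default)
# md_nsets = number of disk sets supported by the system
name="md" parent="pseudo" nmd=128 md_nsets=4;
```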
Solaris Volume Manager disk sets can enhance namespace management of storage
resources. A common application of Solaris Volume Manager disk sets is in clustered environments,
since disk sets are not automatically imported at boot time. Sun Cluster 3.0 software provides a
device ID (DID) layer to make use of Solaris Volume Manager disk set functionality.
The DID layer can help present all devices to each node in the cluster using the same name. This
feature is important with Solaris Volume Manager because disk sets rely on controller/target/
device references for volume component definitions. Disk sets can be used outside of the cluster
framework, but care should be taken to address all shared storage devices by the same name on
every attached host.
Fabric-attached storage can be more easily managed within a disk set-controlled namespace
that is separate from other local storage. SAN fabric-connected storage is not usually available to
the system as early in the boot process as other devices (such as SCSI and IDE disks). When this
storage is not defined within a disk set, Solaris Volume Manager reports logical volumes on
the fabric as unavailable at boot. However, by adding the storage to a disk set and then using disk
set tools to manage the storage, the problem with boot time availability is avoided.
DMP and Sun StorEdge Traffic Manager can also coexist. Both products can be configured to
manage particular device paths. Beginning with VxVM 3.2, DMP can work with the pseudo-device
paths presented by Sun StorEdge Traffic Manager. The choice of which solution to use is dependent on
factors such as the type of storage, host bus adapters, and whether the environment is clustered.
Another area of concern when using VxVM and Solaris Volume Manager together is how
to prevent both products from being used to manage the same devices. The standard practice
of having well-documented configurations and effective change-control processes can help to
alleviate any such problems.
The following summarizes how the two products compare in several key areas:
• Support for a wide range of arrays and storage: Solaris Volume Manager, yes (any that support IEEE unique device IDs); VxVM, yes
• Mirror snapshot with fast resync support (for backup): Solaris Volume Manager, yes; VxVM, no (an additional license must be purchased)
• Cluster: Solaris Volume Manager, yes (Sun Cluster 3.0 is designed around Solaris Volume Manager functionality); VxVM, yes (Sun and VERITAS cluster products)
• Multipathing: Solaris Volume Manager, works with Sun StorEdge Traffic Manager devices; VxVM, works with Sun StorEdge Traffic Manager or VERITAS DMP managed devices
Chapter 3
[Figure 3-1: Example enterprise deployment, including a Sun Fire 3800 server, a Sun StorEdge D240 media tray, Sun StorEdge A5200 arrays, and a data mining environment]
At the client level, Solaris Volume Manager can be used to mirror internal boot disks of the
Sun Blade 100 workstations and boot disks for the Sun Ray appliance server, which reside on a Sun
StorEdge D240 media tray. The Sun Fire 3800 server for the Sun Ray appliances also provides home
directory storage, shown in Figure 3-1 on a pair of Sun StorEdge A5200 arrays. Solaris Volume
Manager can configure these arrays with RAID 1+0 volumes, striping each volume across five disks
in each array, to improve storage performance. Two hot spare pools can also be created: one
containing a number of slices from the first array, and the other comprising slices from the
second array. Consequently, each submirror is assigned to the hot spare pool that provides spare
slices on the same array.
In this environment, application requirements mean that the window of opportunity to
perform backups on the home file systems is limited. To back up this data, it is necessary to stop
the applications, take the backup, and then restart the applications. To reduce the outage
necessary for backups, Solaris Volume Manager three-way mirrored volumes can be created for
the home file systems. The command metaoffline can be used to detach a mirror, allowing that
mirror to be available for backup purposes. When the backup is complete, the submirror can be
placed online. To fully resynchronize all three submirrors, only the blocks that have changed must
be recopied, because of the way Solaris Volume Manager tracks affected blocks.
At the application server level, Solaris Volume Manager can manage boot disks and application-
specific storage on the Sun Fire 280R servers.
At the back end, clustered Sun Fire V880 servers with Sun StorEdge T3 arrays provide application
data storage and management for highly available RDBMS services. Sun Cluster 3.0 software,
designed to support Solaris Volume Manager integration, provides the cluster framework. The
database storage volumes can be configured in Solaris Volume Manager disk sets, which can
then be failed over between cluster nodes with the RDBMS application, helping to maintain
both availability and optimal application-to-storage I/O performance.
Nightly, an extract of data can be taken from the RDBMS system and loaded into a data
mining application running on a Sun Fire 15K server domain. The Sun StorEdge 9960 array provides
mainframe-class storage — each hardware RAID 5 LUN can be presented on two paths from the
Sun StorEdge 9960 array. Sun StorEdge Traffic Manager Software can be used to provide load
balancing and path redundancy. Solaris Volume Manager can then be used to present large,
striped volumes to the data mining application. The stripes can be defined with an appropriate
interlace size across the many independently pathed LUNs, which helps to optimize the available
server-to-storage bandwidth.
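The effect of an interlace size on how a logical address range spreads across independently pathed LUNs can be illustrated with a simple mapping (an idealized round-robin model in Python; the block and LUN numbering are illustrative):

```python
# Idealized RAID 0 address mapping: logical blocks are laid out across LUNs
# in chunks of `interlace` blocks, rotating round-robin through the stripe.

def stripe_target(logical_block, interlace, n_luns):
    """Return (lun_index, block_within_lun) for a logical block address."""
    chunk = logical_block // interlace   # which interlace-sized chunk
    lun = chunk % n_luns                 # round-robin across LUNs
    lun_chunk = chunk // n_luns          # chunks already placed on this LUN
    return lun, lun_chunk * interlace + logical_block % interlace

# With a 64-block interlace over four LUNs, sequential I/O rotates through
# the LUNs, so large transfers engage every path at once:
for lb in (0, 64, 128, 192, 256):
    print(lb, stripe_target(lb, interlace=64, n_luns=4))
```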
Throughout the enterprise, Solaris Volume Manager enables storage management using
the same intuitive user interface, Solaris Management Console, regardless of the Sun platform or
storage components. On any of these systems, from Sun Blade workstations to the Sun Fire 15K
data center server with terabytes of attached storage, Solaris Management Console can be used to
configure and manage Solaris Volume Manager volumes. Solaris Volume Manager is a comprehensive,
integrated, scalable, and flexible solution that helps to enable data management across a wide
spectrum of Sun server and Sun and third-party storage products.
• Clustered Environment: Does the cluster technology offer a choice of volume manager? Sun
Cluster 3.0 software, for example, allows one volume manager to be used for the local boot disk
environment and another for data storage managed by the cluster framework. It is possible to
migrate volume managers in an existing Sun Cluster 3.0 implementation, but the migration can
be more complex than in nonclustered environments.
• VERITAS Cluster Volume Manager: Is VERITAS Cluster Volume Manager (CVM) in use? Oracle
Parallel Server environments may rely on VERITAS CVM functionality to provide shared disk
groups that can be imported on more than one node at any point in time. This feature is
not currently available with Solaris Volume Manager in clustered environments.
• Dependencies: Are there any dependencies on VERITAS features, such as VERITAS Volume Replicator?
If so, does Solaris Volume Manager offer similar functionality? (For example, Solaris Volume
Manager inherently provides bitmap logging to synchronize mirrors, which can obviate the
need for VERITAS Fast Mirror Resync.) Are there volume manager-independent solutions that can
help, such as the Sun StorEdge Availability Suite?
Sun Professional Services consultants can help customers address many of these issues
and assist with planning for a migration or installation. Planning is key to a successful migration,
including the definition of a possible regression path. Sun consultants are experienced in helping
customers perform migrations and upgrades in 24x7 and clustered environments. Performance
analysis services are also available — Sun consultants can profile the I/O characteristics of existing
applications and suggest suitable volume layouts and system tuning parameters that can help to
optimize the Solaris Volume Manager configuration.
In addition, Sun StorEdge Resource Management Suite can be used to understand the current
storage usage characteristics and help predict future demand. The product has many advanced
features, including the ability to probe database systems to analyze the actual amount of storage
in use. Sun consultants use this tool to help customers design appropriate storage and volume
management strategies.
Migration Approaches
Two primary approaches can be used as the basis for a migration from VxVM to Solaris Volume
Manager:
• Backup/Restore Migration: If the organization can afford some degree of downtime, probably
the easiest and most straightforward migration method is to back up all relevant data to tape,
unmount the application file systems, deport data disk groups, unmirror and unencapsulate the
boot disk, and remove VxVM. At this point, system disk file systems and swap can be running
directly on top of disk partitions. Solaris Volume Manager state databases can then be created,
and Solaris Volume Manager can be configured to manage the boot disks. Finally, Solaris
Volume Manager volumes can be created to hold application file systems, and data restored
from tape.
• Staged Migration: In many customer environments, it is not possible to perform a full backup/
restore migration. As discussed previously, it is possible to run both volume managers together,
and in some cases, this may be the best way to stage the migration process.
In the past, some administrators have developed scripts to migrate between volume
managers, but given the evolving architecture of both products and the virtually unlimited number
of possible configuration layouts, creating and maintaining generic conversion scripts is rarely
practical. Parts of the migration process can certainly be scripted, but scripts may need to be
customized to reflect specific migration objectives.
A staged migration must take into account two primary tasks:
• How to migrate the data disk volumes
• How to migrate the boot disk volumes
It is important to remember that in order to use VxVM, rootdg always needs to be active,
ideally consisting of two disks. This means it is probably best to migrate the data volumes first,
so that all that is left under VxVM control is the boot disk and its mirror in the rootdg disk group.
Otherwise, it is necessary to select two nonsystem disks to remain in rootdg until VxVM is no
longer required and can be removed.
When is the best time to migrate? If the server is currently running on the Solaris 8 OE with
VxVM, the migration can be performed as part of the upgrade to the Solaris 9 OE — this alleviates
the need to remove VxVM under the Solaris 8 OE and then reinstall it on the Solaris 9 OE. Instead,
one approach is to migrate from VxVM to Solstice DiskSuite 4.2.1 software (the precursor of Solaris
Volume Manager), and then upgrade to the Solaris 9 OE and Solaris Volume Manager, eliminating
the need to unmirror boot disks or hide the volume configuration.
Note – It is important to note that due to the potential complexity of a VxVM configuration, the
methods described here may not be effective in all environments. Always back up data prior to
beginning an activity of this nature, and seek consulting assistance as necessary.
[Figure: Scenario 1 — the VxVM mirrored volume v_data (plexes v_data_01 and v_data_02 on
subdisks disk01-01 and disk02-01) maps to the Solaris Volume Manager mirror d10 with
submirrors d11 and d12.]
Steps:
1. Take a full file system backup of /data to tape.
2. Display the configuration of v_data. The disk media (dm) records in the vxprint output
show the disks in use:
# vxprint -g datadg -ht
dm disk01 c1t0d0s2 sliced 3590 17674902 -
dm disk02 c2t0d0s2 sliced 3590 17674902 -
3. Dissociate and remove the plex residing on disk01 (v_data_01), leaving v_data as an
unmirrored volume:
# vxplex -g datadg -o rm dis v_data_01
4. Check that there are no other volumes residing on the disk, then remove the disk from the disk
group using either vxdiskadm or vxdg:
# vxdg -g datadg rmdisk disk01
5. Now remove disk01 from the VxVM configuration. This command will remove the private
and public partitions from the disk:
# /etc/vx/bin/vxdiskunsetup c1t0d0
6. The disk c1t0d0 can be partitioned to create the Solaris Volume Manager subvolume d11. Use
format or fmthard to create an underlying partition, s5, of the required size. The partition must
be at least as big as the original VxVM volume. This might also be a good time to consider
increasing the size of the volume to support future capacity demands.
7. Initialize the new partition as the Solaris Volume Manager submirror d11:
# metainit d11 1 1 c1t0d0s5
P22 White Paper Transitioning to Solaris™ Volume Manager © 2003 Sun Microsystems, Inc.
8. Create a one-way Solaris Volume Manager mirror volume d10 out of submirror d11:
# metainit d10 -m d11
9. Create a file system on d10 to hold the contents of /data. Again, this may be a good time
to migrate from the VERITAS file system VxFS to UFS:
# newfs /dev/md/rdsk/d10
10.Temporarily mount d10 on /mnt to copy data between VxVM and Solaris Volume Manager:
# mount -F ufs /dev/md/dsk/d10 /mnt
11.Stop any I/O to /data and then copy the file system contents to /mnt. In this example, cpio
is used:
# cd /data
# find . -mount | cpio -pdumv /mnt
12.Once the file system data has been copied, unmount both /data and /mnt, and remove
the VxVM volume v_data:
# umount /data
# umount /mnt
# vxassist -g datadg remove volume v_data
13.Check that there are no other volumes residing on the disk disk02, then remove the disk
from the disk group either using vxdiskadm or vxdg:
# vxdg -g datadg rmdisk disk02
14.Now remove disk02 from the VxVM configuration. This command will remove the private
and public partitions from the disk:
# /etc/vx/bin/vxdiskunsetup c2t0d0
15.The disk c2t0d0 can now be partitioned to create the Solaris Volume Manager subvolume d12.
Use format or fmthard to create an underlying partition, s5, of the required size. The partition
should be the same size as that used for d11.
16.Initialize the new partition as the Solaris Volume Manager submirror d12:
# metainit d12 1 1 c2t0d0s5
17.Attach d12 to d10 to form a two-way mirror. This will start resynchronization of the submirrors:
# metattach d10 d12
18.Change the /etc/vfstab entry for /data so that the block device is now /dev/md/dsk/d10
and the raw device is now /dev/md/rdsk/d10. Mount the volume under the Solaris OE:
# mount /data
This procedure could be scripted, potentially resulting in less downtime and interruption
to service than using the backup/restore method and waiting for the data to restore from tape.
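As noted, the procedure can be scripted. The sketch below follows the numbered steps, using the example device and volume names; the submirror metainit commands are inferred (those step numbers are elided in the original), and a production script would add error handling and configuration checks:

```shell
#!/bin/sh
# Sketch of the mirrored-volume migration from the steps above.
# Disk group, volume, and device names come from the example.
migrate_mirrored_volume() {
    set -e
    # Free the first disk from VxVM and hand it to SVM.
    vxdg -g datadg rmdisk disk01
    /etc/vx/bin/vxdiskunsetup c1t0d0
    # ...partition c1t0d0 with format/fmthard to create s5 here...
    metainit d11 1 1 c1t0d0s5          # submirror on the new partition
    metainit d10 -m d11                # one-way mirror

    # Create the new file system and copy the data across.
    newfs /dev/md/rdsk/d10
    mount -F ufs /dev/md/dsk/d10 /mnt
    ( cd /data && find . -mount | cpio -pdumv /mnt )

    # Retire the VxVM side and complete the mirror.
    umount /data
    umount /mnt
    vxassist -g datadg remove volume v_data
    vxdg -g datadg rmdisk disk02
    /etc/vx/bin/vxdiskunsetup c2t0d0
    # ...partition c2t0d0 to create s5 here...
    metainit d12 1 1 c2t0d0s5
    metattach d10 d12                  # starts resynchronization
    # Update /etc/vfstab for /data, then: mount /data
}
```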
In this scenario, it may be possible to switch volume managers without having to move any
data, consequently helping to minimize downtime even further. Hard partitions can be defined
around the area of disk used by each of the VxVM subdisks (using the VERITAS command
vxmksdpart), enabling a mirrored Solaris Volume Manager volume to be created on the disk
partitions. However, it is recommended that Sun consultants with experience in performing
complex data migrations between volume managers be engaged to assist in this process.
[Figure: Scenario 2 — the VxVM striped volume v_data (plex plex1 across three disks) is
replaced by the Solaris Volume Manager stripe d20 with soft partition d100 on top; the figure
labels partitions s3 and s7 on each disk.]
The goal of the migration is to remove the disks from VxVM control and then use them in
Solaris Volume Manager to create a concatenated stripe across all three drives. Soft partitions
are then created on top of the striped volume to support file systems.
Before the Solaris Volume Manager volumes are created, the data is backed up to tape. When
the disks are removed from VxVM control, the VTOCs are changed so that a 15-megabyte
partition, s7, is created (starting at cylinder 0) for the Solaris Volume Manager state database
replicas. Placing the replicas here also helps protect the VTOC.
Note – In VxVM, the stripe unit size defines how much data is written to a column before moving
to the next column in the stripe. The Solaris Volume Manager equivalent is the interlace size,
which should be optimized according to application I/O characteristics. A migration can provide
an appropriate opportunity to create Solaris Volume Manager volumes with a different interlace
size.
Steps:
1. Look at the configuration of the VxVM stripe and take note of the stripe unit size. In this example,
it is 128 blocks:
v v_data - ENABLED ACTIVE 2097152 SELECT v_data-01 fsgen
pl v_data-01 v_data ENABLED ACTIVE 2100821 STRIPE 3/128 RW
sd disk01-01 v_data-01 disk01 0 700245 0/0 c1t0d0 ENA
sd disk02-01 v_data-01 disk02 0 700245 1/0 c2t0d0 ENA
sd disk03-01 v_data-01 disk03 0 700245 2/0 c3t0d0 ENA
2. Copy the contents of /data to tape, unmount it, and then remove the volumes and disks
from VxVM control:
# umount /data
# vxassist -g datadg remove volume v_data
3. Repeat the steps for all other volumes residing on the three disks, then remove the disks
from the disk group using either vxdiskadm or vxdg:
# vxdg -g datadg rmdisk disk01 disk02 disk03
4. Now remove disk01, disk02, and disk03 from the VxVM configuration. These commands
will remove the private and public partitions from the disks:
# /etc/vx/bin/vxdiskunsetup c1t0d0
# /etc/vx/bin/vxdiskunsetup c2t0d0
# /etc/vx/bin/vxdiskunsetup c3t0d0
5. The disks c1t0d0, c2t0d0, and c3t0d0 can now be partitioned to create the Solaris Volume
Manager stripe volume d20. Use format or fmthard to create an underlying partition, s7, sized
at 15 megabytes to hold the Solaris Volume Manager state databases, with partition s0
occupying the rest of the disk and containing the stripe components.
6. Initialize the stripe volume d20, specifying an interlace value of 128 blocks:
# metainit d20 1 3 c1t0d0s0 c2t0d0s0 c3t0d0s0 -i 128b
7. Create the volume d100, a soft partition of the stripe volume d20, to be used for the /data
file system. Create a soft partition for each of the volumes required on these disks:
# metainit d100 -p d20 200m
8. Create a file system on the soft partition to hold /data:
# newfs /dev/md/rdsk/d100
9. Change the /etc/vfstab entry for /data, so that the block device is now /dev/md/dsk/d100
and the raw device is now /dev/md/rdsk/d100, and mount the file system:
# mount /data
10.Restore the data from the backup into the /data file system.
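The re-layout above can also be sketched end to end. The metadb invocation is an assumption: the steps reserve s7 for the state database replicas but do not show the command itself, and all other names come from the example:

```shell
#!/bin/sh
# Sketch of the stripe/soft-partition migration from the steps above.
migrate_to_striped_layout() {
    set -e
    # Remove the volume and disks from VxVM control.
    umount /data
    vxassist -g datadg remove volume v_data
    vxdg -g datadg rmdisk disk01 disk02 disk03
    for d in c1t0d0 c2t0d0 c3t0d0; do
        /etc/vx/bin/vxdiskunsetup $d
        # ...repartition $d: 15-MB s7 for replicas, s0 for the stripe...
    done

    # Assumed step: place state database replicas on the new s7 slices.
    metadb -a -f c1t0d0s7 c2t0d0s7 c3t0d0s7

    # Build the stripe and a soft partition on top of it.
    metainit d20 1 3 c1t0d0s0 c2t0d0s0 c3t0d0s0 -i 128b
    metainit d100 -p d20 200m

    # Create the file system, update /etc/vfstab, mount, and restore.
    newfs /dev/md/rdsk/d100
    mount /data
    # ...restore /data from the tape backup...
}
```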
Scenario 3: Migrating Data to New Storage On the Same Server With Minimal Downtime
A good time to accomplish a data migration between volume managers is when storage hardware
is added or replaced. In this scenario, legacy storage technology is replaced with new Sun
StorEdge 9900 series hardware. It may be possible to have both the old and new storage connected
to the same machine for the period of the migration. Backup and restore provides one route to
migrating the data, but the time necessary to perform the restore may prove prohibitive.
One solution to this problem is to use Point-In-Time Copy functionality, part of the Sun StorEdge
Availability Suite. This product works independently of volume management, so it can be used to
snapshot data presented by VERITAS VxVM, Solaris Volume Manager, or even nonvolume-managed
disks. The Point-In-Time Copy functionality sits in the driver stack between the file system and
the volume manager, and uses bitmap technology to track the synchronization state between
master and shadow volumes, limiting the amount of data that must be copied between volumes
when the migration begins. (See sun.com/storage/software/availability for more detail on Sun
StorEdge Availability Suite.) Figure 3-4 illustrates how Point-In-Time Copy can replicate the contents
of a VxVM volume v_data (the Point-In-Time Copy master volume) to Solaris Volume Manager
volume d10 on a hardware RAID5 LUN (the Point-In-Time Copy shadow volume).
[Figure 3-4: Point-In-Time Copy replicates the VxVM volume v_data (plexes plex1 and plex2)
to the Solaris Volume Manager volume d10 on a hardware RAID5 LUN.]
After the Point-in-Time Copy software has been installed, the VERITAS volume can be
acquiesced during a quiet period, and a full independent snapshot can be copied to the Solaris
Volume Manager volume. Following the initial copy, a bitmap tracks any changes to the VERITAS
volume. Then, when it is time to complete the migration, an update synchronization can take
place, copying only the changed blocks to the Solaris Volume Manager volume. In this way, the
Point-in-Time Copy capability helps to minimize the amount of downtime needed to migrate data
to the new storage.
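A Point-In-Time Copy migration of this kind might be driven with the Availability Suite iiadm command. The option syntax below is an assumption from the 3.x command set and should be verified against the product manuals; all volume paths are illustrative:

```shell
#!/bin/sh
# Sketch of a Point-In-Time Copy migration with iiadm (Sun StorEdge
# Availability Suite). Options and paths are assumptions to verify.
pitc_migrate() {
    set -e
    MASTER=/dev/vx/rdsk/datadg/v_data    # VxVM master volume
    SHADOW=/dev/md/rdsk/d10              # SVM shadow volume
    BITMAP=/dev/md/rdsk/d11              # bitmap tracking volume

    # Quiesce the application, then take a full independent snapshot.
    iiadm -e ind "$MASTER" "$SHADOW" "$BITMAP"

    # ...application resumes; the bitmap tracks master-volume changes...

    # At cutover time, copy only the changed blocks to the shadow.
    iiadm -u s "$SHADOW"
}
```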
Scenario 4: Storage and Server Consolidation, Remote Data Movement, Minimal Downtime
If data migration is required as a part of a consolidation effort, it may be necessary to move the
data to Solaris Volume Manager volumes that are managed on a different host. Again, backup
and restore is a possible approach, but may result in a significant amount of downtime.
Remote Mirror capability, also part of the Sun StorEdge Availability Suite, operates in a
similar way to Point-in-Time Copy. Remote Mirror sits between the file system and volume manager,
and can be configured to replicate VERITAS volumes on one Sun server to Solaris Volume Manager
volumes on another, using any transport that supports TCP/IP. The replication can occur in
synchronous and asynchronous modes, and synchronization states are tracked using bitmap
technology. When all volumes are synchronized, the replication can be stopped (or put in logging
mode) and the Solaris Volume Manager volumes can be mounted on the new server.
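The cross-host case might be driven with the Availability Suite sndradm command. The argument order and options below are assumptions to be checked against the product documentation; the host names and volume paths are illustrative:

```shell
#!/bin/sh
# Sketch of cross-host replication with sndradm (Remote Mirror, Sun
# StorEdge Availability Suite). Flags and paths are assumptions.
remote_mirror_migrate() {
    set -e
    # Enable a synchronous replication set: a VxVM volume on oldhost
    # replicated to an SVM volume on newhost, each with a bitmap.
    sndradm -n -e oldhost /dev/vx/rdsk/datadg/v_data /dev/md/rdsk/d21 \
               newhost /dev/md/rdsk/d10 /dev/md/rdsk/d11 ip sync

    # Perform the initial full synchronization.
    sndradm -n -m

    # Once synchronized, drop into logging mode; the SVM volumes can
    # then be mounted on the new server.
    sndradm -n -l
}
```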
The Sun StorEdge Availability Suite includes both Point-In-Time Copy and Remote Mirror
capabilities — powerful tools that can be used to minimize the impact of a volume migration. To
facilitate smooth transitions and develop useful storage configurations, Sun consultants can help
customers analyze migration requirements and assist with migrations that involve these products.
Since it is usually necessary to unencapsulate the boot disks, an upgrade to the Solaris OE
(or VxVM) provides a window of opportunity in which to migrate to Solaris Volume Manager
software-managed boot disks. If all data disks have been migrated to Solaris Volume Manager
and the only remaining VxVM-managed volumes are on the rootdg disk group, then boot disk
migration can start by following the steps to remove VxVM (unmirroring the root disk, unencapsulating
root, and removing VxVM). There are well-documented procedures for completing this task, both
on Sun’s site (sun.com/service/support/sunsolve) and in VERITAS administration guides. Care
should be taken to ensure that underlying partitions exist on the boot disk for all the original boot
disk file systems. Removing VxVM effectively frees up the partitions used for private and public
regions. One of these freed partitions should be used for creating the Solaris Volume Manager
state database replicas.
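For example, assuming the freed private-region slice is s7 (the slice and disk names here are illustrative assumptions), the replicas might be created as follows:

```shell
#!/bin/sh
# Create SVM state database replicas on slices freed by VxVM removal.
create_boot_replicas() {
    set -e
    # -a adds replicas, -f forces creation of the first ones, and
    # -c 2 places two replicas per slice for redundancy.
    metadb -a -f -c 2 c0t0d0s7 c0t1d0s7
}
```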
Also, VxVM allows the creation of volumes out of any free space in the public region. As a
consequence, there may be more volumes than physical partitions on the boot disk. If this is the
case, these volumes must first be moved to other storage to avoid data loss.
To reduce the potential risk and limit any interruption to service, it may be advantageous to
consider using Solaris Live Upgrade technology. This technology reduces the usual service outage
typically associated with an operating system upgrade — the current operating environment is
replicated to a nominated disk, enabling an upgrade while the system remains active. As part of
this process, the target boot slice cannot be managed initially by Solstice DiskSuite or VxVM (see
the Solaris Live Upgrade manual for further details). After the boot slice is copied over, the copy
can then be configured with Solaris Volume Manager. This provides a dual boot environment,
with each boot disk under a different volume manager.
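Creating the alternate boot environment might look like the sketch below; the boot-environment names and target slice are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of creating an alternate boot environment with Solaris Live
# Upgrade. Disk and boot-environment names are illustrative.
create_alternate_be() {
    set -e
    # Replicate the running environment onto an unmanaged slice:
    # -c names the current boot environment, -n the new one, and
    # -m maps the root file system onto the target slice.
    lucreate -c "vxvm_boot" -n "svm_boot" \
             -m /:/dev/dsk/c1t1d0s0:ufs
    # After the copy, the new environment can be upgraded (luupgrade),
    # activated (luactivate), and later placed under SVM control.
}
```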
If it is desirable to migrate to Solaris Volume Manager software-managed boot disks and
keep VxVM enabled, it is necessary to maintain an active rootdg disk group. This may require
adding two free disks to rootdg to compensate for the loss of the root disk and its mirror.
Chapter 4
Conclusion
Solaris Volume Manager, integrated into the Solaris 9 OE, offers a viable alternative to the VERITAS
Volume Manager. Because Solaris Volume Manager is included in the operating environment at no
additional fee, it can help to lower overall storage TCO.
There are numerous methods of migrating from VxVM to Solaris Volume Manager, and the
specific method that should be used depends largely on the current server and storage infrastructure.
In general, there are several key steps: analyzing the volume management requirements, setting
aside time for planning the migration, backing up the data, validating the backup, performing the
migration, and testing the new environment.
Sun consultants have extensive knowledge of both VxVM and Solaris Volume Manager, and
are experienced at helping organizations perform migration activities. For more information on
how Sun can help with a migration to the Solaris 9 OE and Solaris Volume Manager, please
contact your local Sun representative.
Chapter 5
References
Sun Microsystems posts product information in the form of datasheets and white papers on the
Web at sun.com. The following sites describe products specifically mentioned in this paper:
• Solaris Volume Manager, sun.com/software/solaris/ds/ds-volumemgr
• Solaris Operating Environment, sun.com/software/solaris
• Sun StorEdge Availability Suite, sun.com/storage/software/availability
• Sun StorEdge Resource Management Suite, sun.com/storage/software/resourcemanagement
Please refer to the following white papers for more information on Solaris Volume Manager
and other Sun products:
• Comprehensive Data Management Using Solaris Volume Manager, Technical White Paper
• Better by Design — The Solaris 9 Operating Environment, Technical White Paper