
White Paper: Transitioning to Solaris™ Volume Manager (on the Web at sun.com/software)

Transitioning to Solaris™ Volume


Manager

Technical White Paper


January 2003
Table of Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Choosing a Storage Management Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
About This Paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Using Solaris Volume Manager for Data Management . . . . . . . . . . . . . . . . . . . . . . . . . 2
Why Choose Solaris Volume Manager Over VERITAS Volume Manager? . . . . . . . . . . . . 3
Product Functionality Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Defining Volumes Using VxVM and Solaris Volume Manager . . . . . . . . . . . . . . . . . . . . 5
Other Volume Management Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
Coexistence of Solaris Volume Manager and VxVM . . . . . . . . . . . . . . . . . . . . . . . . . . .14
Summary of Solaris Volume Manager and VxVM Features . . . . . . . . . . . . . . . . . . . . . .15
Deploying Solaris Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
An Example of Storage Management in an N-Tier Infrastructure . . . . . . . . . . . . . . . . .16
Migrating to Solaris Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
Migration Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
Data Disk Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Boot Disk Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Chapter 1

Introduction
To support mission-critical business applications, IT departments are faced with the challenging
task of delivering continuous access to strategic corporate information assets, often on a 24x7
basis. Yet data center managers must operate within tight budget constraints, maintaining high
service levels while containing personnel and equipment costs. In this regard, IT managers face a
daunting task — how to deliver high levels of service while simultaneously lowering the total cost
of ownership.
Sun Microsystems, an industry leader in supplying solutions for mission-critical business
computing, understands this challenge. In recent years, Sun has focused on delivering products
that can help to lower downtime, improve service levels, and reduce the total cost of ownership
(TCO). Today, Sun offers a fully scalable product line with built-in availability features and a
reliable, mature operating environment — the Solaris™ Operating Environment (OE) — which
is proven around the world in numerous mission-critical computing environments.
With the introduction of the Solaris 9 OE, Sun is integrating key technologies that can help
manage computing resources and enhance service levels, including a robust storage
management solution, Solaris Volume Manager software. Solaris Volume Manager can be
used to configure multiple storage components into storage volumes, with redundancy and
failover capabilities that help provide continuous data access — even in the event of multiple
device failures. With easy-to-use graphical and command-line interfaces, Solaris Volume Manager
greatly simplifies storage administration, and allows many operations — such as recovering
volumes or expanding the size of a file system — to occur online, minimizing the need for costly
downtime.
P2 White Paper Transitioning to Solaris™ Volume Manager © 2003 Sun Microsystems, Inc.

Solaris Volume Manager is based on Solstice DiskSuite™ software, a proven storage
management tool that Sun has offered for the past 10 years. Recognizing the importance of
reliable storage management, Sun has incorporated Solaris Volume Manager technology directly
into the Solaris 9 OE, offering customers a more comprehensive, integrated data management
solution.

Choosing a Storage Management Solution


Prior to the availability of the Solaris 9 OE, IT managers looking for storage management solutions
had a choice of either Solstice DiskSuite software from Sun or VERITAS Volume Manager (VxVM)
from VERITAS Software Corporation. Both products are well established in the marketplace and
offer comparable functionality. System administrators often have a preference for one or the
other, and many organizations have a policy of using both products to solve different problems.
Because the Solaris 9 OE now incorporates Solaris Volume Manager software, some IT managers are
contemplating a migration from VxVM to Solaris Volume Manager — either as a replacement or
as a companion storage management solution.

About This Paper


This white paper is aimed at system managers and administrators who have some experience with
VERITAS Volume Manager and are considering Solaris Volume Manager software as a storage
management solution. This chapter introduces the functionality of Solaris Volume Manager,
describes its advantages, and discusses why many data center managers are now considering its
use.
Chapter 2 discusses key features of both products. It compares underlying volume management
architectures, exploring key constructs used to create volumes and manage the storage
environment.
Chapter 3 explores a typical n-tiered infrastructure to illustrate the flexibility and scalability of
Solaris Volume Manager. In planning a migration to Solaris Volume Manager, data center
managers must take into account the complexity of the current VxVM configuration, as well as the
specific business and technical requirements. This chapter discusses migration issues, addresses the
pros and cons of using both products in a dual-volume management configuration, and describes
a number of specific migration scenarios.

Using Solaris Volume Manager for Data Management


Solaris Volume Manager incorporates the storage management functionality found in the 4.2.1
release of Solstice DiskSuite software, along with several new features. Figure 1-1 lists specific
features and benefits that can help to enable:
• Enhanced data availability
• Improved data reliability and integrity
• More sustained I/O performance
• Greater configuration flexibility
• Simplified storage administration and management
• Lower deployment risk

Figure 1-1: Features and benefits of Solaris Volume Manager

• Support for RAID 1 mirrored volumes and RAID 5 volumes (striping with parity): provides continuous data availability even when a disk device within the volume fails
• Hot spare pools: enables online system recovery
• Alternate path support: enhances data availability
• State database replicas: protects Solaris Volume Manager configuration information
• RAID 0 striped volumes: distributes the I/O workload over several devices, which can improve I/O performance
• Soft partitioning: provides greater configuration flexibility, enabling many partitions to be created on a single, high-capacity LUN
• Disk concatenation and online expansion of volumes and file systems: increases file system capacity without interruption or downtime
• Device ID support: preserves configuration information when disks or controllers are moved, providing greater flexibility
• Support for disk sets: enables more effective namespace management, supporting clustered and fabric-connected storage
• Graphical user interface (GUI) integrated with Solaris Management Console: provides an easy-to-use, consistent GUI for remote and local storage administration
• Command-line interface (CLI): facilitates remote operations and scripting
• CIM/WBEM API: enables management of storage resources from any compliant tool
• Storage monitoring: simplifies administration and management of storage devices
• Testing with Sun StorEdge™ storage products: offers proven solutions that can reduce deployment risk
• Upgrade support: provides a seamless upgrade process for Solstice DiskSuite software and earlier Solaris versions, minimizing downtime and risk

A Sun white paper, Comprehensive Data Management Using Solaris Volume Manager,
describes these features and benefits in more detail. For additional information on other
functionality integrated in the Solaris 9 Operating Environment, see the Better by Design
— The Solaris 9 Operating Environment white paper.

Why Choose Solaris Volume Manager Over VERITAS Volume Manager?
Several compelling reasons exist for selecting Solaris Volume Manager over VxVM to manage
storage resources in the data center:
• No volume management license fees: There is a large financial incentive to use Solaris Volume
Manager, especially for organizations wishing to reduce storage total cost of ownership (TCO).
Solaris Volume Manager is a no-cost option built into the Solaris 9 OE. In comparison, VxVM is
priced and licensed according to the size of the server on which it runs. Furthermore, some
VxVM features, such as the ability to perform fast resynchronization of volume mirrors used in a
backup scenario, require additional software licenses. Solaris Volume Manager provides this
functionality as part of the core product at no extra cost. Similar arguments exist for replacing
VERITAS VxFS file systems with UFS file systems.

• Single vendor support: With a single maintenance contract, 24x7 support can be available for
all software layers between the application and storage resources — covering the Solaris 9 OE,
UFS file systems, the Sun StorEdge™ Traffic Manager software, and Solaris Volume Manager.
Together, these components can provide a powerful, integrated solution, offering journaled file
systems created on flexible, RAID-protected volumes. These volumes can, in turn, be built on
Sun StorEdge Traffic Manager multipathed disk storage, which can help to improve data
protection and availability and increase server-to-storage bandwidth.
• Upgradable, volume-managed system disks: A unique feature of Solaris Volume Manager is the
ability to upgrade the operating system with an active, volume-managed root disk. With VxVM-
managed boot disks, upgrading the Solaris OE, or indeed VxVM itself, can be a complex process.
In essence, it requires unmounting file systems, deporting disk groups, unencapsulating root,
booting underlying devices, and removing the VxVM packages. On completion of the Solaris
software upgrade, VxVM must be reinstalled and boot disks must be reencapsulated and mirrored.
The upgrade process can involve a significant amount of planning and there are many potential
pitfalls. With Solaris Volume Manager managing the root disk, the upgrade process can be
significantly less complex. Customers can seamlessly migrate from previous Solaris OE releases
and even earlier versions of Solstice DiskSuite software. Sun performs extensive testing for common
upgrade scenarios, which helps minimize deployment risks, now and into the future.

These advantages, and the robust storage management features of Solaris Volume Manager,
are encouraging many data center managers to seriously consider using Solaris Volume Manager
as an alternative to VxVM. Chapter 2 provides a more detailed comparison of the two products,
and discusses how they can be used to create RAID-protected volumes.

Chapter 2

Product Functionality Comparison


This chapter offers a comparison of the two volume management products, Solaris Volume
Manager and VERITAS Volume Manager (VxVM). Fundamentally, both deliver software RAID
protection, offering RAID levels 0, 1, 0+1, 1+0, and 5. And both provide services to the operating
system that help protect data in the event of disk failure, and allow today's large-capacity disks
to be broken down into functional units that can meet specific business and application requirements.
The products differ, however, in the underlying architecture each uses to implement storage
management features.

Defining Volumes Using VxVM and Solaris Volume Manager


The fundamental constructs of these products can be illustrated by examining how they are
configured to provide a mirrored volume that supports a file system. Note that this overview is
intended to show the simplicity and elegance of Solaris Volume Manager, and is not intended
as a tutorial.

VERITAS VxVM Volumes


VxVM uses the concept of disk groups, and each disk managed by VxVM resides in a disk group.
Every configuration has at least one disk group called rootdg, which is commonly reserved for
boot disks. Application volumes are usually created in other disk groups that bind related
application data storage. In Figure 2-1, two physical drives belong to the datadg disk group and
are used for application data. Each disk in the disk group is given a logical name, in this case,
disk01 for c1t0d0 and disk02 for c2t0d0.

Figure 2-1: VxVM mirrored volume. [Diagram: the file system /data sits on the VxVM volume v_data, which mirrors two plexes, plex1 and plex2; each plex is built from a subdisk (sd1, sd2) allocated from the public region (slice s4) of the VxVM disks disk01 and disk02, which correspond to the physical disks c1t0d0 and c2t0d0. Slice s3 on each disk is the private region.]

VxVM takes ownership of the entire disk, changing the disk volume table of contents (vtoc) to
present private and public regions (usually slices 3 and 4 respectively), and placing the disk into a
disk group. The private region is used to store VxVM configuration information, and comprises
up to three structures:
• Disk header that uniquely identifies the disk
• Optional configuration copy (config copy) that defines the volume configuration
• Optional log copy that tracks volume state information

VxVM decides on the appropriate distribution of active config and log copies depending on
the number of disks and their physical location. The algorithms used to distribute config and log
copies appear to have changed in the 3.0.4, 3.1, and 3.2 releases of VxVM, so it is good practice to
check the active config/log copy distribution for single points of failure.
The underlying building blocks of any VERITAS volume are subdisks, which are specified by
reserving an area of the public region (defined by an offset into the public region) and a size.
Subdisks can either be concatenated or striped together to form a plex, which is the level where
mirroring occurs. In the above example, plex1 and plex2 are both attached to the volume v_data.
VERITAS volumes are presented under the /dev/vx device path (for example, the file system /data
is mounted on /dev/vx/dsk/datadg/v_data).
Volumes are usually created from the top down using the vxassist command. This example
might use the command:
vxassist -g datadg make v_data layout=mirror-concat alloc="disk01,disk02"

When this command is run, VERITAS creates the volume and then begins to synchronize one
plex to the other. If a file system is to be placed on the volume, it may not be necessary to perform
mirror synchronization — in this case, the "init=active" argument on the vxassist command line
can initialize the volume without synchronization of the plexes, which may otherwise take a
considerable amount of time for large volumes. The VxVM command vxprint can be used to list
the configuration and status of managed volumes.

Note – Mirror creation without full synchronization should be used with care, regardless of the
volume manager.

Solaris Volume Manager Volumes


Solaris Volume Manager, in its simplest form, uses the traditional Solaris OE functionality of
partitioning disks into up to eight slices and using them as building blocks to form volumes. Any
partitions can be used to create volumes, but it is common practice to reserve slice s7 for the
state database replicas. These are similar to VxVM private regions, in that they are created on
selected disks to hold the Solaris Volume Manager configuration data. It is the administrator’s
responsibility to create these state databases (using the metadb command) and distribute them
sensibly across disks and controllers to avoid any single points of failure.
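The replica-placement advice above can be sketched as a simple check. This is an illustrative model, not Solaris Volume Manager code; it assumes the documented rule that Solaris Volume Manager needs a majority of its state database replicas intact, so no single controller should hold half or more of them. The controller names and counts are hypothetical.

```python
# Illustrative sketch: does a replica layout keep a strict majority of state
# database replicas available when any single controller fails?

def survives_single_controller_failure(replicas_per_controller):
    """replicas_per_controller maps controller name -> replica count."""
    total = sum(replicas_per_controller.values())
    for failed, count in replicas_per_controller.items():
        remaining = total - count
        # A strict majority of replicas must survive the failure.
        if remaining * 2 <= total:
            return False
    return True

# Three of four replicas on one controller: losing c1 loses quorum.
print(survives_single_controller_failure({"c1": 3, "c2": 1}))           # False
# Spread evenly across three controllers: any single failure leaves 4 of 6.
print(survives_single_controller_failure({"c1": 2, "c2": 2, "c3": 2}))  # True
```

In practice, this is why administrators running metadb typically place replicas on at least three independent disks or controllers.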
With the soft partitioning feature in Solaris Volume Manager, a disk can be subdivided into
many slices that are controlled and maintained by software (hence the term soft partitioning).
Soft partitioning allows up to 8192 partitions or file systems on a single drive or volume, providing
greater flexibility. With today’s large-capacity disks, customers often need to subdivide a disk into many
partitions. Solaris Volume Manager enables an administrator to create soft partitions either on
top of individual physical disks, or on existing RAID 1, RAID 5, or RAID 0 volumes. This can greatly
simplify the process of creating many file systems with the required data availability and performance
characteristics. (Examples of soft partitioning using both physical and logical volumes are given
later in this chapter.)
Similar functionality to VxVM disk groups is provided by Solaris Volume Manager disk sets.
These are groups of disks that are managed together as a single namespace, often used in cluster
environments for moving storage between cluster nodes. The disks effectively live in separate
namespaces with separate state databases. Disks that are not explicitly assigned to a disk set are
viewed as belonging to the local disk set. (See the Disk Groups and Disk Sets section later in this
chapter.)
Within Solaris Volume Manager, all volumes are given a name beginning with the letter d
and followed by a unique number. Volume names must be unique within the disk set in which
they reside (either the local set or a named set).

Solaris Volume Manager Example — Creating a Mirror Using Hard Partitions


Using the same physical drives as in the VxVM example, Solaris Volume Manager can create
a volume of equivalent size and RAID protection. Using Solaris Volume Manager to build a mirrored
volume on hard partitions can allow volumes to be created that have a very straightforward structure.
Boot disk mirroring is a typical application of mirrors based on hard partitions.
Figure 2-2 illustrates how a mirrored volume, d20, is constructed with submirrors created
out of hard partitions.
Figure 2-2: Solaris Volume Manager mirrored volume with submirrors created from hard partitions. [Diagram: the file system /data sits on volume d20, a two-way mirror of submirrors d21 and d22; each submirror is built on slice s5 of the disks c1t0d0 and c2t0d0, with slice s7 reserved for state database replicas.]


In this example, d21 and d22 can be created with the following commands:
# metainit d21 1 1 c1t0d0s5
# metainit d22 1 1 c2t0d0s5

The first digit after the volume name indicates the number of stripes that are to be concatenated
together. The second digit indicates the number of slices within each stripe. For example, a
striped volume can be created with the command metainit d50 1 3 c1t0d0s3 c1t1d0s3 c1t2d0s3,
which builds the volume d50 with a single stripe across three hard partitions.
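The capacity arithmetic implied by this numbering can be sketched as follows. This is an illustrative simplification, not Solaris Volume Manager code: it assumes a stripe can use only as much space on each member slice as its smallest member provides (real metainit also rounds stripe space to interlace multiples), and the slice sizes are hypothetical.

```python
# Illustrative sketch: capacity of a concatenation of stripes, where each
# stripe is a list of member slice sizes (in blocks).

def concat_capacity(stripes):
    total = 0
    for slices in stripes:
        # Striping uses the same amount of space on every member slice,
        # limited by the smallest member.
        total += min(slices) * len(slices)
    return total

# metainit d21 1 1 c1t0d0s5 -> one stripe of one slice: a plain concat.
print(concat_capacity([[1000]]))              # 1000
# metainit d50 1 3 ...      -> one stripe across three slices.
print(concat_capacity([[1000, 1000, 800]]))   # 2400
```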
The RAID 1 volume d20 is then created by using the -m option of metainit to attach d21 as
a submirror. The second submirror, d22, is in turn attached to form a two-way mirror. The
commands necessary to achieve this are:
# metainit d20 -m d21
# metattach d20 d22

As d22 is attached to d20, a full mirror synchronization occurs from submirror d21 to
submirror d22. If the assumption is that the contents of both submirrors are already the same,
or the intention is to create a new file system on d20, it is possible to create the volume without
having to perform submirror synchronization. Instead of a metainit followed by a metattach,
a single metainit can be supplied:
# metainit d20 -m d21 d22

This effectively gives the same functionality as VxVM does using the init=active argument
with vxassist.

Note – Again, mirror creation without full synchronization should be used with care, regardless
of the volume manager.

Volumes are presented to the Solaris Operating Environment under the /dev/md device path,
for example, the file system is mounted on /dev/md/dsk/d10. The Solaris Volume Manager
command metastat can be used to list the configuration and status of a volume.

Solaris Volume Manager Example — Creating a Mirror Using Soft Partitions on a Physical Slice
It is also possible to create a mirror using soft partitions on a physical slice. The example shown in
Figure 2-3 creates the mirrored volume d10 using disks that have been configured for use with
Solaris Volume Manager soft partitions. Slice s7 has been reserved for a state database, while
slice s0 maps to the rest of the disk. Solaris Volume Manager initializes the soft partition d13
by allocating one gigabyte of data from the space available in slice s0.
Figure 2-3: Solaris Volume Manager mirrored volume with soft partitions on a physical slice. [Diagram: the file system /data sits on volume d10, a mirror of the concat volumes/submirrors d11 and d12; each submirror is built from a soft partition (d13, d14) allocated from slice s0 of the disks c1t0d0 and c2t0d0, with slice s7 reserved for state database replicas.]



The following command defines the soft partition d13:
# metainit d13 -p -e c1t0d0 1g

The -p option creates a soft partition by allocating one gigabyte of the space available on
c1t0d0. The -e argument, a once-per-disk option, takes the disk c1t0d0 and repartitions it for soft
partitioning, creating slice s0 for the data and slice s7 for state database replicas. This command
also removes the entry in the vtoc for slice s2. As an alternative to using the -e option, the physical
drive can be manually partitioned, with a hard partition allocated for use by soft partitions. For
example, if c4t0d0s5 is created as a four-gigabyte physical partition, the d23 soft partition can
be created by issuing "metainit d23 -p c4t0d0s5 1g", which allocates one gigabyte of the space
within the physical partition c4t0d0s5 to create the volume.
For soft partitioning, two basic rules must be followed:
• Soft partitions have to be layered on traditional volumes or partitions
• A mirrored volume cannot be created directly using soft partitions as submirrors (the soft
partitions must first be made into a concatenation/stripe), as in the following commands:
# metainit d11 1 1 d13
# metainit d12 1 1 d14

The concatenated volume d11 is created using a single stripe, and that stripe is constructed
from a single logical volume, d13. (As before, the first digit after the volume name indicates the
number of stripes to be concatenated, and the second digit indicates the number of slices within
each stripe.) Similarly, d12 is created out of soft partition d14. The volumes d11 and d12 are
effectively submirrors. The top-level volume d10 is then created using these commands:
# metainit d10 -m d11
# metattach d10 d12

The RAID 1 volume d10 is created by using the -m option of metainit to attach d11 as a
submirror. The second submirror d12 is, in turn, attached to form a two-way mirror.
The Solaris Volume Manager approach to disk manipulation and volume creation is
analogous to the VERITAS approach of disk initialization and creation of subdisks (d13 and d14),
plexes (d11 and d12), and volumes (d10). If the total requirement is to create 10 volumes out of
the same two disks, then the number of configuration objects has potentially increased from five
to 50, regardless of whether Solaris Volume Manager or VxVM is used for storage management.

Solaris Volume Manager Example — Creating Soft Partitions on Logical Volumes


The flexible nature of the Solaris Volume Manager architecture allows soft partitions to be
configured on logical RAID-protected volumes. This can reduce the number of objects and thereby
lower the complexity of a storage configuration. By layering soft partitions on existing RAID
volumes, an administrator can greatly simplify the process of creating many partitions with
the required performance and availability features.
Figure 2-4 illustrates how soft partitions can be layered on top of a large mirrored volume
(volume d30) using the same two physical disks as in the previous examples. In this case, Solaris
Volume Manager effectively defines a RAID container that is then sliced with soft partitions.

Figure 2-4: Solaris Volume Manager allows soft partitions to be created on logical volumes, such as on the mirrored volume d30. [Diagram: the file systems /app1 and /app2 sit on soft partitions d100 and d101, which are allocated from the mirrored volume d30; d30 mirrors the submirrors d31 and d32, built on slice s5 of the disks c1t0d0 and c2t0d0, with slice s7 reserved for state database replicas.]

The concatenated volumes d31 and d32 are created out of slice s5 of the 36-gigabyte drives c1t0d0
and c2t0d0, respectively. (The slice s7 has been created at 15 megabytes, and reserved for state
database replicas.) The commands metainit and metattach are used to create the mirrored volume
d30 out of submirrors d31 and d32. At this point, the 36-gigabyte, RAID-protected container can
be sliced into a large number of soft partitions. The volume d100 is constructed by allocating one
gigabyte of the mirrored volume d30:
# metainit d100 -p d30 1g

If 10 volumes are required from this storage, it is now necessary to create only a total of 15
objects: five objects to create the RAID container d30, and one object for each soft partition
allocated from the container. In this way, soft partitioning in Solaris Volume Manager can help
to minimize the number of objects and simplify configuration complexity.
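The object counts quoted above can be checked with a little arithmetic. This is a worked example of the two layouts the text describes, assuming five objects per mirrored volume in the per-volume layout (two soft partitions, two concat submirrors, and one mirror) and five objects for the shared RAID container plus one soft partition per volume in the container layout.

```python
# Worked check of the configuration-object counts in the text.

def objects_per_volume_layout(n_volumes):
    # Each volume separately needs 2 soft partitions + 2 concats + 1 mirror.
    return 5 * n_volumes

def container_layout(n_volumes):
    # One shared RAID container (5 objects) plus one soft partition per volume.
    return 5 + n_volumes

print(objects_per_volume_layout(10))  # 50
print(container_layout(10))           # 15
```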
For soft partitioning, each component or extent that is used to create a soft partition is
defined by an offset and size parameter, which describes the physical boundaries of the extent in
the nominated partition or volume. 512-byte watermarks are also used to identify and record the
boundaries of each soft partition.
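The extent bookkeeping described here can be sketched as follows. This is an illustrative model only, assuming each soft partition extent is preceded by a single 512-byte (one-block) watermark recording its boundaries; the real on-disk layout used by Solaris Volume Manager may differ, and the extent sizes shown are hypothetical.

```python
# Illustrative sketch: lay out soft partition extents inside a volume,
# reserving one 512-byte block per extent for its watermark. Sizes are in
# 512-byte blocks.

def layout_soft_partitions(sizes):
    """Return (offset, size) extents, leaving one block per watermark."""
    extents = []
    offset = 0
    for size in sizes:
        offset += 1              # skip the watermark block for this extent
        extents.append((offset, size))
        offset += size
    return extents

print(layout_soft_partitions([100, 200]))  # [(1, 100), (102, 200)]
```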

Other Volume Management Features


Multipathing
To provide extra resilience and facilitate some degree of load balancing, storage is often
connected to a Sun™ server using multiple physical paths. Sun storage platforms — including the
Sun StorEdge T3 array, the Sun StorEdge A5200 array, and the Sun StorEdge 9900 series — offer
the ability to connect a single disk or logical unit number (LUN) to the same host using independent
physical connection points.
The Solaris 9 OE includes multipathing functionality. Sun StorEdge Traffic Manager software is
implemented via a scsi_vhci driver that allows the operating environment to present a multipathed
device as a single entity. With Sun StorEdge Traffic Manager enabled, physical paths are masked out
and a pseudo-device is presented instead. Consequently, the format command lists multipathed
disks or LUNs only once.

There are obvious benefits to having volume manager-independent multipathing functionality
built into the Solaris OE. Depending on circumstances, it may not even be necessary to use any
volume management software. By using the format command to slice the vtoc of a 73-gigabyte
Sun StorEdge 9900 LUN and Sun StorEdge Traffic Manager to manage the multiple devices, the
required resilience, load-balancing, and configuration flexibility may be provided.
Both Solaris Volume Manager and VxVM (Version 3.2 and higher) can create volumes using
Sun StorEdge Traffic Manager software-managed devices. VxVM also provides Dynamic Multipathing
(DMP), which can effectively duplicate the features provided by Sun StorEdge Traffic Manager.
When VxVM is using Sun StorEdge Traffic Manager software-managed devices, DMP, although still
active, provides little functionality but merely exists to satisfy the architectural requirements of VxVM.

Component Failure
To provide increased data availability for RAID 1 and RAID 5 volumes, both Solaris Volume Manager
and VxVM provide features to automate the replacement and resynchronization of failed volume
components.
VxVM provides the choice of two daemons for disk sparing:
• vxrelocd, which responds to failures at the subdisk level
• vxsparecheck, which responds to failures at the disk level

The use of disks and free space within a disk group can be influenced using the nohotuse
and spare flags with these commands.
Solaris Volume Manager uses the concept of hot spare pools. These are pools of dedicated
disk slices that can be used to replace failed components. A spare slice can belong to more than
one pool, and the order in which slices are added to the pool determines the order in which they
may be used to replace failed components. RAID 1 submirrors can be assigned to different hot
spare pools, providing a high level of control over the disk sparing strategy.
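The pool-ordering rule just described can be sketched as a small simulation. This is not the Solaris Volume Manager implementation; it assumes (as the text states) that spares are tried in the order they were added to the pool, and additionally that a spare must be at least as large as the failed component. The slice names and sizes are hypothetical.

```python
# Illustrative simulation of hot spare selection from a pool.

def pick_spare(pool, in_use, needed_size):
    """pool: ordered list of (slice, size); in_use: set of busy slices."""
    for slice_name, size in pool:
        if slice_name not in in_use and size >= needed_size:
            return slice_name
    return None  # no suitable spare: the component stays in error

pool = [("c3t0d0s0", 500), ("c3t1d0s0", 1000), ("c4t0d0s0", 1000)]
busy = {"c3t1d0s0"}            # already replacing another failed component
# c3t0d0s0 is too small and c3t1d0s0 is busy, so c4t0d0s0 is chosen.
print(pick_spare(pool, busy, 800))   # c4t0d0s0
```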

RAID 0+1 and RAID 1+0


To configure storage resources for better performance and availability, Solaris Volume
Manager can support both RAID 1+0 (mirrors that are then striped) and RAID 0+1 (stripes that are
then mirrored), depending on the underlying devices. The Solaris Volume Manager interface makes
it appear that all RAID 1 devices are strictly RAID 0+1, but Solaris Volume Manager recognizes the
underlying components and mirrors individually, when possible.

Note – Solaris Volume Manager cannot always provide RAID 1+0 functionality. However, in a
best practices environment, where both submirrors are identical and are made up of disk slices
(not soft partitions), RAID 1+0 volumes can be configured.

For example, with a pure RAID 0+1 implementation and a two-way mirror that consists of three
striped slices, a single slice failure can fail one side of the mirror. Assuming that no hot spares are
in use, a second slice failure can fail the mirror. Using Solaris Volume Manager, up to three slices
can potentially fail without failing the mirror, because each of the three striped slices is individually
mirrored to its counterpart on the other half of the mirror.

In Figure 2-5, a RAID 1 volume consists of two submirrors, which each consist of three
identical physical disks and the same interlace value. A failure of the three disks A, B, and F can
be tolerated because the entire logical block range of the mirror is still contained on at least one
good disk.
Figure 2-5: With a Solaris Volume Manager RAID 1+0 volume, access will succeed to portions of the disk where data is still available. [Diagram: a RAID 1 volume built from Submirror 1 (physical slices A, B, and C) and Submirror 2 (physical slices D, E, and F).]

If, however, disks A and D fail, a portion of the mirror’s data is no longer available on any disk
and access to these logical blocks will fail. When a portion of a mirror’s data is unavailable due to
multiple slice errors, access to portions of the mirror where data is still available will still succeed.
Under this situation, the mirror can act like a single disk that has developed bad blocks. The damaged
portions are unavailable, but the rest of the blocks can be accessed by the application.
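The failure scenarios in Figure 2-5 can be modeled directly. This is an illustrative model, not volume manager code: submirror 1 is slices A, B, C and submirror 2 is D, E, F; RAID 1+0 keeps the full logical block range as long as each column (A/D, B/E, C/F) retains one good slice, whereas a pure RAID 0+1 layout survives only while at least one whole submirror is intact.

```python
# Illustrative model of RAID 1+0 vs. pure RAID 0+1 availability after
# slice failures, for the six-slice mirror in Figure 2-5.

COLUMNS = [("A", "D"), ("B", "E"), ("C", "F")]

def raid10_available(failed):
    # Every mirrored column must keep at least one surviving slice.
    return all(a not in failed or b not in failed for a, b in COLUMNS)

def raid01_available(failed):
    # At least one whole submirror must be untouched.
    top = {a for a, _ in COLUMNS}
    bottom = {b for _, b in COLUMNS}
    return not (top & failed) or not (bottom & failed)

print(raid10_available({"A", "B", "F"}))  # True: every column has a survivor
print(raid01_available({"A", "B", "F"}))  # False: both submirrors are damaged
print(raid10_available({"A", "D"}))       # False: column A/D is fully lost
```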
VERITAS introduced RAID 1+0 functionality with the release of layered volumes in version 3.0.x
of VxVM. One reason for the introduction of layered volumes in VxVM was to enable the creation
of a striped volume in which each of the components of the stripe is a mirrored subvolume. This
provides the same functionality as Solaris Volume Manager, but the configuration can often
appear overly complex, and the number of VERITAS objects necessary to define the volume can
increase dramatically.

Volume Logging
All mirrored volumes in Solaris Volume Manager automatically benefit from volume logging.
Volume logging limits the amount of block copy activity necessary to keep the mirrored volumes
in sync. Volume logging is the default behavior of Solaris Volume Manager, and uses bitmaps
held in the state databases to track changes to submirrors. Consequently, all mirrored volumes
can be protected against the need to perform a full-mirror resynchronization in the event of a
system failure.
In VxVM, Dirty Region Logs (DRLs) are attached to volumes to track the recent changes to a
mirror, and are effectively bitmaps implemented as a logging plex. Following a system crash, if the
synchronization state of a volume cannot be determined, only the recently modified blocks are
copied between the plexes to synchronize the volume. DRLs must be manually attached to
the volume, and are only used in a system crash scenario. Within Solaris Volume Manager,
the same bitmap technology can be used to speed up the resynchronization process when
performing online backups after splitting a submirror. This functionality is not enabled by default
within VxVM — it can be activated only by purchasing a VERITAS VxVM Fast Mirror Resync license.

Management
To make it easier for administrators to accomplish routine tasks, and to simplify the configuration
and management of storage resources, both Solaris Volume Manager and VxVM support
command-line and graphical user interfaces (CLI and GUI).

The Solaris Volume Manager GUI, also called the Enhanced Storage Tool, is integrated
with Solaris Management Console — a Java™ technology-based interface that administrators use
to access other administrative tools for the Solaris OE. The look and feel of Solaris Management
Console is customizable, providing a consistent, intuitive interface for all Solaris management
tools. Using the Enhanced Storage Tool, wizards guide administrators through easy-to-follow,
step-by-step instructions, automating many of the common storage management operations.
In addition, as volumes are defined, the actual commands used to create the volumes can be
captured and saved. These commands can then be reused in scripts or as an aid to learning Solaris
Volume Manager commands.
Solaris Volume Manager provides mdmonitord, which monitors storage configurations and
identifies failed volume components. Solaris Volume Manager also supports SNMP reporting,
allowing volume status information to be propagated to SNMP-based management frameworks.
Within VxVM, volume error notification can be handled by either the hot relocation or hot spare
daemons, vxrelocd and vxsparecheck. VxVM does not provide SNMP support as part of the core
product.
Solaris Volume Manager includes support for an application programming interface (API)
that allows standards-based management of storage resources. This API adheres to the Web-Based
Enterprise Management (WBEM) infrastructure and uses the Common Information Model (CIM)
object model — standards that are specified by the Distributed Management Task Force (DMTF).
For more information about DMTF, see www.dmtf.org.
CIM defines the data model, or schema, that describes:
• The attributes of Solaris Volume Manager devices and operations against them
• The relationships among various Solaris Volume Manager devices
• The relationships among Solaris Volume Manager devices and other aspects of the operating
system, such as file systems

The CIM model is made available through the Solaris WBEM SDK, which is a set of Java
technology-based APIs that allow access to system management capabilities represented by CIM.
The CIM/WBEM API provides a public, standards-based programmatic interface to monitor and
configure storage resources with Solaris Volume Manager.
VxVM does not provide CIM/WBEM functionality as part of the core product.

Disk Groups and Disk Sets


VxVM disk groups and Solaris Volume Manager disk sets allow related disks and volumes to be
managed together as a single namespace. Using disk groups in VxVM and disk sets in Solaris
Volume Manager can help facilitate management of:
• Namespace
• Storage in clustered environments
• SAN fabric-connected storage

With VxVM, disk groups can be exported from one node and imported on another. VxVM disk
groups and the volumes defined within them are device independent. Multihost-attached storage
may appear on different controllers on each attached node. VxVM masks this difference and
establishes a mapping between physical devices and VERITAS objects on import of the disk group.

Solaris Volume Manager disk sets support similar functionality, where ownership of the disk
set can be released by one node and taken by another. When a disk is placed into a disk set, its vtoc
is changed to include a slice s7, sized to include a state replica database for the disk set. By
default, up to four disk sets, each with 128 volume names (d0 to d127), can be created, which
is usually adequate for most implementations. The kernel configuration file /kernel/drv/md.conf
can be tuned to increase the number of volume names per disk set and the number of disk sets
supported by the system.
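As an illustrative sketch (the values shown are hypothetical, and a reconfiguration boot is
required for changes to take effect), the limits are raised by editing the driver property line in
/kernel/drv/md.conf:

   name="md" parent="pseudo" nmd=256 md_nsets=8;

Here, nmd sets the number of volume names available in each set, and md_nsets sets the total
number of disk sets (including the local set) the system supports. Raising these values consumes
additional kernel memory, so they should be increased only as far as actually needed.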
Solaris Volume Manager disk sets can enhance namespace management of storage
resources. A common application of Solaris Volume Manager disk sets is in clustered environments,
since disk sets are not automatically imported at boot time. Sun Cluster 3.0 software provides a
device-independent (DID) layer to make use of Solaris Volume Manager disk set functionality.
The DID layer can help present all devices to each node in the cluster using the same name. This
feature is important with Solaris Volume Manager due to the disk set relying on controller/target/
device references for volume component definitions. Disk sets can be used outside of the cluster
framework, but care should be taken to address all shared storage devices using the same address.
Fabric-attached storage can be more easily managed within a disk set-controlled namespace
that is separate from other local storage. SAN fabric-connected storage is not usually available to
the system as early in the boot process as other devices (such as SCSI and IDE disks). When this
storage is not defined within a disk set, Solaris Volume Manager reports logical volumes on
the fabric as unavailable at boot. However, by adding the storage to a disk set and then using disk
set tools to manage the storage, the problem with boot time availability is avoided.
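As a sketch of this approach (the disk set name, host name, and device names below are
hypothetical), fabric-attached devices are grouped into their own disk set with the metaset
command:

   # metaset -s sanset -a -h hosta
   # metaset -s sanset -a c4t1d0 c4t2d0

The first command creates the disk set sanset and adds hosta as a host that can take ownership
of the set; the second adds the fabric-attached disks, repartitioning each to reserve slice 7 for
state database replicas.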

Coexistence of Solaris Volume Manager and VxVM


Historically there has been much debate around the subject of Solstice DiskSuite/Solaris Volume
Manager and VxVM coexistence. Solaris Volume Manager and VxVM can be used together, and
some organizations have made a strategic decision to use both.
One approach to storage management is to select the volume manager that most successfully
meets the business needs and use it exclusively. This translates into a requirement for a single set
of skills and knowledge, and only one set of software packages to administer. However, a key
advantage to using Solaris Volume Manager is the ability to perform upgrades with a mirrored boot
disk. Combined with the simplicity of the Solaris Volume Manager architecture, this capability forms
a strong argument for migrating completely to Solaris Volume Manager — or at least using
Solaris Volume Manager to manage the boot disks while VxVM manages the data disks.
VxVM mandates the use of a rootdg disk group to act as a bootstrap for the rest of the VxVM
configuration. Common practice is to place the root disk and its mirror into rootdg, and put
everything else into application or service-related disk groups. If Solaris Volume Manager is used
for the boot disks, it is still necessary to identify at least two disks to create rootdg. This can prove
costly in terms of allocating two entire disks for that purpose. An alternative solution, depending
on the particular storage configuration, may be to create small LUNs for this purpose. For example,
Sun StorEdge T3 arrays (with controller firmware 2.1 and above) provide the ability to perform
slicing, allowing a LUN as small as one gigabyte to be presented out of the array.

DMP and Sun StorEdge Traffic Manager can also coexist. Both products can be configured to
manage particular device paths. Beginning with VxVM 3.2, DMP can work with the pseudo-device
paths presented by Sun StorEdge Traffic Manager. The choice of which solution to use is dependent on
factors such as the type of storage, host bus adapters, and whether the environment is clustered.
Another area of concern when using VxVM and Solaris Volume Manager together is how
to prevent both products from being used to manage the same devices. The standard practice
of having well-documented configurations and effective change-control processes can help to
alleviate any such problems.

Summary of Solaris Volume Manager and VxVM Features


This chapter discussed and compared key characteristics of Solaris Volume Manager and VxVM.
Figure 2-6 summarizes the features of these products:
Figure 2-6: A Summary of Solaris Volume Manager and VxVM Features

Feature                                Solaris Volume Manager                VxVM

RAID levels                            0, 1, 5, 0+1, 1+0                     0, 1, 5, 0+1, 1+0

Hot spares                             Yes                                   Yes

Seamless mirrored root upgrade         Yes                                   No

Volume growing                         Yes                                   Yes

Upgradeability                         Easy; Solaris Volume Manager is       Complex
                                       part of the Solaris Operating
                                       Environment

Support for a wide range of            Yes, any that support IEEE            Yes
arrays and storage                     unique device IDs

SNMP support                           Yes                                   No

WBEM/CIM                               Yes, open API for WBEM/CIM            No
                                       integration

Mirror snapshot with fast resync       Yes                                   No, must purchase
support (for backup)                                                         additional license

RAID 1 logging                         Yes                                   Yes

RAID 5 logging                         Yes                                   Yes

Management GUI                         Yes, part of Solaris Management       Yes
                                       Console and includes
                                       configuration wizards

GUI security                           Yes, via Solaris Management           Yes, via manipulation of
                                       Console user profiles                 /etc/group on each server

CLI                                    Yes                                   Yes

Fabric storage                         Yes                                   Yes

Cluster                                Yes, Sun Cluster 3.0 is designed      Yes, Sun and VERITAS
                                       around Solaris Volume Manager         cluster products
                                       functionality

Multipathing                           Works with Sun StorEdge Traffic       Works with Sun StorEdge Traffic
                                       Manager devices                       Manager or VERITAS DMP
                                                                             managed devices

Tunable mirror read/write policy       Yes                                   Yes

RAID 0 tuning                          Yes, interlace size                   Yes, stripe unit

Chapter 3

Deploying Solaris Volume Manager
This chapter begins by examining a hypothetical environment in which Solaris Volume Manager
is deployed, and describes how certain configuration characteristics can help to improve data
availability and performance. To capitalize on the benefits of Solaris Volume Manager, many data
center managers are now contemplating a migration from VxVM. This chapter explores migration
approaches for consideration. It also offers some planning guidelines and discusses several typical
migration scenarios.

An Example of Storage Management in an N-Tier Infrastructure


Just as the Solaris OE can scale across a broad spectrum of Sun desktops and servers, Solaris
Volume Manager is a scalable storage management solution that can be deployed as the volume
management solution for a broad range of Sun platforms across the enterprise.
Many business solutions are implemented using a scalable, n-tiered architecture, similar to
the example depicted in Figure 3-1. In this hypothetical environment, client machines (including
Sun Blade™ 100 workstations and Sun Ray™ appliances) connect to Sun Fire™ 280R application
servers. These application servers access highly available RDBMS services, which are housed on a
pair of clustered Sun Fire V880 servers with Sun StorEdge T3 arrays. At night, a data extract is
loaded into a data mining application that resides on a Sun Fire 15K server, with mainframe-class
storage provided by a Sun StorEdge 9960 system. Throughout the enterprise, Solaris Volume
Manager can be used in the storage infrastructure to help provide data redundancy, configuration
flexibility, and improved performance.

Figure 3-1: In an n-tiered customer environment, Solaris Volume Manager can be used at all
levels to protect data and enhance application availability. [The figure shows a client tier of Sun
Blade 100 workstations and Sun Ray appliances; a Sun Fire 3800 server with a Sun StorEdge D240
media tray and Sun StorEdge A5200 arrays; Sun Fire 280R application servers; a clustered database
tier of Sun Fire V880 servers with Sun StorEdge T3 arrays; and a data mining tier consisting of a
Sun Fire 15K server and a Sun StorEdge 9960 array.]

At the client level, Solaris Volume Manager can be used to mirror internal boot disks of the
Sun Blade 100 workstations and boot disks for the Sun Ray appliance server, which reside on a Sun
StorEdge D240 media tray. The Sun Fire 3800 server for the Sun Ray appliances also provides home
directory storage, shown in Figure 3-1 on a pair of Sun StorEdge A5200 arrays. Solaris Volume
Manager can configure these arrays with RAID 1+0 volumes, striping each volume across five disks
in each array, to improve storage performance. Two hot spare pools can also be created — one
containing a number of slices from the first array, and the other comprising slices from the
second array. Consequently, each submirror is assigned to the hot spare pool that provides spare
slices on the same array.
In this environment, application requirements mean that the window of opportunity to
perform backups on the home file systems is limited. To back up this data, it is necessary to stop
the applications, take the backup, and then restart the applications. To reduce the outage
necessary for backups, Solaris Volume Manager three-way mirrored volumes can be created for
the home file systems. The command metaoffline can be used to detach a mirror, allowing that
mirror to be available for backup purposes. When the backup is complete, the submirror can be
placed online. To fully resynchronize all three submirrors, only the blocks that have changed must
be recopied, because of the way Solaris Volume Manager tracks affected blocks.
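The backup cycle described above might be sketched as follows, assuming a three-way mirror
d30 with submirror d33 (the volume names and tape device are hypothetical):

   # metaoffline d30 d33
   # ufsdump 0f /dev/rmt/0 /dev/md/rdsk/d33
   # metaonline d30 d33

While d33 is offline, Solaris Volume Manager records the regions of the mirror that change, so
the metaonline step triggers an optimized resynchronization of only those regions rather than a
full copy.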
At the application server level, Solaris Volume Manager can manage boot disks and application-
specific storage on the Sun Fire 280R servers.
At the back end, clustered Sun Fire V880 servers with Sun StorEdge T3 arrays provide application
data storage and management for highly available RDBMS services. Sun Cluster 3.0 software,
designed to support Solaris Volume Manager integration, provides the cluster framework. The
database storage volumes can be configured in Solaris Volume Manager disk sets, which can
then be failed over between cluster nodes with the RDBMS application, helping to maintain
both availability and optimal application-to-storage I/O performance.

Nightly, an extract of data can be taken from the RDBMS system and loaded into a data
mining application running on a Sun Fire 15K server domain. The Sun StorEdge 9960 array provides
mainframe-class storage — each hardware RAID 5 LUN can be presented on two paths from the
Sun StorEdge 9960 array. Sun StorEdge Traffic Manager Software can be used to provide load
balancing and path redundancy. Solaris Volume Manager can then be used to present large,
striped volumes to the data mining application. The stripes can be defined with an appropriate
interlace size across the many independently pathed LUNs, which helps to optimize the available
server-to-storage bandwidth.
Throughout the enterprise, Solaris Volume Manager enables storage management using
the same intuitive user interface, Solaris Management Console, regardless of the Sun platform or
storage components. On any of these systems — from Sun Blade workstations to the Sun Fire 15K
data center server with terabytes of attached storage — Solaris Management Console can be used to
configure and manage Solaris Volume Manager volumes. Solaris Volume Manager is a comprehensive,
integrated, scalable, and flexible solution that helps to enable data management across a wide
spectrum of Sun server and Sun and third-party storage products.

Migrating to Solaris Volume Manager


An obvious time to consider adopting Solaris Volume Manager as part of a volume management
strategy is at the introduction of new servers into the infrastructure. In this case, the server and
storage configuration can be specifically designed around Solaris Volume Manager, file systems
can be created, and applications can be loaded. In many environments, server and storage
consolidation efforts offer an approach to reducing overall TCO. The procedures and processes
necessary to complete a consolidation provide an ideal window of opportunity to migrate to
Solaris Volume Manager software-managed storage.
A more difficult scenario is a migration of an existing VxVM implementation to Solaris
Volume Manager. Should a migration take place between volume managers on a production
system, and if so, how should it occur? Before a migration takes place, IT managers should analyze
both the business and technical requirements, which can influence the migration approach. The
following issues should be considered:
• Availability: To perform the migration, how much interruption to service is acceptable to the
organization? Is this a 24x7 environment? Are maintenance windows available? The amount
of downtime can have a major influence on the migration approach taken.
• Application Constraints: Do the applications use file systems or raw volumes? If using raw
volumes, how easy is it to reconfigure the application to reference /dev/md device paths
instead of /dev/vx device paths?
• I/O Requirements: Does the existing volume layout make the best use of storage inventory?
When new systems are installed, it is often difficult to predict a storage configuration that is
optimal for an application platform. Migration can provide an ideal opportunity to analyze
current configurations to define the I/O requirements for the new volume management layout.
• Capacity Management: How much storage will the applications require in the next 18 to 24
months? Does the existing volume configuration meet these requirements, or is there a need
to consider increasing the number or size of volumes?
• Storage Technology Refresh: Is this an appropriate time to consider the benefits of performing
a refresh of existing storage technology?

• Clustered Environment: Does the cluster technology offer a choice of volume manager? Sun
Cluster 3.0 software, for example, allows one volume manager to be used for the local boot disk
environment and another for data storage managed by the cluster framework. It is possible to
migrate volume managers in an existing Sun Cluster 3.0 implementation, but the migration can
be more complex than in nonclustered environments.
• VERITAS Cluster Volume Manager: Is VERITAS Cluster Volume Manager (CVM) in use? Oracle
Parallel Server environments may rely on VERITAS CVM functionality to provide shared disk
groups that can be imported on more than one node at any point in time. This feature is
not currently available with Solaris Volume Manager in clustered environments.
• Dependencies: Are there any dependencies on VERITAS features, such as VERITAS Volume Replicator?
If so, does Solaris Volume Manager offer similar functionality? (For example, Solaris Volume
Manager inherently provides bitmap logging to synchronize mirrors, which can obviate the
need for VERITAS Fast Mirror Resync.) Are there volume manager-independent solutions that can
help, such as the Sun StorEdge Availability Suite?

Sun Professional Services consultants can help customers address many of these issues
and assist with planning for a migration or installation. Planning is key to a successful migration,
including the definition of a possible regression path. Sun consultants are experienced in helping
customers perform migrations and upgrades in 24x7 and clustered environments. Performance
analysis services are also available — Sun consultants can profile the I/O characteristics of existing
applications and suggest suitable volume layouts and system tuning parameters that can help to
optimize the Solaris Volume Manager configuration.
In addition, Sun StorEdge Resource Management Suite can be used to understand the current
storage usage characteristics and help predict future demand. The product has many advanced
features, including the ability to probe database systems to analyze the actual amount of storage
in use. Sun consultants use this tool to help customers design appropriate storage and volume
management strategies.

Migration Approaches
Two primary approaches can be used as the basis for a migration from VxVM to Solaris Volume
Manager:
• Backup/Restore Migration: If the organization can afford some degree of downtime, probably
the easiest and most straightforward migration method is to back up all relevant data to tape,
unmount the application file systems, deport data disk groups, unmirror and unencapsulate the
boot disk, and remove VxVM. At this point, system disk file systems and swap can be running
directly on top of disk partitions. Solaris Volume Manager state databases can then be created,
and Solaris Volume Manager can be configured to manage the boot disks. Finally, Solaris
Volume Manager volumes can be created to hold application file systems, and data restored
from tape.
• Staged Migration: In many customer environments, it is not possible to perform a full backup/
restore migration. As discussed previously, it is possible to run both volume managers together,
and in some cases, this may be the best way to stage the migration process.
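At a high level, the backup/restore approach might be sketched as follows (the device names,
disk group name, and ordering are illustrative, and boot disk unencapsulation details vary with
the VxVM version in use):

   # ufsdump 0f /dev/rmt/0 /data
   # umount /data
   # vxdg deport datadg
   # /etc/vx/bin/vxunroot
   # pkgrm VRTSvxvm
   # metadb -a -f -c 3 c0t0d0s7

After the state database replicas are created with metadb, Solaris Volume Manager mirrors can
be built for the boot disk, data volumes can be defined, and the application data can be restored
from tape.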

In the past, some administrators have developed scripts to migrate between volume
managers, but given the evolving architectures of the products and the virtually unlimited number
of possible configuration layouts, creating and maintaining generic conversion scripts is
impractical. Parts of the migration process can certainly be scripted, but the scripts may need
to be customized to reflect specific migration objectives.
A staged migration must take into account two primary tasks:
• How to migrate the data disk volumes
• How to migrate the boot disk volumes

It is important to remember that in order to use VxVM, rootdg always needs to be active,
ideally consisting of two disks. This means it is probably best to migrate the data volumes first,
so that all that is left under VxVM control is the boot disk and its mirror in the rootdg disk group.
Otherwise, it is necessary to select two nonsystem disks to remain in rootdg until VxVM is no
longer required and can be removed.
When is the best time to migrate? If the server is currently running on the Solaris 8 OE with
VxVM, the migration can be performed as part of the upgrade to the Solaris 9 OE — this alleviates
the need to remove VxVM under the Solaris 8 OE and then reinstall it on the Solaris 9 OE. Instead,
one approach is to migrate from VxVM to Solstice DiskSuite 4.2.1 software (the precursor of Solaris
Volume Manager), and then upgrade to the Solaris 9 OE and Solaris Volume Manager, eliminating
the need to unmirror boot disks or hide the volume configuration.

Data Disk Migration


The complexity of underlying volume structures and available resources can determine how data
volumes are best migrated from VxVM to Solaris Volume Manager. After the Solaris Volume
Manager state databases have been created, a number of possible approaches can be followed.
The remainder of this chapter describes several typical scenarios for migrating data disks:
• Scenario 1: Data disk migration of a mirrored file system
• Scenario 2: Data disk migration of a striped file system
• Scenario 3: Data disk migration to new storage on the same server, with minimal downtime
(using the Point-in-Time Copy capability from Sun StorEdge Availability Suite)
• Scenario 4: Data disk migration during the course of a storage and server consolidation, with
remote data movement and minimal downtime (using Remote Mirror functionality in Sun
StorEdge Availability Suite)

Note – It is important to note that due to the potential complexity of a VxVM configuration, the
methods described here may not be effective in all environments. Always back up data prior to
beginning an activity of this nature, and seek consulting assistance as necessary.

Scenario 1: Mirrored Data Volume Migration


As shown in Figure 3-2, the UFS file system /data resides on the mirrored volume v_data in the
VxVM disk group datadg. (For the sake of simplicity, the two disks that hold the configuration
do not contain any other volumes.) In this scenario, to reduce the time necessary to migrate to
Solaris Volume Manager, one plex can be removed from the volume, freeing up its disk, which can
then be used to initialize the Solaris Volume Manager volume. Data can be directly copied from
the VxVM volume to the Solaris Volume Manager volume. Once the data has been copied, the
VxVM volume can be destroyed and the vacated disk can be used to create a second submirror.
The specific steps to perform this migration are listed here.
Figure 3-2: Migration of a mirrored data volume from VxVM to Solaris Volume Manager. [On the
VxVM side, the file system /data resides on volume v_data, with plexes v_data-01 and v_data-02
built from subdisks disk01-01 and disk02-01 on disks c1t0d0 and c2t0d0. On the Solaris Volume
Manager side, /data resides on mirror volume d10, with submirrors d11 and d12 built on slice s5
of the same two disks.]

Steps:
1. Take a full file system backup of /data to tape.
2. Display the configuration of v_data using vxprint:
# vxprint -g datadg -ht
dm disk01 c1t0d0s2 sliced 3590 17674902 -
dm disk02 c2t0d0s2 sliced 3590 17674902 -

v v_data - ENABLED ACTIVE 2097152 SELECT - fsgen


pl v_data-01 v_data ENABLED ACTIVE 2100735 CONCAT - RW
sd disk01-01 v_data-01 disk01 0 2100735 0 c1t0d0 ENA
pl v_data-02 v_data ENABLED ACTIVE 2100735 CONCAT - RW
sd disk02-01 v_data-02 disk02 0 2100735 0 c2t0d0 ENA

3. Remove the first plex, v_data-01, from the volume:


# vxassist -g datadg remove mirror v_data disk01

4. Check that there are no other volumes residing on the disk, then remove the disk from the disk
group using either vxdiskadm or vxdg:
# vxdg -g datadg rmdisk disk01

5. Now remove disk01 from the VxVM configuration. This command will remove the private
and public partitions from the disk:
# /etc/vx/bin/vxdiskunsetup c1t0d0

6. The disk c1t0d0 can be partitioned to create the Solaris Volume Manager submirror d11. Use
format or fmthard to create an underlying partition, s5, of the required size. The partition must
be at least as big as the original VxVM volume. This might also be a good time to consider
increasing the size of the volume to support future capacity demands.

7. Initialize the concatenated volume d11:


# metainit d11 1 1 c1t0d0s5

8. Create a one-way Solaris Volume Manager mirror volume d10 out of submirror d11:
# metainit d10 -m d11

9. Create a file system on d10 to hold the contents of /data. Again, this may be a good time
to migrate from the VERITAS file system VxFS to UFS:
# newfs /dev/md/rdsk/d10

10. Temporarily mount d10 on /mnt to copy data between VxVM and Solaris Volume Manager:
# mount -F ufs /dev/md/dsk/d10 /mnt

11. Stop any I/O to /data and then copy the file system contents to /mnt. In this example, cpio
is used:
# cd /data
# find . -mount | cpio -pdumv /mnt

12. Once the file system data has been copied, unmount both /data and /mnt, and remove
the VxVM volume v_data:
# umount /data
# umount /mnt
# vxassist -g datadg remove volume v_data

13. Check that there are no other volumes residing on the disk disk02, then remove the disk
from the disk group using either vxdiskadm or vxdg:
# vxdg -g datadg rmdisk disk02

14. Now remove disk02 from the VxVM configuration. This command will remove the private
and public partitions from the disk:
# /etc/vx/bin/vxdiskunsetup c2t0d0

15. The disk c2t0d0 can now be partitioned to create the Solaris Volume Manager submirror d12.
Use format or fmthard to create an underlying partition, s5, of the required size. The partition
should be the same size as that used for d11.

16. Initialize the concatenated volume d12:


# metainit d12 1 1 c2t0d0s5

17. Attach d12 to d10 to form a two-way mirror. This will start resynchronization of the submirrors:
# metattach d10 d12

18. Change the /etc/vfstab entry for /data so that the block device is now /dev/md/dsk/d10
and the raw device is now /dev/md/rdsk/d10. Mount the volume under the Solaris OE:
# mount /data
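For reference, the resulting /etc/vfstab entry might look like the following sketch (the mount
options shown are illustrative):

   /dev/md/dsk/d10   /dev/md/rdsk/d10   /data   ufs   2   yes   -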

This procedure could be scripted, potentially resulting in less downtime and interruption
to service than using the backup/restore method and waiting for the data to restore from tape.
In this scenario, it may be possible to switch volume managers without having to move any
data, consequently helping to minimize downtime even further. Hard partitions can be defined
around the area of disk used by each of the VxVM subdisks (using the VERITAS command
vxmksdpart), enabling a mirrored Solaris Volume Manager volume to be created on the disk
partitions. However, it is recommended that Sun consultants with experience in performing
complex data migrations between volume managers be engaged to assist in this process.

Scenario 2: Striped Data Volume Migration


The UFS file system /data in Figure 3-3 resides on the VERITAS RAID 0 volume v_data in the disk
group datadg. The three disks that hold the configuration are used to present many volumes to
the operating environment. In this case, the disks are protected by hardware RAID 5 (such as in
a Sun StorEdge A1000 array), and software RAID 0 is used to increase performance.
Figure 3-3: Striped data volume migration. [On the VxVM side, the file system /data resides on
volume v_data, whose plex stripes subdisks sd1, sd2, and sd3 across disks c1t0d0, c2t0d0, and
c3t0d0. On the Solaris Volume Manager side, /data resides on soft partition volume d100, built
on stripe volume d20, which stripes slice s0 of the same three disks; a small slice s7 on each disk
holds Solaris Volume Manager state database replicas.]

The goal of the migration is to remove the disks from VxVM control and then use them in
Solaris Volume Manager to create a stripe across all three drives. Soft partitions are then
created on top of the striped volume to support the file systems.
Before the migration begins, the data is backed up to tape prior to the creation of the Solaris
Volume Manager volumes. When the disks are removed from VxVM control, the vtocs are changed so
that a 15-megabyte partition s7 is created (starting at cylinder 0) for the Solaris Volume Manager
state database replicas. This also helps protect the vtoc.

Note – In VxVM, the stripe unit size defines how much data is written to a column before moving
to the next column in the stripe. The Solaris Volume Manager equivalent is the interlace size,
which should be optimized according to application I/O characteristics. A migration can provide
an appropriate opportunity to create Solaris Volume Manager volumes with a different interlace
size.

Steps:
1. Look at the configuration of the VxVM stripe (for example, using vxprint -g datadg -ht) and
take note of the stripe unit size. In this example, it is 128 blocks:
v v_data - ENABLED ACTIVE 2097152 SELECT v_data-01 fsgen
pl v_data-01 v_data ENABLED ACTIVE 2100821 STRIPE 3/128 RW
sd disk01-01 v_data-01 disk01 0 700245 0/0 c1t0d0 ENA
sd disk02-01 v_data-01 disk02 0 700245 1/0 c2t0d0 ENA
sd disk03-01 v_data-01 disk03 0 700245 2/0 c3t0d0 ENA

2. Copy the contents of /data to tape, unmount it, and then remove the volumes and disks
from VxVM control:
# umount /data
# vxassist -g datadg remove volume v_data

3. Repeat the steps for all other volumes residing on the three disks, then remove the disks
from the disk group using either vxdiskadm or vxdg:
# vxdg -g datadg rmdisk disk01 disk02 disk03

4. Now remove disk01, disk02, and disk03 from the VxVM configuration. These commands
will remove the private and public partitions from the disks:
# /etc/vx/bin/vxdiskunsetup c1t0d0
# /etc/vx/bin/vxdiskunsetup c2t0d0
# /etc/vx/bin/vxdiskunsetup c3t0d0

5. The disks c1t0d0, c2t0d0, and c3t0d0 can now be partitioned to create the Solaris Volume
Manager stripe volume d20. Use format or fmthard to create an underlying partition, s7, sized at
15 megabytes to hold the Solaris Volume Manager state database replicas, with partition s0
occupying the rest of the disk and containing the stripe components. Then create the initial
state database replicas on the s7 slices:
# metadb -a -f c1t0d0s7 c2t0d0s7 c3t0d0s7
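As a sizing check, the 15-megabyte replica slice works out to 30720 sectors. The fmthard input
line below is only a sketch: the tag, flag, and starting sector are hypothetical and depend on
the actual disk geometry reported by format:

```shell
# Size of the s7 replica slice in 512-byte sectors.
REPLICA_MB=15
SECTORS=$(( REPLICA_MB * 1024 * 1024 / 512 ))
printf 's7 needs %d sectors\n' "$SECTORS"

# A hypothetical fmthard(1M) vtoc input line for partition 7:
#   part tag flag start size   (start/size must be cylinder-aligned in practice)
printf '7 0 00 0 %d\n' "$SECTORS" > /tmp/vtoc.s7
cat /tmp/vtoc.s7
```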

6. Initialize the stripe volume d20, specifying an interlace value of 128 blocks:
# metainit d20 1 3 c1t0d0s0 c2t0d0s0 c3t0d0s0 -i 128b

7. Create the volume d100 as a soft partition of the stripe volume d20, to be used for the /data
file system. Create a soft partition for each of the volumes that are required on these disks:
# metainit d100 -p d20 200m
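Soft partitions let several volumes share the stripe, so it is worth checking how many fit. A
back-of-the-envelope sketch for this example (it ignores the small per-extent header that soft
partitions add, so the real count can be one lower):

```shell
# How many 200-MB soft partitions fit in the ~1-GB stripe of this example?
STRIPE_BLOCKS=2097152                        # volume length from the vxprint listing
SOFT_PART_MB=200                             # size passed to 'metainit d100 -p d20 200m'
SOFT_PART_BLOCKS=$(( SOFT_PART_MB * 2048 ))  # 2048 512-byte blocks per megabyte
printf '%d soft partitions fit\n' $(( STRIPE_BLOCKS / SOFT_PART_BLOCKS ))
# prints: 5 soft partitions fit
```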

8. Create the UFS file system using newfs:
# newfs /dev/md/rdsk/d100

9. Change the /etc/vfstab entry for /data, so that the block device is now /dev/md/dsk/d100
and the raw device is now /dev/md/rdsk/d100, and mount the file system:
# mount /data

10. Restore the data from the backup into the /data file system.

Scenario 3: Migrating Data to New Storage On the Same Server With Minimal Downtime
A good time to accomplish a data migration between volume managers is when storage hardware
is added or replaced. In this scenario, legacy storage technology is replaced with new Sun
StorEdge 9900 series hardware. It may be possible to have both the old and new storage connected
to the same machine for the period of the migration. Backup and restore provides one route to
migrating the data, but the time necessary to perform the restore may prove prohibitive.
One solution to this problem is to use Point-in-Time Copy functionality, part of the Sun StorEdge
Availability Suite. This product works independently of volume management, so it can be used to
snapshot data presented by VxVM, Solaris Volume Manager, or even non-volume-managed
disks. The Point-in-Time Copy functionality sits in the driver stack between the file system and
the volume manager, and uses bitmap technology to track the synchronization state between
master and shadow volumes, limiting the amount of data that must be copied between volumes
when the migration begins. (See sun.com/storage/software/availability for more detail on Sun
StorEdge Availability Suite.) Figure 3-4 illustrates how Point-in-Time Copy can replicate the contents
of a VxVM volume v_data (the Point-in-Time Copy master volume) to Solaris Volume Manager
volume d10 on a hardware RAID 5 LUN (the Point-in-Time Copy shadow volume).

Figure 3-4: Compared to backup/restore methods, Point-in-Time Copy can often reduce the time
required to migrate data. The figure shows the /data file system on the VxVM volume v_data (the
Point-in-Time Copy master, with plexes plex1 and plex2 on subdisks sd1 and sd2 of c1t0d0 and
c2t0d0) replicated to the Solaris Volume Manager volume d10 (the Point-in-Time Copy shadow),
built on a partition of a hardware RAID 5 LUN.

After the Point-in-Time Copy software has been installed, the VERITAS volume can be
quiesced during a quiet period, and a full independent snapshot can be copied to the Solaris
Volume Manager volume. Following the initial copy, a bitmap tracks any changes to the VERITAS
volume. Then, when it is time to complete the migration, an update synchronization can take
place, copying only the changed blocks to the Solaris Volume Manager volume. In this way, the
Point-in-Time Copy capability helps to minimize the amount of downtime needed to migrate data
to the new storage.
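This flow might be driven with the Instant Image CLI that ships with the Availability Suite. The
sketch below is only illustrative: the iiadm syntax shown and the bitmap volume name d11 are
assumptions, so verify them against the Point-in-Time Copy Software Administration and
Operations Guide before use.

```shell
# Sketch only: enable an independent Point-in-Time Copy set whose master is
# the VxVM volume and whose shadow is the Solaris Volume Manager volume.
# Assumed syntax; d11 is a hypothetical bitmap volume.
iiadm -e ind /dev/vx/rdsk/datadg/v_data /dev/md/rdsk/d10 /dev/md/rdsk/d11

# When it is time to complete the migration, after quiescing v_data,
# copy only the blocks that changed since the initial snapshot:
iiadm -u s /dev/md/rdsk/d10
iiadm -w /dev/md/rdsk/d10   # wait for the update synchronization to finish
```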

Scenario 4: Storage and Server Consolidation, Remote Data Movement, Minimal Downtime
If data migration is required as a part of a consolidation effort, it may be necessary to move the
data to Solaris Volume Manager volumes that are managed on a different host. Again, backup
and restore is a possible approach, but may result in a significant amount of downtime.
Remote Mirror capability, also part of the Sun StorEdge Availability Suite, operates in a
similar way to Point-in-Time Copy. Remote Mirror sits between the file system and volume manager,
and can be configured to replicate VERITAS volumes on one Sun server to Solaris Volume Manager
volumes on another, using any transport that supports TCP/IP. The replication can occur in
synchronous and asynchronous modes, and synchronization states are tracked using bitmap
technology. When all volumes are synchronized, the replication can be stopped (or put in logging
mode) and the Solaris Volume Manager volumes can be mounted on the new server.
The Sun StorEdge Availability Suite includes both Point-in-Time Copy and Remote Mirror
capabilities — powerful tools that can be used to minimize the impact of a volume migration. To
facilitate smooth transitions and develop useful storage configurations, Sun consultants can help
customers analyze migration requirements and assist with migrations that involve these products.

Boot Disk Migration


As discussed earlier in this chapter, it is common practice to place VxVM boot disks (such as a boot
disk and its mirror) in the rootdg disk group. In addition to data disks, a migration plan should
consider how and when to migrate the boot environment. Solaris Volume Manager software-
managed boot disks can help to simplify subsequent upgrades to the operating environment
— a significant advantage of migrating to Solaris Volume Manager.

Because upgrading the Solaris OE (or VxVM) usually requires unencapsulating the boot disks,
an upgrade provides a window of opportunity in which to migrate to Solaris Volume Manager
software-managed boot disks. If all data disks have been migrated to Solaris Volume Manager
and the only remaining VxVM-managed volumes are in the rootdg disk group, then boot disk
migration can start by following the steps to remove VxVM (unmirroring the root disk, unencapsulating
root, and removing VxVM). There are well-documented procedures for completing this task, both
on Sun’s site (sun.com/service/support/sunsolve) and in VERITAS administration guides. Care
should be taken to ensure that underlying partitions exist on the boot disk for all the original boot
disk file systems. Removing VxVM effectively frees up the partitions used for private and public
regions. One of these freed partitions should be used for creating the Solaris Volume Manager
state database replicas.
Also, VxVM allows the creation of volumes out of any free space in the public region. As a
consequence, there may be more volumes than physical partitions on the boot disk. If this is the
case, these volumes must first be moved to other storage to avoid data loss.
To reduce the potential risk and limit any interruption to service, it may be advantageous to
consider using Solaris Live Upgrade technology. This technology reduces the usual service outage
typically associated with an operating system upgrade — the current operating environment is
replicated to a nominated disk, enabling an upgrade while the system remains active. As part of
this process, the target boot slice cannot be managed initially by Solstice DiskSuite or VxVM (see
the Solaris Live Upgrade manual for further details). After the boot slice is copied over, the copy
can then be configured with Solaris Volume Manager. This provides a dual boot environment,
with each boot disk under a different volume manager.
If it is desirable to migrate to Solaris Volume Manager software-managed boot disks and
keep VxVM enabled, it is necessary to maintain an active rootdg disk group. This may require
adding two free disks to rootdg to compensate for the loss of the root disk and its mirror.

Migration Scenarios — A Summary


This chapter describes several scenarios for migrating both data and boot disks from VxVM to
Solaris Volume Manager. These scenarios are designed to provide general guidelines for
administrators as well as highlight typical issues and migration approaches. Although they
cannot take into account specifics of a customer environment — every migration must be planned
according to business and technical requirements — they may suggest effective approaches to
meeting those requirements. Sun’s experienced consultants can help customers plan a smooth
transition from VxVM to Solaris Volume Manager according to site specifics and business goals.

Chapter 4

Conclusion
Solaris Volume Manager, integrated into the Solaris 9 OE, offers a viable alternative to the VERITAS
Volume Manager. Because Solaris Volume Manager is included in the operating environment at no
additional fee, it can help to lower overall storage TCO.
There are numerous methods of migrating from VxVM to Solaris Volume Manager, and the
specific method that should be used depends largely on the current server and storage infrastructure.
In general, there are several key steps: analyzing the volume management requirements, setting
aside time for planning the migration, backing up the data, validating the backup, performing the
migration, and testing the new environment.
Sun consultants have extensive knowledge of both VxVM and Solaris Volume Manager, and
are experienced at helping organizations perform migration activities. For more information on
how Sun can help with a migration to the Solaris 9 OE and Solaris Volume Manager, please
contact your local Sun representative.

Chapter 5

References
Sun Microsystems posts product information in the form of datasheets and white papers on the
Web at sun.com. The following sites describe products specifically mentioned in this paper:
• Solaris Volume Manager, sun.com/software/solaris/ds/ds-volumemgr
• Solaris Operating Environment, sun.com/software/solaris
• Sun StorEdge Availability Suite, sun.com/storage/software/availability
• Sun StorEdge Resource Management Suite, sun.com/storage/software/resourcemanagement

Please refer to the following white papers for more information on Solaris Volume Manager
and other Sun products:
• Comprehensive Data Management Using Solaris Volume Manager, Technical White Paper
• Better by Design — The Solaris 9 Operating Environment, Technical White Paper

In addition, the following documentation may be useful to administrators preparing
for a migration to Solaris Volume Manager (see docs.sun.com):
• Solaris Volume Manager Administration Guide
• Sun StorEdge Traffic Manager Software Installation and Configuration Guide
• Solaris Live Upgrade 2.0 Guide
• Sun StorEdge Availability Suite 3.1 Remote Mirror Software Administration and Operations
Guide
• Sun StorEdge Availability Suite 3.1 Point-in-Time Copy Software Administration and Operations
Guide
SUN™ Copyright 2003 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054 U.S.A. All rights reserved.
This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or docu-
ment may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copy-
righted and licensed from Sun suppliers.

Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other countries, exclu-
sively licensed through X/Open Company, Ltd.

Sun, Sun Microsystems, the Sun logo, Java, Solaris, Solstice DiskSuite, Sun Blade, Sun Fire, Sun Ray, and Sun StorEdge are trademarks or registered trademarks of Sun Microsys-
tems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and
other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. Mozilla and Netscape are trademarks or registered
trademarks of Netscape Communications Corporation in the United States and other countries. OpenGL is a registered trademark of Silicon Graphics, Inc.

The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of Xerox in
researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to the Xerox Graphical User
Interface, which license also covers Sun’s licensees who implement OPEN LOOK GUIs and otherwise comply with Sun’s written license agreements.

RESTRICTED RIGHTS: Use, duplication, or disclosure by the U.S. Government is subject to restrictions of FAR 52.227-14(g)(2)(6/87) and FAR 52.227-19(6/87),
or DFAR 252.227-7015(b)(6/95) and DFAR 227.7202-3(a).

DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABIL-
ITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.



