
VDCF - Virtual Datacenter Control Framework

for the Solaris™ Operating System

Shrinking zPool
*** CONFIDENTIAL ***

Version 0.1
28. April 2010

Copyright 2005-2009 JomaSoft GmbH


All rights reserved.


Table of Contents
Migration scenario for zPool root filesystem
    1) Create new zPool
    2) Create snapshot
    3) ZFS send/receive into the new pool
    4) vserver detach
    5) Import zPools under new names
    6) Adjust ZFS zone root
    7) vServer attach
    8) Delete old/large dataset
    9) DiskReplace
    10) Adjust dataset size
    11) Delete Snapshot
Migration scenario for zPool data filesystem
    1) Create new zPool
    2) Create snapshots
    3) ZFS send/receive
    4) vserver renamefs
    5) dataset rename (if required)
        5.1) zpool export / import newname
        5.2) vserver commit
    6) Cleanup


Introduction
This documentation describes the features of the Virtual Datacenter Control Framework (VDCF) for the Solaris Operating
System which support shrinking a zPool to a smaller size.

Overview
In a fast-changing IT infrastructure with changing requirements, the space originally requested for an application can
turn out to be too large. This document describes how to shrink a zPool down to the new requirement.
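The basic idea, independent of VDCF, is to copy the data onto a new, smaller pool and then swap the pools. A minimal
plain-ZFS sketch of the flow (pool, filesystem and device names here are placeholders; the VDCF-integrated procedures
follow in the next chapters):

root@node # zpool create newpool mirror <small-lun-1> <small-lun-2>
root@node # zfs snapshot -r oldpool/fs@for_transfer
root@node # zfs send oldpool/fs@for_transfer | zfs receive -d -F newpool
(then repoint the consumers, i.e. the zone root or the zonecfg fs entries, to the new pool,
 and finally remove the old, oversized pool and its LUNs)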

Requirements
VDCF Base and vServer Version 2.3.13

Restrictions
There are no restrictions for this option.


Migration scenario for zPool root filesystem


1) Create new zPool
Initial situation of the vServer:
mech@S0000 $ vserver -c show name=V0001
Virtual  Type    cState     rState     Build     Patch-Level      Comment
V0001    SPARSE  ACTIVATED  INSTALLED  5.10S_U6  142900-06 (U8+)  zpool migration test

cPool    Node   Patch-Set    nState  State  Hardware             Comment                     Install-Date
default  S0013  sparc_U6_U8  ACTIVE  OK     SUNW,Sun-Blade-1500  Solaris Sparc Build Server  2010-04-13_16:52:23

Config Definitions
Groups: virtual

Network Definitions
Type        Hostname  VID  Interface  IP              Netmask        StackType  State
management  V0001     -    bge0       192.168.14.231  255.255.255.0  SHARED     ACTIVATED

Filesystem Definitions
Dataset: V0001_root (ZPOOL/ACTIVATED)    Total Space/MB: 2042    Available Space/MB: 2042
Name  State      Size/MB      Type  Options  Mountpoint
-     ACTIVATED  <undefined>  zfs   rw       /zones/V0001 (root)

Dataset: V0001_root2 (ZPOOL/ACTIVATED)   Total Space/MB: 1021    Available Space/MB: 1021
No Filesystems defined on Dataset V0001_root2.
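The smaller target dataset V0001_root2 (1021 MB) is already defined and activated in the listing above; its creation is
not part of this transcript. Outside of VDCF, an equivalent mirrored pool on the two small LUNs (device names taken from
the dataset layouts shown later in this chapter) would be created roughly like this; in a VDCF installation the dataset
should of course be defined through VDCF so it is tracked in the repository:

root@S0013 # zpool create V0001_root2 mirror c0t01000003BA2D33B400002A004946281Cd0 c0t01000003BA2D33B400002A0049462812d0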

2) Create snapshot
mech@S0000 $ vserver -c zfssnapshot name=V0001 mountpoint=/zones/V0001 snapshot=for_transfer recursive
ZFS snapshot created for vServer V0001
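Under the hood this presumably corresponds to a recursive ZFS snapshot of the filesystem mounted at /zones/V0001,
roughly equivalent to:

root@S0013 # zfs snapshot -r V0001_root/root@for_transfer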
mech@S0000 $ vserver -c zfslist snapshots name=V0001
Dataset list for vServer: V0001

Pool Name: V0001_root
NAME                          USED  AVAIL  REFER  MOUNTPOINT
V0001_root/root@for_transfer  0     -      841M   -

Pool Name: V0001_root2
NAME                          USED  AVAIL  REFER  MOUNTPOINT

root@S0013 # zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
V0001_root   1.98G  831M  1.17G  40%  ONLINE  -
V0001_root2  1008M  87K   1008M  0%   ONLINE  -

3) ZFS send/receive into the new pool


root@S0013 # zfs send V0001_root/root@for_transfer | zfs receive -d -F V0001_root2
root@S0013 # zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
V0001_root   1.98G  842M  1.16G  41%  ONLINE  -
V0001_root2  1008M  842M  166M   83%  ONLINE  -
root@S0013 # zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
V0001_root                     842M  1.13G  21K    legacy
V0001_root/root                841M  1.13G  841M   /zones/V0001
V0001_root/root@for_transfer   30K   -      841M   -
V0001_root2                    842M  134M   21K    legacy
V0001_root2/root               841M  134M   841M   legacy
V0001_root2/root@for_transfer  0     -      841M   -
root@S0013 #

mech@S0000 $ vserver -c zfslist snapshots name=V0001
Dataset list for vServer: V0001

Pool Name: V0001_root
NAME                          USED  AVAIL  REFER  MOUNTPOINT
V0001_root/root@for_transfer  30K   -      841M   -

Pool Name: V0001_root2
NAME                           USED  AVAIL  REFER  MOUNTPOINT
V0001_root2/root@for_transfer  0     -      841M   -
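The full send/receive above copies the complete root filesystem in one pass. If the copy window has to be kept short
for a large filesystem, a common plain-ZFS variant (not shown in this transcript) is to do the full send ahead of time
and then transfer only the changes with an incremental send just before switching over, for example:

root@S0013 # zfs snapshot V0001_root/root@transfer2
root@S0013 # zfs send -i for_transfer V0001_root/root@transfer2 | zfs receive -F V0001_root2/root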

4) vserver detach
mech@S0000 $ vserver -c detach name=V0001
detaching vServer(s)
detach successful
mech@S0000 $
mech@S0000 $ dataset -c show name=V0001_root
Name        State     Size/MB  Avail/MB  Type   Owner  Node
V0001_root  DETACHED  2042     2042      ZPOOL  V0001  (S0013)
Layout:
  mirror 01000003BA2D33B400002A0049462819 01000003BA2D33B400002A004946280D
Name        Use-Type  Dev-Type  State      GUID                              Serial  Size/MB  Location
V0001_root  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A004946280D  -       2042     S0011SBlade
V0001_root  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462819  -       2042     S0011SBlade

mech@S0000 $ dataset -c show name=V0001_root2
Name         State     Size/MB  Avail/MB  Type   Owner  Node
V0001_root2  DETACHED  1021     1021      ZPOOL  V0001  (S0013)
Layout:
  mirror 01000003BA2D33B400002A004946281C 01000003BA2D33B400002A0049462812
Name         Use-Type  Dev-Type  State      GUID                              Serial  Size/MB  Location
V0001_root2  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462812  -       1021     S0011SBlade
V0001_root2  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A004946281C  -       1021     S0011SBlade

5) Import zPools under new names


root@S0013 # zpool import
  pool: V0001_root2
    id: 9290830909401151195
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        V0001_root2                                ONLINE
          mirror                                   ONLINE
            c0t01000003BA2D33B400002A004946281Cd0  ONLINE
            c0t01000003BA2D33B400002A0049462812d0  ONLINE

  pool: V0001_root
    id: 216184917218085764
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        V0001_root                                 ONLINE
          mirror                                   ONLINE
            c0t01000003BA2D33B400002A0049462819d0  ONLINE
            c0t01000003BA2D33B400002A004946280Dd0  ONLINE

root@S0013 # zpool import 9290830909401151195 V0001_root
root@S0013 # zpool import 216184917218085764 V0001_root2
root@S0013 # zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
V0001_root   1008M  842M  166M   83%  ONLINE  -
V0001_root2  1.98G  848M  1.16G  41%  ONLINE  -

6) Adjust ZFS zone root


root@S0013 # zfs list
NAME                           USED   AVAIL  REFER  MOUNTPOINT
V0001_root                     842M   134M   21K    legacy
V0001_root/root                841M   134M   841M   legacy
V0001_root/root@for_transfer   0      -      841M   -
V0001_root2                    848M   1.13G  21K    legacy
V0001_root2/root               847M   1.13G  845M   /zones/V0001
V0001_root2/root@for_transfer  2.15M  -      841M   -
root@S0013 # zfs umount V0001_root2/root
root@S0013 # zfs set mountpoint=/zones/V0001 V0001_root/root
root@S0013 # zfs set mountpoint=legacy V0001_root2/root
root@S0013 # zfs list
NAME                           USED   AVAIL  REFER  MOUNTPOINT
V0001_root                     842M   134M   21K    legacy
V0001_root/root                841M   134M   841M   /zones/V0001
V0001_root/root@for_transfer   0      -      841M   -
V0001_root2                    848M   1.13G  21K    legacy
V0001_root2/root               847M   1.13G  845M   legacy
V0001_root2/root@for_transfer  2.15M  -      841M   -
root@S0013 # zfs mount V0001_root/root
root@S0013 # df -h /zones/V0001
Filesystem       size  used  avail  capacity  Mounted on
V0001_root/root  976M  841M  134M   87%       /zones/V0001
root@S0013 #

7) vServer attach
mech@S0000 $ vserver -c attach name=V0001
attaching vServer(s)
WARN: zpool already imported: V0001_root
WARN: zpool already imported: V0001_root2
attach successful
8) Delete old/large dataset
mech@S0000 $ dataset -c show name=V0001_root2
Name         State      Size/MB  Avail/MB  Type   Owner  Node
V0001_root2  ACTIVATED  1021     1021      ZPOOL  V0001  S0013
Layout:
  mirror 01000003BA2D33B400002A004946281C 01000003BA2D33B400002A0049462812
Name         Use-Type  Dev-Type  State      GUID                              Serial  Size/MB  Location
V0001_root2  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462812  -       1021     S0011SBlade
V0001_root2  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A004946281C  -       1021     S0011SBlade

mech@S0000 $ dataset -c remove name=V0001_root2
removing dataset: V0001_root2
dataset removal successful
mech@S0000 $ dataset -c commit name=V0001_root2
committing dataset changes: V0001_root2
dataset changes committed successfully

9) DiskReplace
mech@S0000 $ su


Password:
# /opt/jomasoft/vdcf/mods/disks/disks_replace -o 01000003BA2D33B400002A004946280D -n 01000003BA2D33B400002A0049462812
WARN: LUN Size Mismatch: GUID 01000003BA2D33B400002A004946280D (2042) - GUID 01000003BA2D33B400002A0049462812 (1021).
Replaced 01000003BA2D33B400002A004946280D with 01000003BA2D33B400002A0049462812 in Dataset V0001_root
# /opt/jomasoft/vdcf/mods/disks/disks_replace -o 01000003BA2D33B400002A0049462819 -n 01000003BA2D33B400002A004946281C
WARN: LUN Size Mismatch: GUID 01000003BA2D33B400002A0049462819 (2042) - GUID 01000003BA2D33B400002A004946281C (1021).
Replaced 01000003BA2D33B400002A0049462819 with 01000003BA2D33B400002A004946281C in Dataset V0001_root
#
# exit
exit
mech@S0000 $ vserver -c show name=V0001
Virtual  Type    cState     rState     Build     Patch-Level      Comment
V0001    SPARSE  ACTIVATED  INSTALLED  5.10S_U6  142900-06 (U8+)  zpool migration test

cPool    Node   Patch-Set    nState  State  Hardware             Comment                     Install-Date
default  S0013  sparc_U6_U8  ACTIVE  OK     SUNW,Sun-Blade-1500  Solaris Sparc Build Server  2010-04-13_16:52:23

Config Definitions
Groups: virtual

Network Definitions
Type        Hostname  VID  Interface  IP              Netmask        StackType  State
management  V0001     -    bge0       192.168.14.231  255.255.255.0  SHARED     ACTIVATED

Filesystem Definitions
Dataset: V0001_root (ZPOOL/ACTIVATED)   Total Space/MB: 2042   Available Space/MB: 2042
Name  State      Size/MB      Type  Options  Mountpoint
-     ACTIVATED  <undefined>  zfs   rw       /zones/V0001 (root)

mech@S0000 $
mech@S0000 $ dataset -c show name=V0001_root
Name        State      Size/MB  Avail/MB  Type   Owner  Node
V0001_root  ACTIVATED  2042     2042      ZPOOL  V0001  S0013
Layout:
  mirror 01000003BA2D33B400002A004946281C 01000003BA2D33B400002A0049462812
Name        Use-Type  Dev-Type  State      GUID                              Serial  Size/MB  Location
V0001_root  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462812  -       1021     S0011SBlade
V0001_root  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A004946281C  -       1021     S0011SBlade

10) Adjust dataset size

# /opt/jomasoft/vdcf/mods/dataset/ds_set -d V0001_root -u
Updated Size for Dataset V0001_root from 2042 MB to 1021 MB
mech@S0000 $ vserver -c show name=V0001
Virtual  Type    cState     rState   Build     Patch-Level      Comment
V0001    SPARSE  ACTIVATED  RUNNING  5.10S_U6  142900-06 (U8+)  zpool migration test

cPool    Node   Patch-Set    nState  State  Hardware             Comment                     Install-Date
default  S0013  sparc_U6_U8  ACTIVE  OK     SUNW,Sun-Blade-1500  Solaris Sparc Build Server  2010-04-13_16:52:23

Config Definitions
Groups: virtual

Network Definitions
Type        Hostname  VID  Interface  IP              Netmask        StackType  State
management  V0001     -    bge0       192.168.14.231  255.255.255.0  SHARED     ACTIVATED

Filesystem Definitions
Dataset: V0001_root (ZPOOL/ACTIVATED)   Total Space/MB: 1021   Available Space/MB: 1021
Name  State      Size/MB      Type  Options  Mountpoint
-     ACTIVATED  <undefined>  zfs   rw       /zones/V0001 (root)

mech@S0000 $ dataset -c show name=V0001_root verbose
Name        State      Size/MB  Avail/MB  Type   Owner  Node
V0001_root  ACTIVATED  1021     1021      ZPOOL  V0001  S0013
Layout:
  mirror 01000003BA2D33B400002A004946281C 01000003BA2D33B400002A0049462812
Name        Use-Type  Dev-Type  State      GUID                              Serial  Size/MB  Location
V0001_root  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462812  -       1021     S0011SBlade
V0001_root  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A004946281C  -       1021     S0011SBlade

Volume Manager details from node S0013

  pool: V0001_root
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE   READ WRITE CKSUM
        V0001_root                                 ONLINE     0     0     0
          mirror                                   ONLINE     0     0     0
            c0t01000003BA2D33B400002A004946281Cd0  ONLINE     0     0     0
            c0t01000003BA2D33B400002A0049462812d0  ONLINE     0     0     0

errors: No known data errors

mech@S0000 $
11) Delete Snapshot
mech@S0000 $ vserver -c zfslist snapshots name=V0001
Dataset list for vServer: V0001

Pool Name: V0001_root
NAME                          USED   AVAIL  REFER  MOUNTPOINT
V0001_root/root@for_transfer  13.0M  -      841M   -

mech@S0000 $ vserver -c zfsdestroy name=V0001 snapshot=for_transfer filesystem=V0001_root/root
ZFS snapshot destroyed for vServer V0001


Migration scenario for zPool data filesystem


1) Create new zPool
Initial situation of the vServer:
mech@S0000 $ vserver -c show name=V0001
Virtual  Type    cState     rState   Build            Patch-Level      Comment
V0001    SPARSE  ACTIVATED  RUNNING  5.10S_U6_Finter  142900-06 (U8+)  zpool migration test

cPool    Node   Patch-Set    nState  State  Hardware             Comment                     Install-Date
default  S0013  sparc_U6_U8  ACTIVE  OK     SUNW,Sun-Blade-1500  Solaris Sparc Build Server  2010-04-13_16:52:23

Config Definitions
Groups: virtual

Network Definitions
Type        Hostname  VID  Interface  IP              Netmask        StackType  State
management  V0001     -    bge0       192.168.14.231  255.255.255.0  SHARED     ACTIVATED

Filesystem Definitions
Dataset: V0001_root (ZPOOL/ACTIVATED)   Total Space/MB: 1021   Available Space/MB: 1021
Name  State      Size/MB      Type  Options  Mountpoint
-     ACTIVATED  <undefined>  zfs   rw       /zones/V0001 (root)

Dataset: V0001_data (ZPOOL/ACTIVATED)   Total Space/MB: 2042   Available Space/MB: 1742
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data
-     ACTIVATED  100      zfs   rw       /var/app
-     ACTIVATED  100      zfs   rw       /var/app2

Dataset: V0001_data2 (ZPOOL/ACTIVATED)   Total Space/MB: 799   Available Space/MB: 499
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data-new
-     ACTIVATED  100      zfs   rw       /var/app-new
-     ACTIVATED  100      zfs   rw       /var/app2-new

mech@S0000 $ vserver -c shutdown name=V0001
shutdown command being issued for vServer(s) V0001
vServer(s) down
mech@S0000 $

mech@S0000 $ dataset -c show verbose name=V0001_data
Name        State      Size/MB  Avail/MB  Type   Owner  Node
V0001_data  ACTIVATED  2042     2042      ZPOOL  V0001  S0013
Layout:
  mirror 01000003BA2D33B400002A0049462819 01000003BA2D33B400002A004946280D
Name        Use-Type  Dev-Type  State      GUID                              Serial  Size/MB  Location
V0001_data  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A004946280D  -       2042     S0011SBlade
V0001_data  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462819  -       2042     S0011SBlade

Volume Manager details from node S0013

  pool: V0001_data
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE   READ WRITE CKSUM
        V0001_data                                 ONLINE     0     0     0
          mirror                                   ONLINE     0     0     0
            c0t01000003BA2D33B400002A0049462819d0  ONLINE     0     0     0
            c0t01000003BA2D33B400002A004946280Dd0  ONLINE     0     0     0

errors: No known data errors

mech@S0000 $ dataset -c show verbose name=V0001_data2
Name         State      Size/MB  Avail/MB  Type   Owner  Node
V0001_data2  ACTIVATED  799      799       ZPOOL  V0001  S0013
Layout:
  mirror 01000003BA2D33B400002A0049462802 01000003BA2D33B400002A0049462814
Name         Use-Type  Dev-Type  State      GUID                              Serial  Size/MB  Location
V0001_data2  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462802  -       799      S0011SBlade
V0001_data2  ZPOOL     ISCSI     ACTIVATED  01000003BA2D33B400002A0049462814  -       799      S0011SBlade

Volume Manager details from node S0013

  pool: V0001_data2
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE   READ WRITE CKSUM
        V0001_data2                                ONLINE     0     0     0
          mirror                                   ONLINE     0     0     0
            c0t01000003BA2D33B400002A0049462802d0  ONLINE     0     0     0
            c0t01000003BA2D33B400002A0049462814d0  ONLINE     0     0     0

errors: No known data errors

mech@S0000 $
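As in the first scenario, the transcripts assume that the new, smaller data pool already exists (shown as dataset
V0001_data2 above, and as pool V0001_data-new in the listings that follow); its creation is not shown. A plain-ZFS
equivalent on the two 799 MB LUNs from the layout above would look roughly like this, with the /data-new, /var/app-new
and /var/app2-new filesystems then added as VDCF filesystems as shown in the vserver output:

root@S0013 # zpool create V0001_data-new mirror c0t01000003BA2D33B400002A0049462802d0 c0t01000003BA2D33B400002A0049462814d0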
mech@S0000 $ vserver -c zfslist name=V0001
Dataset list for vServer: V0001

Pool Name: V0001_root
NAME             USED  AVAIL  REFER  MOUNTPOINT
V0001_root       858M  118M   21K    legacy
V0001_root/root  858M  118M   858M   /zones/V0001

Pool Name: V0001_data
NAME                      USED   AVAIL  REFER  MOUNTPOINT
V0001_data                25.3M  1.93G  21K    legacy
V0001_data/data           25.2M  1.93G  21K    legacy
V0001_data/data/data      7.85M  92.1M  7.85M  legacy
V0001_data/data/var_app   4.69M  95.3M  4.69M  legacy
V0001_data/data/var_app2  12.6M  87.4M  12.6M  legacy

Pool Name: V0001_data2
NAME                           USED  AVAIL  REFER  MOUNTPOINT
V0001_data2                    198K  752M   21K    legacy
V0001_data2/data               84K   752M   21K    legacy
V0001_data2/data/data-new      21K   100M   21K    legacy
V0001_data2/data/var_app-new   21K   100M   21K    legacy
V0001_data2/data/var_app2-new  21K   100M   21K    legacy

mech@S0000 $


2) Create snapshots
mech@S0000 $ vserver -c zfssnapshot name=V0001 snapshot=for_transfer mountpoint=/data
ZFS snapshot created for vServer V0001
mech@S0000 $ vserver -c zfssnapshot name=V0001 snapshot=for_transfer mountpoint=/var/app
ZFS snapshot created for vServer V0001
mech@S0000 $ vserver -c zfssnapshot name=V0001 snapshot=for_transfer mountpoint=/var/app2
ZFS snapshot created for vServer V0001
mech@S0000 $
mech@S0000 $ vserver -c zfslist name=V0001 snapshots
Dataset list for vServer: V0001

Pool Name: V0001_root
NAME                                   USED  AVAIL  REFER  MOUNTPOINT

Pool Name: V0001_data
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
V0001_data/data/data@for_transfer      0     -      7.85M  -
V0001_data/data/var_app@for_transfer   0     -      4.69M  -
V0001_data/data/var_app2@for_transfer  0     -      12.6M  -

Pool Name: V0001_data-new
NAME                                   USED  AVAIL  REFER  MOUNTPOINT

mech@S0000 $
3) ZFS send/receive
root@S0013 # zpool list
NAME            SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
V0001_data      1.98G  25.3M  1.96G  1%   ONLINE  -
V0001_data-new  784M   210K   784M   0%   ONLINE  -
V0001_root      1008M  860M   148M   85%  ONLINE  -
root@S0013 # zfs list
NAME                                   USED   AVAIL  REFER  MOUNTPOINT
V0001_data                             25.3M  1.93G  21K    legacy
V0001_data-new                         206K   752M   21K    legacy
V0001_data-new/data                    84K    752M   21K    legacy
V0001_data-new/data/data-new           21K    100M   21K    legacy
V0001_data-new/data/var_app-new        21K    100M   21K    legacy
V0001_data-new/data/var_app2-new       21K    100M   21K    legacy
V0001_data/data                        25.2M  1.93G  21K    legacy
V0001_data/data/data                   7.85M  92.1M  7.85M  legacy
V0001_data/data/data@for_transfer      0      -      7.85M  -
V0001_data/data/var_app                4.69M  95.3M  4.69M  legacy
V0001_data/data/var_app@for_transfer   0      -      4.69M  -
V0001_data/data/var_app2               12.6M  87.4M  12.6M  legacy
V0001_data/data/var_app2@for_transfer  0      -      12.6M  -
V0001_root                             860M   116M   21K    legacy
V0001_root/root                        860M   116M   860M   /zones/V0001

root@S0013 # zfs send V0001_data/data/data@for_transfer | zfs receive -F V0001_data-new/data/data-new
root@S0013 # zfs send V0001_data/data/var_app@for_transfer | zfs receive -F V0001_data-new/data/var_app-new
root@S0013 # zfs send V0001_data/data/var_app2@for_transfer | zfs receive -F V0001_data-new/data/var_app2-new
root@S0013 #
root@S0013 # zpool list
NAME            SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
V0001_data      1.98G  25.3M  1.96G  1%   ONLINE  -
V0001_data-new  784M   25.3M  759M   3%   ONLINE  -
V0001_root      1008M  860M   148M   85%  ONLINE  -
root@S0013 # zfs list
NAME                                           USED   AVAIL  REFER  MOUNTPOINT
V0001_data                                     25.3M  1.93G  21K    legacy
V0001_data-new                                 25.3M  727M   21K    legacy
V0001_data-new/data                            25.2M  727M   21K    legacy
V0001_data-new/data/data-new                   7.85M  92.2M  7.85M  legacy
V0001_data-new/data/data-new@for_transfer      0      -      7.85M  -
V0001_data-new/data/var_app-new                4.69M  95.3M  4.69M  legacy
V0001_data-new/data/var_app-new@for_transfer   0      -      4.69M  -
V0001_data-new/data/var_app2-new               12.6M  87.4M  12.6M  legacy
V0001_data-new/data/var_app2-new@for_transfer  0      -      12.6M  -
V0001_data/data                                25.2M  1.93G  21K    legacy
V0001_data/data/data                           7.85M  92.1M  7.85M  legacy
V0001_data/data/data@for_transfer              0      -      7.85M  -
V0001_data/data/var_app                        4.69M  95.3M  4.69M  legacy
V0001_data/data/var_app@for_transfer           0      -      4.69M  -
V0001_data/data/var_app2                       12.6M  87.4M  12.6M  legacy
V0001_data/data/var_app2@for_transfer          0      -      12.6M  -
V0001_root                                     860M   116M   21K    legacy
V0001_root/root                                860M   116M   860M   /zones/V0001
root@S0013 #


mech@S0000 $ vserver -c zfslist name=V0001 snapshots
Dataset list for vServer: V0001

Pool Name: V0001_root
NAME                                           USED  AVAIL  REFER  MOUNTPOINT

Pool Name: V0001_data
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
V0001_data/data/data@for_transfer              0     -      7.85M  -
V0001_data/data/var_app@for_transfer           0     -      4.69M  -
V0001_data/data/var_app2@for_transfer          0     -      12.6M  -

Pool Name: V0001_data-new
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
V0001_data-new/data/data-new@for_transfer      0     -      7.85M  -
V0001_data-new/data/var_app-new@for_transfer   0     -      4.69M  -
V0001_data-new/data/var_app2-new@for_transfer  0     -      12.6M  -

4) vserver renamefs
mech@S0000 $ vserver -c renamefs name=V0001 mountpoint=/data to=/data-old
Renaming filesystem on vServer V0001
Filesystem /data renamed to /data-old. Re-run using remount or commit 'exec' to activate this change completely.
Renamed filesystem successfully
mech@S0000 $ vserver -c renamefs name=V0001 mountpoint=/var/app to=/var/app-old
Renaming filesystem on vServer V0001
Filesystem /var/app renamed to /var/app-old. Re-run using remount or commit 'exec' to activate this change completely.
Renamed filesystem successfully
mech@S0000 $ vserver -c renamefs name=V0001 mountpoint=/var/app2 to=/var/app2-old
Renaming filesystem on vServer V0001
Filesystem /var/app2 renamed to /var/app2-old. Re-run using remount or commit 'exec' to activate this change completely.
Renamed filesystem successfully
mech@S0000 $
mech@S0000 $ vserver -c commit exec name=V0001
committing datasets for virtual server
Dataset V0001_root committed successfully.
Dataset V0001_data committed successfully.


Dataset V0001_data-new committed successfully.


dataset commit successful
committing filesystems for virtual server
Renamed mountpoint /data to /data-old in V0001_data on S0013
Renamed mountpoint /var/app to /var/app-old in V0001_data on S0013
Renamed mountpoint /var/app2 to /var/app2-old in V0001_data on S0013
filesystem commit successful
committing vServer V0001 - this may take a moment ...
vServer successfully committed.
commit successful
mech@S0000 $
mech@S0000 $ vserver -c renamefs name=V0001 mountpoint=/data-new to=/data
Renaming filesystem on vServer V0001
Filesystem /data-new renamed to /data. Re-run using remount or commit 'exec' to activate this change completely.
Renamed filesystem successfully
mech@S0000 $ vserver -c renamefs name=V0001 mountpoint=/var/app-new to=/var/app
Renaming filesystem on vServer V0001
Filesystem /var/app-new renamed to /var/app. Re-run using remount or commit 'exec' to activate this change completely.
Renamed filesystem successfully
mech@S0000 $ vserver -c renamefs name=V0001 mountpoint=/var/app2-new to=/var/app2
Renaming filesystem on vServer V0001
Filesystem /var/app2-new renamed to /var/app2. Re-run using remount or commit 'exec' to activate this change completely.
Renamed filesystem successfully
mech@S0000 $
mech@S0000 $ vserver -c commit exec name=V0001
committing datasets for virtual server
Dataset V0001_root committed successfully.
Dataset V0001_data committed successfully.
Dataset V0001_data-new committed successfully.
dataset commit successful
committing filesystems for virtual server
Renamed mountpoint /data-new to /data in V0001_data-new on S0013
Renamed mountpoint /var/app-new to /var/app in V0001_data-new on S0013
Renamed mountpoint /var/app2-new to /var/app2 in V0001_data-new on S0013
filesystem commit successful
committing vServer V0001 - this may take a moment ...
vServer successfully committed.


commit successful
mech@S0000 $
mech@S0000 $ vserver -c show name=V0001
Virtual  Type    cState     rState     Build            Patch-Level      Comment
V0001    SPARSE  ACTIVATED  INSTALLED  5.10S_U6_Finter  142900-06 (U8+)  zpool migration test

cPool    Node   Patch-Set    nState  State  Hardware             Comment                     Install-Date
default  S0013  sparc_U6_U8  ACTIVE  OK     SUNW,Sun-Blade-1500  Solaris Sparc Build Server  2010-04-13_16:52:23

Config Definitions
Groups: virtual

Network Definitions
Type        Hostname  VID  Interface  IP              Netmask        StackType  State
management  V0001     -    bge0       192.168.14.231  255.255.255.0  SHARED     ACTIVATED

Filesystem Definitions
Dataset: V0001_root (ZPOOL/ACTIVATED)   Total Space/MB: 1021   Available Space/MB: 1021
Name  State      Size/MB      Type  Options  Mountpoint
-     ACTIVATED  <undefined>  zfs   rw       /zones/V0001 (root)

Dataset: V0001_data (ZPOOL/ACTIVATED)   Total Space/MB: 2042   Available Space/MB: 1742
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data-old
-     ACTIVATED  100      zfs   rw       /var/app-old
-     ACTIVATED  100      zfs   rw       /var/app2-old

Dataset: V0001_data-new (ZPOOL/ACTIVATED)   Total Space/MB: 799   Available Space/MB: 499
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data
-     ACTIVATED  100      zfs   rw       /var/app
-     ACTIVATED  100      zfs   rw       /var/app2
5) dataset rename (if required)


# /opt/jomasoft/vdcf/mods/dataset/ds_rename -d V0001_data -n V0001_data-old
dataset V0001_data renamed to V0001_data-old


# /opt/jomasoft/vdcf/mods/dataset/ds_rename -d V0001_data-new -n V0001_data


dataset V0001_data-new renamed to V0001_data
#
5.1) zpool export / import newname
root@S0013 # zpool list
NAME            SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
V0001_data      1.98G  25.4M  1.96G  1%   ONLINE  -
V0001_data-new  784M   25.3M  759M   3%   ONLINE  -
V0001_root      1008M  860M   148M   85%  ONLINE  -
root@S0013 # zpool export V0001_data
root@S0013 # zpool export V0001_data-new
root@S0013 # zpool import V0001_data V0001_data-old
root@S0013 # zpool import V0001_data-new V0001_data
root@S0013 #
root@S0013 # zpool list
NAME            SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
V0001_data      784M   25.3M  759M   3%   ONLINE  -
V0001_data-old  1.98G  25.3M  1.96G  1%   ONLINE  -
V0001_root      1008M  860M   148M   85%  ONLINE  -
root@S0013 # zfs list
NAME                                           USED   AVAIL  REFER  MOUNTPOINT
V0001_data                                     25.3M  727M   21K    legacy
V0001_data-old                                 25.3M  1.93G  21K    legacy
V0001_data-old/data                            25.2M  1.93G  21K    legacy
V0001_data-old/data/data-old                   7.85M  92.1M  7.85M  legacy
V0001_data-old/data/data-old@for_transfer      0      -      7.85M  -
V0001_data-old/data/var_app-old                4.69M  95.3M  4.69M  legacy
V0001_data-old/data/var_app-old@for_transfer   0      -      4.69M  -
V0001_data-old/data/var_app2-old               12.6M  87.4M  12.6M  legacy
V0001_data-old/data/var_app2-old@for_transfer  0      -      12.6M  -
V0001_data/data                                25.2M  727M   21K    legacy
V0001_data/data/data                           7.85M  92.2M  7.85M  legacy
V0001_data/data/data@for_transfer              0      -      7.85M  -
V0001_data/data/var_app                        4.69M  95.3M  4.69M  legacy
V0001_data/data/var_app@for_transfer           0      -      4.69M  -
V0001_data/data/var_app2                       12.6M  87.4M  12.6M  legacy
V0001_data/data/var_app2@for_transfer          0      -      12.6M  -
V0001_root                                     860M   116M   21K    legacy
V0001_root/root                                860M   116M   860M   /zones/V0001
root@S0013 #

mech@S0000 $ vserver -c show name=V0001 verbose
Virtual  Type    cState     rState     Build            Patch-Level      Comment
V0001    SPARSE  ACTIVATED  INSTALLED  5.10S_U6_Finter  142900-06 (U8+)  zpool migration test

cPool    Node   Patch-Set    nState  State  Hardware             Comment                     Install-Date
default  S0013  sparc_U6_U8  ACTIVE  OK     SUNW,Sun-Blade-1500  Solaris Sparc Build Server  2010-04-13_16:52:23

Config Definitions
Groups: virtual

Network Definitions
Type        Hostname  VID  Interface  IP              Netmask        StackType  State
management  V0001     -    bge0       192.168.14.231  255.255.255.0  SHARED     ACTIVATED

Filesystem Definitions
Dataset: V0001_root (ZPOOL/ACTIVATED)   Total Space/MB: 1021   Available Space/MB: 1021
Name  State      Size/MB      Type  Options  Mountpoint
-     ACTIVATED  <undefined>  zfs   rw       /zones/V0001 (root)

Dataset: V0001_data-old (ZPOOL/ACTIVATED)   Total Space/MB: 2042   Available Space/MB: 1742
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data-old
-     ACTIVATED  100      zfs   rw       /var/app-old
-     ACTIVATED  100      zfs   rw       /var/app2-old

Dataset: V0001_data (ZPOOL/ACTIVATED)   Total Space/MB: 799   Available Space/MB: 499
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data
-     ACTIVATED  100      zfs   rw       /var/app
-     ACTIVATED  100      zfs   rw       /var/app2

Current zonecfg
zonename: V0001
zonepath: /zones/V0001
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /data-old
        special: V0001_data/data/data-old
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app-old
        special: V0001_data/data/var_app-old
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app2-old
        special: V0001_data/data/var_app2-old
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /data
        special: V0001_data-new/data/data
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app
        special: V0001_data-new/data/var_app
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app2
        special: V0001_data-new/data/var_app2
        raw not specified
        type: zfs
        options: [rw]
net:
        address: 192.168.14.231/24
        physical: bge0
        defrouter not specified

mech@S0000 $

5.2) vserver commit


mech@S0000 $ vserver -c commit name=V0001
committing datasets for virtual server
Dataset V0001_root committed successfully.
Dataset V0001_data-old committed successfully.
Dataset V0001_data committed successfully.
dataset commit successful
committing filesystems for virtual server
filesystem commit successful
committing vServer V0001 - this may take a moment ...
vServer successfully committed.
commit successful
mech@S0000 $
mech@S0000 $ vserver -c show name=V0001 verbose
Virtual  Type    cState     rState     Build            Patch-Level      Comment
V0001    SPARSE  ACTIVATED  INSTALLED  5.10S_U6_Finter  142900-06 (U8+)  zpool migration test

cPool    Node   Patch-Set    nState  State  Hardware             Comment                     Install-Date
default  S0013  sparc_U6_U8  ACTIVE  OK     SUNW,Sun-Blade-1500  Solaris Sparc Build Server  2010-04-13_16:52:23

Config Definitions
Groups: virtual

Network Definitions
Type        Hostname  VID  Interface  IP              Netmask        StackType  State
management  V0001     -    bge0       192.168.14.231  255.255.255.0  SHARED     ACTIVATED

Filesystem Definitions
Dataset: V0001_root (ZPOOL/ACTIVATED)   Total Space/MB: 1021   Available Space/MB: 1021
Name  State      Size/MB      Type  Options  Mountpoint
-     ACTIVATED  <undefined>  zfs   rw       /zones/V0001 (root)

Dataset: V0001_data-old (ZPOOL/ACTIVATED)   Total Space/MB: 2042   Available Space/MB: 1742
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data-old
-     ACTIVATED  100      zfs   rw       /var/app-old
-     ACTIVATED  100      zfs   rw       /var/app2-old

Dataset: V0001_data (ZPOOL/ACTIVATED)   Total Space/MB: 799   Available Space/MB: 499
Name  State      Size/MB  Type  Options  Mountpoint
-     ACTIVATED  100      zfs   rw       /data
-     ACTIVATED  100      zfs   rw       /var/app
-     ACTIVATED  100      zfs   rw       /var/app2

Current zonecfg
zonename: V0001
zonepath: /zones/V0001
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /data-old
        special: V0001_data-old/data/data-old
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app-old
        special: V0001_data-old/data/var_app-old
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app2-old
        special: V0001_data-old/data/var_app2-old
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /data
        special: V0001_data/data/data
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app
        special: V0001_data/data/var_app
        raw not specified
        type: zfs
        options: [rw]
fs:
        dir: /var/app2
        special: V0001_data/data/var_app2
        raw not specified
        type: zfs
        options: [rw]
net:
        address: 192.168.14.231/24
        physical: bge0
        defrouter not specified

mech@S0000 $

6) Delete Snapshot (Cleanup)


mech@S0000 $ vserver -c remfs name=V0001 mountpoint=/var/app2-old
removing filesystem from vServer V0001
filesystem removed successfully
mech@S0000 $ vserver -c remfs name=V0001 mountpoint=/var/app-old
removing filesystem from vServer V0001


filesystem removed successfully


mech@S0000 $ vserver -c remfs name=V0001 mountpoint=/data-old
removing filesystem from vServer V0001
filesystem removed successfully
mech@S0000 $ dataset -c remove name=V0001_data-old
removing dataset: V0001_data-old
dataset removal successful
mech@S0000 $ vserver -c commit remove name=V0001
committing datasets for virtual server
Dataset V0001_root committed successfully.
Dataset V0001_data committed successfully.
dataset commit successful
committing filesystems for virtual server
WARN: Ignoring Dataset V0001_data-old. Dataset is in state PURGING. Must be active to commit Filesystems.
filesystem commit successful
committing vServer V0001 - this may take a moment ...
vServer successfully committed.
commit successful
purging filesystems
filesystem /var/app2-old purged in V0001_data-old on S0013
filesystem /var/app-old purged in V0001_data-old on S0013
filesystem /data-old purged in V0001_data-old on S0013
filesystem purge successful
purging datasets
Dataset V0001_data-old committed successfully.
dataset purge successful
mech@S0000 $
mech@S0000 $ vserver -c zfslist snapshots name=V0001
Dataset list for vServer: V0001

Pool Name: V0001_root
NAME                                   USED  AVAIL  REFER  MOUNTPOINT

Pool Name: V0001_data
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
V0001_data/data/data@for_transfer      0     -      7.85M  -
V0001_data/data/var_app@for_transfer   0     -      4.69M  -
V0001_data/data/var_app2@for_transfer  0     -      12.6M  -

mech@S0000 $ vserver -c zfsdestroy name=V0001 snapshot=for_transfer filesystem=V0001_data/data/data
ZFS snapshot destroyed for vServer V0001
mech@S0000 $ vserver -c zfsdestroy name=V0001 snapshot=for_transfer filesystem=V0001_data/data/var_app
ZFS snapshot destroyed for vServer V0001
mech@S0000 $ vserver -c zfsdestroy name=V0001 snapshot=for_transfer filesystem=V0001_data/data/var_app2
ZFS snapshot destroyed for vServer V0001
mech@S0000 $
mech@S0000 $ vserver -c console name=V0001
Use <%> as escape character
[Connected to zone 'V0001' console]
V0001 console login: root
Last login: Wed Apr 28 15:54:00 on console
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
root@V0001 # df -h | grep data
V0001_data/data/data        0K   7.8M    92M     8%    /data
V0001_data/data/var_app     0K   4.7M    95M     5%    /var/app
V0001_data/data/var_app2    0K    13M    87M    13%    /var/app2
root@V0001 # rmdir /data-old /var/app-old /var/app2-old
root@V0001 # exit
V0001 console login: %.
[Connection to zone 'V0001' console closed]
Connection to S0013 closed.

