
AIX LVM Mirror Walking and Flash Storage Preferred Read Deployment

How to walk the LVM data over from one storage array to a new storage array, using a smaller number of larger LUNs.
Mirror Walking
All data remains available and online throughout the entire process; no downtime is required.
There is some impact on server resources (CPU and I/O) while the copies synchronize.

Requirements
The VG must be scalable so that it can accept the larger number of PPs per LUN that the bigger LUNs introduce:
varyoffvg <vg>; chvg -G <vg>; varyonvg <vg>
This is one place where downtime may be required (a conversion sketch follows this list).
The hdisks within a single VG can be walked over to a smaller number of larger hdisks.
This cannot be done across VGs.
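
Before starting, it can help to confirm the VG type and convert it if necessary. A minimal conversion sketch, assuming the VG is vg1 and its filesystems /lvm1 through /lvm4 can be unmounted briefly (the names are illustrative):

lsvg vg1                      # check the current VG characteristics (MAX PVs / MAX PPs)
umount /lvm1; umount /lvm2; umount /lvm3; umount /lvm4
varyoffvg vg1                 # the VG must be varied off before it can be converted
chvg -G vg1                   # convert to a scalable VG
varyonvg vg1
mount /lvm1; mount /lvm2; mount /lvm3; mount /lvm4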
[Diagram: vg1 containing hdisk1 through hdisk9 with logical volumes lv1 through lv4]
All hdisks and LVs in a single VG can use this technique.
The VG must be scalable and able to accept the larger LUN's PP count; a larger LUN may not be addable to the VG if the VG is not scalable.
[Diagram: the new, larger LUNs hdisk10 and hdisk11 presented to the host alongside vg1 (hdisk1 through hdisk9, lv1 through lv4)]
-Create the bigger LUNs on the XIV
-Scan for the new devices:
-xiv_fc_admin -R
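
If the XIV host attachment kit is not available, the generic AIX scan works as well, and the new disks and their sizes can be confirmed afterwards (the hdisk names are illustrative):

cfgmgr                        # generic AIX device scan for the newly presented LUNs
lspv                          # the new hdisks should appear with no VG assigned
bootinfo -s hdisk10           # size in MB, to confirm these are the larger LUNs
bootinfo -s hdisk11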
Add hdisks to vg1
-extendvg vg1 hdisk10 hdisk11
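
A quick check that the new disks joined the VG and contribute their full capacity as free PPs:

lsvg -p vg1                   # per-PV state, total PPs and free PPs
lsvg vg1                      # TOTAL PPs / FREE PPs should have grown by the two new LUNs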
Start the lv mirroring
-mklvcopy lv1 2 hdisk10 hdisk11
-mklvcopy lv2 2 hdisk10 hdisk11
-mklvcopy lv3 2 hdisk10 hdisk11
-mklvcopy lv4 2 hdisk10 hdisk11
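
With many logical volumes, the same step can be scripted by reading the LV names out of the VG (a ksh sketch, assuming every LV in vg1 is to be mirrored onto the two new disks):

for lv in $(lsvg -l vg1 | awk 'NR > 2 {print $1}')   # skip the vg1: and header lines
do
    mklvcopy $lv 2 hdisk10 hdisk11
done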

Check the LV for its PV contents:
lslv -l lv1
lv1:/lvm1
PV COPIES IN BAND DISTRIBUTION
hdisk1 004:000:000 100% 000:004:000:000:000
hdisk10 000:000:000 100% 000:000:000:000:000
hdisk2 004:000:000 100% 000:004:000:000:000
hdisk11 000:000:000 100% 000:000:000:000:000
hdisk3 004:000:000 100% 000:004:000:000:000
hdisk4 004:000:000 100% 000:004:000:000:000
hdisk5 004:000:000 100% 000:004:000:000:000
hdisk6 004:000:000 100% 000:004:000:000:000
hdisk7 004:000:000 100% 000:004:000:000:000
hdisk8 004:000:000 100% 000:004:000:000:000
hdisk9 004:000:000 100% 000:004:000:000:000
Note that the two new LUNs have 0 PPs distributed on them.
All LV mirror copies will initially be stale:
lsvg -l vg1
vg1:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
lv1 jfs2 40 80 12 open/stale /lvm1
lv2 jfs2 40 80 12 open/stale /lvm2
lv3 jfs2 40 80 12 open/stale /lvm3
lv4 jfs2 40 80 12 open/stale /lvm4

Synchronize the two copies:
syncvg -v vg1
This will take time to complete.
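
The resynchronization can run in the background, and progress can be followed by watching the stale PP count drop (the background form is an assumption, not part of the original procedure):

nohup syncvg -v vg1 &
lsvg vg1 | grep -i stale      # STALE PPs should fall to 0 as the copies synchronize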
Now the LVs are synced:
lsvg -l vg1
vg1:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
lv1 jfs2 40 80 12 open/syncd /lvm1
lv2 jfs2 40 80 12 open/syncd /lvm2
lv3 jfs2 40 80 12 open/syncd /lvm3
lv4 jfs2 40 80 12 open/syncd /lvm4


Check the LV for its PV contents and distribution:
lslv -l lv1
lv1:/lvm1
PV COPIES IN BAND DISTRIBUTION
hdisk1 004:000:000 100% 000:004:000:000:000
hdisk10 020:000:000 100% 000:020:000:000:000
hdisk2 004:000:000 100% 000:004:000:000:000
hdisk11 020:000:000 100% 000:020:000:000:000
hdisk3 004:000:000 100% 000:004:000:000:000
hdisk4 004:000:000 100% 000:004:000:000:000
hdisk5 004:000:000 100% 000:004:000:000:000
hdisk6 004:000:000 100% 000:004:000:000:000
hdisk7 004:000:000 100% 000:004:000:000:000
hdisk8 004:000:000 100% 000:004:000:000:000
hdisk9 004:000:000 100% 000:004:000:000:000
Note that the two new LUNs now hold more PPs than any of the original LUNs.
Remove the old PVs from each LV:
rmlvcopy lv1 1 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
lslv -l lv1
lv1:
PV COPIES IN BAND DISTRIBUTION
hdisk10 020:000:000 100% 000:020:000:000:000
hdisk11 020:000:000 100% 000:020:000:000:000
Migration Done!
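
Once no LV copies remain on the old disks, the old LUNs can be removed from the VG and their device definitions deleted before they are unmapped on the array (a cleanup sketch, not part of the original procedure):

reducevg vg1 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
for d in hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
do
    rmdev -dl $d              # remove the hdisk definition from the ODM
done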

Flash Storage Read Preferred
To enable the flash storage as the read-preferred copy of the logical volume device pair:
Add in the mirror as in the mirror-walking steps above:
mklvcopy <lv> 2 hdisk10 hdisk11
Synchronize the mirrored logical volumes:
syncvg -v <vg>
Remove the original primary devices from the <lv>:
rmlvcopy <lv> 1 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
Add back the original primary devices and sync; they will now be added in as the secondary copy:
mklvcopy <lv> 2 hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
Synchronize the mirrored logical volumes:
syncvg -v <vg>
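
Put together for a whole volume group, the sequence might look like this (a sketch assuming the VG is vg1, the flash LUNs are hdisk10 and hdisk11, and the original LUNs are hdisk1 through hdisk9):

OLD="hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9"
for lv in $(lsvg -l vg1 | awk 'NR > 2 {print $1}')
do
    mklvcopy $lv 2 hdisk10 hdisk11        # mirror each LV onto the flash LUNs
done
syncvg -v vg1                             # synchronize the flash copies
for lv in $(lsvg -l vg1 | awk 'NR > 2 {print $1}')
do
    rmlvcopy $lv 1 $OLD                   # drop the original copy
    mklvcopy $lv 2 $OLD                   # add it back; it now becomes the secondary copy
done
syncvg -v vg1                             # synchronize the re-added copies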

Adjust the write schedule policy
Check which devices hold the primary copy and which hold the secondary copy of each logical volume:
# lslv -m <lv>
lvm1lv:/lvm1
LP    PP1   PV1       PP2   PV2       PP3   PV3
0001  0078  hdisk10   0014  hdisk1
0002  0079  hdisk11   0014  hdisk2
0003  0079  hdisk10   0014  hdisk3
0004  0080  hdisk11   0014  hdisk4
0005  0080  hdisk10   0014  hdisk5
0006  0081  hdisk11   0014  hdisk6
0007  0081  hdisk10   0014  hdisk7
0008  0082  hdisk11   0014  hdisk8
0009  0082  hdisk10   0014  hdisk9
The devices in the PV1 column are now the primary devices. Once the scheduling policy is changed to parallel write/sequential read (below), all reads will be served by the PV1 devices. During boot, the PV1 devices are the primary copy of the mirror and will be used as the sync point.
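
A quick way to confirm which disks hold the primary copy of a given LV is to tally the PV1 column of lslv -m (the awk field positions assume the standard lslv -m layout shown above):

lslv -m lv1 | awk 'NR > 2 {print $3}' | sort | uniq -c   # LP count per primary PV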
Flash Storage Read Preferred
There are five write schedule policies for LVM mirroring.
The default is parallel:
Write operations are done in parallel to all copies of the mirror.
Read operations go to the least busy device.
We want parallel write with sequential read:
Write operations are done in parallel to all copies of the mirror.
Read operations are ALWAYS performed on the primary copy of the mirror set.
During boot, the PV1 devices are the primary copy of the mirror and will be used as the sync source.

Changing the policy requires a short amount of downtime for the file system.
Adjust the write schedule policy
Check the write schedule policy of the logical volume with lslv <lv>; the SCHED POLICY field shows the current setting:
LOGICAL VOLUME: lvm2lv VOLUME GROUP: lvm_test
LV IDENTIFIER: 00f62b1d00004c000000013fd4f00d26.2 PERMISSION: read/write
VG STATE: active/complete LV STATE: closed/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 256 megabyte(s)
COPIES: 2 SCHED POLICY: parallel
LPs: 40 PPs: 80
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: maximum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 1024
MOUNT POINT: /lvm2 LABEL: /lvm2
DEVICE UID: 0 DEVICE GID: 0
DEVICE PERMISSIONS: 432
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
INFINITE RETRY: no
DEVICESUBTYPE: DS_LVZ
COPY 1 MIRROR POOL: None
COPY 2 MIRROR POOL: None
COPY 3 MIRROR POOL: None


Adjust the write schedule policy
Change the write schedule policy. This must be done with the file system of each logical volume unmounted; this is where the downtime occurs.
chlv -d ps <lv>
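
For one logical volume, the outage window might look like this (a sketch, assuming lvm1lv is mounted at /lvm1; the lslv output below shows the copies go stale after the change, so a resync follows):

umount /lvm1                  # short outage begins
chlv -d ps lvm1lv             # parallel write, sequential (primary-first) read
mount /lvm1                   # outage ends
syncvg -l lvm1lv              # resynchronize the partitions marked stale by the change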

Re-check the write schedule policy with lslv <lv>; SCHED POLICY now reports parallel/sequential:
LOGICAL VOLUME: lvm1lv VOLUME GROUP: lvm_test
LV IDENTIFIER: 00f62b1d00004c000000013fd4f00d26.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/stale
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 256 megabyte(s)
COPIES: 2 SCHED POLICY: parallel/sequential
LPs: 40 PPs: 80
STALE PPs: 40 BB POLICY: relocatable
INTER-POLICY: maximum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 1024
MOUNT POINT: /lvm1 LABEL: /lvm1
DEVICE UID: 0 DEVICE GID: 0
DEVICE PERMISSIONS: 432
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
INFINITE RETRY: no
DEVICESUBTYPE: DS_LVZ
COPY 1 MIRROR POOL: None
COPY 2 MIRROR POOL: None
COPY 3 MIRROR POOL: None
