"Configuring Logical Volumes (LVM2) for Device Mapper Mutipathing (DM-MPIO) on RHEL

4 and RHEL 5"

Question: Procedure to create LVM2 for DM-MPIO on RHEL 4 and RHEL 5


Environment: Product: VNX Unified/Block
Environment: Product: CLARiiON CX4 Series
Environment: OS: Red Hat Enterprise Linux (RHEL)
Fix: It is best to create Logical Volumes on DM-MPIO devices instead of SCSI
devices for the following reasons:

In a multipath environment, more than one SCSI device (SD) points to the same
physical device. Using LVM on SD devices results in duplicate entries being
reported during the LVM scan.

Depending on the order of the scan, the LVM volumes may become tied to particular
SD devices instead of to the multipath infrastructure. This can result in the
multipath infrastructure not providing failover capabilities in the event of a SAN
failure or device unavailability.

The SD names are not persistent across reboots or SAN changes.

By default, the LVM2 configuration does not scan multipath devices. For LVM2
operation in a multipath environment with DM-MPIO, the SD devices must be filtered
out and the device-mapper devices must be included in the volume scan operation. To
configure LVM2 for DM-MPIO on RHEL 4 and RHEL 5, follow these steps:

1. Add the following line to the /etc/lvm/lvm.conf file to enable scanning of
device-mapper block devices:

types = [ "device-mapper", 1 ]

2. Filter out all SD devices from the system and enable scanning of multipath
devices by adding the following line:

filter = [ "a/dev/mpath/.*/", "r/.*/" ]

3. If there are Logical Volumes on devices that are not controlled by multipath,
enable selective scanning of those devices. In the following example, the partition
sda2 contains a Logical Volume, but the SD device is not under multipath control.
To include this device in the scan, extend the filter as follows:

filter = [ "a/dev/sda2$/", "a/dev/mpath/.*/", "r/.*/" ]

4. Save the edits to /etc/lvm/lvm.conf.

5. Execute the lvmdiskscan command and ensure that the required devices are
scanned and that the LVM volume groups and partitions are available.
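
For reference, a minimal sketch of how the devices section of /etc/lvm/lvm.conf might look after steps 1 through 3 (the sda2 entry applies only if a local, non-multipath partition holds a Logical Volume, as in the example above):

devices {
    types = [ "device-mapper", 1 ]
    filter = [ "a/dev/sda2$/", "a/dev/mpath/.*/", "r/.*/" ]
}
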
Notes: For more information, refer to the EMC Linux Host Connectivity guide on
Powerlink or EMC Support.

------------------------------------------------------------------------------------------------------------------------

"Sistina LVM2 is reporting duplicate PV on RHEL"

Environment: OS: Red Hat Enterprise Linux AS release 4 (Nahant)


Environment: OS: Red Hat Enterprise Linux AS release 4 (Nahant Update 1)
Environment: OS: Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
Environment: OS: Red Hat Enterprise Linux AS release 4 (Nahant Update 3)
Environment: OS: Red Hat Enterprise Linux AS release 4 (Nahant Update 4)
Environment: EMC SW: PowerPath 4.4.0
Environment: EMC SW: PowerPath 4.5.1
Environment: Application SW: Sistina LVM 2.01.08 (2005-03-22)
Environment: Application SW: LVM2
Problem: Error msg: Found duplicate PV
Problem: Found duplicate PV NlBGUGkROMV5X3MmF6yoYEo0yEqrNS8U: using /dev/sdj1
not /dev/emcpowera1
Problem: Found duplicate PV uGldT1TVeK04ZL9uY84yJnqIR46yh0RZ: using
/dev/disk/by-path/pci-0000:01:01.0-fc-0x50060161106020b5:0x0000000000000000-part1
not /dev/emcpowera1
Root Cause: Linux native device names have not been filtered out of LVM's
configuration per the PowerPath 4.4 Release Notes (and the PowerPath 4.5
Installation Guide).
Fix: Modify the /etc/lvm/lvm.conf file to filter out native device names. See the
PowerPath 4.4 Release Notes (and PowerPath 4.5 Installation Guide) for details as
there are several different options depending on the host's configuration.
Notes: On RHEL4u4, Linux persistent device names are now active, so a different
filter needs to be used to avoid duplicate PVs: one that also filters out the
"/dev/disk/by-path/pci-*" devices.
Notes: Also, please make sure that the internal devices under LVM control are
not filtered out!
Notes: On RHEL 4.0, PowerPath 4.4.0 requires LVM2 version 2.01.08-1.0 and
above due to Bugzilla #151657.

Use rpm -qa to check the version of LVM2 on the system:

# rpm -qa | grep lvm2
lvm2-2.01.08-1.0.RHEL4

Notes: For example, with Red Hat Linux and the root file system not mounted on a
Logical Volume:

1) Modify the filter field in the /etc/lvm/lvm.conf file. Replace:

filter=["a/.*/"]

with:

filter=["r/sd.*/", "a/.*/"]

or

filter=["r/sd.*/", "r/disk.*/", "a/.*/"] *Use this filter with RHEL4u4

2) Rebuild the LVM2 cache. Enter:

vgscan -v

3) Verify that the filter field is working correctly. Run the command
below and verify that the "filtered" device nodes are not listed in
the command output. Enter:

lvmdiskscan

Notes: The following is an example of what will be seen without using the
correct filter with LVM2 and PowerPath for Linux 4.4.0. Note that this example is
NOT applicable to RHEL4u4; see the next note for RHEL4u4.

We start with a newly created file system on an LVM2 logical volume; see
Knowledgebase solution emc118890 for details on creating this.

# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 68389076 2641008 62274012 5% /
/dev/sda1 101086 12631 83236 14% /boot
none 1037500 0 1037500 0% /dev/shm
/dev/mapper/testVG-testLV
1032088 1284 978376 1% /test

Now review the existing volume groups with vgscan

# vgscan
Reading all physical volumes. This may take a while...
Found volume group "testVG" using metadata type lvm2

Now rebuild the LVM2 cache with vgscan -v

# vgscan -v
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
Finding volume group "testVG"
Found volume group "testVG" using metadata type lvm2

Now use the lvmdiskscan command to scan for all devices visible to LVM2

# lvmdiskscan
/dev/sdq [ 8.43 GB]
/dev/sda1 [ 101.94 MB]
/dev/emcpowera1 [ 8.43 GB] LVM physical volume
/dev/sda2 [ 2.00 GB]
/dev/root [ 66.26 GB]
/dev/sdb [ 11.25 MB]
/dev/sdr [ 63.21 GB]
/dev/emcpowerb [ 8.43 GB]
/dev/sds [ 134.85 GB]
/dev/emcpowerc [ 8.43 GB]
/dev/sdd [ 8.43 GB]
/dev/sdt [ 8.43 GB]
/dev/emcpowerd [ 8.43 GB]
/dev/sde [ 8.43 GB]
/dev/sdu [ 8.43 GB]
/dev/emcpowere [ 63.21 GB]
/dev/sdf [ 8.43 GB]
/dev/sdv [ 8.43 GB]
/dev/emcpowerf [ 134.85 GB]
/dev/sdg [ 63.21 GB]
/dev/emcpowerg [ 8.43 GB]
/dev/sdw1 [ 8.43 GB]
/dev/sdh [ 134.85 GB]
/dev/emcpowerh [ 8.43 GB]
/dev/sdi [ 8.43 GB]
/dev/emcpoweri [ 8.43 GB]
/dev/sdj [ 8.43 GB]
/dev/emcpowerj1 [ 8.43 GB]
/dev/sdk [ 8.43 GB]
/dev/sdl1 [ 8.43 GB]
/dev/sdm [ 11.25 MB]
/dev/sdo [ 8.43 GB]
/dev/sdp [ 8.43 GB]
27 disks
5 partitions
0 LVM physical volume whole disks
1 LVM physical volume

As you can see above, the filter is not in place, but PowerPath and LVM2 appear to
be working together nicely. Let's reboot and see what happens.

# reboot

Broadcast message from root (pts/1) (Thu Oct 6 09:33:42 2005):

The system is going down for reboot NOW!

After the reboot we log back in and we have a problem: vgscan and lvmdiskscan are
reporting duplicate physical volumes.

# vgscan
Reading all physical volumes. This may take a while...
Found duplicate PV kT5bpzX1DPK9wGxjjr80dRowFLeexudd: using /dev/sdc1 not
/dev/emcpowera1
Found duplicate PV kT5bpzX1DPK9wGxjjr80dRowFLeexudd: using /dev/sdn1 not
/dev/emcpowera1
Found volume group "testVG" using metadata type lvm2

# lvmdiskscan
/dev/sdq [ 8.43 GB]
/dev/sda1 [ 101.94 MB]
/dev/emcpowera1 [ 8.43 GB] LVM physical volume
/dev/sda2 [ 2.00 GB]
/dev/root [ 66.26 GB]
/dev/sdb [ 11.25 MB]
/dev/sdr [ 63.21 GB]
/dev/emcpowerb [ 8.43 GB]
/dev/sds [ 134.85 GB]
/dev/emcpowerc [ 8.43 GB]
Found duplicate PV kT5bpzX1DPK9wGxjjr80dRowFLeexudd: using /dev/sdc1 not
/dev/emcpowera1
/dev/sdc1 [ 8.43 GB] LVM physical volume
/dev/sdd [ 8.43 GB]
/dev/sdt [ 8.43 GB]
/dev/emcpowerd [ 8.43 GB]
/dev/sde [ 8.43 GB]
/dev/sdu [ 8.43 GB]
/dev/emcpowere [ 63.21 GB]
/dev/sdf [ 8.43 GB]
/dev/sdv [ 8.43 GB]
/dev/emcpowerf [ 134.85 GB]
/dev/sdg [ 63.21 GB]
/dev/emcpowerg [ 8.43 GB]
/dev/sdw1 [ 8.43 GB]
/dev/sdh [ 134.85 GB]
/dev/emcpowerh [ 8.43 GB]
/dev/sdi [ 8.43 GB]
/dev/emcpoweri [ 8.43 GB]
/dev/sdj [ 8.43 GB]
/dev/emcpowerj1 [ 8.43 GB]
/dev/sdk [ 8.43 GB]
/dev/sdl1 [ 8.43 GB]
/dev/sdm [ 11.25 MB]
Found duplicate PV kT5bpzX1DPK9wGxjjr80dRowFLeexudd: using /dev/sdn1 not
/dev/emcpowera1
/dev/sdn1 [ 8.43 GB] LVM physical volume
/dev/sdo [ 8.43 GB]
/dev/sdp [ 8.43 GB]
27 disks
5 partitions
0 LVM physical volume whole disks
3 LVM physical volumes

This can be fixed by putting the correct filter in place per the PowerPath for
Linux 4.4 Release Notes:

# grep filter /etc/lvm/lvm.conf | grep -v \#
filter = [ "a/.*/" ]

# vi /etc/lvm/lvm.conf

# grep filter /etc/lvm/lvm.conf | grep -v \#
filter=["r/sd.*/", "a/.*/"]

Now vgscan and lvmdiskscan no longer generate duplicate PV errors:

# vgscan
Reading all physical volumes. This may take a while...
Found volume group "testVG" using metadata type lvm2

# lvmdiskscan
/dev/emcpowera1 [ 8.43 GB] LVM physical volume
/dev/root [ 66.26 GB]
/dev/emcpowerb [ 8.43 GB]
/dev/emcpowerc [ 8.43 GB]
/dev/emcpowerd [ 8.43 GB]
/dev/emcpowere [ 63.21 GB]
/dev/emcpowerf [ 134.85 GB]
/dev/emcpowerg [ 8.43 GB]
/dev/emcpowerh [ 8.43 GB]
/dev/emcpoweri [ 8.43 GB]
/dev/emcpowerj1 [ 8.43 GB]
9 disks
1 partition
0 LVM physical volume whole disks
1 LVM physical volume

Notes: The following is an example of what will be seen without using the
correct filter with LVM2, RHEL4u4, and PowerPath for Linux 4.5.1.

First, check the environment:

[root@L2 lvm]# uname -a
Linux L2.lss.emc.com 2.6.9-42.ELsmp #1 SMP Wed Jul 12 23:27:17 EDT 2006 i686 i686 i386 GNU/Linux

[root@L2 lvm]# cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 4)

[root@L2 lvm]# powermt version
EMC powermt for PowerPath (c) Version 4.5.1 (build 22)

Note what happens with the default LVM filter: we see both emcpower and native
devices.

[root@L2 lvm]# grep filter /etc/lvm/lvm.conf | grep -v \#
filter = [ "a/.*/" ]

[root@L2 lvm]# lvmdiskscan
/dev/ramdisk [ 16.00 MB]
/dev/root [ 66.25 GB]
/dev/ram [ 16.00 MB]
/dev/sda1 [ 101.94 MB]
/dev/emcpowera1 [ 569.46 GB]
/dev/dm-1 [ 1.94 GB]
/dev/ram2 [ 16.00 MB]
/dev/sda2 [ 68.26 GB] LVM physical volume
/dev/ram3 [ 16.00 MB]
/dev/ram4 [ 16.00 MB]
/dev/ram5 [ 16.00 MB]
/dev/ram6 [ 16.00 MB]
/dev/ram7 [ 16.00 MB]
/dev/ram8 [ 16.00 MB]
/dev/ram9 [ 16.00 MB]
/dev/ram10 [ 16.00 MB]
/dev/ram11 [ 16.00 MB]
/dev/ram12 [ 16.00 MB]
/dev/ram13 [ 16.00 MB]
/dev/ram14 [ 16.00 MB]
/dev/ram15 [ 16.00 MB]
/dev/sdb1 [ 19.99 GB]
/dev/emcpowerb1 [ 20.00 GB]
/dev/sdc1 [ 20.00 GB]
/dev/emcpowerc1 [ 19.99 GB]
/dev/sdd1 [ 569.46 GB]
/dev/sde1 [ 19.99 GB]
/dev/sdf1 [ 20.00 GB]
/dev/sdg1 [ 569.46 GB]
/dev/sdh1 [ 19.99 GB]
/dev/sdi1 [ 20.00 GB]
/dev/sdj1 [ 569.46 GB]
/dev/sdk1 [ 19.99 GB]
/dev/sdl1 [ 20.00 GB]
/dev/sdm1 [ 569.46 GB]
3 disks
31 partitions
0 LVM physical volume whole disks
1 LVM physical volume

Note that when using the filter recommended for earlier versions of RHEL, we now
see Linux persistent device names:

[root@L2 lvm]# vi /etc/lvm/lvm.conf

[root@L2 lvm]# grep filter /etc/lvm/lvm.conf | grep -v \#
filter = [ "r/sd.*/", "a/.*/" ]

[root@L2 lvm]# vgscan -v
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
Finding volume group "VolGroup00"
Found volume group "VolGroup00" using metadata type lvm2

[root@L2 lvm]# lvmdiskscan
/dev/ramdisk [ 16.00 MB]
/dev/root [ 66.25 GB]
/dev/ram [ 16.00 MB]
/dev/disk/by-path/pci-0000:05:06.0-scsi-0:0:1:0-part1 [ 101.94 MB]
/dev/emcpowera1 [ 569.46 GB]
/dev/dm-1 [ 1.94 GB]
/dev/ram2 [ 16.00 MB]
/dev/disk/by-path/pci-0000:05:06.0-scsi-0:0:1:0-part2 [ 68.26 GB] LVM physical volume
/dev/ram3 [ 16.00 MB]
/dev/ram4 [ 16.00 MB]
/dev/ram5 [ 16.00 MB]
/dev/ram6 [ 16.00 MB]
/dev/ram7 [ 16.00 MB]
/dev/ram8 [ 16.00 MB]
/dev/ram9 [ 16.00 MB]
/dev/ram10 [ 16.00 MB]
/dev/ram11 [ 16.00 MB]
/dev/ram12 [ 16.00 MB]
/dev/ram13 [ 16.00 MB]
/dev/ram14 [ 16.00 MB]
/dev/ram15 [ 16.00 MB]
/dev/disk/by-path/pci-0000:02:06.0-fc-0x5006016a0060041b:0x0000000000000000-part1 [ 19.99 GB]
/dev/emcpowerb1 [ 20.00 GB]
/dev/disk/by-path/pci-0000:02:06.0-fc-0x5006016a0060041b:0x0001000000000000-part1 [ 20.00 GB]
/dev/emcpowerc1 [ 19.99 GB]
/dev/disk/by-path/pci-0000:02:06.0-fc-0x5006016a0060041b:0x0003000000000000-part1 [ 569.46 GB]
/dev/disk/by-path/pci-0000:02:06.0-fc-0x500601620060041b:0x0000000000000000-part1 [ 19.99 GB]
/dev/disk/by-path/pci-0000:02:06.0-fc-0x500601620060041b:0x0001000000000000-part1 [ 20.00 GB]
/dev/disk/by-path/pci-0000:02:06.0-fc-0x500601620060041b:0x0003000000000000-part1 [ 569.46 GB]
/dev/disk/by-path/pci-0000:02:06.1-fc-0x5006016b0060041b:0x0000000000000000-part1 [ 19.99 GB]
/dev/disk/by-path/pci-0000:02:06.1-fc-0x5006016b0060041b:0x0001000000000000-part1 [ 20.00 GB]
/dev/disk/by-path/pci-0000:02:06.1-fc-0x5006016b0060041b:0x0003000000000000-part1 [ 569.46 GB]
/dev/disk/by-path/pci-0000:02:06.1-fc-0x500601630060041b:0x0000000000000000-part1 [ 19.99 GB]
/dev/disk/by-path/pci-0000:02:06.1-fc-0x500601630060041b:0x0001000000000000-part1 [ 20.00 GB]
/dev/disk/by-path/pci-0000:02:06.1-fc-0x500601630060041b:0x0003000000000000-part1 [ 569.46 GB]
3 disks
31 partitions
0 LVM physical volume whole disks
1 LVM physical volume

Using this new filter allows us to eliminate the duplicate PVs:

[root@L2 lvm]# vi lvm.conf

[root@L2 lvm]# grep filter /etc/lvm/lvm.conf | grep -v \#
filter = [ "r/sd.*/", "r/disk.*/", "a/.*/" ]

[root@L2 lvm]# vgscan -v
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
No volume groups found

[root@L2 lvm]# lvmdiskscan
/dev/ram0 [ 16.00 MB]
/dev/root [ 66.25 GB]
/dev/ram [ 16.00 MB]
/dev/emcpowera1 [ 569.46 GB]
/dev/dm-1 [ 1.94 GB]
/dev/ram2 [ 16.00 MB]
/dev/ram3 [ 16.00 MB]
/dev/ram4 [ 16.00 MB]
/dev/ram5 [ 16.00 MB]
/dev/ram6 [ 16.00 MB]
/dev/ram7 [ 16.00 MB]
/dev/ram8 [ 16.00 MB]
/dev/ram9 [ 16.00 MB]
/dev/ram10 [ 16.00 MB]
/dev/ram11 [ 16.00 MB]
/dev/ram12 [ 16.00 MB]
/dev/ram13 [ 16.00 MB]
/dev/ram14 [ 16.00 MB]
/dev/ram15 [ 16.00 MB]
/dev/emcpowerb1 [ 20.00 GB]
/dev/emcpowerc1 [ 19.99 GB]
2 disks
19 partitions
0 LVM physical volume whole disks
0 LVM physical volumes

------------------------------------------------------------------------------------------------------------------------

"Why do I see 'found duplicate pv' warnings when using LVM with multipathing
software in Red Hat Enterprise Linux?"

Question: Why do I see 'found duplicate pv' warnings when using LVM with
multipathing software in Red Hat Enterprise Linux?
Environment: OS: Red Hat Enterprise Linux Server release 5.4 (Tikanga)
Environment: EMC SW: PowerPath 5.3.1
Problem: # vgcreate vgSanData /dev/emcpowera
Found duplicate PV IBq2Fu90JZYA6tn5hk9dDha8maGcNvin: using /dev/sdd not
/dev/emcpowera
Found duplicate PV IBq2Fu90JZYA6tn5hk9dDha8maGcNvin: using /dev/sde not /dev/sdd
Found duplicate PV IBq2Fu90JZYA6tn5hk9dDha8maGcNvin: using /dev/sdh not /dev/sde
Found duplicate PV IBq2Fu90JZYA6tn5hk9dDha8maGcNvin: using /dev/sdi not /dev/sdh
Volume group "vgSanData" successfully created
Root Cause: With a default configuration, Logical Volume Manager (LVM) scans
all attached disks and determines which of them contain physical volumes. When
using device-mapper-multipath or other multipathing software such as EMC PowerPath
or Hitachi Dynamic Link Manager (HDLM), each path to a particular LUN is registered
as a different SCSI device, such as /dev/sdb, /dev/sdc, etc. The multipathing
software will then create a new device that maps to those individual paths such
as /dev/mapper/mpath1 (device-mapper-multipath), /dev/emcpowerb (EMC PowerPath), or
/dev/sddlmab (HDLM). Since each device points to the same LUN, they all contain
the same LVM metadata and thus when they are scanned they appear to be duplicates.
Upon running any LVM command, warnings such as the above may be printed.
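
For example, with EMC PowerPath the powermt utility can be used to confirm which native /dev/sd* paths sit behind each emcpower pseudo device, which helps when deciding what the LVM filter needs to exclude (the output varies by PowerPath version, so it is not reproduced here):

# powermt display dev=all
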
Fix: To ensure that LVM only scans the preferred multipath devices and not the
individual paths, a filter can be configured in /etc/lvm/lvm.conf.

By default this filter looks like: filter = [ "a/.*/" ]

When configuring a custom filter, the filter shown above and any other filter lines
should either be removed or commented out with a '#', because LVM will only honor
one filter line.
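
For instance, a minimal sketch of the relevant lines in /etc/lvm/lvm.conf, with the default filter commented out and a custom device-mapper-multipath filter in place (the actual expressions should be chosen as described below):

# filter = [ "a/.*/" ]
filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
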
The syntax of the filter accepts multiple regular expressions, enclosed in either
"a| |" (add) or "r| |" (remove), separated by commas. Any device matching an
'add' regular expression is included in the scan, and any device matching a
'remove' expression is omitted; the expressions are evaluated in order, and the
first match determines the result.
In most cases, it is beneficial to only add devices that are needed, and remove
everything else. This will prevent LVM from scanning devices which may take a long
time to respond, such as the CD-ROM or other non-block devices.
NOTE: If any local storage devices contain physical volumes, ensure that they are
included in the filter in addition to the multipath devices.

Example:
This is an example of a filter, which only scans device-mapper-multipath devices.
filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]

Note that the above filter assumes the usage of user_friendly_names in the
multipath configuration. If physical volumes reside on local CCISS disks, they
should be added to the filter as well:
filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/cciss/.*|", "r|.*|" ]

To include a volume on the second partition of the first local IDE disk in addition
to EMC PowerPath devices:
filter = [ "a|/dev/emcpower.*|", "a|/dev/hda2$|", "r|.*|" ]

After making those changes, a rescan should be done to ensure all devices are
properly seen:
# pvscan
# vgscan

NOTE: If the root filesystem resides on a logical volume, be certain that the above
scan commands list all physical volumes included in that volume group. If they are
not listed, do not reboot until a proper filter has been configured that allows for
scanning the necessary devices.
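
As an illustration of this check, assuming the root volume group is VolGroup00 on /dev/sda2 (as in the RHEL 4 example above), the rescan output must still include a line of the form:

# pvscan
  PV /dev/sda2   VG VolGroup00   lvm2 [...]

If that physical volume is missing, the filter is rejecting a device that the root volume group depends on and must be corrected before rebooting.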

Once the desired filter is configured, it is recommended to rebuild the initrd so
that only the necessary devices are scanned upon reboot. For more information on
rebuilding the initrd, see "How do I rebuild the initial ramdisk image?"
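
On RHEL 4 and RHEL 5, one common way to do this (assuming the default initrd path; keep a backup copy of the existing image first) is:

# cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
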
Notes: http://kbase.redhat.com/faq/docs/DOC-2991
