
LVM (Logical Volume Manager):

Logical volume management is a widely used technique for deploying logical rather than
physical storage. With LVM, "logical" partitions can span multiple physical hard drives and
can be resized (unlike traditional "raw" ext3 partitions). A physical disk is divided into one
or more physical volumes (PVs), and volume groups (VGs) are created by combining PVs;
a VG can be an aggregate of PVs from multiple physical disks. Logical volumes (LVs) are
then carved out of a VG.
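The resulting PV -> VG -> LV stack can be inspected at any time with the standard reporting commands:

# pvs (or pvdisplay) - list physical volumes
# vgs (or vgdisplay) - list volume groups and their free space
# lvs (or lvdisplay) - list logical volumes
# lsblk - show how the logical volumes sit on top of the underlying disks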
Advantages:

Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server.

Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible.

Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing).

Logical volume snapshots can be created to represent the exact state of the volume at a certain point in time, allowing accurate backups to proceed simultaneously with regular system operation.

Create PV (Physical Volume)

# pvcreate /dev/sda1 (or # pvcreate /dev/sda{1,2,3} to initialize several partitions at once)
# pvdisplay or # pvs (to verify)

Create VG (Volume Group)

# vgcreate prasadvg /dev/sda1 (or # vgcreate prasadvg /dev/sda{1,2,3})
# vgdisplay or # vgs (to verify)

Create LV (Logical Volume)

# lvcreate -L 1024M -n newlvname prasadvg
# lvdisplay or # lvs (to verify)

A striped LV can also be created; here -i 2 stripes the data across two PVs and -I 4 sets a 4 KB stripe size:

# lvcreate -i 2 -I 4 --size 255G -n logical_volume_one_striped volume_group_one

# mkfs -t ext4 /dev/prasadvg/newlvname


# mkdir /lvmdir

# mount /dev/prasadvg/newlvname /lvmdir

Make an entry in /etc/fstab to mount the file system permanently.
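For example, a minimal fstab line for the LV created above (assuming the ext4 file system and the /lvmdir mount point from the previous steps) could look like this:

/dev/prasadvg/newlvname /lvmdir ext4 defaults 0 0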

How can we decrease LV space?

# swapoff -a (only needed if the volume being shrunk is in use as swap)

Suppose we want to remove some space from an already mounted file system, in this case the
vg_altipaydb-lv_swap logical volume.

Step 1: umount /dev/mapper/vg_altipaydb-lv_swap

Step 2: e2fsck -f /dev/mapper/vg_altipaydb-lv_swap

Step 3: resize2fs /dev/mapper/vg_altipaydb-lv_swap 8G


Step 4: lvreduce -L 8G /dev/mapper/vg_altipaydb-lv_swap
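On LVM2 releases that support the --resizefs (-r) option, steps 2-4 can be combined after the umount, letting lvreduce shrink the file system and the LV together (a sketch using the same volume as above):

# lvreduce -r -L 8G /dev/mapper/vg_altipaydb-lv_swap

Remount the file system afterwards and confirm the new size with df -h.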

Validation:

# lvs
# vgs

How can we increase an LV's space online?

Step 1:

Check whether enough free space is available in the VG using # vgs or # vgdisplay. If
enough space is available, proceed with the commands below.

Step 2:

lvextend -L +9000M /dev/mapper/vg_altipaydb-lv_root

Step 3:

resize2fs /dev/mapper/vg_altipaydb-lv_root

Step 4 (optional):

# e2fsck -f /dev/mapper/vg_altipaydb-lv_root

Note that e2fsck can only be run while the file system is unmounted, so this check is normally skipped for a purely online resize.
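On LVM2 releases that support --resizefs (-r), steps 2 and 3 can be combined into a single command (a sketch using the same volume as above):

# lvextend -r -L +9000M /dev/mapper/vg_altipaydb-lv_root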

If enough space is not available in the VG, add a new PV to the VG:

# pvcreate /dev/sda5

# vgextend prasadvg /dev/sda5
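After extending the VG, confirm that the additional free extents are visible before growing the LV:

# vgs prasadvg
# vgdisplay prasadvg | grep Free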

How to add or remove a PV from a VG?

Ans : vgextend volume_group_one /dev/sda5


vgreduce volume_group_one /dev/hda5

How to remove a LV?


Step 1: # umount /lvmdir
Step 2: Remove the corresponding entry from /etc/fstab
Step 3: # lvremove /dev/prasadvg/newlv
What is the latest LVM version?

Ans: lvm2
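The version that is actually installed can be checked with:

# lvm version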
What is the difference between LVM version 1 & 2?

Features                                    LVM1                  LVM2
RHEL AS 2.1 support                         No                    No
RHEL 3 support                              Yes                   No
RHEL 4 support                              No                    Yes
Transactional metadata for fast recovery    No                    Yes
Shared volume mounts with GFS               No                    Yes
Cluster Suite failover supported            Yes                   Yes
Striped volume expansion                    No                    Yes
Max number of PVs, LVs                      256 PVs, 256 LVs      2**32 PVs, 2**32 LVs
Max device size                             2 Terabytes           8 Exabytes (64-bit CPUs)
Volume mirroring support                    No                    Yes, in Fall 2005

How to take a snapshot in LVM?

Step 1: # lvcreate -L 592M -s -n Prasad-snap /dev/groupname/Prasad-lv

Step 2: Create a directory on which to mount the snapshot image.

# mkdir /Prasad-snapdir

Step 3: Mount the snapshot image on /Prasad-snapdir.

# mount /dev/groupname/Prasad-snap /Prasad-snapdir
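Once the backup is done, the snapshot can simply be dropped, or merged back with lvconvert if the goal is to roll the origin LV back to the snapshotted state. A sketch using the names above; the merge only completes after the origin is unmounted and reactivated:

# umount /Prasad-snapdir
# lvremove /dev/groupname/Prasad-snap (discard the snapshot)
# lvconvert --merge /dev/groupname/Prasad-snap (or roll the origin back to the snapshot state)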

LAB:

Increase LV online:

[root@testos-02-bash ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root 19G 4.7G 13G 27% /

tmpfs 246M 0 246M 0% /dev/shm

/dev/sda1 485M 31M 429M 7% /boot

/dev/sdb6 494M 12M 457M 3% /data

/dev/sr0 4.2G 4.2G 0 100% /media

/dev/mapper/prasadvg-prasadlv01 678M 68M 576M 11% /lv01

/dev/mapper/prasadvg-prasadlv02 291M 68M 209M 25% /lv02


[root@testos-02-bash ~]# resize2fs /dev/prasadvg/prasadlv02 500M

resize2fs 1.41.12 (17-May-2010)

The containing partition (or device) is only 307200 (1k) blocks.

You requested a new size of 512000 blocks.

[root@testos-02-bash ~]# lvextend -L +700M /dev/prasadvg/prasadlv02

Extending logical volume prasadlv02 to 1000.00 MiB

Logical volume prasadlv02 successfully resized

[root@testos-02-bash ~]# resize2fs /dev/prasadvg/prasadlv02

resize2fs 1.41.12 (17-May-2010)

Filesystem at /dev/prasadvg/prasadlv02 is mounted on /lv02; on-line resizing required

old desc_blocks = 2, new_desc_blocks = 4

Performing an on-line resize of /dev/prasadvg/prasadlv02 to 1024000 (1k) blocks.

The filesystem on /dev/prasadvg/prasadlv02 is now 1024000 blocks long.

[root@testos-02-bash ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root 19G 4.7G 13G 27% /

tmpfs 246M 0 246M 0% /dev/shm

/dev/sda1 485M 31M 429M 7% /boot

/dev/sdb6 494M 12M 457M 3% /data

/dev/sr0 4.2G 4.2G 0 100% /media

/dev/mapper/prasadvg-prasadlv01 678M 68M 576M 11% /lv01

/dev/mapper/prasadvg-prasadlv02 969M 68M 852M 8% /lv02

Decrease LV size from already mounted file system:

[root@testos-02-bash ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root 19G 4.7G 13G 27% /

tmpfs 246M 0 246M 0% /dev/shm


/dev/sda1 485M 31M 429M 7% /boot

/dev/sdb6 494M 12M 457M 3% /data

/dev/sr0 4.2G 4.2G 0 100% /media

/dev/mapper/prasadvg-prasadlv01 678M 68M 576M 11% /lv01

/dev/mapper/prasadvg-prasadlv02 969M 68M 852M 8% /lv02

[root@testos-02-bash ~]# umount /lv02/

[root@testos-02-bash ~]# e2fsck -f /dev/prasadvg/prasadlv02

e2fsck 1.41.12 (17-May-2010)

Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure

Pass 3: Checking directory connectivity

Pass 4: Checking reference counts

Pass 5: Checking group summary information

/dev/prasadvg/prasadlv02: 2301/254000 files (0.4% non-contiguous), 101561/1024000 blocks

[root@testos-02-bash ~]# resize2fs /dev/prasadvg/prasadlv02 700M

resize2fs 1.41.12 (17-May-2010)

Resizing the filesystem on /dev/prasadvg/prasadlv02 to 716800 (1k) blocks.

The filesystem on /dev/prasadvg/prasadlv02 is now 716800 blocks long.

[root@testos-02-bash ~]# mount /lv02/

[root@testos-02-bash ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root 19G 4.7G 13G 27% /

tmpfs 246M 0 246M 0% /dev/shm

/dev/sda1 485M 31M 429M 7% /boot

/dev/sdb6 494M 12M 457M 3% /data

/dev/sr0 4.2G 4.2G 0 100% /media


/dev/mapper/prasadvg-prasadlv01 678M 68M 576M 11% /lv01

/dev/mapper/prasadvg-prasadlv02 678M 68M 576M 11% /lv02

[root@testos-02-bash ~]#

Take Snapshot of the LV:

[root@testos-02-bash /]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root 19G 4.7G 13G 27% /

tmpfs 246M 0 246M 0% /dev/shm

/dev/sda1 485M 31M 429M 7% /boot

/dev/sdb6 494M 12M 457M 3% /data

/dev/sr0 4.2G 4.2G 0 100% /media

/dev/mapper/prasadvg-prasadlv01 678M 68M 576M 11% /lv01

/dev/mapper/prasadvg-prasadlv02 678M 68M 576M 11% /lv02

[root@testos-02-bash /]# lvcreate -L200M -s -n snapoflv02-11-11-2014 /dev/prasadvg/prasadlv02

Logical volume "snapoflv02-11-11-2014" created

[root@testos-02-bash /]# cd /dev/prasadvg/

[root@testos-02-bash prasadvg]# ls -ltr

total 0

lrwxrwxrwx. 1 root root 7 Nov 10 10:39 prasadlv01 -> ../dm-2

lrwxrwxrwx. 1 root root 7 Nov 10 11:45 prasadlv02 -> ../dm-3

lrwxrwxrwx. 1 root root 7 Nov 10 11:45 snaplv02lv -> ../dm-4

lrwxrwxrwx. 1 root root 7 Nov 10 11:45 snapoflv02-11-11-2014 -> ../dm-7

[root@testos-02-bash prasadvg]# du -hs snapoflv02-11-11-2014

0 snapoflv02-11-11-2014

[root@testos-02-bash prasadvg]# du -hs ../dm-7

0 ../dm-7
[root@testos-02-bash prasadvg]# mkdir /snapoflv02

[root@testos-02-bash prasadvg]# mount /dev/prasadvg/snapoflv02-11-11-2014 /snapoflv02/

[root@testos-02-bash prasadvg]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root 19G 4.7G 13G 27% /

tmpfs 246M 0 246M 0% /dev/shm

/dev/sda1 485M 31M 429M 7% /boot

/dev/sdb6 494M 12M 457M 3% /data

/dev/sr0 4.2G 4.2G 0 100% /media

/dev/mapper/prasadvg-prasadlv01 678M 68M 576M 11% /lv01

/dev/mapper/prasadvg-prasadlv02 678M 68M 576M 11% /lv02

/dev/mapper/prasadvg-snapoflv02--11--11--2014 678M 68M 576M 11% /snapoflv02

[root@testos-02-bash prasadvg]#

Mirroring in LVM:

[root@testos-02-bash Packages]# lvcreate -L 500M -m 1 -n prasadmirror prasadvg

Logical volume "prasadmirror" created

[root@testos-02-bash Packages]# lvs -a

LV VG Attr LSize Origin Snap% Move Log Copy% Convert

lv_root VolGroup -wi-ao 18.54g

lv_swap VolGroup -wi-ao 992.00m

prasadlv01 prasadvg -wi-ao 700.00m

prasadlv02 prasadvg -wi-ao 1000.00m

prasadmirror prasadvg mwi-a- 500.00m prasadmirror_mlog 24.80

[prasadmirror_mimage_0] prasadvg Iwi-ao 500.00m

[prasadmirror_mimage_1] prasadvg Iwi-ao 500.00m

[prasadmirror_mlog] prasadvg lwi-ao 4.00m


[root@testos-02-bash Packages]# lvs -a

LV VG Attr LSize Origin Snap% Move Log Copy% Convert

lv_root VolGroup -wi-ao 18.54g

lv_swap VolGroup -wi-ao 992.00m

prasadlv01 prasadvg -wi-ao 700.00m

prasadlv02 prasadvg -wi-ao 1000.00m

prasadmirror prasadvg mwi-a- 500.00m prasadmirror_mlog 48.00

[prasadmirror_mimage_0] prasadvg Iwi-ao 500.00m

[prasadmirror_mimage_1] prasadvg Iwi-ao 500.00m

[prasadmirror_mlog] prasadvg lwi-ao 4.00m

[root@testos-02-bash Packages]# lvs -a

LV VG Attr LSize Origin Snap% Move Log Copy% Convert

lv_root VolGroup -wi-ao 18.54g

lv_swap VolGroup -wi-ao 992.00m

prasadlv01 prasadvg -wi-ao 700.00m

prasadlv02 prasadvg -wi-ao 1000.00m

prasadmirror prasadvg mwi-a- 500.00m prasadmirror_mlog 100.00

[prasadmirror_mimage_0] prasadvg iwi-ao 500.00m

[prasadmirror_mimage_1] prasadvg iwi-ao 500.00m

[prasadmirror_mlog] prasadvg lwi-ao 4.00m

[root@testos-02-bash Packages]# lvs -a -o +devices

LV VG Attr LSize Origin Snap% Move Log Copy% Convert Devices

lv_root VolGroup -wi-ao 18.54g /dev/sda2(0)

lv_swap VolGroup -wi-ao 992.00m /dev/sda2(4746)

prasadlv01 prasadvg -wi-ao 700.00m /dev/sdc1(0)

prasadlv01 prasadvg -wi-ao 700.00m /dev/sdc1(250)

prasadlv01 prasadvg -wi-ao 700.00m /dev/sdc2(0)


prasadlv02 prasadvg -wi-ao 1000.00m /dev/sdc1(125)

prasadlv02 prasadvg -wi-ao 1000.00m /dev/sdc2(42)

prasadmirror prasadvg mwi-a- 500.00m prasadmirror_mlog 100.00 prasadmirror_mimage_0(0),prasadmirror_mimage_1(0)

[prasadmirror_mimage_0] prasadvg iwi-ao 500.00m /dev/sdc5(0)

[prasadmirror_mimage_1] prasadvg iwi-ao 500.00m /dev/sdc6(0)

[prasadmirror_mlog] prasadvg lwi-ao 4.00m /dev/sdc12(0)

[root@testos-02-bash Packages]#

[root@testos-02-bash Packages]# mkdir /mirror

[root@testos-02-bash Packages]# mkfs -t ext3 /dev/prasadvg/prasadmirror

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=1024 (log=0)

Fragment size=1024 (log=0)

Stride=0 blocks, Stripe width=0 blocks

128016 inodes, 512000 blocks

25600 blocks (5.00%) reserved for the super user

First data block=1

Maximum filesystem blocks=67633152

63 block groups

8192 blocks per group, 8192 fragments per group

2032 inodes per group

Superblock backups stored on blocks:

8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done

Creating journal (8192 blocks): done


Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@testos-02-bash Packages]# mount /dev/prasadvg/prasadmirror /mirror/

[root@testos-02-bash Packages]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root 19G 4.7G 13G 27% /

tmpfs 246M 0 246M 0% /dev/shm

/dev/sda1 485M 31M 429M 7% /boot

/dev/sdb6 494M 12M 457M 3% /data

/dev/sr0 4.2G 4.2G 0 100% /media

/dev/mapper/prasadvg-prasadlv01 678M 68M 576M 11% /lv01

/dev/mapper/prasadvg-prasadlv02 678M 68M 576M 11% /lv02

/dev/mapper/prasadvg-prasadmirror 485M 11M 449M 3% /mirror
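An existing linear LV can also be converted into a mirrored LV after the fact, provided the VG has enough free extents on another PV (a sketch using one of the LVs created above):

# lvconvert -m 1 /dev/prasadvg/prasadlv01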

RAID: RAID stands for Redundant Array of Inexpensive (Independent) Disks. In most
situations you will be using one of the following four RAID levels.

RAID 0
RAID 1
RAID 5
RAID 10 (also known as RAID 1+0)

RAID LEVEL 0
Following are the key points to remember for RAID level 0.

Minimum 2 disks.
Excellent performance ( as blocks are striped ).
No redundancy ( no mirror, no parity ).
Don't use this for any critical system.

RAID LEVEL 1

Following are the key points to remember for RAID level 1.

Minimum 2 disks.
Good performance ( no striping, no parity ).
Excellent redundancy ( as blocks are mirrored ).

RAID LEVEL 5
Following are the key points to remember for RAID level 5.

Minimum 3 disks.
Good performance ( as blocks are striped ).
Good redundancy ( distributed parity ).
Best cost-effective option providing both performance and redundancy. Use this for a
DB that is heavily read-oriented; write operations will be slow.

RAID LEVEL 10

Following are the key points to remember for RAID level 10 (see the RAID 1+0 section later in this document for details).

Minimum 4 disks.
Excellent performance ( blocks are mirrored and then striped ).
Excellent redundancy ( can survive multiple drive failures, as long as both disks of a mirrored pair do not fail ).

LAB:

RAID level implementation using the mdadm software:

Step 1: # fdisk -l (the partition type should be fd, Linux raid autodetect)

/dev/sdc13 759 823 522081 fd Linux raid autodetect

/dev/sdc14 824 888 522081 fd Linux raid autodetect

/dev/sdc15 889 953 522081 fd Linux raid autodetect

/dev/sdc16 954 1018 522081 fd Linux raid autodetect

Step 2:

[root@testos-02-bash Packages]# mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sdc13 /dev/sdc14 /dev/sdc15

mdadm: Defaulting to version 1.2 metadata

mdadm: array /dev/md5 started.


[root@testos-02-bash Packages]# mdadm --detail /dev/md5

/dev/md5 : Version : 1.2


Creation Time : Mon Nov 10 13:19:56 2014
Raid Level : raid5
Array Size : 1043456 (1019.17 MiB 1068.50 MB)
Used Dev Size : 521728 (509.59 MiB 534.25 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Mon Nov 10 13:20:07 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : testos-02:5 (local to host testos-02)
UUID : d3ab4e57:54c2d050:56743083:c1a6a334
Events : 18
Number Major Minor RaidDevice State
0 8 45 0 active sync /dev/sdc13

1 8 46 1 active sync /dev/sdc14

3 8 47 2 active sync /dev/sdc15

[root@testos-02-bash Packages]# mke2fs -j /dev/md5

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=128 blocks, Stripe width=256 blocks

65280 inodes, 260864 blocks

13043 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=268435456


8 block groups

32768 blocks per group, 32768 fragments per group

8160 inodes per group

Superblock backups stored on blocks: 32768, 98304, 163840, 229376

Writing inode tables: done

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@testos-02-bash Packages]# mkdir /raid5

[root@testos-02-bash Packages]# mount /dev/md5 /raid5/
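To make the array and its mount persistent across reboots, the array definition is usually recorded in /etc/mdadm.conf and the mount added to /etc/fstab (a sketch; the configuration file location can vary by distribution):

# mdadm --detail --scan >> /etc/mdadm.conf
# echo '/dev/md5 /raid5 ext3 defaults 0 0' >> /etc/fstab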

Marking a disk as failed in the RAID 5 array:

[root@testos-02-bash Packages]# mdadm /dev/md5 -f /dev/sdc14

mdadm: set /dev/sdc14 faulty in /dev/md5

[root@testos-02-bash Packages]# mdadm --detail /dev/md5

/dev/md5 : Version : 1.2


Creation Time : Mon Nov 10 13:19:56 2014
Raid Level : raid5
Array Size : 1043456 (1019.17 MiB 1068.50 MB)
Used Dev Size : 521728 (509.59 MiB 534.25 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Mon Nov 10 14:12:30 2014
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : testos-02:5 (local to host testos-02)
UUID : d3ab4e57:54c2d050:56743083:c1a6a334
Events : 19
Number Major Minor RaidDevice State
0 8 45 0 active sync /dev/sdc13

1 0 0 1 removed

3 8 47 2 active sync /dev/sdc15

1 8 46 - faulty spare /dev/sdc14

Adding an additional partition to the RAID 5 array:

[root@testos-02-bash Packages]# mdadm /dev/md5 -a /dev/sdc16

mdadm: added /dev/sdc16

[root@testos-02-bash Packages]#

[root@testos-02-bash Packages]# mdadm --detail /dev/md5

/dev/md5 : Version : 1.2

Creation Time : Mon Nov 10 13:19:56 2014

Raid Level : raid5

Array Size : 1043456 (1019.17 MiB 1068.50 MB)

Used Dev Size : 521728 (509.59 MiB 534.25 MB)

Raid Devices : 3

Total Devices : 4

Persistence : Superblock is persistent

Update Time : Mon Nov 10 14:18:13 2014

State : clean

Active Devices : 3

Working Devices : 3

Failed Devices : 1

Spare Devices : 0

Layout : left-symmetric

Chunk Size : 512K

Name : testos-02:5 (local to host testos-02)


UUID : d3ab4e57:54c2d050:56743083:c1a6a334

Events : 40

Number Major Minor RaidDevice State

0 8 45 0 active sync /dev/sdc13

4 259 0 1 active sync /dev/sdc16

3 8 47 2 active sync /dev/sdc15

1 8 46 - faulty spare /dev/sdc14
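While a newly added partition is being synchronised into the array, the rebuild progress can be watched through /proc/mdstat:

# cat /proc/mdstat
# watch cat /proc/mdstat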

1. How to remove a faulty disk or partition from the RAID array?

# mdadm /dev/md5 --remove /dev/sdc14 (the device must already be marked faulty, as above)

2. How to rename a VG?

# vgrename oldvgname newvgname

3. How to merge one volume group into another?

# vgmerge raidvg test

Here the test volume group is merged into raidvg (the test VG must be inactive before merging).

4. How to rename an LV?

# lvrename vgname oldlvname newlvname
# lvrename /dev/raidvg/testlv /dev/raidvg/tlv


RAID:

What is RAID?

RAID stands for Redundant Array of Inexpensive Disks, later reinterpreted as Redundant
Array of Independent Disks. The technology is now used in almost all IT organizations
looking for data redundancy and better performance. It combines multiple available disks
into one or more logical drives and gives you the ability to survive one or more drive
failures, depending on the RAID level used.

Why use RAID?

With the ever-increasing demand for storage, a prime concern for organizations is the safety
of their data. Safety here does not mean protection against malicious attacks, but rather
against hard disk failures and similar accidents that can destroy data. In such scenarios RAID
works its magic, giving you redundancy and the ability to get all your data back in very little
time.

Levels

As new technologies were introduced, new RAID levels came into the picture, each with its
own improvements, giving organizations the opportunity to select the RAID model that best
fits their workload.

Below is a brief introduction to the main RAID levels used in various organizations.

RAID 0

This level stripes the data equally across all available drives, giving very high read and write
performance but offering no fault tolerance or redundancy. It does not provide any of the
protective RAID features, so it should not be considered by an organization looking for
redundancy; it is preferred where high performance is required. (A sample mdadm command
is shown after the pros and cons below.)

Calculation:
No. of disks: 5
Size of each disk: 100GB
Usable disk size: 500GB

Pros:
Data is striped across multiple drives
Disk space is fully utilized
Minimum 2 drives required
High performance

Cons:
No support for data redundancy
No support for fault tolerance
No error detection mechanism
Failure of either disk results in complete data loss in the respective array
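As a quick illustration, a RAID 0 array could be built with mdadm roughly as follows (a sketch; /dev/sdd1 and /dev/sdd2 are placeholder partitions of type fd, as in the LAB above):

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdd1 /dev/sdd2
# mke2fs -j /dev/md0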

RAID 1

This level mirrors the data on drive 1 to drive 2. It offers 100% redundancy, as the array will
continue to work even if either disk fails. Organizations looking for better redundancy can
opt for this solution, but again cost can become a factor. (A sample mdadm command is
shown after the pros and cons below.)

Calculation:
No. of disks: 2
Size of each disk: 100GB
Usable disk size: 100GB

Pros:
Performs mirroring of data, i.e. identical data from one drive is written to another drive for redundancy
High read speed, as either disk can be used if one disk is busy
Array will function even if any one of the drives fails
Minimum 2 drives required

Cons:
Expense is higher (1 extra drive required per drive for mirroring)
Slow write performance, as all drives have to be updated
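A RAID 1 mirror is created the same way, just with a different level (a sketch; /dev/sdd3 and /dev/sdd4 are placeholder partitions):

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd3 /dev/sdd4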


RAID 2

This level uses bit-level data striping rather than block-level. To use RAID 2, make sure the
disks selected have no self error-checking mechanism, as this level uses an external Hamming
code for error detection. This is one of the reasons RAID 2 no longer exists in the real IT
world, as most disks used these days come with built-in error detection. It uses an extra disk
for storing all the parity information.

Calculation:
Formula: n-1, where n is the no. of disks
No. of disks: 3
Size of each disk: 100GB
Usable disk size: 200GB

Pros:
BIT-level striping with parity
One designated drive is used to store parity
Uses Hamming code for error detection

Cons:
It is used with drives with no built-in error detection mechanism
These days all SCSI drives have error detection
Additional drives required for error detection

RAID 3

This level uses byte-level striping along with parity. One dedicated drive is used to store the
parity information, and in case of any drive failure the data is rebuilt using this extra drive.
But if the parity drive itself crashes, the redundancy is lost, so this level is not much used in
organizations.

Calculation:
Formula: n-1, where n is the no. of disks
No. of disks: 3
Size of each disk: 100GB
Usable disk size: 200GB

Pros:
BYTE-level striping with parity
One designated drive is used to store parity
Data is regenerated using the parity drive
Data is accessed in parallel
High data transfer rates (for large files)
Minimum 3 drives required

Cons:
Additional drive required for parity
No redundancy if the parity drive crashes
Slow performance when operating on small files

RAID 4

This level is very similar to RAID 3, except that RAID 4 uses block-level striping rather than
byte-level.

Calculation:
Formula: n-1, where n is the no. of disks
No. of disks: 3
Size of each disk: 100GB
Usable disk size: 200GB

Pros:
BLOCK-level striping with dedicated parity
One designated drive is used to store parity
Data is accessed independently
High read performance, since data is accessed independently
Minimum 3 drives required

Cons:
Since only 1 block is accessed at a time, performance degrades
Additional drive required for parity
Write operations become slow, as the parity has to be updated every time

RAID 5

It uses block-level striping, and with this level the distributed-parity concept came into the
picture, leaving behind the traditional dedicated parity used in RAID 3 and RAID 4. Parity
information is written to a different disk in the array for each stripe. In case of a single disk
failure, data can be recovered with the help of the distributed parity without interrupting
operation or the other read/write activity.

Calculation:
Formula: n-1, where n is the no. of disks
No. of disks: 4
Size of each disk: 100GB
Usable disk size: 300GB

Pros:
Block-level striping with DISTRIBUTED parity
Parity is distributed across the disks in the array
High performance
Cost effective
Minimum 3 drives required

Cons:
In case of disk failure, recovery may take a long time, as parity has to be calculated from all the available drives
Cannot survive concurrent drive failures

RAID 6

This level is an enhanced version of RAID 5 with the added benefit of dual parity. It uses
block-level striping with DUAL distributed parity, so you get extra redundancy. Imagine you
are using RAID 5 and one of your disks fails: you need to hurry to replace the failed disk,
because if another disk fails at the same time you won't be able to recover any data. For
those situations RAID 6 plays its part, letting you survive 2 concurrent disk failures before
you run out of options. (A sample mdadm command is shown after the pros and cons below.)

Calculation:
Formula: n-2, where n is the no. of disks
No. of disks: 4
Size of each disk: 100GB
Usable disk size: 200GB

Pros:
Block-level striping with DUAL distributed parity
2 parity blocks are created
Can survive 2 concurrent drive failures in an array
Extra fault tolerance and redundancy
Minimum 4 drives required

Cons:
Cost can become a factor
Writing data takes longer due to the dual parity
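With mdadm a RAID 6 array is created much like the RAID 5 array in the LAB above, only with dual parity and at least four member devices (a sketch; the partition names are placeholders):

# mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdd5 /dev/sdd6 /dev/sdd7 /dev/sdd8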

RAID 0+1

This level uses RAID 0 and RAID 1 together to provide redundancy. Striping of data is
performed before mirroring. At this level the overall usable capacity is reduced compared to
non-mirrored RAID levels. You can sustain more than one drive failure as long as the failures
are not in the same mirrored set.

NOTE: The number of drives used should always be a multiple of 2.

Calculation:
Formula: n/2 * size of disk (where n is the no. of disks)
No. of disks: 8
Size of each disk: 100GB
Usable disk size: 400GB

Pros:
No parity generation
Performs RAID 0 to stripe data and RAID 1 to mirror
Striping is performed before mirroring
Usable capacity is n/2 * size of disk (n = no. of disks)
Drives required should be a multiple of 2
High performance, as data is striped

Cons:
Costly, as an extra drive is required for each drive
100% disk capacity is not utilized, as half is used for mirroring
Very limited scalability

RAID 1+0 (RAID 10)

This level performs mirroring of data prior to striping, which makes it more efficient and
redundant than RAID 0+1. This level can survive multiple simultaneous drive failures, as
long as both drives of a mirrored pair do not fail. It can be used in organizations where both
high performance and data safety are required. In terms of fault tolerance and rebuild
performance it is better than RAID 0+1. (A sample mdadm command is shown after the pros
and cons below.)

NOTE: The number of drives used should always be a multiple of 2.

Calculation:
Formula: n/2 * size of disk (where n is the no. of disks)
No. of disks: 8
Size of each disk: 100GB
Usable disk size: 400GB

Pros:
No parity generation
Performs RAID 1 to mirror and RAID 0 to stripe data
Mirroring is performed before striping
Drives required should be a multiple of 2
Usable capacity is n/2 * size of disk (n = no. of disks)
Better fault tolerance than RAID 0+1
Better redundancy and faster rebuild than RAID 0+1
Can sustain multiple drive failures
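mdadm supports RAID 10 as a native level, so the nested mirror-plus-stripe layout can be created in one command (a sketch; the partition names are placeholders):

# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sde1 /dev/sde2 /dev/sde3 /dev/sde4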
