
RAID Concepts

RAID Levels

RAID Level 0 : ----- Disk Striping -----

Improved I/O performance is the major reason for using RAID level 0.
No protection is provided against data loss due to member disk failures. A
RAID level 0 array by itself is thus an unsuitable storage medium for data
that cannot easily be reproduced, or for data that must be available for
critical system operation. It is more suitable for data that can be reproduced
or is replicated on other media.

A RAID level 0 array can be particularly suitable for:

• Storing program image libraries or runtime libraries for rapid loading; these
libraries are normally read only.
• Storing large tables or other structures of read-only data for rapid
application access. Like program images, these data can be backed up on
highly reliable media, from which they can be recreated in the event of a failure.
• Collecting data from external sources at very high data transfer rates.

A RAID level 0 array is not particularly suitable for:

• Applications which make sequential requests for small amounts of data.
These applications will spend most of their I/O time waiting for the disk to spin,
whether or not they use striped arrays as storage media.
• Applications which make synchronous random requests for small amounts
of data.
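
As an illustration of how a striped set is actually created under Linux software RAID, here is a minimal sketch using the mdadm tool that is covered in detail later in this document. The member partitions /dev/sdb1 and /dev/sdc1 are placeholders; substitute two existing partitions of roughly equal size on your own system.

[root@station1 ~]# mdadm --create --verbose /dev/md1 --level=0 \
> --raid-devices=2 /dev/sdb1 /dev/sdc1
[root@station1 ~]# mkfs.ext3 /dev/md1

The resulting /dev/md1 device can then be mounted like any other filesystem; remember that losing either member destroys the whole array.
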
RAID Level 1 : ----- Disk Mirroring -----
Before You Start
Specially built hardware-based RAID disk controllers are available for both IDE and
SCSI drives. They usually have their own BIOS, so you can configure them right after
your system's power-on self test (POST). Hardware-based RAID is transparent to
your operating system; the hardware does all the work.

If hardware RAID isn't available, then you should be aware of these basic guidelines to
follow when setting up software RAID.

IDE Drives
To save costs, many small business systems will probably use IDE disks, but they do
have some limitations.

• The total length of an IDE cable can be only a few feet, which generally
limits IDE drives to small home systems.
• IDE drives do not hot swap. You cannot replace them while your system is
running.
• Only two devices can be attached per controller.
• The performance of the IDE bus can be degraded by the presence of a second
device on the cable.
• The failure of one drive on an IDE bus often causes the malfunctioning of the
second device. This can be fatal if you have two IDE drives of the same RAID set
attached to the same cable.

For these reasons, I recommend you use only one IDE drive per controller when using
RAID, especially in a corporate environment. In a home or SOHO setting, IDE-based
software RAID may be adequate.

Serial ATA Drives


Serial ATA type drives are rapidly replacing IDE, or Ultra ATA, drives as the preferred
entry level disk storage option because of a number of advantages:

• The drive data cable can be up to 1 meter long, versus IDE's 18 inches.
• Serial ATA has better error checking than IDE.
• There is only one drive per cable which makes hot swapping, or the capability to
replace components while the system is still running, possible without the fear of
affecting other devices on the data cable.
• There are no jumpers to set on Serial ATA drives to make them a master or slave,
which makes them simpler to configure.
• IDE drives have a 133 Mbytes/s data rate, whereas the Serial ATA specification
starts at 150 Mbytes/s with a goal of reaching 600 Mbytes/s over the expected
ten-year life of the specification.

If you can't afford more expensive and faster SCSI drives, Serial ATA would be the
preferred device for software and hardware RAID.

SCSI Drives
SCSI hard disks have a number of features that make them more attractive for RAID use
than either IDE or Serial ATA drives.

• SCSI controllers are more tolerant of disk failures. The failure of a single drive is
less likely to disrupt the remaining drives on the bus.
• SCSI cables can be up to 25 meters long, making them suitable for data center
applications.
• More than two devices can be connected to a SCSI bus: it can
accommodate 7 (single-ended SCSI) or 15 (all other SCSI types) devices.
• Some models of SCSI devices support "hot swapping" which allows you to
replace them while the system is running.
• SCSI currently supports data rates of up to 640 Mbytes/s, making these drives highly
desirable for installations where rapid data access is imperative.

SCSI drives tend to be more expensive than IDE drives, however, which may make them
less attractive for home use.

Should I Use Software RAID Partitions Or Entire Disks?
It is generally not a good idea to mix RAID-configured partitions with non-RAID
partitions on the same disk. The reason for this is obvious: a single disk failure could still incapacitate the system.

If you decide to use RAID, all the partitions on each RAID disk should be part of a RAID
set. Many people simplify this problem by filling each disk of a RAID set with only one
partition.

Step By Step Configuration of RAID 5

How to Configure Software RAID


Here I am giving the steps to configure software RAID level 5.
Configuring RAID under Linux requires a number of steps that need to be
followed carefully. In this tutorial example, you'll be configuring RAID 5 using
three partitions of roughly equal size. For simplicity the demonstration places
them on a single hard disk; in production, each member of a RAID 5 set should
sit on a separate physical disk so that a single disk failure cannot take out the
whole array.

Be sure to adapt the various stages outlined below to your particular environment.

RAID Partitioning
You first need to create three or more partitions. If you are doing RAID 0 or RAID
5, the partitions should be of approximately the same size, as in this scenario.

Determining Available Partitions

[root@station1 ~]# fdisk -l

Disk /dev/sda: 16.1 GB, 16106127360 bytes


255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1288 10241437+ 83 Linux
/dev/sda3 1289 1353 522112+ 82 Linux swap / Solaris
/dev/sda4 1354 1958 4859662+ 5 Extended

[root@station1 ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 1958.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 16.1 GB, 16106127360 bytes


255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1288 10241437+ 83 Linux
/dev/sda3 1289 1353 522112+ 82 Linux swap / Solaris
/dev/sda4 1354 1958 4859662+ 5 Extended

Command (m for help): m


Command action
...
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
.
.
w write table to disk and exit
.
...

Command (m for help): n


First cylinder (1354-1958, default 1354):
Using default value 1354
Last cylinder or +size or +sizeM or +sizeK (1354-1958, default 1958): +512MB

Command (m for help): n


First cylinder (1417-1958, default 1417):
Using default value 1417
Last cylinder or +size or +sizeM or +sizeK (1417-1958, default 1958): +512MB

Command (m for help): n


First cylinder (1480-1958, default 1480):
Using default value 1480
Last cylinder or +size or +sizeM or +sizeK (1480-1958, default 1958): +512MB

Command (m for help): n


First cylinder (1543-1958, default 1543):
Using default value 1543
Last cylinder or +size or +sizeM or +sizeK (1543-1958, default 1958): +512MB

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@station1 ~]# partprobe

Now you have to change each partition in the RAID set to type fd (Linux
raid autodetect), and you can do this with fdisk.

[root@station1 ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 1958.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 16.1 GB, 16106127360 bytes


255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1288 10241437+ 83 Linux
/dev/sda3 1289 1353 522112+ 82 Linux swap / Solaris
/dev/sda4 1354 1958 4859662+ 5 Extended
/dev/sda5 1354 1416 506016 83 Linux
/dev/sda6 1417 1479 506016 83 Linux
/dev/sda7 1480 1542 506016 83 Linux
/dev/sda8 1543 1605 506016 83 Linux

Command (m for help):

Set The ID Type


Partition /dev/sda5 is the fifth partition on disk /dev/sda. Modify its type using the
t command, specifying the partition number and type code. You can also use
the L command to get a full listing of ID types in case you forget. In this case
RAID uses type fd; it may be different for your version of Linux.

Command (m for help): t


Partition number (1-8): 5
Hex code (type L to list codes): L

...
...
...
16 Hidden FAT16 61 SpeedStor f2 DOS secondary
17 Hidden HPFS/NTF 63 GNU HURD or Sys fd Linux raid auto
18 AST SmartSleep 64 Novell Netware fe LANstep
1b Hidden Win95 FA 65 Novell Netware ff BBT

Hex code (type L to list codes): fd

Command (m for help): t


Partition number (1-8): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): t


Partition number (1-8): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t


Partition number (1-8): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)

Now Make Sure The Change Occurred

Command (m for help): p

Disk /dev/sda: 16.1 GB, 16106127360 bytes


255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1288 10241437+ 83 Linux
/dev/sda3 1289 1353 522112+ 82 Linux swap / Solaris
/dev/sda4 1354 1958 4859662+ 5 Extended
/dev/sda5 1354 1416 506016 fd Linux raid autodetect
/dev/sda6 1417 1479 506016 fd Linux raid autodetect
/dev/sda7 1480 1542 506016 fd Linux raid autodetect
/dev/sda8 1543 1605 506016 fd Linux raid autodetect

Command (m for help):

Save The Changes


Use the w command to permanently save the changes to disk /dev/sda:

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@station1 ~]# partprobe

Preparing the RAID Set


Now that the partitions have been prepared, we have to combine them into a new
RAID device, which we'll then format and mount. Here's how it's done.

Create the RAID Set


You use the mdadm command with the --create option to create the RAID set. In
this example we use the --level option to specify RAID 5, and the --raid-devices
option to define the number of partitions to use.

[root@station1 ~]# mdadm --create --verbose /dev/md0 --level=5 \


> --raid-devices=3 /dev/sda5 /dev/sda6 /dev/sda7

mdadm: layout defaults to left-symmetric


mdadm: chunk size defaults to 64K
mdadm: /dev/sda5 appears to contain an ext2fs file system
size=1959896K mtime=Tue Jul 28 14:58:17 2009
mdadm: size set to 505920K
Continue creating array? y
mdadm: array /dev/md0 started.

Confirm RAID Is Correctly Initialized


The /proc/mdstat file provides the current status of all RAID devices. Confirm that
the initialization is finished by inspecting the file and making sure that there are
no initialization related messages. If there are, then wait until there are none.

[root@station1 ~]# cat /proc/mdstat


Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda7[2] sda6[1] sda5[0]
1011840 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

[root@station1 ~]#

Notice that the new RAID device is called /dev/md0. This information will be
required for the next step.
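
If you created a larger array, the resynchronization can take a while and /proc/mdstat
will show a progress indicator until it completes. Two convenient ways to keep an eye
on it, shown here only as a sketch, are:

[root@station1 ~]# watch -n 5 cat /proc/mdstat
[root@station1 ~]# mdadm --detail /dev/md0

Both commands are standard, although the exact output format varies with your kernel
and mdadm versions.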

Format The New RAID Set


Your new RAID partition now has to be formatted. The mkfs.ext3 command is
used to do this.

[root@station1 ~]# mkfs.ext3 /dev/md0


mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
126720 inodes, 252960 blocks
12648 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=260046848
8 block groups
32768 blocks per group, 32768 fragments per group
15840 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@station1 ~]#
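
As an optional refinement, mke2fs can be told about the RAID chunk size so that
filesystem metadata is spread more evenly across the members. This is only a sketch:
with the 64K chunk and 4K block size shown above, the stride would be 64K / 4K = 16.
Check the mke2fs man page, because support for extended options differs between
e2fsprogs versions.

[root@station1 ~]# mkfs.ext3 -E stride=16 /dev/md0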

Create the mdadm.conf Configuration File


Your system doesn't automatically remember all the component partitions of your
RAID set. This information has to be kept in the mdadm.conf file. The formatting
can be tricky, but fortunately the output of the mdadm --detail --scan --verbose
command provides you with it. Here we see the output sent to the screen.

[root@station1 ~]# mdadm --detail --scan --verbose


ARRAY /dev/md0 level=raid5 num-devices=3
UUID=93c8a50d:218e894f:8539640a:88bc50ad
devices=/dev/sda5,/dev/sda6,/dev/sda7

Here we redirect the screen output to create the configuration file.

[root@station1 ~]# mdadm --detail --scan --verbose > /etc/mdadm.conf
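
It is a good idea to open the resulting file and confirm that the ARRAY line was
written correctly:

[root@station1 ~]# cat /etc/mdadm.conf

Some distributions also place a DEVICE line (for example, DEVICE partitions) in
mdadm.conf to tell mdadm which devices to scan; whether you need one depends on your
distribution, so treat it as an optional extra rather than part of this procedure.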

Create A Mount Point For The RAID Set


The next step is to create a mount point for /dev/md0. In this case we'll create
one called /mnt/raid

[root@station1 ~]# mkdir /mnt/raid

Edit The /etc/fstab File


The /etc/fstab file lists all the partitions that need to be mounted when the system
boots. Add an entry for the RAID set, the /dev/md0 device:

/dev/md0    /mnt/raid    ext3    defaults    1 2

Do not use labels in the /etc/fstab file for RAID devices; just use the real device
name, such as /dev/md0. In older Linux versions, the /etc/rc.d/rc.sysinit script
would check the /etc/fstab file for device entries that matched RAID set names
listed in the now unused /etc/raidtab configuration file. The script would not
automatically start the RAID set driver for the RAID set if it didn't find a match.
Device mounting would then occur later on in the boot process. Mounting a RAID
device that doesn't have a loaded driver can corrupt your data and produce errors.

[root@station1 ~]# mount /dev/md0 /mnt/raid
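
The explicit device name was used above; because the /etc/fstab entry is now in place,
either of the shorter forms below could have been used instead (only one is needed,
since the filesystem is already mounted at this point):

[root@station1 ~]# mount /mnt/raid
[root@station1 ~]# mount -a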


Check The Status Of The New RAID
The /proc/mdstat file provides the current status of all the devices.

[root@station1 ~]# cat /proc/mdstat
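
The output should resemble the earlier listing: an active raid5 array with all three
members present and the status flags showing [UUU]. As a final sanity check you can
also confirm that the mounted filesystem has the expected size; with three 505920K
members and one member's worth of capacity used for parity, roughly 1 GB of usable
space should be reported.

[root@station1 ~]# df -h /mnt/raid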
