EMC Symmetrix DMX-4 enables you to manage and protect all of your data—
more than 1 petabyte of storage—and keep it available at all times.
Symmetrix DMX-4 provides customized Flash drives that break the
performance barriers of traditional disk technology because they are
optimized to meet high-end storage requirements. DMX-4 also delivers built-in
RSA security technology to keep your critical data safe, as well as high
availability to ensure constant data access. Best of all, the DMX-4 is energy
efficient and easy to manage.
Feature: Tiered storage with quality of service (QoS)
Benefit: Optimize service levels and reduce costs with Flash, Fibre Channel,
and Serial Advanced Technology Attachment (SATA) drives.
PART B
Q.4. Choose two RAID products, one offered by EMC and the other by IBM
for similar requirements, and compare and contrast the features
supported.
Ans :
FEATURES: IBM XIV vs. EMC CLARiiON and Symmetrix

Performance
  IBM XIV: All volumes are spread over all disks. Thus, all disks handle
  equal workloads, eliminating disk hot spots.
  EMC CLARiiON and Symmetrix: The system is susceptible to disk hot spots,
  which can significantly degrade performance. Frequent manual tuning is
  often required.

Rebuild time
  IBM XIV: A revolutionary protection scheme ensures a 30-minute rebuild
  time (or less) for 1 TB disks, minimizing the likelihood of a double
  disk failure.
  EMC CLARiiON and Symmetrix: Rebuild time can take hours, increasing the
  possibility of additional disk failures that can lead to data loss.

Flexible-duration warranties
  IBM XIV: YES (1, 2, 3, or 4 years, customer's choice; covers hardware
  and licensed software problems)
  EMC CLARiiON and Symmetrix: NO (DMX and V-Max carry a 3-year standard
  hardware warranty; most licensed software has only a 90-day standard
  warranty covering media defects, not "bugs")

Asynchronous remote copy design that can reduce data loss
  IBM XIV: YES (sends data quickly, reducing data loss to as little as
  3-5 seconds)
  EMC CLARiiON and Symmetrix: NO (delays sending data, leading to more
  data loss)
Granted, the I/O path I was working with was not a traditional hard disk. It
was a LUN presented from a SAN with a large amount of cache, and to
simplify to some extent, the LUN was a RAID 0 stripe set across 12 virtualized
drives with a rather large stripe unit size (960K). But how should I explain
why 8K random I/Os could outperform 8K sequential I/Os?
Random I/Os were able to effectively hash across the multiple drives that
make up the RAID 0 device.

The relatively large RAID 0 stripe unit size of 960K caused 8K sequential
I/Os to cluster around the same drives. Note that it would take 120
sequential 8K I/Os to fill a single 960K stripe.

A base amount of cache was assigned to each drive in the RAID 0 set, so
when random I/Os were hashed across all 12 drives, they benefited from a
larger total amount of cache.
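The clustering effect above can be sketched with a toy RAID 0 mapping. The 12 drives, 960K stripe unit, and 8K I/O size come from the text; the workload length and the random-offset range are assumed for illustration:

```python
import random
from collections import Counter
from itertools import groupby

STRIPE_UNIT = 960 * 1024   # 960K stripe unit, as in the text
IO_SIZE = 8 * 1024         # 8K I/Os
NUM_DRIVES = 12            # RAID 0 across 12 virtualized drives

def drive_for_offset(offset):
    # RAID 0 maps successive stripe units round-robin across the drives.
    return (offset // STRIPE_UNIT) % NUM_DRIVES

# Sequential workload: 1,200 back-to-back 8K I/Os.
seq = [drive_for_offset(i * IO_SIZE) for i in range(1200)]

# Random workload: 1,200 8K I/Os at random 8K-aligned offsets.
rng = random.Random(1)
rand = [drive_for_offset(rng.randrange(1_000_000) * IO_SIZE)
        for _ in range(1200)]

def longest_run(drives):
    # Longest streak of consecutive I/Os landing on the same drive.
    return max(len(list(g)) for _, g in groupby(drives))

print(longest_run(seq))    # 120: sequential I/Os pile onto one drive at a time
print(longest_run(rand))   # random I/Os move to a new drive almost immediately
print(len(Counter(rand)))  # number of drives the random workload touches
```

The sequential workload sends 120 consecutive I/Os (960K / 8K) to the same drive before moving on, while the random workload keeps all 12 drives, and their per-drive cache, busy.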
Below you will find a table that summarizes the key quantitative attributes
of the various RAID levels for easy comparison. For the full details on any
RAID level, see its dedicated page. For a description of the different
characteristics, see the discussion of factors differentiating RAID levels.
Also be sure to read the notes that follow the table:
RAID Level | Number of Disks | Capacity      | Storage Efficiency | Fault Tolerance | Cost
0          | 2,3,4,...       | S*N           | 100%               | none            | $
1          | 2               | S*N/2         | 50%                | -               | $$
2          | many            | varies, large | ~70-80%            | -               | $$$$$
03/30      | 6,8,9,10,...    | S*N0*(N3-1)   | (N3-1)/N3          | -               | $$$$
05/50      | 6,8,9,10,...    | S*N0*(N5-1)   | (N5-1)/N5          | -               | $$$$
15/51      | 6,8,10,...      | S*((N/2)-1)   | ((N/2)-1)/N        | -               | $$$$$

(The availability and random/sequential read and write performance columns
of the original table are graphical star ratings; "-" marks rating cells
that do not survive in text form.)
For the number of disks, the first few valid sizes are shown; you can
figure out the rest from the examples given in most cases. Minimum
size is the first number shown; maximum size is normally dictated by
the controller. RAID 01/10 and RAID 15/51 must have an even number
of drives, minimum 6. RAID 03/30 and 05/50 can only have sizes that
are the product of the widths of their two dimensions, minimum 6.
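As a sketch of that size rule for RAID 03/30, assuming the RAID 0 dimension stripes across at least 2 sub-arrays and each RAID 3 sub-array needs at least 3 disks:

```python
# Enumerate the valid RAID 03/30 array sizes up to a limit, assuming
# at least 2 RAID 3 sub-arrays (n0) of at least 3 disks each (n3).
def valid_raid30_sizes(limit):
    sizes = set()
    for n0 in range(2, limit // 3 + 1):       # RAID 0 width
        for n3 in range(3, limit // n0 + 1):  # RAID 3 width
            sizes.add(n0 * n3)                # total disks = n0 * n3
    return sorted(sizes)

print(valid_raid30_sizes(16))  # [6, 8, 9, 10, 12, 14, 15, 16]
```

This reproduces the "6, 8, 9, 10, ..." pattern from the table; sizes such as 7 or 11 are impossible because they cannot be factored into two such dimensions.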
For capacity and storage efficiency, "S" is the size of the
smallest drive in the array, and "N" is the number of drives in the
array. For RAID 03 and 30, "N0" is the width of the RAID 0
dimension of the array, and "N3" is the width of the RAID 3 dimension.
So a 12-disk RAID 30 array made by creating three 4-disk RAID 3
arrays and then striping them would have N3=4 and N0=3. The same
applies for "N5" in the RAID 05/50 row.
Storage efficiency assumes all drives are of identical size. If this is not
the case, the universal computation (array capacity divided by the sum
of all drive sizes) must be used.
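As a quick check of these formulas, the 12-disk RAID 30 example above can be worked through numerically (the 1,000 GB drive size is an assumed figure):

```python
# Capacity and efficiency formulas from the table, applied to the
# 12-disk RAID 30 example: three 4-disk RAID 3 arrays, striped.
S = 1000           # smallest drive in the array, in GB (assumed value)
N0, N3 = 3, 4      # RAID 0 width and RAID 3 width

capacity = S * N0 * (N3 - 1)    # S*N0*(N3-1)
efficiency = (N3 - 1) / N3      # (N3-1)/N3

# The universal computation (capacity / sum of all drive sizes)
# agrees with the formula when all drives are the same size.
universal = capacity / (S * N0 * N3)

print(capacity)                # 9000 GB usable out of 12,000 GB raw
print(efficiency, universal)   # 0.75 0.75
```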
Performance rankings are approximations and, to some extent, reflect
my personal opinions. Please don't over-emphasize a "half-star"
difference between two scores!
Cost is relative and approximate, of course. In the real world it will
depend on many factors; the dollar signs are just intended to provide
some perspective.
Q.6. Based on the average number of IOPS, compare the performance
offered by RAID levels 1+0, 0+1, 3, and 5.
Ans : Every disk in your storage system has a maximum theoretical IOPS
value that is based on a formula. Disk performance — and IOPS — is based
on three key factors:
Rotational speed (aka spindle speed). Measured in revolutions
per minute (RPM), most disks you’ll consider for enterprise storage
rotate at speeds of 7,200, 10,000 or 15,000 RPM with the latter two
being the most common. A higher rotational speed is associated with
a higher performing disk. This value is not used directly in
calculations, but it is highly important: the other two values depend
heavily on the rotational speed, so I've included it for completeness.
Average latency. The time it takes for the sector of the disk being
accessed to rotate into position under a read/write head.
Average seek time. The time (in ms) it takes for the hard drive’s
read/write head to position itself over the track being read or written.
There are both read and write seek times; take the average of the
two values.
To calculate a drive's average IOPS, divide 1 by the sum of the average
latency and the average seek time, both expressed in seconds: average
IOPS = 1 / (average latency in s + average seek time in s), which is
equivalent to 1000 / (average latency in ms + average seek time in ms).
Sample drive:
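As a sketch of the formula above, using assumed (not measured) figures for a hypothetical 15,000 RPM drive:

```python
# Hypothetical sample drive; both figures below are assumed values.
avg_latency_ms = 2.0   # half a rotation at 15,000 RPM = 2 ms
avg_seek_ms = 3.8      # average of the read and write seek times

# Average IOPS = 1 / (latency + seek time), with the times in seconds.
iops = 1 / ((avg_latency_ms + avg_seek_ms) / 1000)
print(round(iops))  # 172
```

Note that this is the drive's raw back-end IOPS; the RAID write penalties below determine how much of it reaches the application.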
In RAID level 3
1 Read operation = 1 IOPS
1 Write operation = 4 IOPS
In RAID level 5
1 Read operation = 1 IOPS
1 Write operation = 4 IOPS
To summarize the read and write RAID penalties for the most common RAID
levels: RAID 0 (read 1, write 1), RAID 1 and RAID 10 (read 1, write 2),
RAID 5 (read 1, write 4), and RAID 6 (read 1, write 6).
A good starting point formula is below. This formula does not use the array
IOPS value; it uses a workload IOPS value that you would derive on your own
or by using some kind of calculation tool, such as the Exchange Server
calculator:

Back-end disk IOPS = (workload IOPS x read %) +
                     (workload IOPS x write % x RAID write penalty)
The RAID 5 write penalty in a 4+1 RAID group is 4 while the RAID 10 write
penalty is 2.
Before you even put this in a spreadsheet, you know what it will tell you:
In a 100% read-only environment, RAID 5 and RAID 10 will give the same
performance. RAID 5 may use fewer disks to do it, but not necessarily.

In a 100% write-only environment, RAID 5 will require twice as many disk
IOPS and almost twice the number of disks.

Anywhere in between those two extremes, the more writes the workload
contains, the fewer disks RAID 10 will need, relative to RAID 5, to
achieve the same performance.
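A minimal sketch of that trade-off, assuming a hypothetical 1,000 IOPS workload and 180 IOPS per disk (both made-up figures); the write penalties are the ones given above:

```python
import math

DISK_IOPS = 180                        # assumed per-disk IOPS
PENALTY = {"RAID 5": 4, "RAID 10": 2}  # write penalties from the text

def disks_needed(workload_iops, write_fraction, raid):
    # Reads pass through unchanged; writes are multiplied by the penalty.
    reads = workload_iops * (1 - write_fraction)
    writes = workload_iops * write_fraction
    backend = reads + writes * PENALTY[raid]
    return backend, math.ceil(backend / DISK_IOPS)

for wf in (0.0, 0.5, 1.0):             # read-only, mixed, write-only
    for raid in ("RAID 5", "RAID 10"):
        backend, disks = disks_needed(1000, wf, raid)
        print(f"{raid:7} writes={wf:.0%}: back-end IOPS={backend:6.0f}, disks={disks}")
```

With these assumed numbers, the read-only case needs 6 disks either way, the 50/50 case needs 14 disks for RAID 5 versus 9 for RAID 10, and the write-only case needs 23 versus 12, matching the pattern described above.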
If we stopped there, it would seem there is no point in using RAID 5, since
even in the best-case scenario there is only a partial chance that it will
use fewer disks. That is where the cost and space-effectiveness issues come
in.