Disk to LUN I/O performance on SAN and NAS
Oracle Tips by Burleson Consulting
Many Oracle professionals note that network-attached storage (NAS) and storage
area networks (SANs) can result in slower I/O throughput.
These disk mapping concepts are discussed in the book "Oracle Disk I/O Tuning", but
let's review the Device Media Control Language (DMCL) layer between the logical
devices (the logical unit, or LUN) and the physical disk drives, I/O buffers, SCSI
controllers, and device drivers. Mike Ault notes in his book:
"It has also been reported that some hardware RAIDs support a number
of different LUNs (logical disks), but these LUNs share a common set of
I/O buffers between them.
This can cause SCSI QFULL conditions on those devices that do not
have commands queued."
Steve Karam notes this relationship between ASM and LUN mapping:
One thing to remember is that ASM is not RAID. Oracle portrays ASM
as a Volume Manager, filesystem, miracle, whatever you would like to
call it, but in reality it is no more than extent management and load
balancing; it scatters extents across your LUNs (1MB stripe size for
datafiles/archivelogs, 128k stripe size for redo/controlfiles). It also
provides extent-based mirroring for extra redundancy.
This benefits us in several ways. First, remember that your OS, HBA, or
other parts of the host driver stack may have limits per LUN on I/O.
Distributing your extents across multiple LUNs with ASM will provide
better I/O concurrency by load balancing across them, eliminating this
bottleneck.
Second, carving into multiple LUNs allows multiple ASM volumes.
Multiple volumes help us if our hardware has any LUN-based migration
utilities for snapshots or cloning.
Third, you may end up with multiple LUNs if you need to add capacity.
ASM allows us to resize a diskgroup on-the-fly AND rebalance our
extents evenly at the same time when we add a new LUN. Even if you
only start with a single LUN, you may end up with more in the long run.
Fourth, because an ASM diskgroup is not true RAID, you are able to use
it to stripe across volumes. This means that in a SAN with 3 trays, you
can carve a LUN from each tray and use it to form a single ASM
diskgroup. This further distributes your storage and reduces throughput
bottlenecks.
I have not seen any tried-and-true formula for the number of LUNs per
ASM diskgroup, but you can calculate it based on your throughput per
unit of capacity. Make sure the LUNs provide maximum and equivalent I/O
operations per second per gigabyte.
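
Karam's points about striping across LUNs and rebalancing on-the-fly map directly
onto ASM's SQL interface. The sketch below is illustrative only: the diskgroup name
and the /dev/rdsk device paths are hypothetical placeholders for whatever LUN names
your SAN actually presents.

   -- Stripe one diskgroup across three LUNs, one carved from each SAN tray.
   -- External redundancy defers mirroring to the array.
   CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/rdsk/c1t0d0s4',    -- hypothetical LUN from tray 1
           '/dev/rdsk/c2t0d0s4',    -- hypothetical LUN from tray 2
           '/dev/rdsk/c3t0d0s4';    -- hypothetical LUN from tray 3

   -- Later, add capacity with a fourth LUN; ASM redistributes extents
   -- across all four LUNs while the database stays online.
   ALTER DISKGROUP data ADD DISK '/dev/rdsk/c4t0d0s4'
      REBALANCE POWER 4;    -- higher power = faster but more intrusive rebalance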

LUN mapping for Oracle disks
The central issue is the interface between the logical "LUN" and the mapping of the
disk device drivers and disk controllers to the LUNs:

[Diagram: physical disks, device drivers, and controllers mapped to LUNs]

The disk array is usually sliced up by a controller into what are known as LUNs
(logical units), and these LUNs are then used to create the logical volumes upon which
we place our virtual disks. There is usually at least one layer of abstraction (disks to
LUNs) and often two (disks to LUNs to logical volumes). These layers of abstraction
make pinpointing disk performance issues difficult and may mask symptoms.
Sometimes there is a third level of abstraction, RAID, where multiple logical disks are
sliced into relatively thin stripes (8 KB to 1 MB) and then combined to form RAID
arrays.
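
If ASM sits on top of these layers, you can at least see which operating-system device
path (and therefore which LUN) each ASM disk maps to before the lower abstraction
layers take over. A minimal sketch using the standard v$asm_disk and v$asm_diskgroup
views:

   select
      dg.name     diskgroup_name,
      d.name      asm_disk_name,
      d.path      device_path,    -- the LUN device as presented to the OS
      d.total_mb  total_mb
   from
      v$asm_disk      d,
      v$asm_diskgroup dg
   where
      d.group_number = dg.group_number
   order by
      dg.name, d.name;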



As you can see, tracking a disk performance problem from a database file to the
underlying RAID volume, to the logical disk, to the LUN, and finally to the physical
disk can be daunting. Further complications arise from the automated tuning features of
the more advanced arrays, where stripes or LUNs may be migrated from "hot" physical
disks to cooler ones. Luckily, many RAID manufacturers now provide graphical
interfaces that track I/O and other performance metrics down to the physical disk level.
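
When chasing such a problem, it also pays to start at the top of the stack, with Oracle's
own per-datafile I/O statistics, before descending through the LUN and disk layers. A
simple starting point is a query like the sketch below (readtim is in centiseconds and is
populated only when timed_statistics is enabled):

   select
      df.name                                        file_name,
      fs.phyrds                                      physical_reads,
      fs.phywrts                                     physical_writes,
      round(fs.readtim / greatest(fs.phyrds, 1), 2)  avg_read_cs
   from
      v$filestat fs,
      v$datafile df
   where
      fs.file# = df.file#
   order by
      avg_read_cs desc;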
Data transfer rates for disks
The data transfer rates for individual disks vary widely depending on the storage
device. Note: your disk I/O mileage will vary greatly as a function of on-board RAM
caching, your data buffer size, db_file_multiblock_read_count, parallel access arrays,
etc. See the details below:

   Storage device          Avg I/O per second (IOPS)   Transfer rate (MB/sec)
   TMS Solid State Disk    400,000                     1,600
   IEEE 1394               1,100                       400-800
   PC Solid State Disk     7,000                       62
   Typical PC disk         150                         15-30
The interface also has an effect on I/O throughput speed:

   Interface type         Speed
   Serial                 115 Kb/s
   Parallel (standard)    115 Kb/s
   Parallel (ECP/EPP)     3.0 Mb/s
   SCSI                   5-320 MB/sec
   ATA                    3.3-133 MB/sec
   USB 1.1                1.5 Mb/s
   USB 2.x                60 MB/s
   IEEE 1394(b)           50-400 Mb/s
   Original IDE           3.3-33 MB/sec
   SATA                   150 MB/s
   Fibre Channel          2 Gb/sec
Detecting bottlenecks in SAN and NAS
The location of a bottleneck depends upon many factors, including LUNs that share
common I/O buffers and LUNs mapped to a device with too few controllers.
Solaris LUN bottlenecks
The sd_max_throttle variable sets the maximum number of commands that the SCSI sd
driver will attempt to queue to a single HBA driver. The default value is 256. This
variable must be set to a value less than or equal to the maximum queue depth of each
LUN connected to each instance of the sd driver. If this is not done, commands may be
rejected because of a queue-full condition, and the sd driver instance that receives the
queue-full message will throttle sd_max_throttle down to 1.
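
On Solaris, the throttle is set in /etc/system and takes effect at the next reboot. The
value below is only an example; the correct ceiling comes from your array vendor's
documented per-LUN queue depth, not from a formula.

   * /etc/system -- cap the number of commands the sd driver queues per LUN
   * (example value only; size it to your array's supported queue depth)
   set sd:sd_max_throttle = 32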



If you like Oracle tuning, see the book "Oracle Tuning: The Definitive
Reference", with 950 pages of tuning tips and scripts.
You can buy it direct from the publisher for 30% off and get instant
access to the code depot of Oracle tuning scripts.