
A Paper Presentation on

STORAGE AREA NETWORKS

PRESENTED BY: G V SAI CHAND, IV B.TECH, and LAKSHMI BHAVANI, IV B.TECH
EMAIL: saichand1633@gmail.com
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CHEBROLU ENGINEERING COLLEGE

CHEBROLU - 522002

Abstract:
Storage Area Networks (SANs) have the virtues of high scalability, high availability, and high performance. On the other hand, their storage virtualization systems are often not compatible across multiple operating systems, and it is hard for a single virtualization storage management system to manage multiple types of storage. This paper proposes a new virtualization storage management model for SANs.

1. Introduction
Storage Area Networks (SANs) use a network-oriented storage structure, which enables the separation of data processing and data storage. SANs offer high availability and scalability, high I/O performance, and data sharing. SANs also provide backup, remote mirroring, and virtualization functions, which have made them increasingly popular. A storage virtualization management system can manage various storage systems while still providing one uniform interface for users. Vendors such as XIOtech [3], IBM [4], and EMC [5] all have their own virtualization management systems, which adds extra complexity and difficulty.

What is a storage area network?
The Storage Networking Industry Association (SNIA) defines the storage area network (SAN) as a network whose primary purpose is the transfer of data between computer systems and storage elements. A SAN consists of a communication infrastructure, which provides physical connections. It also includes a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust.
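The virtualization management problem raised above amounts to putting one uniform interface in front of heterogeneous, vendor-specific storage systems. As a rough illustration of that general idea only (all class and method names below are hypothetical, taken neither from this paper nor from any vendor API), a minimal sketch might look like:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Hypothetical vendor-neutral interface; each vendor's system
    (e.g. XIOtech, IBM, EMC) would be wrapped by one subclass."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

    @abstractmethod
    def delete_volume(self, volume_id: str) -> None: ...

class XiotechBackend(StorageBackend):
    """Placeholder adapter: real code would translate these calls
    into the vendor's own management commands."""
    def create_volume(self, name, size_gb):
        return f"xio-{name}-{size_gb}gb"

    def delete_volume(self, volume_id):
        pass

class VirtualizationManager:
    """One uniform interface for users, dispatching to any backend."""
    def __init__(self, backends):
        self.backends = backends  # dict: backend name -> StorageBackend

    def create_volume(self, backend, name, size_gb):
        return self.backends[backend].create_volume(name, size_gb)

manager = VirtualizationManager({"xiotech": XiotechBackend()})
print(manager.create_volume("xiotech", "db-data", 100))
```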

In simple terms, a SAN is a specialized, high-speed network that attaches servers and storage devices. For this reason, it is sometimes referred to as "the network behind the servers." A SAN allows an any-to-any connection across the network, using interconnect elements such as switches and directors.

Using a SAN can potentially offer the following benefits:
- Improved application availability: storage is independent of applications and accessible through multiple data paths for better reliability, availability, and serviceability.
- Higher application performance: storage processing is offloaded from servers and moved onto a separate network.
- Centralized and consolidated storage: simpler management, scalability, flexibility, and availability.
- Data transfer and vaulting to remote sites: remote copies of data enable disaster protection and guard against malicious attacks.
- Simplified centralized management: a single image of the storage media simplifies management.

Storage area network storage
The storage area network (SAN) liberates the storage device so that it is not on a particular server bus, and attaches it directly to the network. In other words, storage is externalized and can be functionally distributed across the organization. The SAN also enables the centralization of storage devices and the clustering of servers.

Different technologies
Multiple technologies can be used when building a SAN; traditionally the dominant technology has been Fibre Channel, but IP-based solutions are also quite popular for specific applications. The concept of a SAN is also independent of the devices attached to it: these can be disks, tapes, RAIDs, file servers, or other storage.

Disk systems
In brief, a disk system is a device in which a number of physical storage disks sit side by side. By being contained within a single box, a disk system usually has a central control unit that manages all the I/O, simplifying the integration of the system with other devices, such as other disk systems or servers. Depending on the intelligence with which this central control unit manages the individual disks, a disk system can be a JBOD or a RAID.

Just A Bunch Of Disks (JBOD)
In this case, the disk system appears as a set of individual storage devices to the device it is attached to. The central control unit provides only basic functionality for writing data to and reading data from the disks.

Redundant Array of Independent Disks (RAID)
In this case, the central control unit provides additional functionality that makes it possible to use the individual disks in such a way as to achieve higher fault tolerance and/or performance. The disks themselves appear as a single storage unit to the devices to which they are connected. Depending on the specific functionality offered by a particular disk system, it may be possible to make it behave as a RAID and/or a JBOD; the decision as to which type of disk system is more suitable for a SAN implementation depends strongly on the performance and availability requirements of that particular SAN.

A group of hard disks is called a disk array; RAID combines a disk array into a single virtual device called a RAID drive, providing fault tolerance for shared data and applications. Its key characteristics are storage capacity, speed (fast read and/or fast write), and resilience in the face of device failure. The common RAID levels are:
- RAID 0: striping with no parity
- RAID 1: mirroring across two or more disks
- RAID 0+1 (or 1+0): striping combined with mirroring
- RAID 3: synchronous, subdivided block access with a dedicated parity drive
- RAID 5: like RAID 4, but with parity striped across multiple drives
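To make the parity idea behind RAID 3/4/5 concrete, here is a minimal sketch using XOR parity over equal-sized blocks. It is an illustration under simplified assumptions (in-memory byte strings stand in for disk blocks), not a controller implementation:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together; this is how RAID
    parity is computed (RAID 3, 4, and 5 all use XOR parity)."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# One stripe of data blocks across three data disks plus one parity block.
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_blocks)

# If one disk fails, its block is recovered by XOR-ing the surviving
# data blocks with the parity block.
lost = data_blocks[1]
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == lost
```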

Tape systems
Tape systems, in much the same way as disk systems, are devices that comprise all the necessary apparatus to manage the use of tapes for storage purposes. In this case, however, the serial nature of a tape makes it impossible for tapes to be treated in parallel as RAID devices are, leading to a somewhat simpler architecture to manage and use. There are basically three types of tape systems: drives, autoloaders, and libraries, described as follows.

Tape drives
As with disk drives, tape drives are the means by which tapes can be connected to other devices; they provide the physical and logical structure for reading from, and writing to, tapes.

Tape autoloaders
Tape autoloaders are autonomous tape drives capable of managing tapes and performing automatic backup operations. They are usually connected to high-throughput devices that require constant data backup.

Tape libraries
Tape libraries are devices capable of managing multiple tapes simultaneously and, as such, can be viewed as a set of independent tape drives or autoloaders. They are usually deployed in systems that require massive storage capacity, or that need some kind of data separation that would otherwise result in multiple single-tape systems. As tape is not a random-access medium, tape libraries cannot provide parallel access to multiple tapes as a way to improve performance, but they can provide redundancy as a way to improve data availability and fault tolerance.

Architectures: Direct Attached Storage (DAS)
Direct attached storage is the simplest and most commonly used storage model, found in most standalone PCs, workstations, and servers. A typical DAS configuration consists of a computer that is directly connected to one or several hard disk drives (HDDs) or disk arrays. DAS is a widely deployed technology in enterprise networks. It is easy to understand, acquire, and install, and it is low cost. It is well suited to attaching data storage resources to a computer or server when capacity, administration, backup, high availability, and high performance are not key requirements. For home PCs and small enterprise networks, DAS is still the dominant choice, as the low-end requirements for growth in capacity, performance, and reliability can easily be addressed by advances in HDD and bus technologies.

Network Attached Storage (NAS)
After seeing the consequences of binding storage to individual computers in the DAS model, the benefits of sharing storage resources over the network become obvious. NAS and SAN are two ways of sharing storage over the network. NAS generally refers to storage that is directly attached to a computer network (LAN) through network file system protocols such as NFS and CIFS.

The benefit that comes with the higher-layer abstraction in NAS is ease of use. Many operating systems, such as UNIX and Linux, have embedded support for NAS protocols such as NFS, and later versions of the Windows OS support the CIFS protocol. Setting up a NAS system, then, involves connecting the NAS storage system to the enterprise LAN (e.g. Ethernet) and configuring the OS on the workstations and servers to access the NAS filer. The many benefits of shared storage can then be easily realized in a familiar LAN environment without introducing a new network infrastructure or new switching devices.
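Because a NAS share is exposed through the ordinary file system once mounted, applications need no special API to use it. A minimal sketch, assuming an NFS export already mounted at the hypothetical path /mnt/nas:

```python
from pathlib import Path

# The mount point below is a placeholder; in practice the NFS export
# would be mounted by the OS (e.g. via /etc/fstab) before this runs.
share = Path("/mnt/nas")

# Files on the NAS are read and written like any local file.
report = share / "reports" / "q3.txt"
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("quarterly numbers\n")
print(report.read_text())
```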

Storage Area Network (SAN)
A SAN provides block-oriented I/O between computer systems and target disk systems. The SAN may use Fibre Channel or Ethernet (iSCSI) to provide connectivity between hosts and storage. In either case, the storage is physically decoupled from the hosts. The SAN is often built on a dedicated network fabric, separated from the LAN, to ensure that latency-sensitive block I/O SAN traffic does not interfere with traffic on the LAN. A typical example is a dedicated SAN network connecting multiple application servers, database servers, and NAS filers on one side, and a number of disk systems and tape systems on the other; the servers and the storage devices are connected together by the SAN as peers.
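Block-oriented I/O means the host reads and writes fixed-size blocks at arbitrary offsets on the device, rather than named files. A minimal sketch of what that looks like from a host (the device path is a placeholder for a SAN-attached LUN; reading raw devices typically requires administrator privileges, so treat this as illustrative):

```python
import os

BLOCK_SIZE = 512        # classic disk sector size
DEVICE = "/dev/sdb"     # placeholder path for a SAN-attached LUN

# Open the raw block device and read one block at a given block number.
fd = os.open(DEVICE, os.O_RDONLY)
try:
    block_number = 1024
    os.lseek(fd, block_number * BLOCK_SIZE, os.SEEK_SET)
    data = os.read(fd, BLOCK_SIZE)  # one block, no file system involved
finally:
    os.close(fd)
```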

Storage Area Network Components
As previously discussed, the primary technology used in storage area networks today is Fibre Channel. This section provides a basic overview of the components in a Fibre Channel storage fabric, as well as the different topologies and configurations open to Windows deployments.

Fibre Channel Topologies
Fundamentally, Fibre Channel defines three configurations: point-to-point, Fibre Channel Arbitrated Loop (FC-AL), and switched Fibre Channel fabrics (FC-SW). Although the term "fibre channel" implies some form of fibre optic technology, the Fibre Channel specification allows for both fibre optic interconnects and copper coaxial cables.

Point-to-Point
Point-to-point Fibre Channel is a simple way to connect two (and only two) devices directly together, as shown in Figure 1. It is the Fibre Channel equivalent of direct attached storage (DAS).

[Figure 1: Point-to-point connection between a host and storage]

From a cluster and storage infrastructure perspective, point-to-point is not a scalable enterprise configuration, and we will not consider it again in this document.

Arbitrated Loops
A Fibre Channel arbitrated loop is exactly what it says: a set of hosts and devices connected into a single loop, as shown in Figure 2. It is a cost-effective way to connect up to 126 devices and hosts into a single network.

[Figure 2: Fibre Channel arbitrated loop connecting hosts A and B with devices C, D, and E]

Devices on the loop share the media; each device is connected in series to the next device in the loop, and so on around the loop. Any packet traveling from one device to another must pass through all intermediate devices. In the example shown, for host A to communicate with device D, all traffic between the devices must flow through the adapters on host B and device C. The devices in the loop do not need to look at the packet; they simply pass it through. This is all done at the physical layer by the Fibre Channel interface card itself; it does not require processing on the host or the device. This is very analogous to the way a token-ring topology operates.
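To illustrate the pass-through behavior just described, here is a minimal simulation of frame forwarding on an arbitrated loop; it models only the path a frame takes, not arbitration or the Fibre Channel protocol itself:

```python
def loop_transfer(nodes, src, dst):
    """Simulate frame forwarding on an arbitrated loop: the frame
    travels node-to-node around the ring until it reaches dst,
    passing through every intermediate adapter."""
    path = []
    i = nodes.index(src)
    while nodes[i] != dst:
        i = (i + 1) % len(nodes)  # next node in the loop
        path.append(nodes[i])
    return path

# The loop from Figure 2: two hosts and three devices in series.
loop = ["Host A", "Host B", "Device C", "Device D", "Device E"]

# Host A talking to Device D: the frame passes through B and C first.
print(loop_transfer(loop, "Host A", "Device D"))
# ['Host B', 'Device C', 'Device D']
```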


Fibre Channel Switched Fabric
In a switched Fibre Channel fabric, devices are connected in a many-to-many topology using Fibre Channel switches, as shown in Figure 4. When a host or device communicates with another host or device, the source and target set up a point-to-point connection (much like a virtual circuit) between them and communicate directly with each other; the fabric itself routes data from the source to the target. In a switched fabric the media is not shared: any device can communicate with any other device (assuming it is not busy), and communication occurs at full bus speed (1 Gbit/s or 2 Gbit/s today, depending on the technology) irrespective of other devices and hosts communicating.

[Figure 4: Fibre Channel switched fabric; hosts A through D connect through switches in the Fibre Channel fabric to devices E through I]

Conclusion
A SAN is a data-centric network. It is deployed over networks and devices for data management and enterprise growth where data security is of main concern. The storage networking industry has built a working area and will come out with specifications and standard recommendations.

References
[1] B. Phillips, "Have storage area networks come of age?", IEEE Computer, vol. 31, no. 7, pp. 10-12, July 1998.
[2] R. Khattar et al., Introduction to Storage Area Network, Redbooks Publications (IBM), 1999.
[3] XIOtech Corp., http://www.xiotech.com/, May 2004.
[4] IBM Corp., http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg245470.pdf, March 2003.
[5] EMC Corp., http://www.emc.com/products/storage_management/controlcenter/pdf/H1140_cntrlctr_srm_plan_ds_ldv.pdf, May 2004.

Referenced websites:
1. IBM System Storage: Storage area networks
   http://www-03.ibm.com/servers/storage/san/
2. Cisco
   http://www.cisco.com
3. Brocade
   http://www.brocade.com
4. QLogic
   http://www.qlogic.com