
A NAS gateway provides the function of a conventional NAS appliance but without integrated disk storage.

The disk storage is attached externally to the gateway and may also be sold as a standalone offering for direct or SAN attachment. The gateway accepts a file-I/O request and translates it into a SCSI block-I/O request to access the externally attached disk storage.
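To make the file-to-block translation concrete, here is a minimal Python sketch of what a gateway conceptually does, assuming a flat allocation table that maps each file to a list of logical block addresses (LBAs). The BlockDevice and NasGateway classes and the 4 KB block size are illustrative assumptions, not any vendor's implementation.

```python
# Conceptual sketch only: a NAS gateway resolving a file-level read
# (e.g. an NFS READ) into block reads against SAN-attached storage.
# All class and attribute names here are illustrative, not a real API.

BLOCK_SIZE = 4096  # assumed filesystem block size in bytes


class BlockDevice:
    """Stands in for a LUN reached over the SAN."""

    def __init__(self, blocks):
        self._blocks = blocks  # dict: logical block address -> bytes

    def read_block(self, lba):
        # In a real gateway this would be a SCSI READ issued to the array.
        return self._blocks.get(lba, b"\x00" * BLOCK_SIZE)


class NasGateway:
    """Translates (file, offset, length) requests into block I/O."""

    def __init__(self, device, allocation_table):
        self.device = device
        # allocation_table: file name -> ordered list of block numbers
        self.allocation_table = allocation_table

    def read_file(self, name, offset, length):
        blocks = self.allocation_table[name]
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        data = bytearray()
        for index in range(first, last + 1):
            data += self.device.read_block(blocks[index])
        start = offset % BLOCK_SIZE
        return bytes(data[start:start + length])


# Usage: a two-block file whose data lives at LBAs 10 and 11 on the LUN.
lun = BlockDevice({10: b"A" * BLOCK_SIZE, 11: b"B" * BLOCK_SIZE})
gw = NasGateway(lun, {"/exports/report.txt": [10, 11]})
print(gw.read_file("/exports/report.txt", 4090, 12))  # b'AAAAAABBBBBB'
```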

Potential advantages:
- Increased choice of disk types.
- Increased capability (such as a large read/write cache or remote copy functions).
- Increased disk capacity scalability (compared to the capacity limits of an integrated NAS appliance).
- Ability to preserve and enhance the value of selected installed disk systems by adding file sharing.
- Ability to offer file sharing and block I/O on the same disk system.
A gateway can be viewed as a NAS/SAN hybrid, increasing flexibility and potentially lowering costs (versus capacity that might go underutilized if it were permanently dedicated to a NAS appliance or to a SAN).

In addition, NAS gateways offer increased flexibility by delivering greater performance, increased scalability and the ability to mix and match multiple tiers of storage (Fibre Channel and ATA) as well as different classes of storage arrays. Additionally, because NAS gateways separate the NAS head from the storage, they help lower administrative costs, avoid unnecessary hardware purchases and offer high-end NAS services at the price of most midtier appliances. NAS gateways are fast becoming the de facto standard deployment model for the data center.

NAS gateways connect to the existing SAN and provide multiprotocol file services to clients connected to the IP network. NAS gateways leverage SAN storage arrays for their capacity. SAN management tools are used to provision and manage storage resources. NAS gateways can leverage multiple storage arrays for increased performance.

What's the difference: Fibre Channel Fabric vs. Arbitrated Loop



By Rick Cook

Fibre Channel for SANs comes mostly in two flavors: Fibre Channel Arbitrated Loop (FC-AL) and Fibre Channel fabric. The two differ considerably in their topology and capabilities. (A third topology, point-to-point, isn't as commonly used for SANs.)

FC-AL is the most common and least expensive form of Fibre Channel. It links up to 127 ports in a network sharing the media (either Cat 5 unshielded twisted-pair copper or optical fiber). When a device has data to put on the channel, it requests the use of the media by sending an arbitration signal. If more than one device attempts to use the channel at the same time, the system uses the arbitration signal to decide which device gets use of the channel. The device with control of the loop then sends an 'open' signal to the destination device and starts sending data. The connection is essentially point-to-point, with all the devices between the source and destination on the loop simply repeating the data to pass it on.

Although the network's topology is usually a circle, the devices may also be connected through hubs for reliability and ease of management. A hub or concentrator makes cabling easier and can detect and bypass a bad device or segment of broken fiber so it won't bring down the whole network.

Fibre Channel fabric is much simpler than FC-AL, but also more expensive. It relies on one or more central switches to establish direct, point-to-point connections between devices.
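As a rough illustration of the arbitration step described above, here is a toy Python model. It assumes the common FC-AL convention that the port with the lowest AL_PA (arbitrated loop physical address) wins arbitration; the function names and the simplified OPEN/CLOSE handling are invented for illustration and omit the real FC-AL state machine.

```python
# Toy model of FC-AL arbitration (illustrative only, not the full FC-AL
# protocol). Ports that want the loop arbitrate; by convention the port
# with the lowest AL_PA has the highest priority and wins. The winner then
# OPENs the destination and transfers frames while the other ports repeat.

def arbitrate(requesters):
    """Return the AL_PA that wins arbitration among `requesters`."""
    # Lower AL_PA value = higher arbitration priority.
    return min(requesters)


def transfer(loop_ports, source, destination, frames):
    """Simulate one tenancy of the loop: OPEN, send frames, CLOSE."""
    log = [f"OPN {source:02x} -> {destination:02x}"]
    for frame in frames:
        # Ports between source and destination act as repeaters;
        # only the destination consumes the frame.
        repeaters = [p for p in loop_ports if p not in (source, destination)]
        log.append(f"frame {frame!r} repeated by {len(repeaters)} ports")
    log.append("CLS")
    return log


# Three ports request the loop at the same time; 0x01 wins and talks to 0xE8.
ports = [0x01, 0x23, 0x45, 0xE8]
winner = arbitrate([0x23, 0x01, 0x45])
for line in transfer(ports, winner, 0xE8, ["read-cmd", "data"]):
    print(line)
```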
Arbitrated loop topology
Fibre Channel Arbitrated Loop (FC-AL) is a ring topology that enables you to interconnect a set of nodes. The maximum number of ports that you can have on an FC-AL is 127. The storage unit supports FC-AL as a private loop. It does not support the fabric-switching functions in FC-AL.

The storage unit supports up to 127 hosts or devices on a loop. However, the loop goes through a loop initialization process (LIP) whenever you add or remove a host or device from the loop. LIP disrupts any I/O operations currently in progress. For this reason, you must have only a single host and a single storage unit on any loop. Note: The storage unit does not support FC-AL topology on adapters that are configured for FICON protocol. Figure 1 shows an illustration of an arbitrated loop topology configuration that includes two host systems and two storage units. Figure 1. Arbitrated loop topology example

Legend 1 is the host system. 2 is the storage unit.

Point-to-point topology
The point-to-point topology, also known as direct connect, enables you to interconnect ports directly. Figure 2 shows an illustration of a point-to-point topology configuration that includes one host system and one storage unit. Figure 2. Point-to-point topology example

Legend 1 is the host system.

2 is the storage unit.

The storage unit supports direct point-to-point topology at the following maximum distances:
- 1 Gb shortwave adapters: 500 meters (1640 ft)
- 2 Gb shortwave adapters: 300 meters (984 ft)
- 4 Gb shortwave adapters: 300 meters (984 ft)
- 8 Gb shortwave adapters: 150 meters (492 ft)
- 2 Gb longwave adapters: 10 kilometers (6.2 mi)
- 4 Gb longwave adapters: 4 kilometers (2.5 mi) or 10 kilometers (6.2 mi), depending on the hardware configuration
- 8 Gb longwave adapters: 4 kilometers (2.5 mi) or 10 kilometers (6.2 mi), depending on the hardware configuration

The maximum distances also vary depending on the cable type. There are three basic types of fibre optic cable. The orange colored cables are shortwave, multimode OM2 cables. The aqua colored multimode cables are OM3 and are laser optimized. The yellow colored longwave cables are single-mode fibre. The connection speed in gigabits per second determines the distance that is allowed.

Table 1. Connection speed and distance by cable type

Speed     OM2 distance       OM3 distance
1 Gbps    500 m (1640 ft)    500 m (1640 ft)
2 Gbps    300 m (984 ft)     500 m (1640 ft)
4 Gbps    150 m (492 ft)     270 m (886 ft)
8 Gbps    50 m (164 ft)      150 m (492 ft)
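For planning purposes, the shortwave entries in Table 1 can be folded into a small lookup. The Python sketch below simply restates the table's values; the function name is illustrative.

```python
# Table 1 (shortwave multimode cable) as a lookup: (cable type, Gbps) -> metres.
MAX_DISTANCE_M = {
    ("OM2", 1): 500, ("OM3", 1): 500,
    ("OM2", 2): 300, ("OM3", 2): 500,
    ("OM2", 4): 150, ("OM3", 4): 270,
    ("OM2", 8): 50,  ("OM3", 8): 150,
}


def max_run_length(cable_type, speed_gbps):
    """Return the supported cable run in metres, per Table 1."""
    try:
        return MAX_DISTANCE_M[(cable_type.upper(), speed_gbps)]
    except KeyError:
        raise ValueError(f"no entry for {cable_type} at {speed_gbps} Gbps")


print(max_run_length("om3", 8))  # 150
print(max_run_length("OM2", 4))  # 150
```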

The maximum distance for longwave cables also varies depending on the speed and the type of optical transceiver. Most small form-factor pluggables (SFPs) can operate at 10 km but must be selected for that distance and be consistent with the connection speed. For example, 4 Gb longwave adapters can have a maximum of 4 km (2.5 miles) or 10 km (6.2 miles), depending on the hardware configuration.

Switched-fabric topology
The switched-fabric topology provides the underlying structure that enables you to interconnect multiple nodes. The distance can be extended by thousands of miles using routers and other storage area network components. The storage unit supports increased connectivity with the use of Fibre Channel (SCSI-FCP and FICON) directors. Specific details on status, availability, and configuration options that are supported by the storage unit are available at IBM System Storage DS8000 series. The storage unit supports the switched-fabric topology with point-to-point protocol. You must configure the storage unit Fibre Channel adapter to operate in point-to-point mode when you connect it to a fabric topology. Figure 3 shows an illustration of a switched-fabric topology configuration that includes two host systems, two storage units, and three switches. Figure 3. Switched-fabric topology example

Legend 1 is the host system. 2 is the storage unit. 3 is a switch.
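The practical difference between a shared loop and a switched fabric is concurrency: an arbitrated loop carries one conversation at a time, while a switch can carry several disjoint point-to-point conversations in parallel. The toy Python comparison below illustrates only that idea; the scheduling functions are invented for illustration, not a model of any real switch.

```python
# Toy comparison of concurrency: an arbitrated loop serialises tenancies,
# while a switched fabric can carry several conversations at once.

def loop_schedule(requests):
    """On a shared loop, conversations run one after another."""
    return [[pair] for pair in requests]          # one tenancy per time slot


def fabric_schedule(requests):
    """A switch lets disjoint port pairs talk in the same time slot."""
    slots = []
    for src, dst in requests:
        for slot in slots:
            if all(src not in pair and dst not in pair for pair in slot):
                slot.append((src, dst))
                break
        else:
            slots.append([(src, dst)])
    return slots


requests = [("host1", "array1"), ("host2", "array2"), ("host1", "array2")]
print(len(loop_schedule(requests)))    # 3 time slots on a loop
print(len(fabric_schedule(requests)))  # 2 time slots through a switch
```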



Hu's Blog: Monolithic versus modular storage is not an either/or question


by Hu Yoshida on August 3, 2010

Those of you who subscribe to Gartner reports may have seen their recent report, "Choosing Between Monolithic Versus Modular Storage: Robustness, Scalability and Price Are the Tiebreakers." While I agree with some of their definitions of monolithic and modular storage, it is no longer a question of one versus the other. With the Hitachi USP V/VM we combine the best of both worlds by providing a monolithic, or enterprise tier 1, front end with lower cost modular back-end storage.

I agree with their description of monolithic storage as having many controllers that share direct access to a large, high-performance global cache, supporting a large number of host connections, including mainframes, and providing redundancy to ensure high availability and reliability. I also agree with their definition of modular storage, which contains two variants: a dual-controller architecture with separate cache memory, and a scale-out architecture that can have many nodes with a separate cache in each node. I also agree that modular storage makes it easier to expand capacity by adding modules of storage trays, and that its acquisition costs are lower due to its simpler design (no global cache).

The differences between monolithic storage and modular storage

The key difference between monolithic storage and modular storage is the cache architecture. A dynamic global cache enables the tight coupling, or pooling, of all the storage resources in a monolithic storage system. As we add incremental resources like front-end port processors, cache modules, back-end array processors, disk modules, and program products like Hitachi Universal Replication, they are tightly coupled through the global cache so that they create a common pool of storage resources, which can be dynamically configured to scale up or to scale out to meet different host server requirements.

Separate caches, in controllers or in nodes, create silos of storage resources. A host server volume can only access the storage resources that are in the controller or node it is attached to. The host server may access another volume in another controller or node, but it cannot have one volume extend across multiple controllers or nodes. Since this is not a common pool of storage resources, it leads to fragmentation and under-utilization of resources within the controllers or nodes: one node may be running at 90% utilization while other nodes are idling at 10% or 20%.

While most analysts, like Gartner, will acknowledge that dual-controller systems, with limited amounts of cache and compute capacity, cannot match monolithic systems in performance and throughput, they assume that multinode scale-out architectures hold the promise of helping modular systems to asymptotically approach monolithic storage system levels of throughput. I disagree, since the throughput and performance that you get from a multinode scale-out architecture is limited to a distribution of the workload across multiple nodes. Unless the distribution is perfectly balanced across the nodes, you have the fragmentation that I mentioned earlier. Even if the cumulative total of cache and compute capacity is the same as what is in a monolithic storage system, it is not tightly coupled into a common pool of resources, and it cannot match the monolithic system's performance and throughput.

The Hitachi AMS 2000 family of modular storage is a dual-controller storage system with separate caches. However, there is additional intelligence in the architecture that enables load balancing of LUN ownership between the two controller caches to ensure that one controller is not overworked while the other controller is idling. There are some single-threaded workloads where modular storage can outperform monolithic storage, but in multithreaded workloads the monolithic storage will have higher performance and throughput due to its larger cache, multiple compute processors, and load balancing across storage port processors.

So while there are important differences between monolithic and modular storage, the best way to use them is in a tiered configuration. Since 60% to 80% of storage does not need tier 1 performance, it does not need to be on tier 1 storage. However, all your storage needs tier 1 protection and availability. You can achieve that by virtualizing modular storage as tier 2 or 3 storage behind a tier 1 monolithic storage front end. The modular dual-controller or multi-node scale-out storage systems now sit behind a global cache and become part of a pool of common resources that can be dynamically allocated based on business requirements. The advantages of modular storage around cost and ease of expansion are coupled with the advantages of monolithic (enterprise) tier 1 functionality and performance, with common management, protection, and search.

USP V/VM: the best of both worlds

One of the disadvantages cited for monolithic storage is the higher cost. That is only true in smaller configurations, and only if all the storage capacity resides in the monolithic system. If most or even all of the storage capacity resides on external modular storage that is virtualized behind a USP V/VM, the cost of the combination will be even lower, since all the storage is now efficiently managed as a common pool of storage resources, saving operational as well as capital costs. Since the USP V/VM provides Dynamic Provisioning, it can save the time and costs of provisioning external modular storage, thin provision and reclaim unused capacity, and wide-stripe the modular storage for higher performance. The data mobility provided by the USP V/VM will increase availability by non-disruptively moving the data off of the modular storage during scheduled downtime or for technology refresh migrations, further reducing operational costs over standalone modular storage.

Host servers are going through a massive consolidation with the availability of multi-core processors and virtual server platforms like VMware and Hyper-V. These virtual server platforms are driving 10 to 20 times the I/O workload of non-virtual servers, and virtual server clusters are driving as much as 100 times this load through one file system. This type of workload requires a monolithic storage system that can scale up through a tightly coupled global cache on the front end while the majority of the storage capacity resides on lower cost modular storage that is virtualized behind it.

So I do agree with Gartner for the most part on the differences between monolithic and modular storage, but I do not think it has to be an either/or decision as to which storage you choose. I believe the best choice is a combination of modular storage that is virtualized behind monolithic storage, as we do with the USP V/VM. This way you can have the best of modular storage combined with the best of monolithic storage, at the lowest total cost. Where do you fall on this issue?
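The fragmentation argument above can be illustrated with a back-of-the-envelope comparison. The Python sketch below uses made-up numbers (four nodes of 100 units each versus one 400-unit pool) purely to show how a skewed workload strands capacity in siloed nodes while a pooled global cache of the same total size can absorb it; it is not a model of any particular product.

```python
# Minimal illustration of the "silo versus pool" argument (made-up numbers).
# Four nodes, each with 100 units of cache/compute, versus one pooled 400.

node_capacity = [100, 100, 100, 100]
node_demand = [180, 15, 10, 20]        # one hot node, three nearly idle

# Siloed modular nodes: demand beyond a node's own capacity queues or is
# turned away, no matter how idle the neighbouring nodes are.
served_siloed = sum(min(d, c) for d, c in zip(node_demand, node_capacity))
utilisation = [min(d, c) / c for d, c in zip(node_demand, node_capacity)]

# Tightly coupled global cache: the same total capacity serves the workload
# as one pool, so the hot volume can borrow what the idle nodes are not using.
served_pooled = min(sum(node_demand), sum(node_capacity))

print(f"per-node utilisation (silos): {utilisation}")                   # [1.0, 0.15, 0.1, 0.2]
print(f"demand served (silos): {served_siloed} of {sum(node_demand)}")  # 145 of 225
print(f"demand served (pool):  {served_pooled} of {sum(node_demand)}")  # 225 of 225
```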


Comments (6)

The 3 big lessons of tech blogs that succeed | Hyten Content

on 06 Aug 2010 at 8:40 am

[...] their knowledge, which is the same as building trust. Don't be skimpy. Look at the length of this excellent blog post by Hu Yoshida at [...]

Vinod Subramaniam on 06 Aug 2010 at 12:03 pm


Here are my thoughts, at the risk of sounding like a megalomaniac. Gartner is asking the wrong question, or at least barking up the wrong tree, which is why there is no right answer to this question.

There are two major focus areas for optimization in a storage array. One is the capacity itself, and the field of capacity management and estimation does a good job of optimizing this area. The other focus area is the infrastructure surrounding the capacity, which consists of front-end CPUs, back-end CPUs, and cache. This area has very little focus and is mostly estimated using vendor-supplied rules of thumb that are heavily outdated. The old 80-20 rule applies here: 80% of storage arrays run at 20% average CPU utilization, particularly on the back-end CPUs.

So the way to go for a customer is to ask the question: is the workload I/O hungry or capacity hungry? Once there is a clear answer to that question, what is needed from storage vendors, whether modular, enterprise or grid, is a way of partitioning the array and assigning workloads to partitions that have differing IOPS capability. Vendors do offer a capability for partitioning, e.g. the USP V, but there are limitations. The back end on the USP V is shared and cannot be partitioned. All cache algorithms, including destaging algorithms, are global.

The customer should be given the capability of partitioning an array and, in addition to that, carving out virtual CPUs, or vCPUs, that he can assign to partitions. For example, a 20 TB SATA disk partition that is used to hold videos probably needs only 4 front-end vCPUs and maybe 2 back-end vCPUs. At the other extreme, a partition running SAP BW would need maybe 16 front-end vCPUs and 32 back-end vCPUs. Also, workloads shift with time and are subject to seasonal highs and lows based on business cycles. Customers should have the capability to steal vCPUs from one partition and assign them to other partitions.

So, back to the 80/20 rule again and the scale-out and scale-up argument. Vendors need to add two dimensions to scaling: capacity and CPU. If you are scaling simply because the array is maxed out on the number of drives, maybe you need to implement some form of storage virtualization and tiering, particularly controller-based virtualization. If you are scaling because the CPUs are maxed out, then you need to scale up. 80% of customers scale or replace arrays because they are maxed out on capacity; 20% of customers scale or replace arrays because they are maxed out on CPUs.

Vinod

Chris M Evans on 09 Aug 2010 at 4:10 am


Hu, I think I agree with most of your comments, and there's no denying I've always been a fan of Hitachi's UVM product. However, I do find that the technology benefits of multi-tiering are countered by the different approach to management that's required. For instance, I believe many people still think UVM virtualisation is a 1:1 LUN pass-through technology, i.e. that a 10GB LUN on the modular storage has to be presented as a 10GB LUN to the USP V host. Of course that isn't true; as you know, a modular array can be configured with large LUNs that are then carved up by the USP V as if they were RAID groups. If perception about UVM is less than optimal, the same is probably true of understanding the best approach to its implementation: should the customer stripe across the modular devices? Should external LUNs be presented from many storage ports? How do I create a layout I can expand in the future? Ultimately, of course, I believe the benefits of UVM outweigh the additional management overhead for most customers. But it's not a simple technical discussion. Regards, Chris

Hu Yoshida on 12 Aug 2010 at 6:27 am


Chris, you are right; nothing is ever as simple as it seems.

The Storage Architect Blog Archive Choosing Between Monolithic and Modular Architectures Part I on 24 Aug 2010 at 11:21 am
[...] sometimes called Enterprise storage array. Hu Yoshida discusses the subject on one of his recent blog posts. Looking at the wide range of storage devices, I've categorised arrays into the following [...]

Brian on 21 Sep 2010 at 12:56 pm


Hu, in general I agree with your comments, and there are some interesting metrics out there that both back you up and (to some degree) refute your thesis. These can be found, by and large, on the Storage Performance Council's (SPC) website.

If you look at the SPC-1 benchmark, where heavier and heavier workloads are applied until no more IOPS can be obtained from a given storage array, the dual-controller modular arrays from essentially all vendors hit some knee on their performance curve, and suddenly latency and lag skyrocket as you attempt to add workload. This seems to happen around 60%-80% of max performance. If you look at the HDS USP and its Sun and HP brethren, there is no knee; the performance line is nearly flat, and lag and latency increase very slowly until you max out with no more IOPS available from the platform, and the max is much higher than any dual-controller array. Clearly, the dual-controller arrays are bottlenecking on something in the mix, almost certainly CPU power in the controller or maxed-out controller bus systems. The very large amount of storage used for the SPC-1 benchmark pretty much limits the influence of cache for most products. So your thesis is proven and well documented in this one (admittedly fairly simple-minded) metric. I/O loads vary, but in the aggregate, this is what would be expected to happen out in the real world as well, with multiple transactional loads and other random loads eventually finding the bottleneck at peak load time.

The fly in the ointment is 3PAR, which is not really modular storage, having up to 8 controllers fronting shared storage. But your assertion is that this type of multi-node array without shared global cache will still not perform as well as the USP/USP V. In fact, 3PAR exhibits almost exactly the same type of near-flat performance scaling, with no knee on the line at all, just like the USP. No bottleneck appears as you approach max IOPS. And 3PAR has a higher IOPS max than the USP. The USP SPC-1 test was from 2008, and 3PAR's was from 2009 (I believe), but these were the latest published figures for either: the USP at ~200K IOPS, and 3PAR at ~225K IOPS. I suspect that this is the type of performance from multi-node modular arrays that Gartner was referring to in their paper (which I have NOT seen, so I don't know for sure), but it lines up nicely with the one line from Gartner that you quoted, where they assumed that multinode scale-out architectures hold the promise of helping modular systems to asymptotically approach monolithic storage system levels of throughput. Quite frankly, the 3PAR SPC-1 result makes their assertion a bit obsolete, at least the part about asymptotically approaching monolithic performance. Multi-node modular arrays without shared global cache can outperform monolithic, even at extremely heavy pseudo-random transactional loads.

Of course, real-world environments never match benchmark loads, no matter how much pseudo-randomness is embodied in the benchmark, which is EMC's pretend reason for not participating (in reality, if they could outperform the rest, they'd be there with bells on; can there be any question of that?). So I am watching the new USP/V/VS with great interest, to see how it stacks up using the industry-standard SPC benchmark (if HDS will go there). I would expect it to handily put HDS back into the lead for max IOPS, but 3PAR and others have not been sitting idle either.
