
SearchNetworking.com
Converged Enhanced Ethernet: New protocols enhance data center Ethernet
Price, performance and flexibility have made 10 Gigabit Ethernet (10 GbE) an attractive choice for the data center. While 10 GbE has made inroads, a lack of features in existing Ethernet protocols limits its further penetration. The critical issue is that Ethernet does not guarantee delivery: packets can be dropped when a switch or end node is momentarily overwhelmed by incoming traffic. The IEEE and Internet Engineering Task Force (IETF) are developing protocols that will improve network efficiency and eliminate situations in which packets are lost. Their work is considered critical to ensuring the performance of Fibre Channel over Ethernet (FCoE) and Internet SCSI (iSCSI). Work is under way to address:

Traffic prioritization
Congestion control
Improved route selection

The set of protocols designed to address these issues has been named Converged Enhanced Ethernet, or Lossless Ethernet.

Traffic prioritization and control

A major advantage of 10 GbE over competing technologies is that separate networks for storage area networks (SANs), server-to-server communication and the LAN can be replaced with a single 10 GbE network. While 10 Gb links may have sufficient bandwidth to carry all three types of data, bursts of traffic can overwhelm a switch or endpoint. SAN performance is extremely sensitive to delay; slowing down access to storage has an impact on server and application performance. Server-to-server traffic also suffers from delays, while LAN traffic is less sensitive. There must be a mechanism to give priority to critical traffic while lower-priority data waits until the link is available.

Existing Ethernet protocols do not provide the controls needed. A receiving node can send an 802.3x PAUSE command to stop the flow of packets, but PAUSE stops all packets. 802.1p was developed in the 1990s to provide a method to classify packets into one of eight priority levels, but it did not include a mechanism to pause individual levels. The IEEE is now developing 802.1Qbb Priority-based Flow Control (PFC) to provide a way to stop the flow of low-priority packets while permitting high-priority data to flow.

A bandwidth allocation mechanism is also required. 802.1Qaz Enhanced Transmission Selection (ETS) provides a way to group one or more 802.1p priorities into a priority group; all of the priority levels within a group should require the same level of service. Each priority group is then assigned a percentage of the link's bandwidth. One special priority group is never limited and can override all other allocations and consume the entire bandwidth of the link. During periods when high-priority groups are not using their allocated bandwidth, lower-priority groups are allowed to use the available bandwidth.
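To make the idea concrete, here is a minimal sketch of how an ETS-style scheduler might divide a 10 Gb link among priority groups. The group names, guaranteed shares and offered loads are illustrative assumptions, not values from the 802.1Qaz standard.

```python
# Illustrative sketch of ETS-style bandwidth allocation (not the 802.1Qaz spec
# itself). Each priority group is guaranteed a share of the link; groups that
# are idle lend their unused share to groups that still have traffic queued.
# The group names, shares and offered loads below are made-up examples.

LINK_CAPACITY_GBPS = 10.0

allocations = {"san": 0.50, "server": 0.30, "lan": 0.20}   # guaranteed shares
demand_gbps = {"san": 3.0, "server": 4.5, "lan": 2.5}      # current offered load

def ets_share(allocations, demand, capacity):
    """Grant each group its guaranteed share first, then redistribute any
    unused capacity among groups that still have unmet demand."""
    granted = {g: min(demand[g], allocations[g] * capacity) for g in allocations}
    spare = capacity - sum(granted.values())
    while spare > 1e-9:
        hungry = {g: demand[g] - granted[g] for g in granted if demand[g] > granted[g]}
        if not hungry:
            break
        share = spare / len(hungry)
        for g in hungry:
            extra = min(hungry[g], share)
            granted[g] += extra
            spare -= extra
    return granted

# SAN traffic is light at the moment, so the other groups borrow the unused
# headroom: prints {'san': 3.0, 'server': 4.5, 'lan': 2.5}
print(ets_share(allocations, demand_gbps, LINK_CAPACITY_GBPS))
```

In this toy run the SAN group is under its guaranteed 5 Gb, so server-to-server traffic is allowed to exceed its own 3 Gb share, which is exactly the borrowing behavior the text describes.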

Congestion control

802.1Qbb and 802.1Qaz by themselves don't solve the packet loss problem. They can pause low-priority traffic on a link, but they don't prevent congestion when a switch or end node is being overwhelmed by high-priority packets arriving from two or more links. There must be a way for receiving nodes to notify sending nodes to slow their rate of transmission. IEEE 802.1Qau provides such a mechanism. When a receiving node detects that it is nearing the point where it will begin discarding incoming packets, it sends a message to all nodes currently sending to it, and the senders slow their transmission rate. When the congestion clears, the node sends another message informing senders that they can resume their full rate.
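The feedback loop can be illustrated with a short sketch. This shows only the high-water/low-water idea behind congestion notification, using assumed thresholds and hypothetical Receiver and Sender objects; it is not the 802.1Qau message format or the algorithm as specified.

```python
# Illustrative sketch of the congestion-notification idea behind 802.1Qau:
# a receiving node watches its queue depth and asks senders to throttle before
# it has to drop packets, then lets them resume once the queue drains.

class Sender:
    def __init__(self, full_rate_gbps):
        self.full_rate = full_rate_gbps
        self.rate = full_rate_gbps

    def set_rate(self, rate):
        self.rate = rate

class Receiver:
    def __init__(self, queue_limit=100, high_water=0.8, low_water=0.3):
        self.queue = 0
        self.limit = queue_limit
        self.high = high_water * queue_limit   # start signalling congestion here
        self.low = low_water * queue_limit     # congestion considered cleared here
        self.senders = []                      # nodes currently sending to us
        self.congested = False

    def enqueue(self, packets):
        self.queue = min(self.queue + packets, self.limit)
        if not self.congested and self.queue >= self.high:
            self.congested = True
            for s in self.senders:             # "slow down" message to every sender
                s.set_rate(s.rate * 0.5)

    def drain(self, packets):
        self.queue = max(self.queue - packets, 0)
        if self.congested and self.queue <= self.low:
            self.congested = False
            for s in self.senders:             # congestion cleared: resume full rate
                s.set_rate(s.full_rate)

rx = Receiver()
tx = Sender(full_rate_gbps=10.0)
rx.senders.append(tx)
rx.enqueue(85)    # queue crosses the high-water mark -> tx throttled to 5.0 Gbps
rx.drain(60)      # queue falls below the low-water mark -> tx back to 10.0 Gbps
print(tx.rate)
```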
Improved route selection

Spanning tree protocol was developed early in Ethernet's history. It specifies a procedure to eliminate forwarding loops without requiring manual configuration. Switches communicate with one another to select a root node; each node then determines its least costly route to the root. If a switch is added or removed, or a link fails, the remaining switches communicate to determine a new root and new paths to it.

Spanning tree has worked well, but it has limitations. Traffic between nodes can flow through the root even when there is a more direct node-to-node route. There is no way to spread traffic over multiple equal-cost routes. Finally, the process of determining a new root and paths to it can be slow, and network traffic stops while the process takes place. The spanning tree standard has been enhanced to provide separate sets of routes per virtual LAN and within sections of the network, but it still does not necessarily select optimal routes or take advantage of multiple links.

The limited processing power and memory in early switches dictated that the calculations required to determine the root node and routes be relatively simple. The processors and memory in current switches enable more complex route selection protocols. The IETF and IEEE are working together to develop IEEE 802.1aq and Transparent Interconnection of Lots of Links (TRILL). The goal of these efforts is to use a link-state routing protocol to determine the most efficient routes through the network, react very quickly to network changes, and take advantage of multiple routes by spreading traffic across them.
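As a rough illustration of what link-state route selection adds over spanning tree, the sketch below computes shortest paths across a small, hypothetical four-switch topology and keeps every equal-cost next hop -- the property that lets protocols such as TRILL and 802.1aq spread traffic across parallel links instead of funneling it through a single root. The topology and code are illustrative only, not either protocol's actual computation.

```python
# Link-state route selection sketch: every switch knows the full topology and
# computes shortest paths itself, keeping *all* equal-cost next hops so traffic
# can be spread across them. The four-switch topology below is hypothetical.

import heapq
from collections import defaultdict

topology = {                       # {switch: {neighbor: link cost}}
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 1},
    "D": {"B": 1, "C": 1},
}

def shortest_paths(topology, source):
    """Dijkstra that records every equal-cost predecessor, so A->D can use
    both A-B-D and A-C-D."""
    dist = {source: 0}
    preds = defaultdict(set)
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                preds[nbr] = {node}
                heapq.heappush(heap, (nd, nbr))
            elif nd == dist[nbr]:
                preds[nbr].add(node)   # keep the extra equal-cost path
    return dist, preds

dist, preds = shortest_paths(topology, "A")
print(dist["D"], preds["D"])   # 2 {'B', 'C'}: two equal-cost routes to D
```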
About the author: David B. Jacobs of The Jacobs Group has more than 20 years of networking industry experience. He has managed leading-edge software development projects and consulted to Fortune 500 companies as well as software startups.

15 Apr 2009

SearchStorage.com
iSCSI SAN solutions getting a boost from shared storage, virtual servers
By the time iSCSI became a standard protocol five years ago, most large enterprises had already adopted Fibre Channel (FC) as the network technology to connect their servers and storage devices. And now that users have spent an estimated $60 billion on FC to transport commands and data across their storage area networks (SANs), they're not likely to rip out their battle-tested infrastructures.

But iSCSI SANs are finding favor among businesses of varying sizes that resisted shared storage because of Fibre Channel's cost and complexity. These iSCSI SAN solutions are especially popular for Windows-based file, database and application servers. They're also increasingly a consideration among large enterprises for departmental applications, for server workloads that aren't bandwidth-intensive and for virtual servers. The connection between shared storage and virtualized servers "has changed the game in the last year and a half," said Rick Villars, a storage analyst at IDC.

The pros and cons of iSCSI SAN solutions vs. Fibre Channel

Weighing the merits of iSCSI SAN solutions starts with the concept of shared storage. IT organizations that choose to separate storage from servers generally do so to better use and allocate resources and to manage and protect data. SCSI is the standards-based interface, or command set, that connects the servers and storage arrays and transfers data across the network. The SAN provides a central pool of networked storage resources.
LEARN MORE ABOUT iSCSI SAN SOLUTIONS

Read about why one company chose an iSCSI SAN solution
Get more information in our special report on iSCSI SAN storage
Download our free guide on iSCSI storage

The physical storage is broken up into smaller logical chunks, and each chunk is assigned a SCSI logical unit number (LUN). The SCSI LUN appears to the application server as if it were a local disk drive, even though it actually sits across the network. That leaves the major distinction between iSCSI and Fibre Channel in the transport and network technologies: an iSCSI SAN carries SCSI commands in the Internet Protocol over Ethernet, whereas in the traditional FC SAN architecture, the Fibre Channel Protocol (FCP) handles the transport of SCSI commands over a Fibre Channel network.

Users find iSCSI easier to install and manage than Fibre Channel, not to mention much cheaper to deploy. Software- or hardware-based initiators send the SCSI commands over the IP network. Software initiators ship with all major operating systems, reducing the cost associated with setting up an iSCSI SAN.
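As a rough illustration of how little setup a software initiator needs, the following sketch assumes a Linux host with the open-iscsi tools installed and automates the two usual steps with Python. The portal address and target IQN are placeholders, not details from the article.

```python
# Sketch of bringing up a software iSCSI initiator on a Linux host that has the
# open-iscsi tools installed. The portal address and target IQN are placeholders;
# substitute the values for your own array.

import subprocess

PORTAL = "192.168.10.50"                        # hypothetical array portal
TARGET = "iqn.2001-04.com.example:storage.lun0" # hypothetical target IQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the targets exposed by the array's portal.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in to the discovered target; the LUN then shows up as a local SCSI disk
# (e.g., /dev/sdX) that can be partitioned and formatted like any other drive.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```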

Performance-boosting hardware initiators, implemented through iSCSI host bus adapters (HBAs), are also available, but they can add $200 to $500 to the price of a low-end SAN. "The value proposition of iSCSI is around low cost," said Greg Schulz, founder of storage consulting firm StorageIO Group. "That's why you see the vast majority of installs with software-based initiators that are built right in." Schulz estimates that 85% to 90% of installs are done with software initiators.

That's one of the tipping points between Fibre Channel and iSCSI. Prices for FC host bus adapters, which run the network protocol and SCSI protocol, start at approximately $500 and go higher, depending on performance and the number of ports. Per-port switch prices, cabling and server-side connectivity are also often more expensive. Complicating matters further, IT managers need special knowledge to run a Fibre Channel network. With an iSCSI SAN solution, they can simply leverage the experience and investments they've accumulated with their existing Ethernet and IP networks, even though they still need to learn the nuances of storage allocation.

Fibre Channel, on the other hand, was designed to handle transaction-heavy loads reliably, with no data loss and low latency. Most companies currently use 4 Gbps Fibre Channel, although 8 Gbps is starting to gain ground. Either way, there's more bandwidth with Fibre Channel than with the 1 Gigabit Ethernet that most iSCSI SANs now employ. Fans of iSCSI can argue that bandwidth isn't the sole determiner of performance and that they can get ample performance with many workloads. Other factors in the performance equation include the number of processors, host ports, cache memory, and disk spindles or drives, and how widely data can be striped across them.

On the storage end, users don't need to make an either/or choice. They can opt for multiprotocol arrays, or unified storage, that support both iSCSI and FCP for block-based storage access with SANs, and NFS and CIFS for file-based storage access with network-attached storage (NAS). "Why limit yourself to a single protocol when you're not sure that protocol is going to last forever?" asks George Crump, founder of consulting firm Storage Switzerland. Crump says he's hesitant to recommend an iSCSI-only target, or storage pool, to clients.

Simultaneous deployment is the wave of the future, according to Robert Passmore, an analyst at Stamford, Conn.-based Gartner Inc. He's already witnessing large companies eliminate file servers and consolidate file services onto NetApp Inc.'s NAS devices, put low-end servers that don't need performance onto iSCSI, and leave the mission-critical, highest-performing applications on Fibre Channel.

iSCSI solutions and virtual servers

A hybrid approach is also helpful to enterprises that add virtual servers into the mix. Server virtualization technology frees a company to host many instances of an operating system on a single physical box, and administrators can shift server workloads from one box to another to optimize resources. However, that becomes difficult if the storage sits on the same box. In virtual environments, iSCSI SAN solutions can claim an inherent advantage over Fibre Channel SANs with virtual machine (VM) mobility when combined with the ability to provision virtual servers from SAN-bootable devices, according to Jeff O'Neal, senior director of data center solutions at NetApp.

Also, backups to tape or disk can be done directly from a guest operating system, or a VM, with an iSCSI SAN, whereas a Fibre Channel SAN adds a layer of management to the virtual machine hypervisor, he noted. "The VMware adoption curve has been so steep that it's driven more and more companies to use a SAN where they might not have done so before," said Andrew Reichman, an analyst at Forrester Research Inc. Other options include configuring VMware to use NFS with NAS devices or choosing a NAS gateway in front of a traditional block-based storage array.

Reichman recommends that companies with no Fibre Channel look at iSCSI solutions, and that even companies with Fibre Channel consider iSCSI to network additional servers at a lower cost. "Going iSCSI now will make it a smoother transition to 10 Gigabit Ethernet [10 GbE] later," he said. But iSCSI SAN users won't concern themselves with that performance boost until the price of 10 Gigabit Ethernet drops over the next few years. "The reason iSCSI is doing well in the market is because of cost savings, not because it could or couldn't do the job that Fibre Channel did," Gartner's Passmore said. "If you look at 10 Gigabit Ethernet today, the cost of the components is still well above Fibre Channel costs. You get performance that's not needed in the market that's buying iSCSI."

So far, 10 Gigabit Ethernet is finding an audience only in select iSCSI scenarios, such as between the storage array and the switch. The array's ability to use the higher bandwidth can reduce the number of ports, offsetting the higher cost of 10 Gigabit Ethernet, Passmore said. On the initiator side, because few servers need more than 1 Gbps for storage traffic, users won't opt for 10 GbE there until it becomes economical.

But even cheaper 10 Gigabit Ethernet won't enable iSCSI to eliminate Fibre Channel. Building large, complex iSCSI SANs can be tough because of the dearth of management tools and industrial-strength test configurations from vendors. For large enterprises, Fibre Channel over Ethernet (FCoE) technology might eventually be a more palatable option.

The Ethernet involved, however, won't be the garden variety in use today. Typical Ethernet needs high-overhead TCP/IP to provide retries, acknowledgements and a level of flow control in delivering information. Even then, it isn't "lossless"; there is no guarantee that frames or packets won't be dropped. Cisco Systems Inc., Brocade Communications Systems Inc. and other industry players are working on standards to enhance Ethernet for data center use, adding lossless characteristics and high-performance transport of multiple network traffic types. Converged Enhanced Ethernet (CEE) standards are expected next year, and the technology will need a year or two more to mature, according to Mario Blandini, Brocade's director of product marketing for data center infrastructure.

Once products supporting CEE hit the market and prices drop for 10 Gigabit Ethernet, all storage traffic could shift to Ethernet. Fibre Channel users would get to use the same management tools and leverage their existing skills and experience, much as iSCSI users do now at the low end. "Fibre Channel, at least for the foreseeable future, is the de facto standard for a mid- to upper-size SAN," Passmore said. "We estimate users spent $60 billion getting where they are today. [They're not] going to unplug it next week just to be able to spend slightly less money. But that doesn't mean those same companies might not want to connect to other servers in the SAN using iSCSI -- and that's just what's happening."

About the author: Carol Sliwa is a veteran IT journalist.

17 Jun 2008

SearchStorage.com
Best practices for getting the most out of an iSCSI SAN
One best practice overrides all others when deploying an iSCSI storage area network (SAN): an IT organization must separate the storage traffic, either physically or logically, from ordinary LAN traffic to run the network efficiently. The SAN shouldn't have to compete for bandwidth. Isolating the storage traffic helps to improve response times, prevent bottlenecks, nip potential performance issues in the bud and build in security. Segregating SAN traffic also helps to address the TCP/IP overhead and flow control issues inherent in an Ethernet network.

"A lot of people assume, 'I've got a couple of free Ethernet ports on existing servers. I'll just use those for SAN and throw my data traffic over them,'" said Rick Villars, a storage industry analyst at International Data Corp. "That's not going to work very well. You're going to find that the iSCSI traffic is much more intense than the data traffic."

Logical separation means setting up a virtual LAN to segregate the SAN traffic from the LAN traffic. The physical route involves dedicating separate Ethernet switches to the iSCSI network rather than hooking into corporate switches that are already in use, which may necessitate adding a couple of new Ethernet ports.

Beyond the best practice of separating the storage traffic, the rules of good storage management apply. Some other best practices for operating iSCSI SANs include:

Buy, build and configure all parts of the iSCSI SAN as fully redundant. Robert Passmore, an analyst at Gartner Inc., emphasized that any SAN or storage outage is disruptive, so 100% uptime is the goal. That means an organization must pay attention to even "the tiniest details," he noted. His advice: Build two independent fabrics and select switches that support nondisruptive firmware upgrades. Choose a storage array that is fully redundant with no single points of failure; it should support multiple spare disks, automatic rebuilds, and nondisruptive upgrades of hardware and software. Each controller board should have at least two iSCSI ports, each going to a separate Ethernet fabric. If you buy a 16-port switch, buy another one and connect one network interface card (NIC) to one switch and another NIC to the other switch. Take regular snapshots of the data, synchronized with the application, along with the SAN switch configurations and the array configurations. Consider setting up a second site with fully redundant hardware, and create a disaster recovery plan.

On the staffing side, establish a trained and dedicated storage team to monitor the environment and enforce rigid change control procedures. "I was finding a lot of users who thought that high availability was restricted to buying hardware with no single point of failure and building redundant networks," Passmore said. "They were ignoring human error."

Ensure that the management tools are secure. Deploying software tools that let IT shops set administrative roles and limit the scope of actions each individual can take will help keep the iSCSI SAN environment under control. Ensure that only authorized employees can access the tools and physical assets. Administrators need to make sure that only the desired servers can access the designated target storage. "The good news with regular Ethernet is that any server should be able to get to the storage," said Greg Schulz, founder and analyst at consulting firm The StorageIO Group. "That's also the bad thing. It's a potential security risk. So take adequate steps to make sure that only the servers that are intended or authorized are the ones that are in fact able to access the storage."

Use performance analytics tools. Examining server workloads and measuring the IOPS going in and out of the storage array can help to determine which network architecture is the best fit, advised Andrew Reichman, an analyst at Forrester Research Inc. Although some organizations do that by trial and error, diagnostic tools afford a more scientific approach. Reichman recommends building a cost model to assess the benefits of iSCSI and facilitate intelligent decisions about which network is right for each application and workload. One option is a hybrid SAN in which mission-critical or performance-oriented servers go on the Fibre Channel network and smaller, less critical servers go onto iSCSI. Choosing a multiprotocol storage array that supports both Fibre Channel and iSCSI will help; some enable a primary path on Fibre Channel and a secondary path on iSCSI to build in redundancy at a lower cost point.

When using iSCSI SANs with virtual server environments, devise a plan to effectively balance the load between the servers, the storage and the network. One of the drivers for iSCSI SAN growth is the popularity of server virtualization technology from vendors such as VMware Inc., Microsoft Corp. and Citrix Systems Inc. The special characteristics of those environments need to be factored into the equation. "If you don't have discipline in setting up virtual servers and adding additional performance and capacity, you can reach a point where suddenly you've got 400 virtual machines all trying to access the same storage array," said Villars. "There, it doesn't matter if you have Fibre Channel or iSCSI. If you put that kind of load on the system, somewhere in the network or in the array, it will break."

Because server workloads can be shifted to different physical boxes with ease in virtual environments, check to see if the storage array comes with advanced virtualization functionality of its own.

Pointing a server to a virtual volume, rather than assigning a disk to a specific server, will make it easier to shift data between different disks or systems without disrupting the server application. Also useful in virtual server environments is thin provisioning, whereby disk storage space is allocated on demand to make more efficient use of storage capacity in SANs. "In a virtual environment, if you don't have thin provisioning, you can very quickly eat up all your storage," said Villars.
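As a back-of-the-envelope illustration of why thin provisioning conserves capacity, the sketch below models a shared pool backing several thinly provisioned volumes. The class names and sizes are hypothetical, not features of any particular array.

```python
# Illustrative sketch of thin provisioning: each volume advertises its full
# size to the attached server, but physical space is drawn from the shared
# pool only when a block is actually written. Names and sizes are made up.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0

    def allocate(self, gb):
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted -- time to add physical capacity")
        self.used_gb += gb

class ThinVolume:
    def __init__(self, pool, advertised_gb):
        self.pool = pool
        self.advertised_gb = advertised_gb   # what the server sees
        self.written_gb = 0                  # what is actually consumed

    def write(self, gb):
        self.pool.allocate(gb)               # draw real space only on write
        self.written_gb += gb

# Ten 500 GB virtual-machine volumes backed by a 2 TB pool: fine as long as
# the VMs together write less than 2 TB of data.
pool = ThinPool(physical_gb=2000)
volumes = [ThinVolume(pool, advertised_gb=500) for _ in range(10)]
volumes[0].write(120)
print(pool.used_gb, "GB of", pool.physical_gb, "GB pool in use")
```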

17 Jun 2008

