THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Dell, the DELL logo, the DELL badge, PowerConnect, and PowerVault are trademarks of Dell Inc. Symantec and the SYMANTEC logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the US and other countries. Microsoft, Windows, Windows Server, and Active Directory are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
September 2011
Contents

Data Center Fabric Convergence
Fibre Channel Over Ethernet
FCoE Switching
FCoE CNAs
FCoE and iSCSI
Migration from FC to FCoE
Conclusion
Figures

Figure 1. Today's data center with multiple networks and with a consolidated network
Figure 2. FCoE Ethernet switch
Figure 3. First phase in migrating to FCoE
Figure 4. Second phase in migrating to FCoE
Figure 5. Third and final phase in migrating to FCoE
A converged fabric also promises reduced space, power, and cooling requirements, since the number of switches and adapters would be much lower. The reason multiple data center networks exist is that network and storage requirements have traditionally been different. But with advances in technology and changes in market dynamics, the needs of these two networks are converging. Additionally, standards bodies have been working to define standards that support a converged fabric.
Figure 1. Today's data center with multiple networks and with a consolidated network
When the underlying requirements of storage and data networks are evaluated, Ethernet shows the most promise for meeting the requirements of a converged fabric. Ethernet is the predominant network choice for interconnecting resources in the data center: it is ubiquitous and well understood by network engineers and developers worldwide, it has withstood the test of time with more than twenty-five years of deployment, and its cost per port is low. With 10GbE rapidly being adopted, and with the approval of the 40GbE/100GbE standards, Ethernet is now able to meet the performance requirements of a converged fabric. New Ethernet standards, such as DCB and FCoE, address packet loss, flow control, and other issues, making it possible to use Ethernet as the basis for a converged fabric in the data center.

Fibre Channel Over Ethernet

Fibre Channel over Ethernet (FCoE) was adopted as an ANSI standard in June 2009. FCoE was developed by the International Committee for Information Technology Standards (INCITS) T11 committee. The FCoE protocol specification maps Fibre Channel (FC) upper-layer protocols directly over a bridged Ethernet network and does not require the Internet Protocol (IP) for forwarding, as is the case with iSCSI. FCoE provides an evolutionary approach to migrating FC SANs to an Ethernet switching fabric: it preserves Fibre Channel constructs and provides latency, security, and traffic management attributes similar to those of FC, while preserving investments in FC tools, training, and SAN devices (FC switches and FC-attached storage).

The primary motivation driving the standardization of FCoE was the desire to preserve existing investment in FC SANs while enabling data center I/O consolidation based on Ethernet as the unified switching fabric for both LAN and SAN applications. The need for I/O consolidation is especially pressing for enterprises making a major commitment to server virtualization as a strategy for server resource optimization and energy conservation. Server virtualization on a large scale requires shared access to storage resources, such as that enabled by a SAN-based implementation of virtualized storage.

A typical high performance server configuration currently includes a pair of Fibre Channel host bus adapters (HBAs) and two or more GbE network interface cards (NICs). Vendors of virtual machine software often recommend six or more GbE NIC ports for high performance servers. As the number of cores per CPU increases, the number of virtual machines per server will also increase, driving the need for significantly higher levels of both network and SAN I/O. By consolidating network and SAN I/O on 10 GbE, followed by 40 GbE and 100 GbE in the future, the number of adapters required per server can be greatly reduced. For example, implementing FCoE over a pair of 10 Gigabit Ethernet adapter ports provides the equivalent bandwidth of two 4 Gbps FC connections plus twelve GbE connections. This consolidation reduces the server's footprint with an 86% reduction in the number of cables (from 14 to 2), and also reduces power and cooling requirements by eliminating the hundreds of watts consumed by 12 adapters and the corresponding switch ports. Another economic advantage of FCoE is its ability to leverage the traditionally low prices of Ethernet switch ports and adapters, driven by large manufacturing volumes and strong competition.
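The arithmetic behind that example is straightforward. A minimal sketch in Python, using only the port counts and speeds quoted above:

    # Back-of-the-envelope check of the consolidation example above.
    # Legacy server: two 4 Gbps FC HBA ports plus twelve 1 GbE NIC ports.
    legacy_cables = 2 + 12
    legacy_bandwidth_gbps = 2 * 4 + 12 * 1     # 8 Gbps SAN + 12 Gbps LAN = 20 Gbps

    # Converged server: two 10 GbE CNA ports carrying LAN and FCoE together.
    converged_cables = 2
    converged_bandwidth_gbps = 2 * 10          # 20 Gbps, matching the legacy total

    cable_reduction = 1 - converged_cables / legacy_cables
    print(f"Cables: {legacy_cables} -> {converged_cables} ({cable_reduction:.0%} reduction)")
    print(f"Bandwidth: {legacy_bandwidth_gbps} Gbps -> {converged_bandwidth_gbps} Gbps (unchanged)")

Running this shows the 14-to-2 cable count behind the 86% reduction cited above, with no loss of aggregate bandwidth.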
FCoE also promises improved application management and control, with all application-related data flows tracked and analyzed with a single set of Ethernet management tools.

FCoE depends on parallel standards efforts to make Ethernet lossless by eliminating packet drops due to buffer overflow, and to make other enhancements to Ethernet as a unified data center switching fabric. Work on the IEEE Data Center Bridging (DCB) and related standards is still ongoing. A combination of FCoE and DCB standards will have to be implemented both in server converged NICs and in data center switch ASICs before FCoE is ready to serve as a fully functional, standards-based extension of and migration path for Fibre Channel SANs in high performance data centers.
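The central DCB mechanism behind this lossless behavior is priority-based flow control (IEEE 802.1Qbb), which pauses an individual traffic class before its buffer overflows rather than dropping frames. Below is a minimal, illustrative Python sketch of the pause decision; real switches implement this in silicon, and the threshold value is an assumption chosen purely for illustration:

    # Illustrative sketch of priority-based flow control (PFC, IEEE 802.1Qbb).
    # A switch watches each priority's ingress queue and asks the sender to
    # pause that priority before the buffer overflows, so no frame is dropped.
    PAUSE_THRESHOLD = 0.8  # assumed fraction of buffer fill that triggers a pause

    def priorities_to_pause(queue_depths: dict[int, int], capacity: int) -> set[int]:
        """Return the 802.1p priorities for which a PFC pause should be sent
        upstream. Storage traffic rides in its own priority, so pausing it
        never drops frames and never stalls ordinary LAN traffic."""
        return {prio for prio, depth in queue_depths.items()
                if depth / capacity >= PAUSE_THRESHOLD}

    # Example: priority 3 (commonly carrying FCoE) is near overflow.
    print(priorities_to_pause({0: 100, 3: 850}, capacity=1000))  # -> {3}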
FCoE Switching
Figure 2 shows a possible architecture for the data plane of an FCoE switch that could be built on existing Ethernet switch fabrics. The control plane (not shown in the figure) would also need to provide FCoE functionality, including support for FSPF routing and the Name Server database. Servers would attach through the DCB Ethernet ports or line cards, while connections to the FC SAN or FC disk arrays would be via FC ports. The FC ports would include the FCoE gateway encapsulation functionality plus a DCB Ethernet interface to the Ethernet switch fabric. Both the DCB Ethernet and the FC/FCoE ports would have to be implemented with a new generation of Ethernet ASIC chip sets that support the new technologies.

With a number of IEEE study groups, the IETF, and T11 all involved in specifying standards that affect FCoE, it is likely that switch vendors will want to see a high degree of standards maturity before committing to high performance DCB-capable ASICs. An architecture such as that shown in Figure 2 not only has the potential to preserve prior investments in modular 10 GbE switches, but also offers a high degree of configuration flexibility, in which the same modular chassis can be configured as a traditional Ethernet switch, a DCB Ethernet switch, an FCoE switch, an FC switch, or with other combinations of port types.
Figure 2. FCoE Ethernet switch
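To make the gateway encapsulation function concrete, here is a simplified Python sketch of how an FC frame is wrapped for transport over the Ethernet fabric. The FCoE Ethertype (0x8906) is the standard assignment; the field layout is abbreviated (the full FC-BB-5 encapsulation also carries a version nibble, reserved bits, and padding), so treat this as illustrative rather than wire-accurate:

    import struct

    FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

    def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        """Wrap a raw FC frame (FC header + payload + CRC) in an Ethernet frame.
        Simplified: omits the FC-BB-5 version/reserved fields and padding."""
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        sof = bytes([0x2E])  # start-of-frame delimiter (SOFi3 encoding)
        eof = bytes([0x41])  # end-of-frame delimiter (EOFn encoding)
        return eth_header + sof + fc_frame + eof

Because the FC frame travels intact inside the Ethernet payload, a gateway port can strip the Ethernet wrapper and forward a native FC frame, which is what preserves existing FC constructs end to end.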
FCoE CNAs
Servers connected to the FCoE switched network will need Converged Network Adapters (CNAs) that support both DCB Ethernet and the Fibre Channel upper-layer N_Port functionality. The common product implementation is a single- or dual-port 10 GbE intelligent NIC that includes hardware-based offload and acceleration for both TCP applications and FCoE. Protocol offload avoids per-packet processing in software, enhancing performance while reducing CPU utilization and latency. Products in this category have already been announced by a number of vendors.
Initial products such as these will need to be based on pre-standard implementations of DCB Ethernet functionality, because those standards have not yet been ratified. As the standards are finalized, first-generation CNAs may require some modifications in order to comply.
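Conceptually, a CNA presents itself to the operating system as two logical devices, a NIC and an FC HBA, and steers each inbound frame to the appropriate offload engine by Ethertype. A minimal, hypothetical sketch of that demultiplexing step (the function and label names are illustrative, not a vendor API):

    FCOE_ETHERTYPE = 0x8906  # FCoE data frames
    FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol (fabric login/discovery)

    def classify_frame(frame: bytes) -> str:
        """Steer an inbound Ethernet frame to the right engine. Hardware CNAs
        do this in silicon; 802.1Q VLAN tags are ignored here for simplicity."""
        ethertype = int.from_bytes(frame[12:14], "big")
        if ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
            return "fc_offload"  # FC upper-layer processing (the HBA personality)
        return "lan_stack"       # ordinary Ethernet/TCP traffic (the NIC personality)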
Migration from FC to FCoE

In the first phase, FCoE is introduced at the server edge: servers equipped with CNAs connect to FCoE-capable access switches, which pass storage traffic on to the existing FC SAN (see Figure 3).

Figure 3. First phase in migrating to FCoE
The second phase involves replacing the FC switches at the aggregation layer with FCoE switches (see Figure 4). This is a key step in the migration process. If your architecture does not employ a 3-tier model, this phase can be skipped.
Figure 4. Second phase in migrating to FCoE
The final phase is to replace the core Ethernet data switch and the core FC switch with a unified FCoE switch that supports both FCoE and Ethernet connectivity (see Figure 5). This provides a complete end-to-end converged network. New storage devices with FCoE interfaces can be integrated as storage demands increase. This three-phase migration strategy minimizes disruption to data center operations, minimizes implementation risk, and spreads the cost of migration over a long period of time.
Figure 5. Third and final phase in migrating to FCoE
Conclusion
Fibre Channel over Ethernet promises to play a major role in preserving prior investments in Fibre Channel technology and storage resources, while driving I/O consolidation for servers and eventually setting the stage for complete unification of data center switching fabrics. However, considering the mission-critical nature of data center storage networking, adoption of FCoE is expected to be a very gradual and orderly process, gated in large part by the maturity of numerous relevant standards and the availability of cost-effective products that provide full implementations of those standards.