
Converging Storage and Data Networks in the Data Center

A Dell Technical White Paper


THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED "AS IS," WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Dell, the DELL logo, and the DELL badge, PowerConnect, and PowerVault are trademarks of Dell Inc. Symantec and the SYMANTEC logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the US and other countries. Microsoft, Windows, Windows Server, and Active Directory are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

September 2011



Contents

Data Center Fabric Convergence
Fibre Channel Over Ethernet
FCoE Switching
FCoE CNAs
FCoE and iSCSI
Migration from FC to FCoE
Conclusion

Figures

Figure 1. Today's data center with multiple networks and with a consolidated network
Figure 2. FCoE Ethernet Switch
Figure 3. First phase in migrating to FCoE
Figure 4. Second phase in migrating to FCoE
Figure 5. Third and final phase in migrating to FCoE



Data Center Fabric Convergence


In typical large data centers, IT managers deploy separate networks for data and storage traffic. Today's data networks are mostly based on 1GbE, with 10GbE adoption expected to increase sharply in 2011 and 40GbE/100GbE coming soon after. Enterprise data centers primarily use Fibre Channel for storage interconnects because of its reliability, performance, and lossless operation.

The operational and capital costs associated with maintaining multiple networks are significant. Each network has its own switches, adapters, cables, and management software. Servers often have multiple 1GbE adapters and FC adapters for performance, management, or reliability reasons. Additionally, the power, space, and cooling requirements for two networks are substantial, often exceeding the capital expenditure for the equipment itself.

IT managers realize that the continued cost and complexity of managing both a data network and a storage network is not a sustainable long-term solution. They are now looking for a way to consolidate their data and storage traffic onto a single data center fabric, which would dramatically simplify data center architectures and reduce cost. The benefits of a consolidated fabric include:

- Reduced management cost, as there would be only one fabric to manage.
- Reduced hardware costs, as there would be only one set of switches, adapters, and cables to purchase.
- Simplified cabling infrastructure, as there would be only one set of cables to manage.
- Reduced space, power, and cooling requirements, as the number of switches and adapters would be much lower.

Multiple data center networks exist because network and storage requirements have traditionally been different. But with advances in technology and changes in market dynamics, the needs of these two networks are converging, and standards bodies have been working to define standards that support a converged fabric.

Figure 1. Today's data center with multiple networks and with a consolidated network



When the underlying requirements of storage and data networks are evaluated, Ethernet shows the most promise as the basis for a converged fabric. Ethernet is the predominant choice for interconnecting resources in the data center: it is ubiquitous and well understood by network engineers and developers worldwide, it has withstood the test of time over more than twenty-five years of deployment, and its cost per port is low. With 10GbE rapidly being adopted and the 40GbE/100GbE standards approved, Ethernet can now meet the performance requirements of a converged fabric. New Ethernet standards, such as DCB and FCoE, address packet loss, flow control, and other issues, making it possible to use Ethernet as the basis for a converged fabric in the data center.

Fibre Channel Over Ethernet

Fibre Channel over Ethernet (FCoE) was adopted as an ANSI standard in June 2009. FCoE was developed by the International Committee for Information Technology Standards (INCITS) T11 committee. The FCoE protocol specification maps Fibre Channel (FC) upper layer protocols directly over a bridged Ethernet network and does not require the Internet Protocol (IP) for forwarding, as is the case with iSCSI. FCoE provides an evolutionary approach to migrating FC SANs to an Ethernet switching fabric: it preserves Fibre Channel constructs and provides latency, security, and traffic management attributes similar to those of FC, while preserving investments in FC tools, training, and SAN devices (FC switches and FC-attached storage).

The primary motivation driving the standardization of FCoE was the desire to preserve existing investment in FC SANs while enabling data center I/O consolidation, with Ethernet as the unified switching fabric for both LAN and SAN applications. The need for I/O consolidation is especially pressing for enterprises making a major commitment to server virtualization as a strategy for server resource optimization and energy conservation. Server virtualization on a large scale requires shared access to storage resources, such as that enabled by a SAN-based implementation of virtualized storage.

A typical high-performance server configuration currently includes a pair of Fibre Channel host bus adapters (HBAs) and two or more GbE network interface cards (NICs). Vendors of virtual machine software often recommend six or more GbE NIC ports for high-performance servers. As the number of cores per CPU increases, the number of virtual machines per server will also increase, driving the need for significantly higher levels of both network and SAN I/O. By consolidating network and SAN I/O onto 10GbE, followed by 40GbE and 100GbE in the future, the number of adapters required per server can be greatly reduced. For example, implementing FCoE over a pair of 10 Gigabit Ethernet adapter ports provides the equivalent bandwidth of two 4Gbps FC connections plus twelve GbE connections, replacing fourteen cables with two, roughly an 86% reduction. This consolidation shrinks the server's footprint and reduces power and cooling requirements by eliminating the hundreds of watts consumed by twelve adapters and their corresponding switch ports.

Another economic advantage of FCoE is its ability to leverage the traditionally low prices of Ethernet switch ports and adapters, driven by large manufacturing volumes and strong competition.
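To make the encapsulation model concrete, here is a minimal Python sketch of an FCoE frame. The EtherType 0x8906 and the overall layering (Ethernet header, FCoE header with SOF/EOF delimiters framing a complete FC frame) follow the T11 work; the reserved-field layout is simplified, and the MAC addresses, SOF/EOF code points, and payload are illustrative assumptions rather than values from this paper.

```python
import struct

FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a simplified FCoE encapsulation.

    Layout (simplified from FC-BB-5): Ethernet header, a 14-byte FCoE
    header whose final byte is the start-of-frame (SOF) delimiter, the
    FC frame (24-byte header + payload + FC CRC), then a trailer carrying
    the end-of-frame (EOF) delimiter. The Ethernet FCS is appended by
    the NIC, not built here.
    """
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + b"\x2e"   # version/reserved bits, then an example SOF code
    trailer = b"\x41" + bytes(3)     # an example EOF code plus reserved bytes
    return eth_hdr + fcoe_hdr + fc_frame + trailer

# Hypothetical MAC addresses and a placeholder FC frame, purely for illustration.
frame = fcoe_frame(bytes.fromhex("0efc00010203"),
                   bytes.fromhex("0efc000a0b0c"),
                   bytes(28))        # stand-in: FC header + empty payload + CRC
print(f"{len(frame)} bytes on the wire before the Ethernet FCS")
```

Note that there is no IP or TCP layer anywhere in the frame, which is precisely why FCoE requires a lossless Ethernet underneath rather than relying on retransmission for recovery.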
FCoE also promises improved application management and control, with all application-related data flows tracked and analyzed with a single set of Ethernet management tools.

FCoE depends on parallel standards efforts to make Ethernet lossless by eliminating packet drops due to buffer overflow, and to make other enhancements to Ethernet as a unified data center switching fabric. The IEEE Data Center Bridging (DCB) and related standards efforts are still ongoing. A combination of FCoE and DCB standards will have to be implemented both in server converged NICs and in data center switch ASICs before FCoE is ready to serve as a fully functional, standards-based extension of and migration path for Fibre Channel SANs in high-performance data centers.
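Much of that lossless behavior comes from Priority-based Flow Control (IEEE 802.1Qbb), one of the DCB standards, which pauses an individual traffic class before its ingress buffer can overflow. The following toy simulation is only a sketch of the watermark idea, with hypothetical buffer sizes and thresholds, not an implementation of the standard:

```python
from collections import deque

BUFFER_FRAMES = 100      # hypothetical per-priority ingress buffer size
PAUSE_THRESHOLD = 80     # assert PFC pause above this queue depth
RESUME_THRESHOLD = 40    # release pause below this queue depth

queue = deque()
paused = False
dropped = 0

for tick in range(10_000):
    # Sender obeys the pause state it last saw and offers a bursty load:
    # two frames per tick during "on" periods, none during "off" periods.
    if not paused and (tick // 100) % 2 == 0:
        for _ in range(2):
            if len(queue) < BUFFER_FRAMES:
                queue.append(tick)
            else:
                dropped += 1   # this would be a tail drop on plain Ethernet
    # Receiver drains one frame per tick.
    if queue:
        queue.popleft()
    # PFC watermarks: pause above the high mark, resume below the low mark.
    if len(queue) >= PAUSE_THRESHOLD:
        paused = True
    elif len(queue) <= RESUME_THRESHOLD:
        paused = False

print(f"queue depth at end: {len(queue)}, frames dropped: {dropped}")
```

With the watermark logic removed, the same burst pattern overruns the 100-frame buffer and frames are tail-dropped, which is exactly the plain-Ethernet behavior that storage traffic cannot tolerate; with it, the queue oscillates between the watermarks and nothing is lost.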

FCoE Switching
Figure 2 shows a possible architecture for the data plane of an FCoE switch built on existing Ethernet switch fabrics. The control plane (not shown in the figure) would also need to provide FCoE functionality, including support for FSPF routing and the Name Server database. Servers would attach through the DCB Ethernet ports or line cards, while connections to the FC SAN or FC disk arrays would be via FC ports. The FC ports would include the FCoE gateway encapsulation functionality plus a DCB Ethernet interface to the Ethernet switch fabric. Both the DCB Ethernet and the FC/FCoE ports would have to be implemented with a new generation of Ethernet ASIC chip sets that support the new technologies. With a number of IEEE study groups, the IETF, and T11 all involved in specifying standards that affect FCoE, switch vendors are likely to want a high degree of standards maturity before committing to high-performance DCB-capable ASICs.

An architecture such as that shown in Figure 2 not only has the potential to preserve prior investments in modular 10GbE switches, but also offers a high degree of configuration flexibility: the same modular chassis can be configured as a traditional Ethernet switch, a DCB Ethernet switch, an FCoE switch, an FC switch, or with other combinations of port types.

Figure 2. FCoE Ethernet Switch
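As noted above, the control plane of such a switch must run FSPF (Fabric Shortest Path First), Fibre Channel's link-state routing protocol. Like OSPF, FSPF floods link-state records and computes lowest-cost paths between switch domains; the sketch below shows only that core path computation (Dijkstra over a hypothetical four-switch fabric with made-up link costs), omitting flooding, database exchange, and equal-cost path selection:

```python
import heapq

# Hypothetical fabric: switch domain IDs with FSPF-style link costs
# (in FSPF the cost is typically derived from link speed).
fabric = {
    1: {2: 500, 3: 1000},
    2: {1: 500, 3: 500, 4: 1000},
    3: {1: 1000, 2: 500, 4: 500},
    4: {2: 1000, 3: 500},
}

def fspf_paths(source: int) -> dict:
    """Dijkstra over the fabric graph: lowest-cost path to each domain."""
    best = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, dom = heapq.heappop(heap)
        if cost > best.get(dom, float("inf")):
            continue  # stale heap entry, already found a cheaper path
        for neighbor, link_cost in fabric[dom].items():
            new_cost = cost + link_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return best

print(fspf_paths(1))   # -> {1: 0, 2: 500, 3: 1000, 4: 1500}
```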

FCoE CNAs
Servers connected to the FCoE switched network will need Converged Network Adapters (CNAs) that support both DCB Ethernet and Fibre Channel upper-layer N_Port functionality. The common product implementation is a single- or dual-port 10GbE intelligent NIC that includes hardware-based offload and acceleration for both TCP applications and FCoE. Protocol offload avoids per-packet processing in software, enhancing performance while reducing CPU utilization and latency. Products in this category have already been announced by a number of vendors.


Initial products such as these will need to be based on pre-standard implementations of DCB Ethernet functionality, because those standards have not yet been ratified. As the standards are finalized, first-generation CNAs may require modifications to comply with the evolving standards.

FCoE and iSCSI


iSCSI was ratified as an Internet standard by the IETF in 2003 as a low-cost, IP-based protocol for connecting hosts to block-level storage over existing or dedicated IP networks. iSCSI was developed to address the need for a lower-cost alternative to Fibre Channel. iSCSI over Gigabit Ethernet has proven popular with small and medium-sized businesses (SMBs) but has not displaced FC, for several reasons: performance limitations with 1GbE, the lack of lossless Ethernet packet delivery, and the lack of administration and management tools equivalent in functionality to those of FC.

iSCSI over 10GbE, in conjunction with intelligent NICs that offload TCP/IP and iSCSI processing from the host CPU, can solve the performance issue. However, this solution has not yet been widely adopted because of the relatively high cost of 10GbE switch ports and 10GbE intelligent iSCSI NICs. By the time FCoE is a mature technology, the economics of 10GbE iSCSI are likely to be much more favorable. The emergence of DCB Ethernet for data center applications will benefit iSCSI as well as FCoE. Therefore, as 10GbE iSCSI and FCoE continue to mature, iSCSI can be expected to become considerably more competitive with both FC and FCoE.

Over the same time frame, virtually all enterprises with data centers are likely to adopt one of these three SAN technologies as a side effect of server virtualization becoming ubiquitous. Data centers with considerable prior investments in FC will naturally gravitate toward FCoE, while new adopters of SAN technology will want to weigh the relative merits of FCoE versus iSCSI over 10GbE within the time frame of their purchase decision.
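The architectural difference between the two protocols comes down to where SCSI traffic sits in the stack: iSCSI rides on TCP/IP and is routable across any IP network, while FCoE places FC frames directly over DCB Ethernet with no IP layer. A small illustrative sketch (the layer names are informal; iSCSI's well-known TCP port 3260 and FCoE's EtherType 0x8906 are the only protocol constants used):

```python
# Layering comparison, top of stack first.
STACKS = {
    "iSCSI": ["SCSI", "iSCSI PDU", "TCP (port 3260)", "IP", "Ethernet"],
    "FCoE":  ["SCSI", "FCP", "FC frame",
              "FCoE shim (EtherType 0x8906)", "DCB Ethernet"],
}

for name, layers in STACKS.items():
    print(f"{name:6}: " + " over ".join(layers))
    # iSCSI can cross IP routers; FCoE, having no IP layer, stays within
    # an L2/DCB domain or traverses FCoE-aware switches.
```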

Migration from FC to FCoE


The success of FCoE depends on a number of separate standardization efforts in three standards bodies, as well as an extensive ecosystem of vendors in the storage, adapter, switch, and semiconductor markets. While the FCoE aspects of the standards were driven with an apparent sense of urgency, Ethernet standards processes are considerably slower due to the open nature of IEEE standards requirements and the sheer size and diversity of the Ethernet vendor community. As a result, the FC aspects of FCoE were standardized well in advance of the full suite of DCB Ethernet enhancements described earlier.

Adoption strategies for FCoE vary with the situation. New data centers have the luxury of starting from scratch, with no existing infrastructure to replace; in this case, customers should plan on an end-to-end FCoE architecture, from the server to the storage device. For most data centers, though, FCoE can be adopted in phases. A phased adoption minimizes the disruption of introducing new technology, reduces the possibility of problems, and spreads the cost of implementation over a longer period of time.

The first phase is implementing FCoE between the server and the top-of-rack (ToR) switch (see Figure 3). Customers replace the network cards and FC cards in servers with Converged Network Adapters that carry both IP traffic and storage traffic over Ethernet, and replace ToR network switches with an FCoE ToR switch. The FCoE ToR switch has multiple Ethernet ports to connect to servers, 10GbE ports to connect to the core network, and FC ports to connect to the storage network. These changes eliminate FC switches, multiple I/O cards, and cabling, resulting in lower capital expense, lower operating cost, and simplified management.



Figure 3. First phase in migrating to FCoE

The second phase involves replacing the FC switches at the aggregation layer with FCoE switches (see Figure 4). This is a key step in the migration process. If your architecture does not employ a three-tier model, this phase can be skipped.

Figure 4. Second phase in migrating to FCoE


The final phase is to replace the core Ethernet data switch and the core FC switch with a unified FCoE switch that supports both FCoE and Ethernet connectivity (see Figure 5), completing the end-to-end converged network. New storage devices with FCoE interfaces can be integrated as storage demands grow. This three-phase migration strategy minimizes disruption to data center operations, reduces implementation risk, and spreads the cost of migration over a longer period of time.

Figure 5. Third and final phase in migrating to FCoE

Conclusion
Fibre Channel over Ethernet promises to play a major role in preserving prior investments in Fibre Channel technology and storage resources, while driving I/O consolidation for servers and eventually setting the stage for complete unification of data center switching fabrics. However, given the mission-critical nature of data center storage networking, adoption of FCoE is expected to be a gradual and orderly process, gated in large part by the maturity of numerous relevant standards and the availability of cost-effective products that provide full implementations of those standards.

