
IT@Intel White Paper

Intel IT
IT Best Practices
Data Centers
January 2011

Upgrading Data Center Network Architecture to 10 Gigabit Ethernet

Executive Overview
To accommodate the increasing demands that data center growth places on Intel's network, Intel IT is converting our data center network architecture to 10 gigabit Ethernet (GbE) connections. Existing 100 megabit per second (Mb/s) and 1 GbE connections no longer support Intel's growing business requirements. Our new 10 GbE data center fabric design will enable us to accommodate current growth and meet increasing network demand in the future.

Intel IT is engaged in a verticalization strategy that optimizes data center resources to meet specific business requirements in different computing areas. Data center trends in three of these areas drove our decision to upgrade, including server virtualization and consolidation in Office and Enterprise computing environments, and rapid growth in Design computing applications and their performance requirements. Additionally, we experience 40 percent per year growth in our Internet connection requirements.

While designing the new data center fabric, we tested several 10 GbE connection products and chose those that offered the highest quality performance and reliability. We also balanced ideal design against cost considerations.

The new data center fabric design provides many benefits:

• Reduced data center complexity. As virtualization increases, a 10 GbE network allows us to use fewer physical servers and switches.

• Reduced total cost of ownership in a virtualized environment. A 10 GbE fabric has the potential to reduce network cost in our virtualized environment by 18 to 25 percent, mostly due to simplifying the LAN and cable infrastructures. The new system also requires less data center space, power, and cooling.

• Increased throughput. Faster connections and reduced network latency provide design engineers with faster workload completion times and improved productivity.

• Increased agility. The network can easily adapt to meet changing business needs and will enable us to meet future requirements, such as additional storage capacity.

Upgrading our network architecture will optimize our data center infrastructure to respond faster to business needs while enhancing the services and value IT brings to the business.

Matt Ammann, Senior Network Engineer, Intel IT
Carlos Briceno, Senior Network Engineer, Intel IT
Kevin Connell, Senior Network Engineer, Intel IT
Sanjay Rungta, Principal Engineer, Intel IT

Contents

Executive Overview
Background
Solution
  Simplifying Virtualization for Office and Enterprise Applications
  Increasing Throughput and Reducing Latency for Design Applications
  Choosing the Right Network Components
Future Plans for Office and Enterprise I/O and Storage Consolidation
Conclusion
Acronyms

BACKGROUND

Intel's 97 data centers are at the center of a massive worldwide computing environment, occupying almost 460,000 square feet and housing approximately 100,000 servers. These servers support five main computing application areas, often referred to as "DOMES": Design engineering, Office, Manufacturing, Enterprise, and Services.

Intel's rapidly growing business requirements place increasing demands on data center resources. Intel IT is engaged in a verticalization strategy that examines each application area and provides technology solutions that meet specific business needs. We are also developing an Office and Enterprise private cloud, and we see opportunities to expand cloud computing to support manufacturing.

These strategies, combined with the following significant trends in several computing application areas, compelled us to evaluate whether our existing 1 gigabit Ethernet (GbE) network infrastructure was sufficient to meet network infrastructure requirements:

• Large-scale virtualization in the Office and Enterprise computing domains.
• Increasing compute density in the Design computing domain.

In addition, high-performance Intel® processors and clustering technologies are rapidly increasing file server performance. This means that the network, not the file servers, is the limiting factor in supporting faster throughput. Our Internet connection requirements are growing 40 percent each year as well, which requires faster connectivity than is possible with a 1 GbE data center fabric.

SOLUTION

To meet these demands, we determined it was necessary to convert our data center network architecture from 100 megabit per second (Mb/s) and 1 GbE connections to 10 GbE connections. Our new 10 GbE data center fabric design will meet current needs while positioning us to incorporate new technology to meet future network requirements.

For example, the 10 GbE network will simplify the virtualized host architecture used for Office and Enterprise computing, and will also reduce network component cost. For Design computing, the 10 GbE network will reduce latency and allow faster application response times without the expense associated with alternative low-latency systems such as InfiniBand*. And, although storage area networks (SANs) in the Office and Enterprise application areas currently use a separate Fibre Channel (FC) fabric, we anticipate that as 10 GbE technology matures, we will be able to consolidate the SANs onto the 10 GbE network, thereby reducing network complexity even further.

IT@INTEL

The IT@Intel program connects IT professionals around the world with their peers inside our organization – sharing lessons learned, methods and strategies. Our goal is simple: Share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/IT or contact your local Intel representative if you'd like to learn more.


Simplifying Virtualization for Office and Enterprise Applications

Intel's data center strategy for Office and Enterprise relies on virtualization and consolidation to reduce data center cost and power consumption, while reducing time to provision servers. Our current consolidation level is 12:1, and we are targeting a 20:1 consolidation level or greater with newer dual-socket servers based on Intel® Xeon® processors.

As virtualization increases, so does the number of server connections. Using a 1 GbE network fabric, a single physical server on the LAN requires eight GbE LAN ports, as shown in Figure 1 (left).

By converting to a 10 GbE data center design, we can simplify server connectivity by reducing the number of LAN ports from eight to two 10 GbE and one 1 GbE. This significantly reduces cable and infrastructure complexity, as also shown in Figure 1 (right). Presently, the SAN will continue to use FC connections and host bus adapters (HBAs).

In addition to simplifying physical infrastructure, a 10 GbE network fabric also has the potential to reduce the overall total cost of ownership (TCO) for LAN components per server by about 18.5 percent compared to the 1 GbE fabric, as shown in Table 1.

Table 1. 10 Gigabit Ethernet (GbE) Cost Savings

LAN Component                  10 GbE Fabric Effect on Cost
Cable Infrastructure           48% Cost Reduction
LAN Infrastructure (per Port)  50% Cost Reduction
Server                         12% Cost Increase
Storage Infrastructure         0% (Same for Both Fabrics)
Total Savings (per Server)     18.5% Overall Cost Reduction
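The per-server figure in Table 1 combines component-level changes that apply to different shares of the total cost. As an illustration only (the dollar weights below are assumptions, not from this paper; Table 1 reports only the percentage change per component), the arithmetic can be sketched as:

```python
# Hypothetical per-server LAN component costs under the 1 GbE fabric.
# Equal weights are an assumption for illustration.
baseline = {"cable": 1000, "lan_port": 1000, "server": 1000, "storage": 1000}

# Per-component cost change for the 10 GbE fabric, from Table 1:
# 48% cable reduction, 50% LAN reduction, 12% server increase, storage unchanged.
change = {"cable": -0.48, "lan_port": -0.50, "server": 0.12, "storage": 0.0}

new_cost = {part: cost * (1 + change[part]) for part, cost in baseline.items()}
savings = 1 - sum(new_cost.values()) / sum(baseline.values())
print(f"Overall per-server savings: {savings:.1%}")
```

With these equal weights the result comes out at 21.5 percent, inside the 18 to 25 percent range cited earlier; the paper's 18.5 percent figure implies a specific cost mix that is not published here.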

[Figure 1 (described): Left, "Eight LAN Ports in a 1 Gigabit Ethernet Environment" — the virtualized host uses separate backup, live migration, and production connections at 2 Gb/s each, plus service console and remote management connections at 100 Mb/s; storage area network and backup and restore traffic is carried over two 4-GB Fibre Channel (FC) connections through a host bus adapter (HBA) for FC SAN connectivity. Right, "Three LAN Ports in a 10 Gigabit Ethernet Environment" — backup, live migration, and production traffic is trunked across two physical network interface cards, one remote management connection remains, and the same two 4-GB FC connections and HBA provide SAN connectivity.]

Figure 1. Implementing a 10 gigabit Ethernet data center fabric reduces the number of LAN ports for our virtualized host architecture from eight to three.
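The per-host port reduction in Figure 1 compounds with higher consolidation ratios. As a rough sketch (the virtual machine count is a hypothetical example; the 12:1 and 20:1 ratios and the eight- and three-port host designs are from the text):

```python
import math

VMS = 2_400  # hypothetical virtual machine fleet, for illustration only

# (consolidation ratio, LAN ports per host, label)
designs = [
    (12, 8, "1 GbE fabric at today's 12:1 consolidation"),
    (20, 3, "10 GbE fabric at the targeted 20:1 consolidation"),
]

for ratio, ports, label in designs:
    hosts = math.ceil(VMS / ratio)  # physical servers needed for the fleet
    print(f"{label}: {hosts} hosts, {hosts * ports} LAN ports to cable")
```

For this hypothetical fleet, the combination cuts the cabled LAN port count from 1,600 to 360, a greater than fourfold reduction.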


Increasing Throughput and Reducing Latency for Design Applications

For some specific silicon design workloads, we needed to build a small, very low-latency cluster between servers. Several parallel applications running on these servers typically carry very large packets. As shown in Table 2, we compared application response times using several network fabric options.

The 10 GbE network provided acceptable performance at an acceptable price point. For messages 16 kilobytes (KB) and larger, the 10 GbE response time was about one-quarter of the 1 GbE response time, and was closer to the performance of InfiniBand, a more expensive low-latency switched fabric subsystem. In the table, "multi-hop" is defined as having to use more than one switch to get through the network.

Choosing the Right Network Components

We surveyed the market to find products that met our technical requirements at the right price point. We discovered that not all equipment and network cards are identical; performance can vary. With extensive testing, we found that we could reduce the cost for a 10 GbE port by 65 percent by selecting the right architecture and supplier. For example, we had to decide where to place the network switch: on top of the rack, which provides more flexibility but is more expensive, or in the center of the row. We chose center-of-the-row switches to reduce cost.

Higher transmission speed requirements have led to new cable technologies, which we are deploying to optimize our implementation of a 10 GbE infrastructure:

• Small form-factor pluggable (SFP+) direct-attach cables. These twinaxial cables support 10 GbE connections over short distances of up to 7 meters. Some suppliers are producing a cable with a transmission capability of up to 15 meters.

• Connectorized cabling. We are using this technology to simplify cabling and reduce installation cost, because it is supported over SFP+ ports. One trunk cable that we use can support 10 GbE up to 90 meters and provides six individual connections. This reduces the amount of space required to support comparable densities by 66 percent. The trunks terminate on a variety of options, providing for a very flexible system. We also use a Multi-fiber Push-On (MPO) cable, which is a connectorized fiber technology composed of multi-strand trunk bundles and cassettes. This technology can support 1 GbE and 10 GbE connections and can be upgraded easily to support 40 and 100 GbE parallel-optic connections by simply swapping a cassette. The current range for 10 GbE is 300 meters on Optical Multimode 3 (OM3) multi-mode fiber (MMF) and 10 kilometers on single-mode fiber (SMF).

To maximize the supportable distances for 10 GbE, and for 40 GbE/100 GbE when they arrive, we changed Intel's fiber standard to reflect a minimum of OM3 MMF, and OM4 where possible, and we try to use more energy-efficient SFP+ ports.
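The distance limits above largely determine which medium fits a given link. A minimal selection sketch (the helper function and the example distances are our own illustration; the reaches are the ones quoted above):

```python
def pick_10gbe_media(distance_m: float) -> str:
    """Pick a 10 GbE cabling option by link length, using the reaches
    described in the text. Illustrative only; real designs also weigh
    cost, port types, and pathway constraints."""
    if distance_m <= 7:
        return "SFP+ direct-attach twinaxial"  # up to 7 m (some suppliers reach 15 m)
    if distance_m <= 90:
        return "connectorized trunk cable"     # up to 90 m, six connections per trunk
    if distance_m <= 300:
        return "OM3 multi-mode fiber (MMF)"    # up to 300 m for 10 GbE
    if distance_m <= 10_000:
        return "single-mode fiber (SMF)"       # up to 10 km
    raise ValueError("beyond the 10 GbE reaches described here")

print(pick_10gbe_media(5))    # short in-rack hop
print(pick_10gbe_media(250))  # longer run across the data center
```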

Table 2. Application Response Times for Various Packet Sizes
(application response time in microseconds)

Packet Size   Multi-Hop 1 Gigabit Ethernet   Multi-Hop   One-Hop    Multi-Hop
in Bytes      (GbE), Current Standard        10 GbE      1 GbE      InfiniBand*
8                      69.78                   62.50       41.78       15.57
128                    75.41                   62.55       44.77       17.85
1,024                 116.99                   62.56       64.52       32.25
4,096                 165.24                   65.28      103.15       60.15
16,384                257.41                   62.47      195.87      168.57
32,768                414.52                  129.48      348.55      271.95
65,536                699.25                  162.30      627.12      477.93
131,072             1,252.90                  302.15    1,182.41     883.83

Note: Intel internal measurements, June 2010.
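The "about one-quarter" comparison can be checked directly against Table 2. A small sketch (times copied from the table, in microseconds):

```python
# (packet size in bytes, multi-hop 1 GbE, multi-hop 10 GbE), from Table 2
rows = [
    (8, 69.78, 62.50),
    (128, 75.41, 62.55),
    (1_024, 116.99, 62.56),
    (4_096, 165.24, 65.28),
    (16_384, 257.41, 62.47),
    (32_768, 414.52, 129.48),
    (65_536, 699.25, 162.30),
    (131_072, 1_252.90, 302.15),
]

for size, t_1gbe, t_10gbe in rows:
    print(f"{size:>7} B: 10 GbE response is {t_10gbe / t_1gbe:.0%} of 1 GbE")
```

From 16,384 bytes upward the ratio stays in roughly the 23 to 31 percent range, consistent with the approximately one-quarter response time cited in the text.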


FUTURE PLANS FOR OFFICE AND ENTERPRISE I/O AND STORAGE CONSOLIDATION

Historically, Ethernet's bandwidth limitations have kept it from being the fabric of choice for some application areas, such as I/O, storage, and interprocess communication (IPC). Consequently, we have used other fabrics, such as FC, to meet high-bandwidth, low-latency, no-drop needs. The advent of 10 GbE is enabling us to converge all our network needs onto a single, flexible infrastructure.

Several factors are responsible for increasing the I/O demand on our data centers. First, adding more servers to the data center increases input/output operations per second (IOPS), which creates a proportional demand on the network. In addition, as each generation of processors becomes more complex, the amount of data associated with silicon design also increases significantly, again increasing network demand. Finally, systems with up to 512 gigabytes (GB) of memory are becoming more common, and these systems also need a high-speed network to read, write, and back up large amounts of data.

Our upgrade to 10 GbE products will enable us to consolidate storage for Office and Enterprise applications while reducing our 10 GbE per-port cost. When I/O convergence on Ethernet becomes a reality, multiple traffic types, such as LAN, storage, and IPC, can be consolidated onto a single, easy-to-use network fabric. We have conducted multiple phases of testing, and in the near future these 10 GbE ports will be carrying multiple traffic types.

CONCLUSION

A high-performance 10 GbE data center infrastructure can simplify the virtualization of Office and Enterprise applications and reduce per-server TCO. In addition, 10 GbE's lower network latency and increased throughput can support our design teams' high-density computing needs, improving design engineer productivity.

Our analysis shows that for a virtualized environment, a 10 GbE infrastructure can reduce our network TCO by as much as 18 to 25 percent. And for design applications, where low latency is required, 10 GbE can play a crucial role without requiring expensive low-latency technology. The new fabric will also reduce data center complexity and increase our network's agility to meet Intel's growing data center needs.

ACRONYMS

FC    Fibre Channel
GB    gigabyte
GbE   gigabit Ethernet
HBA   host bus adapter
IOPS  input/output operations per second
IPC   interprocess communication
MB    megabyte
Mb/s  megabits per second
MMF   multi-mode fiber
MPO   Multi-fiber Push-On
OM3   Optical Multimode 3
SAN   storage area network
SFP+  small form-factor pluggable
SMF   single-mode fiber
TCO   total cost of ownership

For more information on Intel IT best practices, visit www.intel.com/it.

This paper is for informational purposes only. THIS DOCUMENT IS PROVIDED "AS IS" WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL, SPECIFICATION OR SAMPLE. Intel disclaims all liability, including liability for infringement of any proprietary rights, relating to use of information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein.

Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

* Other names and brands may be claimed as the property of others.

Copyright © 2011 Intel Corporation. All rights reserved.

Printed in USA   0111/AC/KC/PDF   324159-001US   Please Recycle
