eGuide
At the heart of the enterprise, the data center network is the core of all corporate communications. But as application environments mature and become more services-oriented, those flows grow richer, more compute-intensive and more bandwidth-hungry than many legacy networks can handle efficiently. Top that challenge off with server consolidation, server virtualization and the trend toward convergence of data and storage on a single fabric: the pressure on the data center network is coming from all sides. Today, many enterprise IT professionals are rethinking their traditional approaches to the network. In these articles, Network World and its sister publication InfoWorld explore how to approach networking today, starting with the basics and moving on from there.
IN THIS eGUIDE

Everything You Need to Know About Building Solid, Reliable Networks
A networking primer on the fundamentals, from the right switches to the right network monitoring techniques

Four Trends Shape the New Data Center
IT execs adapt to a new reality as x86 virtualization transforms the data center forever

Emerging IEEE Ethernet Standards Could Soothe Data Center Headaches
Under development is a way to offload policy, security and management processing from virtual switches

10G Ethernet Shakes Net Design to the Core
Shift from three- to two-tier architectures driven by need for speed, server virtualization, unified switching fabrics

Seven Resolutions for Network Management
One analyst's advice on how to keep your edge

Data Center Network Resources
Additional tools, tips and documentation
RETHINKING THE DATA CENTER NETWORK
Everything You Need to Know About Building Solid, Reliable Networks
A networking primer on the fundamentals, from the right switches to the right network monitoring techniques

While almost every part of a modern data center can be considered mission critical, the network is the absolute foundation of all communications. That's why it must be designed and built right the first time. After all, the best servers and storage in the world can't do anything without a solid network. To that end, here are a variety of design points and best practices to help tighten up the bottom end.

Core considerations
The term "network" applies to everything from LAN to SAN to WAN. All these variations require a network core, so let's start there.

The size of the organization will determine the size and capacity of the core. In most infrastructures, the data center core is constructed differently from the LAN core. If we take a hypothetical network that has to serve the needs of a few hundred or a thousand users in a single building, with a data center in the middle, it's not uncommon to find that there are big switches in the middle and aggregation switches at the edges.

Ideally, the core is composed of two modular switching platforms that carry data from the edge over gigabit fiber, located in the same room as the server and storage infrastructure. Two gigabit fiber links to a closet of, say, 100 switch ports is sufficient for most business purposes. In the event that it's not, you're likely better off bonding multiple 1Gbit links rather than upgrading to 10G for those closets. As 10G drops in price, this will change, but for now, it's far cheaper to bond several 1Gbit ports than to add 10G capability to both the core and the edge.

In the likely event that VoIP will be deployed, it may be beneficial to implement small modular switches at the edge as well, allowing Power over Ethernet (PoE) modules to be installed in the same switch as the non-PoE ports. Alternatively, deploying trunked PoE ports to each user is also a possibility. This allows a single port to be used for VoIP and desktop access tasks.

In the familiar hub-and-spoke model, the core connects to the edge aggregation switches with at least two links, either connecting to the server infrastructure with direct copper runs or through server aggregation switches in each rack. This decision must be determined site by site,
due to the distance limitations of copper cabling. Either way, it's cleaner to deploy server aggregation switches in each rack and run only a few fiber links back to the core than to try to shoehorn everything into a few huge switches. In addition, using server aggregation switches will allow redundant connections to redundant cores, which will eliminate the possibility of losing server communications in the event of a core switch failure. If you can afford it and your layout permits it, use server aggregation switches.

Regardless of the physical layout method, the core switches need to be redundant in every possible way: redundant power, redundant interconnections, and redundant routing protocols. Ideally, they should have redundant control modules as well, but you can make do without them if you can't afford them.

Core switches will be responsible for switching nearly every packet in the infrastructure, so they need to be balanced accordingly. It's a good idea to make ample use of Hot Standby Router Protocol (HSRP) or Virtual Router Redundancy Protocol (VRRP). These allow two discrete switches to effectively share a single IP and MAC address, which is used as the default route for a VLAN. In the event that one core fails, those VLANs will still be accessible.

Finally, proper use of Spanning Tree Protocol (STP) is essential to proper network operation. A full discussion of these two technologies is beyond the scope of this guide, but correct configuration of these two elements will have a significant effect on the resiliency and proper operation of any Layer-3 switched network.

Minding the storage
Once the core has been built, you can take on storage networking. Although other technologies are available, when you link servers to storage arrays, your practical choice will probably boil down to a familiar one: Fibre Channel or iSCSI?

Fibre Channel is generally faster and delivers lower latency than iSCSI, but it's not truly necessary for most applications. Fibre Channel requires specific FC switches and costly FC HBAs in each server – ideally two for redundancy – while iSCSI can perform quite well with standard gigabit copper ports. Unless you have transaction-oriented applications such as large databases with thousands of users, you can probably choose iSCSI without affecting performance and save a bundle.

Fibre Channel networks are unrelated to the rest of the network. They exist all on their own, linked only to the main network via management links that do not carry any transactional traffic. iSCSI networks can be built using the same Ethernet switches that handle normal network traffic – although iSCSI networks should be confined to their own VLAN at the least, and possibly built on a specific set of Ethernet switches that separate this traffic for performance reasons.

Make sure to choose the switches used for an iSCSI storage network carefully. Some vendors sell switches that perform well with a normal network load but bog down with iSCSI traffic due to the internal structure of the switch itself. Generally, if a switch claims to be "enhanced for iSCSI," it will perform well with an iSCSI load.

Either way, your storage network should mirror the main network and be as redundant as possible: redundant switches and redundant links from the servers (whether FC HBAs, standard Ethernet ports, or iSCSI accelerators). Servers do not appreciate having their storage suddenly disappear, so redundancy here is at least as important as it is for the network at large.

Going virtual
Speaking of storage networking, you're going to need some form of it if you plan on running enterprise-level virtualization. The ability of virtualization hosts to migrate virtual servers across a virtualization farm absolutely requires stable and fast central storage. This can be FC, iSCSI, or even NFS in most cases, but the key is that all the host servers can access a reliable central storage network.

Networking virtualization hosts isn't like networking a normal server, however. While a server might have a front-end and a back-end link, a virtualization host might have six or more Ethernet interfaces. One reason is performance: A virtualization host pushes more traffic than a normal server due to the simple fact that as many as dozens of virtual machines are running on a single host. The other reason is redundancy: With so many VMs on one physical machine, you don't want one failed NIC to take a whole bunch of virtual servers offline at once.

To combat this problem, virtualization hosts should be constructed with at least two dedicated front-end links, two back-end links, and, ideally, a single management link. If this infrastructure will service hosts that live in semi-secure networks (such as a DMZ), then it may be reasonable to add physical links for those networks as well, unless you're comfortable passing semi-trusted packets through the core as a VLAN. Physical separation is still the safest bet and less prone to human error. If you can physically separate that traffic by adding interfaces to the virtualization hosts, then do so.

Each pair of interfaces should be bonded using some form of link aggregation, such as static bonding or the Link Aggregation Control Protocol (LACP), defined in 802.3ad. Either should suffice, though your switch may support only one form or the other. Bonding these links establishes load balancing as well as failover protection at the link level and is an absolute requirement, especially since you'd be hard-pressed to find a switch that doesn't support it.

In addition to bonding these links, the front-end bundle should be trunked with 802.1q. This allows multiple VLANs to exist on a single logical interface and makes deploying and managing virtualization farms significantly simpler. You can then deploy virtual servers on any VLAN or mix of VLANs on any host without worrying about virtual interface configuration. You also don't need to add physical interfaces to the hosts just to connect to a different VLAN.

The virtualization host storage links don't necessarily need to be either bonded or trunked unless your virtual servers will be communicating with a variety of back-end storage arrays. In most cases, a single storage array will be used, and bonding these interfaces will not necessarily result in performance improvements on a per-server basis. However, if you require significant back-end server-to-server communication, such as between front-end Web servers and back-end database servers, it's advisable to dedicate that traffic to a specific set of bonded links. They will likely not need to be trunked, but bonding those links will again provide load balancing and redundancy on a host-by-host basis.

While a dedicated management interface isn't truly a requirement, it can certainly make managing virtualization hosts far simpler, especially when modifying network parameters. Modifying links that also carry the management traffic can easily result in a loss of communication to the virtualization host.

So if you're keeping count, you can see how you might have seven or more interfaces in a busy virtualization host. Obviously, this increases the number of switch ports required for a virtualization implementation, so plan accordingly. The increasing popularity of 10G networking – and the dropping cost of 10G interfaces – may enable you to drastically reduce the cabling requirements so that you can simply use a pair of trunked and bonded 10G interfaces per host with a management interface. If you can afford it, do it. •
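The bonding described above typically load-balances per flow rather than per packet: the switch or host hashes a frame's address fields to pick one member link, so a flow's frames stay in order, but any single flow is capped at one member's bandwidth (which is why bonding doesn't necessarily speed up a single server-to-server stream). A minimal sketch of that selection logic; the hash fields, addresses and digest choice are illustrative, not any vendor's exact algorithm:

```python
import hashlib

def select_member_link(src_mac: str, dst_mac: str,
                       src_ip: str, dst_ip: str,
                       num_links: int) -> int:
    """Pick a member link of a bonded group by hashing flow fields.

    Real gear uses simple XOR/CRC hashes over L2-L4 headers; a digest
    stands in for that here. All packets of one flow map to the same
    link, preserving ordering but capping a flow at one link's speed.
    """
    key = f"{src_mac}|{dst_mac}|{src_ip}|{dst_ip}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Four bonded 1Gbit links: every frame of this flow lands on the
# same one of links 0-3, deterministically.
link = select_member_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                          "10.0.0.5", "10.0.1.9", 4)
```

Many different flows spread across all four links in aggregate, which is where the load-balancing benefit comes from.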
Four Trends Shape the New Data Center

net boards, you'd still only be able to support 24 VMs. "The nice thing about I/O virtualization is that everything shares the one InfiniBand or 10G Ethernet connection as lots of 1G pipes."

At Wholesale Electric, Fife is using Xsigo's virtual I/O Director to decouple processing, storage and I/O. "By doing so we've essentially built our own cloud, because we can assign processor, RAM, disk and I/O on an as-needed basis and then, when they're no longer needed, get rid of it all and do something else," he says. "There are no rigid guidelines within which we have to operate. We can be extremely flexible."

TREND NO. 2:
Data and storage convergence
Today's data centers typically have distinct data and storage networks, and nobody much likes that situation. "As soon as people can recombine those two networks, that's what they're going to do," says Joel Snyder, senior partner with consulting firm Opus One and another member of the Network World Lab Alliance.

"My belief and, yes, hope is that we'll get rid of pure Fibre Channel and go to Fibre Channel over Ethernet [FCoE] – but I still see people buying a lot of Fibre Channel because they're told it's the way to go, even though our tests actually show that the network often isn't the bottleneck," he says. "What you can do with Fibre Channel you can do with 10G Ethernet and get equivalent or better performance, even if that's not the belief of SAN buyers and vendors."

These are early days for FCoE, but plenty of folks are looking at the technology, says David Newman, president of Network Test, an independent test firm, and Network World Lab Alliance member. If nothing more, the technology has cost in its favor, he says.

"Besides the capital cost of the equipment, there's the operational expense issue. People who run plain old Ethernet cost less than people who know Fibre Channel," Newman says. "On economic grounds, it'll be cheaper to provision FCoE than running separate infrastructures."

Today, Brocade and Cisco have FCoE-capable switches that fully support all the prioritizations and new mechanisms on Ethernet for delivering Fibre Channel-like service levels, and other vendors are coming into the fray as well. So building a working, end-to-end FCoE network that handles data and storage is possible today – at least using the same vendor's gear, Newman says. Interoperability is unproven as yet.

Scott Engel, director of IT infrastructure at Transplace, a third-party logistics provider in Dallas, identifies FCoE as one of the two biggest networking and infrastructure changes coming to the company's data center over the next year. The other is 10G to the servers, he says.

Indeed, Newman says, the real tipping point in the data center will happen over the next 12 to 18 months, when 10G replaces 1G Ethernet on server motherboards. "That'll have all sorts of follow-on effects; enabling data-storage convergence is just one," he says.

Watch for this year to be the first with "appreciable numbers" of 40G switch ports shipping, Newman says. Fatter network pipes will be needed to accommodate the higher-speed server connections.

TREND NO. 3:
Faster processors, greater consolidation
By now, most enterprises have server consolidation stories to share, spun around a virtualization theme. They tell of impressive physical-to-virtual server ratios, often in the double digits. But consolidation in the data center is just
IT'S THE INSIDE THAT MATTERS
"We're not talking about the container itself, but the concept, being able to say 'I need eight racks of servers, four racks of storage, a rack and half of networking, and here's the power and cooling it will consume,' and optimize that way."
—Doug Oathout, vice president of converged infrastructure, HP
beginning, some say.

The maturity and comfort levels around virtualization are growing, which means enterprises are showing the willingness to put more and more VMs on a single system, says Steve Sibley, an IBM Power Systems manager. Within the year, he adds, the Power 750 will support up to 320 VMs on a single server, the Power 770 and 780 up to 640 VMs, with plans for up to 1,000 VMs.

The ability to support higher numbers of VMs per physical server comes on the back of faster processors, of course. In IBM's case, the company recently introduced the Power7, an eight-core chip that delivers four times the virtualization capability, scalability and performance of its predecessor, Sibley says. The high-end Power7-based Power 780 and 770 servers will come with up to 64 Power7 cores, for example.

Intel, too, is readying an eight-core chip, code-named Nehalem-EX. That chip is expected out by mid-year.

"If you start at the chip level, the ability to deliver more performance per processor core but also pack four times as many cores onto a single chip gives a vast amount of new capacity and capability to put more virtual servers onto a single platform without sacrificing performance or capability of the overall system," Sibley says. "That design point is enabling systems or offerings that give clients the ability to consolidate even more than they used to on a single platform at much cheaper prices than ever before."

TREND NO. 4:
Infrastructure optimization
Will your data center strategy one day include a semi tractor-trailer full of hands-off gear parked in some spot selected for optimal cooling and power supply?

Dan Kusnetzky, vice president of research operations at The 451 Group, says he can imagine so – at least as one potential alternative to building out new or extending existing data centers. "Software routes around failures, and maybe you'd replace that truck with a new one every three years or so," he says.

The data-center-in-a-box concept is one that bears watching, agrees Doug Oathout, vice president of converged infrastructure at HP. Companies already are using data centers like pods or trailers outside their facilities, optimizing server, storage, networking, cooling and power distribution resources for that size container, he says. "Now we see the performance-optimization trend moving inside the data center."

This is not to say the data center is going to turn into a parking lot full of semis. But enterprises that run out of space, electricity, cooling and capacity today can take the container concept and move that type of asset inside the data center, Oathout says. "We're not talking about the
container itself, but the concept, being able to say 'I need eight racks of servers, four racks of storage, a rack and half of networking, and here's the power and cooling it will consume,' and optimize that way."

Piecing together a data center section by section is far less costly than the traditional go-for-broke approach, and delivering power and cooling a section at a time is far more efficient than moving it across a long distance, Oathout says.

"There's so much more waste when you build a data center to the ultimate capacity vs. building it to what it needs to do, so you could almost call this a retrofitting trend," Oathout adds. "I'm going to optimize what I've got, doing it with localized power, cooling and energy for the specific work I want to get done in this environment. Then I take the next step, with multiple pods, instantiations or building blocks within the data center. It's mind-boggling how much more efficient that is compared to building a monolithic data center that has megawatts and 100,000 square feet of space yet is incapable of supporting the equipment you need for your next workload."

Schultz is a freelance IT writer in Chicago. You can reach her at bschultz5824@gmail.com.
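The consolidation ratios the trend stories describe reduce to simple capacity arithmetic: divide the VM estate by the density each host can sustain. A trivial sketch, with an estate size and per-host density chosen purely for illustration (the article's only claim is that double-digit ratios are common):

```python
import math

def hosts_needed(total_vms: int, vms_per_host: int) -> int:
    """Physical hosts required to run a VM estate at a given density.

    Rounds up: a partially filled host is still a whole host.
    """
    return math.ceil(total_vms / vms_per_host)

# A double-digit consolidation ratio of 20:1 shrinks a hypothetical
# 500-server estate to 25 physical machines.
required = hosts_needed(500, 20)
```

Note the ceiling: at 20 VMs per host, 501 VMs needs 26 hosts, not 25, which matters when sizing switch ports for all those extra interfaces.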
Emerging IEEE Ethernet Standards Could Soothe Data Center Headaches

and external networks. This would alleviate the need for virtual switches on blade servers to store and process every feature – such as security, policy and access control lists (ACLs) – resident on the external data center switch.

2010. Specifically, bg addresses edge virtual bridging: an environment where a physical end station contains multiple virtual end stations participating in a bridged LAN.

VEPA allows an external bridge – or switch – to perform inter-VM hairpin forwarding of frames, something standard 802.1Q bridges or switches are not designed to do.
OF LIKE MINDS
Cisco and HP are leading proponents of the IEEE effort despite the fact that Cisco is charging hard into HP's traditional server territory while HP is ramping up its networking efforts ....
"On a bridge, if the port it needs to send a frame on is the same it came in on, normally a switch will drop that packet," says Paul Congdon, CTO at HP ProCurve, vice chair of the IEEE 802.1 group and a VEPA author. "But VEPA enables a hairpin mode to allow the frame to be forwarded out the port it came in on. It allows it to turn around and go back."

VEPA does not modify the Ethernet frame format but only the forwarding behavior of switches, Congdon says. But VEPA by itself was limited in its capabilities. So HP combined its VEPA proposal with Cisco's VN-Tag proposal for server/switch forwarding, management and administration to support the ability to run multiple virtual switches and multiple VEPAs simultaneously on the endpoint.

This required a channeling scheme for bg, which is based on the VN-Tag specification created by Cisco and VMware to have a policy follow a VM as it moves. This multichannel capability attaches a tag to the frame that identifies which VM the frame came in on.

But another extension was required to allow users to deploy remote switches – instead of those adjacent to the server rack – as the policy-controlling switches for the virtual environment. This is where 802.1Qbh comes in: It allows edge virtual bridges to replicate frames over multiple virtual channels to a group of remote ports. This will enable users to cascade ports for flexible network design, and make more efficient use of bandwidth for multicast, broadcast and unicast frames.

The port extension capability of bh lets administrators choose the switch they want to delegate policies, ACLs, filters, QoS and other parameters to VMs. Port extenders will reside in the back of a blade rack or on individual blades and act as a line card of the controlling switch, says Joe Pelissier, technical lead at Cisco.

"It greatly reduces the number of things you have to manage and simplifies management because the controlling switch is doing all of the work," Pelissier says.

Cisco, HP say they're in sync
What's still missing from bg and bh is a discovery protocol for autoconfiguration, Pelissier says. Some in the 802.1 group are leaning toward using the existing Link Layer Discovery Protocol (LLDP), while others, including Cisco and HP, are inclined to define a new protocol for the task.

"LLDP is limited in the amount of data it can carry and how quickly it can carry that data," Pelissier says. "We need something that carries data in the range of 10s to 100s of kilobytes and is able to send the data faster rather than one 1,500-byte frame a second. LLDP doesn't have fragmentation capability either. We want to have the
capability to split the data among multiple frames."

Cisco and HP are leading proponents of the IEEE effort despite the fact that Cisco is charging hard into HP's traditional server territory while HP is ramping up its networking efforts in an attempt to gain control of data centers that have been turned on their heads by virtualization technology.

Cisco and HP say their VEPA and VN-Tag/multichannel and port extension proposals are complementary despite reports that they are competing techniques to accomplish the same thing: reducing the number of managed data center elements and defining a clear line of demarcation between NIC, server and switch administrators when monitoring VM communications.

"This isn't the battle it's been made out to be," Pelissier says. Though Congdon acknowledges he initially proposed VEPA as an alternative to Cisco's VN-Tag technique, the two together present "a nice layered architecture that builds upon one another, where virtual switches and VEPA form the lowest layer of implementation, and you can move all the way to more complex solutions such as Cisco's VN-Tag."

And the proposals seem to have broad industry support. "We do believe this is the right way to go," says Dhritiman Dasgupta, senior manager of data center marketing at Juniper. "This is putting networking where it belongs, which is on networking devices. The network needs to know what's going on." •
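Congdon's hairpin description earlier reduces to a single forwarding rule: a standard bridge never sends a frame back out the port it arrived on, while a VEPA-mode port is allowed to reflect it, which is what lets two VMs behind one uplink be switched externally. A toy model of that decision; the MAC table and port numbers are illustrative only:

```python
def forward_ports(ingress_port: int, dst_mac: str,
                  mac_table: dict[str, int],
                  hairpin: bool = False) -> list[int]:
    """Return egress ports for a frame, with optional VEPA hairpin mode.

    In standard 802.1Q behavior, a frame whose learned destination
    sits on its own ingress port is dropped; hairpin mode reflects
    it back out, so an external switch can apply its policy, ACLs
    and monitoring to inter-VM traffic.
    """
    egress = mac_table.get(dst_mac)
    if egress is None:
        # Unknown destination: flood to all ports except the ingress port.
        return [p for p in set(mac_table.values()) if p != ingress_port]
    if egress == ingress_port:
        return [egress] if hairpin else []   # VEPA reflects; a plain bridge drops
    return [egress]

# Two VMs (vm-a, vm-b) share uplink port 7 of the external switch.
table = {"vm-a": 7, "vm-b": 7}
standard = forward_ports(7, "vm-b", table)               # [] - dropped
vepa = forward_ports(7, "vm-b", table, hairpin=True)     # [7] - hairpinned
```

The real VEPA proposal changes only this forwarding behavior, not the frame format, exactly as Congdon notes above.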
10G Ethernet Shakes Net Design to the Core

tation and scale. But the company also supports two-tier architectures should customers demand it.

"We are offering both," says Senior Product Manager Thomas Scheibe. "It boils down to what the customer tries to achieve in the network. Each tier adds another two hops, which adds latency; on the flip side, it comes down to what domain size you want and how big a switch fabric you have in your aggregation layer. If the customer wants

FORK IN THE ROAD
Virtualization, inexpensive 10G links and unified Ethernet switching fabrics are catalyzing a migration from three-tier Layer 3 data center switching architectures to flatter two-tier Layer 2 designs, which subsume the aggregation layer into the access layer. Proponents say this will decrease cost, optimize operational efficiency and simplify management.

like require unique network attributes, according to Nick Lippis, an adviser to network equipment buyers, suppliers and service providers. Network performance has to be non-blocking, highly reliable and faultless, with low and predictable latency for broadcast, multicast and unicast traffic types.

THE OLD SWITCHEROO
New data centers require cut-through switching – which is not a new concept – to significantly reduce or even eliminate buffering within the switch. Cut-through switches can reduce switch-to-switch latency from 15 to 50 microseconds to 2 to 4.
—Robin Layland, principal, Layland Consulting

"New applications are demanding predictable performance and latency," says Jayshree Ullal, CEO of Arista Networks, a privately held maker of low-latency 10G Ethernet top-of-rack switches for the data center. "That's why the legacy three-tier model doesn't work. Most of the switches are 10:1, 50:1 oversubscribed," meaning different applications are contending for limited bandwidth, which can degrade response time.

This oversubscription plays a role in the latency of today's switches in a three-tier data center architecture, which is 50 to 100 microseconds for an application request across the network, Layland says. Cloud and virtualized data center computing with a unified switching fabric requires less than 10 microseconds of latency to function properly, he says.

Part of that requires eliminating the aggregation tier in a data center network, Layland says. But the switches themselves must use less packet buffering and oversubscription, he says.

Most current switches are store-and-forward devices that store data in large buffer queues and then forward it to the destination when it reaches the top of the queue. "The result of all the queues is that it can take 80 microseconds or more to cross a three-tier data center," he says.

New data centers require cut-through switching – which is not a new concept – to significantly reduce or even eliminate buffering within the switch, Layland says. Cut-through switches can reduce switch-to-switch latency from 15 to 50 microseconds to 2 to 4, he says.

Another factor negating the three-tier approach to data center switching is server virtualization. Adding virtualization to blade or rack-mount servers means that the servers themselves take on the role of access switching in the network.

Virtual switching inside servers takes place in a hypervisor, and in other cases the network fabric is stretched to the rack level using fabric extenders. The result is that the access switching layer has been subsumed into the servers themselves, Lippis notes.

"In this model there is no third tier where traffic has to flow to accommodate server-to-server flows; traffic is either switched at access or in the core at less than 10 microseconds," he says.

Because of the increased I/O associated with virtual switching in the server, there is no room for a blocking switch in between the access and the core, says Asaf Somekh, vice president of marketing for Voltaire, a maker of InfiniBand and Ethernet switches for the data center. "It's problematic to have so many layers."

Another requirement of new data center switches is to eliminate the Ethernet spanning tree algorithm, Layland says. Currently all Layer 2 switches determine the best path from one endpoint to another using the spanning tree algorithm.
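In miniature, the spanning tree algorithm's effect can be sketched as pruning a redundant topology down to a loop-free tree. Real STP elects a root bridge and compares path costs exchanged in BPDUs; plain breadth-first search stands in for that election here, and the three-switch topology is hypothetical:

```python
from collections import deque

def spanning_tree_links(links: dict[str, set[str]], root: str) -> set:
    """Return the links a spanning tree keeps, via BFS from the root.

    Links not in the returned set are blocked, leaving exactly one
    active path between any two switches.
    """
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        sw = queue.popleft()
        for neighbor in sorted(links[sw]):   # sorted for determinism
            if neighbor not in seen:
                seen.add(neighbor)
                tree.add(frozenset((sw, neighbor)))
                queue.append(neighbor)
    return tree

# A triangle of switches has three links, but the tree keeps only
# two; the third sits idle until an active link fails.
topo = {"core1": {"core2", "access1"},
        "core2": {"core1", "access1"},
        "access1": {"core1", "core2"}}
active = spanning_tree_links(topo, root="core1")
```

That idle third link is exactly the wasted capacity the multipath fabrics discussed next are meant to reclaim.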
Only one path is active; the other paths through the fabric to the destination are used only if the best path fails. The lossless, low-latency requirements of unified fabrics in virtualized data centers require switches that use multiple paths to get traffic to its destination, Layland says. These switches continually monitor potential congestion points and pick the fastest and best path at the time the packet is being sent. "Spanning tree has worked well since the beginning of Layer 2 networking, but the 'only one path' [approach] is not good enough in a non-queuing and non-discarding world," Layland says.

Finally, cost is a key factor in driving two-tier architectures. Ten Gigabit Ethernet ports are inexpensive – about $500, or twice the price of Gigabit Ethernet ports, yet with 10 times the bandwidth. Virtualization allows fewer servers to process more applications, thereby eliminating the need to acquire more servers. And a unified fabric means a server does not need separate adapters and interfaces for LAN and storage traffic. Combining both on the same network can reduce the number and cost of interface adapters by half, Layland notes. And by eliminating the need for an aggregation layer of switching, there are fewer switches to operate, support, maintain and manage.

"If you have switches with adequate capacity and you've got the right ratio of input ports to trunks, you don't need the aggregation layer," says Joe Skorupa, a Gartner analyst. "What you're doing is adding a lot of complexity and a lot of cost, extra heat and harder troubleshooting for marginal value at best." •
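The contrast between spanning tree's single active path and the congestion-aware multipath forwarding described above can be sketched in a few lines. The topology and utilization figures below are invented for illustration:

```python
# Two equal-cost paths from access switch A to core switch C, keyed by
# hop sequence, with the current utilization of each path's busiest link.
paths = {
    ("A", "B1", "C"): 0.80,
    ("A", "B2", "C"): 0.20,
}

# Spanning tree: one path is unconditionally active (here, the first);
# the alternative is used only if the active path fails outright.
stp_path = next(iter(paths))

# Multipath fabric: re-evaluate congestion and pick the least-loaded
# path at the moment each packet (or flow) is forwarded.
fabric_path = min(paths, key=paths.get)

print(stp_path)     # ('A', 'B1', 'C') -- stays pinned even at 80% load
print(fabric_path)  # ('A', 'B2', 'C') -- routes around the hot spot
```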
Here are some suggestions for "resolutions" as you look forward to the road ahead:

1. Build an understanding of IP videoconferencing.
It's big, it's bad, and it's going to change your life, especially when desktop videoconferencing starts to catch on. Videoconferencing is real-time and requires priority QoS, low latency and many times more bandwidth than VoIP. If you had to shake a few skeletons out of the wiring closet when you rolled out VoIP, you had better be ready for a lot more skeletons. Start by finding out what type of videoconferencing is being used or is planned for your workforce, and figure out how much load this will create on your network before it starts a viral ramp-up.

2. Become more application-aware.
How can you really be in tune with the business or organization you are supporting if you don't know where and how well the really important apps and services are running? And conversely, how can you understand whether the loads your network is carrying are even relevant, or just so much streaming audio keeping remote office workers entertained during the business day? Look to NetFlow (or similar) data or packet-based monitoring tools to give you this perspective.

3. Start tracking user experience.
Even if you love the thrill of firefighting and troubleshooting gnarly performance issues across distributed, n-tier architectures, the greatest satisfaction comes from addressing what drives users to call the help desk – their experience in using (or trying to use) the applications and services that IT provides. User quality-of-experience data can be gained via on-client agents, synthetic traffic generators (whether internally managed or externally subscribed), or by passively monitoring traffic and comparing request/response patterns. Best practices employ a mix of these, but any one is better than none.

4. Think proactive/preventative.
Similar to No. 3, but more broadly speaking: an ounce of problem prevention is worth at least a pound of frantic troubleshooting cure. And there are lots of options here. One of the most effective is to get better change control in place, thus preventing the "oops" moments when the upgrade you roll out breaks something else (or a lot of other things). Others include using service mapping and assessing health and risk on a sustained basis, or using predictive analytics tools to help you sniff out the important early warning signs of pending issues hidden in all of that performance monitoring data you've been collecting.

5. Make friends with the system admins and app support guys.
OK, maybe that's two resolutions, but it's all about getting along better. Unless you are in the minority, your cross-organization working relationships usually look more like a Big Fat Greek Wedding than one big happy family. Take advantage of the fact that you can help to measure IT service delivery in a way that the other guys can't – in context with everything else that is going across the wire – and share that data openly and freely. Many times, network-facing data can be the most effective place to start the triage process when no one else is able to get to the root of a problem.

6. Embrace automation.
With the onslaught of virtualization (a.k.a. "server hide and seek"), mobility (a.k.a. "client hide and seek") and composite Web applications (a.k.a. – you guessed it – "application hide and seek"), you won't be able to keep up with all of the moving parts without automating discovery and upkeep of relationship recognition and modeling. Automation is also available for responding to well-known event scenarios with pre-scripted actions, change management for configuration roll-backs, compliance auditing, and predictive analytics.

7. Figure out how to leverage virtualization.
One of the more interesting evolutions of management technology is the growth in the number of hypervisor platforms in place around your network. What started purely as a computing system concept has rapidly spread to network equipment, so you can now deploy management tools to new places as virtual images or virtual appliances quickly and easily. Keep these in mind when you are trying to work out how to achieve better distribution of management tools and instrumentation. •

Frey is a senior analyst with Enterprise Management Associates.
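The NetFlow suggestion in resolution No. 2 boils down to aggregation: group flow records by application port to see what the network is actually carrying. The record format below is a simplified stand-in for illustration, not a real NetFlow parser:

```python
from collections import Counter

# Simplified flow records: (src, dst, dst_port, bytes). Real NetFlow v5/v9
# records carry more fields, but the aggregation idea is the same.
flows = [
    ("10.0.0.5", "10.1.0.9", 443, 120_000),   # HTTPS
    ("10.0.0.7", "10.1.0.9", 443, 80_000),
    ("10.0.0.5", "10.2.0.3", 1935, 900_000),  # streaming media
    ("10.0.0.8", "10.1.0.4", 25, 4_000),      # SMTP
]

def bytes_by_port(records):
    """Total bytes per destination port -- a crude 'top applications' view."""
    totals = Counter()
    for _src, _dst, port, nbytes in records:
        totals[port] += nbytes
    return totals

top_port, top_bytes = bytes_by_port(flows).most_common(1)[0]
print(top_port, top_bytes)  # 1935 900000 -- streaming dominates the mix
```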
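For resolution No. 3, once response times are collected (by agent, synthetic probe, or passive capture), they must be turned into a user-experience score. One common way to do this – not named in the article, so treat it as an assumed choice – is the Apdex formula: a sample is "satisfied" if under a target threshold T, "tolerating" if under 4T, and "frustrated" otherwise:

```python
def apdex(response_times_s, target_s=0.5):
    """Apdex score = (satisfied + tolerating/2) / total samples, in [0, 1]."""
    satisfied = sum(1 for t in response_times_s if t <= target_s)
    tolerating = sum(1 for t in response_times_s if target_s < t <= 4 * target_s)
    return (satisfied + tolerating / 2) / len(response_times_s)

# Eight samples: six fast, one sluggish, one painfully slow.
samples = [0.2, 0.3, 0.1, 0.4, 0.25, 0.45, 1.2, 6.0]
print(apdex(samples))  # 0.8125 -- six satisfied, one tolerating, one frustrated
```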
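Resolution No. 6's "pre-scripted actions for well-known event scenarios" is, at its core, a mapping from event patterns to runbook actions. A toy dispatcher, with event names and actions invented for illustration:

```python
def restart_service(event):
    """Pre-scripted action: bounce a failed service (simulated)."""
    return f"restarted {event['service']}"

def rollback_config(event):
    """Pre-scripted action: revert a device to its last good config (simulated)."""
    return f"rolled back config on {event['device']}"

# Well-known event scenarios mapped to their pre-scripted responses.
RUNBOOK = {
    "service_down": restart_service,
    "config_change_failed": rollback_config,
}

def handle(event):
    """Dispatch a known event to its scripted action; unknown events escalate."""
    action = RUNBOOK.get(event["type"])
    return action(event) if action else "escalate to operator"

print(handle({"type": "service_down", "service": "dns"}))  # restarted dns
print(handle({"type": "link_flap"}))                       # escalate to operator
```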
White Paper: Freeing Your Network Infrastructure featuring Gartner
The evolution of networks has transformed business while simultaneously creating legacy systems that have become difficult to manage. Learn about the benefits of converged infrastructure and revised policies to take systems into the next generation. Read more >>

Research Report: IDC: ROI of Switched Ethernet Networking
IDC interviewed several organizations to determine their future networking strategies. Learn how they built a foundation to meet network switch demands with an average 5.7-month payback and zero annual maintenance fees. Read more >>

White Paper: Redefining the Economics of Networking
See why Gartner and the network industry have positioned HP as a leader and the fastest-growing enterprise Ethernet LAN networking vendor. HP's commitment to industry standards can help organizations optimize their networks. Learn more >>

White Paper: Interconnecting the Intelligent EDGE
Learn how building upon the ProCurve Adaptive EDGE Architecture – moving intelligence and functionality to the edge of the network and interconnecting all devices with Interconnect Fabric offerings – can help companies effectively establish a secure, mobile, multi-service infrastructure. Minimize investment risk and ensure maximum value, immediately and well into the future. Read more >>

Solution Brief: Innovation Through End-to-End Unified Networking Solutions
Through a broad portfolio of secure unified networking products and solutions, HP helps businesses reduce complexity, enhance business agility, and manage costs. Review this Solution Brief for midsize companies. Read more >>