
Top of Rack vs End of Row Data Center Designs

Apr 5, 2009 - Brad Hedlund

This article provides a close examination and comparison of two popular data center physical designs: Top of Rack and End of Row. We will also explore a new alternative design using Fabric Extenders, and finish off with a quick look at how Cisco Unified Computing might fit into this picture. Let's get started!

Top of Rack Design

In the Top of Rack design, servers connect to one or two Ethernet switches installed inside the rack. The term "top of rack" has been coined for this design; however, the actual physical location of the switch does not necessarily need to be at the top of the rack. Other switch locations could be bottom of the rack or middle of the rack, but top of the rack is most common due to easier accessibility and cleaner cable management. This design may also sometimes be referred to as "In-Rack". The Ethernet top of rack switch is typically low profile (1RU-2RU) and fixed configuration. The key characteristic and appeal of the Top of Rack design is that all copper cabling for servers stays within the rack as relatively short RJ45 patch cables from the server to the rack switch. The Ethernet switch links the rack to the data center network with fiber running directly from the rack to a common aggregation area connecting to redundant Distribution or Aggregation high density modular Ethernet switches.
Each rack is connected to the data center with fiber. Therefore, there is no need for a bulky and expensive infrastructure of copper cabling running between racks and throughout the data center. Large amounts of copper cabling place an additional burden on data center facilities, as bulky copper cable can be difficult to route, can obstruct air flow, and generally requires more racks and infrastructure dedicated to just patching and cable management. Long runs of twisted pair copper cabling can also place limitations on server access speeds and network technology. The Top of Rack data center design avoids these issues, as there is no need for a large copper cabling infrastructure. This is often the key factor why a Top of Rack design is selected over End of Row.
Each rack can be treated and managed like an individual and modular unit within the data center. It is very easy to change out or upgrade the server access technology rack by rack. Any network upgrades or issues with the rack switches will generally only affect the servers within that rack, not an entire row of servers. Given that the server connects with very short copper cables within the rack, there are more flexibility and options in terms of what that cable is and how fast of a connection it can support. For example, a 10GBASE-CX1 copper cable could be used to provide a low cost, low power, 10 gigabit server connection. The 10GBASE-CX1 cable supports distances of up to 7 meters, which works fine for a Top of Rack design.
Fiber to each rack provides much better flexibility and investment protection than copper because of the unique ability of fiber to carry higher bandwidth signals at longer distances. Future transitions to 40 gigabit and 100 gigabit network connectivity will be easily supported on a fiber infrastructure. Given the current power challenges of 10 Gigabit over twisted pair copper (10GBASE-T), any future support of 40 or 100 Gigabit on twisted pair will likely have very short distance limitations (in-rack distances). This too is another key factor why Top of Rack would be selected over End of Row.

The adoption of blade servers with integrated switch modules has made fiber connected racks more popular by moving the Top of Rack concept inside the blade enclosure itself. A blade server enclosure may contain 2, 4, or more Ethernet switching modules and multiple FC switches, resulting in an increasing number of switches to manage.
One significant drawback of the Top of Rack design is the increased management domain, with each rack switch being a unique control plane instance that must be managed. In a large data center with many racks, a Top of Rack design can quickly become a management burden by adding many switches to the data center that are each individually managed. For example, in a data center with 40 racks, where each rack contained (2) Top of Rack switches, the result would be 80 switches on the floor just providing server access connections (not counting distribution and core switches). That is 80 copies of switch software that need to be updated, 80 configuration files that need to be created and archived, 80 different switches participating in the Layer 2 spanning tree topology, 80 different places a configuration can go wrong. When a Top of Rack switch fails, the individual replacing the switch needs to know how to properly access and replace the archived configuration of the failed switch (assuming it was correctly and recently archived). The individual may also be required to perform some verification testing and troubleshooting. This requires a higher skill set individual who may not always be available (or if so comes at a high price), especially in a remotely hosted lights-out facility.
The top of rack design typically also requires higher port densities in the Aggregation switches.

Going back to the 80 switch example, with each switch having a single connection to each redundant Aggregation switch, each Aggregation switch requires 80 ports. The more ports you have in the aggregation switches, the more likely you are to face potential scalability constraints. One of these constraints might be, for example, STP Logical Ports, which is a product of aggregation ports and VLANs. For example, if I needed to support 100 VLANs in a single L2 domain with PVST on all 80 ports of the aggregation switches, that would result in 8,000 STP Logical Ports per aggregation switch. Most robust modular switches can handle this number. For example, the Catalyst 6500 supports 10,000 PVST instances in total, and 1,800 per line card. And the Nexus 7000 supports 16,000 PVST instances globally with no per line card restrictions. Nonetheless, this is something that will need to be paid attention to as the data center grows in numbers of ports and VLANs. Another possible scalability constraint is raw physical ports: does the aggregation switch have enough capacity to support all of the top of rack switches? What about support for 10 Gigabit connections to each top of rack switch? How well does the aggregation switch scale in 10 gigabit ports?
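The STP logical port math above is simple multiplication, and it can be worth scripting when sizing an aggregation layer. The sketch below reuses the article's example numbers; the function name and the limits dictionary are just illustrative (the quoted platform limits should be verified against current release notes before being relied upon).

```python
# Back-of-the-envelope check of STP logical ports at the aggregation layer.
# With PVST, every VLAN runs its own spanning tree instance on each trunk
# port, so logical ports = trunk ports x VLANs.

def stp_logical_ports(trunk_ports: int, vlans: int) -> int:
    """Logical ports consumed on one switch by PVST across its trunks."""
    return trunk_ports * vlans

agg_ports = 80   # one uplink from each of the 80 Top of Rack switches
vlans = 100      # VLANs carried on every trunk

load = stp_logical_ports(agg_ports, vlans)
print(load)  # 8000 logical ports per aggregation switch

# Compare against the platform totals quoted in the article
limits = {"Catalyst 6500": 10_000, "Nexus 7000": 16_000}
for platform, limit in limits.items():
    print(f"{platform}: {load / limit:.0%} of total PVST capacity")
```

At 8,000 of 10,000 instances, the example design is already at 80% of the Catalyst 6500's quoted limit, which is why the article flags this as something to watch as ports and VLANs grow.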
Summary of Top of Rack advantages (Pros):

- Copper stays "In Rack". No large copper cabling infrastructure required.
- Lower cabling costs. Less infrastructure dedicated to cabling and patching. Cleaner cable management.
- Modular and flexible "per rack" architecture. Easy "per rack" upgrades/changes.
- Future proofed fiber infrastructure, sustaining transitions to 40G and 100G.
- Short copper cabling to servers allows for low power, low cost 10GE (10GBASE-CX1), 40G in the future.
- Ready for Unified Fabric today.

Summary of Top of Rack disadvantages (Cons):

- More switches to manage. More ports required in the aggregation.
- Potential scalability concerns (STP Logical ports, aggregation switch density).
- More Layer 2 server-to-server traffic in the aggregation.
- Racks connected at Layer 2. More STP instances to manage.
- Unique control plane per 48 ports (per switch); higher skill set needed for switch replacement.

End of Row Design

Server cabinets (or racks) are typically lined up side by side in a row. Each row might contain, for example, 12 server cabinets. The term "End of Row" was coined to describe a rack or cabinet placed at either end of the server row for the purpose of providing network connectivity to the servers within that row. Each server cabinet in this design has a bundle of twisted pair copper cabling (typically Category 6 or 6A) containing as many as 48 (or more) individual cables routed to the End of Row. The End of Row network racks may not necessarily be located at the end of each actual row. There may be designs where a handful of network racks are placed in a small row of their own, collectively providing End of Row copper connectivity to more than one row of servers.
For a redundant design there might be two bundles of copper to each rack, each running to opposite End of Row network racks. Within the server cabinet the bundle of copper is typically wired to one or more patch panels fixed to the top of the cabinet. The individual servers use a relatively short RJ45 copper patch cable to connect from the server to the patch panel in the rack. The bundle of copper from each rack can be routed through overhead cable troughs or ladder racks that carry the dense copper bundles to the End of Row network racks. Copper bundles can also be routed underneath a raised floor, at the expense of obstructing cool air flow. Depending on how much copper is required, it is common to have a rack dedicated to patching all of the copper cable adjacent to the rack that contains the End of Row network switch. Therefore, there might be two network racks at each end of the row: one for patching, and one for the network switch itself. Again, an RJ45 patch cable is used to link a port on the network switch to a corresponding patch panel port that establishes the link to the server. The large quantity of RJ45 patch cables at the End of Row can cause a cable management problem, and without careful planning can quickly result in an ugly unmanageable mess.
Another variation of this design can be referred to as "Middle of Row", which involves routing the copper cable from each server rack to a pair of racks positioned next to each other in the middle of the row. This approach reduces the extreme cable lengths from the far end server cabinets; however, it potentially exposes the entire row to a localized disaster at the Middle of Row (such as leaking water from the ceiling) that might disrupt both server access switches at the same time.

The End of Row network switch is typically a modular chassis based platform that supports hundreds of server connections. Typically there are redundant supervisor engines, redundant power supplies, and overall better high availability characteristics than typically found in a Top of Rack switch. The modular End of Row switch is expected to have a longer lifespan of at least 5 to 7 years (or even longer). It is uncommon for the end of row switch to be frequently replaced; once it's in, it's in, and any further upgrades are usually component level upgrades such as new line cards or supervisor engines.
The End of Row switch provides connectivity to the hundreds of servers within that row. Therefore, unlike Top of Rack where each rack is its own managed unit, with End of Row the entire row of servers is treated like one holistic unit or "Pod" within the data center. Network upgrades or issues at the End of Row switch can be service impacting to the entire row of servers. The data center network in this design is managed per row, rather than per rack.

A Top of Rack design extends the Layer 2 topology from the aggregation switch to each individual rack, resulting in an overall larger Layer 2 footprint and consequently a larger Spanning Tree topology. The End of Row design, on the other hand, extends a Layer 1 cabling topology from the End of Row switch to each rack, resulting in a smaller and more manageable Layer 2 footprint and fewer STP nodes in the topology.
End of Row is a per row management model in terms of the data center cabling. Furthermore, End of Row is also per row in terms of the network management model. Given there are usually two modular switches per row of servers, the result is far fewer switches to manage when compared to a Top of Rack design. In my previous example of 40 racks, let's say there are 10 racks per row, which would be 4 rows, each with two End of Row switches. The result is 8 switches to manage, rather than 80 in the Top of Rack design. As you can see, the End of Row design typically carries an order of magnitude advantage over Top of Rack in terms of the number of individual switches requiring management. This is often a key factor why the End of Row design is selected over Top of Rack.
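The switch-count comparison above is worth making explicit, since it is the core management argument for End of Row. This short sketch just reproduces the article's 40-rack example arithmetic; the variable names are my own:

```python
# Management domain comparison for the article's 40-rack example:
# per-rack ToR switching vs per-row EoR switching.

racks = 40
tor_switches_per_rack = 2   # redundant ToR pair in every rack
racks_per_row = 10
eor_switches_per_row = 2    # redundant modular chassis per row

tor_total = racks * tor_switches_per_rack   # switches to manage with ToR
rows = racks // racks_per_row               # number of server rows
eor_total = rows * eor_switches_per_row     # switches to manage with EoR

print(tor_total, eor_total)  # 80 vs 8 -- an order of magnitude fewer
```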
While End of Row has far fewer switches in the infrastructure, this doesn't necessarily equate to far lower capital costs for networking. For example, the cost of a 48 port line card in a modular end of row switch can be only slightly less in price (if not similar) to an equivalent 48 port Top of Rack switch. However, maintenance contract costs are typically less with End of Row due to the far fewer number of individual switches carrying maintenance contracts.
As was stated in the Top of Rack discussion, the large quantity of dense copper cabling required with End of Row is typically expensive to install, bulky, restrictive to air flow, and brings its share of cable management headaches. The lengthy twisted pair copper cable poses a challenge for adopting higher speed server network I/O. For example, a 10 gigabit server connection over twisted pair copper cable (10GBASE-T) is challenging today due to the power requirements of the 10GBASE-T silicon currently available (6-8W per end). As a result there is also scarce availability of dense and cost effective 10GBASE-T network switch ports. As the adoption of dense compute platforms and virtualization quickly accelerates, servers limited to 1GE network I/O connections will pose a challenge in obtaining the wider scale consolidation and virtualization that modern servers are capable of. Furthermore, adopting a unified fabric will also have to wait until 10GBASE-T unified fabric switch ports and CNAs are available (not expected until late 2010).
10GBASE-T silicon will eventually (over the next 24 months) reach lower power levels, and switch vendors (such as Cisco) will have dense 10GBASE-T line cards for modular switches (such as the Nexus 7000). Server manufacturers will also start shipping triple speed (100/1000/10G) 10GBASE-T LOMs (LAN on Motherboard), and NIC/HBA vendors will have unified fabric CNAs with 10GBASE-T ports. All of this is expected to work on existing Category 6A copper cable. All bets are off, however, for 40G and beyond.
Summary of End of Row advantages (Pros):

- Fewer switches to manage. Potentially lower switch costs, lower maintenance costs.
- Fewer ports required in the aggregation.
- Racks connected at Layer 1. Fewer STP instances to manage (per row, rather than per rack).
- Longer life, high availability, modular platform for server access.
- Unique control plane per hundreds of ports (per modular switch); lower skill set required to replace a 48 port line card, versus replacing a 48 port switch.

Summary of End of Row disadvantages (Cons):

- Requires an expensive, bulky, rigid copper cabling infrastructure. Fraught with cable management challenges.
- More infrastructure required for patching and cable management.
- Long twisted pair copper cabling limits the adoption of lower power, higher speed server I/O.
- More "future challenged" than future proofed.
- Less flexible per row architecture. Platform upgrades/changes affect the entire row.
- Unified Fabric not a reality until late 2010.

Top of Rack Fabric Extender

The fabric extender is a new data center design concept that allows for the Top of Rack placement of server access ports as a Layer 1 extension of an upstream master switch. Much like a line card in a modular switch, the fabric extender is a data plane only device that receives all of its control plane intelligence from its master switch. The relationship between a fabric extender and its master switch is similar to the relationship between a line card and its supervisor engine, only now the fabric extender can be connected to its master switch (supervisor engine) with remote fiber connections. This allows you to effectively decouple the line cards of the modular End of Row switch and spread them throughout the data center (at the top of the rack), all without losing the management model of a single End of Row switch. The master switch and all of its remotely connected fabric extenders are managed as one switch. Each fabric extender is simply providing a remote extension of ports (acting like a remote line card) to the single master switch.
Unlike a traditional Top of Rack switch, the top of rack fabric extender is not an individually managed switch. There is no configuration file, no IP address, and no software that needs to be managed for each fabric extender. Furthermore, there is no Layer 2 topology from the fabric extender to its master switch; rather, it's all Layer 1. Consequently, there is no Spanning Tree topology between the master switch and its fabric extenders, much like there is no Spanning Tree topology between a supervisor engine and its line cards. The Layer 2 Spanning Tree topology only exists between the master switch and the upstream aggregation switch it's connected to.
The fabric extender design provides the physical topology of Top of Rack with the logical topology of End of Row, providing the best of both designs. There are far fewer switches to manage (much like End of Row), with no requirement for a large copper cabling infrastructure, and future proofed fiber connectivity to each rack.
There is a cost advantage as well. Given that the fabric extender does not need the CPU, memory, and flash storage to run a control plane, there are fewer components and therefore lower cost. A fabric extender is roughly 33% less expensive than an equivalent Top of Rack switch. When a fabric extender fails there is no configuration file that needs to be retrieved and replaced, no software that needs to be loaded. The failed fabric extender simply needs to be removed and a new one installed in its place, connected to the same cables. The skill set required for the replacement is somebody who knows how to use a screwdriver, can unplug and plug in cables, and can watch a status light turn green. The new fabric extender will receive its configuration and software from the master switch once connected.

In the design above, shown in Figure 6, top of rack fabric extenders use fiber from the rack to connect to their master switch (Nexus 5000) somewhere in the aggregation area. The Nexus 5000 links to the Ethernet aggregation switch like any normal End of Row switch.

Note: Up to (12) fabric extenders can be managed by a single master switch (Nexus 5000).
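That 12-extender fan-out translates directly into access ports per point of management. A minimal sketch of the math, assuming 48 server ports per fabric extender (an illustrative figure; actual port counts vary by extender model):

```python
# Fabric extender management model: one master switch (the only control
# plane) fronting many data-plane-only extenders.

fex_per_master = 12   # per the note above: up to 12 FEX per Nexus 5000
ports_per_fex = 48    # assumed server ports per extender (illustrative)

server_ports = fex_per_master * ports_per_fex
print(server_ports)  # 576 server access ports behind one managed switch
```

Compare this with a traditional Top of Rack design, where those same 576 ports would mean 12 individually managed switches.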

In Figure 7 above, the top of rack fabric extenders use fiber running from the rack to an End of Row cabinet containing the master switch. The master switch, in this case a Nexus 5000, can also provide 10GE unified fabric server access connections.

It is more common for fiber to run from the rack to a central aggregation area (as shown in Figure 6). However, the design shown above in Figure 7, where fiber also runs to the end of a row, may start to gain interest with fabric extender deployments as a way to preserve the logical grouping of rows by physically placing the master switch within the row of the fabric extenders linked to it.
Summary of Top of Rack Fabric Extender advantages (Pros):

- Fewer switches to manage. Fewer ports required in the aggregation area. (End of Row)
- Racks connected at Layer 1 via fiber, extending Layer 1 copper to servers in rack. Fewer STP instances to manage. (End of Row)
- Unique control plane per hundreds of ports; lower skill set required for replacement. (End of Row)
- Copper stays "In Rack". No large copper cabling infrastructure required. (Top of Rack)
- Lower cabling costs. Less infrastructure dedicated to cabling and patching. Cleaner cable management. (Top of Rack)
- Modular and flexible "per rack" architecture. Easy "per rack" upgrades/changes. (Top of Rack)
- Future proofed fiber infrastructure, sustaining transitions to 40G and 100G. (Top of Rack)
- Short copper cabling to servers allows for low power, low cost 10GE (10GBASE-CX1), 40G in the future. (Top of Rack)

Summary of Top of Rack Fabric Extender disadvantages (Cons):

- New design concept only available since January 2009. Not a widely deployed design, yet.

Link to learn more about Fabric Extenders.

Cisco Unified Computing Pods

The Cisco Unified Computing solution provides a tightly coupled architecture of blade servers, unified fabric, fabric extenders, and embedded management, all within a single cohesive system. A multi rack deployment is a single system managed by a redundant pair of Top of Rack fabric interconnect switches providing the embedded device level management and provisioning, and linking the pod to the data center aggregation Ethernet and Fibre Channel switches.

Above, a pod of 3 racks makes up one system. Each blade enclosure links with a unified fabric fabric extender to the fabric interconnect switches with 10GBASE-CX1 or USR (ultra short reach) 10GE fiber optics. A single Unified Computing System can contain as many as 40 blade enclosures as one system. With such scalability there could be designs where an entire row of blade enclosures is linked to End of Row or Middle of Row fabric interconnects, as shown below.

These are not the only possible designs, rather just a couple of simple examples. Many more possibilities exist, as the architecture is as flexible as it is scalable.

Summary of Unified Computing System advantages (Pros):

- Leverages the Top of Rack physical design.
- Leverages Fabric Extender technology. Fewer points of management.
- Single system of compute, unified fabric, and embedded management.
- Highly scalable as a single system.
- Optimized for virtualization.
Summary of Unified Computing System disadvantages (Cons):

- Cisco UCS is not available yet. :( Ask your local Cisco representative for more information.

UPDATE: Cisco UCS has been available and shipping to customers since June 2009.

Link to learn more about the Cisco Unified Computing System.

Deploying Data Center designs into Pods

Choosing Top of Rack or End of Row physical designs is not an all or nothing deal. The one thing all of the above designs have in common is that they each link to a common Aggregation area with fiber. The common Aggregation area can therefore service an End of Row pod area no differently than a Top of Rack pod. This allows for flexibility in the design choices made as the data center grows, Pod by Pod. Some pods may employ End of Row copper cabling, while another pod may employ top of rack fiber, with each pod linking to the common aggregation area with fiber.

Conclusion
This article is based on a 30+ slide detailed presentation I developed from scratch for Cisco covering data center physical designs, Top of Rack vs. End of Row. If you would like to see the entire presentation with a one-on-one discussion about your specific environment, please contact your local Cisco representative and ask to see Brad Hedlund's "Top of Rack vs. End of Row data center designs" presentation! What can I say, a shameless attempt at self-promotion.
