
HP BladeSystem Networking Reference Architecture

HP Virtual Connect FlexFabric Module and VMware vSphere 4


Technical white paper

Table of contents

Executive Summary
Virtual Connect FlexFabric Module Hardware Overview
Designing an HP FlexFabric Architecture for VMware vSphere
  Designing a Highly Available Network Strategy with Virtual Connect FlexFabric modules and Managed VLANs
  Designing a Highly Available Network Strategy with FlexFabric modules and Pass-through VLANs
Designing a vSphere Network Architecture with the Virtual Connect FlexFabric module
  vNetwork Distributed Switch Design
  Hypervisor Load Balancing Algorithms
HP Virtual Connect and DCC
Appendix A: Virtual Connect Bill of Materials
Appendix B: Terminology cross-reference
Appendix C: Glossary of Terms
For more information

Executive Summary


HP has revolutionized the way IT thinks about networking and server management. With the release of the HP ProLiant BladeSystem Generation 6 servers, along with Virtual Connect Flex-10 Ethernet modules, HP provided a great platform for VMware vSphere. Virtual Connect Flex-10 is the world's first technology to divide and fine-tune 10Gb Ethernet network bandwidth at the server edge. When combined with Virtual Connect, the BladeSystem architecture streamlines the typical change processes for provisioning in the datacenter.

HP has since evolved Virtual Connect Flex-10 to the next level: Virtual Connect FlexFabric modules. By combining the power of ProLiant BladeSystem Generation 7 servers and Virtual Connect, the Virtual Connect FlexFabric module allows customers to consolidate network connections and storage fabrics into a single module. This further reduces infrastructure cost and complexity by eliminating HBA adapters and Fibre Channel modules at the edge.

These servers include virtualization-friendly features such as larger memory capacity, dense population, room for additional mezzanine cards, and 6 - 48 processing cores (with Intel Hyper-Threading technology enabled and AMD Opteron 6100-series). The following ProLiant BL Servers ship standard with the NC551i FlexFabric Adapter:
BL465 G7
BL685 G7

The following ProLiant BL Servers ship with the NC553i FlexFabric Adapter:
BL460 G7
BL490 G7
BL620/680 G7

Additionally, the NC551m provides support for the FlexFabric Adapter in ProLiant BladeSystem G6 servers. NOTE: Please check the latest NC551m QuickSpecs1 for the official server support matrix.

The Virtual Connect FlexFabric module, along with the FlexFabric Adapter, introduces a new Physical Function called the FlexHBA. The FlexHBA, along with the FlexNIC, adds the unique ability to fine-tune each connection to adapt to your virtual server channels and workloads on the fly. The effect of using the Virtual Connect FlexFabric module is a reduction in the number of interconnect modules required to uplink outside of the enclosure, while still maintaining full redundancy across the service console, VMkernel, virtual machine (VM) networks and storage fabrics. This translates to a lower cost infrastructure with fewer management points and cables.

When designing a vSphere network infrastructure with the Virtual Connect FlexFabric module, there are two network architectures customers frequently choose. This document describes how to design a highly available Virtual Connect strategy with:
1 http://h18004.www1.hp.com/products/blades/components/emulex/nc551m/index.html

Virtual Connect Managed VLANs - In this design, we are maximizing the management features of Virtual Connect, while giving customers the flexibility to provide any networking to any host within the Virtual Connect domain. Simply put, this design will not over-provision servers, while keeping the number of uplinks used to a minimum. This helps reduce infrastructure cost and complexity by trunking the necessary VLANs (IP subnets) to the Virtual Connect domain, and minimizing potentially expensive 10Gb uplink ports.

Virtual Connect Pass-through VLANs - This design addresses customer requirements to support a significant number of VLANs for Virtual Machine traffic. The previous design has a limited number of VLANs it can support. While providing similar server profile network connection assignments as the previous design, more uplink ports are required, and VLAN Tunneling must be enabled within the Virtual Connect domain.

Both designs provide a highly available network architecture, and also take into account enclosure-level redundancy and vSphere cluster design. By spreading the cluster scheme across both enclosures, each can provide local HA in case of network and enclosure failure. Finally, this document will provide key design best practices for vSphere 4 network architecture with HP FlexFabric, including:
vDS design for Hypervisor networking
vSwitch and dvPortGroup load balance algorithms

Virtual Connect FlexFabric Module Hardware Overview


The Virtual Connect FlexFabric module is the first Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) solution introduced into the HP BladeSystem portfolio. It provides 24 line-rate ports and Full-Duplex 240Gbps bridging in a single DCB-hop fabric. As shown in Image 1, there are 8 faceplate ports. Ports X1-X4 are SFP+ transceiver slots only, which can accept a 10Gb or 8Gb SFP+ transceiver. Ports X5-X8 are SFP and SFP+ capable, and do not support 8Gb SFP+ transceivers. NOTE: The CX-4 port provided by the Virtual Connect Flex-10 and legacy modules has been deprecated.
Image 1: HP Virtual Connect FlexFabric Module

Ports X7 and X8 are shared with internal Stacking Link ports. If the external port is populated with a transceiver, the internal Stacking Link is disabled.

Important: Even though the Virtual Connect FlexFabric module supports Stacking, stacking only applies to Ethernet traffic. FC uplinks cannot be consolidated, as it is not possible to stack the FC ports, nor provide a multi-hop DCB bridging fabric today.

Designing an HP FlexFabric Architecture for VMware vSphere


In this section, we will discuss two different and viable strategies for customers to choose from. Both provide flexible connectivity for hypervisor environments. We will provide the pros and cons to each approach, and provide you with the general steps to configure the environment.

Designing a Highly Available Network Strategy with Virtual Connect FlexFabric modules and Managed VLANs
In this design, two HP ProLiant c-Class 7000 Enclosures with Virtual Connect FlexFabric modules are stacked to form a single Virtual Connect management domain2. By stacking Virtual Connect FlexFabric modules, customers can realize the following benefits:
Consolidated management control plane
More efficient use of WWID, MAC and Serial Number pools
Greater uplink port flexibility and bandwidth
Profile management across stacked enclosures

Shared Uplink Sets provide administrators the ability to distribute VLANs into discrete and defined Ethernet Networks (vNets). These vNets can then be mapped logically to a Server Profile Network Connection, allowing only the required VLANs to be associated with the specific server NIC port. This also allows customers the flexibility to have various network connections for different physical Operating System instances (e.g., a VMware ESX host and a physical Windows host).

As of the Virtual Connect Firmware 2.30 release, the following Shared Uplink Set rules apply per domain:
320 unique VLANs per Virtual Connect Ethernet module
128 unique VLANs per Shared Uplink Set
28 unique server-mapped VLANs per Server Profile Network Connection

Every VLAN on every uplink counts towards the 320-VLAN limit. If a Shared Uplink Set is comprised of multiple uplinks, each VLAN on that Shared Uplink Set is counted multiple times.
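To make these counting rules concrete, the short Python sketch below checks a proposed domain layout against the three limits. The Shared Uplink Set names, module locations, and VLAN ranges are illustrative assumptions, not values taken from this reference architecture.

```python
# Sketch: validate a proposed layout against the Virtual Connect 2.30 limits
# described above. All names and VLAN ranges are hypothetical examples.

MAX_VLANS_PER_MODULE = 320      # unique VLANs per VC Ethernet module
MAX_VLANS_PER_SUS = 128         # unique VLANs per Shared Uplink Set
MAX_VLANS_PER_CONNECTION = 28   # server-mapped VLANs per profile connection

shared_uplink_sets = {
    "SUS_A": {"module": "Enc0:Bay1", "uplinks": 2, "vlans": set(range(100, 160))},
    "SUS_B": {"module": "Enc1:Bay2", "uplinks": 2, "vlans": set(range(100, 160))},
}
server_connections = {"esx01-nic1": set(range(100, 125))}

def validate():
    for name, sus in shared_uplink_sets.items():
        assert len(sus["vlans"]) <= MAX_VLANS_PER_SUS, f"{name} exceeds 128 VLANs"

    # Every VLAN on every uplink counts, so a SUS with two uplink ports
    # contributes each of its VLANs twice toward the module's 320-VLAN limit.
    per_module = {}
    for sus in shared_uplink_sets.values():
        per_module[sus["module"]] = (per_module.get(sus["module"], 0)
                                     + len(sus["vlans"]) * sus["uplinks"])
    for module, count in per_module.items():
        assert count <= MAX_VLANS_PER_MODULE, f"{module} counts {count} VLANs"

    for nic, vlans in server_connections.items():
        assert len(vlans) <= MAX_VLANS_PER_CONNECTION, f"{nic} maps too many VLANs"

validate()
print("Proposed layout fits within the Virtual Connect 2.30 limits")
```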

2 Only available with Virtual Connect Manager Firmware 2.10 or greater. Please review the Virtual Connect Manager Release Notes for more information regarding domain stacking requirements: http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html

By providing two stacked Enclosures, this design allows not only for Virtual Connect FlexFabric module failure, but also for Enclosure failure. The uplink ports assigned to each Shared Uplink Set (SUS) were vertically offset to allow for horizontal redundancy, as shown in Figure 1-2.

IP-based storage (NFS and/or iSCSI) can be dedicated and segregated by a separate vNet and assigned uplink port. This design approach allows administrators to dedicate a network (physically switched, directly connected, or logical within a Shared Uplink Set) to provide access to IP-based storage arrays.

Directly connecting an IP-based storage array has certain limitations:
Each storage array front-end port will require a unique vNet
Each defined vNet will require separate server network connections
You are limited in the number of IP-based arrays by the number of unassigned uplink ports

Virtual Connect has the capability to create an internal, private network without uplink ports, by using the low latency mid-plane connections to facilitate communication. This vNet can be used for cluster heartbeat networks, or in this case VMotion and/or Fault Tolerance traffic. Traffic will not pass to the upstream switch infrastructure, which eliminates the bandwidth otherwise consumed.
Figure 1-1: Physical VMware vSphere Cluster Design

Figure 1-2 shows the physical cabling. The X5 and X6 Ethernet ports of the FlexFabric module are connecting to a redundant pair of Top of Rack (ToR) switches, using LACP (802.3ad) for link redundancy. The ToR switches can be placed End of Row to save on infrastructure cost. Ports X7 are used for vertical External Stacking Links, while X8 are used for Internal Stacking Links.

As noted in the previous section, Virtual Connect FlexFabric Stacking Links will only carry Ethernet traffic, and do not provide any Fibre Channel stacking options. Thus, ports X1 and X2 from each module are populated with 8Gb SFP+ transceivers, providing 16Gb net FC bandwidth for storage access. Ports X3 and X4 are available to provide additional bandwidth if FC storage traffic is necessary. If additional Ethernet bandwidth is necessary, ports Enc0:Bay2:X5, Enc0:Bay2:X6, Enc1:Bay1:X5, and Enc1:Bay1:X6 can be used for additional Ethernet Networks or Shared Uplink Sets.
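The uplink assignments described above can be captured in a small data structure and sanity-checked, which also clarifies the Enclosure:Bay:Port naming used in this section. The sketch below is an illustrative model of the managed-VLAN cabling plan, not an exported Virtual Connect configuration; placing the active Ethernet Shared Uplink Sets on Enc0:Bay1 and Enc1:Bay2 is an assumption that follows the vertically offset layout described in the text.

```python
# Illustrative model of the managed-VLAN uplink plan described above.
# Key: "Enclosure:Bay" -> {port: role}
port_plan = {
    "Enc0:Bay1": {"X1": "FC", "X2": "FC", "X5": "Ethernet-SUS", "X6": "Ethernet-SUS",
                  "X7": "Stacking-external", "X8": "Stacking-internal"},
    "Enc0:Bay2": {"X1": "FC", "X2": "FC", "X5": "spare-Ethernet", "X6": "spare-Ethernet",
                  "X7": "Stacking-external", "X8": "Stacking-internal"},
    "Enc1:Bay1": {"X1": "FC", "X2": "FC", "X5": "spare-Ethernet", "X6": "spare-Ethernet",
                  "X7": "Stacking-external", "X8": "Stacking-internal"},
    "Enc1:Bay2": {"X1": "FC", "X2": "FC", "X5": "Ethernet-SUS", "X6": "Ethernet-SUS",
                  "X7": "Stacking-external", "X8": "Stacking-internal"},
}

def check(plan):
    # Every module keeps two FC uplinks, so storage access survives a module loss.
    for module, ports in plan.items():
        assert sum(1 for role in ports.values() if role == "FC") == 2, module
    # Active Ethernet uplinks must live on modules in different enclosures.
    eth_modules = {m for m, ports in plan.items() if "Ethernet-SUS" in ports.values()}
    assert len({m.split(":")[0] for m in eth_modules}) == 2, "Ethernet not enclosure-redundant"
    print("FC redundancy per module and enclosure-redundant Ethernet uplinks: OK")

check(port_plan)
```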

Figure 1-2: Physical design

Figure 1-3: Logical design

Designing a Highly Available Network Strategy with FlexFabric modules and Pass-through VLANs
In this design, two HP ProLiant c-Class 7000 Enclosures with Virtual Connect FlexFabric modules are stacked to form a single Virtual Connect management domain3. By stacking Virtual Connect FlexFabric modules, customers can realize the following benefits:
Consolidated management control plane
More efficient use of WWID, MAC and Serial Number pools
Greater uplink port flexibility and bandwidth
Profile management across stacked enclosures

This design does not take into account other physical server instances (e.g., Windows Server). If the design requires support for multiple types of physical OS instances, where the non-hypervisor hosts require access to a specific VLAN, additional uplink ports will be required. This will add additional cost and administrative overhead to the overall design. This design also does not take into account where multiple hypervisor hosts will require different Virtual Machine networking. If there is a prerequisite to support this, additional uplink ports will be necessary to tunnel the specific VLAN(s).

By providing two stacked Enclosures, this design allows not only for Virtual Connect FlexFabric module failure, but also for Enclosure failure. The uplink ports assigned to each vNet were offset to allow for horizontal redundancy. To reduce transceiver and upstream port cost, a 1Gb SFP transceiver would be used to provide Service Console networking.

IP-based storage (NFS and/or iSCSI) can be dedicated and segregated by a separate vNet and assigned uplink port. This design approach allows administrators to dedicate a network (physically switched, directly connected, or logical within a Shared Uplink Set) to provide access to IP-based storage arrays.

Directly connecting an IP-based storage array has certain limitations:
Each storage array front-end port will require a unique vNet
Each defined vNet will require separate server network connections
You are limited in the number of IP-based arrays by the number of unassigned uplink ports

3 Only available with Virtual Connect Manager Firmware 2.10 or greater. Please review the Virtual Connect Manager Release Notes for more information regarding domain stacking requirements: http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html

Virtual Connect has the capability to create an internal, private network without uplink ports, by using the low latency mid-plane connections to facilitate communication. This vNet can be used for cluster heartbeat networks, or in this case VMotion and/or Fault Tolerance traffic. Traffic will not pass to the upstream switch infrastructure, which eliminates the bandwidth otherwise consumed.
Figure 1-11: Physical VMware vSphere Cluster Design

Figure 1-12 shows the physical cabling. The Ethernet ports X3 and X4 of the FlexFabric module are connecting to a redundant pair of Top of Rack (ToR) switches, using LACP (802.3ad) for link redundancy. The ToR switches can be placed End of Row to save on infrastructure cost. Ports X7 are used for vertical External Stacking Links, while X8 are used for Internal Stacking Links.

As noted in the previous section, Virtual Connect FlexFabric Stacking Links will only carry Ethernet traffic, and do not provide any Fibre Channel stacking options. Thus, ports X1 and X2 from each module are populated with 8Gb SFP+ transceivers, providing 16Gb net FC bandwidth for storage access. If additional FC bandwidth is necessary, the transceiver installed in X3 will need to be moved to X6, while an 8Gb SFP+ transceiver would be installed into X3 on all modules.

Important: This design uses X1-X3 for FC connectivity; no additional FC ports can be used.

If additional Ethernet bandwidth is necessary, ports Enc0:Bay2:X3, Enc0:Bay2:X4, Enc1:Bay1:X3, and Enc1:Bay1:X4 can be used for additional Ethernet Networks or Shared Uplink Sets.
Figure 1-12: Physical design


Figure 1-13: Logical design

Designing a vSphere Network Architecture with the Virtual Connect FlexFabric module
With the introduction of VMware vSphere 4, customers are now able to centrally manage the network configuration within the hypervisor. This new feature, called the vNetwork Distributed Switch4 (vDS), allows an administrator to create a centralized, distributed vSwitch. Port Groups are still utilized in this new model, but have a different association to host uplink ports. Host uplink ports are added to Uplink Groups (dvUplinkGroup), where a logical association between the dvUplinkGroup and a PortGroup (dvPortGroup) is formed. A vDS can service any of the vmkernel functions: Service Console, VMotion, IP Storage, and Virtual Machine traffic.

In this chapter, we will outline the overall vDS design. The vDS design will remain the same, regardless of the Virtual Connect design chosen.

vNetwork Distributed Switch Design


Each pair of redundant pNICs will be assigned to dvUplinkGroups, which then are assigned to a specific vDS. This will simplify host network configuration, while providing all of the benefits of a vDS. Table 2-1 and Figure 2-1 show which hypervisor networking function will be assigned to each vDS configuration.
Table 2-1: VMware vDS Configuration

Vmkernel Function    vDS Name         dvPortGroup Name
Service Console      dvs1_mgmt        dvPortGroup1_Mgmt
VMotion              dvs2_vmkernel    dvPortGroup2_vmkernel
VM Networking1       dvs3_vmnet       dvPortGroup3_vmnet100
VM NetworkingN       dvs3_vmnet       dvPortGroupN_vmnetNNN

4 Requires vSphere 4.0 Enterprise Plus licensing
Figure 2-1: Hypervisor Networking Design
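The layout in Table 2-1 can be built in the vSphere Client or scripted against vCenter. Below is a minimal sketch using the pyVmomi Python bindings for the vSphere API, creating one of the switches (dvs2_vmkernel) with its two uplinks and the matching dvPortGroup. The vCenter address, credentials, datacenter lookup, and port count are assumptions for illustration; verify class and method names against the pyVmomi and vSphere API versions in use.

```python
# Minimal sketch: create dvs2_vmkernel and dvPortGroup2_vmkernel with pyVmomi.
# Connection details and object lookups are illustrative assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
datacenter = si.RetrieveContent().rootFolder.childEntity[0]  # assumes first datacenter

# Distributed switch with two named uplinks, one per redundant FlexNIC.
dvs_spec = vim.DistributedVirtualSwitch.CreateSpec()
dvs_spec.configSpec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
dvs_spec.configSpec.name = "dvs2_vmkernel"
uplinks = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy()
uplinks.uplinkPortName = ["dvUplink1", "dvUplink2"]
dvs_spec.configSpec.uplinkPortPolicy = uplinks

task = datacenter.networkFolder.CreateDVS_Task(dvs_spec)
while task.info.state not in ("success", "error"):
    pass  # naive poll; a production script would use a WaitForTask helper

dvs = next(net for net in datacenter.networkFolder.childEntity
           if isinstance(net, vim.DistributedVirtualSwitch) and net.name == "dvs2_vmkernel")

# dvPortGroup carrying the VMotion vmkernel interfaces.
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "dvPortGroup2_vmkernel"
pg_spec.numPorts = 32            # illustrative port count
pg_spec.type = "earlyBinding"    # static binding
dvs.AddDVPortgroup_Task([pg_spec])

Disconnect(si)
```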

VMware Fault Tolerance (FT) could introduce more complexity into the overall design. VMware states that a single 1Gb NIC should be dedicated for FT logging, which would have the potential to starve any pNIC shared with another vmkernel function (i.e. VMotion traffic). FT has not been taken into consideration within this document. Even though FT could be shared with another vmkernel function, if FT is a design requirement, then the overall impact of its inclusion should be examined.

vSphere 4.1 introduces a new feature called Network I/O Control5 (NetIOC), which provides QoS-like capabilities within the vDS. NetIOC can identify the following types of traffic:

5 http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf


Virtual Machine
FT Logging
iSCSI
NFS
Service Console
Management
vMotion

NetIOC can be used to control identified traffic, when multiple types of traffic are sharing the same pNIC. In our design example, FT Logging could share the same vDS as the vmkernel, and NetIOC would be used to control the two types of traffic. With the design example given, there are three options one could choose to incorporate FT Logging:
Table 2-2: VMware Fault Tolerance Options

FT Design Choice: Share with the VMotion network (Rating: ***)
Justification: The design choice to keep VMotion traffic internal to the Enclosure allows the use of low latency links for inter-Enclosure communication. By giving enough bandwidth for VMotion and FT traffic, while defining a NetIOC policy, latency should not be an issue.

FT Design Choice: Non-redundant VMotion and FT networks (Rating: **)
Justification: Dedicate one pNIC for VMotion traffic, and the other for FT logging traffic. Neither network will provide pNIC redundancy.

FT Design Choice: Add additional FlexFabric Adapters and modules (Rating: *)
Justification: This option increases the overall CapEx of the solution, but will provide more bandwidth options.
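As a rough illustration of the first option, the sketch below models how NetIOC-style shares would arbitrate between VMotion and FT Logging when they contend on the same FlexNIC. The share values, the optional limit, and the 10Gb link speed are hypothetical; NetIOC itself is enabled and configured on the vDS in vSphere 4.1, so this code only models the proportional split and does not configure anything.

```python
# Conceptual model of share-based arbitration similar to NetIOC.
# Share values, limits and link speed are hypothetical examples.

LINK_GBPS = 10.0  # FlexNIC bandwidth carrying the dvs2_vmkernel uplink

traffic = {
    "vMotion":    {"shares": 100, "limit_gbps": None},
    "FT Logging": {"shares": 50,  "limit_gbps": 2.0},
}

def allocate(traffic, link_gbps):
    """Split the link proportionally by shares at the point of contention,
    then cap any traffic type that carries an explicit host limit."""
    total = sum(t["shares"] for t in traffic.values())
    result = {}
    for name, t in traffic.items():
        bw = link_gbps * t["shares"] / total
        result[name] = min(bw, t["limit_gbps"]) if t["limit_gbps"] else bw
    return result

for name, gbps in allocate(traffic, LINK_GBPS).items():
    print(f"{name}: roughly {gbps:.1f} Gbps under contention")
```

Real NetIOC also redistributes bandwidth that a capped or idle traffic type is not using; the sketch only shows the proportional split that keeps FT Logging from starving another vmkernel function.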

Hypervisor Load Balancing Algorithms


VMware provides a number of different NIC teaming algorithms, which are outlined in Table 2-3. As the table shows, any of the available algorithms can be used, except IP Hash. IP Hash requires switch assisted load balancing (802.3ad), which Virtual Connect does not support on server downlink ports. HP and VMware recommend using Originating Virtual Port ID, as shown in Table 2-3.
Table 2-3: VMware Load Balancing Algorithms

Name: Originating Virtual Port ID
Algorithm: Choose an uplink based on the virtual port where the traffic entered the virtual switch.
Works with VC: Yes

Name: Source MAC Address
Algorithm: MAC address seen on the vnic port.
Works with VC: Yes

Name: IP Hash
Algorithm: Hash of Source and Destination IPs. Requires switch assisted load balancing, 802.3ad. Virtual Connect does not support 802.3ad on server downlink ports, as 802.3ad is a point-to-point bonding protocol.
Works with VC: No

Name: Explicit Failover
Algorithm: Highest order uplink from the list of Active pNICs.
Works with VC: Yes
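To clarify why the first two algorithms are safe with Virtual Connect, the sketch below shows their essence: a deterministic hash of a per-vNIC value selects exactly one active uplink, so each VM's traffic stays on a single pNIC and no switch-side 802.3ad bond is required. The modulo hash is a simplified assumption, not VMware's exact implementation. IP Hash, by contrast, spreads a single VM's flows across uplinks by source/destination IP pair, which only works when the upstream switch aggregates those uplinks with 802.3ad; Virtual Connect downlinks do not.

```python
# Simplified sketch of vSwitch/vDS uplink selection (not VMware's exact code).

active_uplinks = ["vmnic0", "vmnic1"]  # the two FlexNICs in the dvUplinkGroup

def uplink_by_virtual_port(port_id: int) -> str:
    """Originating Virtual Port ID: the virtual switch port a vNIC connects
    to picks the uplink; no upstream switch cooperation is needed."""
    return active_uplinks[port_id % len(active_uplinks)]

def uplink_by_source_mac(mac: str) -> str:
    """Source MAC Address: a hash of the vNIC MAC picks the uplink."""
    return active_uplinks[int(mac.replace(":", ""), 16) % len(active_uplinks)]

# Two VMs map to uplinks deterministically and stay there until a failover event.
print(uplink_by_virtual_port(17))                 # one of vmnic0/vmnic1
print(uplink_by_source_mac("00:50:56:aa:bb:01"))  # one of vmnic0/vmnic1
```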


HP Virtual Connect and DCC


Virtual Connect firmware v2.30 introduced Device Control Channel (DCC) support to enable Smart Link, Dynamic Bandwidth Allocation, and Network Assignment to FlexNICs without powering off the server. There are three components required for DCC:
NIC Firmware (Bootcode 5.0.11 or newer)
NIC Driver (Windows Server v5.0.32.0 or newer; Linux 5.0.19-1 or newer; VMware ESX 4.0 v1.52.12.v40.3; VMware ESX 4.1 v1.60)
Virtual Connect Firmware (v2.30 or newer)


Appendix A: Virtual Connect Bill of Materials


Table A-1: Virtual Connect FlexFabric module Mapped VLAN BoM

Part number                Description                                            Qty
571956-B21                 HP Virtual Connect FlexFabric Module                   4
AJ716A                     HP StorageWorks 8Gb B-series SW SFP+                   8
487649-B21                 .5m 10Gb SFP+ DAC Stacking Cable                       2
455883-B21 or 487655-B21   10Gb SR SFP+ transceiver or 3m SFP+ 10Gb Copper DAC    4

Table A-2: Virtual Connect FlexFabric module Tunneled VLAN BoM

Part number                Description                                            Qty
571956-B21                 HP Virtual Connect FlexFabric Ethernet Module          4
487649-B21                 .5m 10Gb SFP+ DAC Stacking Cable                       2
AJ716A                     HP StorageWorks 8Gb B-series SW SFP+                   8
453154-B21                 1Gb RJ45 SFP transceiver                               2
455883-B21 or 487655-B21   10Gb SR SFP+ transceiver or 3m SFP+ 10Gb Copper DAC    4


Appendix B: Terminology cross-reference


Table B-1: Terminology cross-reference

Customer term: Port Bonding or Virtual Port
Industry term: Port Aggregation or Port-trunking
IEEE term: 802.3ad LACP
Cisco term: EtherChannel or channeling (PAgP)
Nortel term: MultiLink Trunking (MLT)
HP Virtual Connect term: LACP

Customer term: VLAN Tagging
Industry term: VLAN Trunking
IEEE term: 802.1Q
Cisco term: Trunking
Nortel term: 802.1Q
HP Virtual Connect term: Shared Uplink Set


Appendix C: Glossary of Terms


Table C-1: Glossary

vNet/Virtual Connect Ethernet Network: A standard Ethernet Network consists of a single broadcast domain. However, when VLAN Tunnelling is enabled within the Ethernet Network, VC will treat it as an 802.1Q Trunk port, and all frames will be forwarded to the destined host untouched.

Shared Uplink Set (SUS): An uplink port or a group of uplink ports, where the upstream switch port(s) is configured as an 802.1Q trunk. Each associated Virtual Connect Network within the SUS is mapped to a specific VLAN on the external connection, where VLAN tags are removed or added as Ethernet frames enter or leave the Virtual Connect domain.

Auto Port Speed**: Let VC automatically determine the best FlexNIC speed.

Custom Port Speed**: Manually set the FlexNIC speed (up to the Maximum value defined).

DCC**: Device Control Channel: a method for VC to change either FlexNIC or FlexHBA Adapter port settings on the fly (without power off/on).

EtherChannel*: A Cisco proprietary technology that combines multiple NIC or switch ports for greater bandwidth, load balancing, and redundancy. The technology allows for bi-directional aggregated network traffic flow.

FlexNIC**: One of four virtual NIC partitions available per FlexFabric Adapter port. Each is capable of being tuned from 100Mb to 10Gb.

FlexHBA***: The second Physical Function, providing an HBA for either Fibre Channel or iSCSI functions.

IEEE 802.1Q: An industry standard protocol that enables multiple virtual networks to run on a single link/port in a secure fashion through the use of VLAN tagging.

IEEE 802.3ad: An industry standard protocol that allows multiple links/ports to run in parallel, providing a virtual single link/port. The protocol provides greater bandwidth, load balancing, and redundancy.

LACP: Link Aggregation Control Protocol (see IEEE 802.3ad).

LOM: LAN-on-Motherboard. Embedded network adapter on the system board.

Maximum Link Connection Speed**: Maximum FlexNIC speed value assigned to a vNet by the network administrator. Cannot be manually overridden on the server profile.

Multiple Networks Link Speed Settings**: Global Preferred and Maximum FlexNIC speed values that override defined vNet values when multiple vNets are assigned to the same FlexNIC.

MZ1 or MEZZ1; LOM: Mezzanine Slot 1; LAN-on-Motherboard/system board NIC.

Network Teaming Software: Software that runs on a host, allowing multiple network interface ports to be combined to act as a single virtual port. The software provides greater bandwidth, load balancing, and redundancy.

pNIC: Physical NIC port. A FlexNIC is seen by VMware as a pNIC.

Port Aggregation: Combining ports to provide one or more of the following benefits: greater bandwidth, load balancing, and redundancy.

Port Aggregation Protocol (PAgP)*: A Cisco proprietary protocol that aids in the automatic creation of Fast EtherChannel links. PAgP packets are sent between Fast EtherChannel-capable ports to negotiate the forming of a channel.

Port Bonding: A term typically used in the Unix/Linux world that is synonymous with NIC teaming in the Windows world.

Preferred Link Connection Speed**: Preferred FlexNIC speed value assigned to a vNet by the network administrator.

Trunking (Cisco): 802.1Q VLAN tagging.

Trunking (Industry): Combining ports to provide one or more of the following benefits: greater bandwidth, load balancing, and redundancy. See also Port Aggregation.

VLAN: A virtual network within a physical network.

VLAN Tagging: Tagging/marking an Ethernet frame with an identity number representing a virtual network.

VLAN Trunking Protocol (VTP)*: A Cisco proprietary protocol used for configuring and administering VLANs on Cisco network devices.

vNIC: Virtual NIC port. A software-based NIC used by VMs.

*The feature is not supported by Virtual Connect.
**The feature was added for Virtual Connect Flex-10.
***The feature was added for Virtual Connect FlexFabric modules.


For more information


To read more about the Virtual Connect FlexFabric module, go to www.hp.com/go/virtualconnect.

© Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

c02499726, August 2010
