
Nexus 1000V Deployment Scenarios

Dan Hersey Steve Tegeler

Cisco Nexus 1000V Components


[Diagram: three ESX servers (Server 1-3) hosting VMs #1-#12, each running a VEM on VMware ESX]

Virtual Ethernet Module (VEM)

- Replaces the existing vSwitch
- Enables advanced switching capability on the hypervisor
- Provides each VM with dedicated switch ports

Virtual Supervisor Module (VSM)

- CLI interface into the Nexus 1000V
- Leverages NX-OS 4.0(1)
- Controls multiple VEMs as a single network device

[Diagram: the VSM, working with VMware Virtual Center, manages the VEMs as one Nexus 1000V]

Cisco Nexus 1000V


Faster VM Deployment

Cisco VN-Link: Virtual Network Link


[Diagram: two ESX servers running the Cisco Nexus 1000V, illustrating Policy-Based VM Connectivity, Mobility of Network & Security Properties, and a Non-Disruptive Operational Model]

Defined Policies (shown in Virtual Center): WEB, Apps, HR, DB, Compliance

VM Connection Policy
- Defined in the network
- Applied in Virtual Center
- Linked to the VM UUID
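On the VSM such a policy is typically expressed as a port profile, which then appears in Virtual Center as a port group the server team assigns to the VM. A minimal sketch, assuming a made-up profile name and VLAN (exact syntax varies by Nexus 1000V release):

n1kv# configure terminal
n1kv(config)# port-profile type vethernet WebApps
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 110
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled

Because the policy lives in the profile rather than on the host, it follows the VM (and its UUID) wherever the VM is connected.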

Cisco Nexus 1000V


Richer Network Services

VN-Link: Virtualizing the Network Domain


[Diagram: VMs moving between two ESX servers under the Cisco Nexus 1000V while keeping Policy-Based VM Connectivity, Mobility of Network & Security Properties, and a Non-Disruptive Operational Model]

VMs Need to Move
- VMotion
- DRS
- SW Upgrade/Patch
- Hardware Failure

VN-Link Property Mobility
- VMotion for the network
- Ensures VM security
- Maintains connection state

Cisco Nexus 1000V


Increase Operational Efficiency

VN-Link: Virtualizing the Network Domain


[Diagram: two ESX servers running the Cisco Nexus 1000V (Policy-Based VM Connectivity, Mobility of Network & Security Properties, Non-Disruptive Operational Model)]

Server Benefits
- Maintains existing VM mgmt
- Reduces deployment time
- Improves scalability
- Reduces operational workload
- Enables VM-level visibility

Network Benefits
- Unifies network mgmt and ops
- Improves operational security
- Enhances VM network features
- Ensures policy persistence
- Enables VM-level visibility

Nexus 1000V Virtual Chassis Model


One Virtual Supervisor Module managing multiple Virtual Ethernet Modules
Dual Supervisors to support HA environments

A single Nexus 1000V can span multiple ESX Clusters

SVS-CP# show module
Mod  Ports  Module-Type                       Model              Status
---  -----  --------------------------------  -----------------  ----------
1    1      Supervisor Module                 Cisco Nexus 1000V  active *
2    1      Supervisor Module                 Cisco Nexus 1000V  standby
3    48     Virtual Ethernet Module                              ok
4    48     Virtual Ethernet Module                              ok
--More--

Single Chassis Management


- A single switch from a control plane and management plane perspective
- Protocols such as CDP operate as a single switch
- XML API and SNMP management appear as a single virtual chassis

Upstream-4948-1# show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone

Device ID      Local Intrfce   Holdtme   Capability   Platform      Port ID
N1KV-Rack10    Gig 1/5         136       S            Nexus 1000V   Eth2/2
N1KV-Rack10    Gig 1/10        136       S            Nexus 1000V   Eth3/5

Virtual Supervisor Options


[Diagram: three ESX servers with VEMs hosting VMs #1-#12, managed by a redundant VSM pair]

VSM Virtual Appliance
- ESX virtual appliance
- Special dependence on CPVA server
- Supports up to 64 VEMs

VSM Physical Appliance
- Cisco-branded x86 server
- Runs multiple instances of the VSM virtual appliance
- Each VSM managed independently

Virtual Supervisor to Virtual Center


[Diagram: the Nexus 1000V VSM connected to VMware Virtual Center]

- One-way API between the VSM and Virtual Center
- A certificate (Cisco self-signed or customer-supplied) ensures secure communications
- The connection is set up on the Supervisor

N1K-CP# show svs connections
Connection VC:
    IP address: 10.95.112.10
    Protocol: vmware-vim https
    vmware dvs datacenter-name: PHXLab
    ConfigStatus: Enabled
    OperStatus: Connected
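As a rough sketch of how that connection is set up on the Supervisor (reusing the vCenter IP and datacenter name from the output above; syntax may vary by release):

N1K-CP# configure terminal
N1K-CP(config)# svs connection VC
N1K-CP(config-svs-conn)# protocol vmware-vim
N1K-CP(config-svs-conn)# remote ip address 10.95.112.10
N1K-CP(config-svs-conn)# vmware dvs datacenter-name PHXLab
N1K-CP(config-svs-conn)# connect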

Supervisor to Ethernet Module


Two distinct virtual interfaces are used to communicate between the VSM and VEM:

Control
- Carries low-level messages to ensure proper configuration of the VEM
- Maintains a 2-second heartbeat from the VSM to the VEM (timeout 6 seconds)

Packet
- Carries network packets between the VEM and the VSM, such as CDP/LLDP
[Diagram: the VSM exchanging control and packet traffic with a VEM hosting VMs #1-#4 on VMware ESX]

- The control and packet interfaces must be on two separate VLANs
- Supports both L2 and L3 designs
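A minimal sketch of the matching domain configuration on the VSM (the domain ID and VLAN numbers are made-up examples; an L3 design would use svs mode L3 instead of the L2 mode shown):

n1kv(config)# svs-domain
n1kv(config-svs-domain)# domain id 100
n1kv(config-svs-domain)# control vlan 260
n1kv(config-svs-domain)# packet vlan 261
n1kv(config-svs-domain)# svs mode L2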


Nexus 1000V Deployment Scenarios

Virtual Ethernet Modules

VEM Deployment Scenarios


VEM Concepts
- Limits of the VEM in the Nexus 1000V
- Installation of the VEM

Port Types Defined & Addressing Mechanism for Ports

n1kv(config)# interface eth<module#>/<port#>
n1kv(config)# interface veth<#>

- Spanning Tree Considerations/Conversations
- General Configuration Options for Traffic Flow
- Special Ports/VLANs Used and I/O Characteristics
- 1GE & 10GE Deployment Scenarios

Virtual Ethernet Module Basics


- The VEM is a lightweight (~10 MB RAM) module that provides N1KV switching capability on the ESX host
- Single VEM instance per ESX host
- Relies on the VSM for configuration
- Can run in a last-known-good state without VSM connectivity
- Some VMware features (e.g. VMotion) will not work when the VSM is down
- Must have VSM connectivity upon reboot to switch VM traffic
[Diagram: three ESX servers hosting VMs #1-#12; each runs a VEM alongside the VMware vSwitch, all managed by the VSM through Virtual Center]

Targeted Cisco Nexus 1000V Scalability


A single Nexus 1000V:
- 66 modules (2x Supervisors and 64x Virtual Ethernet Modules)
- Virtual Supervisor: one active, one standby

Per Virtual Ethernet Module:
- 32 physical NICs
- 256 virtual NICs

Limits per Nexus 1000V:
- 512 port profiles
- 2,048 physical ports
- 8,192 virtual ports (vmknic, vswif, vnic)

VEM Distributed Switching


Unique to each VEM:
- Data plane
- MAC/forwarding table
- Upstream path configuration (EtherChannel, pinning, etc.)
- Module # identification

Shared among all VEMs, controlled by the VSM:
- Control plane (mgmt IP)
- Domain ID of the N1K DVS
- Port profile configuration
- veth interface pool
[Diagram: the VSM controlling VEM Modules 3, 4, ... n on VMW ESX hosts 1-3]

Installation of VEM
- Virtual Ethernet Module code must be in lockstep with the ESX release version
- Each time a new ESX server is deployed, the correct VEM version must be loaded
- Automatically via VMware Update Manager (VUM), or manually with a CLI command

[Diagram: a newly deployed ESX host asks "I'm deploying a new ESX server, do you have something for it?"; Virtual Center & VMware Update Manager reply "Yes I do" and push VEM Module 5 alongside the existing VEM Modules 3 and 4, all under the VSM]

Switching Interface Types - Eth


Physical Ethernet Ports (Network Admin Configuration)
- NIC cards on each ESX server
- Appear as an Eth interface on a specific module in NX-OS
- Example: n1kv(config)# interface eth3/1  (module/port)
- The module number is allocated when the ESX host is added to the N1K
- The server-name-to-module relationship can be found by issuing the show module command
[Diagram: VMs #1-#4 on VMW esx1.cisco.com attached to VEM Module 3 (n1kv(config)# interface eth3/1)]
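For illustration only (the module number, IP, and UUID below are invented), the server-name-to-module mapping appears in the second section of the show module output:

n1kv# show module
...
Mod  Server-IP      Server-UUID                           Server-Name
---  -------------  ------------------------------------  ----------------
3    10.95.112.21   4220a2e6-1111-2222-3333-444455556666  esx1.cisco.com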

Switching Interface Types - veth


n1kv(config)# interface veth1
[Diagram: VMs, the Service Console (vswif0), and the vmknic on ESX1 attached to veth ports (veth2, veth3, veth5, veth6, veth9, veth68) on VEM Module 5]

Virtual Ethernet Ports
- Virtual Machine/ESX-facing ports
- Appear as veth interfaces within NX-OS
- No module number is used when configuring veth ports
- Not being assigned to a specific module simplifies VMotion
- Example: veth68
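As an illustrative sketch (the VM names, modules, and hosts are made up, and the exact column layout varies by release), show interface virtual on the VSM lists which VM owns each veth and which VEM module it currently resides on:

n1kv# show interface virtual
Port      Adapter         Owner         Mod  Host
Veth1     Net Adapter 1   VM #1         3    esx1.cisco.com
Veth68    Net Adapter 1   VM #4         5    esx2.cisco.com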

Spanning Tree Considerations


There are none, but customers always want an explanation of why:
- BPDUs sent from an upstream switch are dropped by the Nexus 1000V
- Loop prevention techniques are used, similar to the way VMware provides today
- By default it only learns MACs connected to a veth port on the local VEM
- If the destination is not on the local VEM, the frame is forwarded out one of the physical interfaces
- The best terminology to use with customers is to call the VEM a leaf node

Configuration options for traffic flow


MAC Pinning
Embedded switch will determine and fix a path for each MAC address to use until a failure is detected

Virtual Port ID
- Essentially the same as MAC pinning, but based on the virtual NIC port (the behavior at FCS)

Configuration options for traffic flow


Hashing
- Uses some parameter (MAC, IP, TCP, etc.) to load balance across redundant links to an upstream switch or Cat6k VSS/Nexus vPC
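A hash-based uplink could look roughly like the sketch below (the profile name, VLAN range, and hash algorithm are assumptions, and the upstream switch needs a matching EtherChannel):

n1kv(config)# port-channel load-balance ethernet source-dest-ip-port
n1kv(config)# port-profile type ethernet DataUplink
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 110-120
n1kv(config-port-prof)# channel-group auto mode on
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled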

Manual
Manually configuring a path through a specific physical NIC to a specific vnic

Channeling Techniques available with VMware


NIC-team load-balancing algorithms are either/or, not AND:
- src MAC (MAC pinning)
- virtual Port ID
- IP hashing (EtherChannel equivalent)
- Manual

VMware doesn't behave any differently whether you are talking to the same upstream switch or a different one (i.e. the hashing scenario).


Channeling Techniques available with Nexus 1000V


Traffic flow is based on the same principles as VMware, except the N1KV can combine them:
- src MAC (MAC pinning)
- virtual Port ID
- EtherChannel
- Manual

The primary benefit of the N1KV is the ability to pin traffic of specific VLANs to a certain upstream switch and to provide EtherChannel.
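A rough sketch of that idea (the profile names, VLANs, and sub-group ID are made up, and the MAC-pinning syntax belongs to later releases, so treat this as illustrative rather than FCS behavior): the uplink profile pins MACs across the physical NICs, and a vethernet profile can pin its VMs to one sub-group, i.e. one upstream switch.

n1kv(config)# port-profile type ethernet Uplink-MacPin
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 110-120,260-261
n1kv(config-port-prof)# channel-group auto mode on mac-pinning
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled

n1kv(config)# port-profile type vethernet WebApps
n1kv(config-port-prof)# pinning id 0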

Possible Deployment Scenarios


The purpose of the following slides is to make you aware of the different architecture components of an ESX/N1KV environment. Any design sessions which leverage these slides before FCS must come with the caveat that official best practices and recommendations may change.

This is meant to start conversations and provide examples of how it could be.

Priorities and I/O characteristics of Nexus 1000V VLANs & Virtual interfaces
Control VLAN - High Priority, Low BW
- Unique VLAN configured for VSM-to-VEM configuration, heartbeats, etc.

Packet VLAN - Medium Priority, Low BW
- Unique VLAN configured for SUP-level communication (IGMP, CDP, NetFlow, system logs, VACL/ACL, etc.)

vswif - Medium Priority, Low BW
- Service Console/Management interface to the ESX server (veth port)

vmknic - High or Low Priority & BW
- The vmknic is used by the TCP/IP stack that services VMotion, NFS and software iSCSI clients that run at the VMkernel level, and remote console traffic (veth port)

vnic - Priority & I/O characteristics depend on the VM
- Standard VM data traffic (veth port)

Additional information & links found on this thread: http://communities.vmware.com/thread/136077?tstart=1775


1GE design - Possible minimum



- Multiple adapters for redundancy and throughput
- 1GE begs for traffic isolation, as the pipe can be filled
- Minimum config is four NICs (two per EtherChannel) for isolation and redundancy
- 4 Gb/s total bandwidth


Pinned traffic (first NIC pair): N1KV Control, N1KV Packet, Service Console, possible VMkernel
Pinned traffic (second NIC pair): VMs, possible VMkernel

1GE design - A More Common Isolated Scenario



- Multiple adapters for redundancy and throughput
- Provides isolation of different types of traffic
- Guards against a 1GE bottleneck


Isolated traffic groups: N1KV Control / N1KV Packet / Service Console; VMs; VMkernel (IP Storage); VMkernel (VMotion)
8 Gb/s total bandwidth
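One way such isolation could be expressed, as a sketch only (the profile names, VLAN ranges, and the use of system VLANs on the management uplink are assumptions, not an official best practice):

n1kv(config)# port-profile type ethernet System-Uplink
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 260-262
n1kv(config-port-prof)# channel-group auto mode on
n1kv(config-port-prof)# system vlan 260-262
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled

n1kv(config)# port-profile type ethernet VM-Uplink
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 110-120
n1kv(config-port-prof)# channel-group auto mode on
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled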

Possible 10GE designs



- Pin specific VLAN traffic to a specific uplink to enhance traffic isolation
- 10GE is likely to be enough BW for all traffic
- Minimum config would be two 10GE NICs for redundancy to two upstream switches


Pinned traffic (uplink 1): N1KV Control, N1KV Packet, Service Console, possible VMkernel
Pinned traffic (uplink 2): VMs, VMkernel
20 Gb/s total bandwidth

Your feedback is important to us. We want to hear from you!


Please complete your Survey by going to the URL listed below:

http://iplatform.cisco.com/iplatform/
Event Name: Data Center SEVT Session Name: Nexus 1000V Design Scenarios
