[Diagram: Nexus 1000V architecture. Virtual Center and the VSM manage VEMs running on VMware ESX hosts (Servers 2 and 3), which host VMs #6 through #12.]
Defined Policies (in Virtual Center): WEB Apps, HR, DB, Compliance

VM Connection Policy
- Defined in the network
- Applied in Virtual Center
- Linked to VM UUID
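A connection policy of this kind is expressed on the Nexus 1000V as a port profile, which then shows up in Virtual Center as a port group. A minimal sketch; the profile name and VLAN ID here are assumptions:

```
port-profile type vethernet WEB-Apps
  vmware port-group                 ! export to Virtual Center as a port group
  switchport mode access
  switchport access vlan 100        ! VLAN ID is an assumption
  no shutdown
  state enabled
```

Once enabled, the port group can be assigned to a VM's vNIC in Virtual Center, and the policy stays linked to the VM's UUID as it moves between hosts.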
Server Benefits
- Maintains existing VM mgmt
- Reduces deployment time
- Improves scalability
- Reduces operational workload
- Enables VM-level visibility

Network Benefits
- Unifies network mgmt and ops
- Improves operational security
- Enhances VM network features
- Ensures policy persistence
- Enables VM-level visibility
SVS-CP# show module
Mod  Ports  Module-Type              Model              Status
---  -----  -----------------------  -----------------  ---------
1    1      Supervisor Module        Cisco Nexus 1000V  active *
2    1      Supervisor Module        Cisco Nexus 1000V  standby
3    48     Virtual Ethernet Module                     ok
4    48     Virtual Ethernet Module                     ok
--More--
Upstream-4948-1# show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge,
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone
Device ID  Local Intrfce  Holdtme  Capability  Platform  Port ID
Nexus      Gig 1/5        136      S
Nexus      Gig 1/10       136      S
[Diagram: multiple VSM instances managing VEMs on VMware ESX hosts (Servers 2 and 3), which host VMs #6 through #12.]
VSM Physical Appliance
- Cisco-branded x86 server
- Runs multiple instances of the VSM virtual appliance
- Each VSM managed independently
- One-way API between the VSM and Virtual Center
- A certificate (Cisco self-signed or customer-supplied) ensures secure communications
- The connection is set up on the Supervisor
N1K-CP# show svs connections
Connection VC:
    IP address: 10.95.112.10
    Protocol: vmware-vim https
    vmware dvs datacenter-name: PHXLab
    ConfigStatus: Enabled
    OperStatus: Connected
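The connection reported by this output is created on the VSM itself. A sketch of the corresponding configuration, reusing the connection name, IP address, and datacenter name from the output above:

```
svs connection VC
  protocol vmware-vim
  remote ip address 10.95.112.10
  vmware dvs datacenter-name PHXLab
  connect
```

After `connect`, `show svs connections` should report OperStatus as Connected.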
[Diagram: VSM managing a VEM on a VMware ESX host hosting VMs #1 through #4.]
- Spanning Tree considerations/conversations
- General configuration options for traffic flow
- Special ports/VLANs used and I/O characteristics
- 1GE & 10GE deployment scenarios
[Diagram: active Virtual Supervisor (VSM) and Virtual Center managing VEMs on VMware ESX hosts (Servers 2 and 3), which host VMs #5 through #12.]
[Diagram: one VSM managing many VEMs; each VEM appears to the VSM as a module (VEM Module 3 on VMW ESX1, VEM Module 4 on VMW ESX2, up to VEM Module n).]
Installation of VEM
The Virtual Ethernet Module code must be in lockstep with the ESX release version. Each time a new ESX server is deployed, the correct VEM version must be loaded, either:
- Automatically, using VMware Update Manager (VUM)
- Manually, with a CLI command
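For the manual method, the VEM software bundle is installed from the ESX host's CLI. A sketch; the bundle filename is an assumption and must match the ESX build in use:

```
# On the ESX host; bundle filename is an assumption
esxupdate --bundle=VEM-bundle.zip update

# Verify the VEM is running on the host
vem status
```

On the VSM side, `show module` should then list the host as a new Virtual Ethernet Module.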
[Diagram: VSM with VEM Module 3 and VEM Module 4 on VMW ESX hosts; virtual interfaces veth2, veth3, veth5, veth6, veth9, and veth68, plus a VMkernel NIC (vmknic).]
Virtual Port ID
Essentially the same as MAC pinning, but based on the virtual NIC port ID (at FCS).
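For comparison, MAC pinning on the Nexus 1000V is enabled on the Ethernet (uplink) port profile. A minimal sketch; the profile name and VLAN range are assumptions:

```
port-profile type ethernet uplink-mac-pin
  switchport mode trunk
  switchport trunk allowed vlan 100-110      ! VLAN range is an assumption
  channel-group auto mode on mac-pinning     ! pin each vNIC MAC to one uplink
  no shutdown
  state enabled
```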
Manual
Manually configuring a path through a specific physical NIC to a specific vNIC.
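On the Nexus 1000V this is expressed with the `pinning id` command on the vEthernet port profile, which ties the profile's traffic to a numbered sub-group of the uplink channel. A sketch; the profile name, VLAN, and sub-group ID are assumptions:

```
port-profile type vethernet VM-Data
  vmware port-group
  switchport access vlan 100     ! VLAN ID is an assumption
  pinning id 0                   ! pin this profile's traffic to uplink sub-group 0
  no shutdown
  state enabled
```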
VMware does not behave any differently whether you are talking to the same upstream switch or a different one (i.e., the hashing scenario).
A primary benefit of the N1KV is the ability to pin the traffic of specific VLANs to a certain upstream switch and to provide EtherChannel.
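One way to get this behavior is vPC host mode with CDP-based sub-groups, where the VEM learns via CDP which uplinks connect to which upstream switch and forms a sub-group per switch. A sketch; the profile name and VLAN range are assumptions:

```
port-profile type ethernet uplink-subgroup
  switchport mode trunk
  switchport trunk allowed vlan 100-110      ! VLAN range is an assumption
  channel-group auto mode on sub-group cdp   ! group uplinks by CDP neighbor
  no shutdown
  state enabled
```

vEthernet profiles can then be pinned to a sub-group, steering their VLANs toward one upstream switch while keeping EtherChannel within each sub-group.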
This is meant to start conversations and provide examples of how it could be done.
Priorities and I/O characteristics of Nexus 1000V VLANs & Virtual interfaces
Control VLAN: High Priority, Low Bandwidth
- Unique VLAN configured for VSM-to-VEM configuration, heartbeats, etc.
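The control (and packet) VLANs are tied to the VSM's domain configuration. A sketch, where the domain ID and VLAN numbers are assumptions:

```
svs-domain
  domain id 100          ! assumption
  control vlan 260       ! assumption: unique VLAN for VSM-to-VEM control traffic
  packet vlan 261        ! assumption
  svs mode L2
```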
- Multiple adapters for redundancy and throughput
- 1GE begs for traffic isolation, as the pipe can be filled
- Minimum config is four NICs (two per EtherChannel) for isolation and redundancy
- 4 Gb/s total bandwidth
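The four-NIC 1GE layout above could be sketched as two Ethernet port profiles, one EtherChannel carrying control/management traffic and one carrying VM data. Profile names and VLAN IDs are assumptions:

```
port-profile type ethernet system-uplink
  switchport mode trunk
  switchport trunk allowed vlan 260-262      ! assumption: control, packet, mgmt
  channel-group auto mode on
  system vlan 260-261                        ! keep control/packet up before VSM is reachable
  no shutdown
  state enabled

port-profile type ethernet vm-uplink
  switchport mode trunk
  switchport trunk allowed vlan 100-110      ! assumption: VM data VLANs
  channel-group auto mode on
  no shutdown
  state enabled
```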
Pinned Traffic: N1KV Control, N1KV Packet, Service Console, possibly VMkernel
- Multiple adapters for redundancy and throughput
- Provide isolation of different types of traffic
- Guard against a 1GE bottleneck
- Pin specific VLAN traffic to a specific uplink to enhance traffic isolation
- 10GE likely to be enough bandwidth for all traffic
- Minimum config would be two 10GE NICs for redundancy to two upstream switches
Pinned Traffic: N1KV Control, N1KV Packet, Service Console, possibly VMkernel
http://iplatform.cisco.com/iplatform/
Event Name: Data Center SEVT
Session Name: Nexus 1000V Design Scenarios