
VMware NSX® for vSphere® 6.2 Knowledge Transfer Kit
Architecture Overview

© 2016 VMware Inc. All rights reserved.


Agenda
• Problem Statement
• Data Center Network Trends
• Analogies of Logical Networking Constructs
• Mapping of Logical to Physical Space
• VMware NSX for vSphere Component Overview
• NSX for vSphere Design Considerations
• VMware NSX in a Multi-vCenter Environment
• Integration

2
Problem Statement
Traditional Networking is Hard!

4
Physical Networking Configuration Tasks
Initial configuration
• Multichassis LAG
• Switch virtual interfaces (SVIs) / Router virtual interfaces (RVIs)
• Virtual Router Redundancy Protocol (VRRP) / Hot Standby Router Protocol (HSRP)
• Spanning Tree Protocol (STP)
  – Instances/mappings
  – Priorities
  – Safeguards
• Link Aggregation Control Protocol (LACP)
• VLANs
  – Infra networks on uplinks and downlinks
  – STP

Recurring configuration
• Routing configuration
  – SVIs/RVIs
  – VRRP/HSRP
  – Advertise new subnets
• Access lists (ACLs)
• VLANs
  – Adjust VLANs on trunks
  – VLANs STP/Multiple Spanning Tree (MST) mapping
  – Add VLANs on uplinks
  – Add VLANs to server ports

Configuration consistency!

5
Physical Network Services Configuration Tasks

Configuration consistency!

6
Networking Before and After Server Virtualization
• Before
  – 100s of physical servers
  – Change the VLAN on a switch port to control server connectivity
  – Features are dependent on hardware functionality (ASICs)
  – Complexity with configuring network services
  – Traffic flow is mostly North-South
• After
  – 1,000s of VMs
  – VLAN trunking configurations
  – Different teams manage different network components
  – Features are still dependent on hardware functionality
  – Complexity of network services (firewalls and so forth) increased because of the number of servers
  – Data center traffic flow is now predominantly East-West, which the network is not designed for
  – Reduced visibility of network endpoints (policy enforcement, monitoring, and so forth)

7
Network Utilization
[Figure: network utilization as the number of VMs and tenants grows – MAC addresses, ARP entries, VLAN usage, and STP load on the L2/L3 network, compared before network virtualization (NV) and with NV]

8
Network Utilization (cont.)
[Figure: the same comparison at larger VM and tenant counts – MAC addresses, ARP entries, VLAN usage, and STP load before NV and with NV]

9
Traditional security
is no longer enough!

10
The Pressure on Security

New App Requested → Provision VM → Provision Network → Policies are Set → Security Services Configured → Security Mapped to Network → App Deployed → Change Happens

11
Everything Works Well on Day One

Day 1 (Data Center)
• Finance Application request: a SQL database server is provisioned in the data center (perimeter firewall, DMZ/Web, App, and DB tiers)
• Database policy assumptions are:
  – No confidential information
  – No personal privacy information
  – Vanilla DB policies

Day 2
• Sensitive data (for example, 555-55-5555) is added to the new database VM
• Now what?

12
Ideally, Every App Would Have Dedicated Resources


13
Manageability Necessitates Grouping

• Grouping mechanisms in practice: security zones, VLANs, and IP addressing (for example, 192.168.10.4 and 192.168.10.12 in one group, 192.168.20.6 and 192.168.20.11 in another)

14
Today, Security is Tied to a Complex and Rigid Network
Topology

15
Traditional Data Center Security

• Converged infrastructure, running on data center compute resources and VMware vSphere® hypervisors
• End user computing/desktops on the client side, protected by antivirus (A/V)
• Perimeter firewall (FW) at the Internet edge and an internal FW in front of the application infrastructure (APP)
• DMZ with Internet-facing servers – Web, e-mail, DNS, security, and so on – protected by IPS; also used for VDI: VMware Horizon® View™ Security Server

16
What is needed?

17
VMware NSX – A New Architectural Approach

Software-Defined Data Center
• Applications
• Virtual Machines, Virtual Networks, Virtual Storage
• Data Center Virtualization
• Compute Capacity, Network Capacity, Storage Capacity
• Location Independence

18
The Next-Generation Networking Model

• Network and security services – L2 switching, L3 routing, firewalling/ACLs, and load balancing – now run in software in the hypervisor's virtual switch, decoupled from the underlying hardware

19
Visibility
VMware NSX® is uniquely positioned to see everything

[Figure: NSX sits in the hypervisor, between the VMs above it and the physical compute and physical network below it]

20
Granular Control Becomes Possible

NSX Built-In Services
• Firewall
• Data Security
• Server Activity Monitoring
• VPN (IPsec, SSL)

Third-Party Services (inserted at the NSX vSwitch in the hypervisor)
• Antivirus
• Intrusion Prevention
• Firewall
• Security Policy Management
• Vulnerability Management
• Identity and Access Management
…and more in progress

21
Data Center
Network Trends
Importance of the VMware NSX Virtual Switch
• The VMware NSX Virtual Switch™ is
  – The first network hop
  – The first aggregation point for VM traffic
• Best spot to
  – Enforce policy
  – Collect statistics
  – Initiate and terminate monitoring
• Centrally controlled (same as the vSphere hypervisor)
• Feature-rich because it is x86-based
• Riding the performance curve of x86 architectures

23
So Why Not at the ToR?
• Limited VM visibility
  – Tagging options [802.1Qbg/Virtual Ethernet Port Aggregator (VEPA) or 802.1Qbh/VN-TAG]
  – Require coordination/configuration on each VM change
• Increasing VM density
  – The ToR is a significant aggregation point
  – Challenge for ASICs from a tables/tunnels standpoint
• Distributed stateful services
  – Service enforcement at the ToR is expensive
  – Dilemma between cost, speed, and feature richness
• Automated configuration
  – ToR configuration is vendor-specific

24
Data Center Network With Virtualization
• Why does virtualization require Layer 2 connectivity?
  – VMware vSphere vMotion® is a non-disruptive operation, so the network address cannot change. Layer 3 would require an IP address change and new routing information, both of which are disruptive
  – Many applications expect a Layer 2 network – broadcast traffic, application requirements, high availability, lower latency, and legacy needs
  – VMkernel networks (storage, VMware vSphere Fault Tolerance) also typically require Layer 2 adjacency
• What is the impact on my network?
  – VLAN sprawl and static allocation of VLANs
  – Large failure domains
  – Running out of VLANs in large networks (only 4,096 VLAN IDs)

25
Fabric Options
• Network virtualization enables greater scale independent of fabric technology. NSX for vSphere works over any reliable IP network that supports a 1,600-byte MTU (see the overhead sketch after this list)
• Layer 3 fabrics
– Most scalable technology
– Offers best interoperability

• Layer 2 multipathing capable fabrics (such as TRILL-style)


– Evolving technology
– Interoperability to be determined

• N-tier networks
– Also works fine
– Expensive, not ideal for greenfield deployments
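The 1,600-byte MTU guidance above follows from VXLAN encapsulation overhead. The sketch below, in Python, simply adds up the standard header sizes; the 1,500-byte guest MTU and the helper function are illustrative assumptions, not part of the product.

# Sketch: why the transport fabric needs a larger MTU for VXLAN.
# Standard header sizes; a 1,500-byte guest MTU is an assumption.
INNER_ETHERNET = 14   # inner MAC header carried inside the VXLAN payload
INNER_DOT1Q    = 4    # only present if the guest frame keeps a VLAN tag
VXLAN_HEADER   = 8    # VXLAN header carrying the 24-bit VNI
OUTER_UDP      = 8    # outer UDP header
OUTER_IPV4     = 20   # outer IPv4 header (40 bytes for IPv6 transport)

def required_transport_mtu(guest_ip_mtu: int = 1500, inner_tagged: bool = False) -> int:
    """Minimum IP MTU the physical fabric must support for a given guest MTU."""
    overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
    if inner_tagged:
        overhead += INNER_DOT1Q
    return guest_ip_mtu + overhead

print(required_transport_mtu())                   # 1550 -> hence the 1,600-byte guidance
print(required_transport_mtu(inner_tagged=True))  # 1554

The recommended 1,600 bytes simply leaves headroom beyond the strict 1,550-byte minimum.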

26
Fabric-Based Network Design
• Layer 2 fabric
  – VLAN-based
  – Larger Layer 2 domains, reliance on STP
  – Comparatively limited in scalability – 2-tier design
  – Generally, the industry is moving away from Layer 2 fabrics
• Layer 3 fabric – multi-tier
  – Highly scalable data centers use a 3-tier design with Layer 2/Layer 3 at the leaf
  – Limited STP and VLAN spread
  – Scalable 3-tier design, but expensive and not ideal for greenfield deployments
• Leaf/spine (leaf – tier 1, spine – tier 2)
  – Virtualization and Big Data applications are major contributors to East-West traffic growth, up to 75%
  – TRILL or Layer 3
  – Leaf-spine design allows for:
    • Uniform access and consistent latency
    • N-way ECMP – link utilization and HA
27
Physical Network Trends
• From 2-tier or 3-tier to spine/leaf
• Density and bandwidth jump
• Equal-cost multipath (ECMP) routing for Layer 3 (and Layer 2) – a flow-hashing sketch follows this list
• Reduce network oversubscription
• Wire and configure once
• Uniform configurations

[Figure: PODs A and B with the L2/L3 boundary at the leaf, connected to the WAN/Internet]
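The ECMP bullet above can be made concrete with a small sketch of per-flow path selection. Real leaf switches hash in hardware with vendor-specific algorithms; the uplink names, the 5-tuple choice, and the hash used here are illustrative assumptions only.

# Sketch: per-flow ECMP path selection across equal-cost uplinks.
import hashlib

UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]   # hypothetical spine uplinks

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port, uplinks=UPLINKS):
    """Deterministically map a flow's 5-tuple to one uplink."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return uplinks[bucket % len(uplinks)]

# Packets of one flow always take the same path (no reordering), while
# different flows spread across all uplinks.
print(pick_uplink("10.88.1.25", "10.88.2.25", "udp", 54321, 4789))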

28
Layer 3 Leaf-Spine Fabric Simplified Operations
Initial configuration
• Multi-chassis LAG
• Routing configuration
  – SVIs/RVIs
  – VRRP/HSRP
• STP (instances/mappings, priorities, safeguards)
• LACP
• VLANs (infra networks on uplinks and downlinks)
• Routing protocols

Recurring configuration
• SVIs/RVIs, VRRP/HSRP
• Advertise new subnets
• Access lists (ACLs)
• VLANs (adjust VLANs on trunks, VLANs STP/MST mapping, add VLANs on uplinks, add VLANs to server ports)

[Figure: a simplified 2-tier Layer 3 fabric with network virtualization, with only a small L2 footprint at the leaf]

29
Spine Switches
• The spine connects to leaf switches
  – Interfaces are configured as routed point-to-point Layer 3 links (for example, 10.99.1.0/31)
  – Links between spine switches are not required
  – In case of a spine-to-leaf link failure, the routing protocol reroutes traffic onto the alternate paths
  – Aggregates all leaf nodes and provides connectivity between racks
• The spine is Layer 3 only on its downlinks to the leaves: route table entries and ARP entries, no MAC table consumption
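As a rough illustration of the routed point-to-point addressing above, the following sketch carves /31 subnets for spine-to-leaf links out of a pool. The parent pool, port names, and leaf count are assumptions for the example, not a prescribed scheme.

# Sketch: allocating /31 point-to-point subnets for spine-to-leaf links.
from ipaddress import ip_network

p2p_pool = ip_network("10.99.1.0/28")        # example pool for fabric links (assumption)
links = p2p_pool.subnets(new_prefix=31)      # generator of /31 subnets

for leaf_id, link in zip(range(1, 5), links):
    spine_ip, leaf_ip = list(link)           # a /31 holds exactly two usable addresses
    print(f"spine Eth1/{leaf_id}: {spine_ip}/31  <->  leaf-{leaf_id} uplink: {leaf_ip}/31")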

30
Leaf Switches
• Server-facing ports have minimal configuration
• Can use Link Aggregation Control Protocol (LACP)
  – Applies to same-speed interfaces
  – Active/active bandwidth
  – Fast failover times
• 802.1Q trunks with a small set of VLANs toward the hypervisors; the leaf is the Layer 2/Layer 3 boundary, with Layer 3 uplinks to the spine

31
Analogies of Logical Networking
Constructs
Logical Constructs Analogies
Layer 2
• Logical switch (lswitch) ↔ VLAN (broadcast domain)
• Logical switch port (lswitch port) ↔ switch port (interface)
• Virtual NIC ↔ NIC

33
Logical Constructs Analogies
Layer 3
• Logical router (lrouter) ↔ router (Layer 3 switch)
• Logical router port (lrouter port) ↔ router port (SVI/RVI*)

*Switch/Routed Virtual Interface

34
Logical Constructs Analogies
Layer 4 - Layer 7 Services

• Logical firewall ↔ firewall (for example, in front of a DMZ)
• Logical port firewall (micro-segmentation, transparent firewall)

35
Logical Constructs Analogies
Layer 4 - Layer 7 Services (cont.)

• One-armed load balancer (source NAT) ↔ load balancer
• Inline load balancer (destination NAT) ↔ load balancer

36
Services in Network Virtualization Space
• Security-related
– Port ACLs, router ACLs (allow, deny)
– Port security
– ARP spoof protection
– IP spoof protection

• Troubleshooting
– Port and port-to-port statistics
– Port-mirror [Encapsulated Remote Switched Port Analyzer (ERSPAN)]
– Port-to-port connectivity validation tool

• Other services
– QoS (marking, policing)
– NAT

37
Connecting to Physical Infrastructure at Layer 2

• Logical switches can be extended to the physical network by bridging VXLAN to an 802.1Q VLAN, for example to reach bare metal (x86) servers
• Two options: a software L2 gateway service, or a hardware VTEP (switch)

38
Mapping of Logical to Physical
Space
Logical Topologies Mapped to Physical
[Figure: the same three-VM topology in a logical view and a physical view – the VMs are placed across compute racks, and the WAN/Internet on/off ramp is provided at the edge]

40
Networking Problems That NSX for vSphere Solves
• Seamless Layer 2 connectivity over Layer 3 networks with flexible designs
• Isolation of tenants no longer dependent on VLANs
• VM attributes (VLANs/IPs/MAC addresses) not
exposed/coupled to infrastructure
• Reduces frequency of changes to the physical network
• No network configuration changes for tenant/application networking
• Network administrator focuses on maintaining a reliable transport network as opposed to dynamic VM
networking
• Distributes network services while providing centralized management
• Leverages existing topology, while planning a transition to new fabric
• Extends logical networks to physical
• Provides new tools for automation, policy enforcement, and VM visibility

41
Network Virtualization Virtual Space
• VM-aware Layer 4 - Layer 7 services
• Networking services distributed at the edge, in the VMware NSX Virtual Switch™: routing/NAT, security/firewalling, QoS, port mirroring, and counters
• Scale out with the number of vSphere hypervisors
• The VMware NSX Controller™ provides the control plane; traffic is carried over the physical fabric
42
NSX for vSphere Logical Switching

Design Challenges
• Multi-tenant or application segmentation
• VM mobility requires Layer 2 everywhere
• Large Layer 2 physical network sprawl – STP issues
• Hardware memory (MAC, FIB) table limits

VMware NSX Benefits
• Scalable multi-tenancy across the data center
• Layer 2 over Layer 3 infrastructure using overlay networks
• Logical switches span physical hosts and network switches

Logical Switching – Scale the Network

43
NSX for vSphere Logical Routing

[Figure: VMs spread across hosts and logical segments, connected by NSX logical routing]

44
Network Virtualization Design Attributes
• Benefits of virtualization
  – Decouple: independent address spaces, topology independence
  – Reproduce: workloads act as if they had a physical network for themselves
  – Automate: programmatic service provisioning
• Physical fabric requirements
  – Simplicity: uniform one-time configuration
  – Scalability: scale-out architecture (spine/leaf)
  – Low oversubscription: multipathing and efficient use of bandwidth
  – Fault tolerant: dynamically route around failures
  – Traffic differentiation: enforce QoS

45
VMware NSX for vSphere
Component Overview
NSX for vSphere Components
Consumption
• Self-service portal, cloud management, VMware vRealize® Automation™

Management Plane
• NSX Manager, VMware vCenter Server®, and the message bus agent
• Single point of configuration; REST API and UI interface

Control Plane
• NSX Controller, the VMware NSX Logical Router control VM, and the User World Agent
• Manages logical networks and run-time state through a control plane protocol; does not sit in the data path

Data Plane
• NSX Virtual Switch: the VMware vSphere Distributed Switch™ plus hypervisor kernel modules (VXLAN, distributed logical router, distributed firewall) on each ESXi host – a distributed network edge with line-rate performance
• NSX Edge Services Gateway: VMware NSX Edge™ in VM form factor – data plane for North-South traffic, routing, and advanced services

47
Components – NSX Manager
• NSX for vSphere centralized management plane
• 1:1 mapping between a VMware NSX Manager™ and VMware vCenter Server
• Up to 8 NSX Manager instances in a multi-vCenter configuration (from VMware NSX 6.2)

• NSX Manager instances have the following roles:


– Standalone
– Primary
– Secondary

• Provides the management UI and API for NSX for vSphere (an illustrative API call is sketched after this list)


• VMware vSphere Web Client Plug-In
• Deploys NSX Controller and NSX Edge virtual appliances (OVF)
• Installs VXLAN, distributed routing, and firewall kernel modules plus UW Agent on ESXi hosts
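As a rough illustration of the API mentioned above, the following Python sketch lists transport zones and creates a logical switch through the NSX Manager REST API. The endpoint paths and payload follow NSX for vSphere conventions but should be verified against the API guide for the deployed version; the host name, credentials, and transport zone ID are placeholders.

# Sketch: driving the NSX Manager REST API with Python requests.
import requests

NSX_MANAGER = "https://nsxmgr.example.local"      # hypothetical NSX Manager address
session = requests.Session()
session.auth = ("admin", "changeme")              # placeholder credentials
session.verify = False                            # lab only; use a trusted CA in production
session.headers["Content-Type"] = "application/xml"

# List transport zones (VDN scopes); a scope ID is needed to create a logical switch.
scopes = session.get(f"{NSX_MANAGER}/api/2.0/vdn/scopes")
print(scopes.status_code, scopes.text[:200])

# Create a logical switch (virtual wire) in a transport zone.
payload = (
    "<virtualWireCreateSpec>"
    "<name>web-tier-ls</name>"
    "<tenantId>tenant-1</tenantId>"
    "<controlPlaneMode>UNICAST_MODE</controlPlaneMode>"
    "</virtualWireCreateSpec>"
)
resp = session.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/vdnscope-1/virtualwires",   # vdnscope-1 is illustrative
    data=payload,
)
print(resp.status_code, resp.text)   # the new virtual wire ID is returned on success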

48
Components – NSX Manager (cont.)
• Configures the NSX Controller cluster through a REST API and hosts through a message bus
• Host configuration includes distributed firewall and NSX Edge nodes
• Generates certificates to secure control plane communications

49
Components – NSX Controller
• Provides control plane to distribute VXLAN and logical routing network information to ESXi hosts
• NSX Controllers are clustered for scale out and high availability
• Network information is sliced across nodes in an NSX Controller cluster
• Enables dependency on multicast routing/PIM in the physical network to be removed
• Provides suppression of ARP broadcast traffic in VXLAN networks

[Figure: NSX Controller roles – VXLAN, logical router, and directory service – sliced across the cluster nodes, maintaining the MAC, ARP, and VTEP tables]

50
NSX Controller – Master Election
• Each role needs a master
• Masters for different roles can sit on different nodes
• Uses Paxos-based algorithm
• Guaranteed correctness (not necessarily convergence)


51
NSX Controller – Failure Scenario
• Node failure triggers election for roles where the master is no longer available
• A new node is promoted to master after the election process


52
NSX Controller – Distribution
• Problem
– Need to dynamically distribute workload across all available cluster nodes
– Redistribute workload when new cluster member is added
– Ability to sustain failure of any cluster node with zero impact
– Do all of the above transparently to the application

• Solution is slicing

53
NSX Controller – Slicing
1. For a given role, create N slices
2. Define application objects
3. Assign objects to slices

[Figure: application objects – logical switches (VNIs) and logical routers – assigned to slices numbered 1 through 9]

54
NSX Controller – Slicing (cont.)
1. For a given role, create N slices
2. Define application objects
3. Assign objects to slices
4. Sprinkle slices across nodes

[Figure: the nine slices of each role (VXLAN, logical router) spread across the controller nodes, for example slices 1/4/7, 2/5/8, and 3/6/9 on three different nodes]

55
NSX Controller – Redistribution
1. Failure of node 3
2. The master for that role re-assigns the failed node's slices to the surviving nodes

[Figure: after the failure, slices 3, 6, and 9 of the VXLAN and logical router roles are redistributed to the two remaining nodes]

A minimal sketch of this slicing and redistribution scheme follows.
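This is a toy model of the behavior described on these slides, not the controller's actual implementation; the node names and slice count follow the figures above.

# Toy model of slicing: objects hash to one of nine slices, slices are
# sprinkled across controller nodes, and a failed node's slices are reassigned.
import zlib
from collections import defaultdict

NUM_SLICES = 9

def slice_of(object_id: str) -> int:
    """Deterministically map an object (a VNI, a logical router ID, ...) to a slice."""
    return zlib.crc32(object_id.encode()) % NUM_SLICES

def assign_slices(nodes):
    """Spread the slices round-robin across the available controller nodes."""
    assignment = defaultdict(list)
    for s in range(NUM_SLICES):
        assignment[nodes[s % len(nodes)]].append(s)
    return dict(assignment)

nodes = ["controller-1", "controller-2", "controller-3"]
print(assign_slices(nodes))            # each node owns three of the nine slices
print(slice_of("vxlan-5001"))          # the slice responsible for this logical switch

nodes.remove("controller-3")           # node 3 fails
print(assign_slices(nodes))            # its slices are redistributed to the survivors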

56
NSX Controller – Deployment
• NSX Controller nodes are deployed as virtual appliances
– 4 vCPU, 4 GB of RAM and 20 GB of disk space per node
– Modifying settings is not supported

• NSX Controller password is defined during deployment of the first node and is consistent
across all nodes
• NSX Controller nodes must be deployed in the same vCenter Server that NSX Manager is connected to
• Cluster size of 3 NSX Controller nodes is the only supported configuration
• Controller interaction is through the CLI, while configuration operations are also available
through NSX for vSphere API

57
Components – User World Agent
• User World Agent is a TCP (SSL) client that communicates with the NSX Controller using the control plane
protocol
– Can connect to multiple NSX Controllers
– Mediator between the VMware ESXi™ hypervisor kernel modules and NSX Controller instances
– Communicates with message bus agent to retrieve information from NSX Manager
– Runs as a service daemon on ESXi: netcpa
– Logs to: /var/log/netcpa.log
– NSX distributed firewall has a separate service daemon: vsfwd

[Figure: the User World Agent on each ESXi host acts as a client of the NSX Controller cluster and communicates with NSX Manager, feeding the VXLAN and logical router (LR) kernel modules]
58
Components – NSX Virtual Switch and NSX Edge
ESXi host: the NSX Virtual Switch is the vSphere Distributed Switch plus hypervisor kernel modules (vSphere VIBs) for VXLAN, logical routing, and the distributed firewall; the NSX Edge Logical Router control VM and the NSX Edge Services Gateway run as virtual machines.

vSphere (NSX Virtual Switch)
• VMkernel modules: VXLAN, distributed routing, distributed firewall, switch security
• Message bus

NSX Edge Logical Router
• Control functions only
• Dynamic routing and updates to the NSX Controller
• Determines the active ESXi host for Layer 2 bridging
• High availability

NSX Edge Services Gateway
• Layer 3 - Layer 7 services
• NAT, DHCP, LB, VPN, interface-based firewall
• Dynamic routing
• VM form factor
59
NSX for vSphere Overview
Component Interactions

[Figure: interactions between the management plane, the control plane, and the data plane]

60
Building the NSX for vSphere Platform
Deploy NSX for vSphere, then consume it through programmatic virtual network deployment.

Prerequisites
• Physical network – VXLAN transport network, MTU
• vCenter and ESXi 5.5
• vSphere Distributed Switch
• Virtual infrastructure

Component Deployment (one time)
1. Deploy NSX Manager
2. Deploy the NSX Controller cluster

Preparation (one time)
1. Host preparation
2. Logical network preparation

Logical Network/Security Services (recurring)
1. Deploy logical switches per tier
2. Deploy a distributed logical router or connect to an existing one
3. Create a bridged network
4. Connect to a centralized router
61
NSX Controller Interaction
• Control plane basics
  – ESXi hosts and NSX Edge logical router VMs learn network information, which is then reported to the NSX Controller through the User World Agent (UWA)
  – The NSX Controller CLI provides a consistent interface to verify VXLAN and logical routing network state information
  – NSX Manager also provides APIs to programmatically retrieve data from the NSX Controller nodes in the future

[Figure: NSX Manager and the NSX Controller cluster above vSphere clusters A and B; each host runs a UWA and a VTEP]

62
Component Interaction – Configuration
1. NSX Manager pushes configuration (logical switches, distributed logical routers) to the NSX Controller
2. NSX Manager pushes host configuration (logical switches, distributed logical routers) to the vSphere clusters
3. NSX Manager pushes service configuration (load balancer, firewall, VPN, and so forth) to the NSX Edge Services Gateway

63
NSX for vSphere Control Plane Security
• NSX for vSphere control plane communication occurs over the management network
• The control plane is protected by
– Certificate-based authentication
– SSL

• NSX Manager generates self-signed certificates for each of the ESXi hosts and NSX
Controllers
• These certificates are pushed to the NSX Controller and ESXi hosts over secure channels
• Mutual authentication occurs by verifying these certificates

64
VXLAN Control Plane Security
1. Certificate generation by NSX Manager (stored in the NSX Manager database)
2. OVF deployment of the NSX Controller cluster
3. Message bus to the ESXi hosts (UW Agent)
4. REST API to the NSX Controller cluster
5. SSL protects the control plane sessions between NSX Manager, the NSX Controller cluster, and the UW Agents/VTEPs in each vSphere cluster


65
NSX for vSphere
Design Considerations
vSphere Cluster Design

• ESXi clusters are organized into compute racks, infrastructure racks (storage, vCenter, and the cloud management system), and edge racks (North/South traffic to the WAN/Internet)
• vCenter 1 and vCenter 2 each manage up to the maximum number of VMs supported by vCenter
67
vSphere Scalability
• Cluster sizing
– VMware vSphere High Availability 5.x: 32 hosts
– VMware vSphere High Availability 6.0: 64 hosts

• Storage
– VMware vSphere Storage APIs - Array Integration Atomic Test and Set (ATS) removes SCSI reservation constraints
on datastore sizing

• Virtual machines
– 10,000 powered on VMs per vCenter Server

• Networking
– NSX for vSphere allows scaling of the network independent of vCenter Server

68
Recap – vCenter Scale Boundaries
• vCenter Server: 10,000 powered-on VMs*, 1,000 ESXi hosts, 128 vSphere Distributed Switch instances
• DC object: maximum of 500 hosts
• Cluster: maximum of 62 hosts
• vSphere Distributed Switch: maximum of 1,000 hosts
• DRS-based vSphere vMotion within a cluster; manual vSphere vMotion across clusters

*Depends on rate of provisioning calls

69
NSX for vSphere Scale Boundaries
• 1:1 mapping of vCenter Server to NSX for vSphere: each NSX Manager (NSX API) pairs with one vCenter Server and has its own NSX Controller cluster
• A cloud management system can consume multiple vCenter/NSX pairs
• Logical networks span the clusters and vSphere Distributed Switch instances managed by each vCenter (DRS-based vSphere vMotion within a cluster, manual vSphere vMotion across)

70
VMkernel Networking
• Multi-instance TCP/IP stack
  – Introduced with vSphere 5.5 and leveraged by VXLAN (the NSX Virtual Switch transport network)
  – Separate routing table, ARP table, and default gateway per stack instance
  – Provides increased isolation and reservation of networking resources such as sockets, buffers, and heap memory
• Enables VXLAN VTEPs and vSphere vMotion VMkernels to use a gateway independent from the default TCP/IP stack
• Management, vSphere Fault Tolerance, NFS, and iSCSI leverage the default TCP/IP stack

71
VMkernel Networking (cont.)
• Teaming recommendations
  – LACP (802.3ad) is a good option for optimal use of available bandwidth and quick convergence
  – Load-based teaming is also a good option for non-VXLAN VMkernel traffic where there is a desire to simplify configuration and reduce dependencies on the physical network, while still using multiple uplinks effectively
  – VMware NSX introduces support for multiple VTEPs per host with VXLAN
  – 2x 10 GbE network adapters per server is common
  – Network partitioning technologies increase complexity
• Overlay networks are used for VMs
  – Use VLANs for VMkernel interfaces to avoid circular dependencies
• Considerations for VMware vSphere Auto Deploy™
  – DHCP relay and IP helper support, EtherChannel

72
VMkernel Networking (cont.)
Routed uplinks (ECMP) from the Layer 3 ToR switch, with one SVI and one rack-local VLAN per VMkernel function, delivered to the vSphere host (ESXi) over an 802.1Q VLAN trunk:
• SVI 66: 10.66.1.1/26 – VLAN 66 – Mgmt VMkernel 10.66.1.25/26, default gateway 10.66.1.1
• SVI 77: 10.77.1.1/26 – VLAN 77 – vMotion VMkernel 10.77.1.25/26, gateway 10.77.1.1
• SVI 88: 10.88.1.1/26 – VLAN 88 – VXLAN VMkernel 10.88.1.25/26, default gateway 10.88.1.1
• SVI 99: 10.99.1.1/26 – VLAN 99 – Storage VMkernel 10.99.1.25/26, gateway 10.99.1.1
73
QoS in Data Center Designs
• Virtualized environments carry different types of traffic
• The hypervisor is a trusted boundary and sets the respective QoS values
• The physical switching infrastructure trusts these values; no reclassification is necessary at the server-facing port of a leaf
• Under congestion, the QoS values are used to decide which traffic should be queued (and potentially dropped) or prioritized

[Figure: QoS markings are set or trusted at the hypervisor (802.1Q), trusted at the leaf, and neither marked nor reclassified at the spine]

74
Management and Edge Rack Requirements
• Management racks
  – Layer 2 between racks is needed for management workloads such as vCenter Server, NSX Controller nodes, NSX Manager, and IP storage
  – VLANs for management VMs and VMkernel VLANs
• Edge racks
  – Layer 2 between racks is needed for external 802.1Q VLANs toward the WAN/Internet
  – VLANs for edge VMs to the physical network and VMkernel VLANs

75
vSphere Network Addressing Benefits
• To keep the number of static routes manageable as the fabric scales, larger address blocks could be allocated to the VMkernel functions (a carving sketch follows this list)
– 10.66.0.0/16 for Management
– 10.77.0.0/16 for VMware vSphere Storage vMotion
– 10.88.0.0/16 for VXLAN
– 10.99.0.0/16 for Storage
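A minimal sketch of that carving, using Python's ipaddress module: each function's /16 is divided into per-rack /26 subnets, matching the /26 host addresses shown on the routed-uplink slide. The block sizes and function names come from these slides; the carving itself is illustrative.

# Sketch: per-rack /26 VMkernel subnets from the per-function /16 blocks.
from ipaddress import ip_network
from itertools import islice

FUNCTION_BLOCKS = {
    "management": ip_network("10.66.0.0/16"),
    "vmotion":    ip_network("10.77.0.0/16"),
    "vxlan":      ip_network("10.88.0.0/16"),
    "storage":    ip_network("10.99.0.0/16"),
}

def rack_subnet(function: str, rack_id: int, prefix: int = 26):
    """Return the rack_id-th /26 (1-based) of a function's /16 block."""
    subnets = FUNCTION_BLOCKS[function].subnets(new_prefix=prefix)
    return next(islice(subnets, rack_id - 1, None))

for fn in FUNCTION_BLOCKS:
    print(fn, rack_subnet(fn, rack_id=2))
# management 10.66.0.64/26, vmotion 10.77.0.64/26, and so on; the leaf switch
# then advertises each rack subnet into the fabric's routing protocol.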

76
vSphere Network Addressing Benefits (cont.)
• Dynamic routing protocols (OSPF, BGP) are used to advertise the new capacity to the rest of
the fabric
• Provides scalability and predictable network addressing, based on number of ESXi hosts per
rack or cluster
• Reduces VLAN usage by reusing VLANs within a rack (Layer 3) or POD (Layer 2)

77
VMware NSX in a
Multi-vCenter Environment
VMware NSX Logical Networks (6.0/6.1)

[Figure: three independent vCenter/NSX Manager pairs (vCenter A, B, and C), each with its own NSX Controller cluster, local vCenter inventory, distributed logical router, and logical switches]

• A single NSX domain can span more than one site

79
VMware NSX Logical Networks (6.2)

[Figure: with VMware NSX 6.2, a distributed logical router and logical switches can span the vCenter/NSX Manager pairs (vCenter A, B, and C), each of which still has its own NSX Controller cluster and local vCenter inventory]

• A single NSX domain can span more than one site

80
Multi-vCenter Components and Terminology
• Multi-vCenter instance objects use the term Universal and include
– Universal Sync
– Universal Controller Cluster (UCC)
– Universal Transport Zone (UTZ)
– Universal Logical Switch (ULS)
– Universal Distributed Logical Router (UDLR)
– Universal IP Set/MAC Set
– Universal Security Group

• NSX Manager instances have the following roles


– Standalone
– Primary
– Secondary

• Egress optimized routing includes


– Locale ID (metadata that describes location)

81
Multi-vCenter Logical Networks (VMware NSX 6.2)

[Figure: universal objects are configured (NSX UI & API) on the primary vCenter & NSX Manager A and synchronized to the secondary NSX Managers (B through H); a single Universal Controller Cluster serves all of them, and the Universal Distributed Logical Router, Universal Logical Switches, and Universal DFW span every local vCenter inventory]

82
Multi-vCenter Logical Networks (VMware NSX 6.2) (cont.)
• Universal Controller Cluster size remains at 3 nodes
• NSX Controller instances always run within a single vCenter Server
and single site
• Unique Universal Segment ID pool
– Makes ULS VNIs consistent across all vCenter instances
– UDLR IDs are also automatically derived from this pool

• The Universal Controller Cluster continues to manage Local VXLAN/DLR objects in addition to universal
ones
• Transport Zone determines whether logical switches are local or universal
• Segment ID pool is now required for any DLR, even without VXLAN LIFs
• VMware NSX 6.2 distributed logical routing supports local egress

83
Multi-vCenter Distributed Firewall (NSX 6.2)
• NSX for vSphere 6.2 supports multi-vCenter distributed firewall for centralized management of firewall rules for universal objects
• This is performed through Universal Sections in the DFW rule table
• These sections are automatically synchronized to all secondary NSX Manager instances
• The Universal Section is managed on the primary NSX Manager and is read-only on the secondary NSX Managers
• Universal DFW rules are based on IP/MAC sets because vCenter inventory remains local to an NSX Manager
• Supports both VXLAN- and VLAN-backed deployments
• vSphere vMotion across vCenter Server instances with Universal DFW policy is fully supported

84
Multi-vCenter Use Cases
• Increase the span of VMware NSX logical networks to enable
– Capacity pooling across multiple vCenter Server instances
– Non-disruptive migrations
– Cloud and VDI deployments

[Figure: three-tier workloads (Web, App, DB) distributed across vCenter Server A, B, and C]

85
Multi-vCenter Use Cases (cont.)
• Centralized security policy management
– One place to manage firewall rules
– Rules enforced regardless of VM location and vCenter Server

Universal Firewall Policy

86
Multi-vCenter Use Cases (cont.)
• VMware NSX 6.2 supports new mobility boundaries in vSphere 6
– Enable cross vCenter, cross virtual switch and long-distance vSphere vMotion
– On existing networks, with no new hardware required

[Figure: cross-vCenter vSphere vMotion between vCenter-A/VDS-A and vCenter-B/VDS-B over the Layer 3 VXLAN transport and vMotion networks, with up to 150 ms RTT]

87
Multi-vCenter Use Cases (cont.)
• Enhance VMware NSX multi-site support
– Active-Active (From Metro to 150ms RTT)
– Disaster Recovery

[Figure: active-active multi-site deployment (up to 150 ms RTT) with North-South connectivity at each site; vCenter-A, NSX Manager A, and SRM A on one side, vCenter-B, NSX Manager B, and SRM B on the other, with Web/App/DB workloads placed at both sites]

88
Multi-vCenter with VMware NSX Key Benefits
• Provides a comprehensive solution covering L2, L3 and firewalling
– Decoupled from underlying physical network
– Fully integrated software-based solution, not hardware centric

• No need to span L2 for cross vCenter vMotion or workload migration

• In-place upgrade and migration for existing VMware NSX deployments

• Integration with other VMware SDDC components

• Enhances VMware NSX multi-site and disaster recovery capabilities

• Addresses issue of vCenter Server being a scale boundary

89
Integration

90
VMware NSX for vSphere vRealize Orchestrator Plug-In 1.0.0 for
vRealize Automation
• VMware vRealize Orchestrator™ plug-in built to deliver networking functionality in VMware
vRealize Automation™
• The plug-in exposes scriptable APIs and vRealize Orchestrator workflows that are invoked from
vRealize Automation for networking use cases
• The plug-in also exposes an inventory of existing objects in VMware NSX. These include
– Data centers
– Edges
– Security groups
– Transport zones
– Security tags
– Security policies
– Logical switches (virtual wires)

• The inventory exposed by the plug-in is used by vRealize Automation during data collection

91
NSX for vSphere vRealize Orchestrator Plug-In 1.0.0 for vRealize
Automation – Use Cases
• The following are vRealize Automation networking use cases for the plug-in
– Provisioning logical switches to realize routed/NAT/private networks
– Efficient routing through Virtual Distributed Router (VDR) consumption
– Connect/disconnect vRealize Automation routed networks to/from VDR
– Application isolation through security policy consumption using service composer APIs
– Applying security policies on application components to cater to specific use cases, such as a Web
security policy that enables HTTP and HTTPS traffic
– Provisioning services edge gateway for an application to consume features such as NAT, DHCP, and
LB

92
PAN Partner Redirection
New tab in
firewall UI

Manage all traffic redirection policy from a single pane of glass

• Capability to define src=ANY to dst=ANY traffic redirection to third-party vendor


• Extends src and dst fields to vCenter objects (no longer limited to Security
Groups)

93
The Need for a Comprehensive Security Solution
Sophisticated security challenges
• Applications are not linked to ports and protocols
• VM-level zoning without VLAN/VXLAN dependencies
• Modern malware

NSX for vSphere platform (NSX Distributed Firewall)
• Line-rate access control traffic filtering
• Distributed enforcement at the hypervisor level
• Distributed user and device population

Palo Alto Networks next-generation security (Next-Generation Firewall)
• Visibility and safe application enablement
• User, device, and application-aware policies
• Protection against known and unknown threats
94
NSX for vSphere / PAN Use Case –
PCI Zone Segmentation
[Figure: a Panorama-managed deployment with users reaching the SDDC from the Internet; Dev, Prod, and PCI zones are protected by the NSX DFW and PAN VM-Series firewalls]

• PAN provides intrusion prevention (IPS), application- and user-based access control, and malware prevention

SDDC: Software-Defined Data Center

95


NSX for vSphere / PAN Use Case –
VDI Internet Access

[Figure: virtual desktop tiers and the Web/App/DB back end in the SDDC; VDI protocols and Web browsing toward the Internet are inspected by the NSX DFW and the PAN VM-Series firewall]
96
NSX for vSphere / PAN Use Case –
Secure Web DMZ
[Figure: Panorama-managed SDDC with Web DMZ, App, and DB tiers; Internet users reach the Web DMZ]

• Line-rate processing of traffic allowed to enter the SDDC (NSX DFW)
• Web and other protocols receive deep inspection (PAN VM-Series firewall)

97
NSX for vSphere / PAN Deployment Model
• NSX for vSphere is the infrastructure for the SDDC
• DFW kernel module required
• Easy connectivity between the VM-Series firewall and guest VMs
• Dynamic update of threat signatures to the PAN VM-Series firewall (from updates.paloaltonetworks.com)
• Very simple deployment model!

[Figure: a management cluster hosting vCenter Server, NSX Manager, Panorama, and the VM-Series management interfaces (MGMT port-group, L2/L3 switch); in each compute cluster, every ESXi host on the VDS runs the DFW and a PAN VM-Series firewall]

98
NSX for vSphere / PAN End-to-End Workflow

1. Register Panorama with NSX Manager
2. Deploy PAN VM-Series firewall appliances (per ESXi cluster)
3. Consume the service!
   – NSX for vSphere Security Groups map to PAN Dynamic Address Groups
   – Dynamically add/delete VMs, hosts, and clusters

The NSX for vSphere simple operational model is now extended to PAN services.

99
NSX for vSphere – F5 Solution Overview

Key driver: operational simplicity
• Leverage advanced F5 ADC options inside the NSX for vSphere model
• Leverage NSX service insertion capabilities to integrate F5 BIG-IQ/BIG-IP as an NSX ADC service
• Enable a choice of virtual or physical F5 appliances within NSX for vSphere

Components required: NSX for vSphere, F5 BIG-IQ, F5 BIG-IP
Note: users consume LB services through the NSX for vSphere UI or API only

100
NSX for vSphere – F5 Solution Overview (cont.)

Features
• NSX for vSphere integrates with F5 BIG-IQ and BIG-IP
• F5 admin-defined iApps are published to NSX Manager as ADN service templates
• BIG-IP VEs are automatically deployed, licensed, and configured
• Pre-deployed BIG-IP physical appliances are automatically configured
• Users can consume F5 iApps from the NSX for vSphere UI or API (for example, virtual IP 172.168.1.1, member pool 10.0.0.1 and 10.0.0.2, ADN template "Web Gold")

Benefits
• Compatible with all NSX for vSphere features
• Compatible with all F5 BIG-IQ and BIG-IP features
• Seamless support for virtual networks and traditional networking with VLANs
• Future support for any CMP, including vRealize Automation
• Familiar workflows for all teams (in NSX for vSphere and in F5 BIG-IQ)
• Supports virtual and physical form factors of F5 appliances

[Figure: the VMware NSX network virtualization platform – logical firewall, logical load balancer, logical VPN, logical L2, logical L3 – supporting any application without modification, any cloud management platform, any hypervisor, and any network hardware]
101
Solution Details and User Personas
Cloud Admin
• Provisions apps
• Defines the app network
• Specifies the desired NSX Edge to use for LB (the F5 iApp to NSX Edge mapping is pre-defined)

NSX for vSphere Admin
• Pre-provisions NSX Edge instances for the Cloud Admin
• Enables F5 integration and associates F5 iApps to the desired NSX Edge instances

F5 Admin
• Registers BIG-IQ to NSX
• Configures and/or publishes F5 iApps to NSX
• Deploys F5 virtual or physical editions

Notes
• VXLAN-to-VLAN mapping is automated (for example, bridging VXLAN 5001/6000 to VLAN 100 in the edge rack)
• NSX Edge exposes a data-driven UI used for filling iApp templates at the time of app provisioning
• The NSX for vSphere API can be used to automate provisioning
102
Questions

103
VMware NSX for vSphere 6.2
Knowledge Transfer Kit

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304

Tel: 1-877-486-9273 or 650-427-5000


Fax: 650-427-5001
