VMware NSX for vSphere 6.2 Knowledge Transfer Kit
Architecture Overview
Problem Statement
Traditional Networking is Hard!
Physical Networking Configuration Tasks
• Initial configuration
– Multichassis LAG
– Routing configuration
– Switch virtual interfaces (SVIs) / Router virtual interfaces (RVIs)
– Virtual Router Redundancy Protocol (VRRP) / Hot Standby Router Protocol (HSRP)
– Spanning Tree Protocol (STP): instances/mappings, priorities, safeguards
• Recurring configuration
– SVIs/RVIs
– VRRP/HSRP
– Advertise new subnets
– Access lists (ACLs)
– VLANs
– Adjust VLANs on trunks
– VLAN-to-STP/Multiple Spanning Tree (MST) mappings
Physical Network Services Configuration Tasks
Configuration consistency!
Networking Before and After Server Virtualization
• Before
– 100s of physical servers
– Change the VLAN on a switch port to control server connectivity
– Features are dependent on hardware functionality (ASICs)
– Complexity with configuring network services
– Traffic flow is mostly North-South
• After
– 1,000s of VMs
– VLAN trunking configurations
– Different teams manage different network components
– Features are still dependent on hardware functionality
– Complexity of network services (firewalls and so forth) increased because of the number of servers
– Data center traffic flow is now predominately East-West, which the network is not designed for
– Reduced visibility of network endpoints (policy enforcement, monitoring, and so forth)
Network Utilization
[Figure: MAC address table entries, ARP entries, VLAN usage, and STP load on the physical L2/L3 network as the number of VMs and tenants grows, compared before NV and with NV]
Network Utilization (cont.)
[Figure: the same comparison at a larger VM count – MAC address table entries, ARP entries, VLAN usage, and STP load before NV and with NV]
Traditional security is no longer enough!
The Pressure on Security
[Figure: a new app is requested and a VM is provisioned]
Everything Works Well on Day One
[Figure: Day 1 – a Finance application request leads to a SQL database server being provisioned behind the data center perimeter firewall; Day 2 – sensitive data is added to the new database VM]
Ideally, Every App Would Have Dedicated Resources
Manageability Necessitates Grouping
[Figure: VMs grouped into security zones and VLANs, identified by IP addresses such as 192.168.10.4, 192.168.10.12, 192.168.20.6, 192.168.20.11, …]
Today, Security is Tied to a Complex and Rigid Network Topology
Traditional Data Center Security
[Figure: traditional data center security – a perimeter firewall and IPS protect the DMZ containing Internet-facing servers (web, e-mail, DNS, and so on, plus the VMware Horizon® View™ Security Server for VDI); an internal firewall separates end-user computing/desktops (with antivirus) from the application infrastructure]
What is needed?
VMware NSX – A New Architectural Approach
[Figure: applications gain location independence]
The Next-Generation Networking Model
• Network and security services are now in the hypervisor
[Figure: the software stack – guest OS, vSwitch, and hypervisor – running on hardware]
Visibility
VMware NSX® is uniquely positioned to see everything
[Figure: NSX sits in the hypervisor, between the VMs above and the physical compute and physical network below]
Granular Control Becomes Possible
• Services that can be applied at the NSX vSwitch in the hypervisor include
– Firewall
– Data security
– Server activity monitoring
– VPN (IPsec, SSL)
– Intrusion prevention
– Antivirus
– Third-party services
Data Center Network Trends
Importance of the VMware NSX Virtual Switch
• The VMware NSX Virtual Switch™ is
– The first network hop
– The first aggregation point for VM traffic
• Best spot to
– Enforce policy
– Collect statistics
– Initiate and terminate monitoring
• Centrally controlled (same as the vSphere hypervisor)
• Feature-rich because it is x86-based
• Rides the performance curve of x86 architectures
So Why Not at the ToR?
• Limited VM visibility
– Tagging options [802.1Qbg/Virtual Ethernet Port Aggregator (VEPA) or 802.1Qbh/VN-Tag]
– Require coordination/configuration on each VM change
• Increasing VM density
– The ToR becomes a significant aggregation point
– A challenge for ASICs from a table/tunnel standpoint
• Automated configuration
– ToR configuration is vendor-specific
Data Center Network With Virtualization
• Why does virtualization require Layer 2 connectivity?
– VMware vSphere vMotion® is a non-disruptive operation, so the network address cannot change. Layer 3 would require an IP address change and new routing information, both of which are disruptive
– Many applications expect a Layer 2 network: broadcast traffic, application requirements, high availability, lower latency, and legacy needs
– VMkernel networks (storage, VMware vSphere Fault Tolerance) also typically require Layer 2 adjacency
• What is the impact to my network?
– VLAN sprawl and static allocation of VLANs
– Large failure domains
– Running out of VLANs in large networks (only 4096 VLAN IDs)
Fabric Options
• Network virtualization enables greater scale independent of fabric technology. NSX for vSphere works over any reliable IP network that supports a 1600-byte MTU (see the MTU sketch below)
• Layer 3 fabrics
– The most scalable technology
– Offer the best interoperability
• N-tier networks
– Also work fine
– Expensive and not ideal for greenfield deployments
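As a rough illustration of why a 1600-byte transport MTU is recommended, the sketch below adds up the VXLAN encapsulation overhead on top of a standard 1500-byte guest packet. The per-header byte counts are stated assumptions (outer IPv4 without options, no inner 802.1Q tag); the point is simply that encapsulation adds roughly 50 bytes, so 1600 leaves headroom.

```python
# Back-of-the-envelope VXLAN overhead relative to the transport network's IP MTU.
# Assumptions: outer IPv4 without options, no inner 802.1Q tag on the guest frame.

GUEST_IP_MTU = 1500  # default guest/VM MTU in bytes

overhead_bytes = {
    "inner_ethernet_header": 14,  # original guest Ethernet header carried in the tunnel
    "vxlan_header": 8,            # VXLAN header (24-bit VNI)
    "outer_udp_header": 8,
    "outer_ipv4_header": 20,
}

total_overhead = sum(overhead_bytes.values())            # 50 bytes in this model
required_transport_mtu = GUEST_IP_MTU + total_overhead   # 1550 bytes

print(f"VXLAN overhead: {total_overhead} bytes")
print(f"Minimum transport MTU for 1500-byte guest packets: {required_transport_mtu}")
# The 1600-byte transport MTU quoted above covers this with headroom.
```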
Fabric-Based Network Design
• Layer 2 fabric
– VLAN-based
– Larger Layer 2 domains and reliance on STP
– Comparatively limited in scalability (2-tier design)
– Generally, the industry is moving away from Layer 2 fabrics
[Figure: 2-tier design with the L2/L3 boundary at the aggregation layer, connected to the WAN/Internet]
Layer 3 Leaf-Spine Fabric Simplified Operations
• Initial configuration
– Multi-chassis LAG
– Routing configuration
– SVIs/RVIs
– VRRP/HSRP
– STP (instances/mappings, priorities, safeguards)
• Recurring configuration
– SVIs/RVIs
– VRRP/HSRP
– Advertise new subnets
– Access lists (ACLs)
– VLANs
– Adjust VLANs on trunks
– VLAN-to-STP/MST mappings
• Simplified 2-tier Layer 3 fabric with network virtualization
– Initial configuration: LACP, VLANs, VLAN-to-STP/MST mapping, infrastructure networks on uplinks and downlinks, routing protocols
– Recurring configuration: add VLANs on uplinks, add VLANs to server ports
Spine Switches
• The spine connects to the leaf switches
– Interfaces are configured as routed point-to-point Layer 3 links (for example, 10.99.1.0/31 with .1 and .2 at each end; see the addressing sketch below)
– Links between spine switches are not required
– In case of a spine-to-leaf link failure, the routing protocol reroutes traffic onto the alternate paths
– Aggregates all leaf nodes and provides connectivity between racks
[Figure: spine switches are Layer 3 only – route table entries and ARP entries on the downlinks to each leaf, with no MAC table consumption]
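To make the point-to-point addressing concrete, here is a small sketch that carves /31 subnets for spine-to-leaf links out of a parent block. The parent prefix (echoing the 10.99.1.0 example above) and the spine/leaf counts are illustrative assumptions, not values from the slide.

```python
import ipaddress

# Illustrative only: carve /31 point-to-point links for a leaf-spine fabric
# out of an assumed parent block.
parent = ipaddress.ip_network("10.99.1.0/24")
p2p_links = parent.subnets(new_prefix=31)   # generator of /31 subnets

num_spines, num_leaves = 2, 4               # assumed fabric size
for spine in range(1, num_spines + 1):
    for leaf in range(1, num_leaves + 1):
        link = next(p2p_links)
        spine_ip, leaf_ip = link[0], link[1]   # the two addresses of the /31
        print(f"spine{spine} <-> leaf{leaf}: {link}  spine={spine_ip}  leaf={leaf_ip}")
```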
Leaf Switches
• Server-facing ports have minimal configuration
• Can use Link Aggregation Control Protocol (LACP)
– Applies to same-speed interfaces
– Active/active bandwidth
– Fast failover times
• 802.1Q trunks with a small set of VLANs
[Figure: Layer 3 uplinks above the L2/L3 boundary, with an 802.1Q VLAN trunk down to each hypervisor]
Analogies of Logical Networking Constructs
Logical Constructs Analogies
• Layer 2
[Figure: a Layer 2 segment connecting VM1 and VM2, shown next to its physical analogy]
Logical Constructs Analogies
• Layer 3
[Figure: Layer 3 routing between VM1, VM2, and VM3, shown next to its physical analogy]
Logical Constructs Analogies
• Layer 4 – Layer 7 services
– Logical port firewall
– Logical firewall (micro-segmentation, transparent firewall)
[Figure: the physical analogy – a firewall in front of a DMZ]
Logical Constructs Analogies
Layer 4 - Layer 7 Services (cont.)
Load Balancer
Services in Network Virtualization Space
• Security-related
– Port ACLs, router ACLs (allow, deny)
– Port security
– ARP spoof protection
– IP spoof protection
• Troubleshooting
– Port and port-to-port statistics
– Port-mirror [Encapsulated Remote Switched Port Analyzer (ERSPAN)]
– Port-to-port connectivity validation tool
• Other services
– QoS (marking, policing)
– NAT
Connecting to Physical Infrastructure at Layer 2
[Figure: a Layer 2 gateway connects VMs on a logical switch to the physical network – either as a gateway service or as a hardware VTEP switch – over 802.1Q VLANs]
Mapping of Logical to Physical Space
Logical Topologies Mapped to Physical
[Figure: a logical view of VM1, VM2, and VM3 connected to the WAN/Internet, alongside the corresponding physical view]
Networking Problems That NSX for vSphere Solves
• Seamless Layer 2 connectivity over Layer 3 networks with flexible designs
• Isolation of tenants no longer dependent on VLANs
• VM attributes (VLANs/IPs/MAC addresses) not exposed to or coupled with the infrastructure
• Reduces the frequency of changes to the physical network
• No network configuration changes for tenant/application networking
• Network administrators focus on maintaining a reliable transport network instead of dynamic VM networking
• Distributes network services while providing centralized management
• Leverages the existing topology while planning a transition to a new fabric
• Extends logical networks to the physical network
• Provides new tools for automation, policy enforcement, and VM visibility
Network Virtualization Virtual Space
[Figure: services delivered in the virtual space above the physical fabric – routing/NAT, security/firewalling, QoS, port mirroring, counters]
NSX for vSphere Logical Switching
[Figure: Logical Switch 1, Logical Switch 2, and Logical Switch 3 stretched over a Layer 3 physical network]
• Design challenges
– Multi-tenant or application segmentation
– VM mobility requires Layer 2 everywhere
– Large Layer 2 physical network sprawl and STP issues
– Hardware memory (MAC, FIB) table limits
• VMware NSX benefits
– Scalable multi-tenancy across the data center
– Enables Layer 2 over Layer 3 infrastructure using overlay networks
– Logical switches span physical hosts and network switches
• Logical switching scales the network
NSX for vSphere Logical Routing
Network Virtualization Design Attributes
• Benefits of virtualization
– Decouple: independent address spaces, topology independence
– Reproduce: workloads act as if they had a physical network to themselves
– Automate: programmatic service provisioning
• Physical fabric requirements
– Simplicity: uniform, one-time configuration
– Scalability: scale-out architecture (spine/leaf), low oversubscription, multipathing and efficient use of bandwidth
– Fault tolerance: dynamically route around failures
– Traffic differentiation: enforce QoS
VMware NSX for vSphere Component Overview
NSX for vSphere Components
• Consumption
– Self-service portal
– Cloud management
– VMware vRealize® Automation™
• Management plane
– NSX Manager paired with VMware vCenter Server® (message bus agent to the hosts)
– Single point of configuration
– REST API and UI interface
• Data plane
– NSX Virtual Switch: VMware vSphere Distributed Switch™ plus hypervisor kernel modules on the ESXi host (VXLAN, distributed logical router, distributed firewall)
– Distributed network edge with line-rate performance
– VMware NSX Edge™ services gateway: VM form factor, data plane for North-South traffic, routing and advanced services
Components – NSX Manager
• NSX for vSphere centralized management plane
• 1:1 mapping between a VMware NSX Manager™ and VMware vCenter Server
• Up to 8 NSX Manager instances in a multi-vCenter configuration (from VMware NSX 6.2)
Components – NSX Manager (cont.)
• Configures the NSX Controller cluster through a REST API and the hosts through a message bus (see the API sketch below)
• Host configuration includes distributed firewall and NSX Edge nodes
• Generates certificates to secure control plane communications
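Since NSX Manager exposes a REST API, a minimal sketch of querying it from Python follows. The host name, credentials, and the exact resource path are illustrative assumptions; the authoritative endpoints are documented in the NSX for vSphere API guide.

```python
import requests

# Minimal sketch of querying the NSX Manager REST API (NSX for vSphere uses
# basic authentication over HTTPS and returns XML). Host, credentials, and the
# controller endpoint path shown here are assumptions for illustration only.
NSX_MANAGER = "https://nsxmgr.example.local"
AUTH = ("admin", "changeme")

resp = requests.get(
    f"{NSX_MANAGER}/api/2.0/vdn/controller",   # assumed path: list NSX Controller nodes
    auth=AUTH,
    verify=False,          # lab-only: NSX Manager ships with a self-signed certificate
    headers={"Accept": "application/xml"},
)
resp.raise_for_status()
print(resp.text)           # XML description of the controller cluster
```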
Components – NSX Controller
• Provides the control plane that distributes VXLAN and logical routing network information to the ESXi hosts
• NSX Controller nodes are clustered for scale-out and high availability
• Network information is sliced across the nodes in an NSX Controller cluster
• Removes the dependency on multicast routing/PIM in the physical network
• Suppresses ARP broadcast traffic in VXLAN networks
[Figure: NSX Controller roles – VXLAN (MAC table, ARP table, VTEP table), logical router, and directory service]
NSX Controller – Master Election
• Each role needs a master
• Masters for different roles can sit on different nodes
• Uses a Paxos-based algorithm (a toy quorum illustration follows below)
• Guarantees correctness (not necessarily convergence)
[Figure: a master is elected for each role, such as the VXLAN role, across the controller nodes]
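The Paxos-based election itself is well beyond a slide-sized example; the toy sketch below only illustrates the quorum idea the bullets rely on: a master per role is chosen only when a majority of nodes is available, and masters for different roles can land on different nodes. It is not the actual NSX election logic, and the node and role names are made up.

```python
# Toy illustration of per-role master election with a majority (quorum) requirement.
# NOT the Paxos implementation NSX uses; it only shows that each role gets one
# master, that masters can sit on different nodes, and that no master is chosen
# without a quorum of live nodes.
CLUSTER = ["controller-1", "controller-2", "controller-3"]
ROLES = ["vxlan", "logical-router", "directory-service"]

def elect_masters(live_nodes):
    quorum = len(CLUSTER) // 2 + 1
    if len(live_nodes) < quorum:
        raise RuntimeError("no quorum: cluster cannot elect masters")
    masters = {}
    for i, role in enumerate(ROLES):
        # deterministic pick among live nodes; the real election is consensus-based
        masters[role] = sorted(live_nodes)[i % len(live_nodes)]
    return masters

print(elect_masters({"controller-1", "controller-2", "controller-3"}))
# If controller-3 fails, the surviving nodes still form a quorum and re-elect:
print(elect_masters({"controller-1", "controller-2"}))
```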
NSX Controller – Failure Scenario
• Node failure triggers election for roles where the master is no longer available
• A new node is promoted to master after the election process
NSX Controller – Distribution
• Problem
– Need to dynamically distribute workload across all available cluster nodes
– Redistribute workload when new cluster member is added
– Ability to sustain the failure of any cluster node with zero impact
– Do all of the above transparently to the application
• The solution is slicing
NSX Controller – Slicing
1. For a given role, create N slices
2. Define application objects
3. Assign objects to slices
[Figure: application objects assigned to slices 1 through 9]
NSX Controller – Slicing (cont.)
1. For a given role, create N slices
2. Define application objects
3. Assign objects to slices
4. Sprinkle slices across nodes (see the sketch below)
[Figure: the nine slices of the VXLAN function spread across the controller nodes – slices 1, 4, 7 on one node, 2, 5, 8 on another, and 3, 6, 9 on the third]
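A compact way to picture steps 1 through 4 is a hash-based assignment of objects to slices and a round-robin spread of slices over nodes. The sketch below is an illustration of the idea only, not the controller's actual algorithm; the object names, slice count, and node names are assumptions.

```python
# Illustration of the slicing idea (not NSX's actual algorithm): objects are hashed
# into a fixed number of slices, and slices are spread round-robin across the nodes,
# so losing a node only moves whole slices rather than individual objects.
import hashlib

NODES = ["controller-1", "controller-2", "controller-3"]
NUM_SLICES = 9

def slice_of(object_id: str) -> int:
    digest = hashlib.md5(object_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SLICES          # step 3: assign object to a slice

def node_for_slice(s: int, nodes=NODES) -> str:
    return nodes[s % len(nodes)]                 # step 4: sprinkle slices across nodes

for vni in ["vxlan-5001", "vxlan-5002", "vxlan-5003", "vxlan-5004"]:
    s = slice_of(vni)
    print(f"{vni} -> slice {s} -> {node_for_slice(s)}")

# Redistribution after a node failure: only the failed node's slices move.
surviving = ["controller-1", "controller-2"]
for s in range(NUM_SLICES):
    if node_for_slice(s) == "controller-3":
        print(f"slice {s} re-assigned to {node_for_slice(s, surviving)}")
```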
NSX Controller – Redistribution
1. Failure of Node 3
2. Master for that role re-assigns the slices
[Figure: after node 3 fails, its slices (3, 6, 9) of the VXLAN and logical router functions are re-assigned to the surviving nodes]
NSX Controller – Deployment
• NSX Controller nodes are deployed as virtual appliances
– 4 vCPU, 4 GB of RAM and 20 GB of disk space per node
– Modifying settings is not supported
• The NSX Controller password is defined during deployment of the first node and is consistent across all nodes
• NSX Controller nodes must be deployed in the same vCenter Server instance that NSX Manager is connected to
• A cluster size of three NSX Controller nodes is the only supported configuration
• Controller interaction is through the CLI, while configuration operations are also available through the NSX for vSphere API
Components – User World Agent
• User World Agent is a TCP (SSL) client that communicates with the NSX Controller using the control plane
protocol
– Can connect to multiple NSX Controllers
– Mediator between the VMware ESXi™ hypervisor kernel modules and NSX Controller instances
– Communicates with message bus agent to retrieve information from NSX Manager
– Runs as a service daemon on ESXi: netcpa
– Logs to: /var/log/netcpa.log
– NSX distributed firewall has a separate service daemon: vsfwd
[Figure: the user world agent mediates between the VXLAN and logical router kernel modules on the ESXi host and the NSX Controller instances]
Components – NSX Virtual Switch and NSX Edge
[Figure: on the ESXi host, the NSX Virtual Switch consists of the vSphere Distributed Switch plus hypervisor kernel modules (vSphere VIBs) for VXLAN, logical routing, and the distributed firewall in the data plane; the NSX logical router control VM and the NSX Edge services gateway are managed from the management and control planes]
Building the NSX for vSphere Platform
• Prerequisites
– Physical network: VXLAN transport network and MTU
– vCenter Server and ESXi 5.5
– vSphere Distributed Switch
• Deploy NSX for vSphere on the virtual infrastructure
– NSX management components and NSX Edge
– Deploy the NSX Controller cluster
• Consumption (recurring)
– Deploy logical switches per tier
– Programmatic virtual network deployment
NSX Controller Interaction
[Figure: interaction between NSX Manager and the NSX Controller cluster]
Component Interaction – Configuration
[Figure: NSX Manager, vCenter Server, the NSX Controller cluster, and the NSX Edge services gateway]
1. Configuration (logical switches, distributed logical routers) – an API sketch follows below
2. Host configuration (logical switches, distributed logical routers)
3. Service configuration (load balancer, firewall, VPN, and so forth)
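For example, creating a logical switch is a single call against NSX Manager, which then drives the controller and host configuration described above. The sketch below is illustrative: the transport zone (scope) ID, host name, and credentials are assumptions, and the resource path and XML schema should be verified against the NSX for vSphere API guide.

```python
import requests

# Illustrative sketch: ask NSX Manager to create a logical switch (virtual wire)
# in an existing transport zone. The scope ID "vdnscope-1", host name, and
# credentials are assumptions for this example only.
NSX_MANAGER = "https://nsxmgr.example.local"
AUTH = ("admin", "changeme")

payload = """
<virtualWireCreateSpec>
  <name>web-tier</name>
  <description>Logical switch for the web tier</description>
  <tenantId>tenant-1</tenantId>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/vdnscope-1/virtualwires",  # assumed path
    auth=AUTH,
    data=payload,
    verify=False,                               # lab-only: self-signed certificate
    headers={"Content-Type": "application/xml"},
)
resp.raise_for_status()
print("New logical switch ID:", resp.text)
```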
NSX for vSphere Control Plane Security
• NSX for vSphere control plane communication occurs over the management network
• The control plane is protected by
– Certificate-based authentication
– SSL
• NSX Manager generates self-signed certificates for each of the ESXi hosts and NSX
Controllers
• These certificates are pushed to the NSX Controller and ESXi hosts over secure channels
• Mutual authentication occurs by verifying these certificates
VXLAN Control Plane Security
[Figure: NSX Manager generates certificates (step 1) to secure the control plane]
[Figure: data center layout – multiple vCenter Server instances (each up to the maximum number of VMs supported by vCenter), with compute racks, infrastructure racks (storage, vCenter Server, and the cloud management system), and edge racks (North/South traffic) connected to the WAN/Internet]
vSphere Scalability
• Cluster sizing
– VMware vSphere High Availability 5.x: 32 hosts
– VMware vSphere High Availability 6.0: 64 hosts
• Storage
– VMware vSphere Storage APIs - Array Integration Atomic Test and Set (ATS) removes SCSI reservation constraints
on datastore sizing
• Virtual machines
– 10,000 powered on VMs per vCenter Server
• Networking
– NSX for vSphere allows scaling of the network independent of vCenter Server
Recap – vCenter Scale Boundaries
• vCenter Server: 10,000 powered-on VMs*, 1,000 ESXi hosts, 128 vSphere Distributed Switch instances
• Data center object: maximum of 500 hosts
• Cluster: maximum of 62 hosts
• Manual vSphere vMotion across these boundaries
NSX for vSphere Scale Boundaries
[Figure: a cloud management system spanning multiple vCenter Server instances, with manual vSphere vMotion between them]
VMkernel Networking
• Multi-instance TCP/IP stack
– Introduced with vSphere 5.5 and leveraged by VXLAN and the NSX Virtual Switch transport network
• Separate routing table, ARP table, and default gateway per stack instance
• Provides increased isolation and reservation of networking resources such as sockets, buffers, and heap memory
• Enables VXLAN VTEPs and vSphere vMotion VMkernel interfaces to use a gateway independent from the default TCP/IP stack
• Management, vSphere Fault Tolerance, NFS, and iSCSI use the default TCP/IP stack
VMkernel Networking (cont.)
• Teaming recommendations
– LACP (802.3ad) is a good option for optimal use of available bandwidth and quick convergence
– Load-based teaming is also a good option for non-VXLAN VMkernel traffic where there is a desire to simplify configuration and reduce dependencies on the physical network, while still using multiple uplinks effectively
– VMware NSX introduces support for multiple VTEPs per host with VXLAN
– 2x 10 GbE network adapters per server is common
– Network partitioning technologies increase complexity
VMkernel Networking (cont.)
[Figure: routed uplinks (ECMP) on the Layer 3 ToR switch with SVI 66 (10.66.1.1/26), SVI 77 (10.77.1.1/26), SVI 88 (10.88.1.1/26), and SVI 99 (10.99.1.1/26); the span of these VLANs stays within the rack, carried on an 802.1Q trunk down to each hypervisor]
Management and Edge Rack Requirements
• Management racks
– Layer 2 between racks is needed for management workloads such as vCenter Server, NSX Controller nodes, NSX Manager, and IP storage
• Edge racks
– Layer 2 between racks is needed for external 802.1Q VLANs
[Figure: leaf switches with an L2/L3 boundary – the management racks carry VLANs for management VMs and VMkernel VLANs; the edge racks carry VMkernel VLANs and VLANs connecting edge VMs to the physical network and the WAN/Internet]
vSphere Network Addressing Benefits
• To keep the number of static routes manageable as the fabric scales, larger address blocks
could be allocated to the VMkernel functions
– 10.66.0.0/16 for Management
– 10.77.0.0/16 for VMware vSphere Storage vMotion
– 10.88.0.0/16 for VXLAN
– 10.99.0.0/16 for Storage
vSphere Network Addressing Benefits (cont.)
• Dynamic routing protocols (OSPF, BGP) are used to advertise the new capacity to the rest of
the fabric
• Provides scalability and predictable network addressing, based on number of ESXi hosts per
rack or cluster
• Reduces VLAN usage by reusing VLANs within a rack (Layer 3) or POD (Layer 2); see the addressing sketch below
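As an illustration of how such /16 blocks stay manageable as racks are added, the sketch below carves a per-rack /24 out of each function's /16. The /24-per-rack choice and the rack numbering are assumptions; the per-rack subnets are what the fabric's routing protocol can then advertise predictably.

```python
import ipaddress

# Illustrative carving of per-rack subnets from the per-function /16 blocks listed
# above. The /24-per-rack choice and rack numbering are assumptions for the sketch.
FUNCTION_BLOCKS = {
    "management": ipaddress.ip_network("10.66.0.0/16"),
    "vmotion":    ipaddress.ip_network("10.77.0.0/16"),
    "vxlan":      ipaddress.ip_network("10.88.0.0/16"),
    "storage":    ipaddress.ip_network("10.99.0.0/16"),
}

def rack_subnet(function: str, rack_id: int) -> ipaddress.IPv4Network:
    """Return the rack_id-th /24 of the given function's /16."""
    block = FUNCTION_BLOCKS[function]
    return list(block.subnets(new_prefix=24))[rack_id]

for function in FUNCTION_BLOCKS:
    print(f"rack 7 {function:10s}: {rack_subnet(function, 7)}")
# rack 7 management: 10.66.7.0/24, rack 7 vmotion: 10.77.7.0/24, and so on.
```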
VMware NSX in a Multi-vCenter Environment
VMware NSX Logical Networks (6.0/6.1)
[Figure: in NSX 6.0/6.1, logical networks are scoped to a single vCenter Server and NSX Manager pair]
VMware NSX Logical Networks (6.2)
[Figure: in NSX 6.2, universal logical networks span multiple vCenter Server and NSX Manager pairs]
Multi-vCenter Components and Terminology
• Multi-vCenter instance objects use the term Universal and include
– Universal Sync
– Universal Controller Cluster (UCC)
– Universal Transport Zone (UTZ)
– Universal Logical Switch (ULS)
– Universal Distributed Logical Router (UDLR)
– Universal IP Set/MAC Set
– Universal Security Group
Multi-vCenter Logical Networks (VMware NSX 6.2)
[Figure: a Universal Controller Cluster and universal DFW spanning vCenter & NSX Manager A (primary) through vCenter & NSX Manager H (secondary instances)]
Multi-vCenter Logical Networks (VMware NSX 6.2) (cont.)
• Universal Controller Cluster size remains at 3 nodes
• NSX Controller instances always run within a single vCenter Server
and single site
• Unique Universal Segment ID pool
– Makes ULS VNIs consistent across all vCenter instances
– UDLR IDs are also automatically derived from this pool
• The Universal Controller Cluster continues to manage Local VXLAN/DLR objects in addition to universal
ones
• Transport Zone determines whether logical switches are local or universal
• Segment ID pool is now required for any DLR, even without VXLAN LIFs
• VMware NSX 6.2 distributed logical routing supports local egress
Multi-vCenter Distributed Firewall (NSX 6.2)
• NSX for vSphere 6.2 supports multi-vCenter distributed
firewall for centralized management of firewall rules for
universal objects
• This is performed through Universal Sections in the DFW rule table
[Figure: universal firewall rules synchronized from the primary NSX Manager to the secondary NSX Manager instances]
Multi-vCenter Use Cases
• Increase the span of VMware NSX logical networks to enable
– Capacity pooling across multiple vCenter Server instances
– Non-disruptive migrations
– Cloud and VDI deployments
[Figure: three-tier applications (web, app, DB) placed across multiple vCenter Server instances]
Multi-vCenter Use Cases (cont.)
• Centralized security policy management
– One place to manage firewall rules
– Rules enforced regardless of VM location and vCenter Server
Multi-vCenter Use Cases (cont.)
• VMware NSX 6.2 supports new mobility boundaries in vSphere 6
– Enable cross vCenter, cross virtual switch and long-distance vSphere vMotion
– On existing networks, with no new hardware required
[Figure: vCenter-A with VDS-A and vCenter-B with VDS-B connected over a Layer 3 VXLAN transport and vSphere vMotion network]
Multi-vCenter Use Cases (cont.)
• Enhance VMware NSX multi-site support
– Active-Active (From Metro to 150ms RTT)
– Disaster Recovery
[Figure: vCenter-A and vCenter-B sites separated by up to 150 ms RTT]
Multi-vCenter with VMware NSX Key Benefits
• Provides a comprehensive solution covering L2, L3 and firewalling
– Decoupled from underlying physical network
– Fully integrated software-based solution, not hardware centric
Integration
VMware NSX for vSphere vRealize Orchestrator Plug-In 1.0.0 for
vRealize Automation
• VMware vRealize Orchestrator™ plug-in built to deliver networking functionality in VMware
vRealize Automation™
• The plug-in exposes scriptable APIs and vRealize Orchestrator workflows that are invoked from
vRealize Automation for networking use cases
• The plug-in also exposes an inventory of existing objects in VMware NSX. These include
– Data centers
– Edges
– Security groups
– Transport zones
– Security tags
– Security policies
– Logical switches (virtual wires)
• The inventory exposed by the plug-in is used by vRealize Automation during data collection
NSX for vSphere vRealize Orchestrator Plug-In 1.0.0 for vRealize
Automation – Use Cases
• The following are vRealize Automation networking use cases for the plug-in
– Provisioning logical switches to realize routed/NAT/private networks
– Efficient routing through Virtual Distributed Router (VDR) consumption
– Connect/disconnect vRealize Automation routed networks to/from VDR
– Application isolation through security policy consumption using service composer APIs
– Applying security policies on application components to cater to specific use cases, such as a Web
security policy that enables HTTP and HTTPS traffic
– Provisioning services edge gateway for an application to consume features such as NAT, DHCP, and
LB
PAN Partner Redirection
[Figure: a new tab in the firewall UI for Palo Alto Networks partner redirection]
The Need for a Comprehensive Security Solution
• Sophisticated security challenges
– Applications are not linked to ports and protocols
– Modern malware
• Addressed by the NSX for vSphere platform (NSX Distributed Firewall) together with Palo Alto Networks next-generation security (Next-Generation Firewall)
NSX for vSphere / PAN Use Case – PCI Zone Segmentation
[Figure: users reach the SDDC from the Internet; the NSX Distributed Firewall and the Panorama-managed Palo Alto Networks VM-Series firewall inspect web browsing and VDI protocols]
NSX for vSphere / PAN Use Case – Secure Web DMZ
[Figure: users reach the SDDC from the Internet; Panorama manages Palo Alto Networks VM-Series firewalls deployed alongside the NSX Distributed Firewall on every host in the compute clusters, attached to the VDS, with a management port group on the L2/L3 switch and updates pulled from Updates.paloaltonetworks.com]
NSX for vSphere / PAN End-to-End Workflow
• Consume the service
– NSX for vSphere security groups map to PAN dynamic address groups
– VMs, hosts, and clusters are added/deleted dynamically
• The simple NSX for vSphere operational model is now extended to PAN services
NSX for vSphere – F5 Solution Overview
• Key driver: operational simplicity
• NSX for vSphere – F5 joint solution
– Leverage advanced F5 ADC options inside the NSX for vSphere model
– Leverage NSX service-insertion capabilities to integrate F5 BIG-IQ/BIG-IP as an NSX ADC service
– Enable a choice of virtual or physical F5 appliances within an NSX for vSphere tenant
[Figure: tenant logical Layer 2 segments consuming the F5 service]
NSX for vSphere – F5 Solution Overview (cont.)
• Features
– NSX for vSphere integrates with F5 BIG-IQ and BIG-IP
– F5 admin-defined iApps are published to NSX Manager as ADN service templates
– BIG-IP VEs are automatically deployed, licensed, and configured
– Pre-deployed BIG-IP physical appliances are automatically configured
– Users can consume F5 iApps from the NSX for vSphere UI or API
• Benefits
[Figure: the VMware NSX network virtualization platform – logical firewall, logical load balancer, logical VPN, logical L2/L3 – supporting any application (without modification), any cloud management platform, any hypervisor, and any network hardware]
Solution Details and User Personas
• Cloud admin
– Provisions apps
– Defines the app network
– Specifies the desired NSX Edge to use for load balancing (the F5 iApp is pre-mapped to the NSX Edge)
• F5 admin
– Registers BIG-IQ to NSX
– Configures and/or publishes F5 iApps to NSX
– Deploys F5 virtual or physical editions
• Notes
– VXLAN-to-VLAN mapping is automated
– NSX Edge exposes a data-driven UI used for filling iApp templates at the time of app provisioning
– The NSX for vSphere API can be used to automate provisioning
[Figure: tenant L2 segments connected through BIG-IQ/BIG-IP to VLAN 100 on the physical network]
Questions
VMware NSX for vSphere 6.2
Knowledge Transfer Kit
VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304