NSX-T 2.4
CONTENTS
2-21 About NSX Management Cluster .........................................................................36
2-22 Benefits of NSX Management Cluster ..................................................................38
2-23 NSX Management Cluster with Virtual IP Address ..............................................39
2-24 NSX Management Cluster with Load Balancer ....................................................40
2-25 About NSX Policy .................................................................................................41
2-26 NSX Policy Characteristics ...................................................................................42
2-27 Centralized Policy Management ...........................................................................43
2-28 NSX Manager Functions ......................................................................................44
2-29 NSX Policy and NSX Manager Workflow .............................................................45
2-30 About NSX Controller ...........................................................................................46
2-31 Control Plane Components (1) .............................................................................47
2-32 Control Plane Components (2) .............................................................................48
2-33 Control Plane Change Propagation ......................................................................49
2-34 Control Plane Sharding Function..........................................................................50
2-35 Handling Controller Failure ...................................................................................51
2-36 Data Plane Functions ...........................................................................................52
2-37 Data Plane Components ......................................................................................53
2-38 Review of Learner Objectives...............................................................................54
2-39 Key Points.............................................................................................................55
3-17 Management Cluster Status: CLI (1) ....................................................................73
3-18 Management Cluster Status: CLI (2) ....................................................................74
3-19 NSX Manager Deployment on KVM Hosts ...........................................................75
3-20 Review of Learner Objectives...............................................................................76
3-21 Navigating the NSX Manager UI ..........................................................................77
3-22 Learner Objectives ...............................................................................................78
3-23 NSX Manager Simplified and Advanced User Interfaces (1) ...............................79
3-24 NSX Manager Simplified and Advanced User Interfaces (2) ...............................80
3-25 Networking View ...................................................................................................81
3-26 Security View ........................................................................................................82
3-27 Inventory View ......................................................................................................83
3-28 Tools View ............................................................................................................84
3-29 System View .........................................................................................................85
3-30 Labs ......................................................................................................................86
3-31 Lab: Labs Introduction ..........................................................................................87
3-32 Lab: Reviewing the Configuration of the Predeployed NSX Manager Instance ..........88
3-33 Lab Simulation: Deploying a 3-Node NSX Management Cluster .........................89
3-34 Review of Learner Objectives...............................................................................90
3-35 Preparing the Data Plane .....................................................................................91
3-36 Learner Objectives ...............................................................................................92
3-37 Data Plane Components and Functions ...............................................................93
3-38 Transport Node Overview.....................................................................................94
3-39 Transport Node Components and Architecture ....................................................95
3-40 Transport Node Physical Connectivity .................................................................96
3-41 About IP Address Pools ........................................................................................97
3-42 About Transport Zones (1) ...................................................................................98
3-43 About Transport Zones (2) ...................................................................................99
3-44 About N-VDS ......................................................................................................100
3-45 N-VDS on ESXi Transport Nodes.......................................................................102
3-46 N-VDS on KVM Transport Nodes .......................................................................103
3-47 Transport Zone and N-VDS Mapping .................................................................104
3-48 Creating Transport Zones ...................................................................................105
3-49 N-VDS Operational Modes .................................................................................107
3-50 Enhanced Datapath Mode ..................................................................................109
3-51 Reviewing the Transport Zone Configuration .....................................................110
3-52 Physical NICs, LAGs, and Uplinks .....................................................................111
3-53 About Uplink Profiles ..........................................................................................112
3-54 Default Uplink Profiles ........................................................................................113
3-55 Types of Teaming Policies .................................................................................114
3-56 Teaming Policies Supported by ESXi and KVM Hosts.......................................115
3-57 Teaming Policy ...................................................................................................116
3-58 About LLDP ........................................................................................................117
3-59 Enabling LLDP Profiles .......................................................................................118
3-60 About Network I/O Control Profiles.....................................................................119
3-61 Creating Network I/O Control Profiles (1) ...........................................................120
3-62 Creating Network I/O Control Profiles (2) ...........................................................121
3-63 About Transport Node Profiles (1) ......................................................................122
3-64 About Transport Node Profiles (2) ......................................................................123
3-65 Benefits of Transport Node Profiles....................................................................124
3-66 Transport Node Profile Considerations ..............................................................125
3-67 Transport Node Profile Prerequisites .................................................................126
3-68 Attaching a Transport Node Profile to the ESXi Cluster .....................................127
3-69 Managed ESXi: Host Preparation (1) .................................................................128
3-70 Managed ESXi: Host Preparation (2) .................................................................129
3-71 Reviewing ESXi Transport Node Status .............................................................130
3-72 Verifying ESXi Transport Node by CLI ...............................................................131
3-73 Transport Node Preparation: KVM .....................................................................133
3-74 Configuring KVM Hosts as Transport Nodes (1) ................................................134
3-75 Configuring KVM Hosts as Transport Nodes (2) ................................................135
3-76 Reviewing KVM Transport Node Status .............................................................136
3-77 Verifying the KVM Transport Node by CLI .........................................................137
3-78 Lab: Preparing the NSX-T Data Center Infrastructure .......................................138
3-79 Review of Learner Objectives.............................................................................139
3-80 Key Points...........................................................................................................140
4-7 Prerequisites for Logical Switching.....................................................................147
4-8 Logical Switching Terminology ...........................................................................148
4-9 About Segments (1) ............................................................................................150
4-10 About Segments (2) ............................................................................................151
4-11 About Tunneling..................................................................................................152
4-12 About GENEVE ..................................................................................................154
4-13 GENEVE Header Format ...................................................................................156
4-14 Logical Switching: End-to-End Communication .................................................158
4-15 Review of Learner Objectives.............................................................................160
4-16 Logical Switching Architecture............................................................................161
4-17 Learner Objectives .............................................................................................162
4-18 Management Plane and Central Control Plane Agents......................................163
4-19 Creating Segments on ESXi Hosts (1) ...............................................................164
4-20 Creating Segments on ESXi Hosts (2) ...............................................................165
4-21 Creating Segments on KVM Hosts (1) ...............................................................166
4-22 Creating Segments on KVM Hosts (2) ...............................................................167
4-23 NSX-T Data Center Communication Channels ..................................................168
4-24 Review of Learner Objectives.............................................................................169
4-25 Configuring Segments ........................................................................................170
4-26 Learner Objectives .............................................................................................171
4-27 Segment Configuration Tasks ............................................................................172
4-28 Creating Segments .............................................................................................173
4-29 Viewing Configured Segments ...........................................................................174
4-30 Attaching VMs to a Segment ..............................................................................175
4-31 Workflow: Attaching a vSphere VM to a Segment (1) ........................................176
4-32 Workflow: Attaching a vSphere VM to a Segment (2) ........................................177
4-33 Attaching a KVM VM to a Segment ....................................................................178
4-34 Workflow: Attaching a KVM VM to a Segment (1)..............................................180
4-35 Workflow: Attaching a KVM VM to a Segment (2)..............................................181
4-36 Viewing the Switching Configuration in the Advanced and Simplified UIs .........182
4-37 Verifying L2 End-to-End Connectivity .................................................................183
4-38 Lab: Configuring Segments ................................................................................184
4-39 Review of Learner Objectives.............................................................................185
4-40 Configuring Segment Profiles .............................................................................186
4-41 Learner Objectives .............................................................................................187
4-42 About Segment Profiles (1) ................................................................................188
4-43 About Segment Profiles (2) ................................................................................189
4-44 Default Segment Profiles ....................................................................................190
4-45 Applying Segment Profiles to Segments ............................................................191
4-46 Applying Segment Profiles to L2 Ports ...............................................................192
4-47 IP Discovery Segment Profile .............................................................................193
4-48 Creating an IP Discovery Segment Profile (1)....................................................194
4-49 Creating an IP Discovery Segment Profile (2) ....................................................196
4-50 MAC Discovery Segment Profile ........................................................................197
4-51 QoS Segment Profile ..........................................................................................199
4-52 Segment Security Profile ....................................................................................201
4-53 SpoofGuard Segment Profile..............................................................................203
4-54 Creating a SpoofGuard Segment Profile ............................................................204
4-55 Review of Learner Objectives.............................................................................206
4-56 Logical Switching Packet Forwarding .................................................................207
4-57 Learner Objectives .............................................................................................208
4-58 NSX-T Data Center Controller Tables ................................................................209
4-59 TEP Table Update (1) .........................................................................................210
4-60 TEP Table Update (2) .........................................................................................211
4-61 TEP Table Update (3) .........................................................................................212
4-62 TEP Table Update (4) .........................................................................................213
4-63 MAC Table Update (1) ........................................................................................214
4-64 MAC Table Update (2) ........................................................................................215
4-65 MAC Table Update (3) ........................................................................................216
4-66 MAC Table Update (4) ........................................................................................217
4-67 ARP Table Update (1) ........................................................................................218
4-68 ARP Table Update (2) ........................................................................................219
4-69 ARP Table Update (3) ........................................................................................220
4-70 ARP Table Update (4) ........................................................................................221
4-71 Unicast Packet Forwarding Across Hosts (1) .....................................................222
4-72 Unicast Packet Forwarding Across Hosts (2) .....................................................223
4-73 Unicast Packet Forwarding Across Hosts (3) .....................................................224
4-74 Unicast Packet Forwarding Across Hosts (4) .....................................................225
4-75 BUM Traffic Overview .........................................................................................226
4-76 Handling BUM Traffic: Head Replication ............................................................228
4-77 Handling BUM Traffic: Hierarchical Two-Tier Replication ..................................229
4-78 Review of Learner Objectives.............................................................................230
4-79 Key Points...........................................................................................................231
5-35 Using the OVF Tool to Deploy NSX Edge Nodes ..............................................270
5-36 Installing NSX Edge on Bare Metal ....................................................................272
5-37 Using PXE to Deploy NSX Edge Nodes from an ISO File .................................273
5-38 Joining NSX Edge with the Management Plane.................................................274
5-39 Verifying the Edge Transport Node Status .........................................................275
5-40 Enabling Edge Node SSH Service .....................................................................276
5-41 Postdeployment Verification Checklist ...............................................................277
5-42 Creating an Edge Cluster ...................................................................................278
5-43 Mapping NSX Edge Node Interfaces (1) ............................................................279
5-44 Mapping NSX Edge Node Interfaces (2) ............................................................280
5-45 Verifying NSX Edge Node Interfaces Mapping ..................................................281
5-46 Edge Node VM Deployment Options..................................................................282
5-47 Lab: Deploying and Configuring NSX Edge Nodes ............................................284
5-48 Review of Learner Objectives.............................................................................285
5-49 Configuring Tier-0 and Tier-1 Gateways ............................................................286
5-50 Learner Objectives .............................................................................................287
5-51 Gateway Configuration Tasks ............................................................................288
5-52 Configuring a Tier-0 Gateway: Step 1 ................................................................289
5-53 Configuring a Tier-0 Gateway: Step 2 ................................................................290
5-54 Configuring a Tier-0 Gateway: Step 3 ................................................................291
5-55 Configuring a Tier-0 Gateway: Step 4 ................................................................292
5-56 Configuring a Tier-0 Gateway: Step 5 ................................................................293
5-57 Reviewing the Tier-0 Gateway Configuration .....................................................294
5-58 Configuring a Tier-1 Gateway: Step 1 ................................................................295
5-59 Configuring a Tier-1 Gateway: Step 2 ................................................................296
5-60 Testing East-West Connectivity..........................................................................297
5-61 Configuring a Tier-1 Gateway: Step 3 ................................................................298
5-62 Configuring a Tier-1 Gateway: Step 4 ................................................................299
5-63 Testing North-South Connectivity .......................................................................300
5-64 Routing Topologies .............................................................................................301
5-65 Single-Tier Topology ..........................................................................................302
5-66 Single-Tier Routing: Egress to Physical Network (1) .........................................303
5-67 Single-Tier Routing: Egress to Physical Network (2) .........................................304
5-68 Single-Tier Routing: Egress to Physical Network (3) .........................................305
5-69 Single-Tier Routing: Egress to Physical Network (4) .........................................306
5-70 Single-Tier Routing: Egress to Physical Network (5) .........................................307
5-71 Single-Tier Routing: Egress to Physical Network (6) .........................................308
5-72 Single-Tier Routing: Ingress from Physical Network (7).....................................309
5-73 Single-Tier Routing: Ingress from Physical Network (8).....................................310
5-74 Single-Tier Routing: Ingress from Physical Network (9).....................................311
5-75 Single-Tier Routing: Ingress from Physical Network (10)...................................312
5-76 Single-Tier Routing: Ingress from Physical Network (11)...................................313
5-77 Single-Tier Routing: Ingress from Physical Network (12)...................................314
5-78 Single-Tier Routing: Ingress from Physical Network (13)...................................315
5-79 Multitier Topology (1) ..........................................................................................316
5-80 Multitier Topology (2) ..........................................................................................317
5-81 Multitier Topology (3) ..........................................................................................318
5-82 Multitier Routing: Egress to Physical Network Example ....................................319
5-83 Multitier Routing: Egress to Physical Network (1) ..............................................320
5-84 Multitier Routing: Egress to Physical Network (2) ..............................................321
5-85 Multitier Routing: Egress to Physical Network (3) ..............................................322
5-86 Multitier Routing: Egress to Physical Network (4) ..............................................323
5-87 Multitier Routing: Egress to Physical Network (5) ..............................................324
5-88 Multitier Routing: Egress to Physical Network (6) ..............................................325
5-89 Multitier Routing: Egress to Physical Network (7) ..............................................326
5-90 Multitier Routing: Egress to Physical Network (8) ..............................................327
5-91 Multitier Routing: Egress to Physical Network (9) ..............................................328
5-92 Multitier Routing: Egress to Physical Network (10) ............................................329
5-93 Multitier Routing: Egress to Physical Network (11) ............................................330
5-94 Multitier Routing: Egress to Physical Network (12) ............................................331
5-95 Multitier Routing: Egress to Physical Network (13) ............................................332
5-96 Multitier Routing: Egress to Physical Network (14) ............................................333
5-97 Multitier Routing: Egress to Physical Network (15) ............................................334
5-98 Multitier Routing: Egress to Physical Network (16) ............................................335
5-99 Multitier Routing: Egress to Physical Network (17) ............................................336
5-100 Lab: Configuring the Tier-1 Gateway .................................................................337
5-101 Review of Learner Objectives.............................................................................338
5-102 Configuring Static and Dynamic Routing ............................................................339
5-103 Learner Objectives .............................................................................................340
5-104 Static and Dynamic Routing ...............................................................................341
5-105 Tier-0 Gateway Capabilities ...............................................................................342
5-106 Configuring Static Routes on a Tier-0 Gateway (1)............................................343
5-107 Configuring Static Routes on a Tier-0 Gateway (2)............................................344
5-108 Viewing the Static Route Configuration ..............................................................345
5-109 BGP on Tier-0 .....................................................................................................346
5-110 Routing Features Supported by the Tier-0 Gateway ..........................................347
5-111 Configuring Dynamic Routing on Tier-0 Gateways: Step 1 ................................348
5-112 Configuring Dynamic Routing on Tier-0 Gateways: Step 2 ................................349
5-113 Configuring Dynamic Routing on Tier-0 Gateways: Step 3 ................................350
5-114 Verifying BGP Configuration of Tier-0 Gateway on Edge Nodes .......................351
5-115 BFD on a Tier-0 Gateway ...................................................................................352
5-116 Enabling BFD on a Tier-0 Gateway ....................................................................353
5-117 About IP Prefix Lists ...........................................................................................354
5-118 Configuring an IP Prefix List ...............................................................................355
5-119 About Route Maps (1) ........................................................................................356
5-120 About Route Maps (2) ........................................................................................357
5-121 Using Route Maps in BGP Route Advertisements .............................................358
5-122 BGP Feature: Allowas-in ..................................................................359
5-123 BGP Feature: Multipath Relax ............................................................................360
5-124 Internal BGP Support .........................................................................................362
5-125 About Inter-SR Routing ......................................................................................363
5-126 Inter-SR Routing Characteristics ........................................................................364
5-127 Inter-SR Routing Example (1) ............................................................................365
5-128 Inter-SR Routing Example (2) ............................................................................366
5-129 Inter-SR Routing Example (3) ............................................................................367
5-130 Lab: Configuring the Tier-0 Gateway .................................................................368
5-131 Review of Learner Objectives.............................................................................369
5-132 ECMP and High Availability ................................................................................370
5-133 Learner Objectives .............................................................................................371
5-134 About Equal-Cost Multipath Routing ..................................................................372
5-135 Enabling ECMP ..................................................................................................373
5-136 Edge Node High Availability ...............................................................................374
5-137 Tier-0 Gateway Active-Active Mode ...................................................................375
5-138 Tier-0 Gateway Active-Standby Mode ................................................................376
5-139 Failure Conditions and Failover Process (1) ......................................................377
5-140 Failure Conditions and Failover Process (2) ......................................................378
5-141 Failure Conditions and Failover Process (3) ......................................................379
5-142 Edge Node Failback Modes ...............................................................................380
5-143 Lab: Verifying Equal Cost Multipathing Configurations ......................................381
5-144 Review of Learner Objectives.............................................................................382
5-145 Key Points (1) .....................................................................................................383
5-146 Key Points (2) .....................................................................................................384
7-13 Configuring Reflexive NAT .................................................................................419
7-14 NAT Packet Flow Logical Topology....................................................................420
7-15 NAT Packet Flow (1) ..........................................................................................421
7-16 NAT Packet Flow (2) ..........................................................................................422
7-17 NAT Packet Flow (3) ..........................................................................................423
7-18 NAT Packet Flow (4) ..........................................................................................424
7-19 NAT Packet Flow (5) ..........................................................................................425
7-20 NAT Packet Flow (6) ..........................................................................................426
7-21 NAT Packet Flow (7) ..........................................................................................427
7-22 NAT Packet Flow (8) ..........................................................................................428
7-23 NAT Packet Flow (9) ..........................................................................................429
7-24 NAT Packet Flow (10) ........................................................................................430
7-25 NAT Packet Flow (11) ........................................................................................431
7-26 Lab: Configuring Network Address Translation ..................................................432
7-27 Review of Learner Objectives.............................................................................433
7-28 Configuring DHCP and DNS Services ...............................................................434
7-29 Learner Objectives .............................................................................................435
7-30 About DHCP Services ........................................................................................436
7-31 DHCP Architecture .............................................................................................437
7-32 DHCP Use Cases ...............................................................................................438
7-33 DHCP Workflow ..................................................................................................439
7-34 Creating the DHCP Server .................................................................................440
7-35 Configuring the DHCP Server on the Tier-1 Gateway........................................441
7-36 Configuring the Subnet on the Segment ............................................................442
7-37 Editing Segments................................................................................................443
7-38 Viewing the DHCP Server Status .......................................................................444
7-39 DHCP Configuration Details: Advanced UI ........................................................445
7-40 DHCP Server Router Ports on Tier-1 Gateways ................................................446
7-41 DHCP Server and IP Pool Information in the Advanced UI ...............................447
7-42 DHCP Relay .......................................................................................................448
7-43 Configuring the DHCP Relay Server on Tier-1 Gateways..................................449
7-44 Configuring Segments with Gateway and DHCP IP Address Ranges ...............450
7-45 Local and Remote DHCP Server Configuration .................................................451
7-46 About DNS Services ...........................................................................................452
7-47 About DNS Forwarder ........................................................................................453
7-48 DNS Forwarder Benefits .....................................................................................454
7-49 Configuring DNS Services and DNS Zones (1)..................................................455
7-50 Configuring DNS Services and DNS Zones (2)..................................................457
7-51 Verifying the DNS Forwarder..............................................................................458
7-52 Lab: Configuring the DHCP Server on the NSX Edge Node ..............................459
7-53 Review of Learner Objectives.............................................................................460
7-54 Configuring Load Balancing ...............................................................................461
7-55 Learner Objectives .............................................................................................462
7-56 Load Balancing Use Cases ................................................................................463
7-57 Layer 4 Load Balancing ......................................................................................464
7-58 Layer 7 Load Balancing ......................................................................................465
7-59 Load Balancer Architecture ................................................................................466
7-60 Connecting to Tier-1 Gateways ..........................................................................467
7-61 Virtual Servers ....................................................................................................468
7-62 About Profiles .....................................................................................................469
7-63 About Server Pools .............................................................................................470
7-64 About Monitors....................................................................................................471
7-65 Relationships Among Load Balancer Components ............................................472
7-66 Load Balancer Scalability (1) ..............................................................................473
7-67 Load Balancer Scalability (2) ..............................................................................474
7-68 Load Balancing Deployment Modes ...................................................................475
7-69 Inline Topology ...................................................................................................476
7-70 One-Arm Topology (1) ........................................................................................477
7-71 One-Arm Topology (2) ........................................................................................478
7-72 Load Balancing Configuration Steps ..................................................................479
7-73 Creating Load Balancers ....................................................................................480
7-74 Creating Virtual Servers .....................................................................................481
7-75 Configuring Layer 4 Virtual Servers....................................................................482
7-76 Configuring Layer 7 Virtual Servers....................................................................483
7-77 Configuring Application Profiles..........................................................................484
7-78 Configuring Persistence Profiles ........................................................................485
7-79 Layer 7 Load Balancer SSL Modes ....................................................................486
7-80 Configuring Layer 7 SSL Profiles .......................................................................487
7-81 Configuring Layer 7 Load Balancer Rules ..........................................................488
7-82 Creating Server Pools ........................................................................................489
7-83 Configuring Load Balancing Algorithms .............................................................490
7-84 Configuring SNAT Translation Modes ................................................................491
7-85 Configuring Active Monitors................................................................................492
7-86 Configuring Passive Monitors .............................................................................493
7-87 Lab: Configuring Load Balancing .......................................................................494
7-88 Review of Learner Objectives.............................................................................495
7-89 IPSec VPN ..........................................................................................................496
7-90 Learner Objectives .............................................................................................497
7-91 NSX-T Data Center VPN Services .....................................................................498
7-92 IPSec VPN Use Cases .......................................................................................499
7-93 IPSec VPN Methods ...........................................................................................500
7-94 IPSec VPN Modes ..............................................................................................501
7-95 IPSec VPN Protocols and Algorithms.................................................................502
7-96 IPSec VPN Certificate-Based Authentication .....................................................503
7-97 IPSec VPN Dead Peer Detection .......................................................................504
7-98 IPSec VPN Types ...............................................................................................505
7-99 IPSec VPN Deployment Considerations ............................................................506
7-100 IPSec VPN High Availability ...............................................................................507
7-101 IPSec VPN Scalability ........................................................................................508
7-102 IPSec VPN Configuration Steps .........................................................................509
7-103 Configuring an IPSec VPN Service ....................................................................510
7-104 Configuring DPD Profiles....................................................................................511
7-105 Configuring IKE Profiles .....................................................................................512
7-106 Configuring IPSec Profiles ..................................................................................513
7-107 Configuring Local Endpoints...............................................................................515
7-108 Configuring IPSec VPN Sessions (1) .................................................................516
7-109 Configuring IPSec VPN Sessions (2) .................................................................517
7-110 Configuring IPSec VPN Sessions (3) .................................................................519
7-111 Configuring IPSec VPN Sessions (4) .................................................................520
7-112 Review of Learner Objectives.............................................................................522
7-113 L2 VPN ...............................................................................................................523
7-114 Learner Objectives .............................................................................................524
7-115 L2 VPN Use Cases .............................................................................................525
7-116 L2 VPN in NSX-T Data Center ...........................................................................526
7-117 L2 VPN Deployment Considerations ..................................................................527
7-118 L2 VPN Hub-and-Spoke Topology .....................................................................528
7-119 L2 VPN Packet Format .......................................................................................529
7-120 L2 VPN Edge Packet Flow .................................................................................530
7-121 L2 VPN Scalability ..............................................................................................531
7-122 L2 VPN Server Configuration Steps ...................................................................532
7-123 Configuring the L2 VPN Server (1) .....................................................................533
7-124 Configuring the L2 VPN Server (2) .....................................................................534
7-125 Configuring the L2 VPN Server (3) .....................................................................535
7-126 Configuring the L2 VPN Server (4) .....................................................................536
7-127 Supported L2 VPN Clients..................................................................................537
7-128 L2 VPN Peer Compatibility Matrix ......................................................................538
7-129 About Standalone Edge......................................................................................539
7-130 About NSX-Managed Edge (NSX Data Center for vSphere) .............................540
7-131 About NSX-Managed Edge (NSX-T Data Center)..............................................541
7-132 Configuring the L2 VPN Managed Client (1) ......................................................542
7-133 Configuring the L2 VPN Managed Client (2) ......................................................543
7-134 Configuring the L2 VPN Managed Client (3) ......................................................544
7-135 Configuring the L2 VPN Managed Client (4) ......................................................545
7-136 Lab: Deploying Virtual Private Networks ............................................................546
7-137 Review of Learner Objectives.............................................................................547
7-138 Key Points (1) .....................................................................................................548
7-139 Key Points (2) .....................................................................................................549
8-17 NSX-T Data Center Firewalls (1) ........................................................................570
8-18 NSX-T Data Center Firewalls (2) ........................................................................571
8-19 Features of the Distributed Firewall ....................................................................572
8-20 Distributed Firewall: Key Concepts (1) ...............................................................574
8-21 Distributed Firewall: Key Concepts (2) ...............................................................575
8-22 Creating a Domain ..............................................................................................576
8-23 Security Policy Overview ....................................................................................577
8-24 Distributed Firewall Policy ..................................................................................578
8-25 Configuring Distributed Firewall Policies (1) .......................................................580
8-26 Configuring Distributed Firewall Policies (2) .......................................................581
8-27 Configuring Distributed Firewall Policy Settings .................................................583
8-28 Creating Distributed Firewall Rules ....................................................................584
8-29 Configuring Distributed Firewall Rule Parameters .............................................585
8-30 Specifying Sources and Destinations for a Rule ................................................586
8-31 Creating Groups .................................................................................................587
8-32 Adding Members and Member Criteria for a Group ...........................................588
8-33 Viewing the Configured Groups..........................................................................589
8-34 Specifying Services for a Rule............................................................................590
8-35 Predefined and User-Created Services ..............................................................591
8-36 Adding a Context Profile to a Rule .....................................................................592
8-37 Predefined and User-Created Context Profiles ..................................................593
8-38 Configuring Context Profile Attributes ................................................................594
8-39 Setting the Scope of Rule Enforcement .............................................................595
8-40 Specifying Distributed Firewall Settings .............................................................596
8-41 Filtering the Display of Firewall Rules ................................................................597
8-42 Determining the Default Firewall Behavior .........................................................598
8-43 Viewing the Default Firewall Rules .....................................................................599
8-44 Distributed Firewall Architecture .........................................................................600
8-45 Distributed Firewall Architecture: ESXi ...............................................................601
8-46 Distributed Firewall Architecture: KVM ...............................................................602
8-47 Lab: Configuring the NSX Distributed Firewall ...................................................603
8-48 Review of Learner Objectives.............................................................................604
8-49 NSX-T Data Center Gateway Firewall ................................................................605
8-50 Learner Objectives .............................................................................................606
8-51 About NSX-T Data Center Gateway Firewall .....................................................607
8-52 Gateway Firewall on Tier-0 Gateway for Perimeter Protection ..........................609
8-53 Gateway Firewall Policy .....................................................................................610
8-54 Predefined Gateway Firewall Categories ...........................................................611
8-55 Configuring the Gateway Firewall Policy Settings ..............................................612
8-56 Configuring Firewall Rules ..................................................................................614
8-57 Configuring Gateway Firewall Rules Settings ....................................................615
8-58 Gateway Firewall Architecture ............................................................................616
8-59 Lab: Configuring the NSX Gateway Firewall ......................................................617
8-60 Review of Learner Objectives.............................................................................618
8-61 NSX-T Data Center Service Insertion ................................................................619
8-62 Learner Objectives .............................................................................................620
8-63 About Service Insertion ......................................................................................621
8-64 About Network Introspection ..............................................................................622
8-65 North-South Network Introspection Overview ....................................................623
8-66 Configuring North-South Network Introspection .................................................624
8-67 Registering a Partner Service.............................................................................625
8-68 Deploying a Partner Service Instance ................................................................626
8-69 Configuring Traffic Redirection to Partners ........................................................627
8-70 East-West Network Introspection Overview .......................................................628
8-71 Configuring East-West Network Introspection....................................................629
8-72 Registering Partner Services ..............................................................................631
8-73 Deploying an Instance of a Registered Service .................................................632
8-74 Creating a Service Profile for East-West Network Introspection ........................633
8-75 Creating Service Chains .....................................................................................634
8-76 Configuring Redirection Rules ............................................................................635
8-77 Endpoint Protection Overview and Use Cases ..................................................636
8-78 Endpoint Protection Process ..............................................................................637
8-79 Automatic Policy Enforcement for New VMs ......................................................638
8-80 Automated Virus or Malware Quarantine with Tags Example ............................639
8-81 Creating a Service Profile for Endpoint Protection .............................................640
8-82 Configuring Endpoint Protection Rules ..............................................................641
8-83 Review of Learner Objectives.............................................................................642
8-84 Key Points (1) .....................................................................................................643
8-85 Key Points (2) .....................................................................................................644
9-3 Module Lessons..................................................................................................649
9-4 Integrating NSX-T Data Center and VMware Identity Manager .........................650
9-5 Learner Objectives .............................................................................................651
9-6 About VMware Identity Manager ........................................................................652
9-7 Benefits of Integrating VMware Identity Manager and NSX-T Data Center .......653
9-8 VMware Identity Manager Integration Prerequisites ........................................654
9-9 Configuring VMware Identity Manager ...............................................................655
9-10 VMware Identity Manager and NSX-T Data Center Integration Overview .........657
9-11 Creating a New OAuth Client .............................................................................658
9-12 Getting the SHA-256 Certificate Thumbprint ......................................................660
9-13 Configuring VMware Identity Manager Details in NSX-T Data Center ...............661
9-14 Verifying VMware Identity Manager Integration .................................................662
9-15 Default UI Login ..................................................................................................663
9-16 UI Login with VMware Identity Manager .............................................................664
9-17 Local Login with VMware Identity Manager ........................................................665
9-18 Review of Learner Objectives.............................................................................666
9-19 Managing Users and Configuring RBAC ............................................................667
9-20 Learner Objectives .............................................................................................668
9-21 NSX-T Data Center Users ..................................................................................669
9-22 User Access and Authentication Policy Management ........................................670
9-23 Local Users .........................................................................................................671
9-24 Changing the Password for Local Users ............................................................672
9-25 Configuring Authentication Policy Settings for Local Users ...............................673
9-26 Configuring Authentication Policy Settings for VMware Identity Manager Users ..........674
9-27 Using Role-Based Access Control .....................................................................675
9-28 Permissions Hierarchy ........................................................................................676
9-29 Built-in Roles (1) .................................................................................................677
9-30 Built-in Roles (2) .................................................................................................678
9-31 Role Assignment for Local Users .......................................................................679
9-32 Role Assignment for VMware Identity Manager Users.......................................680
9-33 Lab: Managing Users and Roles with VMware Identity Manager ......................681
9-34 Review of Learner Objectives.............................................................................682
9-35 Key Points...........................................................................................................683
Module 10 NSX-T Data Center Tools and Basic Troubleshooting 685
10-2 Importance ..........................................................................................................686
10-3 Module Lessons..................................................................................................687
10-4 Troubleshooting Overview and Log Collection ...................................................688
10-5 Learner Objectives .............................................................................................689
10-6 About the Troubleshooting Process ...................................................................690
10-7 Differentiating Between Symptoms and Causes ................................................691
10-8 Local Logging on NSX-T Data Center Components ..........................................692
10-9 Viewing NSX Policy Manager Logs ....................................................................693
10-10 Viewing the NSX Manager Syslog......................................................................694
10-11 Viewing the NSX Controller Log .........................................................................695
10-12 Viewing the ESXi Host Log.................................................................................696
10-13 Viewing the KVM Host Log .................................................................................697
10-14 Syslog Overview .................................................................................................698
10-15 Configuring Syslog Exporters (1)........................................................................699
10-16 Configuring Syslog Exporters (2)........................................................................701
10-17 Configuring and Displaying Syslog .....................................................................702
10-18 Generating Technical Support Bundles ..............................................................703
10-19 Monitoring the Support Bundle Status ................................................................704
10-20 Downloading Support Bundles ...........................................................................705
10-21 Labs ....................................................................................................................706
10-22 Lab: Configuring Syslog .....................................................................................707
10-23 Lab: Generating Technical Support Bundles......................................................708
10-24 Review of Learner Objectives.............................................................................709
10-25 Monitoring and Troubleshooting Tools ...............................................................710
10-26 Learner Objectives .............................................................................................711
10-27 Monitoring Components from the NSX Manager Simplified UI ..........................712
10-28 Monitoring Component Status ............................................................................713
10-29 Port Mirroring Overview ......................................................................................714
10-30 Port Mirroring Method: Remote L3 SPAN ..........................................................715
10-31 Port Mirroring Method: Logical SPAN.................................................................716
10-32 Configuring Logical SPAN ..................................................................................717
10-33 Viewing the Logical SPAN Configuration and Mirrored Packets ........................718
10-34 IPFIX Overview ...................................................................................................719
10-35 Configuring IPFIX to Export Traffic Flows ..........................................................720
10-36 Configuring an IPFIX Firewall Profile..................................................................721
10-37 Configuring an IPFIX Switch Profile ...................................................................722
10-38 Configuring IPFIX Collectors ..............................................................................724
10-39 Traceflow Overview (1) .......................................................................................725
10-40 Traceflow Overview (2) .......................................................................................726
10-41 Traceflow Configuration Settings........................................................................727
10-42 Traceflow Operations .........................................................................................728
10-43 Using Traceflow for Troubleshooting ..................................................................729
10-44 About the Port Connection Tool .........................................................................730
10-45 Viewing the Graphical Output of the Port Connection Tool ................................731
10-46 Packet Capture ...................................................................................................732
10-47 Lab: Using Traceflow to Inspect the Path of a Packet........................................733
10-48 Review of Learner Objectives.............................................................................734
10-49 Troubleshooting Basic NSX-T Data Center Problems .......................................735
10-50 Learner Objectives .............................................................................................736
10-51 Common NSX Manager Installation Problems ...................................................737
10-52 Using Logs to Troubleshoot NSX Manager Installation Problems .....................738
10-53 Using CLI Commands to Troubleshoot NSX Manager Installation Problems ......................739
10-54 Viewing the NSX Manager Node Configuration .................................................740
10-55 Verifying Services and States Running on NSX Manager Nodes ......................741
10-56 Verifying NSX Management Cluster Status .......................................................742
10-57 Verifying Communication from Hosts to the NSX Management Cluster ............743
10-58 Troubleshooting Logical Switching Problems.....................................................744
10-59 Verifying the N-VDS Configuration .....................................................................745
10-60 Verifying Overlay Tunnel Reachability (1) ..........................................................746
10-61 Verifying Overlay Tunnel Reachability (2) ..........................................................747
10-62 Troubleshooting Logical Routing Problems ........................................................748
10-63 Retrieving Gateway Information .........................................................................749
10-64 Viewing the Routing Table..................................................................................750
10-65 Viewing the Forwarding Table of the Tier-1 Gateway ........................................751
10-66 Verifying BGP Neighbor Status ..........................................................................752
10-67 Viewing the BGP Route Table ............................................................................753
10-68 Troubleshooting Firewall Problems ....................................................................754
10-69 Verifying Firewall Configuration and Status (1) ..................................................755
10-70 Verifying Firewall Configuration and Status (2) ..................................................756
10-71 Verifying the Firewall Configuration from the KVM Host ....................................757
10-72 Verifying the Firewall Configuration from the ESXi Host ....................................758
10-73 Verifying the Firewall Configuration from the NSX Edge Node ..........................759
10-74 Review of Learner Objectives.............................................................................760
10-75 Key Points...........................................................................................................761
Module 1
Course Introduction
Digital badges contain metadata with skill tags and accomplishments, and are based on Mozilla's
Open Badges standard.
Virtual Cloud Network empowers customers to connect and protect applications and data,
regardless of their physical locations. The purpose of Virtual Cloud Network is to connect and
protect any workload running across any environment. Workloads might be running on-premises
in a customer data center, in a branch, or in a public cloud such as AWS or Azure.
Virtual Cloud Network enables organizations to embrace cloud networking as the software-
defined architecture for connecting everything in a distributed world.
Virtual Cloud Network is a ubiquitous software layer that provides maximum visibility into, and
context for, the interaction among various users, applications, and data. To realize this vision,
VMware built NSX to support various types of endpoints.
VMware’s software-based approach delivers a networking and security platform that enables
customers to connect, secure, and operate an end-to-end architecture to deliver services to
applications.
• Enables you to design and build the next generation policy-driven data center that connects,
secures, and automates traditional hypervisors, as well as new microservices-based
(container) applications across a range of deployment targets, such as the data center, cloud,
and so on
• Embeds security into the platform, compartmentalizing the network through micro-
segmentation, encrypting in-flight data, and automatically detecting and responding to
security threats
• Delivers a WAN solution that provides full visibility, metrics, control, and automation of all
endpoints
VMware’s Virtual Cloud Network enables you to run your applications everywhere.
NSX-T Data Center takes what you built from the private data center into the public cloud. NSX-T
Data Center also supports modern applications and technologies, such as containers and the
Internet of Things (IoT).
You can bring key capabilities from one central control point out to wherever your applications
run.
• NSX-T Data Center, formerly known as NSX-T, is an end-to-end platform for data center
networking.
• NSX Cloud extends the data center network into the public cloud, such as VMware Cloud on
AWS or Microsoft Azure Cloud. NSX Cloud also provides container or Kubernetes support
with VMware PKS.
• NSX Hybrid Connect delivers application and network hybridity and mobility.
• AppDefense provides application-centric security based on the intent and behavior of each
application.
• VMware Network Insight (SaaS) and vRealize Network Insight provide full visibility,
troubleshooting, and optimization across physical, virtual, and cloud environments.
• vRealize Automation is the cloud automation tool for the software-defined data center.
NSX-T Data Center offers consistent networking and security services across multiple endpoints,
such as ESXi, kernel-based virtual machines (KVMs), and bare metal workloads. These
workloads can run in the on-premises data center or on public clouds running native workloads,
or they can be powered by VMware Cloud destinations, such as VMware Cloud on AWS, IBM,
OVH public cloud, and the VMware Cloud Provider Program (VCPP).
You can use NSX-T Data Center for the following purposes:
• Security: Delivers application-centric security at the workload level to prevent the lateral
spread of threats
• Multicloud networking: Brings networking and security consistency across varied sites and
streamlines multicloud operations
• Cloud-native applications: Enables native networking and security for containerized workloads
across application frameworks
Services from ecosystem partners are integrated with the NSX-T Data Center platform in the
management, control, and data planes. The NSX-T Data Center platform creates a unified user
experience and seamless integration with any cloud management platform (CMP), and also
enables roles and duties separation.
NSX-T Data Center provides a platform for solutions from ecosystem partners to help customers
optimize software-defined data center deployments. The VMware Ready for Networking and
Security NSX Partner Program provides support and certification for partners' integration with
NSX-T Data Center.
The following partner solutions are available:
• Network monitoring
• Management plane: The management plane is designed with advanced clustering technology,
which allows the platform to process large-scale concurrent API requests. NSX Manager
provides the REST API and a web-based UI as the entry points for all user configurations.
• Control plane: The control plane includes a three-node controller cluster, which is responsible
for computing and distributing the runtime virtual networking and security state of the NSX-T
Data Center environment. The control plane is separated into a central control plane and a
local control plane. This separation significantly simplifies the work of the central control
plane and enables the platform to extend and scale for various endpoints. With NSX-T Data
Center 2.4, the management plane and control plane are converged. Each manager node in
NSX-T Data Center is an appliance with converged functions, including management,
control, and policy.
• Data plane: The data plane includes a group of ESXi or KVM hosts, as well as NSX Edge
nodes. The group of servers and edge nodes prepared for NSX-T Data Center are called
transport nodes.
• Consumption plane: Although the consumption plane is not part of NSX-T Data Center, it
provides integration into virtually any CMP through the REST API and integration with
VMware cloud management platforms such as vRealize Automation:
• The consumption of NSX-T Data Center can be driven directly through the NSX
Manager user interface (UI).
• Typically, end users tie network virtualization to their cloud management plane for
deploying applications.
• Integration is also available through OpenStack, Kubernetes, and Pivotal Cloud Foundry.
All operations are performed from the management plane. These operations include create, read,
update, and delete (CRUD).
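As a concrete sketch of what these CRUD operations look like, the following Python fragment reads, writes, and deletes a segment through the NSX Manager REST API. The manager address, credentials, and segment name are placeholders, and the payload fields should be verified against the NSX-T 2.4 API guide.

  # Sketch: CRUD operations against the NSX Manager REST API.
  # Manager address, credentials, and names are placeholders.
  import requests

  MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager VIP
  session = requests.Session()
  session.auth = ("admin", "VMware1!VMware1!")
  session.verify = False  # lab shortcut; use CA-signed certificates in production

  # Read: list the segments known to the policy manager.
  resp = session.get(MANAGER + "/policy/api/v1/infra/segments")
  resp.raise_for_status()
  for seg in resp.json().get("results", []):
      print(seg["id"], seg.get("display_name"))

  # Create or update: PATCH in the policy API creates the object if it
  # is absent and updates it if it already exists.
  session.patch(MANAGER + "/policy/api/v1/infra/segments/Web-LS",
                json={"display_name": "Web-LS"})

  # Delete: remove the segment when it is no longer needed.
  session.delete(MANAGER + "/policy/api/v1/infra/segments/Web-LS")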
Network and security virtualization is made up of several key solutions, providing security,
integration, extensibility, automation, and elasticity.
NSX Manager is a standalone appliance. It includes the management plane, control plane, and
policies. As a result of this integrated approach, users do not need to install the manager,
controller, and policy roles as separate VMs.
The diagram shows that the manager and controller instances run on all three nodes, providing
resiliency. Requests from users through the API or UI can be handled by any of the three manager
nodes, resulting in shared workloads and efficiency.
Although the three services are merged on each node in the cluster, separate resources (CPU,
memory, and so on) are allocated for each of the services.
The distributed persistent database runs across all three nodes, providing the same configuration
view to each node. This way, a manager or controller running on one node has the same view of
the configuration topology as those running on the other two nodes.
The management plane is designed to process large-scale concurrent API calls from CMPs. The
system can be integrated into any CMP and ships with a fully supported OpenStack Neutron plug-
in. As the system scales, the management plane scales out, using advanced clustering technology.
The API and GUI are available on all three manager nodes in the cluster. When a user request is
sent to the virtual IP address, the active manager (the leader that has the virtual IP address
attached) responds to the request. If the leader fails, the two remaining managers elect a new
leader. The new leader responds to the requests sent to that virtual IP address.
The diagram shows that, from the administrator's perspective, a single IP address (the virtual IP
address) is always used to access the NSX Management Cluster.
The diagram shows how a traditional load balancer can balance traffic across multiple manager
nodes.
In NSX-T Data Center 2.4, the roles of NSX Policy Manager, manager, and controller are still
three different roles, but they are automatically deployed in the same appliance.
Support for a single policy and central management across multiple sites, NSX-T Data Center
instances, and on-premises and VMware Cloud on AWS deployments will be added in the future.
The motivation for centralized policy management is the customer's need for a consistent network
and security policy management platform across all workloads.
The central control plane (CCP) computes and disseminates the ephemeral runtime state based on
the configuration from the management plane and topology information reported by the data plane
elements.
The local control plane (LCP) runs on the compute endpoints. It computes the local ephemeral
runtime state for the endpoint based on updates from the CCP and local data plane information.
The LCP pushes stateless configurations to forwarding engines in the data plane and reports the
information back to the CCP. This process simplifies the work of the CCP significantly and
enables the platform to scale to thousands of different types of endpoints (hypervisor, container
host, bare metal, or public cloud).
RabbitMQ is an open-source message broker.
Remote procedure call (RPC) is a protocol that one program can use to request a service from
another program located in another computer without having to understand the network's details.
The LCP on the transport node reports local runtime changes to the master CCP node. The master
CCP node receives the changes and propagates them to the other controllers in the cluster. All
controllers propagate the changes to the transport nodes that they are responsible for.
In the diagram, controller 3 is assigned to two transport nodes. When controller 3 fails, the nodes
are moved to controllers 1 and 2.
For packet forwarding, ESXi uses the NSX-T Data Center virtual distributed switch (N-VDS), and
KVM uses Open vSwitch.
NSX Manager combines the functions of the management plane, control plane, and policy
management in a single node (virtual appliance).
NSX Manager nodes can be installed on supported hypervisors (vSphere, ESXi, RHEL KVM, and
Ubuntu KVM) for on-premises deployment.
For supported hypervisor versions, see VMware Product Interoperability Matrices at
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php.
The NSX Manager extra-small VM resource requirements apply to the Cloud Service Manager
(CSM) only.
You add the configuration details to register the compute manager to NSX-T Data Center. The
compute manager in the example is vCenter Server.
The numbers on the image show the process for automatically deploying NSX Manager instances
from the NSX Manager simplified UI.
You can check the status of nodes by selecting Home > Dashboard > System in the NSX
Manager simplified UI.
Different colors send different messages. For example, yellow indicates degraded performance,
such as memory usage of NSX Manager that is consistently higher than 80% for the past 5
minutes.
For information about manually joining the NSX Manager nodes to form a cluster, see NSX-T
Data Center Installation Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.4/nsxt_24_install.pdf.
You can configure a virtual IP address for the Management Cluster to provide load balancing and
availability among the management nodes:
• You can configure the address for the management nodes to share.
• You might need to wait a few minutes for the newly configured address to take effect.
To change the virtual IP address, click EDIT. To remove the virtual IP address, click RESET.
You connect to an appliance in the cluster and enter the command get cluster status. The
number and status of the nodes in the cluster appear.
The example output lists the manager, policy, and controller groups. It also shows each group’s
status, along with its members and member status.
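The same health information is exposed over the REST API. The sketch below polls the cluster status endpoint and prints the state of each group; the manager address and credentials are placeholders, and the response field names should be checked against the NSX-T 2.4 API guide.

  # Sketch: reading management cluster health over the REST API.
  import requests

  resp = requests.get("https://nsx-mgr.example.local/api/v1/cluster/status",
                      auth=("admin", "VMware1!VMware1!"), verify=False)
  resp.raise_for_status()
  status = resp.json()

  # The payload mirrors the CLI output: an overall status plus one entry
  # per group (manager, policy, controller, and so on).
  print("overall:", status.get("mgmt_cluster_status", {}).get("status"))
  for group in status.get("detailed_cluster_status", {}).get("groups", []):
      print(group.get("group_type"), "->", group.get("group_status"))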
Common causes of NSX Manager deployment failures include the following:
• Incorrect network details, such as the gateway address, network mask, DNS, and so on
• Use of the same IP address when deploying multiple appliances through a single request
For more information about deploying NSX Manager on a KVM host, see "Install NSX Manager
on KVM" at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/installation/GUID-
5229A83D-1B97-4203-BA30-F52716F68F7F.html.
Starting with NSX-T Data Center 2.4, the NSX Manager UI is divided into simplified and
advanced sections. You use the Advanced Networking & Security tab when the configuration is
not supported by the simplified UI, for example, when you need to configure a bridge firewall or
use the traceflow tool.
You can use the simplified or advanced UI to configure objects, but VMware recommends that
you use the simplified UI if possible.
Some terms used in the simplified UI tabs are different from those in previous versions of NSX-T
Data Center. For example, logical switch is now called segment. Tier-0 or Tier-1 logical router is
now called Tier-0 or Tier-1 Gateway.
However, previous naming conventions are maintained in the advanced UI:
• A logical segment in the simplified UI is called a logical switch in the advanced UI.
• A Tier-0 or Tier-1 Gateway in the simplified UI is called a T-0 or T-1 logical router in the
advanced UI.
The Overview page shows the status and details of the management nodes and the cluster.
• Workload data
NSX-T Data Center logical topology is decoupled from the hypervisor-type transport nodes.
ESXi and KVM transport nodes can work together. Networks and topologies can extend to both
ESXi and KVM environments, regardless of the hypervisor type.
Each transport node has a management plane agent (MPA). NSX Manager polls rule statistics and
status from the transport node using the MPA.
Each transport node is configured with a host switch, which is the primary component in the data
plane.
Data plane forwarding functions include switching, overlay encapsulation and decapsulation,
routing, and creating firewalls.
The local control plane (LCP) is composed of several agents and modules that perform the LCP
function on the data plane.
A transport zone defines the span of a logical network over the physical infrastructure. It defines
the potential reach of transport nodes.
A transport zone can accommodate either overlay or VLAN traffic.
VLAN transport zones are used to connect NSX Edge uplinks to upstream physical routers to
establish north-south connectivity.
Transport nodes are hypervisor hosts and NSX Edge nodes that participate in an NSX-T Data
Center overlay. As a result, a hypervisor host can host VMs that communicate over logical
switches. An NSX Edge node can have logical router uplinks and downlinks configured.
A hypervisor transport node can belong to multiple transport zones. A segment can belong to only
one transport zone.
NSX Edge nodes can belong to multiple transport zones: one overlay transport zone and multiple
VLAN transport zones.
In addition to ESXi and KVM hypervisor hosts, the following node types can serve as transport
nodes:
• NSX Edge
• Bare metal
The N-VDS is the software that operates in hypervisors to form a software abstraction layer
between servers and the physical network. The N-VDS is based on vSphere Distributed Switch,
which provides uplinks for host connectivity to physical switches.
When an ESXi host is prepared for NSX-T Data Center, an N-VDS is created. An N-VDS is
similar in function to a KVM Open vSwitch on a KVM host.
The N-VDS performs the switching functionality on a transport node:
• The N-VDS typically owns several physical NICs of the transport node.
• The N-VDS has a name assigned for grouping and management. For example, the diagram
shows two N-VDS instances that are configured on the transport nodes: An N-VDS named
Lab and an N-VDS named Prod (production).
NSX-T Data Center does not require vCenter Server to operate. NSX Manager is responsible for
creating the N-VDS, which is completely independent of vCenter Server.
N-VDS can coexist with vSphere distributed and standard switches.
vCenter Server sees N-VDS as an opaque network. In other words, vCenter Server is aware of its
existence but cannot configure it. N-VDS is configured by the management plane and host agents
(nsxa and netcpa).
N-VDS performs layer 2 forwarding and supports VLAN, port mirroring, and NIC teaming. The
teaming configuration is applied switch-wide. Link aggregation groups are implemented as ports.
Transport zones dictate which hosts (and thus which VMs) can participate in a particular network:
• The overlay transport zone is used by both host transport nodes and NSX Edge nodes.
• The VLAN transport zone is used by NSX Edge nodes for their VLAN uplinks.
An NSX-T Data Center environment can contain one or more transport zones, depending on your
requirements. A host can belong to multiple transport zones. A logical switch can belong to only
one transport zone.
NSX-T Data Center does not allow VMs in different transport zones to connect at layer 2. The
span of a logical switch is limited to a transport zone, so virtual machines in different transport
zones cannot be on the same layer 2 network.
When you create a transport zone, you must provide a name for the N-VDS that will be installed
on the transport nodes when the nodes are added to this transport zone. The N-VDS name can be
whatever you want it to be.
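For illustration, the sketch below creates an overlay transport zone through the management API and names the N-VDS that member hosts install when they join the zone. The manager address, credentials, and display names are placeholders.

  # Sketch: creating an overlay transport zone whose members install an
  # N-VDS named Prod-Overlay-NVDS. All names are placeholders.
  import requests

  body = {
      "display_name": "Prod-Overlay-TZ",
      "host_switch_name": "Prod-Overlay-NVDS",  # N-VDS name for members
      "transport_type": "OVERLAY",              # use "VLAN" for uplink zones
  }
  resp = requests.post("https://nsx-mgr.example.local/api/v1/transport-zones",
                       json=body, auth=("admin", "VMware1!VMware1!"),
                       verify=False)
  resp.raise_for_status()
  print("transport zone ID:", resp.json()["id"])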
• Enhanced Datapath mode is based on the underlying N-VDS and supports the base switch
features, such as vSphere vMotion, vSphere HA, vSphere DRS, and so on.
– It brings the advantages of the Data Plane Development Kit (DPDK)-style packet-
processing performance to the east-west flows within the data center.
– This switch mode is designed to support Network Functions Virtualization (NFV) type
applications.
– It is not suitable for generic data center applications or deployments where traditional
VM-based or bare metal NSX Edge nodes must be used.
DPDK is a set of data plane libraries and network interface controller drivers for fast packet
processing:
• Compared to the standard way of packet processing, DPDK helps decrease CPU cost and yet
increase the number of packets processed per second.
• The DPDK library can be used for a variety of use cases, and many software vendors use
DPDK. It can be tuned to match desired performance for generalized or specific use cases.
With NFV, the focus shifts from raw throughput to packet-processing speed. In these workloads,
the applications do not send a small number of large packets. Rather, they send many small
packets, often as small as 128 bytes. TCP optimizations do not help with these workloads.
Enhanced Datapath mode leverages DPDK to deliver performance for these packet-processing
workloads.
With Poll Mode Driver (PMD), instead of the NIC sending an interrupt to the CPU when a packet
arrives, a core is assigned to poll the NIC to check for any packets. This process eliminates CPU
context switching, which is unavoidable in the traditional interrupt mode of packet processing.
Flow cache is an optimization that helps reduce the CPU cycles spent on known flows. Flow
Cache tables get populated with the start of a new flow. Decisions for the rest of the packets in a
flow might be skipped if the flow already exists in the flow table. If the packets from the same
flow arrive consecutively, the fast path decision for that packet is stored in memory and applied
directly for the rest of the packets in that cluster of packets. If the packets are from different flows,
the decision per flow is saved to a hash table and used to decide the next hop for each of the
packets of the flows.
DPDK features, including PMD, flow cache, and optimized packet copy, provide better
performance for small and large packet sizes pertinent for NFV style workloads.
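Conceptually, a flow cache is a lookup table keyed by the flow tuple: the full forwarding decision is computed once for the first packet of a flow and reused for the packets that follow. The sketch below illustrates the idea only; it is not the actual N-VDS datapath code.

  # Conceptual sketch of a flow cache. Illustration only.
  flow_cache = {}  # (src_ip, dst_ip, proto, src_port, dst_port) -> action

  def slow_path_lookup(flow):
      # Placeholder for the full pipeline: routing, firewall, overlay lookup.
      return "forward via TEP for " + flow[1]

  def process_packet(flow):
      action = flow_cache.get(flow)
      if action is None:             # first packet of the flow: slow path
          action = slow_path_lookup(flow)
          flow_cache[flow] = action  # cache the decision for later packets
      return action                  # subsequent packets: fast path

  pkt = ("10.1.10.11", "10.1.10.12", "tcp", 34512, 443)
  print(process_packet(pkt))  # computed on the slow path
  print(process_packet(pkt))  # served from the flow cache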
In the example, two transport zones are created. The transport zone named Prod-Overlay-TZ is
mapped to the N-VDS named Prod-Overlay-NVDS to carry the GENEVE-encapsulated overlay
traffic. The transport zone named Prod-VLAN-TZ is mapped to the N-VDS named Prod-VLAN-
NVDS to carry the 802.1q VLAN traffic.
The N-VDS allows for virtual-to-physical packet flow by binding logical router uplinks and
downlinks to physical NICs.
Link Aggregation Groups (LAGs) use Link Aggregation Control Protocol (LACP) for the
transport network.
Uplinks of an N-VDS are assigned physical NICs or LAGs.
In the example, logical uplink 1 is mapped to a physical LAG (comprising physical ports p1 and
p2). Logical uplink 2 is mapped to physical port p3.
• Failover Order: An active uplink is specified along with an optional list of standby uplinks.
If the active uplink fails, the next uplink in the standby list replaces the active uplink. No
actual load balancing is performed with this option.
• Load Balanced Source: A list of active uplinks is specified, and each interface on the
transport node is pinned to one active uplink based on the Source Port ID. This configuration
allows use of several active uplinks at the same time (illustrated in the sketch following this topic).
• Load Balanced Source Mac: This option determines the uplink based on the source VM’s
MAC address.
The image shows that you can specify a type of teaming policy for the uplink profile.
The Load Balanced Source and Load Balanced Source Mac teaming policies do not allow the
configuration of standby uplinks.
Load Balanced Source and Load Balanced Source Mac teaming policies are not supported on
KVM transport nodes.
KVM hosts are limited to the Failover Order teaming policy and support for a single LAG.
Multiple LAGs with LACP are not supported on KVM hosts.
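The pinning behavior of the Load Balanced Source policy can be pictured as a deterministic mapping from a source port ID to one of the active uplinks. The sketch below illustrates the idea only; it is not the actual N-VDS teaming algorithm.

  # Illustration of Load Balanced Source teaming: each virtual interface
  # is pinned to one active uplink by its source port ID.
  active_uplinks = ["uplink-1", "uplink-2"]

  def pin_uplink(source_port_id):
      # A stable modulo mapping keeps a given vNIC on the same uplink
      # while spreading vNICs across all active uplinks.
      return active_uplinks[source_port_id % len(active_uplinks)]

  for port_id in (7, 8, 9):
      print("vNIC port ID", port_id, "->", pin_uplink(port_id))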
Attaching a transport node profile is required only when configuring vCenter-managed ESXi hosts
at the cluster level.
This step is not required for standalone ESXi host preparation.
The diagram shows how you can prepare a host or a host cluster managed by a compute manager,
such as vCenter Server.
The slide shows that the ESXi hosts prepared for NSX-T Data Center, sa-esxi-04.vclass.local and
sa-esxi-05.vclass.local, are automatically listed as transport nodes in the NSX Manager simplified
UI.
You can check the status of host transport nodes in the System view of the dashboard. Point to the
circle, and messages appear. These messages provide details about the nodes. For example, in the
screenshot, out of seven nodes, four nodes are configured as transport nodes (green color) and
three are not configured for NSX-T Data Center (gray circle).
After an ESXi host is prepared for NSX-T Data Center, VIBs are installed for the host to
participate in networking and security operations.
The functions of the VIBs are defined as follows:
• nsx-aggservice: NSX-T Data Center aggregation service runs in the management plane nodes
and fetches the runtime state of NSX-T Data Center components.
• nsx-da: Collects discovery agent data about the hypervisor OS version, VMs, and network
interfaces.
• nsx-exporter: Provides host agents that report runtime state to the aggregation service.
• nsx-host: Provides metadata for the VIB bundle that is installed on the host.
• nsx-lldp: Provides support for the Link Layer Discovery Protocol (LLDP).
• nsx-sfhc: Service fabric host components (SFHC) provides a host agent for managing the life
cycle of the hypervisor as a fabric host.
Transport nodes are hypervisor hosts, bare metal servers, and NSX Edge instances participating in
NSX-T Data Center.
One or more VMs can be attached to a segment. The VMs connected to a segment can
communicate with each other through tunnels between hosts.
Segments are similar to VLANs, in that they provide network connections to which you can attach
VMs. Each segment has a virtual network identifier (VNI), similar to a VLAN ID.
Tunneling is the basis for implementing NSX-T Data Center overlay networks. It provides
isolation between the underlay network (physical network) and an overlay network (virtual
network). This isolation is achieved by encapsulating the overlay packet within an underlay
packet.
Overlay logical networking, or tunneling, deploys a layer 2 network on top of an existing layer 3
network by encapsulating frames inside of packets and transferring the packets over an underlying
transport network. The underlying transport network can be another layer 2 network or it can cross
layer 3 boundaries.
• TEPs are the source and destination IP addresses used in the external IP header to uniquely
identify the hypervisor hosts originating and terminating the NSX-T Data Center
encapsulation of overlay frames.
• TEPs typically carry two types of traffic: VM traffic and control (health check) traffic.
NSX-T Data Center uses a tunneling encapsulation mechanism called Generic Network
Virtualization Encapsulation (GENEVE).
GENEVE was developed by VMware, Microsoft, Red Hat, and Intel. It is a standard under
development (draft-ietf-nvo3-geneve-07). The GENEVE protocol is compatible with other
tunneling protocols (such as VXLAN, NVGRE, and STT) and is considered to be more flexible.
The GENEVE protocol encapsulates only data plane packets. GENEVE-encapsulated packets are
designed to be communicated over standard back planes, switches, and routers:
• Packets are sent from one tunnel endpoint to one or more tunnel endpoints using either unicast
or multicast addressing.
• The end-user application and the VMs in which the application is executing are not modified
in any way by the GENEVE protocol.
• The tunnel endpoint encapsulates the end-user IP packet in the GENEVE header.
• The completed GENEVE packet is transmitted to the destination endpoint in a standard User
Datagram Protocol (UDP) packet. Both IPv4 and IPv6 are supported.
• The receiving tunnel endpoint strips off the GENEVE header, interprets any included options,
and directs the end-user packet to its destination in the virtual network indicated by the tunnel
identifier.
To support the needs of network virtualization, the tunneling protocol draws on the evolving
capabilities of each type of device in both the underlay and overlay networks.
This process imposes a few requirements on the data plane tunneling protocol:
• The data plane is generic and extensible enough to support current and future control planes.
• Tunnel components are efficiently implemented in both hardware and software without
restricting capabilities to the lowest common denominator.
The GENEVE packet format consists of a compact tunnel header encapsulated in UDP over either
IPv4 or IPv6. A small fixed tunnel header provides control information, as well as a base level of
functionality and interoperability with a focus on simplicity. This header is then followed by a set
of variable options to allow for future development. The payload consists of a protocol data unit of
the indicated type, such as an Ethernet frame.
• Options Length (6 bits): This variable results in a minimum total GENEVE header size of 8
bytes and a maximum of 260 bytes.
• O (1 bit): Operations, Administration and Maintenance (OAM) packet. This packet contains a
control message instead of a data payload.
• Rsvd. (6 bits): The Reserved field must be zero on transmission and ignored on receipt.
• Protocol Type (16 bits): The field indicates the type of protocol data unit appearing after the
GENEVE header.
• Reserved (8 bits): The Reserved field must be zero on transmission and ignored on receipt.
• Virtual Network Identifier: Each logical network is identified by a unique VNI. The VNI
uniquely identifies the segment that the inner Ethernet frame belongs to. It is a 24-bit number
that is added to the GENEVE frame, allowing a theoretical limit of 16 million separate
networks. The NSX VNI range is 5000 through 16777216.
The base GENEVE header is followed by zero or more options in type-length-value format. Each
option consists of a 4-byte option header and a variable amount of option data interpreted
according to the type. GENEVE provides NSX-T Data Center with the complete flexibility of
inserting metadata in the type, length, and value fields that can be used for new features. One of
the examples of this metadata is the VNI. VMware recommends an MTU of 1600 to account for
the encapsulation header.
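To make the layout concrete, the sketch below packs an option-free 8-byte GENEVE base header using the field widths described above. The VNI value is illustrative.

  # Sketch: packing a minimal GENEVE base header (no options).
  import struct

  def geneve_header(vni, opt_len_words=0):
      version = 0                          # 2-bit version, currently 0
      byte0 = (version << 6) | (opt_len_words & 0x3F)  # Ver | Options Length
      byte1 = 0                            # O flag and reserved bits clear
      protocol_type = 0x6558               # Transparent Ethernet Bridging
      vni_and_reserved = (vni & 0xFFFFFF) << 8  # 24-bit VNI + 8 reserved bits
      return struct.pack("!BBHI", byte0, byte1, protocol_type, vni_and_reserved)

  hdr = geneve_header(vni=5000)            # first VNI in the NSX range
  print(len(hdr), hdr.hex())               # 8 0000655800138800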
The GENEVE protocol offers the following benefits:
• Can add new metadata to the encapsulation without revising the GENEVE standard
• Provides the same kind of NIC offloads as VXLAN (check compatibility list)
• The ESXi host is configured as a transport node with TEP IP: 172.20.11.51, and PROD-
NVDS is installed on the hypervisor during the transport node creation. The VMkernel
interface VMK10 is created on the ESXi host.
• The KVM host is configured as a transport node with TEP IP: 172.20.11.52, and PROD-
NVDS is installed on the hypervisor during the transport node creation. The nsx-vtep0.0
interface is created on the KVM host.
• The ESXi and KVM transport nodes are configured in the transport zone named PROD-
OVERLAY-TZ.
• Transport node A is running VM-1 with IP Address 10.1.10.11 and MAC address ABC.
• Transport node B is running VM-2 with IP Address 10.1.10.12 and MAC address DEF.
• When VM-1 communicates with VM-2, the source hypervisor encapsulates the packet with
the GENEVE header and sends it to the destination transport node, which decapsulates the
packet and forwards it to the destination VM.
Although the management plane and central control plane (CCP) run on the same virtual
appliance, they perform different functions.
The NSX cluster can scale to a maximum of three NSX Manager nodes running the management
and central control planes.
• The nsx-proxy agent is the local control plane agent running on each ESXi transport node.
• The CCP sends the information to the nsx-proxy agent running on the ESXi hypervisor, and
the nsx-proxy agent updates NestDB.
• The cfgAgent running on the ESXi host uses the nsxt-vdl2 module to create and configure
layer 2 segments.
• The configuration changes are performed through the cfgAgent and are written into the in-
memory database called NestDB.
The directional arrows represent the ports used between the various components of NSX-T Data
Center.
If your VM is on a KVM host, you need to manually create a logical port to attach the VM:
• A segment contains multiple switch ports. Routers, VMs, containers, and so on can connect to
a segment through the segment ports.
• After attaching a VM to a segment, you can add segment ports to the segment.
When creating a segment, you select an uplink in the Uplink & Type drop-down menu:
• You can also select None, which means that the segment is a logical switch that is not
connected to any gateway.
• If the uplink connects to a Tier-1 gateway, you must select a type: Flexible or Fixed.
The segments from NSX Manager appear in vCenter Server as opaque networks. These segments
are not port groups in vSphere.
A segment might have multiple switching ports. Entities such as routers, VMs, or containers can
connect to a segment through the segment ports. After attaching a VM to a segment, you can add
logical ports to the segment.
Depending on your host, the configuration for connecting a VM to a segment can vary.
If your ESXi host is managed by vCenter Server, you can access a hosted VM through the
vSphere Web Client UI. By editing the VM settings, you attach the VM to a desired segment.
If the ESXi host on which your VM resides is a standalone host, see the NSX-T Data Center
Administration Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.4/administration/GUID-FBFD577B-745C-4658-B713-A3016D18CB9A.html.
If your VM resides on a KVM host, you must manually create a logical port and attach the VM:
1. From the KVM CLI, run the virsh dumpxml <VM_name> | grep interfaceid
command and record the UUID information.
2. In the NSX Manager simplified UI, add a segment port by configuring the UUID, attachment
type, and other settings.
For more information about the creation of the UUID, see VMware knowledge base article
2150850 at https://kb.vmware.com/s/article/2150850.
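Step 2 can also be performed through the management API by creating a logical port whose VIF attachment carries the UUID recorded in step 1. In the sketch below, the manager address, credentials, switch ID, and interface UUID are placeholders.

  # Sketch: attaching a KVM VM's interface to a segment by creating a
  # logical port with a VIF attachment. All IDs are placeholders.
  import requests

  body = {
      "display_name": "t1-web-01-port",
      "logical_switch_id": "LOGICAL-SWITCH-UUID",  # switch backing the segment
      "admin_state": "UP",
      "attachment": {"attachment_type": "VIF",
                     "id": "INTERFACEID-FROM-VIRSH"},  # UUID from step 1
  }
  resp = requests.post("https://nsx-mgr.example.local/api/v1/logical-ports",
                       json=body, auth=("admin", "VMware1!VMware1!"),
                       verify=False)
  resp.raise_for_status()
  print("logical port ID:", resp.json()["id"])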
When adding segment ports, you select a type for the port:
• Leave this field blank except for use cases such as containers or VMware HCX.
• If the type is set to Child, enter the parent virtual interface (VIF) ID in the Context ID text
box.
• If the type is set to Independent, enter the transport node ID in the Context ID text box.
After you successfully set up the segment and attach VMs to it, you can test the connectivity
between VMs on the same segment. In the example, you can test the connectivity in the following
way:
1. Using SSH or the VM console, log in to VM T1-Web-01 (172.16.10.11), which is attached to
the segment Web-LS.
2. Ping VM T1-Web-03 172.16.10.13 (which resides on another KVM host). This VM is also
attached to the segment Web-LS.
NSX-T Data Center supports several types of segment profiles and maintains one or more system-
defined default segment profiles:
• The IP Discovery profile uses DHCP snooping, Address Resolution Protocol (ARP)
snooping, or VMware Tools to learn the VM MAC and IP addresses.
• The MAC Discovery profile supports two functionalities: MAC learning and MAC address
change.
• SpoofGuard prevents traffic with incorrect source IP and MAC addresses from being
transmitted.
• Segment Security provides stateless layer 2 and layer 3 security by checking the ingress
traffic to the segment and matching the IP address, MAC address, and protocols to a set of
allowed addresses and protocols. Unauthorized packets are dropped.
• QoS (Quality of Service) provides high-quality and dedicated network performance for
preferred traffic.
You cannot edit or delete the default segment profiles, but you can create custom segment profiles.
Only one segment profile of each type can be associated with a segment or segment port. For
example, two QoS segment profiles cannot be associated with the same segment or segment port.
When a segment profile is associated with or disassociated from a segment, the segment profile for
the child segment ports is applied based on the following criteria:
• If the parent segment has a profile associated with it, the child segment port inherits the
segment profile from the parent.
• If the parent segment does not have a segment profile associated with it, a default segment
profile is assigned to the segment, and the segment port inherits that default segment profile.
• If you explicitly associate a custom profile with a segment port, this custom profile overrides
the existing segment profile.
If you associate a custom segment profile with a segment, but want to retain the default segment
profile for one of the child segment ports, you must make a copy of the default segment profile
and associate it with the specific segment port.
ARP suppression minimizes ARP traffic flooding within VMs connected to the same segment.
The VMware Tools IP Discovery method can also provide the VM's configuration information
and is available for ESXi-hosted VMs only.
The IP Discovery profile might be used in the following scenario: The distributed firewall depends
on the IP-to-port mapping to create firewall rules. Without IP Discovery, the distributed firewall
must find the IP of a logical port through SpoofGuard and manual address bindings, which is a
cumbersome and error-prone process.
• The discovered MAC and IP addresses are used to achieve ARP and ND (Neighbor
Discovery) suppression, which minimizes traffic between VMs connected to the same logical
switch.
• The addresses are also used by the SpoofGuard and distributed firewall components. The
distributed firewall uses the address bindings to determine the IP address of objects in firewall
rules.
• DHCP Snooping and DHCP Snooping for IPv6 inspect the DHCP packets exchanged between
the DHCP client and server to learn the IP and MAC addresses.
• ARP Snooping inspects the outgoing ARP and GARP (gratuitous ARP) packets of a VM to
learn the IP and MAC addresses.
• VMware Tools is software that runs on an ESXi-hosted VM and can provide the VM's
configuration information, including MAC and IP or IPv6 addresses. This IP discovery
method is available for VMs running on ESXi hosts only.
• ND Snooping is the IPv6 equivalent of ARP Snooping. It inspects neighbor solicitation (NS)
and neighbor advertisement (NA) messages to learn the IP and MAC addresses.
For each port, NSX Manager maintains an ignore bindings list, which contains IP addresses that
cannot be bound to the port. You can only update this list using the API. You can also use this
method to delete a previously discovered IP for a given port.
You can enable ARP Snooping, DHCP Snooping, or VMware Tools to create a custom IP Discovery
segment profile that learns the IP and MAC addresses to ensure the IP integrity of a segment.
The MAC Discovery profile supports source MAC address learning:
• Source MAC address-based learning is a common feature in the physical world for learning
the MAC address of a machine. The MAC Learning feature provides network connectivity to
deployments where multiple MAC addresses are configured behind one vNIC, for example, in
a nested hypervisor deployment where an ESXi VM runs on an ESXi host and multiple VMs
run inside the ESXi VM.
• Without MAC Learning, when the ESXi VM’s vNIC connects to a segment port, its MAC
address is static. VMs running inside the ESXi VM do not have network connectivity because
their packets have different source MAC addresses. With MAC Learning, the source MAC
address of every packet coming from the vNIC is inspected, the MAC address is learned, and
traffic to and from that address is forwarded.
• MAC Learning also supports Unknown Unicast Flooding. When a unicast packet with an
unknown destination MAC address is received by a port, the packet is flooded out on all
segment ports that have MAC Learning and Unknown Unicast Flooding enabled. This
property is enabled by default, but only if MAC Learning is enabled.
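The behavior can be summarized with a small table-driven sketch: learn source MACs per port, forward known destinations, and flood unknown unicast only to ports that permit it. This is an illustration, not the N-VDS implementation.

  # Conceptual sketch of MAC Learning with Unknown Unicast Flooding.
  mac_table = {}                        # learned MAC -> segment port
  flooding_ports = {"p1", "p2", "p3"}   # ports with flooding enabled

  def receive(port, src_mac, dst_mac):
      mac_table[src_mac] = port         # learn the source MAC on this port
      out = mac_table.get(dst_mac)
      if out is not None:
          return [out]                  # known destination: single port
      # Unknown unicast: flood to the other ports that allow flooding.
      return sorted(flooding_ports - {port})

  # Nested-hypervisor case: inner VMs behind one vNIC on port p1.
  print(receive("p1", "aa:aa", "bb:bb"))  # unknown: flooded to ['p2', 'p3']
  print(receive("p2", "bb:bb", "aa:aa"))  # learned earlier: ['p1']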
The MAC Discovery profile also supports the ability of a VM to change its MAC address:
• A VM connected to a port with MAC Change enabled can run an administrative command to
change the MAC address of its vNIC and still send and receive traffic on that vNIC.
• This feature (disabled by default) is used when a VM needs the ability to change its MAC
address and yet not lose network connectivity.
If you enable MAC Learning or MAC Change, you should also enable SpoofGuard to
improve security.
For more information about creating a MAC Discovery profile and associating the profile with a
segment or a port, see the NSX-T Data Center Administration Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/nsxt_24_admin.pdf.
QoS provides high-quality and dedicated network performance for preferred traffic that requires
high bandwidth. The QoS mechanism achieves this performance by providing sufficient
bandwidth, controlling latency and jitter, and reducing data loss for preferred packets even with
network congestion. This level of network service is provided by using the existing network
resources efficiently.
The QoS profile supports two methods:
• Class of Service (CoS): Marks the packet’s layer 2 header to specify its priority
• Differentiated Services Code Point (DSCP): Inserts a code value into the packet’s layer 3
header for prioritization.
The layer 2 CoS allows you to specify priority for data packets when traffic is buffered in the
segment due to congestion. The layer 3 DSCP detects packets based on their DSCP values. CoS is
always applied to the data packet regardless of the trusted mode.
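For reference, DSCP occupies the upper six bits of the IP header's ToS byte. The sketch below shows the encoding from an application's point of view using a standard socket option; the QoS segment profile performs equivalent marking in the data plane. The DSCP value and destination address are illustrative.

  # Illustration of DSCP encoding in the IP ToS byte.
  import socket

  DSCP_AF41 = 34                 # an example "preferred traffic" class
  tos = DSCP_AF41 << 2           # shift past the 2 ECN bits

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
  sock.sendto(b"marked datagram", ("192.0.2.10", 9000))  # documentation address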
The Segment Security profile provides stateless layer 2 and layer 3 security by checking the
ingress traffic to the segment and dropping unauthorized packets sent from VMs. The profile
matches the IP address, MAC address, and protocols to a set of allowed addresses and protocols.
You can configure the Bridge Protocol Data Unit (BPDU) filter, DHCP snooping, DHCP server
block, and rate limiting options:
• BPDU Filter: Clicking the BPDU Filter toggle button to on enables BPDU filtering. When
the BPDU filter is enabled, all of the traffic to the BPDU destination MAC address is blocked.
When enabled, the BPDU filter also disables the Spanning Tree Protocol (STP) on the logical
segment ports because these ports are not expected to take part in STP.
• BPDU Filter Allow List: You click the destination MAC address from the BPDU destination
MAC addresses list to allow traffic to the permitted destination.
• Clicking the Non-IP Traffic Block toggle button to on allows only IPv4, IPv6, ARP, GARP,
and BPDU traffic. The rest of the non-IP traffic is blocked. The permitted IPv4, IPv6, ARP,
GARP, and BPDU traffic is based on other policies set in address binding and SpoofGuard
configurations. By default, this option is disabled to allow non-IP traffic to be handled as
regular traffic.
• You can configure rate limits for the ingress or egress broadcast and multicast traffic. Rate
limits are configured to protect the segment or the VM from threats such as broadcast storms.
To avoid any connectivity problems, the minimum rate limit value must be >= 10 pps.
SpoofGuard addresses the following use cases:
• Ensuring that the IP addresses of VMs cannot be altered without intervention. If you do not
want VMs to alter their IP addresses without proper change control review, you can use
SpoofGuard to ensure that the VM owner cannot simply alter the IP address and continue
working unimpeded.
• Ensuring that the distributed firewall rules are not inadvertently (or deliberately) bypassed.
For distributed firewall rules created using IP sets as sources or destinations, a VM could have
its IP address forged in the packet header, thereby bypassing the rules in question.
• At the port level, the allowed MAC, VLAN, or IP whitelist is provided through the Address
Bindings property of the port. When the VM sends traffic, it is dropped if its MAC, VLAN, or
IP address does not match the MAC, VLAN, or IP properties of the port. The port-level
SpoofGuard deals with traffic authorization for that port.
• At the segment level, the allowed MAC, VLAN, or IP whitelist is provided through the
Address Bindings property of the segment. This property is typically an allowed IP range or
subnet for the segment, and the segment-level SpoofGuard deals with traffic authorization.
Traffic must be permitted by SpoofGuard at both the port and segment levels before it is allowed
into a segment. Enabling or disabling port- and segment-level SpoofGuard can be controlled using
the SpoofGuard segment profile.
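Conceptually, the port-level check reduces to comparing the source addresses of each frame against the port's address bindings, as in this illustrative sketch (names and addresses are placeholders).

  # Conceptual sketch of a port-level SpoofGuard check.
  port_bindings = {
      "web-port-1": {("00:50:56:aa:bb:01", "172.16.10.11")},
  }

  def spoofguard_allows(port, src_mac, src_ip):
      # Traffic is dropped unless the (MAC, IP) pair matches a binding.
      return (src_mac, src_ip) in port_bindings.get(port, set())

  print(spoofguard_allows("web-port-1", "00:50:56:aa:bb:01", "172.16.10.11"))  # True
  print(spoofguard_allows("web-port-1", "00:50:56:aa:bb:01", "172.16.10.99"))  # False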
The host N-VDS learns the MAC-to-IP association by snooping the ARP and DHCP traffic. The
learned information is pushed from each host to the control plane.
All broadcast, unknown unicast, and multicast (BUM) traffic is treated the same: flooded to all
participating hypervisors in the segment. The replication is performed in software.
Each host transport node is a tunnel endpoint. Each TEP has an IP address. These IP addresses can
be in the same subnet or in different subnets, depending on your configuration of IP pools or
DHCP for your transport nodes.
When two VMs on different hosts communicate directly and ARP is resolved, unicast-
encapsulated traffic is exchanged between the two TEP IP addresses without any need for
flooding. However, as with any layer 2 network, sometimes traffic that is originated by a VM,
such as an ARP request, needs to be flooded, which means that the packet needs to be sent to all of
the other VMs belonging to the same segment. This is the case with layer 2 BUM traffic.
In the diagram, VM2, residing on transport node 2 (TN2), needs to send traffic to VM9, residing
on TN9. VM9’s MAC address is unknown to TN2 or the control plane. Therefore, VM2 sends out an
ARP request (broadcast frame) seeking VM9’s MAC address. TN2 floods this ARP request frame
out to all other transport nodes within VNI 5000. VM9 on TN9 receives the ARP request and
responds with an ARP reply. ARP tables on hosts are then updated to reduce future flooding.
• Head Replication mode: This mode is also known as Source Mode or Headend Replication.
The source host simply duplicates each BUM frame and sends a copy to each TEP (on a
particular VNI) that it knows of.
• Hierarchical Two-Tier Replication (default mode): This mode is also known as the MTEP
mode. It involves a host in another L2 domain that performs replication of BUM traffic to
other hosts within the same VNI.
• TN1 replicates (because the control plane does not have the desired information) to TN2 and
TN3 because they are in the same L2 domain.
• Meanwhile, TN1 also needs to replicate the packet to the remote transport nodes (TN4 and
TN5 in a L2 domain and TN7, TN8, TN9 in another L2 domain).
• Because TN6 does not participate in VNI 5000, the packet is not replicated to TN6.
• TN1 also sends a copy of the BUM packet to each remote MTEP.
The role of MTEP is to replicate the received BUM packet locally and forward it to other TNs
within the same L2 domain.
• Because TN6 does not participate in VNI 5000, the packet is not sent to TN6.
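The difference in the number of copies that the source must send can be sketched as follows. The topology mirrors the example above; the grouping logic and names are illustrative only.

  # Conceptual sketch of the two BUM replication modes. TEPs are grouped
  # by L2 domain (TEP subnet); TN6 is absent because it is not in the VNI.
  domains = {
      "rack-1": ["TN1", "TN2", "TN3"],
      "rack-2": ["TN4", "TN5"],
      "rack-3": ["TN7", "TN8", "TN9"],
  }

  def head_replication(source):
      # Source mode: the source sends one copy to every remote TEP itself.
      return [t for teps in domains.values() for t in teps if t != source]

  def two_tier_replication(source, source_domain="rack-1"):
      # MTEP mode: one copy per local peer, plus one copy to a single
      # MTEP per remote domain, which then replicates locally.
      copies = [t for t in domains[source_domain] if t != source]
      for name, teps in domains.items():
          if name != source_domain:
              copies.append(teps[0] + " (MTEP for " + name + ")")
      return copies

  print("head:", head_replication("TN1"))          # 7 copies sent by TN1
  print("two-tier:", two_tier_replication("TN1"))  # 4 copies sent by TN1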
• NSX-T Data Center is designed to meet the demands of containerized workload, multi-
hypervisor, and multicloud environments.
• The logical routing functionality focuses on multitenant environments. Gateways can support
multiple instances where complete separation of tenants and networks are required.
• Logical routing is optimized for cloud environments. It is suited for containerized workload,
multi-hypervisor, and multicloud data centers.
• The distributed routing architecture provides optimal routing paths. Routing is done closest to
the source. For example, traffic from two VMs on different subnets residing on the same host
can be routed in the kernel. The traffic does not need to leave the host to get routed, thereby
avoiding hairpinning.
• NSX Edge transport nodes that host gateways provide network services that cannot be
distributed to hosts.
• No dynamic routing protocol is needed between the two-tiered gateways, simplifying data
center routing.
• Gateways provide centralized services. Layer 3 functionalities, such as NAT, are provided
through the services running on NSX Edge nodes.
• When multiple gateway instances are installed, multitenancy and network separation are
supported on a single gateway. Logical routing is enhanced for most cloud use cases that
involve multiple service providers and tenants.
• North-south routing enables tenants to access public networks. North-south refers to traffic
leaving or entering a tenant administrative domain. Connections to and from entities outside
of the tenant's premises can be considered north-south connectivity.
• East-west traffic flows between various networks within the same tenant. In other words,
traffic is sent between logical networks (between logical switches) under the same
administrative domain.
Tier-1 Gateways have downlink ports to connect to NSX-T Data Center logical switches and
uplink ports to connect to NSX-T Data Center Tier-0 Gateways:
• The logical interface serves as the default gateway for the connected network.
• Tier-1 Gateways can provide network services such as NAT, load balancing, edge firewall, and VPN.
The distributed router (DR) modules, which are placed on the transport nodes, can forward packets
based on the given setup and routing decisions. However, these modules cannot perform certain
required functions. In those cases, application traffic is diverted through a service router (SR) in a
given edge node to perform these functions.
Some services in NSX-T Data Center are not distributed, including physical infrastructure
connectivity, NAT, DHCP server, NSX Edge firewall, logical load balancer, different VPN
services, metadata proxy for OpenStack, and more.
One DR instance runs on the NSX Edge node to support connectivity to the service router on that
same NSX Edge node. When the service and distributed routers are created, they are
interconnected through the router-link port between them.
NSX Edge nodes are not the same as the Edge Services Gateway in NSX for vSphere.
Gateways are distributed across the kernel of each host. A gateway can be deployed as either a
Tier-0 or a Tier-1 Gateway:
The Tier-1 Gateway must connect to the Tier-0 Gateway, with the exception of a single-tier
topology in which Tier-0 is directly connected to upstream physical gateways.
The Tier-1 Gateway does not require an edge node if no services are used. It has preprogrammed
(by the management plane) connections toward its upstream Tier-0 Gateway.
Both Tier-0 and Tier-1 Gateways support stateful services, such as NAT. Stateful services are
centralized on gateway nodes.
The stateless function supported by Tier-0 and Tier-1 Gateways is routing. Unlike stateful services,
no state must be maintained in routing.
In logical router deployment in NSX-T Data Center, different types of connections require
different types of interfaces:
• The uplink interface provides connections to the external physical infrastructure. VLAN and
overlay interface types are supported, depending on the use case. The uplink interface is
where the external BGP peering can be established. External service connections, such as
IPSec VPN, can also be used through the uplink interface.
• The downlink interface connects workload networks (where endpoint VMs are running) to the
routing infrastructure. A downlink interface is configured to connect to a logical switch (local
subnet). It is the interface that provides the default gateway for the VMs in that subnet.
• RouterLink is a type of interface that connects Tier-0 and Tier-1 Gateways. The interface is
created automatically when Tier-0 and Tier-1 Gateways are connected. It uses a subnet
assigned from the 100.64.0.0/10 IPv4 address space.
• The centralized service port (CSP) is a special purpose port to enable centralized services for
mainly VLAN-based networks. North-south service insertion is another use case that requires
a centralized service port to connect a partner appliance and redirect north-south traffic for
partner services. Centralized service ports are supported on both active-standby Tier-0 logical
routers and Tier-1 routers. Firewall, NAT, and VPNs are supported on this port.
Support for VLAN-backed downlinks (centralized service port) to Tier-0 and Tier-1 Gateways
was introduced in NSX-T Data Center 2.2. You can extend NSX Edge services to customer
environments with only VLAN-based networks. Downlink interfaces can also be used for VLAN-
based connections.
Gateway firewall rules and NAT configuration can be applied directly to CSP.
In a single-tier deployment, only Tier-0 gateways are used (no Tier-1). The segments are directly
connected to the Tier-0 layer. The upstream connectivity is provided by the service provider. The
southbound connectivity is managed by the tenant.
In the diagrams, Segments A, B, C, and D are connected to Tier-1 Gateways. The Tier-0 Gateway
is also known as the provider gateway. Tier-1 Gateways can be owned and configured by the
tenants, depending on the business requirements.
The two-tier routing topology is not mandatory. If the provider and the tenant do not need to be
separated, a single-tier topology can be used.
The Tier-0 Gateway is owned and configured by the provider. The Tier-1 Gateway is owned and
configured by the tenants and is typically provisioned by cloud management platforms (CMPs).
The purpose of NSX Edge is to provide computational power to deliver IP routing and services.
NSX Edge is an important part of the NSX-T Data Center transport zone.
NSX Edge nodes provide the administrative background and computational power for dynamic
routing and services. Edge nodes are appliances with pools of capacity that can host distributed
routing and nondistributed services. NSX Edge nodes provide high availability, using active-active
and active-standby models for resiliency.
NSX Edge is commonly deployed in DMZs and multitenant cloud environments, where it creates
virtual boundaries for each tenant.
For NSX Edge bare metal NIC requirements, see the supported adapters listed on the NSX Edge
Bare Metal Requirements page in the NSX-T Data Center Installation Guide at
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/nsxt_24_install.pdf.
If the hardware is not listed, the storage, video adapter, or motherboard components might not
work on the NSX Edge node.
All transport nodes include the DR instance to provide localized and distributed routing functions.
The SR instances run on the edge transport nodes.
When VMs communicate with external entities, they pass through the edge nodes in their own
resource cluster. This process is true in reverse: VMs get traffic from external entities through the
edge nodes.
NSX-T Data Center edge cluster scaling and maximums define the maximum number of edge nodes
supported in an edge cluster and which combinations of node form factors (VM and bare metal)
can be used together.
When you deploy NSX Edge nodes, different types of requirements apply depending on the
deployment method. In general, all the requirements listed on the slide apply to both manual and
automated deployments, with the exception of the requirements related to installation media and
OVF or OVA templates.
On the Name and Description page of the Add Edge VM wizard, you configure the following
settings:
• Host name/FQDN
• Description
• Form Factor
NSX-T Data Center Edge nodes can be installed or deployed using various methods. If you prefer
an interactive edge installation, you can use a UI-based VM management tool, such as vSphere
Web Client connected to vCenter Server.
The image shows the option to deploy through vCenter Server or vSphere client. A wizard guides
you through the steps so that you can provide the required details.
This process does not register the NSX-T Data Center edge node with the management plane.
Additional command-line operations are required. When finished, you can verify the connectivity
of the edge node in various ways.
You can use the VMware OVF tool to install NSX Edge nodes. This tool can be downloaded from
the VMware portal and supports many different types of deployment, including deployment
operations in vCenter Server or vCloud Director.
The following extra options (defined by the --prop command-line switch) are available:
• nsx_ntp_0: Setting for the NTP server IP address for the appliance management
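For example, a scripted deployment might look like the following sketch. The property names
follow the pattern documented for the NSX Edge OVA (nsx_ip_0, nsx_gateway_0, and so on); the
datastore, network, and inventory path values are placeholders for your environment, and
ovftool prompts for the vCenter Server password:

ovftool --name=nsx-edge-01 --deploymentOption=medium \
  --X:injectOvfEnv --powerOn \
  --datastore=<datastore_name> \
  --net:"Network 0=<management_portgroup>" \
  --prop:nsx_hostname=nsx-edge-01 \
  --prop:nsx_ip_0=192.168.110.37 \
  --prop:nsx_netmask_0=255.255.255.0 \
  --prop:nsx_gateway_0=192.168.110.1 \
  --prop:nsx_dns1_0=192.168.110.10 \
  --prop:nsx_ntp_0=192.168.110.10 \
  --prop:nsx_isSSHEnabled=True \
  --prop:nsx_passwd_0=<root_password> \
  --prop:nsx_cli_passwd_0=<admin_password> \
  nsx-edge.ova \
  vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/Cluster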
Manual installation is also available when you install NSX Edge nodes on a bare metal server.
After the listed requirements are verified, the installation process should start automatically from
the installation media. Once the boot-up and power-on processes are complete, the system requests
an IP address through DHCP or requires manual entry.
By default, the root login password is vmware, and the admin login password is default.
Further setup procedures include enabling the interfaces and joining the edge node to the
management plane.
The preboot execution environment (PXE) boot can also be used to install NSX Edge nodes on a
bare metal platform.
This operation automates the installation process. You can preconfigure the deployment with all
the required network settings for the appliance.
The PXE method supports the NSX Edge node deployment only. It does not support NSX
Manager or NSX Controller deployments.
The manual installation of NSX Edge nodes does not include an automated procedure to ensure
that the management plane sees edge nodes as available resources. You must join NSX Edge with
the management plane so that they can communicate with each other. Joining NSX Edge nodes to
the management plane ensures that the edge nodes are available from the management plane as
managed nodes.
First, you must verify that you have the administration privileges to access NSX Edge nodes and
the NSX Manager simplified UI. Then you can join NSX Edge nodes to the management plane
using the CLI.
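A minimal sketch of the join workflow follows, assuming an NSX Manager at 192.168.110.15; the
thumbprint value and the registration output are illustrative.

On NSX Manager, retrieve the API certificate thumbprint:

nsx-manager> get certificate api thumbprint
<manager_thumbprint>

On the edge node, join the management plane and verify the result:

nsx-edge> join management-plane 192.168.110.15 username admin thumbprint <manager_thumbprint>
Password for API user: <admin_password>
Node successfully registered as Fabric Node: <node_uuid>

nsx-edge> get managers
- 192.168.110.15   Connected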
In the simplified UI, select Network > Fabric > Nodes and select the Edge Transport Nodes tab
to view the status of the edge nodes known by the NSX Manager or the management plane.
The Edge Transport Nodes tab lists the following categories:
• Configuration State
• Node Status
• Node Version
Clicking the information icon next to the node status provides additional information about the
reasons for a given status.
You can also enter the command set service ssh start-on-boot to set the SSH
service to autostart when the VM is powered on.
4. Use the command get service ssh to check the result.
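Taken together, the SSH-related commands look as follows on the edge node CLI; the output
fields are illustrative:

nsx-edge> start service ssh
nsx-edge> set service ssh start-on-boot
nsx-edge> get service ssh
Service name:    ssh
Service state:   running
Start on boot:   True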
When the edge node deployment is complete, you can verify connectivity in several ways.
Clicking +ADD starts the process for creating an edge cluster. You must create an edge cluster
profile, either separately or through the Add Edge Cluster wizard, including the edge node
members of this planned cluster.
Edge node deployment requires specific interface assignments, particularly for the VM form
factor:
• One interface is dedicated to management traffic.
• The other interfaces must be assigned to the data path process that creates the overlay or
VLAN-based N-VDS.
All non-management links on the Edge Node will be used for the uplinks and tunnels. For
example, one might be used for a tunnel endpoint. The other might be used for an NSX Edge-to-
external physical uplink.
During N-VDS creation, the uplinks can be individually assigned per N-VDS. The number of uplink
interfaces is determined by the uplink profiles that they use.
When deploying NSX Edge nodes on a hypervisor or host that is already a transport node, you
can take different approaches to the deployment:
• If the host transport node (in this case, an ESXi host) has multiple virtual switches running,
for example, one switch is from vSphere (either a vSphere distributed switch or a vSphere
standard switch) and an NSX-managed virtual distributed switch (N-VDS) from NSX,
separate uplink interfaces must be used for each switch.
– The NSX Edge vNICs are attached to the standard switch or the distributed switch.
– Edge nodes can use a virtual distributed switch or virtual standard switch, which can use
different uplinks per the above requirement.
– This installation does not require the edge node TEP IP range to be different from the
host transport node TEP IP address range.
• If the transport node uses only the N-VDS, you must deploy the edge node using separate
VLAN-backed logical switches for uplink connectivity.
Depending on the environment, the order of the configuration tasks can vary.
Before configuring the Tier-0 Gateway, you should verify that your NSX Controller cluster is
stable, that at least one NSX Edge node is installed, and that an NSX Edge cluster is configured.
After you create the Tier-0 and Tier-1 Gateways, you must manually connect the gateways.
The gateways are not automatically connected to each other during the creation process. The
management plane does not know automatically which Tier-1 instance should connect to which
Tier-0 instance.
After you manually connect these instances, the management plane programs the routes in these
instances to establish connectivity between the tiers.
Each Tier-0 Gateway can have multiple uplink connections, depending on the requirements and
the actual configuration.
In the example, two different segments are configured on two active edge nodes in the cluster.
In the first step, you create a Tier-0 Gateway. Later, you can select whether the configuration
should be continued.
Tier-1 Gateways have downlink ports for connecting to NSX logical switches, and gateway-link
ports for connecting to NSX Tier-0 Gateways.
When connecting a segment to a gateway, the subnet or gateway IP address must be configured.
The Tier-1 Gateway is created and the interfaces for various logical networks are configured. Now
you can verify the east-west connectivity within the tenant environment.
Using route advertisement ensures that the networks defined for tenant segments are available for
the connected Tier-0 Gateway, which, in turn, can advertise them accordingly.
In the diagram and the command output, the VM web-sv-01a (172.16.10.11) can ping the Tier-0
Gateway (192.168.100.2) and the upstream physical router (192.168.100.1), assuming routing is
configured on the physical router.
Web-sv-01a can also ping a remote VM 172.20.10.80.
Complete north-south connectivity is now established.
The slide highlights the segment that is used to transmit traffic between T0_DR and T0_SR. This
image shows only a logical view.
Tier-0 Gateways can use only BGP as a dynamic routing protocol. The Tier-0 Gateway BGP
topology should be configured with redundancy and symmetry between the Tier-0 Gateways and
the external peers.
BGP is the only supported dynamic routing protocol on the Tier-0 Gateway in NSX-T Data
Center.
To use the edge node CLI to verify NSX Edge BGP connections, you follow these steps:
1. Log in to the edge node CLI.
2. Run the get logical-routers command to acquire the Virtual Routing and
Forwarding (VRF) number of the Tier-0 service router, which is SR-T0-LR-01 in the
example.
3. Enter the vrf <vrf_number> command to enter the Tier-0 service gateway context. This
command restricts the scope of the output of the commands to the configured VRF.
4. Enter the get bgp neighbor or get bgp neighbor summary commands to verify
that the BGP neighbor state is established.
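An illustrative session for these steps, assuming the Tier-0 service router is in VRF 1 and
peers with a physical router at 192.168.100.1 (output abridged):

nsx-edge> get logical-routers
nsx-edge> vrf 1
nsx-edge(tier0_sr)> get bgp neighbor summary
Neighbor         AS      State
192.168.100.1    64999   Established

If the state is not Established, verify IP connectivity to the peer and the local and remote AS
numbers configured on both sides.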
You can enable BFD per BGP neighbor and globally per gateway. The protocol timers can be fine-
tuned to meet environmental needs. The BFD timer ranges from 300 milliseconds to 10,000
milliseconds. The default is 1,000 milliseconds with three retries.
IP prefix lists are a way to define subsets or lists of IP addresses. This way of grouping is not the
same as defining a subnet. The IP prefix list defines a group of different subnets and individual IP
addresses as well as an action to either allow or deny those IP addresses. Additionally, using le or
ge prefix modifications, you can limit or extend the subnet or IP range.
In NSX-T Data Center, you can use IP prefix lists for various purposes, such as BGP filtering.
For example, you can add the IP address 192.168.100.3/24 to the IP prefix list and deny
redistribution of that route to the northbound gateway. As a result, all IP addresses except
192.168.100.3/24 are shared with the gateway.
You can also append an IP address with less-than-or-equal-to (le) and greater-than-or-equal-to (ge)
modifiers to grant or limit route redistribution.
In the example, the IP prefix list permits network prefixes 10.0.0.0/8, so that they can be
advertised out by the Tier-0 Gateway to its upstream BGP neighbors. This prefix list also denies
the 192.168.0.0/24 network prefixes with masks greater than or equal to 26 bits and less than or
equal to 30 bits in length. This configuration means that the Tier-0 Gateway cannot redistribute
these prefixes to its upstream BGP neighbors.
Prefixes that are not specifically permitted in a prefix list are denied implicitly. If you need to
reverse this behavior (permit all other routes that are not specifically denied), you can change the
action of the default rule prefixlist-out-default to Permit.
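For reference, the equivalent configuration expressed as a REST API call might look like the
following sketch. It assumes that the policy API exposes prefix lists under the Tier-0 Gateway;
the Tier-0 ID (T0-LR-01) and prefix list ID are illustrative:

PATCH /policy/api/v1/infra/tier-0s/T0-LR-01/prefix-lists/pl-example
{
  "display_name": "pl-example",
  "prefixes": [
    { "network": "10.0.0.0/8", "action": "PERMIT" },
    { "network": "192.168.0.0/24", "ge": 26, "le": 30, "action": "DENY" }
  ]
}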
A route map consists of a sequence of IP prefix lists, BGP path attributes, and an associated
action. The gateway scans the sequence for an IP address match. When a match occurs, the
gateway performs the action and stops scanning.
Route maps can be referenced at the BGP neighbor level for route redistribution. When IP prefix
lists are referenced in route maps, and the route map action of permitting or denying is applied, the
action specified in the route map sequence overrides the specification in the IP prefix list.
By default, the BGP process denies routes received from a peer if the route has its own
autonomous system number (ASN) in the route update. The receiving gateway considers this route
an internal route and denies it to avoid routing loops.
However, this scenario can be valid when a single customer has two locations interconnected to
the same service provider. In this case, the BGP neighbors are configured with the allowas-in
option to receive routes with the same ASN.
In the example, Company X has two locations: Site-A and Site-B. Both sites belong to AS 64511.
Both sites are connected to an ISP.
Route 1 (RT1) advertises network prefix x.x.x.x to the ISP. The ISP gateway, in turn, advertises
x.x.x.x to RT2.
The ASNs (AS path) are recorded in the advertised prefix as it traverses each AS. By default, RT2
does not accept the BGP advertised prefix x.x.x.x because RT2 sees its own AS number 64511 in
it. However, x.x.x.x is a legitimate network residing at another site (Site-A). In this scenario, the
network administrator can enable the allowas-in option so that RT2 accepts the route
advertisement for x.x.x.x even though its own ASN 64511 is in the AS path.
The multipath relax feature is available beginning with NSX-T Data Center 2.4.
For application load-balancing purposes, the same prefix should be advertised from multiple BGP
gateways. From the perspective of other devices, this prefix includes BGP paths with different
AS_PATH attribute values but the same AS_PATH attribute lengths.
BGP implementations support load-sharing over the above-mentioned paths. This feature is
sometimes known as multipath relax or multipath multiple-AS and enables equal-cost multipath
(ECMP) routing across different neighboring ASNs, if all other attributes (weight, local
preference, and so on) are equal.
In the diagram, the network prefix 200.1.1.0/24 in AS 100 is advertised by RT1 to its peers: RT2
in AS 200 and RT3 in AS 300.
RT2 and RT3 both advertise the prefix 200.1.1.0/24 to RT4 in AS 400.
RT4 can reach the 200.1.1.0/24 network through two paths: one through AS 200 and the other
through AS 300. Both of these paths have equal cost and path length.
iBGP and two related options, set local-preference and set next-hop-self, are
supported. These settings offer flexibility in exchanging routing information between logical space
and the fabric.
Inter-SR routing is a new feature available with the NSX-T Data Center 2.4 release.
Grouping edge nodes offers the benefits of high availability for edge node services. The service
router runs on an edge node and has two modes of operation: active-active or active-standby.
Active-active mode is offered on NSX Edge:
• Logical routing is active on more than one NSX Edge node at a time.
Active-active is a high availability mode where a gateway is hosted on more than one edge node at
a time:
• When one node fails, traffic is not disrupted, but bandwidth is constrained.
• High availability can be optionally enabled during the creation of a Tier-0 Gateway.
• By default, the active-active mode is used. In the active-active mode, traffic is load-balanced
across all members.
Stateful services such as NAT and firewall cannot be used in this mode.
Active-standby is a high availability mode where a gateway is operational on only a single edge
node at a time.
This mode is required when stateful services are enabled. Stateful services typically require
tracking of connection state, for example, sequence number check. As a result, traffic for a given
session needs to go through the same edge node.
Active-standby mode is supported on both Tier-1 and Tier-0 service routers (SRs).
With active-standby mode, the Tier-0 SR acts as a hot standby. The Tier-0 state is synchronized
but it does not actively forward traffic. Both SRs maintain BGP peering with the physical
gateway.
In active-standby mode, all traffic is processed by an elected active member. If the active member
fails, a new member is elected to be active.
For Tier-1, active-standby SRs have the same IP addresses northbound.
For Tier-0, active-standby SRs have different IP addresses northbound and have eBGP sessions
established on both links.
BFD is a network protocol used to detect faults between two forwarding engines connected by a
link. Failures are detected on a per-logical router basis. The conditions used to declare an edge
node down are the same in active-active and active-standby high availability modes.
To ensure uninterrupted routing of network traffic, the NSX Edge nodes exchange keepalive
messages, which are BFD sessions running between the nodes. Edge nodes in an edge cluster
exchange BFD keepalives on the management and tunnel interfaces. When the standby Tier-0
Gateway fails to receive keepalives on both management and tunnel interfaces, it announces itself
as active.
The BFD protocol provides fast detection of failure for forwarding paths or forwarding engines,
improving convergence. Edge VMs support BFD with a minimum BFD timer of one second with
three retries, providing a three-second failure detection time. Bare metal edges support BFD with a
minimum BFD Tx/Rx timer of 300 milliseconds with three retries, which implies a 900-millisecond
failure detection time.
If an active gateway loses all its BGP neighbor sessions and a standby gateway is configured,
failover occurs. An active SR on an edge node is declared down when all eBGP sessions on the
peer SR are down.
This scenario is only applicable on Tier-0 with dynamic routing.
eBGP is configured on the uplink between each NSX Edge node and the exterior physical
gateways.
eBGP status is also monitored during the keepalive exchanges.
The default keepalive interval is 60 seconds. The minimum time between advertisements is 30
seconds.
If all overlay tunnels to the compute hypervisors are down, the active edge node is not receiving
any tunnel traffic from compute hypervisors. Then the standby edge node takes over.
When an NSX-T Data Center segment requires a layer 2 connection to a VLAN-backed port group
or needs to reach another device, such as a gateway which resides outside of an NSX-T Data
Center deployment, you can use an NSX-T Data Center layer 2 bridge.
The main benefits of logical bridging are as follows:
• It provides high throughput using Data Plane Development Kit (DPDK)-based physical-to-
virtual bridging on edge nodes.
• In a KVM-only environment, layer 2 bridging can only be configured using edge nodes.
This diagram demonstrates how the physical server and the App1 server (virtual) can exist on the
same subnet.
The NSX-T Data Center components that are key to layer 2 bridging are bridge clusters, bridge
endpoints, and bridge nodes:
• A bridge endpoint identifies the physical attributes of the bridge, such as the bridge cluster ID
and the associated VLAN ID. Each segment that is used for bridging a virtual and physical
deployment has an associated VLAN ID.
• The NSX bridge node is a transport node that belongs to a bridge cluster. The segment is
attached to a bridge cluster and is called a bridge-backed segment. To be eligible for bridge
backing, a segment must be in an overlay transport zone, not in a VLAN transport zone.
• The NSX bridge node (on the left) and the NSX transport node (in the middle) are part of the
same overlay transport zone. As such, their N-VDS are attached to the same bridge-backed
segment.
• The bridge-backed segment has a VNI 10500, which is mapped to VLAN 150. The NSX
bridge node has VLAN 150 and VNI 10500 configured.
• The NSX transport node is not a part of the bridge cluster. It is a normal transport node, which
can be a KVM or ESXi host. In the diagram, VM1 residing on this node is attached to the
bridge-backed segment.
• The other node on the right is not a part of the NSX-T Data Center overlay. It might be any
hypervisor with VM2 or it might be a physical network node. If the node that is not part of
NSX-T Data Center is an ESXi host, you can use a standard virtual switch or a vSphere
distributed switch for the port attachment. But the VLAN ID associated with the port
attachment must match the VLAN ID on the bridge-backed segment. Also, communication
occurs over layer 2, so the two end devices must have IP addresses in the same subnet
(172.16.30.0 network).
To use ESXi host transport nodes for bridging, you create a bridge cluster. To use NSX Edge
transport nodes for bridging, you create a bridge profile.
A bridge cluster is a collection of ESXi host transport nodes that can provide layer 2 bridging to a
segment. A bridge cluster can have a maximum of two ESXi host transport nodes as bridge nodes.
With two bridge nodes, a bridge cluster provides high availability in active-standby mode. Even if
you have only one bridge node, you must create a bridge cluster. After creating the bridge cluster,
you can add an additional bridge node.
VMware recommends that bridge nodes do not include hosted VMs.
You can add a transport node to only one bridge cluster. You cannot add the same transport node
to multiple bridge clusters.
You can configure a bridge-backed logical switch to provide layer 2 connectivity between VMs in
an NSX-T Data Center overlay and devices that are outside of NSX-T Data Center.
In a KVM-only environment, layer 2 bridging can only be configured using NSX Edge nodes.
Preemption is an action taken by the preferred node. If the preferred node fails and recovers, it
demotes its peer and becomes the active node. The peer changes its state to standby.
Before configuring a bridge-backed logical switch, you should verify the following components:
• At least one ESXi or KVM host to serve as a regular transport node. This node hosts VMs that
require connectivity with devices outside of a NSX-T Data Center deployment.
• A VM or another end device outside of the NSX-T Data Center deployment. This end device
must be attached to a VLAN port matching the VLAN ID of the bridge-backed logical switch.
• One logical switch in an overlay transport zone to serve as the bridge-backed logical switch.
You can attach the logical switch to a bridge cluster or a bridge profile.
After completing the configuration of the bridge-backed logical switch, you can connect VMs to
the switch, if they are not already connected. The VMs must be on transport nodes in the same
transport zone as the bridge cluster or bridge profile.
Network address translation (NAT) was designed originally to conserve public internet address
space. During the 1990s, Internet providers quickly depleted the available IPv4 address supply.
NAT became the primary method for IPv4 address conservation. NAT performs one-to-one
mapping (one public IP address is mapped to one private IP address) or one-to-many mapping
(one public IP address is mapped to multiple private IP addresses).
You can create different NAT rules:
• Source NAT (SNAT) translates the source IP of the outbound packets to a known public IP
address so that the application can communicate with the outside world without using its
private IP address. SNAT also keeps track of the reply.
• Destination NAT (DNAT) enables access to internal private IP addresses from the outside
world by translating the destination IP address when inbound communication is initiated.
• Reflexive NAT rules are stateless access control lists (ACLs) that must be defined in both
directions. These rules do not keep track of the connection. Reflexive NAT rules are applied
when stateful NAT cannot be used. For example, when a Tier-0 logical router is running in
active-active equal-cost multipath (ECMP) mode, you cannot configure stateful NAT because
asymmetrical paths might cause issues.
Whenever NAT is enabled, a service router (SR) component must be instantiated on an edge
cluster.
To configure NAT, specify the edge cluster where the service should run. You can also configure
the NAT service on a specific edge node pair.
If no specific edge node is identified, the platform performs auto-placement of the services
component on an edge node in the cluster using a weighted round-robin algorithm.
Using the No SNAT rule disables source NAT. This rule applies to traffic in the outbound
direction.
Using the No DNAT rule disables destination NAT. This rule applies to traffic in the inbound
direction.
When a Tier-0 or Tier-1 logical router (LR) is running in active-active mode, you cannot configure
stateful NAT where asymmetrical paths might cause issues. For active-active routers, you can use
reflexive NAT, which is sometimes called stateless NAT.
For reflexive NAT, you can configure a single source address to be translated or a range of
addresses. If you configure a range of source addresses, you must also configure a range of
translated addresses. The size of the two ranges must be the same. The address translation is
deterministic, meaning that the first address in the source address range is translated to the first
address in the translated address range, the second address in the source range is translated to the
second address in the translated range, and so on.
In the diagram, the source VM (172.16.10.11) on the inside network sends a packet to an outside
client (x.x.x.x) on the Internet. The packet is routed to the Tier-0 Gateway hosted on NSX Edge
Node 1, which creates a reflexive NAT entry: source IP 172.16.10.11 and translated IP 80.80.80.1.
When the return traffic arrives (with the destination 80.80.80.1), the same reflexive NAT entry is
used to translate 80.80.80.1 back to 172.16.10.11.
• Action: Specify the action of the NAT rule if a match occurs. The No DNAT action disables
the DNAT rule, in which case the original destination address in the IP packet is not
translated.
• Source IP: Specify a source IP address or an IP address range in CIDR format. If you leave
this field blank, the NAT rule applies to all sources outside of the local subnet.
• Translated IP: The new IP address as the result of network address translation.
• Applied To: Select objects that this NAT rule applies to. The available objects are Tier-0
Gateways, interfaces, labels, service instance endpoints, and virtual endpoints.
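As a sketch, a DNAT rule with these settings created through a REST API call might look like
the following; the Tier-1 ID and rule ID are illustrative, and only a minimal set of fields is
shown:

PATCH /policy/api/v1/infra/tier-1s/T1-LR-01/nat/USER/nat-rules/dnat-web
{
  "action": "DNAT",
  "destination_network": "80.80.80.1",
  "translated_network": "172.16.10.11"
}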
A DHCP server profile specifies an NSX Edge cluster or members of an NSX Edge cluster. A
DHCP server with this profile services DHCP requests from VMs on logical switches that are
connected to the NSX Edge nodes that are specified in the profile.
On the DHCP configuration page of the simplified UI, you can create DHCP servers to handle
DHCP requests and create DHCP relay services to relay DHCP traffic to external DHCP servers.
To edit or create a new Tier-1 Gateway with a DHCP relay server, you click IP Address
Management to set the server type to DHCP Relay Server.
The DHCP relay server forwards the DHCP IP requests to the external DHCP server.
The DHCP relay server can be configured on Tier-1 or Tier-0 Gateways.
The slide shows that the Tier-1 Gateway, T1-LR-1, is attached to the local DHCP server, whereas
the Tier-1 Gateway, T2-LR-2, is configured to relay DHCP requests to the external DHCP server.
The slide shows how to create a DNS forwarder service in NSX-T Data Center from the simplified
UI.
To create a DNS forwarder, you perform the following steps:
• Select the DNS Services tab and enter details for the following fields:
– Name
– Tier0/Tier1 Gateway
– DNS Service IP: This IP address is the listen IP address for DNS requests.
– Default Zone: DNS client requests are forwarded to the default zone unless configured to
relay the request to the conditional zone. The default zone contains the following
parameters.
• Source IP: Enter the IP address that the DNS forwarder uses as the source IP
address to send the DNS query to external DNS servers and receive the DNS reply
from external DNS servers.
When you configure DNS services, you can use the simplified UI to create DNS zones.
You create DNS zones on the DNS Zones tab. You can use these zones when you configure the
DNS forwarder.
Run the get dns-forwarder status command to query the DNS forwarder service
running on NSX Edge.
Run the get dns-forwarder config command to query the DNS forwarder configuration
on the NSX Edge node.
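An illustrative transcript of these checks on the active edge node; the output fields are
abridged and illustrative:

nsx-edge> get dns-forwarder status
Status: UP

nsx-edge> get dns-forwarder config
Listener IP:        10.10.10.10
Default forwarder:  8.8.8.8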
The high availability for the DNS forwarder is active/standby, depending on the NSX Edge
cluster.
In the ADD LOAD BALANCER wizard, you provide the name of the load balancer, specify the
deployment size, and provide the Tier-1 Gateway to attach your load balancer to.
From this wizard, you can also select the Set Virtual Servers link to configure the virtual servers
for the load balancer you just created.
When configuring a layer 4 virtual server, you provide values for the following parameters:
• Name
• Virtual IP address
• Application Profile: This setting is populated by default based on the protocol type specified
when you created the virtual server.
• Persistence profile: Layer 4 virtual servers only support the Source IP option.
When configuring a layer 7 virtual server, you provide values for the following parameters:
• Name
• IP address
• Ports: Port ranges are not supported when configuring a layer 7 virtual server.
• Application Profile: This setting is populated by default based on the protocol type specified
when you create the virtual server.
• Persistence profile: Layer 7 virtual servers support both Source IP and Cookie persistence
options.
• SSL Configuration: You can configure SSL parameters on both the server and client side.
• Load Balancer Rules: Layer 7 virtual servers support the configuration of layer 7 rules.
Additional persistence profiles can be created based on source IP or cookies to suit your
application needs.
When configuring a server pool, you provide values for the following parameters:
• Name
• Active Monitor
The IPSec VPN secures traffic flowing between two networks connected over a public network
through IPSec gateways called endpoints. NSX Edge supports site-to-site IPSec VPN between an
NSX Edge instance and remote IPSec-capable gateways.
The difference between the two headers is that the authentication header does not provide
encryption, whereas the encapsulating security payload header enables the encryption of the
protected payload.
IPSec VPN is not supported in the NSX-T Data Center limited export release.
The table shows the various sizes of NSX Edge nodes and their supported VPN sessions.
• Tier-0 Gateway: From the Tier-0 Gateway drop-down menu, you can select a Tier-0
Gateway to associate with this IPSec VPN service.
• IKE Log Level: This setting is for VPN service logging. The Internet Key Exchange (IKE)
logging level determines the amount of information you want collected for the IPSec VPN
traffic. The default is set to the Info level.
• Admin Status: This setting enables or disables the IPSec VPN service. By default, the value
is set to Enabled, which means the service is enabled on the Tier-0 Gateway.
• Tags: You enter a value for tags if you want to include this service in a tag group.
To configure a DPD profile, you select values for the following options:
• DPD Probe Interval (sec): You provide a value in seconds that defines how often a dead peer
detection probe packet is sent.
• Tags: For cloud-based installations, almost every entity can hold a tag.
• IKE Version: The options are IKE V1, IKE V2, or IKE FLEX. The selection depends on
your business requirements.
• Encryption Algorithm: This setting specifies the level of encryption to secure the
communication.
• SA Lifetime (sec): The lifetime (in seconds) of the security associations (individual
communicating peer identifiers) after which a renewal is required.
• Tags: For cloud-based installations, almost every entity can hold a tag.
You provide values for the following settings to configure the IPSec profile:
• Encryption Algorithm: This setting specifies the level of encryption to secure the
communication.
• PFS Group: This setting specifies the Perfect Forward Secrecy (PFS) group, which adds
protection to the keys used for building secure channels. You can enable or disable this
option.
• Diffie-Hellman: This setting is an additional security algorithm for establishing a secret key-
exchange channel.
• SA Lifetime (sec): The setting specifies the lifetime (in seconds) of the security associations
(individual communicating peer identifiers) after which a renewal is required.
• Tags: For cloud-based installations, almost every entity can hold a tag.
To configure the local endpoints, you provide values for the following options:
• VPN Service: This setting specifies which IPSec VPN service to use with the endpoint.
• Site Certificate: You use this setting with certificate-based authentication to specify which
certificate to use with this endpoint.
• Local ID: This setting specifies the IPsec ID of the local side. The local ID is usually the
same as the local IP address.
• Tags: For cloud-based installations, almost every entity can hold a tag.
To configure the policy-based IPSec session, you specify the following settings:
• Name: You use the name to identify the service when you need to use it.
• VPN Service: This setting is the predefined service to use with this session.
• Local Endpoint: This setting is the previously configured local endpoint to use with this
configuration session.
• Remote IP: The setting specifies the IP address of the remote IPSec-capable gateway for
building the secure connection.
• Authentication Mode: This setting defines whether to use the preshared key (PSK) or
certificate-based connection authentication.
• Local Networks and Remote Networks: These settings define the interesting traffic that
should be encrypted through this VPN session.
• Remote ID: This setting defines the identifier of the remote peer for verifying the authenticity
of the peering.
• Tags: For cloud-based installations, almost every entity can hold a tag.
• Name: You use the name to identify the service for later use.
• VPN Service: This setting is the predefined service to use with this session.
• Local Endpoint: This setting defines an earlier configured local endpoint to use with this
session configuration.
• Remote IP: This setting defines the IP address of the remote IPSec-capable gateway for
building the secure connection.
• Authentication Mode: This setting defines whether to use a preshared key (PSK) or
certificate-based connection authentication.
• Pre-shared Key: This setting provides the string for defining the key if the authentication
mode is PSK.
• Remote ID: This setting specifies the identifier of the remote peer for verifying the
authenticity of the peering.
• Tags: For cloud-based installations, almost every entity can hold a tag.
In previous releases, L2 VPN services were supported only between an NSX-T Data Center server
and a managed NSX-v edge or standalone edge.
NSX-T Data Center 2.4 adds managed edge L2 VPN client support.
The L2 VPN function is not supported in the NSX-T Data Center limited export release.
For outbound L2 VPN traffic (traffic from the internal network behind the edge node) that is
destined for any remote L2 network, the first step is to decapsulate the GENEVE frames. The
destination address of the internal frame designates whether traffic goes through the local bridge
port toward remote sites or is locally handled. Further steps are inserting the ID for the proper
VLAN and sending the traffic to the local VTI interface to encapsulate into GRE, which, in turn,
gets protected by IPSec and forwarded to any given destination.
In the inbound direction, when L2 VPN traffic is identified as such by the IPSec engine, the
traffic is first decrypted by IPSec and then decapsulated from GRE. After being sent to the
bridge interface, traffic is sent to local networks. The required GENEVE encapsulation parameters
are based on the actual tunnel IDs for the traffic.
• Pre-shared Key: This setting is the key or password for this session.
• Tunnel Interface: You select the IP address of the VTI to use with this session.
• Remote ID: This setting designates the IPsec identifier of the remote IPSec gateway.
• Tags: For cloud-based installations, almost every entity can hold a tag.
When you configure the segments, the following settings are key:
• L2 VPN: This setting defines the previously configured L2 VPN session. The segment
defined is used through that session.
• VPN Tunnel ID: This number is used to identify the communicating local and remote L2
networks. The same ID on both sides means that they are on the same L2 broadcast domain.
The information presented on the slide is related to what is required on NSX Data Center for
vSphere to act as a peer for the NSX-T Data Center L2 VPN. For more detailed configuration
steps, see the NSX API Guide at https://docs.vmware.com/en/VMware-NSX-Data-Center-for-
vSphere/6.4/nsx_64_api.pdf.
Peer code is a Base64-encoded configuration string that is available from the L2 VPN server
through the DOWNLOAD CONFIG option or through a REST API call.
Typically, in a traditional data center, certain high-level segmentation security policies are built to
prevent various types of workloads from communicating with other types of workloads. However,
this high-level segmentation does not prevent lateral communication between workloads within a
tier. When threats breach the perimeter, their lateral spread is very hard to stop. Shared services
can traverse boundaries unchecked. Most important, this security model is not aligned to
applications, which need to be protected.
The vast majority of attacks breach perimeter security systems by compromising just one machine.
More often than not, attackers exploit human beings, not technology, to breach the system.
Phishing emails and social engineering techniques are extremely effective in getting legitimate
credentials to a machine. Attackers go after low-priority systems first. After they are inside, they
move laterally through the data center, from machine to machine, to find the information they
want.
Application-centric security policies and control are needed to address these challenges.
The distributed firewall allows you to define and enforce network security policies for every
individual workload in the environment, whether a VM, container, or a bare metal server.
However, to achieve this level of segmentation with physical or virtual firewall appliances is
expensive and operationally unrealistic. The number of rules that you must manage is enough to
make micro-segmentation of this kind impractical.
Conversely, NSX-T Data Center allows you to design and manage all of these policies from a
central location.
NSX-T Data Center micro-segmentation policies are also software-defined, making them agile and
capable of being automated.
A key advantage of the distributed firewall is context. You can use security groups and tags to
orchestrate policy. These security groups can be based on VM attributes such as name or operating
system, traditional network attributes such as IP address or port, and even higher-order application
attributes.
Micro-segmentation enables an organization to logically divide a data center into distinct security
segments down to the individual workload level, and to define distinct security controls for, and
deliver services to, each unique segment.
A central benefit of micro-segmentation is its ability to deny attackers the opportunity to pivot
laterally within the internal network, even after the perimeter is breached. NSX-T Data Center
micro-segmentation prevents the lateral spread of threats across an environment.
NSX-T Data Center supports micro-segmentation because it allows for a centrally controlled,
operationally distributed firewall to be attached directly to workloads within an organization’s
network. The distribution of the firewall for applying security policies that protect individual
workloads is highly efficient.
You can apply rules that are specific to the requirements of each workload. Of additional value is
that these capabilities are not limited to homogeneous vSphere environments. NSX-T Data Center
supports a variety of platforms and infrastructure.
Conventional security models assume that everything on the inside of an organization's network
can be trusted. Zero trust assumes the opposite: trust nothing and verify everything. This
architecture addresses the increased sophistication of network attacks and insider threats that
frequently exploit the conventional perimeter-controlled approach. For each system in an
organization's network, trust of the underlying network is removed. A perimeter is defined per
system within the network to limit the possibility of lateral (east-west) movement of an attacker.
To build the zero-trust security data center, first determine which VMs contain an application and
what network traffic is necessary for the application to function.
When you understand an application’s composition and necessary network traffic, you can create
micro-segmentation policies to restrict superfluous network traffic.
This step immediately reduces the attack surface of the application by restricting what the
application can communicate with to only the resources that it absolutely needs.
But what about legitimate, necessary network traffic? For example, an application needs to
communicate with shared services such as Active Directory (AD), users, and, potentially, other
applications.
How do we account for these communication paths and direct attacks on the VMs?
This step, securing through context, establishes and enforces the intended state and behavior of the
workload VM, including the processes that should be running, how the OS should be configured,
and so on.
The NSX-T Data Center security platform is designed to handle the firewall challenges faced by
IT administrators. One of the platform's main use cases is to address the need for context-aware
micro-segmentation of applications.
The NSX-T Data Center distributed firewall is delivered as part of a distributed platform that
offers ubiquitous enforcement, scalability, line-rate performance, multi-hypervisor support, and
API-driven orchestration. These fundamental pillars of the distributed firewall enable it to address
many different use cases for production deployment.
NSX-T Data Center implements a centralized policy and configuration capability and distributes
the policies to the firewalls.
Rules in a domain must have at least one group in the source or destination that is a member of the
same domain.
The NSX Manager simplified UI enables you to configure several types of policies:
• Gateway policies: Use for configuring gateway firewall rules to control north-south traffic
• Network Introspection policies: Use for configuring north-south and east-west traffic
redirection rules
• Distributed Firewall policies: Use for configuring distributed firewall rules to control east-
west traffic
• Endpoint policies: Use for configuring Guest Introspection services and rules
The categories for distributed firewall rules are available for both distributed and gateway
firewalls:
• Ethernet: All layer 2 policies. Layer 2 firewall rules are always evaluated before layer 3 rules.
• Environment: High-level policy groupings, for example, the production group cannot
communicate with the testing group, or the testing group cannot communicate with the
development group.
• Application: Specific and granular application policy rules such as rules between applications
or application tiers, or rules between micro-services.
In a firewall policy, each firewall rule contains instructions that determine whether a packet should
be allowed or blocked, which protocols it is allowed to use, which ports it is allowed to use, and so
forth. Policies are used for multitenancy, such as creating specific rules for sales and engineering
departments in separate policies.
A policy can be defined as enforcing stateful or stateless rules. Stateless rules are treated as
traditional stateless access-control lists (ACLs). Reflexive ACLs are not supported for stateless
policies. A mix of stateless and stateful rules on a single logical switch port is not recommended
and might cause undefined behavior.
In the example, three distributed firewall policies are created: Web, MySQL, and Drop. Each
policy has one or more firewall rules.
Firewall rules are enforced in the following ways:
• Like firewall policies, firewall rules are processed in the top-to-bottom order.
• Each packet is checked against the top rule in the rule table before moving down the
subsequent rules in the table.
• The first rule in the table that matches the traffic parameters is enforced. No subsequent rules
can be enforced because the search is then terminated for that packet.
Because of this behavior, VMware recommends that you place the most granular policies at the
top of the rule table.
For any traffic attempting to pass through the firewall, the packet information is subjected to the
rules in the order shown in the policy, beginning at the top and proceeding to the default rule at the
bottom. The first rule that matches the packet has its configured action applied, and any further
processing of that packet stops.
• Action: You can select from the firewall rule actions Allow, Drop, and Reject.
In the example, a firewall policy named HR-APP-DFW-Policy is created. In this policy, a rule
named To-Web is configured. This rule permits HTTPS traffic from any source to reach the
destinations specified in Group-1.
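A sketch of the same policy and rule expressed as a REST API call follows; the paths for
Group-1 and the predefined HTTPS service follow the policy object model, and all IDs are
illustrative:

PATCH /policy/api/v1/infra/domains/default/security-policies/HR-APP-DFW-Policy
{
  "display_name": "HR-APP-DFW-Policy",
  "category": "Application",
  "rules": [
    {
      "display_name": "To-Web",
      "source_groups": [ "ANY" ],
      "destination_groups": [ "/infra/domains/default/groups/Group-1" ],
      "services": [ "/infra/services/HTTPS" ],
      "action": "ALLOW"
    }
  ]
}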
The order of firewall rules is important in determining the handling of traffic. You can drag and
drop rules in the simplified UI to change the order.
Both IPv4 and IPv6 addresses are supported for Sources and Destinations options of the firewall
rule.
Before creating a group including AD users, you must add an AD domain to NSX Manager. You
add this domain through the NSX Manager simplified UI by navigating to System > Active
Directory > ADD ACTIVE DIRECTORY.
The main use case for creating a group that includes AD users is to configure identity-based
firewall rules. In NSX-T Data Center 2.4, Identity Firewall is only supported for virtual desktops
and virtual user sessions.
Both predefined services and user-created services can be used in firewall rules to classify traffic.
NSX-T Data Center 2.4 includes additional services to support layer 2 and layer 7 rules.
A context profile defines context-aware attributes, including application ID, domain name, as well
as subattributes such as application version or cipher set.
Context profiles include the following main attributes:
• APP_ID: You can choose from a list of preconfigured applications. You cannot add any
additional applications. Examples include FTP, SSH, and SSL. Certain applications allow
users to specify subattributes. For example, when choosing SSL, administrators can
specify the TLS_VERSION and the TLS_CIPHER_SUITE. For CIFS, you can specify the
SMB_VERSION.
• DOMAIN_NAME: You can choose from a static list of Fully Qualified Domain Names
(FQDNs).
• Logging: You can turn logging off or on. Logs are stored in the
/var/log/dfwpktlogs.log file on ESXi and KVM hosts (see the example after this list).
• Direction: This setting matches the direction of a packet as it moves across the network. A
direction of IN is for traffic ingressing through the firewall. A direction of OUT is for traffic
egressing through the firewall. The option IN_OUT is also available.
• Tag: Tags are a way to group VMs and other objects in a category, such as accounting,
payroll, web servers, and so on. Tags can also be used to identify quarantined VMs. Support
for rule tagging is introduced in NSX-T Data Center 2.4.
• Rule Path: Distributed firewall policies and rules are identified using their absolute path.
Knowing how to retrieve the object identifiers for policies (policy path) and rules (rule path)
from the UI is very important for troubleshooting purposes.
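As referenced in the Logging setting above, you can watch distributed firewall rule hits
directly on an ESXi transport node. The log line below is a sketch; the exact format varies by
release:

[root@esxi-01:~] tail -f /var/log/dfwpktlogs.log
... INET match PASS <rule-id> OUT 52 TCP 172.16.10.11/49152->172.16.20.11/443 S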
To view the default rules (whitelist and blacklist), select the Advanced Networking & Security
tab.
The following data path modules are responsible for distributed firewall rule processing:
• VSIP (VMware Internetworking Service Insertion Platform): The main part of the distributed
firewall kernel module, which receives the firewall rules and programs them on each VM's
vNIC.
• VDPI (VMware Deep Packet Inspection): A deep packet inspection module daemon in the
user space that is responsible for L7 packet inspection. VDPI can identify application IDs and
extract context for a traffic flow.
This slide shows the distributed firewall architecture on a KVM. The same architecture also
applies to bare metal servers.
The following data path modules are responsible for distributed firewall rule processing on a
KVM:
• OVS: Core data path component for L2, L3, and distributed firewall. It provides ingress and
egress filtering for stateless rules.
• Conntrack: Module responsible for tracking established connections for stateful firewall rules.
• VDPI: A deep packet inspection module daemon in the user space that is responsible for L7
packet inspection. VDPI can identify application IDs and extract context for a traffic flow.
The NSX-T Data Center gateway firewall provides essential perimeter firewall protection that can
be used in addition to a physical perimeter firewall. The gateway firewall service is part of the
NSX-T Edge node for both bare metal and VM form factors. The gateway firewall is useful in
developing PCI zones, multi-tenant environments, or DevOps style connectivity without forcing
the inter-tenant or inter-zone traffic onto the physical network. The gateway firewall data path
uses the Data Plane Development Kit (DPDK) framework supported on NSX Edge to provide
better throughput.
The NSX-T Data Center gateway firewall is instantiated per logical router and supported at both
Tier-0 and Tier-1.
The gateway firewall works independent of the distributed firewall from a policy configuration
and enforcement perspective. A user can consume the gateway firewall using either the UI or
REST API framework provided by NSX Manager. In the NSX Manager UI, the gateway firewall can
be configured from the Gateway Firewall page. The gateway firewall configuration is similar to
the Distributed Firewall policy in that it is defined as a set of individual rules within a policy. Like
the distributed firewall, the gateway firewall rules can use logical objects, tagging, and groups to
build policies.
The diagrams show that the Tier-0 Gateway firewall is used as a perimeter firewall between the
physical and virtual domains. This gateway is mainly used for north-south traffic from the
virtualized environment to the physical world. In this case, the Tier-0 service router component
residing on the NSX Edge node enforces the firewall policy before traffic leaves or enters the
NSX-T Data Center virtual environment. East-west traffic continues to use the distributed routing
and firewall filtering capability that NSX-T Data Center natively provides in the hypervisor.
• Emergency: Used for quarantine and can also be used for Allow rules
• System Rules: Automatically generated by NSX-T Data Center and specific to internal
control plane traffic, such as BFD rules, VPN rules, and so on
To create a Gateway Firewall policy, you assign a policy name and specify the domain.
You can configure the following settings when creating a new Gateway Firewall policy:
• TCP Strict: This setting strengthens the security of the gateway firewall by dropping packets
that are not preceded by a complete three-way TCP handshake.
• Stateful: When this option is enabled, the gateway firewall performs stateful packet
inspection and tracks the state of network connections. Packets matching a known active
connection are allowed by the firewall, and packets that do not match are inspected against
the gateway firewall rules.
• Locked: This setting allows you to lock a policy while making configuration changes so that
others cannot make modifications at the same time. To lock or unlock a policy, you must
provide a comment.
Each Gateway Firewall policy defined in the NSX Manager simplified UI has its own policy in the
logical router.
In the example, a firewall policy named Block Traffic Policy was created for the default domain.
Within this policy, a firewall rule was created to block SSH traffic from any source going to
Group-1, Group-2, and the test group. This rule is applied to the entire Tier-0 Gateway.
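Expressed as a REST API call, such a gateway rule might look like this sketch; the scope field
pins the rule to the Tier-0 Gateway, and all IDs are illustrative:

PATCH /policy/api/v1/infra/domains/default/gateway-policies/Block-Traffic-Policy
{
  "rules": [
    {
      "display_name": "Block-SSH",
      "source_groups": [ "ANY" ],
      "destination_groups": [ "/infra/domains/default/groups/Group-1" ],
      "services": [ "/infra/services/SSH" ],
      "scope": [ "/infra/tier-0s/T0-LR-01" ],
      "action": "DROP"
    }
  ]
}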
Service insertion for Network Introspection can be applied at Tier-0 and Tier-1 Gateways to check
north-south and east-west traffic. Partner services typically provide advanced security features
such as IDS, IPS, L4-L7 firewall, URL filtering, and so on.
The L2 north-south insertion mode (also known as the Bump in the Wire mode) of the partner
service is supported. The L3 north-south service insertion is being developed.
Service Manager is a third-party entity that mediates the communication between the NSX-T Data
Center service insertion platform and partner’s service virtual machines (SVMs).
An SVM runs the OVA or OVF specified by a service and is connected over the service plane to
receive redirected traffic.
By default, traffic on the router’s uplink (where the partner’s service is inserted) is not redirected.
You can add granular redirection rules to send interesting traffic to the partner’s security service.
Traffic steering uses policy-based routing (PBR).
A partner registers the service with NSX-T Data Center by making an API call or using the partner
management console (CLI). Partners can automate service registration from their management
console.
In this registration process, several parameters must be specified, such as the location (URL) of
the OVF, to which router (Tier-1 or Tier-0) the partner service is attached, the operational mode
(L2 bridged mode), and so on.
Users need to do separate registrations for each service type per attachment point (Tier-0 and Tier-
1).
After a service is registered and appears in the catalog, you deploy an instance of the service so
that it can start processing network traffic. Each partner releases its own partner service OVF for
NSX-T Data Center integration.
You select a logical router (Tier-0 or Tier-1) and a host where the partner service is deployed. The
instance must be deployed on an ESXi transport node because logical switching is needed to
bridge traffic to the interface where the partner service is attached.
The partner service instance is typically deployed on the same host as the edge node to avoid
hairpinning situations.
Each SVM can only be applied to one logical router. NSX Manager creates and attaches segments
to the gateway and the partner’s SVM.
The deployment process might take some time, depending on the vendor's implementation.
Deployment and operational status is monitored. You can view the status in the NSX Manager
simplified UI. The status should appear as Deployment Successful.
After you deploy a service instance, you can configure the type of traffic that the gateway redirects
to the partner service. Configuring traffic redirection is similar to configuring a firewall. You can
define detailed redirection rules at the distributed firewall, which define which traffic should be
inspected by the partner services. By default, the No-Redirect All rule is applied. You can create
selective redirection rules using groups.
Redirection rules are always stateless. Reflexive redirection rules are automatically created and
sent to the control plane so that the return traffic is also sent to the partner service.
For SVMs deployed on compute hosts, an SVM does not need to be installed on every host.
However, some customers prefer to deploy the partner SVM on each host to minimize traffic
hairpinning.
When the partner SVM is deployed in a service cluster, traffic is sent from the compute hosts
across the overlay to the hosts in the service cluster.
For north-south service insertion, the insertion points are at the uplinks of Tier-0 or Tier-1
Gateways. With east-west service insertion, the insertion points are at each guest VM’s vNIC. In
other words, traffic is intercepted at the vNIC of each guest VM.
With east-west service insertion, the security groups configured in the NSX Manager simplified
UI can be shared with the management consoles of the partners.
Partner appliances of different sizes can be integrated with NSX-T Data Center.
The east-west service insertion for the Network Introspection configuration is very similar to the
steps that you used in the north-south configuration and includes the following steps:
1. Service Registration
2. Service Deployment
3. Service Consumption
Service profile: A specific instantiation of a vendor template. For example, if a vendor template
defines an IPSec tunneling operation, a service profile specifies details such as IPSec tunnel
endpoints, algorithms, and so on. Service profiles can be created by NSX-T Data Center
administrators or third-party vendors.
Service chain: A sequence of service profiles defined by the network administrator. A service
chain defines the logical sequence of operations to be applied to network traffic, for example, firewall,
then monitor, and then IPSec VPN. Service chains can specify different sequences of service
profiles for different directions of traffic (egress or ingress).
All traffic matching a redirection rule is redirected along the specified service chain.
As part of host preparation, NSX-T Data Center distributes Endpoint Protection modules to all
hosts in a cluster.
The integrated service is deployed as a virtual appliance running partner services, also known as a
service virtual machine (SVM).
The administrator defines Endpoint Protection policies for the VMs.
The integrated service uses the Endpoint Protection API library (formerly known as EPSec API
library) to introspect and protect guest VMs from malware.
Because Endpoint Protection enables SVMs to read and write specific files on guest VMs, it
provides an efficient way to optimize memory use and avoid resource bottlenecks.
The example shows two security groups (Standard, Quarantine Zone), two security policies
(Standard, Quarantined), and a defined security tag (ANTI_VIRUS.VirusFound).
1. The antivirus SVM monitors activities on the guest VMs in the Standard group. If malicious
activities are detected, the security tag ANTI_VIRUS.VirusFound is set for that VM.
2. A security tag, ANTI_VIRUS.VirusFound, places the VMs that are virus-infected into the
Quarantine Zone security group, where the Quarantined VM security policy is enforced. In
this example, the Quarantined VM security policy blocks all inbound and outbound traffic
with the exception of the necessary security tools.
3. After the virus is removed and the VMs are scanned, the ANTI_VIRUS.VirusFound tag is
removed. The VMs are removed from the Quarantine Zone group, eliminating the
Quarantined VM security policy. Traffic flow to and from the VMs resumes. This process
prevents the spread of the virus to other VMs. This entire process is automated and does not
require manual intervention.
Service profiles are a way for administrators to choose protection levels for a VM by selecting the
templates provided by the vendor. For example, a vendor can provide silver, gold, and platinum
policy levels.
Each profile created might serve a different type of workload. A gold service profile provides
complete antimalware protection to a PCI-type workload, whereas a silver service profile provides
only basic antimalware protection to a regular workload.
You can verify the version compatibility between NSX-T Data Center and VMware Identity
Manager by using the VMware Product Interoperability Matrix at
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&175=&140=.
NSX-T Data Center 2.4 requires at least version 3.2 of VMware Identity Manager.
The following steps must be completed before integrating VMware Identity Manager with NSX-T:
1. From the vSphere Web Client, deploy the VMware Identity Manager appliance from an OVF
template.
2. The VMware Identity Manager virtual machine must be configured to synchronize its time
with the ESXi host where it is running. To configure synchronization, right-click the VM and
select Edit Settings > VM Options. Scroll down to the Time section and select the check box
for Synchronize guest time with host.
3. After deploying the VMware Identity Manager appliance, you use the Setup wizard available
at https://<VMware_Identity_Manager_FQDN> to set passwords for the admin, root, and
remote SSH user and to select a database. You can use the external Microsoft SQL or Oracle
database, or the internal PostgreSQL database.
Active Directory (AD) over LDAP or AD with integrated Windows authentication, LDAP, and
local directory are all supported identity sources in VMware Identity Manager. To configure
identity sources from the VMware Identity Manager administration console, you take the
following steps:
1. Click the Identity & Access Management tab.
2. On the Directories page, click Add Directory and select the type of directory for integration.
3. Select Domains.
4. Map user attributes.
5. Select Groups and Users to Sync.
6. Click Sync Directory to start the directory synchronization.
VMware Identity Manager uses the OAuth 2.0 authorization framework to enable third-party
applications, such as NSX-T Data Center, and their users to access specific data and services. In
the process, VMware Identity Manager protects the third party's account credentials.
Before enabling integration between VMware Identity Manager and NSX-T Data Center, you
must register NSX-T Data Center as a trusted OAuth client in VMware Identity Manager.
When configuring NSX-T Data Center details, you select Service Client Token from the Access
Type drop-down menu. This selection indicates that the application, NSX-T Data Center in this
example, accesses the APIs on its own behalf, not on behalf of a particular user.
You must specify a Client ID to uniquely identify NSX. You should record this value because you
need it to enable VMware Identity Manager integration.
You must also click Generate Shared Secret and record the generated value, which you need to
enable VMware Identity Manager integration.
Leave the default settings for all other options.
You should record the SHA-256 certificate thumbprint because you need this value when you
enable VMware Identity Manager integration.
The values entered for OAuth Client ID and OAuth Client Secret are the values you recorded
when creating a new OAuth client for NSX-T Data Center in VMware Identity Manager.
The value entered for SSL Thumbprint is the value you recorded from the VMware Identity
Manager appliance command line.
The value entered for NSX Appliance must be used to access NSX Manager after the integration.
If you enter the FQDN of NSX Manager but then try to access the appliance through its IP
address, the authentication fails.
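The same integration can also be scripted against the NSX-T node API. The sketch below assumes the /api/v1/node/aaa/providers/vidm endpoint and its field names as commonly documented; treat both as assumptions and confirm them in the API guide before use. The host names match this course's lab environment, and the angle-bracket values are the ones you recorded earlier:

curl -k -u admin -X PUT https://172.20.10.48/api/v1/node/aaa/providers/vidm \
  -H "Content-Type: application/json" \
  -d '{"vidm_enable": true, "lb_enable": false, "host_name": "sa-nsxvidm-01.vclass.local", "client_id": "<OAuth-client-ID>", "client_secret": "<shared-secret>", "thumbprint": "<SHA-256-thumbprint>", "node_host_name": "172.20.10.48"}'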
If the integration is successful, the VMware Identity Manager Connection appears as Up, and the
VMware Identity Manager Integration as Enabled.
The default login page also appears if integration with VMware Identity Manager is configured,
but VMware Identity Manager is down or not reachable at the time of the login.
Each NSX node has two users, admin and audit, which can be used for local authentication. You
cannot delete or add local users.
A system user called nsx_policy is used by the policy role to realize configuration changes in the
NSX-T Data Center environment.
Additional built-in roles available for VMware Cloud deployments are Cloud Service
Administrator and Cloud Service Auditor. The Cloud Service Administrator role is designed for
public cloud administrators and container administrators to configure services on NSX Manager.
For more information about the permissions for each role in different operations, see "Role-Based
Access Control" at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.4/administration/GUID-26C44DE8-1854-4B06-B6DA-A2FD426CDF44.html.
10-2 Importance
10-3 Module Lessons
10-4 Troubleshooting Overview and Log Collection
10-5 Learner Objectives
10-6 About the Troubleshooting Process
10-7 Differentiating Between Symptoms and Causes
For more information about how to resolve issues in your environment, see the NSX-T Data
Center Troubleshooting Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.3/nsxt_23_troubleshoot.pdf.
10-8 Local Logging on NSX-T Data Center Components
10-9 Viewing NSX Policy Manager Logs
10-10 Viewing the NSX Manager Syslog
10-11 Viewing the NSX Controller Log
10-12 Viewing the ESXi Host Log
10-13 Viewing the KVM Host Log
10-14 Syslog Overview
10-15 Configuring Syslog Exporters (1)
NSX-T Data Center component logging is RFC 5424-compliant, except for logging on ESXi
hosts.
RFC 5424 defines a specific format for log messages. Any number of transport protocols can be
used for transmission of Syslog messages.
RFC 5424 also provides a message format that enables vendor-specific extensions:
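HEADER STRUCTURED-DATA MSG, where the header carries the priority, version, timestamp, host name, application name, process ID, and message ID, and the structured-data element carries the vendor extensions. A hypothetical NSX-style line illustrating this layout (not captured from a live system) might look like the following:

<182>1 2019-05-10T14:23:01.123Z sa-nsxmgr-01 NSX 4859 FABRIC [nsx@6876 comp="nsx-manager" level="INFO"] Transport node status is UP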
On management or edge nodes, to configure a Syslog server, you enter the following command:
set logging-server <hostname-or-ip[:port]> proto <proto> level <level> [facility <facility>] [messageid <messageid>] [certificate <filename>] [structured-data <structured-data>]
You can filter log entries by severity, facility, and message ID. The message ID field identifies the
type of message. Message IDs can be used to specify which messages are transferred by the set
logging-server command.
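For example, to make an NSX Manager node send info-level messages over TCP to the lab Syslog server (a minimal sketch built from the syntax above; the server name matches this course's lab environment):

set logging-server student-a-01.vclass.local proto tcp level info

You can list the configured exporters afterward with the get logging-servers command.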
10-16 Configuring Syslog Exporters (2)
For example, you can configure an ESXi host as a Syslog exporter to export log messages to the
remote Syslog server (172.20.10.94):
[root@sa-esxi-01:~] esxcli network firewall ruleset set -r syslog -e true
[root@sa-esxi-01:~] esxcli system syslog config set --loghost=172.20.10.94
[root@sa-esxi-01:~] esxcli system syslog reload
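To confirm that the change took effect, you can display the current Syslog configuration (a standard esxcli option, not shown in the lab):

[root@sa-esxi-01:~] esxcli system syslog config get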
10-17 Configuring and Displaying Syslog
In this example, the NSX Manager node sa-nsxmgr-01 is configured as a Syslog exporter. NSX
Manager sends the info-level log messages to the Syslog server student-a-01.vclass.local through
TCP.
The Syslog server application Kiwi Syslog Service Manager running on student-a-01 receives the
info-level messages from host 172.20.10.41 (sa-nsxmgr-01) as configured.
10-18 Generating Technical Support Bundles
10-19 Monitoring the Support Bundle Status
• If you download the bundles to your machine, you get a single archive file consisting of a
manifest file and support bundles for each node.
• If you upload the bundles to a file server, the manifest file and the individual bundles are
uploaded to the file server separately.
10-20 Downloading Support Bundles
10-21 Labs
10-22 Lab: Configuring Syslog
10-23 Lab: Generating Technical Support Bundles
10-24 Review of Learner Objectives
10-25 Monitoring and Troubleshooting Tools
10-26 Learner Objectives
10-27 Monitoring Components from the NSX Manager Simplified UI
10-28 Monitoring Component Status
NSX Manager dashboards provide a predefined status panel of the NSX-T Data Center
environment. A Custom tab can be modified by using the API to display content that you define.
10-29 Port Mirroring Overview
Port mirroring is used on a switch to send a copy of packets seen on one switch port (or an entire
VLAN) to a monitoring connection on another switch port. Port mirroring is used to analyze and
debug data or diagnose errors on a network.
10-30 Port Mirroring Method: Remote L3 SPAN
10-31 Port Mirroring Method: Logical SPAN
• You can mirror source ports to a destination port on the same logical overlay switch but on
different transport nodes.
• This method uses the overlay for tunneling traffic to its destination if needed.
10-32 Configuring Logical SPAN
10-33 Viewing the Logical SPAN Configuration and Mirrored Packets
10-34 IPFIX Overview
Internet Protocol Flow Information Export (IPFIX) is a standard for the format and export of
network flow information.
10-35 Configuring IPFIX to Export Traffic Flows
When you enable IPFIX, all configured host transport nodes send IPFIX messages to the IPFIX
collectors using port 4739. In the case of the ESXi host, NSX-T Data Center automatically opens
port 4739.
For a KVM, if the firewall is not enabled, port 4739 is open. If the firewall is enabled, you must
ensure that the port is open because NSX-T Data Center does not automatically open the port.
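For example, on a KVM host that manages its firewall with iptables, a rule similar to the following would admit collector traffic (a sketch only; adjust the protocol and tooling to your distribution, and note that IPFIX exports typically use UDP):

iptables -A INPUT -p udp --dport 4739 -j ACCEPT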
IPFIX on ESXi and KVM hosts sample tunnel packets in different ways. For details, see the NSX-
T Data Center Administration Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/2.4/nsxt_24_admin.pdf.
10-36 Configuring an IPFIX Firewall Profile
Configuring the IPFIX firewall profile is similar to configuring the IPFIX switch profile.
You configure the following settings for the IPFIX firewall profile:
• Collector Configuration: Select a collector that you configured. The collector is the device
to which the IPFIX flows are sent.
• Active Flow Export Timeout (sec): Enter the length of time after which a flow times out,
even if more packets associated with the flow are received.
• Priority: Enter a priority value. This parameter is used to resolve conflicts when multiple
profiles apply. The IPFIX exporter uses the profile with the highest priority only. A lower
value means a higher priority.
• Observation Domain Id: Enter the observation domain that the network flows originate
from. The default is 0 and indicates no specific observation domain.
10-37 Configuring an IPFIX Switch Profile
You configure the following settings for the IPFIX switch profile:
• Active Timeout (sec): Enter the length of time after which a flow times out, even if more
packets associated with the flow are received. The default is 300.
• Idle Timeout (sec): Enter the length of time after which a flow times out, if no more packets
associated with the flow are received. This option is for ESXi only. KVM times out all flows
based on active timeout. The default is 300.
• Max Flows: Enter the maximum flows to be cached on a bridge. This option is for KVM
only. It is not configurable on ESXi hosts. The default is 16,384.
• Packet Sample Probability (%): The percentage of packets that are sampled
(approximately). Increasing this setting might have a performance impact on the hypervisors
and collectors. If all hypervisors send more IPFIX packets to the collector, the collector might
not be able to collect all packets. Setting the probability at the default value of 0.1% keeps the
performance impact low.
• Observation Domain Id: Enter the observation domain that the network flows originate
from. Enter 0 to indicate no specific observation domain.
• Collector Configuration: Select a switch IPFIX collector that you configured in the previous
step.
• Priority: Enter a priority value. This parameter is used to resolve conflicts when multiple
profiles apply. The IPFIX exporter uses the profile with the highest priority only. A lower
value means a higher priority.
• Applied To: Apply the IPFIX switch profile to one or more objects.
10-38 Configuring IPFIX Collectors
You must install one or more IPFIX collectors. The installed collectors must have full network
connectivity to other devices in the NSX-T Data Center.
You should also verify that any relevant firewalls, including the ESXi firewall, allow traffic on the
IPFIX collector ports.
10-39 Traceflow Overview (1)
Traceflow enables users to test layer 2 and layer 3 connectivity between two objects, typically two
VM ports, by tracing all actions that take place in the data path during network communication.
These actions include switching, routing, firewall, NAT, and so on.
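Traceflow can also be started through the NSX-T REST API. The sketch below assumes the /api/v1/traceflows endpoint with a FieldsPacketData payload and a known source logical port UUID; the exact fields vary by version, so verify them against the API guide:

curl -k -u admin -X POST https://sa-nsxmgr-01.vclass.local/api/v1/traceflows \
  -H "Content-Type: application/json" \
  -d '{"lport_id": "<source-logical-port-uuid>", "packet": {"resource_type": "FieldsPacketData", "transport_type": "UNICAST"}}'

curl -k -u admin https://sa-nsxmgr-01.vclass.local/api/v1/traceflows/<traceflow-id>/observations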
10-40 Traceflow Overview (2)
10-41 Traceflow Configuration Settings
You inject a packet and observe where that packet is seen as it passes through the physical and
logical networks.
The trace packet travels the logical switch overlay, but it is not visible to interfaces attached to the
logical switch. In other words, no packet is actually delivered to the test packet’s intended
recipients.
10-42 Traceflow Operations
Information about the connections, components, and layers is displayed in the UI.
If you select unicast and logical switch as a destination, the output includes a graphical map of the
topology.
A table lists information under the following categories:
• Transport Node
• Component
You can filter the displayed observations with the options ALL, DELIVERED, and DROPPED.
If dropped observations occur, the DROPPED filter is applied by default. Otherwise, the ALL
filter is applied. The graphical map shows the back plane and router links.
Unicast traceflow traffic observations are layered similar to the port connection tool. For multicast
and broadcast traceflows, observations are reported in tabular format.
10-43 Using Traceflow for Troubleshooting
10-44 About the Port Connection Tool
NSX-T Data Center includes a port connection tool to help you visualize the connectivity status
between two virtual machine interfaces. This tool takes both port IDs as an input and provides
information about the data path connectivity, including GENEVE tunneling.
10-45 Viewing the Graphical Output of the Port Connection Tool
10-46 Packet Capture
10-47 Lab: Using Traceflow to Inspect the Path of a Packet
10-48 Review of Learner Objectives
10-49 Troubleshooting Basic NSX-T Data Center Problems
10-50 Learner Objectives
10-51 Common NSX Manager Installation Problems
10-52 Using Logs to Troubleshoot NSX Manager Installation Problems
10-53 Using CLI Commands to Troubleshoot NSX Manager Installation Problems
10-54 Viewing the NSX Manager Node Configuration
10-55 Verifying Services and States Running on NSX Manager Nodes
10-56 Verifying NSX Management Cluster Status
You can verify the status of the NSX Management Cluster by running the following commands from the NSX Manager CLI:
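get cluster status
get cluster config

The get cluster status command reports the state of each cluster role (Policy, Manager, and Controller) on the node, and get cluster config lists the cluster members. Both are standard NSX Manager CLI commands, as shown earlier in the course.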
10-57 Verifying Communication from Hosts to the NSX Management Cluster
10-58 Troubleshooting Logical Switching Problems
10-59 Verifying the N-VDS Configuration
10-60 Verifying Overlay Tunnel Reachability (1)
10-61 Verifying Overlay Tunnel Reachability (2)
10-62 Troubleshooting Logical Routing Problems
10-63 Retrieving Gateway Information
10-64 Viewing the Routing Table
At the service router command prompt, you can also run the get interfaces command to
retrieve the Tier-0 or Tier-1 router’s interface information.
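A typical sequence on the edge node CLI looks like the following (a minimal sketch; the VRF number comes from the get logical-routers output and differs per deployment, as the labs note):

get logical-routers
vrf 6
get route
get interfaces

The get logical-routers command lists the VRF ID for each service router; after entering the VRF context, get route and get interfaces display the routing table and the interface details.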
10-65 Viewing the Forwarding Table of the Tier-1 Gateway
10-66 Verifying BGP Neighbor Status
10-67 Viewing the BGP Route Table
10-68 Troubleshooting Firewall Problems
You can verify the firewall configuration from the NSX Manager simplified UI:
10-69 Verifying Firewall Configuration and Status (1)
10-70 Verifying Firewall Configuration and Status (2)
10-71 Verifying the Firewall Configuration from the KVM Host
You can verify the firewall rules from the KVM host by running the following commands:
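As a hypothetical illustration (the command path and arguments here are assumptions; confirm them against the NSX-T Data Center Troubleshooting Guide for your version), the OVS-based data path on KVM can expose the rules programmed on a given VIF:

ovs-appctl -t /var/run/openvswitch/nsxa-ctl dfw/vif <vif-uuid>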
10-72 Verifying the Firewall Configuration from the ESXi Host
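On ESXi, the checks use the dvfilter framework. A minimal sketch (the filter name is illustrative; take the real name from the summarize-dvfilter output):

[root@sa-esxi-01:~] summarize-dvfilter
[root@sa-esxi-01:~] vsipioctl getrules -f <filter-name>

summarize-dvfilter lists the filters attached to each vNIC, and vsipioctl getrules displays the firewall rules programmed on a specific filter.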
10-73 Verifying the Firewall Configuration from the NSX Edge Node
10-74 Review of Learner Objectives
10-75 Key Points
Lab Manual
VMware NSX-T Data Center
CONTENTS
Lab 8 Configuring the Tier-0 Gateway ............................................ 65
Task 1: Prepare for the Lab ........................................................................................ 66
Task 2: Create Uplink Segments ................................................................................ 67
Task 3: Create a Tier-0 Gateway ............................................................................... 68
Task 4: Connect the Tier-0 and Tier-1 Gateways ...................................................... 72
Task 5: Test the End-to-End Connectivity .................................................................. 74
Lab 9 Verifying Equal-Cost Multipathing Configurations ............. 75
Task 1: Prepare for the Lab ........................................................................................ 75
Task 2: Verify the BGP Configuration......................................................................... 76
Task 3: Verify That Equal-Cost Multipathing Is Enabled ............................................ 78
Task 4: Verify the Result of the ECMP Configuration ................................................ 78
Lab 10 Configuring Network Address Translation ........................ 83
Task 1: Prepare for the Lab ........................................................................................ 84
Task 2: Create a Tier-1 Gateway for Network Address Translation........................... 85
Task 3: Create a Segment .......................................................................................... 86
Task 4: Attach a VM to the NAT-LS Segment ............................................................ 87
Task 5: Configure NAT ............................................................................................... 88
Task 6: Configure Route Advertisement and Route Redistribution ............................ 90
Task 7: Verify the IP Connectivity............................................................................... 93
Lab 11 Configuring the DHCP Server on the NSX Edge Node ...... 95
Task 1: Prepare for the Lab ........................................................................................ 96
Task 2: Configure a DHCP Server ............................................................................. 97
Task 3: Verify the DHCP Server Operation .............................................................. 100
Task 4: Prepare for the Next Lab ............................................................................. 105
Lab 12 Configuring Load Balancing ............................................. 107
Task 1: Prepare for the Lab ...................................................................................... 108
Task 2: Test the Connectivity to Web Servers ......................................................... 109
Task 3: Create a Tier-1 Gateway Named T1-LR-LB and Connect it to T0-LR-01 ... 110
Task 4: Create a Load Balancer ............................................................................... 111
Task 5: Configure Route Advertisement and Route Redistribution for the Virtual IP ........ 115
Task 6: Use the CLI to Verify the Load Balancer Configuration ............................... 120
Task 7: Verify the Operation of the Backup Server .................................................. 122
Task 8: Prepare for the Next Lab ............................................................................. 123
Lab 13 Deploying Virtual Private Networks .................................. 125
Task 1: Prepare for the Lab ...................................................................................... 126
Task 2: Deploy Two New NSX Edge Nodes to Support the VPN Deployment ........ 127
Task 3: Enable SSH on the Edge Nodes ................................................................. 131
Task 4: Configure a New Edge Cluster .................................................................... 132
Task 5: Deploy and Configure a New Tier-0 Gateway and Segments for VPN Support ..... 133
Task 6: Create an IPSec VPN Service ..................................................................... 136
Task 7: Create an L2 VPN Server and Session ....................................................... 137
Task 8: Deploy the L2 VPN Client ............................................................................ 139
Task 9: Verify the Operation of the VPN Setup ........................................................ 142
Lab 14 Configuring the NSX Distributed Firewall ........................ 147
Task 1: Prepare for the Lab ...................................................................................... 148
Task 2: Test the IP Connectivity ............................................................................... 149
Task 3: Create IP Set Objects .................................................................................. 151
Task 4: Create Firewall Rules .................................................................................. 154
Task 5: Create an Intratier Firewall Rule to Allow SSH Traffic ................................. 157
Task 6: Create an Intratier Firewall Rule to Allow MySQL Traffic ............................ 158
Task 7: Prepare for the Next Lab ............................................................................. 160
Lab 15 Configuring the NSX Gateway Firewall ............................ 163
Task 1: Prepare for the Lab ...................................................................................... 164
Task 2: Test SSH Connectivity ................................................................................. 165
Task 3: Configure a Gateway Firewall Rule to Block External SSH Requests ........ 166
Task 4: Test the Effect of the Configured Gateway Firewall Rule ............................ 169
Task 5: Prepare for the Next Lab ............................................................................. 170
Lab 16 Managing Users and Roles with VMware Identity Manager ... 171
Task 1: Prepare for the Lab ...................................................................................... 172
Task 2: Add an Active Directory Domain to VMware Identity Manager ................... 173
Task 3: Create the OAuth Client for NSX Manager in VMware Identity Manager ... 180
Task 4: Gather the VMware Identity Manager Appliance Fingerprint ...................... 182
Task 5: Enable VMware Identity Manager Integration with NSX Manager .............. 184
Task 6: Assign NSX Roles to Domain Users and Test Permissions ........................ 185
Task 7: Prepare for the Next Lab ............................................................................. 187
Lab 17 Configuring Syslog ............................................................ 191
Task 1: Prepare for the Lab ...................................................................................... 192
Task 2: Configure Syslog on NSX Manager and Review the Collected Logs .......... 193
Task 3: Configure Syslog on an NSX Edge Node and Review the Collected Logs . 194
Lab 18 Generating Technical Support Bundles ........................... 195
Task 1: Prepare for the Lab ...................................................................................... 196
Task 2: Generate a Technical Support Bundle for NSX Manager ........................... 197
Task 3: Download the Technical Support Bundle .................................................... 199
Lab 19 Using Traceflow to Inspect the Path of a Packet ............. 201
Task 1: Prepare for the Lab ...................................................................................... 202
Task 2: Configure a Traceflow Session .................................................................... 203
Task 3: Examine the Traceflow Output .................................................................... 204
Lab 1 Labs Introduction
The lab environment in which you work is illustrated by the Lab Environment Topology Map.
Note the following important items when you work with the NSX-T 2.4 ICM labs; they affect lab
performance:
• In these labs, you enter the environment by using MSTSC (Remote Desktop Protocol -
RDP) initially to the student desktop. The student desktop resides on the Management
Network (SA-Management) and you can start deploying the various NSX-T fabric items
from here.
• You find a vCenter Server and NSX Manager predeployed with two clusters populated
with various virtual machines.
• At various points within the labs you are directed to copy and paste information for later
use.
When you initially access the student desktop, right-click the Start button, select Run, enter
notepad, and note the following useful items:
Lab Environment Topology Map
Refer to this topology map periodically; you will find it useful throughout the labs.
For this lab environment, you use a single-node NSX cluster. In a production environment, a
three-node cluster must be deployed to provide redundancy and high availability.
Task 1: Access Your Lab Environment
You use Remote Desktop Connection to connect to your lab environment.
1. Use the information provided by your instructor to log in to your lab environment.
2. If the message Kiwi Syslog free version supports up to 5 message
sources. Please define them under Inputs in Setup. appears, click OK to
close the Kiwi Syslog Service Manager window.
The Kiwi Syslog application is a free Syslog collector preinstalled as a service on your
student desktop to be used in a future lab.
NOTE
On first opening Chrome, you might see a message indicating that the VMware
Enhanced Authentication Plugin has updated its SSL certificate. Click OK to
close the message.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
A login prompt appears with Welcome to NSX-T and a Get Started option for a guided workflow
experience.
Information for only one NSX Manager node appears because in this lab you are using a
single-node cluster.
set cli-timeout 0
get cluster status
The get cluster status command returns the status for each of the roles in the NSX cluster,
including Policy, Manager, and Controller. You can see that the cluster for each of these
components is STABLE. Note that in the lab you use a single-node NSX cluster.
If you see the Your connection is not private message, click ADVANCED and click the
Proceed to 172.20.10.48 (unsafe) link.
4. Click ADD.
6. Wait until the Registration Status shows Registered and the Connection Status shows Up.
Click Refresh at the bottom of the display to update the contents.
IMPORTANT
Do not refresh, navigate away from, or minimize the browser tab hosting the
simulation. These actions might pause the simulation and the simulation might not
progress.
Task 1: Prepare for the Lab
c. Click ADD.
You might need to click REFRESH at the bottom of the screen to refresh the page.
NOTE
When you next look at the vCenter Inventory, ESXi hosts sa-esxi-04.vclass.local
and sa-esxi-05.vclass.local show a red alarm for their loss of network redundancy.
Click Reset to Green to resolve the host alarm.
d. Click Next.
e. When the Thumbprint is missing message appears, click ADD. When the Add
Transport Node dialog returns, click Next.
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager Simplified UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
2. Click SAVE.
a. When the message asking whether you want to continue segment configuration appears, click NO.
b. Copy and paste the UUID to a notepad so it can be used in a future step.
In this example, the UUID associated with T1-DB-01 is 57601300-2e82-48c4-8c27-
1e961ac70e81.
c. On the NSX Simplified UI Home page, click Networking > Segments, click the
vertical ellipsis icon next to DB-LS, and select Edit.
d. Click Ports, then click Set, and then click ADD SEGMENT PORT.
The Set Segment Ports window appears.
NOTE
You can press Ctrl+Alt to escape from the console window.
The VNIs and UUIDs in your lab environment might be different from the screenshot.
5. Retrieve the Tunnel Endpoint (TEP) information for the Web-LS Segment.
get logical-switch Web-LS_VNI_number vtep
The above sample output shows the TEPs connected to the VNI 73728 (Web-LS) control
plane.
If your Address Resolution Protocol (ARP) table is empty, initiate ping between the Web-
Tier VMs.
8. Retrieve information about the established host connections on Web-LS.
get logical-switch Web-LS_UUID ports
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager Simplified UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
4. Click NEXT.
6. Click NEXT.
8. Click NEXT.
NOTE
The Edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.
NOTE
Wait until the Configuration status displays Success and the Status is Up. You
might need to click REFRESH occasionally.
13. On the NSX Simplified UI Home page, click System > Fabric > Nodes > Edge
Transport Nodes.
Provide the configuration details to deploy the second edge node.
a. On the Name and Description window, enter the following details.
• Name: Enter sa-nsxedge-02.
• Host name/FQDN: Enter sa-nsxedge-02.vclass.local.
• Form Factor: Medium (default).
b. On the Credentials window, enter the following details.
• Enter VMware1!VMware1! as the CLI password and the system root password.
c. On the Configure Deployment window, enter the following details.
• Compute Manager: Select sa-vcsa-01.vclass.local (begin by typing sa and the
full name should appear).
• Cluster: Select SA-Management-Edge from the drop-down menu.
• Resource Pool: Leave empty.
• Host: Leave empty.
• Datastore: Select SA-Shared-02-Remote from the drop-down menu.
d. On the Configure Ports window, enter the following details.
• IP Assignment: Click Static.
NOTE
The Edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.
NOTE
Wait until the Configuration status displays Success and the Status is Up. You
might need to click REFRESH occasionally.
7. Verify that the SSH service is running and Start on boot is set to True.
get service ssh
g. Verify that the SSH service is running and Start on boot is set to True.
get service ssh
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager Simplified UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
4. Click SAVE.
When a message appears asking whether you want to continue editing the Tier-1 Gateway, click YES.
Task 1: Prepare for the Lab
3. Click SAVE.
a. When the message appears asking whether you want to continue configuring the
segment, click NO.
4. Repeat steps 1-3 to create another logical segment named Uplink-2 for the second uplink.
• Name: Enter Uplink-2.
• Uplink & Type: None.
• Transport Zone: Select Global-VLAN-TZ.
• VLAN: Enter 0 and click Add Item(s).
a. Click SAVE.
b. When the message appears asking whether you want to continue configuring the
segment, click NO.
b. Click SAVE.
3. When the message appears asking whether you want to continue editing this Tier-0 gateway,
click YES.
9. Your Tier-0 Gateway appears in the window with the status UP.
3. On the T1-LR-1 edit page, click the down arrow in the Linked Tier-0 Gateway field and
select T0-LR-1.
4. Click SAVE followed by CLOSE EDITING.
ping -c 3 192.168.100.1
ping -c 3 192.168.110.1
You should be able to ping from your student desktop to any of the tenant networks, which
verifies that the north-south routing is working properly.
You log in to the NSX Manager Simplified UI.
1. From your student desktop, open the Chrome web browser.
2. Click the NSX-T Data Center > NSX Manager bookmark.
3. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.
In the command output, VRF 6 is associated with SR-T0-LR-01. The VRF ID in your lab
might be different.
NOTE
On sa-nsxedge-01, the BGP state for neighbor 192.168.100.1 is established and up.
2. From MTPuTTY, connect to sa-nsxedge-02 and repeat step 1 to verify that the BGP
neighbor relationship is established between the VyOS router and the sa-nsxedge-02
gateway.
NOTE
On sa-nsxedge-02, the BGP neighbor state for neighbor 192.168.100.1 is active.
NOTE
Your results might show that the traffic for 172.16.10.11 goes to edge-01
and the traffic for 172.16.10.12 goes to edge-02, or vice versa.
11. If the .bat scripts do not automatically terminate, stop them manually.
a. In the httpdata11.bat window, press Ctrl+C to stop the script, and enter Y to terminate
the batch job.
b. In the httpdata12.bat window, press Ctrl+C to stop the script, and enter Y to terminate
the batch job.
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
3. Click SAVE.
A message appears asking whether you want to continue editing the Tier-1
Gateway. Click NO.
4. Click SAVE.
6. Click ADD NAT RULE again and check that T1-LR-2-NAT is still the value in the
Gateway field.
7. Provide the configuration details in the New NAT Rule window.
• Name: Enter NAT-Rule-2.
• Action: Select DNAT.
• Source IP: Leave blank.
• Destination IP: Enter 80.80.80.1.
• Translated IP: Enter 172.16.101.11.
• Firewall: Select Bypass.
• Priority: Enter 1024.
Leave all the other options as default.
8. Click SAVE.
a. Click CLOSE.
The T0-LR-01 Gateway Status shows Down until the configuration is realized on the NSX
Manager, which might take a few seconds.
5. Switch back to the MTPuTTY connection for sa-vyos-01 and enter show ip route
again to verify that 80.80.80.1/32 is displayed.
In the above command output, the VRF ID for SR-T0-LR is 9. The VRF ID in your lab
might be different.
3. Access the VRF for SR-T0-LR-01 and view the routing table of the Tier-0 SR.
vrf 9
get route
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
2. Connect Ubuntu-02a to the DHCP-LS.
a. Right-click Ubuntu-02a and select Edit Settings.
b. Change the Network Adapter 1 to connect to DHCP-LS, make sure Connected is
selected, and click OK.
3. Verify that the two virtual machines can communicate on the newly attached segment.
a. In the vSphere Web Client, select Hosts and Clusters and right-click Ubuntu-01a
from the inventory, and select Open Console.
b. Log in to Ubuntu-01a using vmware as the user name and VMware1! as the password.
c. Ping Ubuntu-02a.
ping -c 3 172.16.40.12
4. Verify the DHCP server configurations using the command line.
a. Switch to MTpuTTY and connect to sa-nsxedge-01.
b. Log in with admin as user name and VMware1!VMware1! as the password.
c. Get the DHCP servers.
get dhcp servers
5. Verify the configurations of the DHCP IP pools.
get dhcp ip-pools
6. Verify that the DHCP server operates as expected.
a. Switch to the vSphere Web Client and open a console for Ubuntu-02a.
b. Log in using the user name vmware and password VMware1!.
c. Gain root access by entering sudo -s and enter VMware1! when prompted for the
password.
d. Clear the IP address assignment and request a new one from DHCP.
ifconfig ens160 0.0.0.0 0.0.0.0 && dhclient
NOTE
Note the space between the two zero groupings for the IP address and netmask.
e. View the newly assigned IP address (172.16.40.25) from the DHCP pool with the
ifconfig command.
You see that the new inet addr: is now 172.16.40.25, which is the first address in
the DHCP IP pool.
7. Switch back to MTpuTTY to verify the DHCP lease.
a. Get the DHCP lease.
get dhcp leases
2. Switch to the vSphere Web Client and return the virtual machines to their original network.
a. Right-click Ubuntu-01a and select Edit Settings.
Lab 12 Configuring Load Balancing
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
ping 172.16.10.11
ping 172.16.10.12
ping 172.16.10.13
3. On your student desktop, open a browser tab and verify that you can access the three web
servers.
http://172.16.10.11
http://172.16.10.12
http://172.16.10.13
Do not proceed to the next task if you cannot access the three web servers.
5. Select the VIRTUAL SERVERS tab and verify that the newly created Web-LB-VIP
appears in the virtual server list.
6. Navigate to NSX-T Home UI > Networking > Load Balancers > LOAD BALANCERS.
a. Verify that the Web-LB load balancer is attached to the T1-LR-LB gateway and the
load balancer’s operational status is Up.
6. Click APPLY.
a. Click SAVE followed by CLOSE EDITING.
The output shows the general load balancer configuration, including UUID and
Virtual Server ID.
b. Copy the UUID and the Virtual Server ID values and paste them to a notepad.
2. Verify the virtual server configuration.
get load-balancer UUID virtual-server Virtual_Server_ID
UUID is the value that you recorded for the load balancer.
NOTE
You might need to wait a few minutes before trying to access the backup server.
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
NOTE
The edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.
NOTE
Wait until the Configuration status displays Success and the Status is Up. You
might need to click REFRESH occasionally.
NOTE
The edge deployment might take several minutes to complete. The deployment
status displays various values, for example, Node Not Ready, which is only
temporary.
NOTE
Wait until the Configuration status displays Success and the Status is Up. You
might need to click REFRESH occasionally.
7. Verify that the SSH service is running and Start on boot is set to True.
get service ssh
g. Verify that the SSH service is running and Start on boot is set to True.
get service ssh
NOTE
The Edge Cluster list might not populate initially. You might need to click the field
multiple times before the value becomes available.
• Preferred Edge: Select sa-nsxedge-03 (use the drop-down menu and select).
d. Click Save.
2. When the message asking whether you want to continue Configuring this Tier-0 Gateway
appears, click YES.
3. Expand ROUTE RE-DISTRIBUTION by clicking the > icon next to it and click Set.
a. Click the check boxes for the configuration.
• Select Static Routes.
• Select IPSec Local IP.
• Select Connected Interfaces & Segments and all subobjects.
• Advertised Tier-1 Subnets: Leave Off.
b. Click APPLY.
c. Click SAVE.
NOTE
The L2 VPN Session appears as either Down or In Progress until you have
deployed the L2 VPN Client and have an active session running.
13. Expand Sub Interface by using the > icon next to it.
• Enter 10(100) in the Sub Interface VLAN (Tunnel ID) text box.
14. Click Next and then Finish.
NOTE
You might encounter the Failed to Deploy OVF package...missing
descriptor error. If this happens, you must start the deployment over: power off
the NSX-l2t-client VM and select the Delete from Disk option before reattempting
the deployment. If the second attempt does not work correctly, ask your
instructor for assistance.
NOTE
Even after Recent Tasks shows that the task is complete, you might need to wait a few
minutes before the Power On option is accessible.
16. To ensure that the startup is complete, switch back to your vSphere Web Client, select
NSX-l2t-client in the inventory, click the gear icon in the console image, and select
Launch Web Console.
a. Wait for the login prompt to appear and log in with the user name admin and password
VMware1!VMware1!.
NOTE
Ensure that both NSX-l2t-client and T1-L2VPN-02 reside on the same host by
selecting each of them and viewing the Host value on the Summary tab.
Otherwise, use vMotion to migrate T1-L2VPN-02 to the same host as the NSX-l2t-
client. Both should reside on sa-esxi-01.vclass.local.
5. Return to the vCenter Hosts and Clusters inventory pane, select T1-L2VPN-02, click the
Summary tab, and click the gear icon in the console image to select Launch Web Console.
6. Log in to the T1-L2VPN-02 VM using the user name vmware and the password VMware1!.
a. Verify bidirectional connectivity from T1-L2VPN-02 to T1-L2VPN-01.
10. Check whether the ipsecvpn session is up between the local and remote peers.
11. Get the l2vpn session, tunnel, and IPSEC session numbers, and check that the status is UP.
get l2vpn sessions
12. Get statistics for the local and remote peers: whether the status is UP, the counts of packets
and bytes received (RX), packets transmitted (TX), and packets dropped, malformed, or
looped.
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
• If the Are you sure you want to continue connecting? message appears,
enter yes.
• If the Are you sure you want to continue connecting? message appears,
enter yes.
• Enter VMware1! as the password when prompted.
You should be able to enter T1-DB-01’s command prompt through SSH.
b. Click default in the Domain field, select Production from the list, and click SAVE.
5. Click PUBLISH.
6. Verify the connectivity from your student desktop to the Web-Tier VMs.
a. From your student desktop, open a browser tab and enter http://172.16.10.11.
The HTTP request should time out as a result of the firewall rule.
b. From your student desktop, open a browser tab and enter http://172.16.10.12.
5. Click PUBLISH.
6. Switch to the T1-App-01’s console prompt and test the SQL access again.
a. Test the SQL connectivity.
mysql -u root -h 172.16.30.11 -p
The mysql prompt verifies that the App-to-DB rule is working properly.
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the VMware NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
a. Click SAVE.
5. Edit the New Policy name.
• Name: Enter Block-SSH-Policy.
8. Click PUBLISH.
3. Click PUBLISH.
4. Verify that SSH is allowed from external sources.
5. Open MTPuTTY from the desktop and connect to T1-Web-01, T1-App-01, and T1-DB-01.
Your connections should work.
Task 1: Prepare for the Lab
You log in to the NSX Manager UI and the Identity Manager Administration Console.
1. From your student desktop, log in to the NSX Simplified UI.
a. Open the Chrome web browser.
b. Click the NSX-T Data Center > NSX Manager bookmark.
c. On the login page, enter admin as the user name and VMware1!VMware1! as the
password.
2. Log in to the VMware Identity Manager Administration Console.
a. Open another tab in the Chrome web browser.
b. Click the NSX-T Data Center > VMware Workspace ONE - VIDM bookmark.
c. If you see the Your connection is not private message, click ADVANCED
and click Proceed to sa-nsxvidm-01.vclass.local (unsafe).
d. Enter admin as the user name and VMware1! as the password.
e. On your first entry to the VMware Identity Manager, you are greeted by a message
that asks you to join the VMware Customer Experience Improvement Program
(CEIP). For lab purposes, deselect the check box and click OK.
b. Directory Sync and Authentication:
• Sync Connector: Leave as sa-nsxvidm-01.vclass.local (default).
• Authentication: Click Yes (default).
• Directory Search Attribute: Select sAMAccountName (default) and scroll down.
c. Certificates:
• Leave the check box deselected (default) and scroll down.
d. Join Domain Details:
• Domain Name: Enter vclass.local.
• Domain Admin Username: Enter administrator.
• Domain Admin Password: Enter VMware1! and scroll down.
4. On the Select the Domains page, ensure that Domain and vclass.local (VCLASS) are
selected and click Next.
5. On the Map User Attributes page, leave the default settings, and click Next.
6. On the Select the groups that you want to sync page, provide the necessary
specifications.
a. Leave the Sync nested group members check box selected (default).
b. In the Specify the group DNs row, click the green plus sign.
• When the Specify the group DNs text box appears, enter the group DN (you can verify that the DN exists with the ldapsearch sketch after this step):
CN=NSX-Users,CN=Users,DC=vclass,DC=local
c. Click Next.
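Before syncing, you can confirm that a DN exists in Active Directory by querying it with ldapsearch. This is an optional sketch, not part of the lab steps, assuming the ldapsearch client is installed and that vclass.local resolves to a domain controller:
ldapsearch -x -H ldap://vclass.local -D 'administrator@vclass.local' -w 'VMware1!' -b 'CN=NSX-Users,CN=Users,DC=vclass,DC=local' dn
The query returns the group DN if the entry exists.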
7. On the Select the Users you would like to sync page, provide the necessary
specifications.
a. In the Specify the user DNs row, click the green plus sign.
• When the Specify the user DNs text box appears, enter the user DN:
CN=John Doe,CN=Users,DC=vclass,DC=local
b. Click Next.
8. On the Review page, verify that there is one user and one group ready to synchronize, and
click Sync Directory.
The Import Status: Sync started message appears.
9. Click the Refresh Page link.
10. After the synchronization process completes, verify that one user and one group are listed in the vclass.local directory.
The green check mark indicates that the synchronization process succeeded.
Task 3: Create the OAuth Client for NSX Manager in
VMware Identity Manager
You create a new OAuth client for NSX Manager from the VMware Identity Manager Administration Console.
1. In the VMware Identity Manager Administration Console, click the down arrow next to the Catalog tab and select Settings from the drop-down menu.
2. In the left pane, select Remote App Access.
3. Click Create Client.
4. Provide the configuration details in the Create Client window.
• Access Type: Select Service Client Token.
• Client ID: Enter sa-nsxmgr-01-OAuthClient.
• Click the triangle to expand the Advanced option.
• Click the Generate Shared Secret link to populate the Shared Secret text box.
Copy and paste the shared secret into a notepad. You need it in Task 5.
5. Click Add.
Lab 16 Managing Users and Roles with VMware Identity Manager 181
6. Verify that the new OAuth client appears in the list.
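Beyond the UI check, a hedged way to exercise the new client is to request a service client token with curl. The /SAAS/auth/oauthtoken endpoint and the client_credentials grant are assumptions based on typical Workspace ONE Access usage; verify them against the vIDM documentation. Replace <shared-secret> with the value that you saved.
curl -k -X POST 'https://sa-nsxvidm-01.vclass.local/SAAS/auth/oauthtoken?grant_type=client_credentials' -u 'sa-nsxmgr-01-OAuthClient:<shared-secret>'
A JSON response that contains an access_token confirms that the client ID and shared secret are valid.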
5. Collect the SHA-256 fingerprint of the VMware Identity Manager certificate and record it in a notepad.
openssl x509 -in sa-nsxvidm-01.vclass.local_cert.pem -noout -sha256 -fingerprint
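If the exported PEM file is not at hand, you can also read the fingerprint directly from the live service. This is an equivalent sketch, assuming the appliance presents its certificate on port 443:
openssl s_client -connect sa-nsxvidm-01.vclass.local:443 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256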
Task 5: Enable VMware Identity Manager Integration with
NSX Manager
You integrate VMware Identity Manager with NSX Manager.
1. On the NSX Simplified UI Home page, click System > Users and click the Configuration tab.
2. Click the EDIT link.
3. Provide the configuration details in the Edit VMware Identity Manager Parameters
window.
• External Load Balancer Integration: Select Enabled.
• VMware Identity Manager Integration: Select Enabled.
• VMware Identity Manager Appliance: Enter sa-nsxvidm-01.vclass.local.
• OAuth Client ID: Enter sa-nsxmgr-01-OAuthClient, which is the client ID that you created in Task 3.
• OAuth Client Secret: Enter the shared secret that you collected in Task 3.
• SSL Thumbprint: Copy and paste the SHA-256 fingerprint that you collected in Task 4 with MTPuTTY.
• NSX Appliance: Enter 172.20.10.48.
4. Click SAVE.
5. Verify that the VMware Identity Manager Connection status is Up and the VMware
Identity Manager Integration status is Enabled.
NOTE
You might need to wait approximately 5 minutes and refresh the browser before proceeding.
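Instead of repeatedly refreshing the browser, you can poll the integration status from the command line. This sketch assumes the NSX-T node API path /api/v1/node/aaa/providers/vidm/status; confirm the path against the NSX-T API guide for your version.
curl -k -u 'admin:VMware1!VMware1!' https://172.20.10.48/api/v1/node/aaa/providers/vidm/status
The JSON response indicates whether the vIDM integration is enabled and reachable.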
8. Log in to the NSX Simplified UI at the Virtual IP address (https://172.20.10.48) as the
new user jdoe.
The VMware Identity Manager login page appears.
a. Verify that the vclass.local domain is selected. Otherwise, click Change to a
different domain to select it.
b. Click Next.
c. Enter jdoe as the user name, VMware1! as the password, and click Sign in.
9. In the upper-right corner of the NSX Simplified UI, click User to verify that you are
logged in as jdoe@vclass.local.
10. Click Networking > Segments and verify that the ADD SEGMENT option is grayed out.
The grayed-out option indicates that users with the Security Engineer role do not have permissions to configure segments.
11. Click System > Fabric > Nodes > Edge Transport Nodes and verify that the +ADD Edge VM option is grayed out.
The grayed-out option indicates that users with the Security Engineer role do not have permission to configure routing.
12. In the upper-right corner of the NSX Simplified UI, click User and select Log Out to log out as jdoe@vclass.local.
3. On the NSX Simplified UI Home page, click System > Users and click the
Configuration tab.
4. Click the EDIT link.
5. When the Edit VMware Identity Manager Parameters window appears, change the VMware Identity Manager Integration and External Load Balancer Integration options to Disabled and click SAVE.
6. Log out of the NSX Simplified UI and log in again at https://172.20.10.48/login.jsp?local=true as user admin with password VMware1!VMware1! to validate that VMware Identity Manager is properly disabled.
Your login should be successful.
7. Log in to the NSX Simplified UI using the new URL.
Ensure that you perform this step.
a. To bookmark the correct URL, right-click the NSX Data Center favorites tab and select Add page.
b. In the Name field, enter NSX After vIDM.
c. In the URL field, enter https://172.20.10.48/login.jsp?local=true.
d. Click the link to test it. You should be able to log in as user admin with password VMware1!VMware1!.
Lab 17 Configuring Syslog
Task 1: Prepare for the Lab
You can use the DNS name or the IP address of the Syslog server in your configuration.
3. Verify your logging configuration.
get logging-server
5. Verify that the log messages from NSX Manager with the IP address of 172.20.10.41
appear in Kiwi Syslog Server Console.
3. Configure the NSX Edge node to send info-level log messages over TCP to the Syslog server.
set logging-server student-a-01.vclass.local:1468 proto tcp level info
5. Go back to the Kiwi Syslog Server Console and verify that the log messages from the NSX Edge node with the IP address 172.20.10.61 appear.
6. Return to the sa-nsxedge-01 MTPuTTY session and remove the Syslog server
configuration.
del logging-server student-a-01.vclass.local:1468 proto tcp level info
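If the Kiwi Syslog Server Console is unavailable, a minimal way to verify TCP delivery on port 1468 is to listen with netcat on the student desktop. This is a sketch, assuming a netcat variant is installed (the listen flags differ between versions):
nc -l 1468
Any syslog lines that the Edge node sends over TCP print to the terminal, which confirms that the exporter and port are reachable.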
Task 1: Prepare for the Lab
3. From the Available pane, select the sa-nsxmgr-01 check box and click the right arrow to
move it to the Selected pane.
Task 1: Prepare for the Lab
You log in to the vSphere Web Client UI and the NSX Manager UI.
1. From your student desktop, log in to the vSphere Web Client UI.
a. Open the Chrome web browser.
b. Click the vSphere Site-A > vSphere Web Client (SA-VCSA-01) bookmark.
c. On the login page, enter administrator@vsphere.local as the user name and
VMware1! as the password.
5. Click TRACE.
2. Verify that the Traceflow output appears, with a diagram on the left and the steps of the packet walk on the right.
3. In the first row of the packet walk, verify that a packet is injected through the Transport
Node.
4. In the second and third rows, verify that the distributed firewall receives the packet, applies
firewall rules, and forwards the packet to the App-LS logical switch.
5. From the fourth to the seventh rows, verify that App-LS is attached to the gateway T1-LR-1, which receives the packet and forwards it to the attached logical segment Web-LS.
6. In the eighth and ninth rows, verify that the source VTEP and destination VTEP IP
addresses appear, because the source and the destination VMs reside on two different
hosts.
7. In the tenth and eleventh rows, verify that the distributed firewall receives the packet and applies firewall rules, if any, at the destination host.
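Traceflow can also be started through the NSX-T REST API rather than the UI. The POST /api/v1/traceflows endpoint exists in NSX-T, but the payload fields below (lport_id, the FieldsPacketData packet description, and the placeholder values in angle brackets) are assumptions from memory; confirm them against the NSX-T API guide before relying on this sketch.
curl -k -u 'admin:VMware1!VMware1!' -H "Content-Type: application/json" -X POST https://172.20.10.48/api/v1/traceflows -d '{"lport_id": "<source-logical-port-uuid>", "packet": {"resource_type": "FieldsPacketData", "transport_type": "UNICAST", "ip_header": {"src_ip": "<source-vm-ip>", "dst_ip": "<destination-vm-ip>"}}}'
The response includes a traceflow ID whose observations report the same packet walk that the UI displays.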