
HOL-1703-SDC-1

Table of Contents
Lab Overview - HOL-1703-SDC-1 - VMware NSX: Introduction and Feature Tour
    Lab Guidance
Module 1 - Installation Walk Through (30 minutes)
    Special Note on Module 1
    Introduction to Deploying NSX
    Deploying the NSX Manager OVA
    Registering NSX with vCenter
    Configuring Syslog and NSX Manager Backups
    Deploying NSX Controllers
    Preparing a Cluster for NSX
    Configuring and verifying VXLAN Tunnel End Points
    Creating VXLAN Network Identifier Pools
    Creating Transport Zones
    NSX Manager Dashboard
    Module 1 Conclusion
Module 2 - Logical Switching (30 minutes)
    Logical Switching - Module Overview
    Logical Switching
    Scalability/Availability
    Module 2 Conclusion
Module 3 - Logical Routing (60 minutes)
    Routing Overview
    Dynamic and Distributed Routing
    Centralized Routing
    ECMP and High Availability
    Prior to moving to Module 3 - Please complete the following cleanup steps
    Module 3 Conclusion
Module 4 - Edge Services Gateway (60 minutes)
    Introduction to NSX Edge Services Gateway
    Deploy Edge Services Gateway for Load Balancer
    Configure Edge Services Gateway for Load Balancer
    Edge Services Gateway Load Balancer - Verify Configuration
    Edge Services Gateway Firewall
    DHCP Relay
    Configuring L2VPN
    Module 4 Conclusion
Module 5 - Physical to Virtual Bridging (60 minutes)
    Native Bridging
    Introduction to Hardware VTEP with Arista
    Hands-on Lab Interactive Simulation: Hardware VTEP with Arista
    Bridging Design Considerations
    Module 5 Conclusion
Module 6 - Distributed Firewall (45 minutes)
    Distributed Firewall Introduction
    Confirm DFW Enablement
    Configure Rules for Web Application Access
    Security Group Creation
    Access Rule Creation
    Module 6 Conclusion


Lab Overview - HOL-1703-SDC-1 - VMware NSX: Introduction and Feature Tour

Lab Guidance
Note: It will take more than 90 minutes to complete this lab. You should
expect to only finish 2-3 of the modules during your time. The modules are
independent of each other so you can start at the beginning of any module
and proceed from there. You can use the Table of Contents to access any
module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the
Lab Manual.
VMware NSX is the network virtualization platform for the Software-Defined Data Center
(SDDC) and the main focus of this lab. Throughout this lab we will guide you through the
basic features of NSX. We start off by walking you through a typical installation of NSX
and all of the required components. We then go into Logical Switching and Logical
Routing so that you have a better understanding of these concepts. We then cover the
Edge Services Gateway and how it can provide common services such as DHCP, VPN,
NAT, Dynamic Routing and Load Balancing. Finally, we cover how to bridge a physical
VLAN to a VXLAN logical switch, and then the Distributed Firewall. The complete Lab Module
list is below; all modules are completely independent, so you can freely move
between them.
Lab Module List:
Module 1 - Installation Walk Through (30 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
Module 2 - Logical Switching (30 minutes) - Basic - This module will walk you
through the basics of creating logical switches and attaching virtual machines to
logical switches.
Module 3 - Logical Routing (60 minutes) - Basic - This module will help us
understand some of the routing capabilities supported in the NSX platform and
how to utilize these capabilities while deploying a three tier application.
Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
demonstrate the capabilities of the Edge Services Gateway and how it can
provide common services such as DHCP, VPN, NAT, Dynamic Routing and Load
Balancing.
Module 5 - Physical to Virtual Bridging (30 minutes) - Basic - This module will
guide us through the configuration of a L2 Bridging instance between a traditional
VLAN and a NSX Logical Switch. There will also be an offline demonstration of
NSX integration with Arista hardware VXLAN-capable switches.
Module 6 - Distributed Firewall (45 minutes) - Basic - This module will cover
the Distributed Firewall and creating firewall rules between a 3-tier application.
Lab Captains:

Module 1 - Michael Armstrong, Senior Systems Engineer, United Kingdom
Module 2 - Mostafa Magdy, Senior Systems Engineer, Canada
Module 3 - Aaron Ramirez, Staff Systems Engineer, USA
Module 4 - Chris Cousins, Systems Engineer, USA
Module 5 - Aaron Ramirez, Staff Systems Engineer, USA
Module 6 - Brian Wilson, Staff Systems Engineer, USA

This lab manual can be downloaded from the Hands-on Labs Document site found here:
[http://docs.hol.pub/HOL-2017]
This lab may be available in other languages. To set your language preference and have
a localized manual deployed with your lab, you may utilize this document to help guide
you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf

Location of the Main Console


1. The area in the RED box contains the Main Console. The Lab Manual is on the tab
to the Right of the Main Console.
2. A particular lab may have additional consoles found on separate tabs in the upper
left. You will be directed to open another specific console if needed.
3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your
work must be done during the lab session. However, you can click the EXTEND button to
increase your time. If you are at a VMware event, you can extend your lab time
twice, for up to 30 minutes. Each click gives you an additional 15 minutes.
Outside of VMware events, you can extend your lab time up to 9 hours and 30
minutes. Each click gives you an additional hour.

Alternate Methods of Keyboard Data Entry


During this module, you will input text into the Main Console. Besides directly typing it
in, there are two very helpful methods of entering data which make it easier to enter
complex data.

Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly
from the Lab Manual into the active window in the Main Console.

Accessing the Online International Keyboard


You can also use the Online International Keyboard found in the Main Console.
1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.

Click once in active console window


In this example, you will use the Online Keyboard to enter the "@" sign used in email
addresses. The "@" sign is Shift-2 on US keyboard layouts.
1. Click once in the active console window.
2. Click on the Shift key.

Click on the @ key


1. Click on the "@" key.
Notice the @ sign entered in the active console window.

Look at the lower right portion of the screen


Please check to see that your lab has finished all the startup routines and is ready for you
to start. If you see anything other than "Ready", please wait a few minutes. If after 5
minutes your lab has not changed to "Ready", please ask for assistance.

Activation Prompt or Watermark


When you first start your lab, you may notice a watermark on the desktop indicating
that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and
run on any platform. The Hands-on Labs utilize this benefit, and we are able to run the
labs out of multiple datacenters. However, these datacenters may not have identical
processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft
licensing requirements. The lab that you are using is a self-contained pod and does not
have full access to the Internet, which is required for Windows to verify the activation.
Without full access to the Internet, this automated process fails and you see this
watermark.
This cosmetic issue has no effect on your lab.

Module 1 - Installation Walk Through (30 minutes)

Special Note on Module 1


Please read the following section as it is important for this module.
This module is a walkthrough of the installation and configuration of NSX; it is not meant
to be completed as part of the lab. You may click through the steps in the manual to
see the actions that need to be taken, but do not actually perform them in the lab itself.
If you are not interested in the installation of NSX and want to proceed to a point in the
manual where you are interfacing with the lab, please proceed to Module 2 - Logical
Switching or use the Table of Contents in the upper right corner of the interface to select
your desired module.

Introduction to Deploying NSX


VMware NSX is the leading network virtualization platform that delivers the operational
model of a virtual machine for the network. Just as server virtualization
provides extensible control of virtual machines running on a pool of server hardware,
network virtualization with NSX provides a centralized API to provision and configure
many isolated logical networks that run on a single physical network.
Logical networks decouple virtual machine connectivity and network services from the
physical network, giving cloud providers and enterprises the flexibility to place or
migrate virtual machines anywhere in the data center while still supporting layer-2 /
layer-3 connectivity and layer 4-7 network services.
Within this module we are going to focus on how to perform the actual deployment of
NSX within your environment. Within the lab environment the actual deployment has
already been completed for you.
This Module contains the following lessons:

Deploying the NSX Manager OVA


Registering NSX with vCenter
Configuring Syslog and NSX Manager Backups
Deploying NSX Controllers
Preparing a Cluster for NSX
Configuring and verifying VXLAN Tunnel End Points (VTEP's)
Creating VXLAN Network Identifier Pools (VNI's)
Creating Transport Zones
NSX Manager Dashboard
Conclusion

NSX Components

Deploying the NSX Manager OVA


Within this lesson we are going to walk you through the deployment of the NSX Manager
OVA file utilizing the vSphere Web Client.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Launch the Google Chrome browser


1. Click on the Chrome browser icon from within the taskbar or on the desktop of
the main console.

vCenter - Region A bookmark


1. If the page does not automatically default to the vSphere Web Client page click
on the vCenter - Region A link in the bookmarks bar.

Login to the vSphere Web Client


1. Either Check the option to Use Windows session authentication or enter the
following username and password:
   User name - administrator@corp.local
   Password - VMware1!
2. Click Login to continue.

Navigate to Hosts and Clusters


After logging into the vSphere Web Client with credentials that have permissions to
deploy a new virtual machine from an .ova:
1. Click on the Home button.
2. Click on Hosts and Clusters.

Deploy OVF Template to Cluster


1. Right click on the cluster you wish to deploy the NSX Manager to.
2. Select Deploy OVF Template.

Select the NSX Manager OVF from your local drive


Ensure you have the VMware Client Integration Plug-in installed locally; if not, you will
be prompted to install it before proceeding:
1. Click the Browse button and select the NSX Manager OVA File from your local device
and click Open.
2. Click Next to continue and wait for the file to be validated.
** Since NSX Manager has already been deployed the uploading of the .OVA
file is not required and the file does not exist within the HOL Environment. **

Review Details
1. Check the box to Accept extra configuration options.
2. Click Next to continue.

Read and Accept License Agreement


1. Read through the entire license Agreement ensuring you scroll all the way down
to the bottom.
2. Click the Accept button.
3. Click Next to continue.

Name and vCenter Location


1. Enter a name for the NSX Manager appliance virtual machine.
2. Select a Folder or Datacenter to deploy the appliance to.
3. Click Next to continue.

Select Storage
1. Ensure the virtual disk format is Thick Provision Lazy Zeroed.
2. Select a VM Storage Policy, if you have them configured within your environment,
otherwise leave it set to Datastore Default.
3. Select the datastore to deploy the NSX Manager appliance to.
4. Click Next to continue and wait for the selections to be validated.

Setup Networks
1. Select the Management network for the NSX Manager appliance.
2. Click Next to continue.

Customize Template
1. Enter and confirm the password for the default CLI user for NSX Manager.
2. Enter and confirm the password for the privilege mode user for NSX Manager.
3. Enter the required hostname.
4. Enter the IPv4 address, Netmask and Gateway addresses.
5. Enter the IPv6 address, Prefix and Gateway if required.
6. Enter the DNS Server and Search list (If you have multiple entries use a space to separate).
7. Enter the NTP Server list (If you have multiple entries use a space to separate).
8. Check the box to Enable SSH if required.
9. Click Next to continue.


Ready to Complete
1. Verify all your settings are correct.
2. Check the box to Power on after deployment.
3. Click Finish.
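
For reference, the same deployment can be scripted with VMware's ovftool command-line utility instead of the vSphere Web Client wizard. The sketch below is illustrative only and is not a lab step: the OVA filename, vCenter address, inventory path, datastore, port group and IP values are placeholders, and the --prop: keys (vsm_cli_passwd_0, vsm_hostname and so on) are the property names typically exposed by the NSX Manager 6.2 OVA; confirm them by running ovftool against your OVA (which lists its properties) before using anything like this.

# Hedged example only - every name, address and property key below is an assumption; verify against your OVA and inventory.
ovftool --acceptAllEulas --allowExtraConfig --powerOn \
  --name=nsxmgr-01a --datastore=ds-mgmt-01 --diskMode=thick \
  --network="Mgmt-PortGroup" \
  --prop:vsm_cli_passwd_0='VMware1!' \
  --prop:vsm_cli_en_passwd_0='VMware1!' \
  --prop:vsm_hostname=nsxmgr-01a.corp.local \
  --prop:vsm_ip_0=192.168.110.42 \
  --prop:vsm_netmask_0=255.255.255.0 \
  --prop:vsm_gateway_0=192.168.110.1 \
  --prop:vsm_dns1_0=192.168.110.10 \
  --prop:vsm_ntp_0=192.168.110.10 \
  --prop:vsm_isSSHEnabled=True \
  VMware-NSX-Manager-6.2.x.ova \
  'vi://administrator%40corp.local@vcsa-01a.corp.local/RegionA01/host/RegionA01-MGMT01'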

Registering NSX with vCenter


Now that NSX Manager has been deployed and powered on we need to configure it to
point to your Lookup Service for Single Sign On authentication as well as vCenter so the
NSX Manager Networking and Security plugin can be installed to access the Graphical
User Interface.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Launch the Google Chrome browser


Click on the Chrome browser icon from within the taskbar or on the desktop of the main
console.

Navigate to NSX Manager admin web page bookmark


1. Click on the Admin folder.
2. Click on the NSX Manager 01a link.

Login to NSX Manager admin web page


1. Login with the credentials you supplied during the installation of NSX Manager:
   User name - admin
   Password - credentials supplied during installation

If you are following these steps within the lab, the password for NSX Manager is
VMware1!

View NSX Manager appliance summary


We need to verify that the NSX Management Service has successfully started. It can
take approximately five minutes for the service to completely start once the NSX
Manager appliance has been powered on.
1. Click on the View Summary option.

Verify the NSX Management Service is running


1. Ensure the following common components are in a running state:
   vPostgres
   RabbitMQ
2. Ensure the NSX Management Components are running:
   NSX Management Service

Go back to the Home screen


1. Click on the Home icon to go back to the main home screen.

Click Manage vCenter Registration


1. Click on the Manage vCenter Registration button.

Click Edit to configure the Lookup Service details


1. Click on the Edit button next to the Lookup Service to modify the options.

Add Lookup Service URL and credentials


1. Enter the Lookup Server IP address or Fully Qualified Domain Name.
2. Enter the Lookup Service Port.
3. Enter the Single Sign On Administrator User Name.
4. Enter the Single Sign On Administrator Password.
5. Click OK to accept the options.

Trust the Lookup Service Certificate


1. Verify and then click Yes to trust the Lookup Service SSL certificate.

Ensure Lookup Service is Connected


1. Verify the settings have been entered correctly and the Lookup Service status
shows Connected.

Add vCenter Server and credentials


Click Edit to add information regarding your vCenter Server:
1. Enter your vCenter Server Fully Qualified Domain Name.
2. Enter your vCenter Server User Name.
3. Enter your vCenter Server Password.
4. Click OK to accept the options.

Ensure the vCenter Server is Connected


1. Verify the settings have been entered correctly and the vCenter Server status
shows Connected.
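
If you prefer to automate this step, the same Lookup Service and vCenter registration is exposed through the NSX Manager REST API. The calls below are a hedged sketch rather than a lab step: /api/2.0/services/vcconfig is the NSX 6.2 vCenter registration resource as I understand it (the Lookup Service has a similar resource under /api/2.0/services/ssoconfig), but the hostnames, credentials and exact XML fields shown are assumptions that should be checked against the NSX API guide for your version.

# Hedged example only - register vCenter with NSX Manager over the REST API.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' -X PUT \
  https://nsxmgr-01a.corp.local/api/2.0/services/vcconfig \
  -d '<vcInfo>
        <ipAddress>vcsa-01a.corp.local</ipAddress>
        <userName>administrator@corp.local</userName>
        <password>VMware1!</password>
      </vcInfo>'

# Read back the current registration to confirm it was accepted:
curl -k -u 'admin:VMware1!' https://nsxmgr-01a.corp.local/api/2.0/services/vcconfig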

Configuring Syslog and NSX Manager Backups

Logging and backups are an integral part of any infrastructure, so we need to ensure
that we configure these settings prior to going into production.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

View NSX Manager Appliance settings


1. From the NSX Manager home screen select the Manage Appliance Settings
option.

Verify NTP and Edit Syslog Server


1. Verify the NTP Server settings that were configured as part of the NSX Manager
Deployment are correct.
2. Click Edit next to Syslog Server to modify the settings.

Enter Syslog Server settings


Enter the Syslog Server Settings applicable for your environment:
1. Enter the Syslog Server Fully Qualified Domain Name.
2. Enter the port the Syslog Server is listening on.
3. Enter the protocol the Syslog Server is using from the drop down menu (TCP,
TCP6, UDP, UDP6).
4. Click OK to complete and save the settings.

Verify the Syslog Server settings have been saved


1. Ensure the Syslog Server settings have been saved and are correct.
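
The syslog target can also be set through the NSX Manager appliance-management REST API. This is a hedged sketch only: the endpoint and XML element names follow the NSX 6.2 appliance-management API as I recall it, and the NSX Manager hostname, credentials and syslog server shown are assumptions for this environment, so verify them in the API guide before use.

# Hedged example only - point NSX Manager at a syslog server over the REST API.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' -X PUT \
  https://nsxmgr-01a.corp.local/api/1.0/appliance-management/system/syslogserver \
  -d '<syslogserver>
        <syslogServer>syslog-01a.corp.local</syslogServer>
        <port>514</port>
        <protocol>UDP</protocol>
      </syslogserver>'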

Configuring NSX Manager backups


NSX Manager backup and restore can be configured from the NSX Manager virtual
appliance web interface or through the NSX Manager API. Backups can be scheduled on
an hourly, daily or weekly basis.
The backup file is saved to a remote FTP or SFTP location that NSX Manager can access.
NSX Manager data includes configuration, events, and audit log tables. Configuration
tables are included in every backup.
1. From the NSX Manager home screen select the Backup & Restore option.

Change the FTP Server Settings


1. Click the FTP Server Settings Change button to modify the settings.

Enter the FTP / SFTP Server Settings


We now need to enter the required information to enable us to backup to a FTP / SFTP
Server:
1. Enter the FTP / SFTP IP address or Host Name.
2. Enter the Transfer Protocol (FTP or SFTP).
3. The port should automatically be populated based on the previous selection.
4. Enter the user name to authenticate with.
5. Enter the password for the user name.
6. Enter the Backup Directory to place the backup files in (If this does not exist and the user has permissions this will be created).
7. Enter a prefix to append to the start of all backup files.
8. Enter a pass phrase to secure the backup (You will need this pass phrase to restore the backup).
9. Click OK to confirm.

Change the Scheduling Settings


You have the ability to configure automatic backups on a weekly, daily or hourly basis
depending on your organization's requirements.
1. Click the Scheduling Change button to modify the settings.

Configuring Backup Frequency


1. Click on the Backup Frequency drop down menu and select either Weekly, Daily
or Hourly.

Configuring day of week


1. When performing a weekly backup you have the option to select the day of the
week to perform the backup.

Configuring hour of day


1. Select the hour of the day to perform the backup.

Configure minute
1. Finally configure the minute to perform the backup.
2. Click Schedule to confirm your settings.

Change the Exclude Settings


You have the ability to exclude the following logs from being backed up to save time and
reduce the size of backups:
Audit Logs
System Events
Flow Records
1. Click the Exclude Change button to modify the settings.

Exclude logs
1. Select the logs that you wish to exclude by checking the box.
2. Click OK to confirm.

Initiate a backup
1. You can either wait for the configured scheduled time to perform a backup or click
the Backup button to start the process.
2. Once completed you should see a new backup in the history which can be
restored when needed.
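
Backups can also be driven through the NSX Manager appliance-management REST API, which is useful when you want to take an on-demand backup from a script before a change window. The call below is a hedged sketch: the backuprestore endpoint path is my understanding of the NSX 6.2 API and relies on the FTP/SFTP settings already configured above, and the hostname and credentials are assumptions, so confirm the path in the API guide before relying on it.

# Hedged example only - trigger an on-demand NSX Manager backup using the configured FTP/SFTP settings.
curl -k -u 'admin:VMware1!' -X POST \
  https://nsxmgr-01a.corp.local/api/1.0/appliance-management/backuprestore/backup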

Deploying NSX Controllers


The NSX Controller cluster is an advanced distributed state management system that controls
virtual networks and overlay transport tunnels.
NSX controllers are the central control point for all logical switches within a network and
maintain information about all virtual machines, hosts, logical switches, and VXLANs. The
controllers support two new logical switch control plane modes: Unicast and Hybrid.
These modes decouple NSX from the physical network. VXLANs no longer require the
physical network to support multicast in order to handle the Broadcast, Unknown
unicast, and Multicast (BUM) traffic within a logical switch. The unicast mode replicates
all the BUM traffic locally on the host and requires no physical network configuration. In
the hybrid mode, some of the BUM traffic replication is offloaded to the first hop
physical switch to achieve better performance.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Login to the vSphere Web Client


1. Browse to your vSphere Web Client using a supported browser and authenticate
with the account you used to register NSX Manager with vCenter.

Click on the Network & Security tab


You should now see a Networking & Security tab within the vSphere Web Client home
screen.
1. Click on the Networking & Security tab to manage NSX.

Click on the Installation Tab


You will now see a list of options which include creating logical switches, NSX Edges,
Firewall rules etc.
1. Click on the Installation tab.

Deploy a new NSX Controller


You will now be taken to the NSX Manager Installation page, which contains four tabs:
1. Management - Used to deploy NSX Controllers and also configure NSX in a cross
vCenter environment.
2. Host Preparation - Used to deploy VMware Infrastructure Bundles (VIBs) to the
hosts and perform troubleshooting.
3. Logical Network Preparation - Used to verify VXLAN Transport information, create
VXLAN Network Identifiers (VNIs) and Transport Zones.
4. Service Deployments - Used to manage 3rd party services.
5. Currently there are no controllers deployed. In order to create a new controller,
click on the green plus icon.

Add Controller Settings


We now need to configure the appropriate settings for the NSX controller:
1. NSX Manager - Select the NSX Manager this controller should be associated with.
2. Datacenter - Select the Datacenter this controller will be deployed to.
3. Cluster / Resource Pool - Select the Cluster / Resource Pool that the controller will
be deployed into.
4. Datastore - Select the datastore the controller will be deployed onto.
5. Host - Optionally select a host to deploy the controller to.

6. Folder - Optionally place the controller into a specific vCenter folder.

Select the Connected To Network


1. Click on the Select link.
2. Select the Distributed Port Group that you would like the NSX Controllers to be
connected to.
3. Click OK to confirm.

Creating a new IP Pool


1. Click Select next to the IP Pool.
2. Click New IP Pool.

Configuring a new IP Pool


Configure the options as per your environment:
1. Name - Give the IP Pool a descriptive Name.
2. Gateway - Enter the IP address of the default gateway.
3. Prefix Length - Enter the prefix length of the network.
4. Primary DNS - Optionally enter the primary DNS address of your environment.
5. Secondary DNS - Optionally enter the secondary DNS of your environment.
6. DNS Suffix - Optionally enter the DNS suffix of your environment.
7. Static IP Pool - Enter a range of IP addresses that can be used when deploying controllers.
8. Click OK to complete.

Select the newly created IP Pool


1. Select the IP Pool you just created.
2. Click OK.

Enter and confirm the NSX Controller password


1. Enter and confirm the NSX Controller password. All subsequent NSX Controllers
will utilize the same password for consistency.
2. Once confirmed click OK to deploy the first NSX Controller.

Controller deploying
1. The first controller should start deploying and take approximately 5 - 10 minutes.

Deploying additional Controllers


1. Once the first controller has successfully deployed and the status shows Connected,
you can now deploy additional controllers up to the maximum supported for production,
which is three. Click on the green plus icon to add another controller and enter the
settings as per the first controller. You will notice that for additional controllers there
is no need to specify a password. The password used for the first controller is used for
all subsequent controllers.
2. Click OK to proceed.

Verify all Controllers show Connected


1. Once all three controllers have been deployed verify that they are all connected
to the NSX Manager instance and the status shows Connected.
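
Once all three controllers show Connected, cluster health can also be checked from any controller's own command line. The commands below are standard NSX controller CLI commands; SSH to one of the controller IP addresses shown on the Management tab (the password is the one set when the first controller was deployed - in this lab, VMware1!).

show control-cluster status

Each node should report that its cluster join is complete and that it is connected to the cluster majority. To see which roles (api_provider, persistence_server, switch_manager, logical_manager, directory_server) are active on the node, run:

show control-cluster roles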

Preparing a Cluster for NSX


To prepare your environment for network virtualization, you must install network
infrastructure components on a per-cluster level for each vCenter server. This deploys
the required software on all hosts in the cluster. When a new host is added to this
cluster, the required software is automatically installed on the newly added host.
After the network infrastructure is installed on a cluster the Distributed Firewall is
enabled on that cluster.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Navigate to Host Preparation tab


1. Login to the vSphere Web Client and Navigate to Networking & Security and
click on the Installation tab.
2. Click on the Host Preparation tab.

Prepare a cluster for NSX


NSX components are deployed on a cluster-by-cluster basis. You cannot prepare a single
host within a cluster; it's all or nothing:

1. Highlight the Cluster you wish to prepare.


2. Click on the Actions cog.
3. Click Install.

Confirm the Install


1. Click Yes to confirm the install.

Installation in Progress
The VMware Infrastructure Bundles (vibs) are now being pushed down to all ESXi hosts
in the cluster which should only take a few minutes and the hosts do not require a
reboot.

Verify Installation is complete


1. Verify that the Installation Status has completed successfully and shows the
version of NSX that you are currently deploying.
2. The Firewall is automatically enabled as part of the deployment so ensure it
says Enabled.

Prepare all clusters that will utilize NSX components


1. Continue to prepare all clusters that utilize NSX components within your
environment and ensure the Installation Status shows the version of NSX that
you are currently deploying.
2. Ensure the Firewall is enabled.
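
Host preparation can also be verified from an ESXi host's command line. This is a hedged check rather than a lab step: exact VIB names can vary between NSX releases, but in NSX 6.2 host preparation typically installs the esx-vsip and esx-vxlan bundles.

# Run on a prepared host (for example, over SSH to one of the compute hosts):
esxcli software vib list | grep esx-v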

Configuring and verifying VXLAN Tunnel End Points

The VXLAN network is used for Layer 2 Logical Switching across hosts. You configure
VXLAN on a per-cluster basis, where you map each cluster that is to participate in a
logical network to a vDS. When you map a cluster to a switch, each host in that cluster
is enabled for logical switches.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Navigate to Host Preparation tab


1. Login to the vSphere Web Client and Navigate to the Networking & Security
tab and click on the Installation tab.
2. Click on Host Preparation.

Configure cluster for VXLAN


You will notice that all three clusters are currently Not Configured for VXLAN.
1. To configure a cluster simply click on the Not Configured link.

Configure VXLAN settings


Enter the appropriate settings applicable to your network:
1. Switch - This option will automatically be populated based on the vSphere
Distributed Switch (vDS) the cluster is connected to. If you have multiple
vSphere Distributed Switches then select the appropriate one.
2. VLAN - Specify the VLAN ID that will be used within the physical network.
3. MTU - The MTU will automatically be populated with 1600.
4. VMKNic IP Addressing - You have the option to allocate IP addresses to the VMK
Nics based on an IP pool or DHCP. DHCP is more flexible as the range of available
IP addresses can be modified from a centralised DHCP server rather than using IP
Pools.
5. VMKNic Teaming Policy - You can specify a teaming policy with the options being
Fail Over, Static EtherChannel, Enhanced LACP, Load Balance based on SRCID or
Load Balance based on SRCMAC.
6. VTEP - This will automatically be populated based on the Teaming Policy selected
and number of vSphere Distributed Switches.
7. Click OK to confirm.

Confirm VXLAN is Configured and prepare remaining clusters

1. Verify that VXLAN is now configured and prepare all remaining hosts for VXLAN
that require it.

Additional VXLAN options


1. To configure additional options at the cluster level, such as IP Detection Types or
Locale ID, or to uninstall VXLAN, simply highlight the cluster, click on the cog icon
on the right hand side and select the required option.

Verifying VXLAN Tunnel Endpoints (VTEPS)


To verify that the VTEP's have obtained IP addresses either via DHCP or an IP Pool:

1. Click on the Logical Network Preparation tab.
2. Click on VXLAN Transport.
3. Expand the cluster that you wish to verify.
4. Ensure IP addresses have been assigned to the hosts.
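
The VTEP vmkernel interfaces can also be checked directly from an ESXi host. The commands below are a hedged sketch: the vmk number and destination VTEP address are examples only, so substitute the values shown on the VXLAN Transport page (this environment uses the 192.168.130.0/24 VTEP pool).

# List vmkernel interfaces and confirm the VTEP vmk has an address from the VTEP pool:
esxcli network ip interface ipv4 get
# Test VTEP-to-VTEP reachability over the dedicated VXLAN TCP/IP stack, using a frame size
# that validates the 1600-byte MTU (vmk3 and the target address are examples):
vmkping ++netstack=vxlan -d -s 1572 -I vmk3 192.168.130.52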

Changing VXLAN port


As part of NSX 6.2.3 you now have the ability to change the port used by VXLAN. Prior
to NSX 6.2.3 the default VXLAN UDP port was 8472. If you are performing a fresh install
of NSX then UDP port 4789 will be used; if you are performing an upgrade then UDP port
8472 will still be used, so you may want to change it to 4789.
1. Click on the Change button.
2. Enter the required port to be used by VXLAN.
3. Click OK to confirm the changes.
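
The same port change is exposed through the NSX REST API, which can be handy when updating several upgraded environments. This is a hedged sketch only: the endpoint below reflects my understanding of how NSX 6.2.3 exposes the VXLAN port change, and the hostname and credentials are assumptions, so verify it against the API guide for your build.

# Hedged example only - switch the VXLAN encapsulation port to the IANA-assigned 4789.
curl -k -u 'admin:VMware1!' -X PUT \
  https://nsxmgr-01a.corp.local/api/2.0/vdn/config/vxlan/udp/port/4789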

Creating VXLAN Network Identifier Pools

You must specify a Segment ID pool for each NSX Manager to isolate your network
traffic. If an NSX controller is not deployed in your environment, you must add a
multicast address range to spread traffic across your network and avoid overloading a
single multicast address. The Segment ID Pool specifies a range of VXLAN Network
Identifiers (VNIs) for use when building Logical Network segments.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Navigate to Logical Network Preparation tab


Login to the vSphere Web Client and ensure you are in Networking & Security.
1. Click on the Installation tab.
2. Click on the Logical Network Preparation tab.
3. Finally click on Segment ID.

Add a Segment ID Pool


1. Click on the Edit button.
2. Enter a range of Segment ID's.
3. Click OK to confirm.

Add Multicast addressing


If you are looking to utilize Hybrid or Multicast replication modes whereby the physical
infrastructure is used to replicate traffic then you need to enable Multicast addressing:
1. Check the Enable Multicast addressing tick box.
2. Enter your multicast address range.
3. Click OK to confirm.

Verify Segment ID Pool


1. Verify the Segment ID Pool has been added successfully.
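
For completeness, a Segment ID pool can also be created through the NSX REST API. The call below is a hedged sketch, not a lab step: the /api/2.0/vdn/config/segments resource and the segmentRange fields are my reading of the NSX 6.2 API, and the range, name, hostname and credentials are example values.

# Hedged example only - create a Segment ID (VNI) pool of 5000-5999.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' -X POST \
  https://nsxmgr-01a.corp.local/api/2.0/vdn/config/segments \
  -d '<segmentRange>
        <name>Primary-Segment-Pool</name>
        <begin>5000</begin>
        <end>5999</end>
      </segmentRange>'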

Creating Transport Zones


A transport zone defines the span of logical networks across a set of vCenter clusters. Transport
Zones can be configured in one of three modes:
Multicast - Multicast on the physical network is used for the VXLAN control plane.
Unicast - VXLAN control plane is handled by the NSX Controller Cluster.
Hybrid - Optimized Unicast mode which offloads local traffic replication to the physical
network.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Navigate to Logical Network Preparation tab


Ensure you are logged into the vSphere Web Client and within Networking & Security.
1. Click on the Installation tab.
2. Click on the Logical Network Preparation tab.
3. Finally click on Transport Zones.

Create a new Transport Zone


1. Click on the green plus to create a new Transport Zone.
2. Enter a name and optional Description for the new Transport Zone.
3. Select the Replication mode.

4. Select the clusters that you would like added to the Transport Zone. Any logical
switches created in this Transport Zone will automatically be added to the
clusters selected here.
5. Click OK to confirm.

Verify Transport Zone has been created


1. Verify the Transport Zone has been successfully created.
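
Transport zones are also visible through the REST API as "VDN scopes", and the scope ID returned here is what you reference when creating logical switches programmatically. The call below is a hedged example; the NSX Manager hostname and credentials are assumptions for this environment.

# List the configured transport zones and note the vdnscope-X id of RegionA0_TZ:
curl -k -u 'admin:VMware1!' https://nsxmgr-01a.corp.local/api/2.0/vdn/scopes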

NSX Manager Dashboard


With NSX 6.2.3 we have added an NSX Dashboard to improve troubleshooting by
providing visibility into the overall health of NSX components in one central view.
** Module 1 covers the process of how you would typically deploy NSX and
you are not expected to make any configuration changes as the environment
has already been pre-configured. Some screenshots may not reflect the actual
lab configuration and have been added for completeness. **

Navigate to Dashboard tab


Ensure you are logged into the vSphere Web Client and within Networking & Security.
1. Click on the Dashboard tab.

Dashboard View
Viewing the Dashboard we get an instant view of the overall health of the NSX
environment. The Dashboard alerts us to any potential issues with the NSX Manager,
Controllers, Hosts, Firewall and Logical Switches.

NSX Manager health and processes


1. Clicking on the green NSX Manager square gives us a view of the NSX Manager
disk usage and the running state of the services currently in use.

Controller health and connectivity


1. Clicking on one of the green Controller Node squares shows us the overall health
of that controller and whether it has connectivity to other controllers within the
environment.

Module 1 Conclusion
In this module we showed the simplicity with which NSX can be installed and configured to
start providing layer two through seven services in software.
We covered the installation and configuration of the NSX Manager appliance which
included deployment, integrating with vCenter and configuring logging and backups. We
then covered the deployment of NSX Controllers as the control plane and installation of
the VMware Infrastructure Bundles (vibs) which are kernel modules pushed down to the
hypervisor. Finally we showed the automated deployment of VXLAN Tunnel Endpoints
(VTEP's), creation of a VXLAN Network Identifier pool (VNI's) and the creation of a
Transport Zone.

You've finished Module 1


Congratulations on completing Module 1.
If you are looking for additional information on deploying NSX then review the NSX 6.2
Documentation Center via the URL below:
Go to http://tinyurl.com/hkexfcl
Proceed to any module below which interests you the most:
Lab Module List:
Module 1 - Installation Walk Through (30 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
Module 2 - Logical Switching (30 minutes) - Basic - This module will walk you
through the basics of creating logical switches and attaching virtual machines to
logical switches.
Module 3 - Logical Routing (60 minutes) - Basic - This module will help us
understand some of the routing capabilities supported in the NSX platform and
how to utilize these capabilities while deploying a three tier application.
Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
demonstrate the capabilities of the Edge Services Gateway and how it can
provide common services such as DHCP, VPN, NAT, Dynamic Routing and Load
Balancing.
Module 5 - Physical to Virtual Bridging (30 minutes) - Basic - This module will
guide us through the configuration of a L2 Bridging instance between a traditional
VLAN and a NSX Logical Switch. There will also be an offline demonstration of
NSX integration with Arista hardware VXLAN-capable switches.
Module 6 - Distributed Firewall (45 minutes) - Basic - This module will cover
the Distributed Firewall and creating firewall rules between a 3-tier application.

Lab Captains:
Module 1 - Michael Armstrong, Senior Systems Engineer, United Kingdom
Module 2 - Mostafa Magdy, Senior Systems Engineer, Canada
Module 3 - Aaron Ramirez, Staff Systems Engineer, USA
Module 4 - Chris Cousins, Systems Engineer, USA
Module 5 - Aaron Ramirez, Staff Systems Engineer, USA
Module 6 - Brian Wilson, Staff Systems Engineer, USA

How to End Lab


To end your lab click on the END button.

Module 2 - Logical Switching (30 minutes)

Logical Switching - Module Overview


In this lab, you will first explore the key components of VMware NSX. The following key
aspects are also covered in this module:
1) The addition of the NSX controller cluster has eliminated the requirement for
multicast protocol support on the physical fabric. The controller cluster provides, among
other functions, VTEP, IP and MAC address resolution.
2) You will create a Logical Switch and then attach two VMs to the Logical Switch that
you created.
3) Lastly, we will review the scalability and high availability of the NSX platform.

Logical Switching
In this section we will be:
1. Confirming the configuration readiness of the hosts.
2. Confirming the logical network preparation.
3. Creating a new logical switch.
4. Attaching the logical switch to the NSX Edge Gateway.
5. Adding VMs to the logical switch.
6. Testing connectivity between VMs.

Launch Google Chrome


Open a browser by double clicking on the Google Chrome icon on the desktop.

Login to the vSphere Web Client


If you are not already logged into the vSphere Web Client:
(The home page should be the vSphere Web Client. If not, Click on the vSphere Web
Client Taskbar icon for Google Chrome.)
1. Type in administrator@vsphere.local into User name
2. Type in VMware1! into Password
3. Click OK

Navigate to the Networking & Security Section in the Web Client

1. Click on the Networking & Security tab.

View the deployed components


1. Click Installation.
2. Click Host Preparation.
You will see that the data plane components, also called network virtualization
components, are installed on the hosts in our clusters. These components include the
following:
Hypervisor level kernel modules for Port Security, VXLAN, Distributed Firewall and
Distributed Routing. Firewall and VXLAN functions are configured and enabled on each
cluster after the installation of the network virtualization components. The port security
module assists the VXLAN function while the distributed routing module is enabled once
the NSX edge logical router control VM is configured.

The topology after the host is prepared with data path components

View the VTEP configuration


1. Click Logical Network Preparation tab.
2. Click VXLAN Transport tab.
3. Click the twistie to expand the clusters.
VXLAN configuration can be broken down into three important steps:
Configure Virtual Tunnel Endpoint (VTEP) on each host.
Configure Segment ID range to create a pool of logical networks. (In some
configurations, this step may require Multicast group address configuration.)
However, in this lab we are utilizing Unicast mode and we don't need to specify a
multicast range.
Define the span of the logical network by configuring the transport zone.
As shown in the diagram, the hosts in the compute clusters are each configured with a
VTEP (VXLAN Tunnel End Point) from the same subnet. The environment uses the
192.168.130.0/24 subnet for the VTEP pool.

The topology after the VTEPs are configured across the Clusters
One of the key challenges customers have had with VXLAN deployment in the past is
that multicast protocol support is required from physical network devices. This challenge
is addressed in the NSX Platform by providing a controller based VXLAN implementation
and removing any need to configure multicast in the physical network. This mode
(Unicast) is the default mode and customers don't have to configure any multicast
addresses while defining the logical network pool.
If Multicast replication mode is chosen for a given Logical Switch, NSX relies on the
native L2/L3 multicast capability of the physical network to ensure VXLAN encapsulated
multi-destination traffic is sent to all VTEPs. In this mode, a multicast IP address must be
associated to each defined VXLAN L2 segment (i.e., Logical Switch). L2 multicast
capability is used to replicate traffic to all VTEPs in the local segment (i.e., VTEP IP

addresses that are part of the same IP subnet). Additionally, IGMP snooping should be
configured on the physical switches to optimize the delivery of L2 multicast traffic.
Hybrid mode offers operational simplicity similar to unicast mode (IP multicast routing
configuration is not required in the physical network) while leveraging the L2 multicast
capability of physical switches.
So the three modes of control plane configuration are:
Unicast : The control plane is handled by an NSX controller. All unicast traffic
leverages headend replication. No multicast IP addresses or special network
configuration is required.
Multicast: Multicast IP addresses on the physical network are used for the
control plane. This mode is recommended only when you are upgrading from
older VXLAN deployments. Requires PIM/IGMP on physical network.
Hybrid : The optimized unicast mode. Offloads local traffic replication to physical
network (L2 multicast). This requires IGMP snooping on the first-hop switch, but
does not require PIM. First-hop switch handles traffic replication for the subnet.
Hybrid mode is recommended for large-scale NSX deployments.

Segment ID and Multicast Group Address Configuration


1. Click on Segment ID. Note that the Multicast addresses section above is blank.
As mentioned above, we are using the default unicast mode with a controller-based VXLAN implementation.

Defining Transport Zone settings


1. Click Transport Zones.
2. Double-click on RegionA0_TZ.
A transport zone defines the span of a logical switch. Transport Zones dictate which
clusters can participate in the use of a particular logical network. As you add new
clusters in your datacenter, you can increase the transport zone and thus increase the
span of the logical networks. Once you have the logical switch spanning across all
compute clusters, you remove all the mobility and placement barriers you had before
because of limited VLAN boundaries.

Confirm Clusters as members of Local Transport Zone


Confirm all 3 clusters are present in the Transport Zone.
1. Click on the Manage tab to show the clusters that are part of this Transport
Zone.

The topology after the Transport Zone is defined


After looking at the different NSX components and VXLAN related configuration, we will
now go through the creation of a logical network also known as logical switch.

Go back to Networking & Security Menu


1. Click the History back button to return to the last window, in your case the
Networking & Security menu.
If by chance you clicked on something else after viewing the Transport Zone, return to the
Networking & Security Section of the Web Client via the Home menu as used in previous
steps.

Create a new Logical Switch


1. Click Logical Switches on the left hand side.

2. Click on the "Green plus" icon to create a new Logical Switch.
3. Name the Logical Switch: Prod_Logical_Switch.
4. Make sure RegionA0_TZ is selected as the Transport Zone. Note: Unicast mode
should automatically be selected.
5. Unicast should be selected automatically.
6. Make sure you leave the Enable IP Discovery box checked.
IP Discovery enables ARP suppression. Selecting Enable IP Discovery activates ARP
(Address Resolution Protocol) suppression. ARP is used to determine the destination MAC
(Media Access Control) address for an IP address by means of sending a broadcast on a
layer 2 segment. If an ESXi host with the NSX Virtual Switch receives an ARP request from
a VM (Virtual Machine), the host sends the request to the NSX Controller, which holds an
ARP table. If the NSX Controller instance already has the information in its ARP table, it
is returned to the host, which replies to the virtual machine.
7. Click OK.
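
As a side note, the same logical switch could be created against the transport zone through the NSX REST API. The sketch below is illustrative only and should not be run in the lab: vdnscope-1 stands in for the RegionA0_TZ scope ID (retrieve the real one with GET /api/2.0/vdn/scopes), and the virtualWireCreateSpec fields reflect my understanding of the NSX 6.2 API, so confirm them in the API guide; hostname and credentials are assumptions.

# Hedged example only - create a unicast-mode logical switch in an existing transport zone.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' -X POST \
  https://nsxmgr-01a.corp.local/api/2.0/vdn/scopes/vdnscope-1/virtualwires \
  -d '<virtualWireCreateSpec>
        <name>Prod_Logical_Switch</name>
        <tenantId>prod</tenantId>
        <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
      </virtualWireCreateSpec>'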

Attach the new Logical Switch to the NSX Edge services gateway for external access

1. Highlight the newly created logical switch.
2. Right Click on the Prod_Logical_Switch and select Connect Edge.

Connect the Logical Switch to the NSX Edge


Routing is described in more detail in the next module, however, in order to gain
connectivity from our Main Console VM and/or other VMs in our lab, to the VMs on our
new logical switch, we need to connect them to the router. As mentioned in the
components section, NSX Edge can be installed in two different forms: Distributed-Router and Perimeter-Gateway.
The Edge Services gateway which is named a "Perimeter-Gateway" provides
network services such as DHCP, NAT, Load Balancer, Firewall and VPN along with
dynamic routing capability.
The "Distributed-Router" supports distributed routing, and dynamic routing.
In this example, you are going to connect the logical switch to the NSX Edge services
gateway (Perimeter-Gateway).
1. Click the radio button next to Perimeter-Gateway.
2. Click Next.

Attach logical switch to NSX Edge


1. Click the radio button next to vnic7.

2. Click Next.

Name the Interface and configure the IP address for the interface

1. Name the Interface: Prod_Interface.
2. Select Connected.
3. Click the Plus sign to Configure subnets (Leave the other settings as they are).

Assign an IP to the Interface


1. Enter the Primary IP Address 172.16.40.1 (Leave the Secondary IP Address
blank)
2. Enter 24 for the Subnet Prefix length.
3. Verify your settings are correct and Click Next.

Complete the interface editing process


1. Click Finish (You will see your new logical switch show up in the logical switch
list)

The topology after Prod_Logical_Switch is connected to the NSX Edge services gateway
After configuring the logical switch and providing access to the external network it is
time to connect the web application virtual machines to this network.

Go to Hosts & Clusters


1. Click the Home button.
2. Click Hosts and Clusters.

Connect VMs to vDS


In order to be able to add the VMs to the logical switch that we created, we need to
make sure that each VM's network adapter is enabled and connects to the correct vDS.
1. Click on the Hosts & Clusters button.
2. Expand the twisty under RegionA01-COMP01.
3. Select Web-03a.
4. Select Manage.
5. Select Settings.
6. Select Edit.

Connect and enable network interface


1. Next to the Network Adapter, click on the drop down menu of interfaces.
2. Select VM-RegionA01-vDS-COMP (RegionA01-vDS-COMP).
3. Check the Connected box next to it.
4. Click OK.

Repeat the same steps for Web-04a which you can find under RegionA01-COMP2
cluster.

Attach web-03a and web-04a to the newly created Prod_Logical_Switch

1. Please click on the home button.

Go back to the Network & Security


1. Click Networking & Security.

Select Logical Switches


1. Select the Logical Switches tab.

Add the VMs


1. Click to highlight the new Logical Switch that was created.
2. Right Click and select the Add VM menu item.

Add Virtual Machines to attach to the new Logical Switch


1. Enter a filter to locate those VMs whose names start with "web".
2. Highlight the web-03a and web-04a VMs.
3. Click the right arrow to select the VMs to add to the Logical Switch.
4. Click Next.

Add VM's vNIC to Logical Switch


1. Select the vNICs for the two VMs.
2. Click Next.

Complete Add VMs to Logical Switch


1. Click Finish.

Testing connectivity between Web-03a & Web-04a

Hosts and clusters view


1. Click the Home Button.
2. Select Hosts and Clusters from the drop down menu.

Expand the Clusters


Expand the arrows to see the VMs you just added to the Logical Switch. Notice
that the two added VMs are on different Compute Clusters.

Open Putty
1. Click Start.
2. Click the Putty Application icon from the Start Menu.
You are connecting from the MainConsole which is in 192.168.110.0/24 subnet. The
traffic will go through the NSX Edge and then to the Web Interface.

Open SSH session to web-03a


1. Select web-03a.corp.local.
2. Click Open.
**Note - if web-03a is not showing as an option for some reason, you can also try
putting the IP address 172.16.40.11 in the Host Name box.


Login into the VM


If prompted, Click Yes to accept the server's host key.
If not automatically logged in, log in as user root with the password VMware1!
Note: If you encounter difficulties connecting to web-03a, please review your previous
steps and verify they have been completed correctly.


Ping web server web-04a to show layer 2 connectivity
Remember to use the SEND TEXT option to send this command to the console.
(See Lab Guidance)
Type ping -c 2 web-04a to only send 2 pings instead of a continuous ping. NOTE:
web-04a has an IP of 172.16.40.12, you can ping by IP instead of name if needed.
ping -c 2 web-04a
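If the two web servers are attached to Prod_Logical_Switch correctly, the reply will look similar to the illustrative sketch below (host name resolution, round-trip times, and packet counts will vary slightly in your lab):

PING web-04a.corp.local (172.16.40.12) 56(84) bytes of data.
64 bytes from 172.16.40.12: icmp_seq=1 ttl=64 time=1.21 ms
64 bytes from 172.16.40.12: icmp_seq=2 ttl=64 time=0.84 ms
--- web-04a.corp.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss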

***Note: you might see DUP! packets. This is due to the nature of VMware's nested lab environment and will not happen in a production environment.
***Do not close your PuTTY session. Minimize the window for later use.


Scalability/Availability
In this section, you will take a look at the controller scalability and availability. The
Controller cluster in the NSX platform is the control plane component that is responsible
for managing the switching and routing modules in the hypervisors. The controller cluster
consists of controller nodes that manage specific logical switches. The use of a
controller cluster in managing VXLAN based logical switches eliminates the need for
multicast support from the physical network infrastructure.
For resiliency and performance, production deployments must deploy a controller
cluster with multiple nodes. The NSX controller cluster represents a scale-out distributed
system, where each controller Node is assigned a set of roles that define the type of
tasks the node can implement. Controller nodes are deployed in odd numbers. The
current best practice (and the only supported configuration) is for the cluster to have
three nodes in an active-active-active configuration for load sharing and redundancy.
In order to increase the scalability characteristics of the NSX architecture, a slicing
mechanism is utilized to ensure that all the controller nodes can be active at any given
time.
Should one or more controllers fail, data plane (VM) traffic will not be affected; traffic will continue to flow because the logical network information has already been pushed down to the logical switches (the data plane). What you cannot do is make adds, moves, or changes without the control plane (controller cluster) intact.
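Although this lab verifies controller health through the vSphere Web Client in the next steps, the same information can be checked from a controller node's own console. As a minimal sketch (assuming console or SSH access to a controller node, which is not part of this lab's steps), the following command can be run on each controller:

show control-cluster status

Each healthy node should report that it has joined the cluster and is connected to the cluster majority; with all three nodes up, the cluster can tolerate the loss of any single node.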


NSX Controller Scalability/Availability


1. Hover over the Home Icon.
2. Click on Networking & Security.


Verify Existing Controller Setup


1. Click Installation.
2. Click Management.
Examine the NSX Controller nodes; you can see that there are three controllers deployed. NSX Controllers are always deployed in odd numbers for high availability and scalability.


View NSX Controller VMs


To see the NSX Controllers in the virtual environment
1. Hover over the Home Icon.
2. Click on VMs and Templates.


You will see the 3 NSX Controllers


1. Expand the "RegionA01" container.
2. Highlight one of the NSX Controllers.
3. Select the Summary tab.
Notice the ESXi host that this controller is running on. The other controllers may be on a different ESXi host in this lab environment. In a production environment, each controller would reside on a different host in the cluster, with DRS anti-affinity rules set to avoid multiple controller failures due to a single host outage.


Module 2 Conclusion
In this module we demonstrated the following key benefits of the NSX platform: the speed at which you can provision logical switches and connect them to virtual machines and external networks, and the platform's scalability, demonstrated by the ability to scale both the transport zones and the controller nodes.

You've finished Module 2


Congratulations on completing Module 2.
If you are looking for additional information on Logical Switching visit the URL below:
Go to http://tinyurl.com/h8rx5vr
Proceed to any module below which interests you most:
Lab Module List:
Module 1 - Installation Walk Through (30 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
Module 2 - Logical Switching (30 minutes) - Basic - This module will walk you
through the basics of creating logical switches and attaching virtual machines to
logical switches.
Module 3 - Logical Routing (60 minutes) - Basic - This module will help us
understand some of the routing capabilities supported in the NSX platform and
how to utilize these capabilities while deploying a three tier application.
Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
demonstrate the capabilities of the Edge Services Gateway and how it can
provide common services such as DHCP, VPN, NAT, Dynamic Routing and Load
Balancing.
Module 5 - Physical to Virtual Bridging (30 minutes) - Basic - This module will
guide us through the configuration of a L2 Bridging instance between a traditional
VLAN and a NSX Logical Switch. There will also be an offline demonstration of
NSX integration with Arista hardware VXLAN-capable switches.
Module 6 - Distributed Firewall (45 minutes) - Basic - This module will cover
the Distributed Firewall and creating firewall rules between a 3-tier application.
Lab Captains:
Module 1 - Michael Armstrong, Senior Systems Engineer, United Kingdom
Module 2 - Mostafa Magdy, Senior Systems Engineer, Canada
Module 3 - Aaron Ramirez, Staff Systems Engineer, USA
Module 4 - Chris Cousins, Systems Engineer, USA
Module 5 - Aaron Ramirez, Staff Systems Engineer, USA
Module 6 - Brian Wilson, Staff Systems Engineer, USA

How to End Lab


To end your lab click on the END button.


Module 3 - Logical Routing (60 minutes)


Routing Overview
Lab Module Overview
In the previous module you saw that users can create isolated logical switches/networks
with a few clicks. To provide communication across these isolated logical layer 2 networks,
routing support is essential. In the NSX platform the distributed logical router allows you
to route traffic between logical switches. One of the key differentiating features of this
logical router is that the routing capability is distributed in the hypervisor. By
incorporating this logical routing component users can reproduce complex routing
topologies in the logical space. For example, in a three tier application connected to
three logical switches, the routing between the tiers is handled by this distributed
logical router.
In this module you will demonstrate the following
1) How traffic flows when the routing is handled by an external physical router or NSX
edge services gateway.
2) Then we will go through the configuration of the Logical Interfaces (LIFs) on the
Logical router and enable routing between the App and DB tiers of the Application
3) Later we will configure dynamic routing protocols across the distributed logical router
and the NSX Edge services gateway. We will show how internal route advertisements to
the external router are controlled.
4) Finally we will see how various routing protocols, such as ECMP(Equal Cost Multipath
Routing), can be used to scale and protect the Edge service gateway.
This module will help us understand some of the routing capabilities supported in the
NSX platform and how to utilize these capabilities while deploying a three tier
application.

Special Instructions for CLI Commands


Many of the modules will have you enter Command Line Interface (CLI) commands.
There are two ways to send CLI commands to the lab.
First to send a CLI command to the lab console:
1. Highlight the CLI command in the manual and use Control+c to copy to
clipboard.
2. Click on the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard to the window.
4. Click the SEND button.


Second, a text file (README.txt) has been placed on the desktop of the environment
allowing you to easily copy and paste complex commands or passwords in the
associated utilities (CMD, Putty, console, etc). Certain characters are often not present
on keyboards throughout the world. This text file is also included for keyboard layouts
which do not provide those characters.
The text file is README.txt and is found on the desktop.


Dynamic and Distributed Routing


You will first take a look at the configuration of distributed routing and see the benefits
of performing routing at the kernel level.

A look at the Current Topology and Packet Flow


In the above picture, notice that the Application VM and the Database VM both reside on
the same physical host, which is the scenario in the lab. Without distributed routing, for
these two VM's to communicate, we can see the traffic flow noted by the red arrow
steps above. First we see the traffic leave the Application VM and because the
Database VM is not on the same network subnet, the physical host will send that traffic
to a layer 3 device. In the environment, this is the NSX (perimeter) Edge which resides
on the Management Cluster. The NSX Edge then sends the traffic back through to the
host where it finally reaches the Database VM.
At the end of the lab, we will again visit a similar traffic flow diagram to see how we
have changed this behavior after configuring distributed routing.


Access vSphere Web Client


Bring up the vSphere Web Client via the icon on the desktop labeled, Google
Chrome.

Login to the vSphere Web Client


If you are not already logged into the vSphere Web Client:
(The home page should be the vSphere Web Client. If not, Click on the vSphere Web
Client Taskbar icon for Google Chrome.)
1. Type in administrator@vsphere.local into User name
2. Type in VMware1! into Password
3. Click OK

Confirm 3 Tier Application Functionality


1. Open a new browser tab
2. Click favorite named Customer DB App

Web Application Returning Database Information


Before you begin configuring Distributed Routing let us verify that the three tiered Web
Application is working correctly. The three tiers of the application (web, app and

database) are on different logical switches and NSX Edge providing routing between
tiers.
The web server will return a web page with customer information stored in the
database.


Removal of the App and DB Interfaces from the Perimeter Edge
As you saw in the earlier topology the three logical switches or three tiers of the
application are terminated on the perimeter edge. The perimeter edge provides the
routing between the three tiers. We are going to change that topology by first removing
the App and DB interfaces from the perimeter edge. After deleting the interfaces, we will
move those onto the distributed router. To save the time of deploying a component, the Distributed Router has already been created for you.
Return to the vSphere Web Client tab:
1. Click on the Networking & Security button


Select NSX Edge


1. Click on NSX Edges in the left navigation pane
2. Double click"edge-3 Perimeter-Gateway-01" to open the Perimeter-Gateway
configuration


Select Interfaces from the Settings Tab to Display Current Interfaces
1. Click on Manage Tab
2. Click on Settings
3. Click on Interfaces under the Settings navigation tab
You will see the currently configured interfaces and their properties. Information
includes the vNIC number, interface name, whether the interface is configured as
internal or an uplink and what the current status is, active or disabled.


Delete the App Interface


1. Highlight App_Tier interface. The Actions bar will illuminate giving specific
options for the selected interface
2. Click the red X to delete the selected interface from the perimeter edge. A
warning box will pop-up asking us to confirm we want to delete the interface
3. Click Ok to confirm the deletion


Delete the DB Interface


1. Highlight DB_Tier interface. The Actions bar will illuminate giving specific
options for the selected interface
2. Click the red X to delete the selected interface from the perimeter edge. A
warning box will pop-up asking us to confirm we want to delete the interface
3. Click Ok to confirm the deletion


The Topology After the App and DB Interfaces are Removed from the Perimeter Edge


Navigate Back to the NSX Home Page


Now that you have removed the App and DB interfaces from the perimeter edge, you
need to navigate back to the edge device screen in order to access the distributed
edge.
Click the Networking & Security back button at the top left which takes us
back to the main Edge Services screen.

Add App and DB Interfaces to the Distributed Router


We will now begin configuring Distributed Routing by adding the App and DB interface to
the "Distributed Router".
1. Double click edge-6 to configure the Distributed Router

Display the Interfaces on the Distributed Router


1. Click on Manage
2. Click on Settings


3. Click on Interfaces to display all the interfaces currently configured on the


Distributed Router


Add Interfaces to the Distributed Router


1. Click on the Green Plus sign to add a new interface
2. Name the interface App_Tier
3. Click Select on the Connected To section


Specify the Network


1. Select the App_Tier_Logical_Switch radio button which will be the network this
interface will communicate on
2. Click OK


Add Subnets
1. Click the Green Plus sign for Configure Subnets.
2. Click on the Primary IP Address box and enter 172.16.20.1 as the IP address.
3. Enter 24 as the Subnet Prefix Length.
4. Click OK to complete adding the subnet.


Add the DB Interface

Complete the same steps as the previous two steps for the DB_Tier Interface:
Name DB_Tier
Connect to DB_Tier_Logical_Switch
IP address 172.16.30.1 and a subnet prefix length of 24

Once the system completes adding and configuring the DB_Tier interface, verify that both the App_Tier and DB_Tier interfaces match the picture above.


The New Topology after Moving the App and DB Interfaces to the Distributed Router
After these interfaces are configured on the Distributed Router those interface
configurations are automatically pushed to each host in the environment. From here on
the Host's Distributed Routing (DR) Kernel loadable module handles the routing between
the App and DB interfaces. So if two VMs connected to two different subnets are running on the same host and want to communicate, the traffic will no longer take the suboptimal path shown in the earlier traffic flow diagram.
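If you would like to confirm that these routes really live in the hypervisor kernel, the DLR instance can also be inspected from an ESXi host's command line. The following is only a sketch based on the NSX for vSphere host utilities and is not part of this lab's steps; the instance name is a placeholder, and option names and ordering can vary between NSX versions (consult the NSX command-line reference for your release):

net-vdr --instance -l
net-vdr --route -l <dlr-instance-name>

The first command lists the DLR instances that have been pushed to the host; the second prints the route table of the selected instance, which should now include the 172.16.20.0/24 (App) and 172.16.30.0/24 (DB) networks as connected LIFs.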


Return to Browser Tab with 3-Tier Web App


After making the changes, you will test that access to the 3-Tier Application fails. The reason it fails is that while we set up routing to be handled by the Distributed Router, there is currently no route between it and where the web servers are located.
Click on tab you previously had open named HOL - Multi-Tier App
Note : If you closed that tab in the previous steps, open a new browser tab and click
the Customer DB App favorite

Verify that the 3 Tiered Application Stops Working


1. Click Refresh
The application will take a few seconds to actually time out, you may need to select the
red "x" to stop the browser. If you do see customer data, it may be cached from before
and you may need to close and re-open the browser to correct it.
Close the tab created to test connectivity to the web server. Next we will configure
routing to restore the service.
Note: If you do have to re-open the browser, after verifying the 3-tier application is not working, click on the vSphere Web Client bookmark in the browser and log in again with the user name administrator@vsphere.local and the password VMware1!. Then click on Networking & Security, then NSX Edges, and finally double-click the Distributed Router.


Configure Dynamic Routing on the Distributed Router


Return to the vSphere Web Client Tab.
1. Click the Routing tab
2. Click Global Configuration
3. Click the Edit button next to Dynamic Routing Configuration


Edit Dynamic Routing Configuration


1. Select the default router id which is the IP address of the Uplink interface, in this
case Transit_Network_01 - 192.168.5.2
2. Click OK

Note: The router ID is important in the operation of OSPF, as it identifies the router within an autonomous system. It is a 32-bit identifier written in the form of an IP address. In our case, we are using a router ID that is the same as the IP address of the uplink interface on the edge device, which is acceptable although not required. The screen will return to the main "Global Configuration" screen and again the "Publish Changes" green dialog box appears.

Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.


Configure OSPF Specific Parameters


We will be using OSPF as our dynamic routing protocol.
1. Select OSPF in the navigation tree under Routing to open the main OSPF
configuration page
2. Click Edit to the right of OSPF Configuration to open the OSPF Configuration
dialog box


Enable OSPF
1. Check the Enable OSPF box.
2. Enter 192.168.5.3 in the Protocol Address box.
3. Enter 192.168.5.2 in the Forwarding Address box.
4. Verify that the Enable Graceful Restart box is checked.
5. Click OK.

Note: For the Distributed Router the "Protocol Address" field is required to send the
control traffic to the Distributed Router Control Virtual Machine. The Forwarding Address is
where all the normal data path traffic will be sent. The screen will return to the main
"OSPF" configuration window. The green "Publish Changes" dialog box will be displayed.
Note: The separation of control plane and data plane traffic in NSX creates the
possibility of maintaining the routing instance's data forwarding capability while the
control function is restarted or reloaded. This function is referred to as "Graceful
Restart" or "Non-stop Forwarding".
DO NOT PUBLISH CHANGES YET! Rather than publishing changes at every step, we'll continue through the configuration changes and publish them all at once.

Configure Area Definition


1. Click the Green Plus sign which will open the New Area Definition dialog box
2. Enter 10 into the Area ID box. You may leave the other dialog boxes at their
default settings
3. Click OK


Note: The Area ID for OSPF is very important. There are several types of
OSPF areas. Be sure to check the correct area the edge devices should be in
to work properly with the rest of the OSPF configuration within the network.

Area to Interface Mapping


1. Click the Green Plus sign under the "Area to Interface Mapping" area to open
the "New Area to Interface Mapping" dialog box
2. Select Transit_Network_01 for Interface
3. Select 10 for the Area
4. Click OK


Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.

Confirm OSPF Routing is Enabled on the Distributed Router

We can now confirm that we have enabled and configured OSPF on the distributed-edge.
Confirm all information displayed is correct.

Confirm Route Redistribution


Click on Route Redistribution to open the main configuration page for route
redistribution.


Verify Route Redistribution


Verify that there is a check box next to OSPF. This is showing that route
redistribution for OSPF is enabled.

Add BGP to OSPF Route Redistribution Table


1. Click OSPF from the Route Redistribution table.
2. Click the Pencil icon to edit the settings for OSPF.
3. Check the box for BGP in the Allow learning from list.
4. Click OK.


Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.

Configure OSPF Routing on the Perimeter Edge


Now we must configure the dynamic routing on the perimeter-edge device to restore
connectivity to our test 3 Tier Application.
Click the Networking & Security back button at the upper left to return to the main Edge Services page.


Select the Perimeter Edge


From the main NSX Edges page, our configured edge devices are displayed.
Double-click the Edge-3 (Perimeter-Gateway) to again open the main
configuration page for that device.


Global Configuration for the Perimeter Edge


1. Click the Manage navigation tab
2. Select the Routing navigational button to get to the device routing configuration
page
3. Click on OSPF
4. Click Edit to the right of OSPF Configuration to open the OSPF Configuration
dialog box
You will notice that this Edge device has already been configured for Dynamic Routing
with BGP. This routing configuration is set so that this Edge device can communicate
and distribute routes to the router running the overall lab. We will now continue on by
connecting this Edge device to the Logical Distributed Router. All global router and BGP
settings are already completed for the Edge device.

Enable OSPF
1. Check the Enable OSPF box
2. Verify that the Enable Graceful Restart box is checked


3. Then click OK

Configure Area Definition


1. Click the Green Plus sign which will open the New Area Definition dialog box
2. Enter 10 into the Area ID box. You may leave the other dialog boxes at their
default settings
3. Click OK

Note: The Area ID for OSPF is very important. There are several types of OSPF
areas. Be sure to check the correct area the edge devices should be in to work
properly with the rest of the OSPF configuration within the network.

Add Transit Interface to Area to Interface Mapping


We now just need to direct OSPF to communicate over the interface that connects to the Distributed Router.


1. Click the Green Plus Sign by Area to Interface Mapping.
2. Select Transit_Network_01 under vNIC.
3. Select 10 under Area.
4. Click OK.

Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.


Confirm OSPF Routing is Enabled on the Perimeter Edge


We can now confirm that we have enabled and configured OSPF on the perimeter-edge.
Confirm all information displayed is correct.

Configure Route Redistribution


1. Click on Route Redistribution to open the main configuration page for route
redistribution.
2. Click Edit to the right of Route Redistribution Status to open the Change
redistribution settings dialog box


Change Redistribution Settings


1. Check the OSPF box
2. Verify that the BGP box is checked
3. Click OK
Note: BGP is the routing protocol used between the Perimeter-Gateway-01 and the vPod
Router.

Edit BGP Redistribution Criteria


1. Highlight BGP in the Route Redistribution table, under Learner.
2. Click the pencil icon to edit the selected Learner from the perimeter edge.
3. Check the OSPF box.
4. Click OK.

Configure OSPF Route Redistribution


1. Click the Green Plus Sign by Route Redistribution table
2. Select OSPF as the Learner Protocol
3. Check the BGP box


4. Check the Connected box


5. Click OK

Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.


Review New Topology


Taking a look at how the topology sits now, you can see how route peering is occurring
between the Distributed Router and the NSX Perimeter Edge device. Any network you
create under the Distributed Router will now be distributed up to the Edge, where at that
point you can control how it is routed into your physical network.
The next section will cover this in more detail.


Verify Communication to the 3-Tier App


Now let's verify the routing is functional. The routing information from the Distributed
Router to the Perimeter-Gateway is now being exchanged, which has in turn restored
connectivity to the Web App. To verify this, we will once again test the Web App.
1. Click on the tab you had previously opened for the Web Application, it may say
"503 Service Temp..." in the tab from the previously failed test.
2. Refresh your browser to verify the 3-Tier app works again
Note: This might take a minute for route propagation, this time is due to the nested
environment.

Dynamic and Distributed Routing Completed


This completes the section on configuring Dynamic and Distributed routing. In the next
section we will review centralized routing with the Perimeter Edge.


Centralized Routing
In this section, we will look at various elements to see how the routing is done
northbound from the edge. This includes how OSPF dynamic routing is controlled,
updated, and propagated throughout the system. We will verify the routing on the
perimeter edge appliance through the virtual routing appliance that runs and routes the
entire lab.
Special Note: On the desktop you will find a file named README.txt. It contains the CLI commands needed in the lab exercises. If you can't type them, you can copy and paste them into the PuTTY sessions. If you see a number in curly brackets, for example {1}, look for that CLI command for this module in the text file.


Current Lab Topology


This diagram is the current lab topology, including the northbound link to the vPod
Router. You can see that OSPF is redistributing routes from the NSX Edge router down to
the Distributed Logical Router.

Look at OSPF Routing in Perimeter Gateway


First we will confirm the Web App is functional, then we will log into the NSX Perimeter
Gateway to view OSPF neighbors and see existing route distribution. This will show how
the Perimeter Gateway is learning routes from not only the Distributed Router, but the
vPod router that is running the entire lab.

Confirm 3 Tier Application Functionality


1. Open a new browser tab


2. Click favorite named Customer DB App

Web Application Returning Database Information


Before you begin configuring Distributed Routing let us verify that the three tiered Web
Application is working correctly. The three tiers of the application (web, app and
database) are on different logical switches and NSX Edge providing routing between
tiers.
The web server will return a web page with customer information stored in the
database.


Go to vSphere Web Client


If you are not already logged in, go to vSphere Web Client.

Navigate to Perimeter-Gateway VM
Select VMs and Templates

Launch Remote Console


1. Expand the RegionA01 Folder


2. Select Perimeter-Gateway-01-0
3. Select Summary Tab
4. Click Launch Remote Console


Access Remote Console


When the VMRC window first opens, it will appear black. Click inside the window and
press enter a couple of times to make the console appear from the screensaver.
***NOTE*** To release your cursor from the window, press Ctrl+Alt keys

Login to Perimeter Gateway


Log into the perimeter gateway with the following credentials. Note that all Edge devices use 12-character complex passwords.
Username: admin
Password: VMware1!VMware1!


Special Instructions for CLI Commands


Many of the modules will have you enter Command Line Interface (CLI)
commands. There are two ways to send CLI commands to the lab.
First to send a CLI command to the lab console:
1. Highlight the CLI command in the manual and use Control+c to copy to
clipboard.
2. Click on the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard to the window.
4. Click the SEND button.
Second, a text file (README.txt) has been placed on the desktop of the
environment allowing you to easily copy and paste complex commands or
passwords in the associated utilities (CMD, Putty, console, etc). Certain
characters are often not present on keyboards throughout the world. This
text file is also included for keyboard layouts which do not provide those
characters.
The text file is README.txt and is found on the desktop.

View BGP Neighbors


The first thing we will do is look at the BGP neighbors to the Perimeter Edge, which is in
the middle of the lab routing layer.
NOTE - Tab completion works on Edge devices in NSX.
Enter show ip bgp neighbor.


show ip bgp neighbor

Reviewing Displayed BGP Neighbor Information


Let's now review the content displayed and what it all means.
1. BGP neighbor is 192.168.100.1 - This is the router ID of the vPod Router inside
the NSX environment
2. Remote AS 65002 - This shows the autonomous system number of the vPod
Router's external network
3. BGP state = Established, up - This means the BGP neighbor adjacency is
complete and the BGP routers will send update packets to exchange routing
information.
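Putting those fields together, the relevant excerpt of the neighbor output looks roughly like the following (timers and counters are omitted from this sketch):

BGP neighbor is 192.168.100.1,  remote AS 65002
  BGP state = Established, up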


Review Routes on Perimeter Edge and their Origin


Type show ip route
Press Enter
show ip route

Review Route Information


Let's review the content of the routes displayed.
1. The first line shows our default route, which originates from the vPod Router (192.168.100.1); the B at the start of the line shows it has been learned via BGP.
2. The 172.16.10.0/24 line is the Web-Tier logical switch and its interface. Since it is directly connected to the Edge, there is a C at the beginning of the line noting as such.
3. The lines noted with a 3 are the other two portions of our Web App: the network segments for the App and DB tiers. They have an O at the start of the line to denote they were learned via OSPF from the Distributed Router (192.168.5.2).
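As a rough, simplified sketch of that route table (addresses taken from this lab's topology; the Edge console also prints a code legend, administrative distances, and metrics that are omitted here):

B    0.0.0.0/0        via 192.168.100.1    (default route learned via BGP from the vPod Router)
C    172.16.10.0/24                        (directly connected Web_Tier interface)
O    172.16.20.0/24   via 192.168.5.2      (App tier, learned via OSPF from the Distributed Router)
O    172.16.30.0/24   via 192.168.5.2      (DB tier, learned via OSPF from the Distributed Router)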


Controlling BGP Route Distribution


There could be a situation where you would only want BGP routes to be distributed inside the virtual environment, but not out into the physical world. We are able to control that route distribution easily from the Edge interface.

Navigate to NSX in vSphere Web Client


**NOTE** You need to press Ctrl+Alt to leave the VMRC window of the Perimeter-Gateway
Return to vSphere Web Client
Click Home Icon, then select Networking and Security


Access Perimeter Gateway


1. Click NSX Edges
2. Double-Click edge-3

Access BGP Routing Configuration


1. Select Manage Tab
2. Click Routing
3. Click BGP in the left pane


Remove BGP Neighbor Relationship to vPod Router


We will now remove the BGP neighbor for remote AS 65002. In doing this, the Perimeter Gateway and vPod Router will no longer be route peered.
1. Highlight the IP Address of the vPod Router, 192.168.100.1, in the Neighbors
section
2. Click the red X to delete the selected neighbor relationship

Confirm Delete
Click Yes

Publish Change
1. Click the Publish Changes button to push the configuration change.

Navigate to Perimeter Gateway VMRC


Select Perimeter-Gateway in your taskbar


Show BGP Neighbors


**NOTE** Once the window appears, you may need to click inside and press
the enter key to get the screen to appear
1. Type show ip bgp neighbor and Press Enter
show ip bgp neighbor

You will now see that the vPod Router (192.168.100.1) has dropped from the list.

Show Routes
1. Type show ip route and Press Enter
show ip route

Now you can see that the only routes being learned via OSPF is from the Distributed
Router (192.168.5.2)

Verify that the 3 Tiered Application Stops Working


**NOTE** You need to press Ctrl+Alt to leave the VMRC window of the Perimeter-Gateway


Since no routes exist between your control center and the virtual networking environment, the web app should fail.
1. Click on the HOL - Multi-Tier App Tab
2. Click Refresh.
The application may take a few moments to actually time out, you may need to select
the red "x" to stop the browser. If you do see customer data, it may be cached from
before and you may need to close and re-open the browser to correct it.


Re-Establish Route Peering


Now let's get the route peering between the Perimeter Gateway and the vPod Router
back in place.
Navigate back to your vSphere Web Client


Add BGP Neighbor Configuration Back in


1. Click the Green Plus icon in the Neighbors panel.
2. Type 192.168.100.1 in the IP Address field.
3. Type 65002 in the Remote AS field.
4. Click OK.

Publish Change
1. Click the Publish Changes button to push the configuration change.


Navigate to Perimeter Gateway VMRC


Select Perimeter-Gateway in your taskbar

Show BGP Neighbors


**NOTE** Once the window appears, you may need to click inside and press
the enter key to get the screen to appear
1. Type show ip bgp neighbor and Press Enter
show ip bgp neighbor

You will now see that the vPod Router (192.168.100.1) is shown as a neighbor.

Review Routes on Perimeter Edge and their Origin


Type "show ip route"
show ip route


Show Routes
The default route and the connected networks from the vPod Router (192.168.100.1) are now back in the list.


Verify that the 3 Tiered Application Is Working


**NOTE** You need to press Ctrl+Alt to leave the VMRC window of the Perimeter-Gateway
With the routes back in place, the Web App should now be functional again.
1. Click on the HOL - Multi-Tier App Tab
2. Click Refresh.
This completes this section of the lab, we will now move on to ECMP and High
Availability with the NSX Edges.


ECMP and High Availability


In this section, we will now add another Perimeter Gateway to the network and then use
ECMP (Equal Cost Multipath Routing) to scale out Edge capacity and increase its
availability. With NSX we are able to perform an in place addition of an Edge device and
enable ECMP.
ECMP is a routing strategy that allows next-hop packet forwarding to a single destination to occur over multiple best paths. These best paths can be added statically or as a result of metric calculations by dynamic routing protocols such as OSPF or BGP. The Edge Services Gateway uses the Linux network stack implementation: a round-robin algorithm with a randomness component. After a next hop is selected for a particular source and destination IP address pair, the route cache stores the selected next hop, and all packets for that flow go to that next hop. The Distributed Logical Router uses an XOR algorithm to determine the next hop from a list of possible ECMP next hops. This algorithm uses the source and destination IP addresses on the outgoing packet as sources of entropy.
Now we will configure a new Perimeter Gateway, and establish an ECMP cluster between
the Perimeter Gateways for the Distributed Logical Router to leverage for increased
capacity and availability. We will test availability by shutting down one of the Perimeter
Gateways, and watching the traffic path change.

Access NSX in vSphere Web Client


1. Check the box to Use Windows session authentication
2. Click Login


Navigate to NSX in vSphere Web Client


**NOTE** You need to press Ctrl+Alt to leave the VMRC window of the Perimeter-Gateway
Return to vSphere Web Client.
1. Click Home Icon
2. Click Networking & Security


Add Additional Perimeter Gateway Edge


Our first step is to add an additional perimeter edge device.
1. Click NSX Edges
2. Click Green Plus Sign


Select and Name Edge


1. Click Edge Services Gateway for Install Type
2. Enter Perimeter-Gateway-02 under Name
3. Click Next

Set Password
1. Enter the password VMware1!VMware1!
2. Confirm the password VMware1!VMware1!
3. Check Enable SSH Access.
4. Click Next.


NOTE - All passwords for NSX Edges are 12-character complex passwords.


Add Edge Appliance


1. Click Green Plus Sign under NSX Appliances to make the Add NSX Edge
Appliance dialog box appear
2. Select RegionA01-MGMT01 for Cluster/Resource Pool
3. Select RegionA01-ISCSI01-MGMT01 for Datastore
4. Select esx-04a.corp.local for Host
5. Click OK


Continue Deployment
1. Click Next


Add Uplink Interface


1. Click the Green Plus Sign to add the first interface


Select Switch Connected To


We have to pick the northbound switch interface for this edge, which is a distributed
port group.
1. Click Select next to the Connected To field.
2. Click Distributed Portgroup.
3. Select Uplink-RegionA01-vDS-MGMT.
4. Click OK.

Name and Add IP


1. Enter Uplink under Name.
2. Select Uplink under Type.
3. Click the Green Plus Sign.
4. Enter 192.168.100.4 under Primary IP Address.


5. Enter 24 under Subnet Prefix Length


6. Click OK


Add Edge Transit Interface


1. Click the Green Plus Sign to add the second interface


Select Switch Connected To


We now pick the transit interface for this edge, which connects to a VXLAN-backed logical switch.
1. Click Select next to the Connected To field.
2. Click Logical Switch.
3. Select Transit_Network_01_5006.
4. Click OK.

Name and Add IP


1. Enter Transit_Network_01 under Name
2. Select Internal under Type
3. Click the Green Plus Sign


4. Enter 192.168.5.4 under Primary IP Address


5. Enter 29 under Subnet Prefix Length
NOTE - This is 29, not 24! Please make sure to enter the right number or the
lab will not function.
6. Click OK


Continue Deployment
IMPORTANT! Before continuing, review the information and confirm that the IP addresses and subnet prefix numbers are correct.
1. Click Next

Remove Default Gateway


We are removing the default gateway since we receive that information via dynamic routing.
1. UNCHECK Configure Default gateway


2. Click Next


Default Firewall Settings


1. CHECK Configure Firewall default policy
2. Select ACCEPT
3. Click Next


Finalize Deployment
Click Finish to start deployment

Edge Deploying
It will take a couple of minutes for the Edge to deploy.
1. Notice that the status for Edge-7 says Busy and that it shows 1 item installing. This means the deployment is in process.
2. You can click the refresh icon in the web client to speed up the auto-refresh of this screen.


Once the status says Deployed you can move on to the next step.
Note: If the status of Edge-7 cannot be seen, scroll the window to the right to view the deployment status.

Configure Routing on New Edge


We must now configure OSPF on the new Edge device before we can enable ECMP.
1. Double-Click the newly deployed edge-7


Routing Global Configuration


We must set the base configuration to identify the router to the network.
1. Click the Manage tab.
2. Click the Routing tab.
3. Select Global Configuration in the left pane.
4. Click Edit next to Dynamic Routing Configuration.
5. Select Uplink - 192.168.100.4 for Router ID.
6. Click OK.

Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.


Enable OSPF
1. Select OSPF in the left pane.
2. Click Edit next to OSPF Configuration.
3. CHECK Enable OSPF.
4. Click OK.


Add New Area


1. Click the Green Plus Sign by Area Definitions
2. Enter 10 for Area ID
3. Click OK


Add Transit Interface Mapping


Now the same must be done for the downlink interface to the Distributed Router
1. Click the Green Plus Sign by Area to Interface Mapping.
2. Select Transit_Network_01 for vNIC.
3. Select 10 for Area.
4. Click OK.


Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.

Enable BGP
1. Select BGP in the left pane.
2. Click Edit next to BGP Configuration.
3. CHECK Enable BGP.
4. Enter 65001 as the Local AS.
5. Click OK.


Add New Neighbor


1. Click the Green Plus Sign by Neighbors.
2. Enter 192.168.100.1 for the IP Address.
3. Enter 65002 for the Remote AS.
4. Click OK.

Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.


Enable BGP and OSPF Route Distribution


We must now enable BGP and OSPF route redistribution in order for the routes to be
accessible through this edge.
1. Click Route Redistribution in the left pane.
2. Click Edit for Route Redistribution Status.
3. Check OSPF.
4. Check BGP.
5. Click OK.


OSPF Route Distribution Table


1. Click the Green Plus Sign under the Route Redistribution Table.
2. Check BGP.
3. Check Connected.
4. Click OK.


BGP Route Distribution Table


1. Click the Green Plus Sign under the Route Redistribution Table.
2. Select BGP as the Learner Protocol.
3. Check OSPF.
4. Check Connected.
5. Click OK.


Publish Changes
1. Click the Publish Changes button in the dialog box again to push the updated
configuration to the distributed-edge device.

Enable ECMP
We are now going to enable ECMP on both the Distributed Router and the Perimeter
Gateways
1. Click the back button, Networking and Security in the Navigator panel


Enable ECMP on Distributed Router


We will first enable ECMP on the Distributed Router
1. Click NSX Edges
2. Double-Click edge-6

Enable ECMP on DLR


1. Click the Manage tab.
2. Click the Routing tab.
3. Click Global Configuration in the left pane.
4. Click the ENABLE button next to ECMP.


Publish Change
1. Click the Publish Changes to push the configuration change.

Return to Edge Devices


1. Click the Networking & Security back button to return to the previous page.

Access Perimeter Gateway 01


1. Double click the edge-3 (Perimeter Gateway 01)


Enable ECMP on Perimeter Gateway 01


1. Click the Manage tab.
2. Click the Routing tab.
3. Click Global Configuration in the left pane.
4. Click the ENABLE button next to ECMP.

Publish Change
1. Click the Publish Changes to push the configuration change.

Return to Edge Devices


Click the Networking & Security back button to return to the previous page.

Access Perimeter Gateway 02


Double Click Edge-7 - Perimeter Gateway 02

Enable ECMP on Perimeter Gateway 02


1. Click the Manage tab.
2. Click the Routing tab.
3. Click Global Configuration in the left pane.
4. Click the ENABLE button next to ECMP.

Publish Change
1. Click the Publish Changes to push the configuration change.


Topology Overview
At this stage, this is the topology of the lab. This includes the new Perimeter Gateway
that has been added, routing configured, and ECMP turned on.

Verify ECMP Functionality from Distributed Router


Let's now access the distributed router to ensure that OSPF is communicating and ECMP
is functioning.
1. Click Home Icon
2. Select VMs and Templates


Launch Remote Console


1. Click the Refresh icon.
2. Expand the RegionA01 folder.
3. Select Distributed-Router-01-0.
4. Select the Summary tab.
5. Click Launch Remote Console.


Access Remote Console


When the VMRC window first opens, it will appear black. Click inside the window and
press enter a couple of times to make the console appear from the screensaver.
Note: to release your cursor from the window, press Ctrl+Alt keys

Login to the Distributed Router


Log into the distributed router with the following credentials
1. Username : admin
2. Password : VMware1!VMware1!


View OSPF Neighbors


The first thing we will do is look at the OSPF neighbors to the Distributed Router.
1. Type show ip ospf neighbors and press Enter. (Remember to use SEND
TEXT option.)
show ip ospf neighbors

This shows us that the Distributed Router now has two OSPF neighbors: Perimeter-Gateway-01 (192.168.100.3) and Perimeter-Gateway-02 (192.168.100.4).
Note: tab completion works on Edge devices in NSX.
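As a simplified sketch of what that neighbor table conveys (the actual Edge CLI output also includes the neighbor priority, address, and dead-time columns, which are omitted here; the key point is that both neighbors reach the Full state):

Neighbor ID       State
192.168.100.3     Full     (Perimeter-Gateway-01)
192.168.100.4     Full     (Perimeter-Gateway-02)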


Review Routes from Distributed Router to Perimeter Edges


1. Type show ip route and press Enter
show ip route

Note: the vPod Router network segments and default route are advertised via both
Perimeter Gateway network addresses. The red arrows above are pointing to the
addresses of both the Perimeter-Gateway-01 and Perimeter-Gateway-02.
Leave this window open for the following steps.

Verify ECMP Functionality from vPod Router


Note: to release your cursor from the window, press Ctrl+Alt keys
Now we will look at ECMP from the vPod Router, which simulates a physical router in
your network.
1. Click the PuTTY icon on the Taskbar


Open SSH Session to vPod Router


1. Using the Scroll Bar, scroll down and select vPod Router
2. Click Load
3. Click Open

Log into vPod Router


The Putty session should automatically log in as the root user


Access BGP Module


We must telnet into the module that controls BGP in the vPod Router.
1. Enter telnet localhost 2605 and press Enter. (Remember to use the SEND
TEXT option.)
telnet localhost 2605

2. Enter the password VMware1!


Show BGP Neighbors


Now that we are connected to the BGP module, display the BGP neighbors.
1. Enter show ip bgp neighbors and press Enter
show ip bgp neighbors

2. We will see Perimeter-Gateway-01 (192.168.100.3) as a BGP neighbor with a BGP state of Established, up

Verify Perimeter-Gateway-02 Relationship


1. Verify lower in the BGP Neighbors list that we see Perimeter-Gateway-02
(192.168.100.4) as a BGP neighbor with a BGP state of Established, up

Show Routes
1. Enter show ip bgp and press Enter
show ip bgp

2. In this section you notice that all networks have two next hop routers listed, and this
is because Perimeter-Gateway-01 (192.168.100.3) and Perimeter-Gateway-02
(192.168.100.4) are both Established neighbors for these networks.


At this point, any traffic connected to the distributed router can egress out either of the
perimeter gateways with ECMP.
Leave this window open for the following steps.
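As a rough illustration of what those entries convey (abbreviated, Quagga-style output; the metric, weight, and AS-path columns are omitted, and which entry is flagged as best depends on the router's path selection):

   Network            Next Hop
*  172.16.20.0/24     192.168.100.3
*>                    192.168.100.4
*  172.16.30.0/24     192.168.100.3
*>                    192.168.100.4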


Shutdown Perimeter Gateway 01


We will simulate a node going offline by shutting down Perimeter-Gateway-01
Return to your vSphere Web Client
1. Expand the RegionA01 folders.
2. Right-click Perimeter-Gateway-01-0.
3. Click Power.
4. Click Shut Down Guest OS.


Confirm Shutdown
1. Click Yes

Test High Availability with ECMP


With ECMP, BGP, and OSPF in the environment, we are able to dynamically change
routes in the event of a failure in a particular path. We will now simulate one of the
paths going down and route redistribution occurring.
1. Click on the Command Prompt Icon in the taskbar


Ping db-01a Database Server


1. Type ping -t db-01a and press Enter
ping -t db-01a

You will see pings from the control center to the database server (db-01a) start.
Leave this window open and running as you go to the next step.

Access Distributed Router Remote Console


Access the Remote Console to the Distributed Router on the desktop, named
Distributed-Router-01-0

Check Current Routes


1. Enter show ip route and press Enter
show ip route

Note: only Perimeter-Gateway-02 is now available to access the vPod Router network segments.


Leave this window open for the following steps.


Power Up Perimeter Gateway 01


Return to your vSphere Web Client
1. Expand the RegionA01 folders.
2. Right-click Perimeter-Gateway-01-0.
3. Click Power.
4. Click Power On.


Verify Perimeter-Gateway-01 is Online


It will take a minute or two for the VM to power up. Once it shows the VMTools are
online in the VM Summary, you can move to the next step.
1. Click the Refresh Icon to check for updates on the VMTools Status.

Return to Ping Test


On the taskbar, go back to your command prompt running your ping test.


BGP Neighbor Peering


Although this is not a clear depiction of the fail over from Perimeter-Gateway-02 to
Perimeter-Gateway-01, the ping traffic would migrate from Perimeter-Gateway-02 to
Perimeter-Gateway-01 with minimal impact if the active path went down.
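As an illustrative sketch only, a brief interruption and recovery of the ping stream during such a failover would look something like the following (the database server's address is shown as a placeholder, and reply times and TTL values are examples):

Reply from <db-01a address>: bytes=32 time=2ms TTL=61
Request timed out.
Request timed out.
Reply from <db-01a address>: bytes=32 time=3ms TTL=61
Reply from <db-01a address>: bytes=32 time=2ms TTL=61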

Access Distributed Router Remote Console


Access the Remote Console to the Distributed Router on the desktop, named
Distributed-Router-01-0


Show Routes
Let's check the status of the routes on the Distributed Router since we powered Perimeter-Gateway-01 back up.
1. Enter show ip route and press Enter
show ip route

Note: we should now see that all vPod Router networks have returned to dual
connectivity.

Final Note on ECMP


A final note on ECMP and HA in this lab. While we shut down Perimeter-Gateway-01, the result of doing this on Perimeter-Gateway-02 would be the same.
The only caveat is that the Customer DB App does not work while Perimeter-Gateway-01 is offline, since the web server VMs are directly connected to it. We could resolve this by moving the Web-Tier down to the Distributed-Router-01, as you did with the Database and App networks in the Dynamic and Distributed Routing section of this lab. Once that is complete, the Customer DB App will function if either Perimeter Gateway 1 or 2 is offline. It is important to note that performing this migration will break other modules in this lab! This is the reason it is not done as part of the manual. If other modules are not going to be attempted, this migration can be performed without an issue.


Prior to moving to Module 3 - Please complete the following cleanup steps
If you plan to continue to any other module in this lab after completing this module, you must complete the following steps or the lab will not function properly going forward.

Delete Second Perimeter Edge Device


1. Return to vSphere Web Client
2. Click Home Icon, then Networking and Security


Delete Edge-7
We need to delete the Edge we just created
1. Select NSX Edges
2. Select Edge-7
3. Click Red X to Delete

Confirm Delete
1. Click Yes to confirm deletion


Disable ECMP on DLR and Perimeter Gateway-01


1. Double-click Edge-6

Disable ECMP on Distributed Router


1. Click the Manage tab.
2. Click the Routing tab.
3. Click Global Configuration in the left pane.
4. Click the DISABLE button next to ECMP.

Publish Change
1. Click Publish Changes to push the configuration change.


Return to Edge Devices


1. Click the Networking & Security back button to return to the previous page.

Access Perimeter Gateway 1


1. Double-click Edge-3


Disable ECMP on Perimeter Gateway 1


1. Click the Manage tab.
2. Click the Routing tab.
3. Click Global Configuration in the left pane.
4. Click the DISABLE button next to ECMP.

Publish Change
1. Click Publish Changes to push the configuration change.


Module 3 Conclusion
In this module we showed the routing capabilities of NSX for both the hypervisor kernel service, the Distributed Logical Router, and the advanced services features of the NSX Edge Services Gateway.
We covered the migration of logical switches from the Edge Services Gateway to the Distributed Logical Router (DLR), and the configuration of a dynamic routing protocol between the Edge and the DLR. We also reviewed the centralized routing capabilities of the Edge Services Gateway and dynamic route peering information. Last, we demonstrated the scalability and availability of the Edge Services Gateways in an ECMP (Equal Cost Multipath) route configuration. We deployed a new Edge Services Gateway and established route peering with both the DLR and the vPod Router to increase throughput and availability by leveraging ECMP. We also cleaned up our ECMP configuration to prepare for moving to another module in this lab environment.

You've finished Module 3


Congratulations on completing Module 3.
If you are looking for additional information on NSX Routing capabilities and
configuration, then please review the NSX 6.2 Documentation Center via the URL below:
Go to http://tinyurl.com/hkexfcl
Proceed to any module below which interests you the most:
Lab Module List:
Module 1 - Installation Walk Through (30 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
Module 2 - Logical Switching (30 minutes) - Basic - This module will walk you
through the basics of creating logical switches and attaching virtual machines to
logical switches.
Module 3 - Logical Routing (60 minutes) - Basic - This module will help us
understand some of the routing capabilities supported in the NSX platform and
how to utilize these capabilities while deploying a three tier application.
Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
demonstrate the capabilities of the Edge Services Gateway and how it can
provide common services such as DHCP, VPN, NAT, Dynamic Routing and Load
Balancing.
Module 5 - Physical to Virtual Bridging (30 minutes) - Basic - This module will
guide us through the configuration of a L2 Bridging instance between a traditional
VLAN and a NSX Logical Switch. There will also be an offline demonstration of
NSX integration with Arista hardware VXLAN-capable switches.


Module 6 - Distributed Firewall (45 minutes) - Basic - This module will cover
the Distributed Firewall and creating firewall rules between a 3-tier application.
Lab Captains:
Module 1 - Michael Armstrong, Senior Systems Engineer, United Kingdom
Module 2 - Mostafa Magdy, Senior Systems Engineer, Canada
Module 3 - Aaron Ramirez, Staff Systems Engineer, USA
Module 4 - Chris Cousins, Systems Engineer, USA
Module 5 - Aaron Ramirez, Staff Systems Engineer, USA
Module 6 - Brian Wilson, Staff Systems Engineer, USA

How to End Lab


To end your lab click on the END button.

HOL-1703-SDC-1

Page 223

HOL-1703-SDC-1

Module 4 - Edge Services


Gateway (60 minutes)

HOL-1703-SDC-1

Page 224

HOL-1703-SDC-1

Introduction to NSX Edge Services


Gateway
NSX Edge provides network edge security and gateway services to isolate a virtualized
network. You can install an NSX Edge either as a logical (distributed) router or as a
services gateway.
The NSX Edge logical (distributed) router provides East-West distributed routing with
tenant IP address space and data path isolation. Virtual machines or workloads that
reside on the same host on different subnets can communicate with one another
without having to traverse a traditional routing interface.
The NSX Edge Gateway connects isolated, stub networks to shared (uplink) networks by
providing common gateway services such as DHCP, VPN, NAT, dynamic routing, and
Load Balancing. Common deployments of NSX Edges include the DMZ, VPN, Extranets,
and multi-tenant Cloud environments where the NSX Edge creates virtual boundaries for
each tenant.
This Module contains the following lessons:

Deploy Edge Services Gateway


Configure Edge Services Gateway for Load Balancer
Edge Services Gateway Load Balancer- Verify Configuration
Edge Services Gateway Firewall
DHCP Relay
Configuring L2VPN
Conclusion

HOL-1703-SDC-1

Page 225

HOL-1703-SDC-1

Deploy Edge Services Gateway for


Load Balancer
The NSX Edge Services Gateway can provide load balancing functionality. Employing a
load balancer is advantageous as it can lead to more efficient resource utilization.
Examples may include efficient usage of network throughput, shorter response times for
applications, the ability to scale, and can also be part of a strategy to ensure service
redundancy and availability.
TCP, UDP, HTTP, or HTTPS requests can be load balanced utilizing the NSX Edge Services
gateway. The Edge Services Gateway can provide load balancing up to Layer 7 of the
Open Systems Interconnection (OSI) model.
In this section, we will deploy and configure a new NSX Edge Appliance as a "OneArmed" Load Balancer.

Validate Lab is Ready


Lab status is shown on the Desktop of the Main Console Windows VM.
Validation checks ensure all components of the lab are correctly deployed and once
validation is complete, status will be updated to Green/Ready. It is possible to have a
Lab deployment fail due to environment resource constraints.

Login to the vSphere Web Client


If you are not already logged into the vSphere Web Client:
(The home page should be the vSphere Web Client. If not, Click on the vSphere Web
Client Taskbar icon for Google Chrome.)

HOL-1703-SDC-1

Page 226

HOL-1703-SDC-1

1. Type in administrator@vsphere.local into User name


2. Type in VMware1! into Password
3. Click OK

Gain screen space by collapsing the right Task Pane


Clicking on the Push-Pins will allow task panes to collapse and provide more viewing
space to the main pane. You can also collapse the left-hand pane to gain the maximum
space.

HOL-1703-SDC-1

Page 227

HOL-1703-SDC-1

Open Networking & Security


1. Click on "Networking & Security"

Creating a New Edge Services Gateway


We will configure the one-armed load balancing service on a new Edge Services
Gateway. To begin the new Edge Services Gateway creation process, make sure you're
in the Networking & Security section of the vSphere Web Client:
1. Click on NSX Edges
2. Click the green plus (+) icon

Defining Name and Type


For the new NSX Edge Services Gateway, set the following configuration options
1. Enter Name: OneArm-LoadBalancer

HOL-1703-SDC-1

Page 228

HOL-1703-SDC-1

2. Click the Next button

HOL-1703-SDC-1

Page 229

HOL-1703-SDC-1

Configuring Admin account


1. Set the password as: VMware1!VMware1!
2. Click the Next button

HOL-1703-SDC-1

Page 230

HOL-1703-SDC-1

Defining Edge Size and VM placement


There are four different appliance sizes that one can choose for their Edge Service
Gateway, with the following specifications (#CPUs, Memory):

Compact: 1 vCPU, 512 MB


Large: 2 vCPU, 1024 MB
Quad Large: 4 vCPU, 1024 MB
X-Large: 6 vCPU, 8192 MB

We will be selecting a Compact sized Edge for this new Edge Services Gateway, but it's
worth remembering that these Edge Service Gateways can also be upgraded to a larger
size after deployment. To continue with the new Edge Service Gateway creation:
1. Click the green plus (+) sign icon to open the Add NSX Edge Appliances popup
window.

Cluster/Datastore placement
1. Select RegionA01-MGMT01 for your Cluster/Resource Pool placement

HOL-1703-SDC-1

Page 231

HOL-1703-SDC-1

2. Select RegionA01-ISCSI01-MGMT01 for your Datastore placement


3. Select a host esx-05-a.corp.local
4. Click OK

Configure Deployment
1. Click Next button

HOL-1703-SDC-1

Page 232

HOL-1703-SDC-1

Placing a new network interface on the NSX Edge


Since this is a one-armed load balancer, it will only need one network interface. In this
section of the New NSX Edge process, we will be giving this Edge a new network adapter
and configure it.
1. Click the green plus (+) icon.

HOL-1703-SDC-1

Page 233

HOL-1703-SDC-1

Configuring the new network interface for the NSX Edge


This is where we will be configuring the first network interface for this new NSX Edge.
1. Name the new interface the name of WebNetwork
2. Check Internal as a type
3. Clicking the Select link

HOL-1703-SDC-1

Page 234

HOL-1703-SDC-1

Selecting Network for New Edge Interface


This one-armed load balancer's interface will need to be on the same network as the
two web servers that this Edge will be providing Load Balancing services.
1. Select the Logical Switch tab
2. Select the radio button for Web_Tier_Logical_Switch (5000)
3. Click the OK button

HOL-1703-SDC-1

Page 235

HOL-1703-SDC-1

Configuring Subnets
1. Next, we will be configuring an IP address for this interface. Click the small green
plus (+) icon.

HOL-1703-SDC-1

Page 236

HOL-1703-SDC-1

Configuring Subnets Popup


To add a new IP address to this interface:
1. Enter an IP address of 172.16.10.10
2. Enter a subnet prefix length of 24
3. Click OK

HOL-1703-SDC-1

Page 237

HOL-1703-SDC-1

Confirm List of Interfaces


Review settings/selections
1. Click the Next button to continue

HOL-1703-SDC-1

Page 238

HOL-1703-SDC-1

Configuring the Default Gateway


This next section of provisioning a new Edge allows us to configure the default gateway
for this Edge Services Gateway. To configure the gateway:
1. Enter a gateway IP of 172.16.10.1
2. Click the Next button

HOL-1703-SDC-1

Page 239

HOL-1703-SDC-1

Configuring Firewall and HA options


To save time later, we have the ability to configure some default Firewall options, as well
as enable an Edge Services Gateway to run in High Availability (HA) mode. Neither
feature is relevant to this particular section of the module, so to continue, configure the
following:
1. Check the checkbox for Configure Firewall default policy
2. Select Accept as the Default Traffic Policy
3. Click Next

HOL-1703-SDC-1

Page 240

HOL-1703-SDC-1

Review of Overall Configuration and Complete


Confirm configuration looks like this screenshot.
1. Click Finish

HOL-1703-SDC-1

Page 241

HOL-1703-SDC-1

Monitoring Deployment
To monitor deployment of the Edge Services Gateway:
1. Click on the Installing button while the Edge is still being deployed to see the
progress of the installing steps.
Afterwards, we should see the progress of the Edge deployment.

HOL-1703-SDC-1

Page 242

HOL-1703-SDC-1

Configure Edge Services Gateway for


Load Balancer
Now that the Edge Services Gateway is deployed, we will now configure load balancing
services.

Configure Load Balancer Service


The above depicts the eventual topology we will have for the load balancer service
provided by the NSX Edge Services Gateway we just deployed. To get started, from
within the NSX Edges area of the Networking & Security plug-in for the vSphere Web
Client, double click on the Edge we just made to go into its management page.

HOL-1703-SDC-1

Page 243

HOL-1703-SDC-1

Configure Load Balancer Feature on OneArm-Load


Balancer
1. Double-click the edge-7 (OneArm-LoadBalancer)

Navigating to New Edge's Management Page


1.
2.
3.
4.

Click Manage tab (if not already selected)


Click Load Balancer sub-tab
Click Global Configuration (if not already selected)
Click the Edit button to go to the Edit Load Balancer global configuration pop-up
window

HOL-1703-SDC-1

Page 244

HOL-1703-SDC-1

Edit Load Balancer Global Configuration


To enable the Load Balancer service;
1. Check the checkbox for Enable Load Balancer
2. Click the OK button

HOL-1703-SDC-1

Page 245

HOL-1703-SDC-1

Creating a New Application Profile


An Application Profile is how we define the behavior of a typical type of network traffic.
These profiles are applied to a virtual server (VIP) which handles traffic based on the
values specified in the Application Profile.
Utilizing profiles can make traffic-management tasks less error-prone and more efficient.
1. Click on Application Profiles
2. Click on the green plus (+) sign to bring up the New Profile pop-up window

HOL-1703-SDC-1

Page 246

HOL-1703-SDC-1

Configuring a New Application Profile HTTPS


For the new Application Profile, configure the following options:
1. Name: OneArmWeb-01
2. Type: HTTPS
3. Check the checkbox for Enable SSL Passthrough. This will allow HTTPS to
terminate on the pool server.
4. Click the OK button when you are done

HOL-1703-SDC-1

Page 247

HOL-1703-SDC-1

Modify Default HTTPS monitor


Monitors ensure that pool members serving virtual servers are up and working. The
default HTTPS monitor will simply do a "GET" at "/". We will modify the default monitor
to do a health check at application specific URL. This will help determine that not only
the pool member servers are up and running but the application is as well.
1.
2.
3.
4.
5.

Click on Service Monitoring


Click and highlight default_https_monitor
Click on the pencil icon
Type in "/cgi-bin/hol.cgi" for the URL
Click on OK

HOL-1703-SDC-1

Page 248

HOL-1703-SDC-1

Create New Pool


A group of servers of Pool is the entity that represents the nodes that traffic is getting
load balanced to. We will be adding the two web servers web-01a and web-02a to a
new pool. To create the new pool, first:
1. Click on Pools
2. Click the green plus (+) sign to bring up the Edit Pool pop-up window

HOL-1703-SDC-1

Page 249

HOL-1703-SDC-1

Configuring New Pool


For the settings on this new Pool, configure the following:
1. Name: Web-Tier-Pool-01
2. Monitors: default_https_monitor
3. Click the green plus (+) sign

HOL-1703-SDC-1

Page 250

HOL-1703-SDC-1

Add members to the pool


1.
2.
3.
4.
5.

Enter web-01a as the Name


Enter 172.16.10.11 as the IP Address
Enter 443 for the Port
Enter 443 for the Monitor Port
Click OK

Repeat above the process to add one more pool member using following information

Name: web-02a
IP Address: 172.16.10.12
Port: 443
Monitor Port: 443

HOL-1703-SDC-1

Page 251

HOL-1703-SDC-1

Save Pool Settings


1. Click OK

HOL-1703-SDC-1

Page 252

HOL-1703-SDC-1

Create New Virtual Server


A Virtual Server is the entity that accepts traffic from the "front end" of a load balanced
service configuration. User traffic is directed towards the IP address the virtual server
represents, and is then redistributed to nodes on the "back-end" of the load balancer. To
configure a new Virtual Server on this Edge Services Gateway, first
1. Click Virtual Servers
2. Click the small green plus (+) sign to bring up the New Virtual Server pop-up
window

HOL-1703-SDC-1

Page 253

HOL-1703-SDC-1

Configure New Virtual Server


Please configure the following options for this new Virtual Server:
1.
2.
3.
4.
5.

Name this Virtual Server Web-Tier-VIP-01.


Enter IP address of 172.16.10.10.
Select HTTPS as the Protocol.
Select Web-Tier-Pool-01
Click the OK button to finish creating this new Virtual Server

HOL-1703-SDC-1

Page 254

HOL-1703-SDC-1

Edge Services Gateway Load Balancer Verify Configuration


Now that we have configured the load balancing services, we will verify the
configuration.

Test Access to Virtual Server


1. Click on a blank browser tab
2. Click on the Favorite Bookmark for 1-Arm LB Customer DB
3. Click on Advanced

HOL-1703-SDC-1

Page 255

HOL-1703-SDC-1

Ignore SSL error


1. Click on Proceed to 172.16.10.10 (unsafe)

HOL-1703-SDC-1

Page 256

HOL-1703-SDC-1

Test Access to Virtual Server


At this time, we should be successful in accessing the one-armed Load Balancer we just
configured!
1. Clicking the page refresh button will allow you to see the Round-Robin of the two
pool members.
You may have to click a few times to get the browser to refresh outside of the
browser cache.

HOL-1703-SDC-1

Page 257

HOL-1703-SDC-1

Show Pool Statistics


Click on the browser tab for the vSphere Web Client
To see the status of the individual pool members:
1. Click on Pools
2. Click Show Pool Statistics.
3. Click on "pool-1"
We will see each member's current status.
4. Close the window by clicking the X.

HOL-1703-SDC-1

Page 258

HOL-1703-SDC-1

Monitor (Health Check) Response Enhancement


To aid troubleshooting, NSX 6.2 Load Balancer "show ...pool" command will yield
informative description for pool member failures . We will create two different failures
and examine the response using show commands on Load Balancer Edge Gateway.
Click on the vSphere Web Client browser tab.
1. Type "LoadBalancer" in upper right corner of vSphere Web Client search box.
2. Click on "OneArm-LoadBalancer-0".

Open Console Load Balancer Console


1. Click on Summary Tab
2. Click on Launch Remote Console
Note: The console will open in new browser tab

HOL-1703-SDC-1

Page 259

HOL-1703-SDC-1

Login to OneArm-LoadBalancer-0
1. Login using user: admin and password: VMware1!VMware1!

HOL-1703-SDC-1

Page 260

HOL-1703-SDC-1

Special Instructions for CLI Commands


Many of the modules will have us enter Command Line Interface (CLI) commands.
There are two ways to send CLI commands to the lab.
First to send a CLI command to the lab console:
1. Highlight the CLI command in the manual and use Control+c to copy to
clipboard.
2. Click on the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard to the window.
4. Click the SEND button.
Second, a text file (README.txt) has been placed on the desktop of the environment
providing you with all the user accounts and passwords for the environment.

HOL-1703-SDC-1

Page 261

HOL-1703-SDC-1

Examine pool status before failure


Login with username "admin" and password "VMware1!VMware1!"
Type show service loadbalancer pool (Remember to use the SEND TEXT option.)
show service loadbalancer pool

Note: The status of Pool member web-sv-01a is shown to be "UP"

Start PuTTY
1. Click on the PuTTY shortcut on the Window's Launch Bar.

HOL-1703-SDC-1

Page 262

HOL-1703-SDC-1

SSH to web-01a.corp.local
1.
2.
3.
4.

Scroll down to Web-01a.corp.local


Select Web-01a.corp.local
Click Load
Click on Open

Shutdown HTTPD
We will shutdown HTTPS to simulate the first failure condition
1. Type service httpd stop to shutdown HTTPD.
service httpd stop

Loadbalancer console
Type show service loadbalancer pool

HOL-1703-SDC-1

Page 263

HOL-1703-SDC-1

show service loadbalancer pool

Because the service is down, the failure detail shows the client could not establish SSL
session.

HOL-1703-SDC-1

Page 264

HOL-1703-SDC-1

Restart HTTPD service


Switch back to the Putty SSH session for web-01a
1. Type service httpd start
service httpd start

Shutdown web-01a
Navigate back to the Chrome browser and the vSphere Web Client.
1. In upper right corner search box of vSphere Web Client type "web-01a"
2. Click on web-01a

Power off web-01a


1. Click on Actions
2. Click on Power

HOL-1703-SDC-1

Page 265

HOL-1703-SDC-1

3. Click on Power Off


Click on Yes to confirm.

Check the Pool status


Type show service loadbalancer pool
show service loadbalancer pool

Because now the VM is down, the failure detail shows the client could not establish L4
connection as oppose to L7 (SSL) connection in previous step.

Power web-01a on
Click back to the vSphere Web Client browser tab
1. Click Actions

HOL-1703-SDC-1

Page 266

HOL-1703-SDC-1

2. Click Power
3. Click Power On

Conclusion
In this lab we deployed and configured a new Edge Services Gateway and enabled load
balancing services for the 1-Arm LB Customer DB application.
This concludes the Edge Services Gateway Load Balancer lesson. Next, we will learn
more about the Edge Services Gateway Firewall.

HOL-1703-SDC-1

Page 267

HOL-1703-SDC-1

Edge Services Gateway Firewall


The NSX Edge Firewall monitors North-South traffic to provide perimeter security
functionality including firewall, Network Address Translation (NAT) as well as site-to-site
IPSec and SSL VPN functionality. Firewall settings apply to traffic that does not match
any of the user-defined firewall rules. The default Edge firewall policy blocks all
incoming traffic.

Working with NSX Edge Firewall Rules


We can navigate to an NSX Edge to see the firewall rules that apply to it. Firewall rules
applied to a Logical Router only protect control plane traffic to and from the Logical
Router control virtual machine. They do not enforce any data plane protection. To
protect data plane traffic, create Logical Firewall rules for East-West protection or rules
at the NSX Edge Services Gateway level for North-South protection.
Rules created on the Firewall user interface applicable to this NSX Edge are displayed in
a read-only mode. Rules are displayed and enforced in the following order:
1.
2.
3.
4.

User-defined rules from the Firewall user interface (Read Only).


Auto-plumbed rules (rules that enable control traffic to flow for Edge services).
User-defined rules on NSX Edge Firewall user interface.
Default rule.

Edit the Default Edge Firewall Rule


Log into the vSphere Web Client
1. Select "Use Windows session authentication" check-box
2. Click Login

HOL-1703-SDC-1

Page 268

HOL-1703-SDC-1

Open Network & Security


1. Click Networking & Security

Open an NSX Edge


1. Select NSX Edges
2. Double-click Perimeter Gateway-01

HOL-1703-SDC-1

Page 269

HOL-1703-SDC-1

Open Manage Tab


1. Select Manage tab
2. Select Firewall
3. Select the Default Rule, which is the last rule in the firewall table.

Edit Firewall Rule Action


1. Point mouse to the Action cell of the rule and click
2. Click Action drop-down menu and select Deny

HOL-1703-SDC-1

Page 270

HOL-1703-SDC-1

Publish Changes
We will not be making permanent changes to the Edge Services Gateway Firewall
setting.
1. Select Revert to roll back changes.

Adding Edge Services Gateway Firewall Rule


Now that we are familiar with editing an existing Edge Services Gateway firewall rule,
we will add a new edge firewall rule that will block the Control Center's access to the
Customer DB Application.
1. Select the green (+) symbol to add a new firewall rule.
2. Hover the mouse over the upper right corner of the Name column and select the
(+) symbol
3. Enter the rule name: Main Console FW Rule
4. Click OK

HOL-1703-SDC-1

Page 271

HOL-1703-SDC-1

Specify Source
Hover mouse in the upper right corner of the Source field and select the (+) symbol
1.
2.
3.
4.
5.

Click the Object Type drop down menu and select IP Sets
Click the New IP Set... hyperlink
Type Main Console in the IP Set Name text box
Type in the IP address of the Control Center: 192.168.110.10
Click OK

HOL-1703-SDC-1

Page 272

HOL-1703-SDC-1

Confirm Source
1. Confirm Main Console is in Selected Objects
2. Click OK

HOL-1703-SDC-1

Page 273

HOL-1703-SDC-1

Specify Destination
Hover mouse over upper right corner of the Destination column and select the (+)
symbol
1.
2.
3.
4.

Click
Click
Click
Click

the Object Type drop down menu and select Logical Switch
the Web_Tier_Logical_Switch
the right arrow to move the Web_Tier_Logical_Switch to Selected Objects
OK

Configure Action
1. In the upper right corner of the Action column, click the (+) icon
2. Click the Action drop down menu and select Deny
3. Click OK

HOL-1703-SDC-1

Page 274

HOL-1703-SDC-1

Publish Changes
1. Click Publish Changes

Test New FW Rule


Now that we have configured a new FW rule that will block the Control Center from
accessing the Web Tier logical switch, let's run a quick test:
1. Open a new Chrome browser tab
2. Click the Customer DB App bookmark
3. Verify the Main Console cannot access the Customer DB App
After a few moments, we should see a browser page that states the web site cannot be
reached. Now, lets modify the FW rule to allow the Main Console access to the Customer
DB App.

Change the Main Console FW Rule to Accept


Go back to the vSphere Web Client tab.

HOL-1703-SDC-1

Page 275

HOL-1703-SDC-1

1. Click the (+) symbol in the upper right corner of the Action column of the Main
Console FW Rule
2. Click the Action drop down menu and select the Accept option
3. Click OK

Publish Changes
1. Click Publish Changes

HOL-1703-SDC-1

Page 276

HOL-1703-SDC-1

Confirm Access to Customer DB App


Now that the Main Console FW rule has been changed to "Accept", the Main Console
can now access the Customer DB App

HOL-1703-SDC-1

Page 277

HOL-1703-SDC-1

Delete Main Console FW Rule


1. Select the Main Console FW Rule row
2. Click the red (x) icon to delete the firewall rule
3. Click OK to confirm deletion

Publish Changes
1. Click Publish Changes

Conclusion
In this lab we learned how to modify an existing Edge Services Gateway Firewall rule
and how to configure a new Edge Services Gateway Firewall rule that blocks external
access to the Customer DB App.
This concludes the Edge Services Gateway Firewall lesson. Next, we will move on to
learn more about how the Edge Services Gateway can manage DHCP services.

HOL-1703-SDC-1

Page 278

HOL-1703-SDC-1

DHCP Relay
In a network where there are only single network segments, DHCP clients can
communicate directly with their DHCP server. DHCP servers can also provide IP
addresses for multiple networks, even ones not on the same segments as themselves.
Though when serving up IP addresses for IP ranges outside its own, it is unable to
communicate with those clients directly. This is due to the clients not having a routable
IP address or gateway that they are aware of.
In these situations a DHCP Relay agent is required in order to relay the received
broadcast from DHCP clients by sending it to the DHCP server in unicast. The DHCP
server will select a DHCP scope based upon the range the unicast is coming from,
returning it to the agent address which is then broadcast back to the original network to
the client.
Areas to be covered in this lab:
Create a new network segment within NSX
Enable the DHCP Relay agent on the new network segment
Using a pre-created DHCP scope on a DHCP server this is on another network
segment, that requires layer 3 communication
Then network boot (PXE) a blank VM via DHCP scope options
In this lab, the following items have been pre-setup
Windows Server based DHCP Server, with appropriate DHCP scope and scope
options set
TFTP server for the PXE boot files: This server has been installed, configured and
OS files loaded.

HOL-1703-SDC-1

Page 279

HOL-1703-SDC-1

Lab Topology
This diagram lays out the final topology that will be created and used in this lab module.

Access vSphere Web Client


1. Bring up the vSphere Web Client via the icon on the desktop labeled,
GoogleChrome.

Log into vSphere Web Client


Log into the vSphere Web Client using the Windows session authentication.
1. Click Use Windows session authentication - This will auto fill in the
credentials of administrator@corp.local / VMware1!
2. Click Login

HOL-1703-SDC-1

Page 280

HOL-1703-SDC-1

Access NSX Through the Web Client


Access the Networking & Security section of the Web Client
1. Click Networking & Security in the left pane.

Create New Logical Switch


We must first create a new Logical Switch that will run our new 172.16.50.0/24 network.
1. Select Logical Switches
2. Click the Green Plus (+) Sign sign to create a new Logical Switch

HOL-1703-SDC-1

Page 281

HOL-1703-SDC-1

Enter New Switch Parameters


In order to configure the Logical Switch, we must set the name and transport zone.
1. Transport Zone, click Change

HOL-1703-SDC-1

Page 282

HOL-1703-SDC-1

Select Transport Zone


1. Select RegionA0_TZ
2. Click OK

HOL-1703-SDC-1

Page 283

HOL-1703-SDC-1

Enter New Switch Parameters


The name does not specifically matter, but it is used to help identify the switch.
1. Name = DHCP-Relay
2. Click OK

Connect Logical Switch to Perimeter Gateway


We will now attach the logical switch to an interface on the Perimeter Gateway. This
interface will be the default gateway for the 172.16.50.0/24 network with an address of
172.16.50.1.
1. Click NSX Edges in the left pane.
2. Double Click edge-3 which is the Perimeter-Gateway in this lab.

HOL-1703-SDC-1

Page 284

HOL-1703-SDC-1

Add Interface
This section will attach the logical switch to an interface on the Perimeter Gateway.
1.
2.
3.
4.
5.

Click Manage
Click Settings
Click Interfaces
Select vnic9
Click the Pencil Icon to edit interface

HOL-1703-SDC-1

Page 285

HOL-1703-SDC-1

Select What Logical Switch Interface is Connected to


We will select what Logical Switch the interface is connected to.
1. Click Select

Select Newly Created Logical Switch


Select the new Logical Switch that we just created in the previous steps.
1. Select DHCP-Relay Logical Switch
2. Click OK

HOL-1703-SDC-1

Page 286

HOL-1703-SDC-1

Add Interface IP Address


We will add a new IP Address.
1. Click the Green Plus (+) Sign

Configure Interface IP Address


We will assign the new interface an IP Address.
1. Primary IP address = 172.16.50.1
2. Subnet Prefix Length of = 24

HOL-1703-SDC-1

Page 287

HOL-1703-SDC-1

Complete Interface Configuration


Verify all information and complete the configuration
1. Change the name from vnic9 to DHCP Relay in order to make it easier to
identify later.
2. Click OK

HOL-1703-SDC-1

Page 288

HOL-1703-SDC-1

Configure DHCP Relay


Staying inside of the Perimeter Gateway, we must do the global configuration of DHCP
Relay.
1.
2.
3.
4.

Now click Manage tab


Click DHCP button
Click Relay section in the left pane
Click Edit

DHCP Global Configuration


Within the global configuration of DHCP is where we select the DHCP servers that will
respond to DHCP requests from our guest VMs.
There are three methods by which we can set DHCP Server IPs:
IP Sets
IP Sets are configured from the NSX Manager Global Configuration and allow us to
specify a subset of DHCP servers by creating a named grouping.

HOL-1703-SDC-1

Page 289

HOL-1703-SDC-1

IP Addresses
We can manually specify IP addresses of DHCP servers in this method.
Domain Names
This method allows us to specify a DNS name that could be a single or multiple DHCP
server addresses.

For the sake of this lab, we will be using a single IP address.


1. IP Addresses = 192.168.110.10 that is the IP of the DHCP server.
2. Click OK

HOL-1703-SDC-1

Page 290

HOL-1703-SDC-1

Configure DHCP Relay Agent


The DHCP Relay Agent will relay any DHCP requests from the gateway address on the
logical switch to the configured DHCP Servers. We must add an agent to the logical
switch / segment we created on 172.16.50.0/24.
1. Under the DHCP Relay Agents section, click the Green Plus Sign

Select Perimeter Gateway Interface


Select which interface on the Perimeter Gateway will have the relay agent.
1. Click the vNIC drop down, select the interface we created earlier, DHCP Relay
Internal
2. Click OK

HOL-1703-SDC-1

Page 291

HOL-1703-SDC-1

Publish Settings to DHCP Relay Settings


We now need to publish all of these changes to the distributed router.
1. Click Publish Changes

Create Blank VM for PXE Boot


We will now create a blank VM that will PXE boot from the DHCP server we are relaying
to.
1. Click the Home icon
2. Click on Hosts and Clusters

HOL-1703-SDC-1

Page 292

HOL-1703-SDC-1

Create New VM
1.
2.
3.
4.

Expand RegionA01-COMP01
Select esx-02a.corp.local
Select Actions drop-down menu
Then click New Virtual Machine and New Virtual Machine

HOL-1703-SDC-1

Page 293

HOL-1703-SDC-1

Configure the New VM


1. Select Create a New Virtual Machine
2. Click Next

HOL-1703-SDC-1

Page 294

HOL-1703-SDC-1

Name the VM
1. Name = PXE VM
2. Click Next

HOL-1703-SDC-1

Page 295

HOL-1703-SDC-1

Select Host
1. Click Next

HOL-1703-SDC-1

Page 296

HOL-1703-SDC-1

Select Storage
Leave this as default
1. Click Next

HOL-1703-SDC-1

Page 297

HOL-1703-SDC-1

Select Compatibility
Leave this as default
1. Click Next

HOL-1703-SDC-1

Page 298

HOL-1703-SDC-1

Select Guest OS
Leave this as default
1. Select Linux under Guest OS Family
2. Select Other Linux (64-bit) under Guest OS Version
3. Click Next

HOL-1703-SDC-1

Page 299

HOL-1703-SDC-1

Specify Hardware - Remove Hard Disk


We need to delete the default hard disk. Since we are booting from the network, the
hard disk is not needed. This is because the PXE image is booting and running
completely within RAM.
1. Move the mouse cursor over New Hard Disk and the X will appear to the right.
Click this X to remove the hard drive.

HOL-1703-SDC-1

Page 300

HOL-1703-SDC-1

Specify Hardware - Choose Network


We will now select the VXLAN Backed Logical Switch we created earlier, DHCP-Relay.
We can select it here, or alternatively assign the VM to that logical switch. This is done
through the NSX Logical Switch menu by selecting the logical switch and clicking add.
1. Select the network with the words DHCP Relay in it. The entire UUID of the
logical switch may vary from the above screenshot, but only one will have the
DHCP-Relay in it. If you cannot first see the network listed, click "show more
networks".
2. Click Next

HOL-1703-SDC-1

Page 301

HOL-1703-SDC-1

Complete VM Creation
1. Click Finish.

HOL-1703-SDC-1

Page 302

HOL-1703-SDC-1

Access Newly Created VM


Next we will open a console to this VM, power it up and watch it boot from the PXE
image. It receives this information via the remote DHCP server we configured earlier.
1. Select PXE VM from the left pane
2. Select Summary tab
3. Click Launch Remote Console

Power Up VM
Power up the new VM.
1. Click the Play button

HOL-1703-SDC-1

Page 303

HOL-1703-SDC-1

Obtaining DHCP from Remote Server


Note the VM is now attempting to boot and obtain a DHCP address.

HOL-1703-SDC-1

Page 304

HOL-1703-SDC-1

Image Booting
This screen will appear once the VM has a DHCP address and is downloading the PXE
image from the boot server. This screen will take about 1-2 mins, please move on to the
next step.

Verify DHCP Lease


While we wait for the VM to boot, we can verify the address used in the DHCP Leases.
1. Go to the desktop of the Main Console, and double-click the icon DHCP.

View Leases
We can look to see what address the VM took from the DHCP server.
1. Expand the sections by clicking on the arrows
2. Select Address Leases

HOL-1703-SDC-1

Page 305

HOL-1703-SDC-1

3. You will see the address 172.16.50.10 which is in the range we created earlier

View Options
We can also see the scope options used to boot the PXE Image
1. Select Scope Options
2. You will note option 66 & 67 were used
You can now close DHCP.

Access Booted VM
1. Return to the PXE VM console by selecting it from the taskbar.

HOL-1703-SDC-1

Page 306

HOL-1703-SDC-1

Verify Address and Connectivity


The widget in the upper right corner of the VM will show statistics, along with the IP of
the VM. This should match the IP shown in DHCP earlier.

HOL-1703-SDC-1

Page 307

HOL-1703-SDC-1

Verify Connectivity
Because of the dynamic routing already in place with the virtual network, we have
connectivity to the VM upon its creation. We can verify this by pinging it from the Main
Console.
1. Click the Command Prompt Icon in the taskbar.
2.
Type ping 172.16.50.10 and press enter.
option.)

(Remember to use the SEND TEXT

ping 172.16.50.10

You will then see a ping response from the VM. You can now close this command
window.

Conclusion
In this lab we have completed the creation of a new network segment, then relayed the
DHCP requests from that network to an external DHCP server. In doing so we were able
to access additional boot options of this external DHCP server and PXE into a Linux OS.
This lab is now completed. Next, we will explore Edge Services Gateway L2VPN services.

HOL-1703-SDC-1

Page 308

HOL-1703-SDC-1

Configuring L2VPN
In this module we will be utilizing the L2VPN capabilities of the NSX Edge Gateway to
extend a L2 boundary between two separate vSphere clusters. To demonstrate a use
case based on this capability, we will deploy an an NSX Edge L2VPN Server on the
RegionA01-MGMT01 cluster and an NSX Edge L2VPN Client on the RegionA01-COMP01
cluster and finally test the tunnel status to verify a successful configuration.

Opening Google Chrome and Navigating to the vSphere


Web Client
1. Open the Google Chrome web browser from the desktop (if not already open).

Login to vSphere Web Client


If you are not already logged into the vSphere Web Client:
1. Click on Use Windows session authentication
2. Click on the Login button.

HOL-1703-SDC-1

Page 309

HOL-1703-SDC-1

Navigating to the Networking & Security Section of the


vSphere Web Client
1. Click on Networking & Security.

Creating the NSX Edge Gateway for the L2VPN-Server


To create the L2VPN Server service, we must first deploy an NSX Edge Gateway for that
service to run on.
1. Click on NSX Edges.
2. Click on the Green Plus (+) sign to create a new Edge.

Configuring a new NSX Edge Gateway : L2VPN-Server


The New NSX Edge wizard will appear, with the first section "Name and Description"
displayed. Enter in the following values corresponding to the following numbers. Leave
the other fields blank or at their default values.

HOL-1703-SDC-1

Page 310

HOL-1703-SDC-1

1. Enter L2VPN-Server for the Name.


2. Click Next.

HOL-1703-SDC-1

Page 311

HOL-1703-SDC-1

Configure Settings for New NSX Edge Gateway : L2VPNServer


In the Settings section of the New NSX Edge Wizard, perform the following actions:
1. Enter"VMware1!VMware1!" for the Password and Confirm Password fields, and
leave all other options at their defaults.
2. Click the Next button to continue.

HOL-1703-SDC-1

Page 312

HOL-1703-SDC-1

Adding the New NSX Edge Appliance : L2VPN-Server


In the "Add NSX Edge Appliance" modal pop-up window that appears, enter in the
following values:
1.
2.
3.
4.
5.
6.

Click the Green Plus (+) sign to create NSX Edge Appliance.
Set Cluster/Resource Pool: RegionA01-MGMT01.
Set Datastore: RegionA01-ISCSI01-MGMT01 .
Set Host: esx-05a.corp.local.
Set Folder: Discovered virtual machine.
Click the OK button to submit this configuration.

HOL-1703-SDC-1

Page 313

HOL-1703-SDC-1

Back to Configure Deployment for new NSX Edge Gateway


: L2VPN-Server
1. Click the Next button to continue.

Add Interface
1. Click the Green Plus (+) sign to add interface.

HOL-1703-SDC-1

Page 314

HOL-1703-SDC-1

Adding a new Interface to the NSX Edge Gateway : L2VPNServer


1. Enter L2VPNServer-Uplink for the Name field.
2. Select Uplink for the type.
3. Click the Green Plus sign (+) icon to list the fields for a new Primary IP
Address.
4. Enter 192.168.5.5 for the IP address.
5. Enter 29 for the Subnet Prefix Length. Make sure the length entered is 29
and NOT 24
6. Click the link labeled Select next to the text box named Connected To.

HOL-1703-SDC-1

Page 315

HOL-1703-SDC-1

Connecting the new Interface to a Logical Switch


Ensure that the Logical Switch tab is selected, and perform the following actions:
1. Click the radio button for the logical switch named Transit_Network_01 5006.
2. Click the OK button to continue.

HOL-1703-SDC-1

Page 316

HOL-1703-SDC-1

Confirming new Interface Configuration : L2VPN-Server


Before continuing, review the following settings:
1. Click OK to finish configuration.

HOL-1703-SDC-1

Page 317

HOL-1703-SDC-1

Continuing the new NSX Edge Gateway Configuration :


L2VPN-Server
1. Click the Next button to continue.

HOL-1703-SDC-1

Page 318

HOL-1703-SDC-1

Configuring Default Gateway Settings for new NSX Edge :


L2VPN-Server
1. Enter 192.168.5.1 for the Gateway IP.
2. Click Next.

HOL-1703-SDC-1

Page 319

HOL-1703-SDC-1

Configuring Firewall and HA Settings for new NSX Edge


Gateway : L2VPN-Server
For the Firewall and HA section, configure the following properties:
1. Click the checkbox for Configure Firewall default policy.
2. Set the Default Traffic Policy setting to Accept.
3. Click the Next button to continue.

HOL-1703-SDC-1

Page 320

HOL-1703-SDC-1

Review new NSX Edge Gateway Deployment and Complete


: L2VPN-Server
1. Click the Finish button to start deploying the NSX Edge.

Preparing L2VPN-Server NSX Edge for L2VPN Connections


Before we configure the newly deployed NSX Edge for L2VPN connections, a number of
preparatory steps will need to be taken first, including:
1.) Adding a Trunk Interface to the L2VPN-Server Edge Gateway.
2.) Adding a Sub Interface to the L2VPN-Server Edge Gateway.
3.) Configuring dynamic routing (OSPF) on the L2VPN-Server Edge Gateway.

HOL-1703-SDC-1

Page 321

HOL-1703-SDC-1

Configuring the L2VPN-Server NSX Edge


1. Double click on the NSX Edge Gateway we had created earlier labeled
"L2VPN-Server" to enter its configuration area.

Adding the Trunk Interface


1.
2.
3.
4.

Click on the Manage tab.


Click on Settings tab.
Click on Interfaces
Select the vNIC with the number"1" and name of"vnic1" as shown in the
screenshot.
5. Click on the pencil icon to bring up the Edit NSX Edge Interface wizard.

Configuring the Trunk Interface


In the Edit NSX Edge Interface window that comes up, enter the following values:
1. Enter L2VPN-Server-Trunk for the name.
2. Set Type: Trunk

HOL-1703-SDC-1

Page 322

HOL-1703-SDC-1

3. Click on the Select link next to the text box for Connected To.

Selecting the Trunk Port Group


In the "Connect NSX Edge to a Network" pop-up, perform the following actions:
1. Click on Distributed Portgroup.
2. Click on the radio button for Trunk-Network-regionA0-vDS-MGMT.
3. Click on the OK button.

HOL-1703-SDC-1

Page 323

HOL-1703-SDC-1

Adding a Sub Interface to the Trunk Interface


1. Click on the Green Plus sign (+) icon underneath the label Sub Interfaces.

HOL-1703-SDC-1

Page 324

HOL-1703-SDC-1

Configuring the Sub Interface


In the "Add Sub Interface" pop-up, enter in the following values.
1.
2.
3.
4.
5.
6.
7.

Name: L2VPN-Server-SubInterface
Tunnel Id: 1
Backing Type: Network
Click the Green Plus sign (+) icon.
Enter in 172.16.10.1 in the Primary IP Address field
Enter 24 for the Subnet Prefix Length.
Click the link for Select next to the Connected To.

HOL-1703-SDC-1

Page 325

HOL-1703-SDC-1

Attaching the Sub Interface to a Logical Switch


1. Ensure that the Logical Switch tab is selected.
2. Click on the radio button for Web_Tier_Logical_Switch (5000).
3. Click on the OK button.

HOL-1703-SDC-1

Page 326

HOL-1703-SDC-1

Confirming the new NSX Edge Interface Configuration


1. Leave the rest of the properties at their defaults for the Edit NSX Edge
Interface pop-up, and click on the OK button.

HOL-1703-SDC-1

Page 327

HOL-1703-SDC-1

Setting the Router ID for this NSX Edge


Next, we will be configuring dynamic routing on this Edge Gateway.
1. Click on the Routing sub-tab of this Edge Gateway,
2. Click on Global Configuration in the left-hand nav bar.
3. Click the Edit button in the Dynamic Routing Configuration section. This will
bring up a pop-up window where we can configure what the Router ID will be.

Add L2VPNServer-Uplink
1. Click OK.

HOL-1703-SDC-1

Page 328

HOL-1703-SDC-1

Publish Changes to set Router ID


1. Click Publish Changes button to submit the configuration change for this Edge
Gateway.

HOL-1703-SDC-1

Page 329

HOL-1703-SDC-1

Configuring OSPF on the L2VPN-Server NSX Edge


Stay within the Routing sub-tab, and
1. Click on the item OSPF in the left-hand nav bar.
2. Click on the Green Plus sign (+) icon under the section for Area to Interface
Mapping.

HOL-1703-SDC-1

Page 330

HOL-1703-SDC-1

Configuring Area to Interface Mapping


In the pop-up for "New Area to Interface Mapping," configure the following values:
1. vNIC: L2VPNServer-Uplink (if not already selected)
2. Area: 0
3. Click the OK button.

HOL-1703-SDC-1

Page 331

HOL-1703-SDC-1

Enabling OSPF on the L2VPN-Server NSX Edge


1. To enable the OSPF configuration on this Edge Gateway, click the Edit button in
the OSPF Configuration section.
2. Check the checkbox for Enable OSPF.
3. Click OK button.

Publish Changes for the L2VPN-Server NSX Edge


1. Click on the Publish Changes button to submit the configuration change for this
Edge Gateway.

HOL-1703-SDC-1

Page 332

HOL-1703-SDC-1

Enable OSPF Route Redistribution


1.
2.
3.
4.

Click on Route Redistribution.


Click the Edit button for the Route Redistribution status.
Check box for OSPF to enable OSPF.
Click the OK button.

HOL-1703-SDC-1

Page 333

HOL-1703-SDC-1

Add Route Redistribution Table Entry


1. Next, click on the Green Plus sign (+) icon under the Route Redistribution table
section.

HOL-1703-SDC-1

Page 334

HOL-1703-SDC-1

Configure Route Redistribution Table Entry


In the "New Redistribution criteria" popup, configure the following values:
1. Click the checkbox for Connected and leave all other checkboxes unchecked.
2. Click the OK button.

Publish Changes
1. Click the button Publish Changes.
Once complete, all prerequisites have been performed to continue on with configuring
the L2VPN service on this Edge Gateway.

Configuring L2VPN Service on L2VPN-Server NSX Edge


Now that the 172.16.10.1 address belongs to the L2VPN-Server Edge Gateway, and it is
now distributing its routes dynamically via OSPF, we will begin to configure the L2VPN
service on this Edge Gateway so that the Edge acts as "Server" in the L2VPN.

Navigating to VPN Services Area on L2VPN-Server NSX


Edge
1. Click on the VPN tab.
2. Click on the L2 VPN selection in the left-hand navigational bar.

HOL-1703-SDC-1

Page 335

HOL-1703-SDC-1

3. Click on the Change button as shown in the screenshot.

Configuring the L2VPN Server Settings


In the L2 VPN server settings, configure the following values:
1. Set the Encryption Algorithm: ECDHE-RSA-AES256-GCM-SHA384
2. Click the OK button to continue.

HOL-1703-SDC-1

Page 336

HOL-1703-SDC-1

Add a new Site Configuration


1. Click the Green Plus Sign (+) icon.

HOL-1703-SDC-1

Page 337

HOL-1703-SDC-1

Configuring a new L2VPN Site


1.
2.
3.
4.
5.

Check the checkbox for Enable Peer Site.


Enter HOLSite1 in the name field.
User ID: siteadmin
Password: VMware1!
Click on the link for Select Sub Interfaces

HOL-1703-SDC-1

Page 338

HOL-1703-SDC-1

Selecting a Sub Interface


In the "Select Object" pop-up, perform the following actions:
1. Select the L2VPN-Server-SubInterface object.
2. Click the right-facing arrow to move it to the Selected Objects list.
3. Click the OK button.

HOL-1703-SDC-1

Page 339

HOL-1703-SDC-1

Confirming New Site Configuration


1. Click the OK button to continue.

HOL-1703-SDC-1

Page 340

HOL-1703-SDC-1

Publish Changes for L2VPN Configuration


1. Before clicking the Publish Changes button, ensure that the L2VPN Mode is
set to "Server."
2. Click the Publish Changes button to submit the L2 VPN server configuration.

HOL-1703-SDC-1

Page 341

HOL-1703-SDC-1

Enable L2VPN Server Service


1. Lastly, to enable the L2 VPN server service, click the Enable button as shown in
the screenshot.
2. Click Publish Changes.

This concludes the configuration for the L2 VPN Server. Next, we will be deploying
another new NSX Edge Gateway which will act as the L2 VPN Client.

Deploying the L2VPN-Client NSX Edge Gateway


Now that the server side of the L2VPN is configured, we will move on to deploying
another NSX Edge Gateway to act as the L2 VPN client. Before deploying the NSX Edge
Gateway L2VPN Client, we need to configure the Uplink and Trunk distributed port
groups on the distributed virtual switch.

Configure Uplink and Trunk Port Groups


1. Go back to the vSphere Web Client Home Screen and select Networking

HOL-1703-SDC-1

Page 342

HOL-1703-SDC-1

Configure Uplink distributed port group


1. Select RegionA01-vDS-COMP
2. Click Create a new port group

HOL-1703-SDC-1

Page 343

HOL-1703-SDC-1

Name New Distributed Port Group


1. Enter Uplink-RegionA01-vDS-COMP
2. Click Next

HOL-1703-SDC-1

Page 344

HOL-1703-SDC-1

Configure Settings
1. Leave default settings and click Next

HOL-1703-SDC-1

Page 345

HOL-1703-SDC-1

Ready to Complete
1. Click Finish
Repeat previous steps to configure Trunk-Network-RegionA01-vDS-COMP

HOL-1703-SDC-1

Page 346

HOL-1703-SDC-1

Completed vDS Configuration


1. When completed, we should see the newly created distributed port groups.
2. Click Home

Return to Networking & Security


1. Click Networking & Security

HOL-1703-SDC-1

Page 347

HOL-1703-SDC-1

NSX Edges
1. Select NSX Edges

Creating the new NSX Edge Gateway to be the L2VPNClient


1. Click the Green Plus sign (+) icon to open the New NSX Edge wizard.

HOL-1703-SDC-1

Page 348

HOL-1703-SDC-1

L2VPN-Client NSX Edge Name and Description


For the options here, select the following values:
1. Install Type: Edge Services Gateway
2. Hame: L2VPN-Client
3. Ensure that"Deploy NSX Edge" is checked, and click the Next button to
continue.

HOL-1703-SDC-1

Page 349

HOL-1703-SDC-1

Configuring NSX Edge Settings


For the Settings section, configure the following values:
1. User Name: admin
2. Password and confirm password: VMware1!VMware1!
3. Click the Next button to continue.

HOL-1703-SDC-1

Page 350

HOL-1703-SDC-1

Configure Placement of NSX Edge


In the "Add NSX Edge Appliance" pop-up, configure the following values:
1.
2.
3.
4.
5.
6.

Click the Green Plus sign (+) to create NSX Edge Appliance.
Cluster/Resource Pool: RegionA01-COMP01.
Datastore: RegionA01-ISCSI01-COMP01.
Host: esx-03a.corp.local.
Folder: Discovered virtual machine.
Click the OK button to submit this Edge VM placement config.

HOL-1703-SDC-1

Page 351

HOL-1703-SDC-1

Confirm NSX Edge Deployment


1. Confirm the configuration looks like the screenshot shown here, and click the
Next button to continue.

Configure Interfaces for the L2VPN-Client NSX Edge


1. In the Configure interfaces section of the wizard, click the Green Plus sign
(+) icon.

HOL-1703-SDC-1

Page 352

HOL-1703-SDC-1

Add new Interface


For the parameters on this interface, enter in the following values:
1.
2.
3.
4.
5.
6.

Name: L2VPN-Client-Uplink
Type: Uplink
Click the Green Plus sign (+) icon to add a new IP address.
Enter 192.168.200.5 for the Primary IP Address.
Enter 24 for the Subnet Prefix Length.
Click on the Select link next to the "Connected To" text box to bring up the list of
networks to choose where this interface will be attached to.

HOL-1703-SDC-1

Page 353

HOL-1703-SDC-1

Connect Interface to Distributed Port Group


In the "Connect NSX Edge to a Network" pop-up, perform the following actions:
1. Click on the Distributed Portgroup tab.
2. Click on the radio button for Uplink-RegionA01-vDS-COMP.
3. Click the OK button to continue.

HOL-1703-SDC-1

Page 354

HOL-1703-SDC-1

Confirm Interface Configuration


1. Click OK to confirm new interface.

HOL-1703-SDC-1

Page 355

HOL-1703-SDC-1

Click Next
1. Click Next

HOL-1703-SDC-1

Page 356

HOL-1703-SDC-1

Configure Default Gateway


In the Default Gateway Settings section, configure the following values:
1. Gateway IP: 192.168.200.1
2. Click the Next button to continue.

HOL-1703-SDC-1

Page 357

HOL-1703-SDC-1

Firewall and HA Settings


In the "Firewall and HA" section, perform the following actions:
1. Check the checkbox for Configure Firewall default policy.
2. Click the radio button for Accept for "Default Traffic Policy.
3. Click the Next button to continue.

HOL-1703-SDC-1

Page 358

HOL-1703-SDC-1

Confirm New NSX Edge Configuration


1. Click on the Finish button to submit the new Edge Gateway configuration and
initiate the deployment process.

Configuring the L2VPN-Client NSX Edge Gateway


1. Double click on the entry for the newly created Edge to enter its
management area.

HOL-1703-SDC-1

Page 359

HOL-1703-SDC-1

Adding the Trunk Interface


Just like with the L2VPN-Server Edge Gateway, there is also a need to add a Trunk
interface to this Edge. To bring up the configuration window for the new interface,
perform the following actions:
1.
2.
3.
4.
5.

Click on the Manage tab.


Click on the Settings sub-tab.
Click on the Interfaces selection on the left-hand nav bar.
Select the interface labeled vnic1 under the Name column.
Click on the pencil icon to bring up the configuration area for this interface.

HOL-1703-SDC-1

Page 360

HOL-1703-SDC-1

Configuring the Trunk Interface


In the Edit NSX Edge Interface pop-up, enter the following values:
1. Name: L2PVN-Client-Trunk
2. Type: Trunk
3. Click the Select link next to "Connected To" to bring up the list of available
vSphere networks to attach this interface to.

Connecting to the Trunk Network Distributed Port Group


1. Click the Distributed Portgroup tab.
2. Select the radio button for Trunk-Network-RegionA01-vDS-COMP.
3. Click the OK button.

Configuring Sub Interface


Configure the Sub Interface with the following values:
1. Click the Green Plus sign (+) to add Sub Interface.

HOL-1703-SDC-1

Page 361

HOL-1703-SDC-1

2.
3.
4.
5.
6.
7.
8.

Name: L2VPN-Client-SubInterface.
Tunnel ID: 1
Backing Type: Network
Click the Green Plus (+) sign icon.
Enter 172.16.10.1 for the Primary IP Address.
Enter 24 for the Subnet Prefix Length.
Click Select next to the Network text box to bring up the list of networks
available to attach this Sub Interface to.

HOL-1703-SDC-1

Page 362

HOL-1703-SDC-1

Connect Sub Interface to VM Network


1. Select the Distributed Portgroup tab.
2. Check the radio button for VM-RegionA01-vDS-COMP.
3. Click the OK button.

HOL-1703-SDC-1

Page 363

HOL-1703-SDC-1

Confirm Sub Interface Configuration


1. Confirm that your Sub Interface configuration looks similar to the screen-shot
here, and click the OK button to continue.

HOL-1703-SDC-1

Page 364

HOL-1703-SDC-1

Confirm Addition of Trunk & Sub Interface


1. Confirm that the Trunk Interface configuration is similar to the screenshot above,
and click the OK button to continue to configuring the L2 VPN client service.

HOL-1703-SDC-1

Page 365

HOL-1703-SDC-1

Configure L2VPN Client Services


To begin configuring the L2VPN client, perform the following actions:
1.
2.
3.
4.

Click
Click
Click
Click

on
on
on
on

HOL-1703-SDC-1

the
the
the
the

VPN sub-tab.
L2 VPN selection in the left-hand navigational bar.
radio button for Client in the L2VPN Mode area.
Change button in the Global Configuration Details area.

Page 366

HOL-1703-SDC-1

L2VPN Client Settings


For the client settings, enter in the following values:
1.
2.
3.
4.
5.

Server Address: 192.168.5.5


Encryption algorithm: ECDHE-RSA-AES256-GCM-SHA384
For user details, enter in siteadmin for the User ID,
Enter VMware1! for the Password and Confirm Password fields.
Click on the Select Sub Interfaces link to bring up the available list of Sub
Interfaces to attach to the service.

Add Sub Interface


To add a new Sub Interface to the L2 VPN service, perform the following actions:
1. Select the L2VPN-Client-SubInterface object from the list of available objects
on the left.

HOL-1703-SDC-1

Page 367

HOL-1703-SDC-1

2. Click the right-facing arrow to move the object to the Selected Objects list.
3. Click the OK button.

HOL-1703-SDC-1

Page 368

HOL-1703-SDC-1

Confirm L2VPN Client Settings


1. Confirm the configuration looks similar to the screenshot here, and click the OK
button to continue.

HOL-1703-SDC-1

Page 369

HOL-1703-SDC-1

Enable L2VPN Client Service


1. To enable the L2VPN Client service, click the Enable button here as shown in the
screenshot.

Publish Changes
1. Click Publish Changes.

Fetch Status of L2VPN


1. Once enabled, click on the button labeled Fetch Status. We may need to click
on this a couple of times after the service is enabled.
2. Expand Tunnel Status.
3. Verify the Status is Upas seen in the screenshot above.
Congrats! We've successfully configured the NSX L2VPN service.

HOL-1703-SDC-1

Page 370

HOL-1703-SDC-1

This concludes the lesson for configuring NSX Edge Services Gateway L2VPN services.

HOL-1703-SDC-1

Page 371

HOL-1703-SDC-1

Module 4 Conclusion
This concludes Module 4 - NSX Edge Services Gateway. We hope you have enjoyed the
lab! Please do not forget to fill out the survey when you are finished.
If you are looking for additional information on deploying NSX then review the NSX 6.2
Documentation Center via the URL below:
Go to https://pubs.vmware.com/NSX-62/index.jsp
If you have time remaining, here is a list of the other Modules that are part of this lab,
along with an estimated time to complete each one.
Click on the 'Table of Contents' button to quickly jump to a Module in the Manual
Lab Module List:
Module 1 - Installation Walk Through (30 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
Module 2 - Logical Switching (30 minutes) - Basic - This module will walk you
through the basics of creating logical switches and attaching virtual machines to
logical switches.
Module 3 - Logical Routing (60 minutes) - Basic - This module will help us
understand some of the routing capabilities supported in the NSX platform and
how to utilize these capabilities while deploying a three tier application.
Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
demonstrate the capabilities of the Edge Services Gateway and how it cam
provide common services such as DHCP, VPN, NAT, Dynamic Routing and Load
Balancing.
Module 5 - Physical to Virtual Bridging (30 minutes) - Basic - This module will
guide us through the configuration of a L2 Bridging instance between a traditional
VLAN and a NSX Logical Switch. There will also be an offline demonstration of
NSX integration with Arista hardware VXLAN-capable switches.
Module 6 - Distributed Firewall (45 minutes) - Basic - This module will cover
the Distributed Firewall and creating firewall rules between a 3-tier application.
Lab Captains:
Module 1
Kingdom
Module 2
Module 3
Module 4
Module 5
Module 6

HOL-1703-SDC-1

- Michael Armstrong, Senior Systems Engineer, United


-

Mostafa Magdy, Senior Systems Engineer, Canada


Aaron Ramirez, Staff Systems Engineer, USA
Chris Cousins, Systems Engineer, USA
Aaron Ramirez, Staff Systems Engineer, USA
Brian Wilson, Staff Systems Engineer, USA

Page 372

HOL-1703-SDC-1

Module 5 - Physical to
Virtual Bridging (60
minutes)

HOL-1703-SDC-1

Page 373

HOL-1703-SDC-1

Native Bridging
NSX provides in-kernel software L2 Bridging capabilities, that allow organizations to
seamlessly connect traditional workloads and legacy VLANs to virtualized networks
using VXLAN. L2 Bridging is widely used in brownfield environments to simplify the
introduction of logical networks, as well as other scenarios that involve physical systems
that require L2 connectivity to virtual machines.
The logical routers can provide L2 bridging from the logical networking space within NSX
to the physical VLAN-backed network. This allows for the creation of a L2 bridge
between a logical switch and a VLAN, which enables the migration of virtual workloads
to physical devices with no impact on IP addresses. A logical network can leverage a
physical L3 gateway and access existing physical networks and security resources by
bridging the logical switch broadcast domain to the VLAN broadcast domain. In NSX-V
6.2, this function has been enhanced by allowing bridged Logical Switches to be
connected to Distributed Logical Routers. This operation was not permitted in previous
versions of NSX.
This module will guide us through the configuration of a L2 Bridging instance between a
traditional VLAN and anAccess NSX Logical Switch.

HOL-1703-SDC-1

Page 374

HOL-1703-SDC-1

Introduction
The picture above shows the L2 Bridging enhancements provided in NSX 6.2:
In NSX 6.0 and 6.1, it was not possible to bridge a Logical Switch that was
connected to a Distributed Logical Router: for that scenario it was required to
connect the Logical Switch directly to an Edge Services Gateway.
In NSX 6.2 this configuration is now possible, and allows optimized East-West
traffic flows.
You will now configure NSX L2 Bridging with NSX 6.2 in the newly supported
configuration.

Special Instructions for CLI Commands


Many of the modules will have you enter Command Line Interface (CLI)
commands. There are two ways to send CLI commands to the lab.
First to send a CLI command to the lab console:
1. Highlight the CLI command in the manual and use Control+c to copy to
clipboard.
2. Click on the console menu item SEND TEXT.
3. Press Control+v to paste from the clipboard to the window.
4. Click the SEND button.
Second, a text file (README.txt) has been placed on the desktop of the
environment allowing you to easily copy and paste complex commands or
passwords in the associated utilities (CMD, Putty, console, etc). Certain
characters are often not present on keyboards throughout the world. This


text file is also included for keyboard layouts which do not provide those
characters.
The text file is README.txt and is found on the desktop.

Access vSphere Web Client


Bring up the vSphere Web Client via the icon on the desktop labeled Chrome.

Log into vSphere Web Client


Log into the vSphere Web Client using the Windows session authentication.
1. Click Use Windows session authentication - This will auto fill in the
credentials of administrator@corp.local / VMware1!
2. Click Login


Verify Initial Configuration


You can now verify the initial configuration. The environment comes with a Port Group
on the Management & Edge cluster, named "Bridged-Net-RegionA0-vDS-MGMT". The
web server VMs, named "web-01a" and "web-02a", are currently attached to the Web-Tier-01
Logical Switch and are isolated from the Bridged-Net. The picture shows the topology.

Access the vSphere Networking Configuration


1. Click on the Home icon
2. Click on "Networking" to access the vSphere Networking configuration interface.


Open Port Group


1. Expand the object tree ("vcsa-01a.corp.local", "RegionA01",
"RegionA01-vDS-MGMT")
2. Click on the "Bridged-Net-RegionA0-vDS-MGMT" Port Group in the list


Edit Bridge Net Setting


1. Click Actions.
2. Click Edit Settings.
Note: we are going to set the VLAN to allow for the presentation of the Bridged-Net to
the Distributed Router for L2 Bridging.


Edit VLAN Settings


1. Click VLAN from the Edit Settings list.
2. Click the Drop-down list for the VLAN type.
3. Select VLAN.


Add VLAN 101 to the Bridge Net


1. Add "101" to the VLAN ID field.
2. Click OK.


Verify VLAN ID
1. Click on the "Summary" tab
2. Verify that the Port Group is configured on physical VLAN ID 101.


Migrate Web-01a to RegionA01-MGMT01 Cluster


1. Click the Home icon
2. Select VMs and Templates


Migrate Web-01a
1. Expand the vcsa-01a.corp.local vCenter drop down.
2. Expand the RegionA01 data center drop down.
3. Right-click web-01a.corp.local.
4. Click Migrate.


Select Migration type


1. Select Change both compute resources and storage.
2. Select the option to Select compute resource first.
3. Click Next.


Select Compute Resource


1. Expand the vcsa-01a.corp.local vCenter drop down.
2. Expand the RegionA01 data center drop down.
3. Expand the RegionA01-MGMT01 cluster.
4. Select esx-04a.corp.local.
5. Click Next.


Select Storage
1. Select the RegionA01-ISCSI01-MGMT01 storage.
2. Click Next.


Select Destination Network


1. Click the drop down menu for the Destination Network.
2. Select Browse.


Select Bridged-Net-RegionA0-vDS-MGMT
1. Select Bridged-Net-RegionA0-vDS-MGMT network.
2. Click OK.


Click Next for Destination Network


1. Click Next.


Click Next for vMotion Priority


1. Click Next.


Click Finish
1. Click Finish.

View Connected VMs


1. Click the back button to go to Networking.

View Related Objects to the Bridged Net


1. Select the Bridged-Net-RegionA0-vDS-MGMT Port Group.
2. Select the Related Objects tab.


3. Select the Virtual Machines view.
Note: web-01a.corp.local should now be listed.
4. Click web-01a.corp.local.

Open VM Console
1. Click on the Summary tab and verify that the VM has a 172.16.10.11 IP address.
2. Click the Launch Remote Console.

Verify that VM is isolated


Once the console window is open, click in the middle of the screen and type any
key to make the screen blanker go away.


1. Login as "root", with password "VMware1!" (no brackets in username or


password)
2. Enter"ping -c 3 172.16.10.1"
(Remember to use SEND TEXT tool)
ping -c 3 172.16.10.1

Wait until the ping times out: you have verified that the VM is isolated, as there are no
other devices on VLAN 101 and the L2 Bridging is not configured yet.


Migrate Web_Tier_Logical_Switch to Distributed Logical Router

1. Click the Home icon.
2. Select Network & Security.

Remove Web-Tier from Perimeter Gateway


1. Click NSX Edges.
2. Double click the edge-3 Perimeter-Gateway-01.


Remove Web-Tier Interface


1. Click Manage.
2. Click Settings.
3. Select Interfaces.
4. Highlight the Web_Tier interface.
5. Delete the Logical Switch from the Perimeter Gateway.

Click OK
Click OK.

Go back to Edges
1. Click Networking & Security back button to go back the Edges.


Select Distributed Router


1. Double click the Distributed-Router-01.

Add the Web-Tier Logical Switch


1. Click Manage.
2. Click Settings.
3. Select Interfaces from the left menu.
4. Click the green plus sign icon to add the Web-Tier Logical Switch.


Add the Web-Tier


1. Enter "Web-Tier" into the Name field.
2. Click Select next to the Connected To field.


Select the Web-Tier Logical Switch


1. Select the Web_Tier_Logical_Switch.
2. Click OK.


Add Primary IP Address


1. Click the green plus sign icon to configure the subnet primary IP address.
2. Enter "172.16.10.1" in the Primary IP Address field.
3. Enter "24" in the Subnet Prefix Length field.
4. Click OK.

Confirm Logical Switch addition to Distributed Logical Router

Confirm the Web-Tier Logical Switch deployed successfully.


Configure NSX L2 Bridging


You will now enable NSX L2 Bridging between VLAN 101 and the Web-Tier-01 Logical
Switch, so that the VM "web-01a" will be able to communicate with the rest of the network.
With NSX-V 6.2 it is now possible to have an L2 Bridge and a Distributed Logical Router
connected to the same Logical Switch. This is an important enhancement, as it
simplifies the integration of NSX into brownfield environments, as well as the migration
from legacy to virtual networking.
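For readers who prefer to script this step, the same bridging configuration can also be read and written through the NSX Manager REST API rather than the Web Client. The commands below are a hedged sketch only, not a required lab step: the NSX Manager hostname (nsxmgr-01a.corp.local), the admin credentials, and the Distributed Logical Router edge ID (edge-2) are assumptions that you should verify in your own environment, and the payload format should be checked against the NSX 6.2 API guide.

# Read the current L2 bridging configuration of the Distributed Logical Router
curl -k -u 'admin:VMware1!' \
  https://nsxmgr-01a.corp.local/api/4.0/edges/edge-2/bridging/config

# To add a bridge, the modified XML is sent back with an HTTP PUT to the same URL
# (payload format per the NSX API guide for your version)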

Create a new L2 Bridge


1. Click on the "Bridging" tab.
2. Verify that there are no configured bridging instances in the list, then click on the
Green plus icon to add one.


Enter Bridge Name


1. Enter "Bridge-01" in the "Name" input field.
2. Then click on the icon to select a Logical Switch to connect our Bridge to.

Select Logical Switch


1. Select the Web_Tier_Logical_Switch.
2. Click OK.


Open Distributed Port Group selection dialog


Click on the Distributed Port Group icon to open the selection dialog.

Select Distributed Port Group


1. Select the Bridged-Net-RegionA0-vDS-MGMT Distributed Port Group.
2. Click OK.


Confirm Bridge configuration


Verify the L2 Bridge configuration in the dialog and click OK

Publish Changes
To enable the L2 Bridging, click on the Publish Changes button, and wait until
the page refreshes.

Verify that routing is enabled


Verify the published configuration. You will notice the "Routing Enabled" message: it
means that this L2 Bridge is also connected to a Distributed Logical Router, which is an
enhancement in NSX-V 6.2.


Verify L2 Bridging
NSX L2 Bridging has been configured. You will now verify L2 connectivity between the
"web-01a" VM, attached on VLAN 101, and the machines connected to the "Web-Tier-01"
Logical Switch.


Verify Default Gateway Connectivity


1. Open the web-01a console tab from the task bar, and try to ping the default
gateway again
2. Enter ping -c 3 172.16.10.1
ping -c 3 172.16.10.1

The ping is now successful: you have verified connectivity between a VM attached on
VLAN 101 and the Distributed Logical Router that is the default gateway of the network,
through a L2 Bridge provided by NSX!
Note: you might experience "duplicate" pings during this test (responses appearing as
DUPs): this is due to the nature of the Hands-On Labs environment and is not going to
happen in a real scenario.


L2 Bridging Module Cleanup


If you want to proceed with the other modules of this Hands-On Lab, follow the steps
below to disable the L2 Bridging, as the example configuration created in this specific
environment could conflict with other sections, such as L2VPN.

Return to the Web Client


1. Click on the vSphere Web Client tab on the browser.

Access the NSX configuration page


1. Click on the Home icon.
2. Click "Networking & Security" from the menu.


Open Logical Router configuration page


1. From the Navigator menu on the left, click on "NSX Edges" and wait until the
list of NSX Edges is loaded.
2. Double-click on "edge-2" named "Local-Distributed-Router" to access its
configuration page.

Delete Bridge Instance


1. Click on the "Manage" tab, if not already selected.
2. Then click on the "Bridging" tab, if not already selected.
Note: we should see only the "Bridge-01" instance that you created before, which is
highlighted by default.
3.

Click on the Delete icon to destroy it.


Publish Changes
1. Click on the "Publish Changes" button to commit the configuration.

Verify Bridge Cleanup


Verify that the Bridge instance has been deleted.


Migrate Web-01a back to RegionA01-COMP01 Cluster


1. Click the Home icon.
2. Select VMs and Templates.


Migrate Web-01a
1. Expand the vcsa-01a.corp.local vCenter drop down.
2. Expand the RegionA01 data center drop down.
3. Right-click web-01a.corp.local.
4. Click Migrate.


Select Migration type


1. Select Change both compute resources and storage.
2. Select the option to Select compute resource first.
3. Click Next.


Select Compute Resource


1. Expand the vcsa-01a.corp.local vCenter drop down.
2. Expand the RegionA01 data center drop down.
3. Expand the RegionA01-COMP01 cluster.
4. Select esx-01a.corp.local.
5. Click Next.


Select Storage
1. Select the RegionA01-ISCSI01-COMP01 storage.
2. Click Next.


Select Destination Network


1. Click the drop down menu for the Destination Network.
2. Select vxw-dvs-43-virtualwire-1-sid-5000-Web_Tier_Logical_Switch.


Click Next for vMotion Priority


Click Next.


Click Finish
Click Finish.

Conclusion
Congratulations, you have successfully completed the NSX L2 Bridging module! In this
module we configured and tested the bridging of a traditional VLAN-backed PortGroup to
an NSX VXLAN Logical Switch.


Introduction to Hardware VTEP with Arista
The following module sections regarding Hardware VTEP are informational in nature. If
you would like to jump directly to the lab work, please advance to Module 6 - Distributed
Firewall.
In many data centers, some workloads have not been virtualized, or cannot be
virtualized. In order to integrate them into the SDDC architecture, NSX provides the
capability of extending virtual networking to the physical one by the way of Layer 2 or
Layer 3 gateways. This section will focus on the Layer 2 gateway feature, and how it can
be achieved natively on a host running NSX, but also through a third party hardware
device, like an Arista network switch that can still be controlled by NSX. As a platform,
NSX provides partners the capability of integrating their solution and build on the top of
the existing functionalities. NSX also enables an agile overlay infrastructure for Public
and Private cloud environments.
NSX as a platform operates efficiently using a network hypervisor layer, distributed
across all the hosts. However, in some cases, certain hosts in the network are not
virtualized and cannot implement natively the NSX components. NSX provides the
capability to bridge or route toward external, non-virtualized networks. Again, focusing
on the bridging solution, we will show how a Layer 2 gateway extends a logical Layer 2
network to a physical Layer 2 network, and some use cases for the Layer 2 gateway.


Arista Design Guide Copyright Information and Offline Demo Credit

This section of the lab module has used and modified material from the Arista
Design Guide for NSX for vSphere with Arista CloudVision, Version 1.0,
August 2015.
We would also like to give a special thank you to Francois Tallet, Sr. Technical
Product Manager from the VMware Networking and Security Business Unit,
USA, for walking us through his lab to provide the offline demonstration for the
following section of the module.


Use Case Review


NSX operates efficiently using a network hypervisor layer distributed across all the
hosts. However, in some cases certain hosts in the network are not virtualized and
cannot natively implement the NSX components. NSX therefore provides the capability to
bridge or route toward external, non-virtualized networks. This module focuses more
specifically on the bridging solution, where a Layer 2 gateway extends a logical
Layer 2 network to a physical Layer 2 network.
The main functionality that a Layer 2 gateway achieves is:
Map an NSX logical switch to a VLAN. The configuration and management of the Layer
2 gateway is embedded in NSX.
Traffic received on the NSX logical switch via a tunnel is de-encapsulated and
forwarded to the appropriate port/VLAN on the physical network. Similarly, VLAN traffic
in the other direction is encapsulated and forwarded appropriately on the NSX logical
switch.
NSX includes natively a software version of the Layer 2 gateway functionality, with a
data plane entirely implemented in the kernel of the Hypervisor, for maximum
performance. On top of that functionality, NSX as a platform allows the integration of
third party components to achieve layer-2 gateway functionality in hardware.

Component Overview
Several components are involved in the connection of the hardware gateway to NSX.
They are represented in the figure above.


The NSX controller is responsible for handling the interaction with the hardware
gateway. For this purpose, a connection is established between the NSX controller and a
dedicated piece of software called the Hardware Switch Controller (HSC). The HSC
can be embedded in the hardware gateway or can run as a separate standalone
appliance. The HSC can control one or several hardware gateways. Arista, for example,
leverages the CloudVision platform as a HSC, which acts as a single point of Integration
to NSX for all Arista hardware gateways. The HSC runs an OVSDB (Open vSwitch
Database) server, and the NSX controller connects as an OVSDB client. OVSDB is the
Open vSwitch Database Management Protocol detailed in RFC 7047. It is an open source
project that provides the capability of managing a database remotely.
The NSX controller will push the administrator-configured association between Logical
Switch and Physical Switch/Port/VLAN to the Hardware Gateway via the HSC. The NSX
Controller will also advertise a list of Replication Service Nodes (RSNs) that the
Hardware Gateway will leverage to forward Broadcast, Unknown unicast or Multicast
(BUM) traffic. The NSX Controller will advertise to the HSC the list of Hypervisor VTEPs
relevant to the Logical Switches configured on the Hardware Gateway. The NSX
Controller will also advertise to the HSC the association between the MAC address of the
VMs in the virtual network and the VTEP through which they can be reached.
Note: there can be several NSX controllers in an NSX deployment, providing
redundancy and scale-out. The tasks mentioned as being performed by the NSX
Controller are in fact shared across all the NSX Controllers in the network. The HSC will
connect to all controllers.
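Because the HSC exposes a standard OVSDB server using the hardware_vtep schema, the database that the NSX Controllers program can be inspected with the stock Open vSwitch client tools. The commands below are a generic, hedged illustration only; no HSC is deployed in this lab environment, and the HSC address (192.0.2.10) and the default OVSDB TCP port (6640) are assumed values.

# List the tables defined by the hardware_vtep schema on the HSC
ovsdb-client list-tables tcp:192.0.2.10:6640 hardware_vtep

# Dump the database contents, e.g. logical switch bindings and the remote
# MAC-to-VTEP entries pushed by the NSX Controllers
ovsdb-client dump tcp:192.0.2.10:6640 hardware_vtep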


NSX Integration with Arista CloudVision and Hardware Gateway
The Arista CloudVision platform provides network-wide visibility and a single point of
integration to NSX.
CloudVision's foundation is an infrastructure service, sharing and aggregating the working
state of physical switches running Arista EOS software to provide network visibility and
central coordination. State from each participating physical EOS node is registered to
CloudVision using the same publish/subscribe architecture of EOS's system database
(SysDB). As an example, CloudVision's VXLAN Control Service (VCS) aggregates
network-wide VXLAN state for integration and orchestration with VMware NSX.
CloudVision also provides redundant hardware L2 Gateways for NSX with the MLAG with
VXLAN functionality. MLAG with VXLAN on Arista switches provides nonblocking
active-active forwarding and redundancy with hitless failover in the event of switch failure. Also,
Arista CloudVision and Top of Rack switches internally run VCS, which each hardware
VTEP uses to share state with the others in order to establish VXLAN tunnels
without the need for a multicast control plane.

Operational Integration point


In operation, Arista CloudVision will register with the NSX controller and will use the
OVSDB protocol to synchronize topology information, MAC to VXLAN Endpoints, and
VXLAN ID bindings with NSX. CloudVision will appropriately program the Arista switch or
MLAG (Multi-Chassis Link Aggregation) switch pairs as the NSX hardware gateway. This
hardware gateway integration allows for nearly instantaneous synchronization of state
between physical and virtual VXLAN Tunnel Endpoints during any network change or
workload modification event.
VMware's NSX Manager front-ends the entire NSX network virtualization platform. Users
can also manage and operate workloads spanning virtual and non-virtualized systems
from NSX as a single pane of glass.


Arista's CloudVision platform provides a set of services that simplifies monitoring,
management and NSX integration with Arista switches in the virtualized data center.
Users can provision Arista VTEPs as gateway nodes via the NSX Manager UI. This speeds
up service delivery and helps businesses better address their needs in a programmatic
and automated way across data centers.
Deployment of Arista's CloudVision Platform requires two steps:
1. Enable VXLAN Control Service (VCS) on Arista ToR switches and on CloudVision.
2. Enable Hardware Switch Controller (HSC) Service on CloudVision.


Hands-on Lab Interactive Simulation: Hardware VTEP with Arista
This portion of the lab is presented as a Hands-on Labs - Interactive Simulation. This
simulation will enable you to navigate the software interface as if you are interacting
with a live environment.
1. Click here to open the interactive simulation. It will open in a new browser
window or tab.
2. When finished, click the "Return to the lab" link in the upper right hand corner
of the web browser window or close the window to continue with this tab.


Bridging Design Considerations


There is a significant difference between Hardware and Software Gateways:
A Logical Switch can only have one active bridging instance at a time (Software
Gateway).
On the other hand, there can be several Hardware Gateways extending the same
Logical Switch.
This has an impact on redundancy and on the scope of Layer 2 in the network. This
section of the module covers design considerations for implementing a hardware
gateway architecture with those constraints in mind. Best practices will be
referenced, but users planning to implement hardware gateways with NSX are advised
to consult a VMware professional for considerations specific to their own
environment.

Arista Design Guide Copyright Information


This section of the lab module was taken and modified from the Arista Design Guide
for NSX for vSphere with Arista CloudVision, Version 1.0, August 2015.


High Availability with MLAG


Arista supports MLAG toward the compute and VXLAN together on its wide variety of
switches, which provides hardware L2 Gateway redundancy for NSX. MLAG with VXLAN
on Arista switches provides nonblocking, active-active forwarding and redundancy with
hitless failover in the event of switch failure. Arista CloudVision abstracts the details of
Multi-Chassis LAG (MLAG) and presents a pair of MLAG switches as a single VTEP.
The fact that several Hardware Gateways can be active at the same time can also
influence the network design. Typically, a Logical Switch is extended to a VLAN in order
to provide connectivity to some service that cannot be virtualized. This service is usually
redundant, meaning that its physical implementation spans several different racks in
the data center.


Impact on the scope of Layer 2 in the network


Looking at the left side of the figure above, some virtual machines are attached to a
Logical Switch and access physical servers through a Software Gateway (Edge Services
Gateway). All the traffic from the Logical Switch to VLAN 10, where the physical
servers are located, has to go through a single bridging instance. This means that
VLAN 10 has to be extended between racks in order to reach all the necessary physical
servers.
The trend in data center networking in the last few years has been to try to reduce as
much as possible the span of Layer 2 connectivity in order to minimize its associated
risks and limitations. The right side of the figure above shows how this can
be achieved by leveraging separate active Hardware Gateways. In this alternate design,
each rack hosting physical servers is configured with a Hardware Gateway. Thanks to
this model, there is no need to extend Layer 2 connectivity between racks, as Logical
Switches can extend directly to the relevant Hardware Gateway when reaching physical
servers.
Note: VLANs defined in the racks have local significance (the example is showing that
the Logical Switch is extended to VLAN 10 in one rack and VLAN 20 in the other.)

NSX Components and Cluster Connectivity


The NSX functions and component operation are defined in the VMware NSX for vSphere
Network Virtualization Design Guide version 3.0 (available at
https://communities.vmware.com/docs/DOC-27683). The reader is strongly advised to
read the document in order to follow the rationale regarding connectivity to physical
network. The NSX component organization and functions are mapped to the appropriate
clusters. The VMware NSX for vSphere Network Virtualization Design Guide calls for
organizing NSX components, compute, and management of the virtualized environment.
The VMware NSX for vSphere Network Virtualization Design Guide recommends building
three distinct vSphere cluster types. The figure above shows an example of logical
components of cluster design to the physical rack placement.


Note: for even smaller configurations, a single rack can be used to provide
connectivity for the edge and management clusters. The key concept is that the edge
cluster configuration is localized to a ToR pair to reduce the span of Layer 2
requirements; this also helps localize the egress routing configuration to a pair of ToR
switches. The localization of edge components also allows flexibility in selecting the
appropriate hardware (CPU, memory and NIC) and features based on network-centric
functionalities such as firewall, NetFlow, NAT and ECMP routing.

Arista Switches and NSX Routing


The NSX edge gateway provides ECMP (Equal Cost Multi-path) based routing, which
allows up to eight VMs presenting 8-way bidirectional traffic forwarding from the NSX logical
domain to the enterprise DC core or Internet. This represents up to 80 Gbps (8 x 10GE
interfaces) of traffic that can be offered from the NSX virtual domain to the external
network in both directions. It's scalable per tenant, so the amount of bandwidth is
elastic as on-demand workloads and/or multi-tenancy expand or contract. The
configuration requirements to support the NSX ECMP edge gateway for N-S routing are as
follows:
VDS uplink teaming policy and its interaction with ToR configuration.
Requires two external VLAN(s) per pair of ToR.
Route Peering with Arista Switches.

VDS uplink design with ESXi Host in Edge cluster


The edge rack has multiple traffic connectivity requirements. First, it provides
connectivity for east-west traffic to the VXLAN domain via VTEP; second, it provides a
centralized function for external users/traffic accessing workloads in the NSX domain via
dedicated VLAN-backed port-groups. This latter connectivity is achieved by establishing
routing adjacencies with the next-hop L3 devices. The figure above depicts two types of
uplink connectivity from host containing edge ECMP VM.


Edge ECMP Peering and VLAN Design


Once the uplink-teaming mode is determined, the next step is to provide design
guidance around VLAN configuration and mapping to uplinks, as well as peering to Arista
switches. The first decision is how many logical uplinks should be deployed on each NSX
edge. The recommended design choice is to always map the number of logical uplinks
to the number of VDS dvUplinks available on the ESXi servers hosting the NSX edge VMs.
This means always mapping a VLAN (port-group) to a VDS dvUplink, which then maps to
a physical link on the ESXi host that connects to the Arista switch, over which an edge
VM forms a routing peer relationship with the Arista switch.
In the example shown above, NSX Edge ECMP VMs (E1-E8) are deployed on ESXi hosts
with two physical uplinks connected to the Arista ToR switches. Thus, the
recommendation is to deploy two logical uplinks on each NSX edge. Since an NSX edge
logical uplink is connected to a VLAN-backed port-group, it is necessary to use two
external VLAN segments to connect the physical routers and establish routing protocol
adjacencies. As shown in the figure above, each ECMP node peers over its respective
external VLANs to exactly one Arista router. Each external VLAN is defined only on one
ESXi uplink (in the figure above, external VLAN 10 is enabled on the uplink toward R1 while
external VLAN 20 is on the uplink toward R2). This is done so that under normal
circumstances both ESXi uplinks can be concurrently utilized to send and receive
north-south traffic, even without requiring the creation of a port-channel between the ESXi
host and the ToR devices.
In addition, with this model a physical failure of an ESXi NIC would correspond to a
logical uplink failure for the NSX edge running inside that host, and the edge would
continue sending and receiving traffic leveraging the second logical uplink (the second
physical ESXi NIC interface).


NSX Edge Routing Protocol Timer Recommendations


The NSX edge logical router allows dynamic as well as static routing. The
recommendation is to use a dynamic routing protocol to peer with the Arista switches in
order to reduce the overhead of defining static routes every time a logical network is
defined. The NSX edge logical routers support the OSPF and BGP routing protocols. The NSX
edge ECMP mode configuration supports reducing the routing protocol hello and
hold timers to improve traffic failure recovery in the case of node or link failure. The
minimum recommended timers for both OSPF and BGP are shown in the table above.

Benefits of NSX Architecture with Arista Infrastructure


NSX enables users to build logical services for networking and security without having
to make configuration changes to the physical infrastructure. In this case, once the
Arista switches are configured to provide IP connectivity and the routing configuration is
provisioned, we can continue to deploy new services with NSX.

Logical Layer Connectivity


Logical layer connectivity options for servers in the physical infrastructure allow for
them to be in different subnets, yet an overlay network enables VMs to be in the same
subnet and layer-2 adjacent, essentially providing topology-independent connectivity
and mobility beyond the structured topology constraint imposed by physical networking.

Routing to Physical Infrastructure


In order to route from the logical network to the physical network, NSX can learn and
exchange routes with the physical infrastructure to reach resources such as a database
server or a non-virtualized application, which could be located on a different subnet on a
physical network.
NSX provides a scale-out routing architecture with the use of ECMP between the NSX
distributed router and the NSX Edge routing instances as shown in the figure above. The
NSX Edges can peer using dynamic routing protocols (OSPF or BGP) with the physical
routers and provide scalable bandwidth. In the case of an Arista switch infrastructure, the
routing peer could be any Arista ToR.


Simplified Operation and Scalability


Arista's CloudVision solution simplifies the integration of physical and virtual
environments. CloudVision leverages a network-wide database, which collects the
state of the entire physical network and presents a single, open interface for VMware
NSX, to integrate with the physical network. Using CloudVision as the integration point
allows for the details of the physical network to be abstracted away from cloud
orchestrators and overlay controllers. In addition, CloudVision simplifies the NSX
integration effort because NSX only needs to integrate with CloudVision itself. This
allows customers to avoid the complication of certifying NSX with the many
combinations of hardware and software versions. CloudVision in turn provides the
aggregate state of the physical network for the most effective physical to virtual
synchronization. This approach provides a simpler and more scalable approach for
controller integration to the physical network.
Note: CloudVision is built on open APIs, including OVSDB and JSON, which provide a
standards-based approach for this integration.


Logical and Physical Network Visibility with Arista VMTracer

Arista's VM Tracer feature for NSX is natively integrated into Arista EOS. It automates
discovery of directly connected virtual infrastructure, streamlining dynamic provisioning
of related VLANs and port profiles on the network. Arista's switches utilize the VMware
vCenter and NSX Manager APIs to collect provisioning information. VM Tracer then
combines this information with data from the switch's database to provide a clear and
concise mapping of the virtual to physical network. Customers can get real-time
tracking of logical switches and VMs from a single CLI on any Arista switch in the
network.

Security with Distributed Firewall


NSX by default enables the distributed firewall on each VM at the vNIC level. The firewall
is always in the path of the traffic to and from the VM. The key benefit is that it can
reduce the security exposure at the root for east-west traffic and not at the centralized
location. Additional benefits of distributed firewall include:
Eliminating the number of hops (helps reduce bandwidth consumption to and
from the ToR) for applications traversing to a centralized firewall.
Flexible rules sets (rules sets can be applied dynamically, using multiple objects
available in vSphere such as logical SW, cluster and DC).
Allowing the policy and connection states to move with VM vMotion.
Developing an automated workflow with programmatic security policy
enforcement at the time of deployment of the VM via cloud management
platform, based on exposure criteria such as tiers of security levels per client or
application zone.

Flexible Application Scaling with Virtualized Load Balancer


Elastic application workload scaling is one of the critical requirements in today's data
center. Application scaling with a physical load balancer may not be sufficient given the
dynamic nature of self-service IT and DevOps-style workloads. The load-balancing
functionality natively supported in the NSX Edge Services Gateway covers most of the
practical requirements found in deployments. It can be deployed programmatically
based on application requirements with appropriate scaling and features. The scale and
application support level determines whether the load balancer can be configured with
layer-4 or layer-7 services. Topology-wise, the load balancer can be deployed either


in-line or in one-arm mode. The mode is selected based on specific application
requirements; however, the one-arm design offers extensive flexibility since it can be
deployed near the application segment and can be automated with the application
deployment.
The figure above shows the power of a software-based load balancer in which multiple
instances of the load balancer serve multiple applications or segments. Each instance of
the load balancer is an edge appliance that can be dynamically defined via an API as
needed and deployed in a high-availability mode. Alternatively, the load balancer can be
deployed in an in-line mode, which can serve the entire logical domain. The in-line load
balancer can scale by enabling a multi-tier edge per application, such that each
application is a dedicated domain: the first-tier edge is the gateway for an
application, and the second-tier edge can be an ECMP gateway providing scalable
north-south bandwidth.

Conclusion
The VMware network virtualization solution addresses current challenges with physical
network and computing infrastructure, bringing flexibility, agility and scale to VXLAN-based
logical networks. Along with the ability to create on-demand logical networks
using VXLAN, the NSX Edge gateway helps users deploy various logical network services
such as firewall, DHCP or NAT. This is possible due to its ability to decouple the virtual
network from the physical network and then reproduce the properties and services in
the virtual environment.
In conclusion, Arista and VMware are delivering the industry's first scalable best-of-breed
solution for network virtualization in the Software Defined Data Center. Cloud
providers, enterprises and web customers will be able to drastically speed business
services, mitigate operational complexity, and reduce costs. All of this is available now


from a fully automated and programmatic SDDC solution that bridges the virtual and
physical infrastructure.


Module 5 Conclusion
In this module we showed the capability of NSX to bridge VLAN networks into VXLAN
logical networks. We performed the configuration of a Bridge within the NSX Distributed
Logical Router, to map a VLAN to VXLAN. We also performed a migration of a VM from a
Logical Switch to a VLAN-backed dvPortGroup to simulate the communication between
VMs. And last, we clicked through an offline demo of NSX integration with an Arista
Hardware VTEP to show the extension of a L2 Gateway into a hardware switch.

You've finished Module 5


Congratulations on completing Module 5.
If you are looking for additional information on deploying NSX then review the NSX 6.2
Documentation Center via the URL below:
Go to http://tinyurl.com/hkexfcl
Proceed to any module below which interests you the most:
Lab Module List:
Module 1 - Installation Walk Through (30 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
Module 2 - Logical Switching (30 minutes) - Basic - This module will walk you
through the basics of creating logical switches and attaching virtual machines to
logical switches.
Module 3 - Logical Routing (60 minutes) - Basic - This module will help us
understand some of the routing capabilities supported in the NSX platform and
how to utilize these capabilities while deploying a three tier application.
Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
demonstrate the capabilities of the Edge Services Gateway and how it can
provide common services such as DHCP, VPN, NAT, Dynamic Routing and Load
Balancing.
Module 5 - Physical to Virtual Bridging (30 minutes) - Basic - This module will
guide us through the configuration of an L2 Bridging instance between a traditional
VLAN and an NSX Logical Switch. There will also be an offline demonstration of
NSX integration with Arista hardware VXLAN-capable switches.
Module 6 - Distributed Firewall (45 minutes) - Basic - This module will cover
the Distributed Firewall and creating firewall rules between a 3-tier application.
Lab Captains:
Module 1 - Michael Armstrong, Senior Systems Engineer, United Kingdom
Module 2 - Mostafa Magdy, Senior Systems Engineer, Canada
Module 3 - Aaron Ramirez, Staff Systems Engineer, USA
Module 4 - Chris Cousins, Systems Engineer, USA
Module 5 - Aaron Ramirez, Staff Systems Engineer, USA
Module 6 - Brian Wilson, Staff Systems Engineer, USA

How to End Lab


To end your lab click on the END button.


Module 6 - Distributed Firewall (45 minutes)


Distributed Firewall Introduction


NSX Distributed Firewall (DFW): one component of NSX is a distributed firewall
kernel module, installed in each vSphere host to enable the functionality. The
Distributed Firewall operates at near line-speed and has the resilience of the
vSphere host platform. It is also user-identity aware and provides unique activity
monitoring tools.
In this module you will explore how the distributed firewall helps protect a 3-tier
application. We will also demonstrate the firewall rule creation process based on
security groups rather than IP address based rules. IP address based rules impose hard
limits on mobile VMs and reduce the flexibility of using resource pools.
This module is based on four guest VMs making up a common 3-tier application. The
web tier has two web servers (web-01a and web-02a). The web tier communicates with
a VM named app-01a that is running application software, acting as the application
tier. The app tier VM in turn communicates with a VM named db-01a running MySQL in
the database tier. Enforcement of access rules between the tiers is provided by the NSX
Distributed Firewall.
The outline of this module is:
Distributed Firewall Basic Functionality
Check the status of the Distributed Firewall on vSphere hosts.
Verify full open communication to the web application and between the 3 tiers.
Block access to the 3-tier app and verify.
Create a security group for the web tier.
Create Firewall rules to allow secure access to the web application.

Start the module from your desktop. The desktop is your Control center jumpbox in
the virtual environment. From this desktop you will access the vCenter Server
Appliance deployed in your virtual datacenter.
Special Note: On the desktop you will find a file named README.txt. It
contains the CLI commands needed in the lab exercises. If you can't type
them, you can copy and paste them into the PuTTY sessions. If you see a
number with "french brackets - {1}", this tells you to look for that CLI
command for this module in the text file.


Confirm DFW Enablement


Use the vCenter web client to confirm the DFW is installed and enabled.

Launch the Google Chrome browser


1. Click on the Chrome browser icon from within the taskbar or on the desktop of
the main console.

Login to the vSphere Web Client


If you are not already logged into the vSphere Web Client:
(The home page should be the vSphere Web Client. If not, Click on the vSphere Web
Client Taskbar icon for Google Chrome.)
1. Type in administrator@vsphere.local into User name
2. Type in VMware1! into Password
3. Click OK


Gain screen space by collapsing the right Task Pane


Clicking on the Push-Pins will allow task panes to collapse and provide more
viewing space to the main pane. You can also collapse the left-hand pane to gain
the maximum space.


Explore the new NSX Distributed Firewall


1. Click on Networking & Security.

Open Installation
1. First click on Installation.
2. Click on the Host Preparation tab. The table will show the clusters in the
virtual datacenter.
You will see the Firewall is enabled for each cluster.
NSX is installed at the Cluster level, meaning that installation, removal, and
updates all are a cluster level definition. If later a new physical host is added to
the cluster it will have NSX added automatically. This provides a cluster level of
networking and security without fear of a VM migrating to a host without NSX.
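The same state can be confirmed from the command line on a prepared host. The commands below are an optional, hedged reference only; they assume SSH access to a prepared ESXi host such as esx-01a.corp.local and are not required for this module.

# List the dvfilter slots on the host; DFW filters appear with agent name vmware-sfw
summarize-dvfilter

# List the DFW filters, then show the rules applied to one of them
# (replace the filter name below with one returned by the previous commands)
vsipioctl getfilters
vsipioctl getrules -f nic-12345-eth0-vmware-sfw.2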


Configure Rules for Web Application Access

You will now configure Distributed Firewall access to a 3-tier application (Customer DB-app).
The application has two web servers, and one each of an application and
database server. There is also a Load Balancer servicing the two web servers.

Test the Customer DB app VM to VM connectivity using PuTTY

Next you will test communication and access between the network segments and guest
VMs making up the 3-tier application. Your first test will be to open a console to
web-01a.corp.local and ping the other members.
1. Click on the PuTTY shortcut on the desktop taskbar


Open SSH session to web-01a


1. Find and click on web-01a.corp.local in the Saved Sessions list
2. Click Open to connect the SSH session to web-01a.

Ping from web-01a to other 3-tier members


1. First you will show that web-01a can ping web-02a by entering:
ping -c 2 172.16.10.12

2. Now ping app-01a:
ping -c 2 172.16.20.11

3. Test ping to db-01a:
ping -c 2 172.16.30.11

(Note: You might see DUP! at the end of a Ping line. This is due to the nature of the
virtual lab environment using nested virtualization and promiscuous mode on the virtual
routers. You will not see this in production.)


Don't close the window; just minimize it for later use.

Demonstrate Customer DB application using a web browser

Using a browser you will access the 3-tier application to demonstrate the function
between the 3 parts.
1. Open a new browser tab
2. Click on the bookmark "Customer DB-app"

Demonstrate Customer DB application using a web browser (cont.)

You should get back data that passed from the web tier to the app-01a VM and finally
queried the db-01a VM.


A. Note the HTTPS connection to the Web Tier.
B. Note the TCP connection on port 8443 to the App Tier.
C. Note the TCP connection on port 3306 to the Database Tier.
Note the actual web server that responds; this server may be different than shown.


Change the default firewall policy from Allow to Block


In this section you will change the default Allow rule to Block and show communication
to the 3-tier Customer DB application to be broken. After that you will create new
access rules to re-establish communication in a secure method.
1. Click the browser tab for the vSphere Web Client.
2. Select Firewall on the left.
You will see the Default Section Layer3 on the General Section.


Examine the Default Rules


1. Expand the section using the "twistie."
Notice the Rules have green check marks. This means a rule is enabled. Rules are
built in the typical fashion with source, destination, and service fields. Services are a
combination of protocols and ports.
The last Default Rule is a basic any-to-any-allow.


Explore the Last Default Rule


Scroll to the right and you can see the Action choices for the Default Rule by placing the
cursor in the field for Action:Allow. This will bring up a pencil sign that allows you to
see the choices for this field.
1. Hover and Click on the Pencil Sign.

Change the Last Default Rule Action from Allow to Block


1. Select the Block action choice.
2. Click Save.


Publish the Default Rule changes


You will notice a green bar appears announcing that you now need to choose either to
Publish Changes, Revert Changes or Save Changes. Publish pushes to the DFW. Revert
cancels your edits. Save Changes allows you to save and publish later.
1. Select Publish Changes to save your block rule.

Re-open PuTTY Session


1. Click your PuTTY session for web-01a on the taskbar.


Verify the Rule change blocks communication


To test the block rule, use your previous PuTTY and browser sessions.
PuTTY: In a few moments the PuTTY session will show it is no longer active, because the
default rule now blocks everything including SSH. Minimize the console again.


Verify browser access denied


1. Click the tab for the webapp.corp.local.
2. Click the Refresh Button.
The site will time out in a few seconds to show that access is blocked by the Default
Rule set to Block.


Security Group Creation


We will now create Security Groups. Security Groups give us the ability to create
reusable containers of VMs to which we can apply policies. Membership in the groups
can be established statically, dynamically, or both.

Create 3-Tier Security Groups


1. Click on the browser tab for vSphere Web Client.
2. Then click on Service Composer.
Service Composer defines a new model for consuming network and security services in
virtual and cloud environments. Policies are made actionable through simple
visualization and consumption of services that are built in or enhanced by 3rd party
solutions. These same policies can be made repeatable through export/import
capabilities, which helps make it easier to stand up and recover an environment
when there is an issue. One of those objects for repeatable use is a Security Group.
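Security Groups can also be created programmatically. The call below is a hedged sketch of the NSX REST API approach and is not part of the lab steps; the NSX Manager hostname (nsxmgr-01a.corp.local) and credentials are assumed lab values, and the payload should be validated against the NSX 6.2 API guide before use.

# Create a security group named Web-tier at the global scope (members can be
# added in the same payload or later through the UI/API)
curl -k -u 'admin:VMware1!' -X POST \
  -H 'Content-Type: application/xml' \
  -d '<securitygroup><name>Web-tier</name><description>Web tier VMs</description></securitygroup>' \
  https://nsxmgr-01a.corp.local/api/2.0/services/securitygroup/bulk/globalroot-0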


Add Security Group


1. Select Security Groups. Note: there may be existing security groups to be used
in another lab module.
2. To add a new security group, click the New Security Group icon.

New Security Group - Web


1. Name this first group Web-tier
2. Click "Select objects to include" section


Select objects to include


1. Pull down the Object Types and select Virtual Machines.
2. You can filter by typing web into the search window.
3. Select web-01a.corp.local.
4. Click the right-hand arrow to push the VM to the Selected Objects window.
5. Repeat for web-02a.corp.local.
6. Click Finish.
Note: As a shortcut you can double-click the VMs on the left and they will move to the
right in this one step.


Verify Security Group Creation


You have created a security group named Web-tier with 2 VMs assigned.
Note the Web-tier security group.
Note the number of included VMs in the security group.


Access Rule Creation


Create 3-Tier access rules for the Customer DB application.

Create 3-Tier Access Rules


Next you will add new rules to allow access to the web vm and then set up access
between the tiers.
1. On the left hand menu, choose Firewall.


Add New Rule Section for 3-Tier Application


1. On the far right of the "Flow Monitoring & Trace Flow Rules-Disabled by Default
(Rule1)" row click on Add Section which looks like a folder.
2. Name the section "Customer DB-app".
3. Click Save.

Add Rule to New Section


1. On the row for the new "Customer DB-app" section click on the Add rule icon
which is a green plus-sign.


Edit New Rule


1. Click the "twistie" to open the rule.
2. Hover to the upper right corner of the "Name" field until a pencil icon appears,
then click on the pencil.
3. Enter "EXT to Web" for the name.
4. Click Save.

Set Rule Source and Destination


Source: Leave the Rule Source set to any.
1. Hover the mouse pointer in the Destination field and select the Destination
pencil sign.

HOL-1703-SDC-1

Page 460

HOL-1703-SDC-1

Set Security Group values


Specify Destination:
1. Pull down the Object Type and scroll down until you find Security Group.
2. Click on Web-tier.
3. Click on the top arrow to move the object to the right.
4. Click OK.

Set Service
1. Hover and Click the pencil in the Service Field.


Set Rule Service


Again hover in the Service field and click on the pencil sign.
1. In the search field you can search for service pattern matches. Enter "https"and
press enter to see all services associated with the name https.
2. Select the simple HTTPS service.
3. Click on the top arrow.
4. Note: Repeat the above steps 1-3 to find andadd SSH. (You will see later in
the module that we need SSH.)
5. Click OK.


Create Rule to Allow Web Security Group Access to App Logical Switch

You will now add a second rule to allow the Web Security Group to access the App
Security Group via the App port.
1. Start by clicking the pencil sign.
2. You want this rule to be processed below the previous rule, so choose Add Below
from the drop-down box.

Create Second Rule Name and Source fields


1. As you did before hover the mouse over the Name field and click the plus-sign.
Enter "Web to App" for the name.
2. Choose the Security Group Object Type: Web-tier for the Source field.


Set Destination
1. Hover and Click the pencil in the Destination Column.


Create Second Rule Destination field: Choose Logical Network

In the first rule you used the Web-tier security group as the destination. You could
proceed with the remaining rules in the same fashion, but as you can see from the
drop-down, you can use several vCenter objects already defined. A powerful time-saving
aspect of the integrated vSphere with NSX Security is that you can use existing virtual
datacenter objects for your rules rather than having to start from scratch. Here you will use a
VXLAN Logical Switch as the destination. This allows you to create a rule that applies
to any VM attached to this network.
1. Scroll down in the Object Type drop-down and click on the Logical Switch choice.
2. Select App_Tier_Logical_Switch.
3. Click on the top arrow to move the object to the right.
4. Click OK.


Set Service
1. Hover and Click the pencil in the Service Column.


Create Second Rule Service Field: New Service


The 3-tier application uses TCP port 8443 between the web and app tiers. You will create
a new Service called MyApp to be the allowed service (an optional API sketch for the
same object follows these steps).
Click the pencil icon for the Service field.
1. Click on New Service.
2. Enter MyApp for the new service name.
3. Select TCP for the Protocol.
4. Enter 8443 for the Port number.
5. Click OK.
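The sketch below shows how the same custom service object could be created through the NSX REST API instead of the UI. It is illustrative only; the endpoint and XML element names follow the NSX 6.2 API guide as best recalled, and the NSX Manager hostname and credentials are assumed lab values, so verify them before use.

# Create a custom application (service) object for TCP port 8443 at the global scope
curl -k -u 'admin:VMware1!' -X POST \
  -H 'Content-Type: application/xml' \
  -d '<application><name>MyApp</name><element><applicationProtocol>TCP</applicationProtocol><value>8443</value></element></application>' \
  https://nsxmgr-01a.corp.local/api/2.0/services/application/globalroot-0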


Click OK
1. Click OK.


Create Third Rule: Allow Logical Switch App to Access Logical Switch Database

Repeating the steps: on your own, create the third and last rule giving access between
the App-tier and the Database-tier.
1. Create the final rule allowing the App Logical Switch to communicate
with the Database Logical Switch via the predefined service for MySQL. The
service is predefined, so you will only have to search for it rather than create it.
2. Publish Changes.
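If you would like to see the rules you just published outside of the Web Client, the complete Distributed Firewall configuration can be retrieved from NSX Manager over its REST API. This is an optional, read-only sketch; the NSX Manager hostname and credentials are assumed lab values.

# Retrieve the full DFW configuration, including the "Customer DB-app" section
# and the rules created in this module
curl -k -u 'admin:VMware1!' \
  https://nsxmgr-01a.corp.local/api/4.0/firewall/globalroot-0/config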


Verify New Rules Allow Customer DB Application Communication

1. Open your browser and return to the tab you used previously for the Web App.
2. Refresh the browser to show you are getting the data via the Customer DB app.
NOTE: If you do not have a tab already open, or you closed the previous one, use the
"Customer DB-App Direct Connect" favorite in the favorites bar.


Restart PuTTY Session to web-01a


1. Click the Session icon in the upper left
2. Click Restart Session.

Ping Test between Tiers


Try to ping 3-tier application guest VMs.
Note: Remember to use the SEND TEXT option.
1. Ping web-02a.
ping -c 2 172.16.10.12

2. Ping app-01a.
ping -c 2 172.16.20.11

3. Ping db-01a.
ping -c 2 172.16.30.11

The pings will fail, as ICMP is not allowed between tiers or tier members in your rules.
Because you did not allow ICMP between the tiers, the Default Rule now blocks
that traffic.


Minimize PuTTY Session to web-01a.


Topology After Adding Distributed Firewall Rules for the Customer DB 3-Tier Application

The diagram shows the relative enforcement point of the vNIC-level firewall. Although
the DFW is a Kernel Loadable Module (KLM) of the vSphere ESXi host, the rules are
enforced at the vNIC of the guest VM. This protection moves with the VM during
vMotion to provide complete full-time protection, not allowing for a "window of
opportunity" during which the VM is susceptible to attack.


Module 6 Conclusion
In this module we have used the Distributed Firewall (DFW) feature within NSX to
provide security policies for a typical 3 tier application. This module illustrates how we
can provide a small set of rules that can be applied to a large number of VMs. We can
use Micro-Segmentation to secure thousands of VMs in our environments with very little
administrative intervention.

You've finished Module 6


Congratulations on completing Module 6.
If you are looking for additional information on deploying NSX then review the NSX 6.2
Documentation Center via the URL below:
Go to http://tinyurl.com/hkexfcl
Proceed to any module below which interests you the most:
Lab Module List:
Module 1 - Installation Walk Through (30 minutes) - Basic - This module will
walk you through a basic install of NSX including deploying the .ova, configuring
NSX Manager, deploying controllers and preparing hosts.
Module 2 - Logical Switching (30 minutes) - Basic - This module will walk you
through the basics of creating logical switches and attaching virtual machines to
logical switches.
Module 3 - Logical Routing (60 minutes) - Basic - This module will help us
understand some of the routing capabilities supported in the NSX platform and
how to utilize these capabilities while deploying a three tier application.
Module 4 - Edge Services Gateway (60 minutes) - Basic - This module will
demonstrate the capabilities of the Edge Services Gateway and how it can
provide common services such as DHCP, VPN, NAT, Dynamic Routing and Load
Balancing.
Module 5 - Physical to Virtual Bridging (30 minutes) - Basic - This module will
guide us through the configuration of an L2 Bridging instance between a traditional
VLAN and an NSX Logical Switch. There will also be an offline demonstration of
NSX integration with Arista hardware VXLAN-capable switches.
Module 6 - Distributed Firewall (45 minutes) - Basic - This module will cover
the Distributed Firewall and creating firewall rules between a 3-tier application.
Lab Captains:
Module 1 - Michael Armstrong, Senior Systems Engineer, United Kingdom
Module 2 - Mostafa Magdy, Senior Systems Engineer, Canada
Module 3 - Aaron Ramirez, Staff Systems Engineer, USA
Module 4 - Chris Cousins, Systems Engineer, USA
Module 5 - Aaron Ramirez, Staff Systems Engineer, USA
Module 6 - Brian Wilson, Staff Systems Engineer, USA

How to End Lab


To end your lab click on the END button.


Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit
http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-1703-SDC-1
Version: 20160906-074559
