VMware Cloud on AWS:

NSX Networking
and Security
HUMAIR AHMED
GILLES CHEKROUN
NICO VIBERT

Foreword by
Tom Gillis
Senior Vice President/General Manager
– NSBU, VMware
VMWARE PRESS

Technical Writer
Rob Greanias

Design Agency
Mitchell Design

Warning & Disclaimer


Every effort has been made to make this book as complete and as accurate
as possible, but no warranty or fitness is implied. The information provided is
on an "as is" basis. This book is based on the VMware Cloud on AWS SDDC
version available at the time of writing, SDDC version 1.7. The authors, VMware
Press, VMware, and the publisher shall have neither liability nor responsibility
to any person or entity with respect to any loss or damages arising from the
information contained in this book. The opinions expressed in this book
belong to the authors and are not necessarily those of VMware.

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA


Tel 877-486-9273 Fax 650-427-5001  www.vmware.com.
Copyright © 2019 VMware, Inc. All rights reserved. This product is protected
by U.S. and international copyright and intellectual property laws. VMware
products are covered by one or more patents listed at
http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware,
Inc. and its subsidiaries in the United States and/or other jurisdictions. All
other marks and names mentioned herein may be trademarks of their
respective companies.
Table of Contents
Preface..................................................................................................... XVI

Foreword................................................................................................. XVII

Chapter 1 - Introduction..................................................................................1

Chapter 2 - Traditional Cloud Challenges and Why VMware Cloud on AWS..... 5


Traditional Cloud Challenges................................................................... 6
Why VMware Cloud on AWS................................................................... 7

Chapter 3 - VMware Cloud on AWS Use Cases...............................................8


Data Center Extension............................................................................ 8
Migration...............................................................................................10
Disaster Recovery...................................................................................11

Chapter 4 - NSX Architecture and Features in VMware Cloud on AWS...........13


About NSX............................................................................................ 13
NSX Features......................................................................................... 14
VMware Cloud on AWS SDDC and NSX Architecture...............................18
Accessing VMware Cloud on AWS SDDC............................................... 23
Access to vCenter................................................................................. 26

Chapter 5 - NSX Networking in VMware Cloud on AWS SDDC.....................30


Routing................................................................................................ 30
Network Segments............................................................................... 33
DHCP................................................................................................... 35
DHCP Relay.......................................................................................... 36
DNS......................................................................................................37
NAT, Public IPs, and Internet Access...................................................... 39
NSX Connectivity.................................................................................. 45
Direct Connect......................................................................................45
Route-Based IPSEC VPN..................................................................... 56
Policy-Based IPSEC VPN.......................................................................63
AWS Transit Gateway............................................................................ 64
AWS Transit Gateway and multiple AWS accounts................................... 72
L2VPN.................................................................................................. 74
Connected VPC................................................................................... 80

Chapter 6 - NSX Security in VMware Cloud on AWS SDDC........................... 83
Role Based Access Control (RBAC)........................................................ 84
Grouping Objects.................................................................................. 85
Groups Based on IP Address................................................................ 85
Groups Based on VM Instance.............................................................. 86
Groups Based on VM Name.................................................................. 86
Groups Based on Security Tags..............................................................87
Management Groups and Workload Groups.......................................... 90
Edge Firewall........................................................................................ 92
Distributed Firewall (DFW).................................................................... 94
Distributed Firewall Concepts................................................................94
Distributed Firewall on VMware Cloud on AWS...................................... 96
Distributed Firewall Sections................................................................. 97
Sections and Rules..............................................................................100

Chapter 7 - NSX Operational Tools in VMware Cloud on AWS SDDC............ 101


Port Mirroring...................................................................................... 101
VMware Log Intelligence......................................................................104
IPFIX on VMware Cloud on AWS..........................................................109
vRealize Network Insight....................................................................... 112
Path from a VMware Cloud on AWS VM to/from an On‑premises VM..... 113
Path within VMware Cloud on AWS SDDC............................................. 114
Micro-segmentation Planning............................................................... 115

Chapter 8 - VMware Cloud on AWS APIs and Automation............................116


Cloud Services Platform (CSP) APIs.......................................................118
API Token and Access Token.................................................................119
Create the API Token............................................................................ 119
Get the Access Token.......................................................................... 120
Python Example to Create the Access Token:........................................ 121
VMware Cloud on AWS APIs.................................................................121
Developer Center................................................................................ 122
NSX-T Policy APIs................................................................................ 123
NSX-T Manager................................................................................... 124

NSX-T Proxy URL................................................................................. 124
Python Example to Retrieve the NSX-T Reverse Proxy URL...................124
Network Segments Example................................................................ 125
Management Gateway Firewall Rules.................................................... 126
Compute Gateway Firewall Rules.......................................................... 127
SDDC vSphere APIs............................................................................. 128
Automation with PowerShell and PowerCLI.......................................... 129
What is PowerCLI?...............................................................................130
Install PowerCLI...................................................................................130
Install VMware Cloud on AWS Module..................................................130
Install the NSX-T Module.......................................................................131
VMware Cloud on AWS PowerCLI Functions..........................................131
Start Using PowerCLI for VMware Cloud on AWS.................................. 132

Chapter 9 - Deploying Solutions in VMware Cloud on AWS SDDC............... 134


Hybrid Cloud with Workload Mobility Between On Prem and Cloud....... 134
HCX Details......................................................................................... 135
Web Applications in the Cloud and Load Balancing............................... 137
VMware Cloud on AWS Applications Using Native AWS Services...........140
Integrating AWS Native Storage Solutions with VMware Cloud on AWS.. 141
Access an RDS Database from VMware Cloud on AWS......................... 144
Leveraging AWS Web-Facing Apps with the VMware Cloud on AWS
SDDC..................................................................................................145
Accessing Managed Directory Services from VMware Cloud on AWS.... 146
Integration with Next-Gen Firewalls...................................................... 147
Disaster Recovery.................................................................................151

Summary...................................................................................................154

Appendix A: Account Structure and Data Charges...................................... 155


The VMware Cloud SDDC Account....................................................... 155
The AWS Customer Account/Connected VPC....................................... 155

Index.........................................................................................................158

List of Figures
Figure 1  VMware Cloud on AWS Solution........................................................... 2
Figure 2  VMware Cloud on AWS Underlay and Infrastructure............................. 2
Figure 3  Deploying VMware Cloud on AWS SDDC............................................. 3
Figure 4  Deployed VMware Cloud on AWS SDDCs............................................ 3
Figure 5  VMware Cloud on AWS Connectivity.................................................... 4
Figure 6  Data Center Extension Use Case.......................................................... 8
Figure 7  VMware Cloud on AWS - Data Center Extension.................................. 9
Figure 8  Migration Use Case..............................................................................10
Figure 9  VMware Cloud on AWS - Migration......................................................11
Figure 10  Disaster Recovery Use Case................................................................11
Figure 11  VMware Cloud on AWS - Disaster Recovery....................................... 12
Figure 12  NSX Features in VMware Cloud on AWS............................................ 14
Figure 13  Deploying VMware Cloud on AWS SDDC - Subnet Selection.............18
Figure 14  VMware Cloud on AWS SDDC - MGW and CGW...............................19
Figure 15  vSphere Client Accessing a VMware Cloud on AWS SDDC...............20
Figure 16  VMware Cloud on AWS SDDC........................................................... 21
Figure 17  Deployed VMware Cloud on AWS SDDCs......................................... 23
Figure 18  Deployed VMware Cloud on AWS SDDC.......................................... 23
Figure 19  VMware Cloud on AWS SDDC - Networking & Security Tab.............. 24
Figure 20  VMware Cloud on AWS Solution Design and Capabilities................. 25
Figure 21  VMware Cloud on AWS SDDC - Settings Tab.................................... 26
Figure 22  Default Firewall Rules on MGW Firewall.............................................27
Figure 23  MGW Firewall Rule Allowing vCenter Access Over Internet................27
Figure 24  vCenter Access Over Internet............................................................ 28
Figure 25  vCenter Access Over Direct Connect Private VIF.............................. 28
Figure 26  Changing DNS Resolution to Private IP under Settings Tab.............. 29
Figure 27  MGW Firewall Rule Allowing vCenter Access for Specific Workloads
Over Direct Connect Private VIF................................................... 29
Figure 28  NSX Distributed Routing...................................................................30
Figure 29  Active/Standby NSX Edge Appliances............................................... 31
Figure 30  VMware Cloud on AWS SDDC.......................................................... 31
Figure 31  NSX Edge Connectivity...................................................................... 32
Figure 32  NSX Network Segments................................................................... 33
Figure 33  NSX Network Segment Types........................................................... 33
Figure 34  L2VPN Tunnel Needed Before Creation of Extended Network......... 34
Figure 35  Creating a Disconnected Network.................................................... 34
Figure 36  Enabling NSX DHCP......................................................................... 35
Figure 37  Creating NSX Network Segment Using NSX DHCP.......................... 35
Figure 38  NSX DHCP Relay.............................................................................. 36

Figure 39  NSX DHCP Relay Configuration........................................................ 36
Figure 40  NSX DNS...........................................................................................37
Figure 41  Multiple DNS Zones for CGW DNS.....................................................37
Figure 42  Changing DNS Resolution to Private IP under Settings Tab.............. 38
Figure 43  CGW Source NAT Public IP............................................................... 39
Figure 44  Created NSX Network Segments......................................................40
Figure 45  Compute Traffic Flow for Internet Access..........................................40
Figure 46  Requesting Public IP......................................................................... 41
Figure 47  Allocated Public IP............................................................................ 42
Figure 48  NSX NAT Configuration.................................................................... 42
Figure 49  Traffic Flow for Incoming Traffic from the Internet to Compute.......... 43
Figure 50  Traffic Flow for Outgoing Traffic from the Compute to Internet.......... 43
Figure 51  CGW Firewall Behavior Configuration................................................ 44
Figure 52  Applied CGW Firewall Rule on Internet Interface.............................. 44
Figure 53  Hybrid Cloud Design with Direct Connect......................................... 45
Figure 54  Hybrid Cloud Design with Direct Connect and Customer Router
at DX Location............................................................................... 45
Figure 55  Direct Connect Private VIF Design.................................................... 48
Figure 56  Direct Connect Private VIF Setup..................................................... 49
Figure 57  VMware Cloud on AWS SDDC - Direct Connect Private VIF
Configuration................................................................................50
Figure 58  Direct Connect Private VIF - Editing BGP Local ASN......................... 51
Figure 59  Encrypting Traffic with Route Based IPSEC VPN
Over Direct Connect Private VIF................................................... 52
Figure 60  Direct Connect Private VIF - Configuring VPN as Backup................ 53
Figure 61  Direct Connect Private VIF and VPN as Backup Design.................... 53
Figure 62  Direct Connect Private VIF - BGP Behavior...................................... 54
Figure 63  Direct Connect Private VIF - Advertised and Learned Routes........... 55
Figure 64  Direct Connect Private VIF - Advertising Default Route from
On Premises.................................................................................. 56
Figure 65  Route Based IPSEC VPN Design...................................................... 56
Figure 66  Public and Private IP Address for Route Based IPSEC VPN.............. 57
Figure 67  Route Based IPSEC VPN Configuration............................................ 57
Figure 68  Configuring BGP Local and Remote IP/Subnet................................ 58
Figure 69  Redundancy with Route Based IPSEC VPN Using Two BGP Local
IPs/VTIs........................................................................................ 59
Figure 70  Redundancy with Route Based IPSEC VPN Using Hardware
Cluster On Premises...................................................................... 59
Figure 71  Network Advertisements via Route Based IPSEC VPN from SDDC
to On Premises..............................................................................60

Figure 72  Route Based IPSEC VPN Connectivity to Different
Endpoints/Devices........................................................................60
Figure 73  Route Based IPSEC VPN - DOWNLOAD CONFIG............................61
Figure 74  Route Based IPSEC VPN - VIEW STATISTICS....................................61
Figure 75  Route Based IPSEC VPN - ADVERTISED ROUTES.......................... 62
Figure 76  Route Based IPSEC VPN - LEARNED ROUTES................................ 62
Figure 77  Policy Based IPSEC Configuration.................................................... 63
Figure 78  AWS Transit Gateway Design............................................................ 64
Figure 79  AWS Transit Gateway Design - Shared Connectivity to
On Premises.................................................................................. 65
Figure 80  AWS Transit Gateway Design - Direct Connect Private VIF
Used by Spoke SDDC...................................................................66
Figure 81  AWS Console - Deployed Transit Gateway........................................ 67
Figure 82  AWS Console - Transit Gateway Attachments................................... 67
Figure 83  AWS Console - Transit Gateway VPN Established to VMware Cloud
on AWS SDDC 1............................................................................68
Figure 84  AWS Console - Transit Gateway VPN Established to VMware Cloud
on AWS SDDC 2...........................................................................68
Figure 85  AWS Console - AWS VPC 1 Route Table...........................................69
Figure 86  AWS Console - AWS VPC 2 Route Table..........................................69
Figure 87  AWS Console - Transit Gateway Route Table..................................... 70
Figure 88  VMware Cloud on AWS SDDC Route Based IPSEC Configuration
to TGW.......................................................................................... 70
Figure 89  VMware Cloud on AWS SDDC Learned Routes from TGW................ 71
Figure 90  Route Based IPSEC VPN ECMP Design with TGW............................ 71
Figure 91  AWS Console - AWS Resource Access Manager................................72
Figure 92  AWS Console - Accepting Resource Share....................................... 73
Figure 93  AWS Console - Transit Gateway Route Table Showing
New Learned Route...................................................................... 73
Figure 94  L2VPN over Direct Connect Private VIF Design.................................74
Figure 95  Extending Networks to VMware Cloud on AWS SDDC with L2VPN
and No NSX On Premises..............................................................74
Figure 96  Created L2VPN Server and Extended Network in VMware Cloud
on AWS SDDC.............................................................................. 75
Figure 97  Creating an Extended Network......................................................... 75
Figure 98  Deploying Unmanaged L2VPN Client OVF On Premises................. 76
Figure 99  Uploading the Unmanaged L2VPN Client OVF Files........................ 76
Figure 100  Selecting Networks During Unmanaged L2VPN
Client Deployment.........................................................................77

Figure 101  Configuring IP Address/Subnet and Peer Address During
Unmanaged L2VPN Client Deployment........................................ 78
Figure 102  Configuring TCP Setting and VLAN/Tunnel ID During
Unmanaged L2VPN Client Deployment........................................ 79
Figure 103  Extending Networks to VMware Cloud on AWS SDDC with L2VPN
and NSX-V On Premises............................................................... 79
Figure 104  Connected VPC via ENI Design.......................................................80
Figure 105  AWS Console - Connected VPC Active ENI.....................................81
Figure 106  AWS Console - Connected VPC Route Table...................................81
Figure 107  VMware Cloud on AWS - Connected VPC Configuration............... 82
Figure 108  VMware Cloud on AWS SDDC - Networking & Security Tab........... 84
Figure 109  VMware Cloud on AWS RBAC - Identity and Access Management.85
Figure 110  NSX Grouping Object Based on IP Address..................................... 85
Figure 111  NSX Grouping Object Based on VM Instance....................................86
Figure 112  NSX Grouping Object Based on VM Name......................................86
Figure 113  Configuring NSX Grouping Object Based on VM Name................... 87
Figure 114  VMs and Associated NSX Security Tags............................................88
Figure 115  Configuring NSX Grouping Object Based on NSX Security Tag........88
Figure 116  Configuring Matching Criteria for NSX Grouping Object Based on
NSX Security Tag...........................................................................88
Figure 117  Feature to View Members of an NSX Grouping Object.....................89
Figure 118  Results of Viewing Members of an NSX Grouping Object.................89
Figure 119  Available Options for Defining NSX Grouping Objects......................90
Figure 120  Creating Security/Firewall Policies Using NSX Grouping Objects.....90
Figure 121  NSX Management Groups.................................................................91
Figure 122  NSX Workload Groups......................................................................91
Figure 123  Applying CGW and MGW Firewall Rules and Design....................... 92
Figure 124  Using ‘Apply To’ to Apply CGW Firewall Policies
on Specific Interfaces.................................................................... 93
Figure 125  MGW Firewall Rules Using Management Groups............................. 93
Figure 126  MGW System Defined Groups......................................................... 94
Figure 127  NSX DFW - Micro-segmentation and Design.................................. 95
Figure 128  NSX DFW - East/West Traffic Firewalling and Design..................... 95
Figure 129  NSX DFW GUI................................................................................. 97
Figure 130  NSX DFW Grouping Sections - Infrastructure Rules Example.........98
Figure 131  NSX DFW Grouping Sections - Environment Rules Example............98
Figure 132  NSX DFW Grouping Sections - Application Rules Example.............99
Figure 133  Created ‘Deny All’ DFW Rule for Whitelisting Policy........................99
Figure 134  Created DFW Section.................................................................... 100

Figure 135  Created NSX DFW Application Rule.............................................. 100
Figure 136  MGW Firewall Rule to Allow Communication from the ESXi hosts
in the SDDC to Wireshark.............................................................102
Figure 137  VMware Cloud on AWS SDDC Design for Port Mirroring
to Wireshark.................................................................................103
Figure 138  Configured Port Mirroring Session..................................................103
Figure 139  Examining Wireshark Capture.........................................................104
Figure 140  Configured Port Mirroring Session for Egress-Only Traffic..............104
Figure 141  VMware Log Intelligence Available for VMware Cloud on AWS........105
Figure 142  Enabling NSX-T Firewall Logs Collection.........................................105
Figure 143  Enabling VMware - NSX-T for VMware Cloud on AWS
Content Pack................................................................................106
Figure 144  Enabling Logging on MGW Firewall Rule........................................106
Figure 145  VMware Log Intelligence NSX Firewall Logs...................................106
Figure 146  Examining VMware Log Intelligence NSX Firewall Logs.................. 107
Figure 147  Configuring Splunk Data Inputs....................................................... 107
Figure 148  Configuring Splunk HTTP Event Collector.......................................108
Figure 149  VMware Log Intelligence Log Forwarding to Splunk.......................108
Figure 150  VMware Log Intelligence Log Forwarding Configuration................108
Figure 151  VMware Log Intelligence - Example NSX Firewall Logs...................109
Figure 152  IPFIX Collector Configuration...........................................................110
Figure 153  VMware Cloud on AWS Design for IPFIX with Collector in
Connected VPC............................................................................110
Figure 154  Configuring IPFIX Session................................................................111
Figure 155  Configured IPFIX Session.................................................................111
Figure 156  IPFIX Collector Example (FlowMon).................................................112
Figure 157  VMware Network Insight Example - Analyze and Display Traffic
Flows Between On-Premises Workloads and SDDC Workloads..113
Figure 158  VMware Network Insight Example - Analyze and Display Traffic
Flows Between Workloads in SDDC.............................................114
Figure 159  VMware Network Insight Example - Analyze application traffic
patterns and receive firewall rule recommendations......................115
Figure 160  VMware Cloud on AWS Solution and APIs...................................... 117
Figure 161  Navigating to VMware Cloud on AWS User Account........................119
Figure 162  Generating API Token.....................................................................120
Figure 163  VMware Cloud on AWS Developer Center..................................... 122
Figure 164  VMware Cloud on AWS Developer Center - API Explorer.............. 123
Figure 165  Org ID and SDDC ID in Support Tab............................................... 125
Figure 166  Configured MGW Rule for vCenter Access..................................... 127
Figure 167  CGW Firewall Rule Created from REST API Call.............................. 128

Figure 168  vSphere Web Client Displaying Compute Resource Pool............... 129
Figure 169  vSphere Web Client Displaying Datastore for Workloads............... 129
Figure 170  Search for PowerCLI Module..........................................................130
Figure 171  Check PowerCLI Installation............................................................130
Figure 172  Search for VMware Cloud on AWS Module.....................................130
Figure 173  List all VMware Cloud on AWS NSX-T specific functions..................131
Figure 174  Using PowerCLI, Connect to VMware Cloud on AWS..................... 132
Figure 175  Retrieve the NSX-T Reverse Proxy URL.......................................... 132
Figure 176  HCX Architecture............................................................................ 135
Figure 177  Gateway On Premises for HCX Extended Networks........................136
Figure 178  HCX Network Architecture..............................................................136
Figure 179  Load Balancing Use Cases.............................................................. 137
Figure 180  Load-Balancing with the AWS Load-Balancer................................138
Figure 181  Network Load-Balancer on AWS Pointing to a Target Group..........138
Figure 182  Target Group (aka Server Pool) Pointing to 3 Web Servers
in Healthy Status...........................................................................138
Figure 183  Virtual LB Needs to be Deployed in One-Arm mode.....................139
Figure 184  Load Balancing with a Virtual LB Appliance....................................139
Figure 185  AVI Controller - Application Performance and Visibility...................140
Figure 186  Connected VPC Design..................................................................141
Figure 187  Accessing Managed File Services from VMware Cloud on AWS
SDDC........................................................................................... 142
Figure 188  Leveraging S3 buckets for Back-ups and Snapshots...................... 143
Figure 189  Accessing an RDS Database from VMware Cloud on AWS SDDC.. 144
Figure 190  Communication from the EC2 Web Servers Back to the SDDC DB
Tier Using ENI.............................................................................. 145
Figure 191  AWS Directory Service....................................................................146
Figure 192  Accessing AWS Directory Service Using ENI..................................146
Figure 193  Inspecting VMware Cloud on AWS Traffic via On-Premises
Next-gen Firewall.........................................................................148
Figure 194  Using Next-gen Firewall On Premises and Exposing SDDC
Applications to Internet................................................................148
Figure 195  Next-gen Firewall Deployed within a Transit VPC in Native AWS.... 149
Figure 196  Connected VPC Design with Transit VPC.......................................150
Figure 197  Palo Alto Networks Appliances Deployed in a Transit VPC.............150
Figure 198  VMware Site Recovery Use Cases.................................................. 152
Figure 199  VMware Site Recovery Fan In Topology......................................... 152
Figure 200  VMware Site Recovery Fan Out Topology..................................... 153
Figure 201  AWS Data Charges Diagram.......................................................... 157

About the Authors
Humair Ahmed is a Senior Technical Product
Manager in VMware’s Network & Security Business
Unit (NSBU). Humair has expertise in network
architecture & design, multi-site and cloud solutions,
disaster recovery, security, and automation. Humair
is currently focused on networking and security
within VMware Cloud™ on AWS and providing cloud
solutions for customers.
Humair holds multiple certifications in development,
systems, networking, virtualization, and cloud
computing and has over 17 years of experience
across networking, systems, and development.
Previously at Force10 Networks and Dell Networking,
Humair specialized in automation, data center
networking, and software defined networking
(SDN). He has designed many reference architectures
for Dell’s new products and solutions, including Dell’s
first reference architecture with VMware NSX®.
Humair has authored many white papers, reference
architectures, deployment guides, training materials,
and technical/marketing videos while also speaking
at industry events like AWS re:Invent and VMworld®.
He is also the author of the VMware Press book,
VMware NSX Multi-site Solutions and Cross-vCenter
NSX Design – Day 1.
In his spare time, Humair writes on the VMware
Network Virtualization Blog and authors a popular
technology blog – http://www.humairahmed.com
– focused on networking, cloud, systems, and
development.
You can contact Humair on LinkedIn or Twitter
@Humair_Ahmed.

Gilles Chekroun is a VMware Cloud on AWS Lead
Specialist in the European team. He joined VMware
in January 2015 after spending 20 years at Cisco in
the Data Centre Network and Virtualization team.
He has a strong background in networking
and storage.
Gilles started in the NSX team and quickly moved to
the European Solutions Engineering team to focus
on software-defined data center in the cloud with
AWS. He works closely with VMware customers in
designing, building and implementing their move
to the hybrid cloud model, helping them in their
digital transformation.
Gilles is VCP6-NV and AWS Solution Architect
certified. In his spare time, Gilles writes on the
VMware Cloud on AWS blog and authors a popular
blog at – http://www.gilles.cloud – focused on
VMware Cloud on AWS technologies.
You can contact Gilles on Twitter @twgilles.

Nicolas Vibert is a VMware Cloud on AWS Lead
Specialist in the European team. Most of Nico’s
career has been spent in the networking world,
from a junior support engineer working for a Cisco
partner to a senior network architect working for
Cisco itself. Nico finally joined VMware late in 2015
and worked on NSX network virtualization until he
transitioned to the VMware Cloud on AWS team.
Nico has a strong technical background, validated
through over 15 certifications across multiple vendors
(Cisco, VMware, AWS, etc.). Nico holds the Cisco
CCIE certification, recognized as one of the toughest
certifications in the IT Industry.
Nico regularly speaks at events, whether on a large
scale such as VMworld and Cisco Live as well as
smaller forums such as VMUGs and local events.
Nico regularly blogs on https://nicovibert.com on
VMware Cloud on AWS, Cloud and Networking and
can be found on LinkedIn or on Twitter @nic972.

Acknowledgements
We would like to thank our family and friends for supporting us throughout
our careers, across many endeavors, and during the process of writing this
book. We also want to give a big thank you to Tom Gillis for writing the
foreword and opening this book with his perspective on NSX and VMware
Cloud on AWS. We would also like to thank Katie Holmes from the NSBU
marketing team for helping with the administrative details and for
making sure this book was published on time.

Additionally, thank you to all contributors and folks who provided feedback
from the VMware Network and Security Business Unit (NSBU) and VMware
Cloud Platform Business Unit (CPBU).

Finally, we would also like to thank all the VMware Cloud on AWS customers
out there for taking this journey to the cloud with us. We have been very
fortunate to have a great customer base to gain insight and feedback from
to help improve our solutions and services. This book is first and foremost
dedicated to our customers. Thank you!

Humair Ahmed

Gilles Chekroun

Nicolas Vibert

Preface
Enterprise customers are looking to extend and migrate their data center
solutions to the cloud; however, traditional cloud solutions pose many
challenges. These include application redesign, workload migration,
investment protection, and operational model consistency between on
premises and cloud.

VMware Cloud on AWS addresses these challenges by leveraging the same
technologies in the cloud that enterprise customers already use on premises.
VMware Cloud on AWS helps accelerate migration of VMware vSphere-
based workloads to the cloud while providing robust capabilities and hybrid
cloud solutions. VMware Cloud on AWS also allows organizations to take
immediate advantage of the scalability, availability, security, and global reach
of the AWS infrastructure.

No longer is extending or moving to the cloud a daunting task. Customers
can immediately utilize VMware Cloud on AWS SDDCs, deploying and
migrating workloads to the cloud while leveraging technologies they
already use on premises like VMware ESXi™, VMware vSAN™, VMware
NSX® networking and security, and VMware vCenter®.

This book is targeted towards cloud, network, and security architects
interested in learning about and using NSX networking and security in
VMware Cloud on AWS and deploying hybrid cloud solutions.

Please note, this book is based on the VMware Cloud on AWS SDDC version
available at time of writing, SDDC version 1.7. As with software, features/
capabilities change rapidly over time, especially for cloud/managed services.
Thus, if you are reading this book and have access to a VMware Cloud on
AWS SDDC on a later version, you may notice some discrepancies. Although
some differences may be seen in the GUI or there may be new features/
capabilities or different workflows, the general concepts, use cases, and
underlying core fundamentals will, for the most part, stay consistent.

Foreword
The Hybrid Cloud is clearly becoming core to mainstream enterprise IT
infrastructure. Five years ago, there was public cloud and there was private
cloud – and the line between them was stark. VMware has been working
closely with the leading cloud providers to blur that line. As a result, today’s
hybrid cloud has many flavors. There are VMware private clouds, VMware
environments running on major public cloud vendors, VMware Cloud
Provider partners, and now public cloud providers bringing their hardware
into private cloud environments, like AWS Outposts. In all of these hybrid
cloud scenarios, NSX is the foundation that ties it all together.

VMware Cloud on AWS is a popular form of hybrid cloud that combines
the familiarity and rich features of a VMware private cloud with the elasticity
and scale of the public cloud. One of the most fundamental and core
VMware technologies used in enabling the VMware Cloud on AWS
solution is VMware’s network virtualization and security stack, VMware
NSX. As we’ve seen over the last several years, NSX has revolutionized
both networking and security. By taking a software-defined approach and
decoupling the network and security functions from hardware, VMware
NSX introduced an innovative new solution for networking and security
– offering inherent advanced networking and security capabilities,
automation, agility, and flexibility.

VMware pioneered the field of software defined networking with its
acquisition of Nicira in 2012. As the solution matured, VMware introduced
a new paradigm for internal security with micro-segmentation, a technique
that has now become an industry standard. In 2017, VMware introduced
advanced container networking in NSX that treated both containers and
VMs as first-class citizens, streamlining the connection between, for
example, a container-based app and a VM-based data store. In 2019
VMware launched a fully featured version of NSX called NSX-T. NSX-T
de-couples NSX from ESX, allowing NSX to run on bare metal workloads,
in vSphere environments, in KVM environments, and on native public
clouds. NSX is now the platform for consistent automation and policy
enforcement on every form of infrastructure, including VMware Cloud
on AWS which is the focus of this book.

The Virtual Cloud Network is VMware’s vision of the future of networking
where NSX provides end-to-end connectivity, protection, and visibility of
workloads regardless of where they sit. Nothing exemplifies this vision
better than VMware Cloud on AWS. Whether your workloads are on
premises or in the cloud, the same NSX network and security stack can
be used to provide connectivity, security, and visibility.

In this book, VMware NSX and VMware Cloud on AWS experts Humair
Ahmed, Gilles Chekroun, and Nico Vibert take you through the most
fundamental NSX networking and security concepts in VMware Cloud on
AWS. Furthermore, you’ll walk away with a deep understanding of how
NSX is used in VMware Cloud on AWS and how it enables an unmatched
hybrid cloud solution where workloads can seamlessly be moved from on
premises to cloud and vice‑versa with no downtime.

So welcome to the cloud with VMware Cloud on AWS! I hope you are as
excited as I am about this innovative new solution. As you read through
this book, I have no doubt you will start to see the NSX everywhere story
unfold in front of you and see how powerful VMware Cloud on AWS
truly is and what it can do for your company.

Tom Gillis
Senior Vice President and General Manager,
NSBU, VMware

Chapter 1

Introduction

VMware Cloud on AWS is a cloud service jointly developed by VMware and
AWS. This offering brings the same technologies enterprise customers use
on premises (e.g., VMware vSphere® ESXi, vSAN, NSX) into the cloud.
VMware Cloud on AWS helps customers deploy and accelerate migration
of VMware vSphere-based workloads to the cloud while providing robust
capabilities and hybrid cloud solutions. VMware Cloud on AWS allows
organizations to take immediate advantage of the scalability, availability,
security, and global reach of the AWS infrastructure. Additionally,
customers can access native AWS services from the VMware Cloud on
AWS SDDC without incurring any ingress or egress charges. This is made
possible through technology jointly developed by VMware and AWS.

VMware vSphere ESXi is the most popular hypervisor platform used by
enterprises to run virtual machines (VMs). A key advantage of VMware
Cloud on AWS is that it runs ESXi hosts on bare metal EC2 instances on
the AWS infrastructure. Customers can establish a VMware Cloud on AWS
SDDC and quickly begin deploying workloads to the cloud. Efforts can be
focused on developing and deploying applications, leaving management
and upgrade of the infrastructure to VMware.

Additionally, customers can migrate workloads to the cloud with ease using
VMware vSphere® vMotion® and Storage vMotion®, as the same infrastructure is
used both on premises and in the cloud. Using Hybrid Linked Mode (HLM),
administrators can manage both the on-premises vCenter and VMware
Cloud on AWS SDDC vCenter from one centralized location, easily
migrating workloads between the different vCenters/sites.

Figure 1  VMware Cloud on AWS Solution

The AWS network is a full layer 3 network and is itself an overlay. vSphere
ESXi VDS leverages port groups which use VLANs; thus, VMware and AWS
jointly developed a solution where the VDS VLANs are mapped to AWS
Elastic Network Interfaces (ENIs). This VLAN-to-ENI mapping allows the
VDS and ESXi hosts to run in VMware Cloud on AWS on the AWS
infrastructure. Figure 2 diagrams this VLAN-to-ENI mapping.

Figure 2  VMware Cloud on AWS Underlay and Infrastructure

Implementation details of the VMware Cloud on AWS SDDC underlay and
infrastructure are abstracted away; users can simply deploy an SDDC from
the VMware Cloud on AWS console as shown in Figure 3. vSphere ESXi
hosts, vCenter, vSAN, and NSX are automatically deployed and configured.
The administrator does not configure the underlay or vSphere infrastructure.

Figure 3  Deploying VMware Cloud on AWS SDDC

Users can deploy multiple SDDCs and can access them from a single
VMware Cloud on AWS console, as shown in Figure 4.

Figure 4  Deployed VMware Cloud on AWS SDDCs

Once the VMware Cloud on AWS SDDC is deployed, users have multiple
connectivity options between on premises and the cloud SDDC. These
connectivity options include policy-based IPSEC VPN, route-based IPSEC
VPN, L2VPN, and Direct Connect. As mentioned earlier, users also have
connectivity to native AWS resources (e.g., EC2, S3, and RDS) via ENI
connectivity. This connectivity is shown in Figure 5.

Figure 5  VMware Cloud on AWS Connectivity

VMware Cloud on AWS is a complete cloud solution encompassing
multiple VMware technologies. It enables enterprise customers to quickly
leverage familiar technologies to take advantage of the cloud. While
VMware Cloud on AWS is built on several VMware technologies, this book
focuses on the NSX networking and security capabilities that enable the
VMware Cloud on AWS solution and will describe multiple connectivity
options for hybrid cloud solutions.

Chapter 2

Traditional Cloud Challenges and
Why VMware Cloud on AWS

Customers are under tremendous pressure to implement cloud strategies
– ranging from establishing cloud-based development environments to
completely evacuating data centers and moving to the cloud. A variety of
imperatives drive this shift: company mandates; cost-cutting initiatives;
agility and on-demand capacity; legacy infrastructure obsolescence; and
outsourcing-contract retirement.

With traditional cloud solutions, many challenges appear when looking to
adopt and implement a cloud strategy. The following sections will detail
some of the more common issues.



Traditional Cloud Challenges
Most challenges customers face when moving to the cloud fall into one or
both of the following categories: migrating on-premises workloads to the
cloud or deploying new workloads in the cloud.

Traditional cloud challenges associated with moving applications or
workloads from on premises to the cloud are:

• Applications may need to be re-written or re-architected due to
cloud architecture.

• Migration of workloads can be cumbersome due to the change in
VM/machine format, resulting in significant downtime.

• Application IP addresses may need to be changed, mandating a
cascade of additional changes (e.g., security policy updates,
application refactoring).

• Security policies need to be rewritten or re-architected based on
cloud vendor security technologies or capabilities.

• Hybrid cloud solutions are not agile in terms of workload mobility
due to different technologies and platforms used on premises and
in the cloud.

• Traditional cloud solutions tend to lock customers into the cloud
platform due to individual architectures and capabilities. This makes it
difficult to move workloads back on premises at a later time.

There may be fewer issues when applications are deployed directly into
the cloud; however, challenges still exist, including:

• Customers must learn a new platform. This may be cost prohibitive or
logistically impossible due to limited resources, time constraints, or
cloud migration timelines.

• Customers may have large investments in training and management
software they can no longer leverage due to the new cloud
architecture.

• Hybrid cloud solutions are not agile in terms of workload mobility
due to different technologies and platforms used on premises and
in the cloud.

• Traditional cloud solutions tend to lock customers into the cloud
platform due to individual architectures and capabilities. This makes
it difficult to move workloads back on premises at a later time.

Why VMware Cloud on AWS
VMware Cloud on AWS addresses the traditional challenges customers
face when moving to public cloud. In addition to enhanced functionality
and operational flexibility, VMware Cloud on AWS offers the following
benefits:

• A cloud solution with familiar VMware technologies and management
that is already working on premises - technologies such as vSphere
ESXi, vCenter, vSAN, and NSX.

• Leverage existing investment in training, expertise, and management/
operations/automation tools.

• An integrated hybrid cloud solution that can seamlessly live migrate
workloads from on premises to cloud and vice-versa with zero
downtime. With VMware vSphere infrastructure and VMware
technologies both on premises and in the cloud, workload mobility
and migration are seamless with no cumbersome or error-prone VM
format changes.

• Applications do not need to be re-written since they share common
vSphere infrastructure and VMware technologies.

• There is no vendor lock-in in the cloud. If desired, workloads can
easily be migrated on premises within the common VMware
ecosystem.

• When using VMware NSX on premises, the networking and security
model and security posture are the same as in the cloud, and the
same NSX APIs can be used to automate security posture and
policies in both environments.

• As enhancements are made to the NSX and VMware Cloud on AWS
platforms, deployments will be able to easily take advantage of new
capabilities available both on premises and in the cloud. Since
VMware Cloud on AWS is a managed service, VMware is responsible
for managing and updating the infrastructure and management
components.

These examples demonstrate the significant benefits of using VMware
Cloud on AWS as a hybrid cloud solution. VMware Cloud on AWS offers a
hassle-free strategy to realize cloud and hybrid cloud solutions for any
organization.

The next chapter looks at some popular use cases before digging into the
available networking and security features and capabilities.



Chapter 3

VMware Cloud on AWS
Use Cases

VMware Cloud on AWS enables three overarching use case categories
– data center extension, migration, and disaster recovery. Each category
contains specific scenarios that align with the overarching use case.

Data Center Extension

Figure 6  Data Center Extension Use Case

There are several factors that drive interest in leveraging cloud for data
center extension:

• Existing capacity, power, or cooling constraints with an on-premises
data center that limit easy/rapid expansion.

• Traditional hardware acquisition, deployment, and configuration
processes do not provide the flexibility and agility required for rapid
adoption and expansion.



• Expenses associated with acquiring new hardware and data center
space, including costs for power, cooling, site operations, and system
maintenance.

• Cloud solutions can scale up to offer on-demand capacity for spikes in
traffic or seasonal demand, then scale back down when demand
decreases. This enables a cost-efficient model with no long-term
investment or CAPEX.

• Cloud can provide capacity for test/dev workloads which do not
require permanent resources, again eliminating long-term investment
or CAPEX.

VMware Cloud on AWS provides the ability to build out or extend a data
center to the cloud with efficiency and ease, leveraging familiar technology
and tools. It enables customers to expand their data center footprint to the
cloud – either temporarily or permanently – with no hardware or CAPEX
investment. VMware Cloud on AWS is available globally, allowing
alignment of data center expansion with regional business requirements.
Figure 7 illustrates these global connectivity capabilities.

Figure 7  VMware Cloud on AWS - Data Center Extension

Migration

Figure 8  Migration Use Case

A top use case for VMware Cloud on AWS is migration of all applications
running in a data center to VMware Cloud on AWS. This is sometimes
referred to as re-hosting.

Customers like the idea of accelerating their move to the cloud with no
disruption to their applications or operating model. Further, they can
continue leveraging their existing VMware expertise.

Some may want to move only specific workloads to the cloud, like dev/test
or VDI. Others are interested in moving all applications to VMware Cloud
on AWS ahead of decommissioning an on-premises data center. Still
others require a data center refresh, and instead of making additional
investments, decide to save on capital expenditure and associated
operational costs and migrate to VMware Cloud on AWS.



Since VMware Cloud on AWS uses the same technology both on premises
and in the cloud, customers can live migrate or vMotion workloads to the
cloud with no downtime. Cold migration of workloads is also supported.
VMware Hybrid Cloud Extension (HCX) technology is included with
VMware Cloud on AWS to facilitate bulk workload migration with no
downtime. HCX can extend L2 to the cloud and leverages additional
capabilities like WAN optimization and compression.

Figure 9  VMware Cloud on AWS - Migration

Disaster Recovery

Figure 10  Disaster Recovery Use Case

Another common use case for VMware Cloud on AWS is disaster recovery
(DR). Setting up DR solutions can be a challenge due to the expense and
expertise required. Some customers are wary of running a dedicated data
center simply for the purposes of DR.

VMware Cloud on AWS Site Recovery is an add-on that can be connected
to an on-premises VMware Site Recovery Manager™, leveraging vSphere
Replication and vSphere Recovery Manager to provide the DR solution.

VMware Site Recovery enables protection and recovery of applications
without requiring the deployment and management of a dedicated
secondary site. In a nutshell, VMware Site Recovery provides a simple DR
solution for on-premises workloads. VMware Site Recovery can be used
for disaster avoidance, disaster recovery, and non-disruptive testing. Many
customers are using VMware Cloud on AWS as a new DR solution, while
others are replacing or complementing existing DR solutions.

When a customer enables the Site Recovery add-on as shown in Figure 11,
vSphere Recovery Manager and vSphere Replication are automatically
installed and configured on the management network in the SDDC. The
components and ecosystem lifecycle are managed by VMware while
customers manage the configuration and operations based on their DR
requirements. Customers can also download the on-premises components
to enable the DR solution between on-premises data centers and the
VMware Cloud on AWS SDDC.

VMware Site Recovery Manager and VMware vSphere Replication are
used to automate the process of recovering, testing, re-protecting, and
failing back virtual machine workloads. VMware Site Recovery Manager
serves to coordinate the operations of the VMware SDDC. As virtual
machines at the protected site are shut down, copies of these virtual
machines at the recovery site start up. By using the data replicated from
the protected site, these virtual machines then assume responsibility for
providing the same services.

Figure 11  VMware Cloud on AWS - Disaster Recovery



Chapter 4

NSX Architecture and Features
in VMware Cloud on AWS

About NSX
VMware NSX network virtualization provides a full set of logical network
and security services decoupled from the underlying physical infrastructure.
Distributed functions such as switching, routing, and firewalling not only
provide L2 extension capabilities, but also enhanced distributed
networking and security functions.

NSX has seen tremendous success, enabling customers to virtualize,
secure, and automate their networks. With the latest incarnation of NSX,
NSX-T, the scope has expanded to an NSX-everywhere model supporting
environments and use cases with both ESXi and KVM hypervisors,
containers, and cloud.

(Note – when NSX is mentioned in this book, it refers to NSX-T unless
otherwise stated. NSX-T is VMware’s new NSX platform replacing the
prior NSX-V platform.)

NSX’s distributed networking and security functions are provided as
kernel-level modules for the ESXi and KVM hypervisors. Other services
such as VPN, NAT, and Edge firewall are centralized services provided
by an Edge component.

This book does not cover the basics of NSX. Rather, it discusses what
NSX features and capabilities are provided in VMware Cloud on AWS and
how to leverage these capabilities to provide robust cloud and hybrid
cloud solutions. The following chapter discusses these capabilities in
more detail. The goal of this chapter is to provide an overview of what
NSX enables within VMware Cloud on AWS and define the basic SDDC
network and connectivity architecture.



When VMware Cloud on AWS first launched, NSX-V was the underlying
network and security platform. However, NSX-T has now replaced NSX-V
as the networking and security platform stack used in VMware Cloud on
AWS. NSX-T is the future platform for all things NSX networking and
security. It was designed specifically to support more diverse data center
environments at scale and provide more robust capabilities for containers
and cloud.

NSX Features
Figure 12 lists some of the core capabilities of NSX within VMware Cloud
on AWS, grouping them into three categories:

• Networking and connectivity

• Security

• Operations

Figure 12  NSX Features in VMware Cloud on AWS

The networking and connectivity features can be further grouped into
two areas:

• Networking within the SDDC

• Connectivity to on premises, other SDDCs, or other environments

Networking and Connectivity
Networking within the SDDC

NSX provides all the networking capabilities required by workloads
running in the SDDC. These capabilities include:

• Deploy networks and define the respective subnet and gateway for
the workloads that will reside there. Users can create L2, L3, and
isolated networks.

• Enable DHCP selectively for network segments or use DHCP Relay.

• Create multiple DNS zones, allowing use of different DNS servers for
specific domains.

• Offer distributed routing so workloads can efficiently communicate
with each other. Routing is done directly on each host where the
respective workload resides. Routing is done via an NSX kernel-level
module running in the respective ESXi hypervisor.

• Source NAT (SNAT) is automatically applied to all workloads in the
SDDC to enable Internet access. To provide a secure environment,
Internet access is blocked at the Edge firewall, but firewall policy can
be changed to allow access. Additionally, users have the ability to
request a public IP for workloads and create custom NAT policies.
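As a concrete illustration of the first capability above, the Python sketch below builds the kind of JSON body used to define a routed network segment with a gateway and an optional DHCP range, in the style of the NSX Policy API. It is a sketch only: the segment name, subnet, and DHCP range are invented examples, not a documented contract.

```python
import ipaddress

def routed_segment_payload(name, gateway_cidr, dhcp_range=None):
    """Build an NSX Policy-style body for a routed compute segment.

    gateway_cidr is the gateway IP in CIDR form (e.g., 192.168.10.1/24);
    it defines both the segment's gateway address and its subnet size.
    """
    iface = ipaddress.ip_interface(gateway_cidr)  # validates IP and prefix
    subnet = {"gateway_address": str(iface)}
    if dhcp_range:
        subnet["dhcp_ranges"] = [dhcp_range]
    return {
        "display_name": name,
        "subnets": [subnet],
    }

# Illustrative values only: a routed segment for application workloads.
payload = routed_segment_payload(
    "app-segment", "192.168.10.1/24",
    dhcp_range="192.168.10.100-192.168.10.200",
)
print(payload["subnets"][0]["gateway_address"])  # 192.168.10.1/24
```

A payload like this would be sent to the SDDC's NSX policy endpoint; the same definition can also be entered through the VMware Cloud on AWS console.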

Connectivity to on-prem, other SDDCs, or other environments

Connectivity to external environments such as on premises and other
SDDCs is critical for many customers looking to deploy hybrid cloud
solutions. NSX provides a robust set of capabilities enabling this
connectivity. For example:

• Direct Connect to carry all traffic between on premises and cloud over
high bandwidth, low latency connectivity, and optionally use VPN as
backup for compute traffic.

• Policy-based IPSEC VPN capability to connect to on premises, VPCs,
or other SDDCs.



• Route-based IPSEC VPN capability to connect to on premises, VPCs,
or other SDDCs. BGP is utilized to automatically learn networks and
can be used to provide an active/standby VPN connectivity model for
additional redundancy or an ECMP VPN connectivity model to
provide additional bandwidth and redundancy.

• L2VPN capability to extend L2 domains from on premises to the
cloud, enabling workload migration without IP address change.

• Ability to connect to an AWS Transit Gateway via VPN attachments,
enabling any-to-any communication and shared connectivity on
premises.

• Integration with the greater AWS ecosystem, allowing access to
native AWS services such as EC2, S3, RDS, etc. over ENI connectivity.

HCX can also be used for L2 extension between on premises and a
VMware Cloud on AWS SDDC. HCX is an add-on that can be enabled in
the SDDC and is included for free with VMware Cloud on AWS SDDC. It
provides L2 extension with advanced capabilities like WAN optimization
and compression. HCX is especially useful for users looking to migrate a
large number of workloads from on premises to VMware Cloud on AWS
SDDC. More details on HCX are provided in Chapter 9.

Security
For security, VMware NSX provides several features and capabilities:

• Edge firewall for both management (i.e., MGW firewall) and compute
(i.e., CGW firewall). Edge firewall is a stateful firewall that helps
protect north/south traffic in and out of the SDDC.

• NSX Distributed Firewall (DFW) is a stateful firewall that is distributed
across all the hosts in the SDDC. It provides protection for east/west
traffic within the SDDC and enables micro-segmentation. This micro-
segmentation capability provides advanced-level protection for
workloads and is discussed in more detail in Chapter 6.

• For both Edge firewalls and DFW, NSX enables creation of Grouping
Objects used in the security rules. These Grouping Objects are defined
by the user; NSX automatically identifies and places workloads in the
group based on criteria like IP address, VM Instance, VM Name, and
Security Tag.

• Role-Based Access Control (RBAC), allowing definition of who has
access to edit networking and security policies and who can only view
networking and security policies.
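To make the Grouping Objects described above more concrete, the sketch below builds a group definition, in the style of the NSX Policy API, whose membership is computed dynamically from a VM security tag. The group name and tag value are hypothetical examples.

```python
def tag_based_group(name, tag_value):
    """Build an NSX Policy-style group whose members are all VMs
    carrying a given security tag. Membership is evaluated dynamically:
    tag a VM and it is automatically covered by every Edge firewall or
    DFW rule that references the group."""
    return {
        "display_name": name,
        "expression": [
            {
                "resource_type": "Condition",
                "member_type": "VirtualMachine",
                "key": "Tag",
                "operator": "EQUALS",
                "value": tag_value,
            }
        ],
    }

# Illustrative only: all VMs tagged "web" form the web-tier group.
web_group = tag_based_group("web-tier", "web")
```

The same criteria shown here (Tag, VM Name, IP address, and so on) can be combined in the console UI when defining groups for firewall rules.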

Operations
VMware NSX also provides important operational tools for troubleshooting
and managing within the SDDC.

• Port mirroring can send mirrored traffic from a source to a destination
appliance (e.g., Wireshark). This destination can be in VMware Cloud
on AWS SDDC or on premises.

• IPFIX supports segment-specific network traffic analysis by sending
traffic flows to an IPFIX collector.

• DFW IPFIX allows tools like VMware vRealize® Network Insight™ to map
traffic path specifics and identify traffic traversing the NSX Edge and
DFW firewalls.

• Public and private endpoints can use NSX APIs to automate anything
that’s supported by the VMware Cloud on AWS console.
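As a sketch of how such API automation begins, the snippet below constructs (without sending) the two requests typically involved: exchanging a VMware Cloud console API token for an access token, then calling the SDDC's NSX policy endpoint to list compute segments. The URL patterns and header name follow the public conventions at the time of writing but should be treated as illustrative; the NSX proxy URL and tokens are placeholders.

```python
# Build, but do not send, the requests used to automate an SDDC via its
# public endpoints. Any HTTP client can then POST/GET these.

CSP_AUTH_URL = ("https://console.cloud.vmware.com"
                "/csp/gateway/am/api/auth/api-tokens/authorize")

def auth_request(api_token):
    """Return the (url, form_data) pair that exchanges a console API
    token for a short-lived access token."""
    return CSP_AUTH_URL, {"refresh_token": api_token}

def list_segments_request(nsx_proxy_url, access_token):
    """Return the (url, headers) pair that lists the compute segments
    attached to the CGW (a T1 router) via the NSX policy API."""
    url = (nsx_proxy_url.rstrip("/")
           + "/policy/api/v1/infra/tier-1s/cgw/segments")
    return url, {"csp-auth-token": access_token}

# Placeholder endpoint and token; a real SDDC exposes its own NSX URL.
url, headers = list_segments_request(
    "https://nsx.example/sks-nsxt-manager", "<access-token>")
```

Because the same policy API is used on premises with NSX-T, automation written against one environment largely carries over to the other.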

In summary, there are many networking and security features and
capabilities offered by NSX for VMware Cloud on AWS SDDC. The
remainder of this chapter examines the NSX architecture associated
with a VMware Cloud on AWS SDDC.



VMware Cloud on AWS SDDC
and NSX Architecture
When deploying a VMware Cloud on AWS SDDC from the console
(https://vmc.vmware.com/console), specific information is required. An
administrator will link the instance to an AWS account, VPC, and subnet.
This enables access to native AWS services.

A management subnet designation is also required; 10.2.0.0/16 is the
default if not otherwise defined. This is the infrastructure subnet used
by the SDDC and is carved up into smaller subnets for use by several
different types of traffic (e.g., ESXi management, vMotion vmkernel
interfaces, management appliance traffic). Figure 13 shows the default
management subnet used during SDDC deployment.

Figure 13  Deploying VMware Cloud on AWS SDDC - Subnet Selection

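To put the size of that subnet in perspective, the snippet below uses Python's ipaddress module to subdivide the default 10.2.0.0/16 the way any /16 can be carved into smaller subnets. The /24 split shown is purely illustrative; the actual carving performed by the service is managed by VMware and is not user-configurable.

```python
import ipaddress

# The default management CIDR suggested during SDDC deployment.
mgmt = ipaddress.ip_network("10.2.0.0/16")

# Illustrative only: carve the /16 into /24s, mirroring how the service
# carves the management CIDR into smaller subnets for ESXi management,
# vMotion vmkernel interfaces, management appliances, and so on.
subnets = list(mgmt.subnets(new_prefix=24))

print(len(subnets))              # 256 /24 subnets fit in a /16
print(subnets[0])                # 10.2.0.0/24
print(subnets[0].num_addresses)  # 256 addresses per /24
```

This arithmetic explains why the management CIDR must be sized up front: it must leave room for every infrastructure subnet the SDDC will ever need, including future host additions.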
When the SDDC is deployed, its respective ESXi hosts, vSAN, NSX,
and vCenter are automatically deployed and configured.

NSX-T is the networking and security platform used in VMware Cloud on
AWS. The VMware Cloud on AWS console in Figure 14 shows a deployed
Management Gateway (MGW) and a Compute Gateway (CGW) within a
SDDC. In NSX-T terminology, these are T1 routers.

Figure 14  VMware Cloud on AWS SDDC - MGW and CGW

All the management components (e.g., vCenter, VMware NSX® Manager™,
VMware NSX® Controllers™) reside on networks connected to the MGW.
VMware manages these components; customers cannot deploy workloads
on networks behind the MGW.

Customers only need to be concerned with creating NSX network segments
– which are automatically connected to the CGW – and deploying
workloads on them.



As shown in Figure 15, there are always two resource pools configured in
VMware Cloud on AWS SDDC:

• Mgmt-ResourcePool: The resource pool for management components managed by VMware. Customers cannot deploy workloads into this resource pool. vCenter, NSX Manager, NSX Controllers, and NSX Edges are deployed into this resource pool.

• Compute-ResourcePool: The resource pool for compute workloads. Customers deploy their workloads into this resource pool.

Figure 15 shows the vSphere Client accessing a VMware Cloud on AWS SDDC, displaying the two distinct resource pools for management components and compute workloads.

Figure 15  vSphere Client Accessing a VMware Cloud on AWS SDDC

Figure 16 shows that the MGW and CGW are connected via another router. In NSX-T terminology, this is called a T0 router. This router allows workloads on compute network segments connected to the CGW access to components on the management network connected to the MGW. The T0 router is also where connectivity to the external environment (e.g., IGW, Direct Connect, IPSEC VPN, ENI to connected VPC) takes place. Note that in this architecture, both the compute network segments and the management network are overlays.

Figure 16  VMware Cloud on AWS SDDC

The T1 routers have a distributed component across all the hosts. The
distributed component of the CGW is the gateway for the workloads on
NSX network segments; it is essentially a distributed router. There is also a
T1 component on the Edge which provides access to services such as firewall
while also connecting to T0 for access to the external environment. The
Edge in VMware Cloud on AWS is a virtual appliance that runs in active/
standby mode. The standby is always placed on a different host from the
active appliance by leveraging anti-affinity rules.

Figure 16 also calls out security at different levels:

• A firewall enabled on the MGW. This MGW firewall protects access to the management network and its components (e.g., vCenter).

• A CGW firewall implemented to protect compute on the router where external environment connectivity terminates.

• A DFW where firewall policies are applied on the vNICs of compute workloads, enabling micro-segmentation, an advanced level of security.



The VMware NSX® Edge™ firewall (i.e., MGW firewall and CGW firewall)
provides protection for north/south traffic while the NSX DFW protects
east/west traffic.

A few important aspects to note and remember about this architecture:

• When a network segment is deployed from the VMware Cloud on AWS console, it is automatically attached to the CGW. The T0 router connecting the MGW and CGW automatically becomes aware of how to reach the network.

• All external connectivity – whether through IGW for Internet access, VPN, Direct Connect, or connected VPC – terminates at the T0, which knows how to reach the networks behind the MGW and CGW.

• Users can only deploy workloads on network segments connected to the CGW; they cannot deploy on the management network connected to the MGW.

• Source NAT (SNAT) is configured by default for compute workloads in the SDDC. This permits all workloads deployed on network segments to access the Internet; however, the CGW firewall must be configured to explicitly allow this traffic. This is covered in more detail in the next chapter.

• There is a default route to the AWS Internet Gateway which provides Internet access to and from the SDDC. The environment is secured via the NSX Edge firewall.

• The NSX Edge firewall provides protection for north/south traffic while the NSX DFW provides protection for east/west traffic.

• Management components are managed by VMware. Users can access vCenter but do not have root access, thus cannot change fundamental infrastructure configuration or components. They do not have access to NSX Manager, the Policy Appliance, NSX Edges, or NSX Controllers.

• vCenter has both a public and a private IP address. User and location
access to vCenter can be restricted via the MGW firewall. This is
discussed in the coming sections.

Accessing VMware Cloud on AWS SDDC
Once the VMware Cloud on AWS SDDC is deployed, it is visible from the
central console (https://vmc.vmware.com/console). Figure 17 shows two
deployed SDDCs.

Figure 17  Deployed VMware Cloud on AWS SDDCs

The Networking & Security tab is accessed by clicking the name of the SDDC or the VIEW DETAILS link at the bottom left of the SDDC card. This menu offers additional networking and security configuration options. Figure 18 presents the user view of the SDDC screen offering access to the Networking & Security tab.

Figure 18  Deployed VMware Cloud on AWS SDDC



Figure 19 moves into the Networking & Security tab within an SDDC.
A user must have the appropriate permission in Identity and Access
Management to see the Networking & Security tab. A user can have
either read/write access (e.g., NSX Cloud Admin) or read-only access (e.g.,
NSX Cloud Auditor). Additional details on VMware Cloud on AWS security
and RBAC are covered in Chapter 6.

The Overview tab provides important information about the SDDC. It also shows the connectivity status to the Internet, on premises, and the connected VPC.

Figure 19  VMware Cloud on AWS SDDC - Networking & Security Tab

Once the SDDC is deployed, the customer can consume the rich NSX
networking and security capabilities to deploy robust solutions in VMware
Cloud on AWS SDDC. Figure 20 presents a design where the user:

• has deployed a VMware Cloud on AWS SDDC

• has a hybrid cloud architecture with connectivity from on premises to the SDDC via a Direct Connect private VIF

• is accessing vCenter and workloads in the SDDC from on premises via a Direct Connect private VIF

• is accessing AWS EC2 resources in the connected VPC via ENI connectivity

• has workloads communicating with each other via NSX distributed routing in the SDDC

• is using CGW Edge firewall policies on specific interfaces like Direct Connect and connected VPC

• is using DFW for micro-segmentation within the SDDC

• is port mirroring traffic flowing from a VM on the App network segment to an AWS EC2 instance in the connected VPC, sending the mirrored copy to a VM running Wireshark on the Monitoring network segment

Figure 20  VMware Cloud on AWS Solution Design and Capabilities

A demo video walkthrough of this VMware Cloud on AWS SDDC deployment and its respective features and capabilities is available from the following link:

VMware Cloud on AWS with NSX-T SDDC: Connectivity, Security, and Port Mirroring Demo
https://youtu.be/CToNGJ7jGvgh



Access to vCenter
In the prior console screenshots, there are links to Open vCenter for the
respective SDDC. vCenter has both a public and private IP address used to
control access. The public IP is allocated during the provisioning of the SDDC,
while the private IP address is assigned within the management subnet range
defined during the deployment of the SDDC. These addresses, along with
the FQDN of the vCenter, are found in Settings -> vCenter FQDN on the
VMware Cloud on AWS console as shown in Figure 21.

Figure 21  VMware Cloud on AWS SDDC - Settings Tab

The MGW firewall is utilized to control access to the management network and, by extension, vCenter. A brief guide is provided here on how to access and control access to vCenter; the following chapters provide extended details about networking and security controls.

When a VMware Cloud on AWS SDDC is deployed, all access to vCenter is
blocked by default. This is because the MGW firewall protecting vCenter
has a Default Deny policy. Figure 22 shows the default firewall policies on
the MGW firewall when a new SDDC is deployed. In order to reach vCenter,
access must be explicitly permitted.

Figure 22  Default Firewall Rules on MGW Firewall

In Figure 23, a new firewall rule has been added allowing HTTPS access
from anywhere. This vCenter is now accessible over the Internet. Access
could be further restricted to a single IP address or subnet.
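The default-deny behavior described above can be modeled as a simple first-match rule evaluation, a common conceptual model for firewalls. This is an illustrative simplification, not the actual NSX implementation; the rule fields and names are assumptions.

```python
# Illustrative first-match firewall model (not the actual NSX engine).
# A default-deny rule sits at the end; traffic passes only if an earlier
# rule explicitly allows it.
rules = [
    {"name": "Allow vCenter HTTPS", "dst": "vCenter",
     "service": "HTTPS", "action": "allow"},
    {"name": "Default Deny All", "dst": "any",
     "service": "any", "action": "deny"},
]

def evaluate(dst: str, service: str) -> str:
    """Return the action of the first rule matching the flow."""
    for rule in rules:
        if rule["dst"] in ("any", dst) and rule["service"] in ("any", service):
            return rule["action"]
    return "deny"  # implicit deny if nothing matches

print(evaluate("vCenter", "HTTPS"))  # -> allow
print(evaluate("vCenter", "SSH"))    # -> deny (falls through to Default Deny)
```

The key takeaway mirrors the text: on a freshly deployed SDDC, every flow falls through to the deny rule until an explicit allow rule is added above it.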

The vCenter in the Destination field is a system-created Grouping Object that has vCenter as a member. Grouping Objects are discussed in more detail in Chapter 6. Note that the firewall rule allows access via HTTPS, which is the protocol used to access vCenter via a web browser.

Figure 23  MGW Firewall Rule Allowing vCenter Access Over Internet



It is now possible to connect to vCenter via the Open vCenter link on the VMware Cloud on AWS SDDC console or via its FQDN over the Internet. The URL takes the form https://vcenter.sddc-35-160-70-176.vmwarevmc.com, with the example IP address (i.e., 35.160.70.176) replaced by the actual IP address; note that the dots in the IP address are replaced with dashes. Figure 24 provides a diagram of this connectivity.
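The dotted-to-dashed naming convention can be sketched in a few lines of Python. The hostname prefix and domain suffix used here are assumptions based on the example FQDN above and may differ per SDDC; only the dots-to-dashes substitution is the point being illustrated.

```python
# Build an example vCenter FQDN from a public IP address by replacing
# dots with dashes. The "vcenter.sddc-" prefix and "vmwarevmc.com"
# suffix are taken from the example in the text and may differ per SDDC.
def vcenter_fqdn(public_ip: str) -> str:
    dashed = public_ip.replace(".", "-")
    return f"vcenter.sddc-{dashed}.vmwarevmc.com"

print(vcenter_fqdn("35.160.70.176"))
# -> vcenter.sddc-35-160-70-176.vmwarevmc.com
```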

Figure 24  vCenter Access Over Internet

On-premises connectivity may be secured by using an IPSEC VPN or Direct Connect and connecting to vCenter's private IP address. An illustration with a user connecting from on premises to vCenter over Direct Connect is shown in Figure 25.

Figure 25  vCenter Access Over Direct Connect Private VIF

For access to vCenter via its private IP address, DNS resolution for
vCenter must be changed to Private IP as shown in Figure 26. DNS is
discussed in more detail in the next chapter.

Figure 26  Changing DNS Resolution to Private IP under Settings Tab

The next step is creation of an MGW firewall policy allowing access to vCenter. In Figure 27, the OnPrem name in the Source field is a user-created Grouping Object that has an on-premises subnet as a member. Grouping Objects are discussed in more detail in Chapter 6. In this example, the firewall policy also allows for ICMP and SSO (for Hybrid Linked Mode) in addition to HTTPS.

Figure 27  MGW Firewall Rule Allowing vCenter Access for Specific Workloads Over
Direct Connect Private VIF

The next two chapters delve into greater details about the specific
features of NSX networking and security in VMware Cloud on AWS.



Chapter 5

NSX Networking in VMware Cloud on AWS SDDC

NSX-T is VMware's new NSX platform, replacing NSX-V, for network virtualization. It is the networking and security platform used in VMware Cloud on AWS. This chapter focuses on the networking capabilities NSX brings to the VMware Cloud on AWS SDDC. The following chapter explores the security aspects of NSX in the VMware Cloud on AWS SDDC.

Routing
There are two different levels of routing available within VMware Cloud on
AWS – distributed and edge.

Distributed Routing
Distributed routing is east/west routing performed across all hosts within
the SDDC via kernel-level modules. Each host in the SDDC knows how to
reach all other workloads within other hosts in the SDDC, thus can route
directly without hair-pinning to an external router. This routing is very
efficient as it is performed at the kernel level.

Figure 28  NSX Distributed Routing

CHAPTER 5 - NSX Networking in VMware Cloud on AWS SDDC | 30


Edge Routing
VMware Cloud on AWS SDDC has two Edge appliances – one active and the
other standby. Anti-affinity rules place the two Edge appliances on different
hosts, preventing a single host failure from affecting both appliances.

Figure 29  Active/Standby NSX Edge Appliances

As shown in Figure 29, the MGW and CGW routers are logical components
within the NSX Edge appliance.

The MGW and CGW provide north/south routing for management and
compute, respectively. Within the SDDC, there is a default route to the
AWS IGW for Internet connectivity.

Figure 30  VMware Cloud on AWS SDDC

It is important to understand the connectivity and traffic flow associated
with the NSX Edge appliance. All traffic entering and exiting the SDDC
must traverse the T0 router. This router, which also connects the MGW and
CGW, provides connectivity to the external world as presented in Figure 31.

Figure 31  NSX Edge Connectivity

A few design items to note:

• When the Direct Connect private VIF is connected, all ESXi and vMotion
traffic will use that connection. In that instance, it is not possible to set up
a separate IPSEC VPN connection for ESXi or vMotion traffic. If there is
no Direct Connect Private VIF connection, then the MGW route is used.

• vMotion is only supported over a Direct Connect private VIF as it requires 250 Mbps sustained connectivity.

• If the same routes are learned over the Direct Connect private VIF and
route-based IPSEC VPN, the default setting prefers routes from the
route-based IPSEC VPN; however, ESXi and vMotion traffic will still
go over the Direct Connect private VIF connection.

• By enabling the Use VPN as backup to Direct Connect option under the Direct Connect settings, all routes learned over the Direct Connect private VIF will be preferred over routes learned over the route-based IPSEC VPN. ESXi and vMotion traffic will always go over the Direct Connect private VIF.

• VMware Cloud on AWS SDDC has a default route to the AWS IGW,
allowing workloads to have Internet connectivity if allowed through
the Edge FW. It is possible to also advertise a default route from
on premises over the Direct Connect private VIF or IPSEC VPN.
With this configuration, all traffic – including Internet-bound traffic
– will be routed back on premises.
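The route-preference behavior in the bullets above can be summarized as a simple selection function. This is a conceptual sketch only, not the actual implementation; the traffic-type names are assumptions for illustration, and real route selection involves more state than shown.

```python
# Conceptual sketch of the route-preference behavior described above.
# Not the actual implementation; traffic-type names are illustrative.
def preferred_path(traffic: str, route_on_dx: bool, route_on_vpn: bool,
                   vpn_as_backup: bool) -> str:
    """Pick the egress path for a destination learned over DX and/or VPN."""
    # ESXi management and vMotion traffic always use the Direct Connect
    # private VIF when it is present.
    if traffic in ("esxi-management", "vmotion") and route_on_dx:
        return "direct-connect"
    if route_on_dx and route_on_vpn:
        # Default: VPN-learned routes win. With "Use VPN as backup to
        # Direct Connect" enabled, DX-learned routes win instead.
        return "direct-connect" if vpn_as_backup else "vpn"
    if route_on_dx:
        return "direct-connect"
    return "vpn" if route_on_vpn else "no-route"

print(preferred_path("workload", True, True, False))  # -> vpn
print(preferred_path("workload", True, True, True))   # -> direct-connect
print(preferred_path("vmotion", True, True, False))   # -> direct-connect
```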



Network Segments
NSX network segments can be created in the VMware Cloud on AWS
SDDC from the console as illustrated in Figure 32. Benefits include:

• User creation of routed, extended, or isolated networks

• The NSX DHCP server can be used, or DHCP Relay can be configured to use a third-party DHCP server

Figure 32  NSX Network Segments

Creation of a network segment requires designating it as routed, extended, or disconnected, as shown in Figure 33.

Figure 33  NSX Network Segment Types

Routed Network
A routed network is connected to the CGW. The distributed component
of the CGW is the gateway for the workloads on each respective network
segment. The network is routed to other networks attached to the CGW.

Extended Network
This network type is extended on premises via L2VPN. The gateway for
an extended network is always on premises as the extended network is
not connected to the CGW. An L2VPN server must be configured to
create an extended network. Creation of an extended network without an
L2VPN server configured will result in the alert shown in Figure 34. The
extended network is attached to the T0 router.

Figure 34  L2VPN Tunnel Needed Before Creation of Extended Network

Disconnected Network
A disconnected network is not connected to the CGW. This isolated
segment is used by HCX – a VMware technology for extending layer 2 on
premises – so a field for Gateway / Prefix Length is present. This field
serves as a reminder of the IP address of the gateway used by the HCX
extended network; this gateway is always on premises.

Since the disconnected network is not connected to the CGW, it is also isolated. Workloads on other network segments cannot reach workloads on the isolated network segment; only workloads on the same isolated network segment can talk to each other.

Figure 35  Creating a Disconnected Network



DHCP
DHCP can be enabled per segment as part of network creation. This
functionality is disabled by default for new network segments but can be
toggled as shown in Figure 36.

Figure 36  Enabling NSX DHCP

The DHCP server sits on a network segment hidden from the GUI; this
allows users to set up DHCP for any network segment. If preferred, static
IPs can be used for workloads instead of DHCP. A combination of static
IPs and DHCP is also possible, in which case the static IPs should be
excluded from the DHCP IP range.
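The point about excluding static IPs from the DHCP range can be illustrated with a small validation check using Python's `ipaddress` module. The segment, pool boundaries, and addresses are example values, not anything produced by the SDDC.

```python
import ipaddress

# Example values only: check that statically assigned IPs fall outside
# the DHCP pool so the DHCP server never hands out a conflicting address.
segment = ipaddress.ip_network("192.168.10.0/24")
dhcp_start = ipaddress.ip_address("192.168.10.100")
dhcp_end = ipaddress.ip_address("192.168.10.200")

def conflicts_with_dhcp(ip_str: str) -> bool:
    ip = ipaddress.ip_address(ip_str)
    assert ip in segment, f"{ip} is not on this segment"
    return dhcp_start <= ip <= dhcp_end

print(conflicts_with_dhcp("192.168.10.10"))   # -> False: safe static IP
print(conflicts_with_dhcp("192.168.10.150"))  # -> True: overlaps the pool
```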

Figure 37  Creating NSX Network Segment Using NSX DHCP

DHCP Relay
When using an external DHCP server, DHCP Relay can be configured
under the DHCP tab as shown in Figure 38.

Figure 38  NSX DHCP Relay

Figure 39 displays a configuration with an external DHCP server. By default, the Attach checkbox is unchecked, so there is no connectivity to the external DHCP server. Once the Attach checkbox is checked, DHCP Relay is enabled and requests will flow to the external DHCP server. It is not possible to use both the included NSX DHCP Server and an external DHCP server via DHCP Relay; DHCP Relay cannot be enabled if there are network segments using the NSX DHCP Server.

Figure 39  NSX DHCP Relay Configuration



DNS
VMware Cloud on AWS SDDC provides DNS for both the MGW and CGW.
As shown in Figure 40, the DNS tab in the console displays the default
DNS configuration for both gateways.

Figure 40  NSX DNS

Multiple DNS zones can be created for the CGW DNS. An example is
provided in Figure 41.

Figure 41  Multiple DNS Zones for CGW DNS

By default, the vCenter FQDN resolves to the public IP address of vCenter. Private IP address resolution is required to access vCenter over a Direct Connect private VIF or IPSEC VPN from on premises. This example was illustrated in the Access to vCenter section in Chapter 4.

In this case, the default setting for vCenter FQDN under the Settings tab must be changed, as shown in Figure 42, to resolve to the private IP address.

Figure 42  Changing DNS Resolution to Private IP under Settings Tab

Manual modifications of DNS settings are not required; VMware Cloud on AWS automatically manages the vCenter FQDN, updating information through DynDNS.



NAT, Public IPs, and Internet Access
Internet Access for Workloads
Chapter 4 provided details on accessing vCenter over the Internet using
its public IP address, and over an IPSEC VPN or Direct Connect private VIF
connection via its private IP address. Accessing vCenter is one of the first
actions usually taken once a VMware Cloud on AWS SDDC is deployed.
This section looks at components supporting compute traffic, including
NAT, public IPs, and Internet access.

By default, SNAT is configured for all workloads in a VMware Cloud on AWS SDDC. To provide a secure environment, all communication to the Internet is blocked via the CGW firewall. Access through this firewall must be explicitly configured to allow Internet access for these workloads.

Because SNAT is already configured to translate the source IP of the VM to the public IP assigned to the VMware Cloud on AWS SDDC, there is no need to configure NAT for outbound traffic. While a default source NAT rule exists, it is not visible in the console. The default translated public IP address is visible on the Overview section under the Networking & Security tab, shown in Figure 43 with the label Source NAT Public IP.

Figure 43  CGW Source NAT Public IP

In this SDDC, the public IP used by the default SNAT rule for workloads is
35.160.216.100.

When deploying a workload with a private IP address on any of the network segments in Figure 44, the default SNAT rule will be used. The public IP address of the workload will be the Source NAT Public IP, 35.160.216.100.

Figure 44  Created NSX Network Segments

Figure 45 visualizes this concept where a VM with the private IP address 10.72.31.14 uses the public IP address 35.160.216.100 when accessing the Internet. The policy allowing this traffic has been configured on the CGW firewall and applied to the Internet interface. Chapter 6 goes into more detail on CGW firewall rules and respective interfaces.
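A many-to-one SNAT translation like the one described can be modeled conceptually as follows, reusing the example addresses above. This is a teaching sketch, not the NSX data path; the sequential port-allocation scheme is a deliberate simplification.

```python
# Conceptual many-to-one SNAT model using the example addresses above.
# Not the NSX data path: real SNAT also tracks protocol, state timeouts,
# and reclaims ports.
SNAT_PUBLIC_IP = "35.160.216.100"

nat_table = {}      # (private_ip, private_port) -> public source port
next_port = 40000   # simplified sequential port allocation

def snat_outbound(private_ip: str, private_port: int):
    """Translate an outbound flow's source to the shared public IP."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return SNAT_PUBLIC_IP, nat_table[key]

# Two workloads on different segments share one public IP address.
print(snat_outbound("10.72.31.14", 51000))  # -> ('35.160.216.100', 40000)
print(snat_outbound("10.71.22.5", 51000))   # -> ('35.160.216.100', 40001)
```

The unique source ports are what allow return traffic for many private workloads to be demultiplexed through a single public IP.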

Figure 45  Compute Traffic Flow for Internet Access



It is also possible to do 1:1 NAT for workloads in VMware Cloud on AWS. If 1:1 NAT is configured, the public IP address used for the respective workload will be different from the default public IP address used by other workloads. The IP address used in this case will be the public IP address requested from the VMware Cloud on AWS console. This process is explained in the next section.

Accessing Workloads via Internet

For VMs that need to be exposed to the Internet (e.g., public web servers), it is necessary to request a public IP address from the VMware Cloud on AWS console and configure 1:1 NAT.

Figure 46 shows the interface used to request a new public IP address; the
Notes field can be used to help remember how the public IP address is used.

Figure 46  Requesting Public IP

When a public IP address is requested from the SDDC console, the
respective public IP allocated will be shown as in Figure 47.

Figure 47  Allocated Public IP

Please note, public IPs are not free. VMware passes the standard AWS
cost of a public IP on to customers. The public IP address fee is shown on
the VMware Cloud on AWS bill, not on the standard AWS bill. Please refer
to AWS documentation for fees related to public IPs.

Once the public IP is allocated, it can be connected with NAT to the private IP for the VM. For the policy set in Figure 48, all the traffic from the Internet to the destination IP 52.88.100.13 over port 80 will be translated to the destination 10.72.31.1, and vice-versa.

Figure 48  NSX NAT Configuration

It is important to configure a firewall rule that allows Internet access to the private IP address of the VM. By default, the firewall rule needs to refer to the private IP address, not the public IP address, as NAT is done before inbound firewalling.
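The inbound order of operations, DNAT first and then the firewall evaluated against the internal address, can be sketched as below. The addresses come from the example above; the rule model is a simplification and not the actual NSX pipeline.

```python
# Simplified inbound pipeline: 1:1 DNAT first, then firewall evaluation
# against the translated (private) address. Example addresses from the
# text; not the actual NSX pipeline.
DNAT = {("52.88.100.13", 80): ("10.72.31.1", 80)}   # public -> private
ALLOWED = {("10.72.31.1", 80)}  # firewall matches the internal address

def inbound(dst_ip: str, dst_port: int) -> str:
    # Step 1: destination NAT.
    dst_ip, dst_port = DNAT.get((dst_ip, dst_port), (dst_ip, dst_port))
    # Step 2: the firewall sees the post-NAT (private) destination.
    if (dst_ip, dst_port) in ALLOWED:
        return f"forwarded to {dst_ip}:{dst_port}"
    return "dropped"

print(inbound("52.88.100.13", 80))   # -> forwarded to 10.72.31.1:80
print(inbound("52.88.100.13", 443))  # -> dropped (no NAT or firewall match)
```

This ordering is why the rule in the text references the private address: by the time the firewall evaluates the packet, the public destination has already been rewritten.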



Figure 49 shows the order of operation for incoming traffic from the
Internet to the web server, while Figure 50 details the reverse traffic flow.

Figure 49  Traffic Flow for Incoming Traffic from the Internet to Compute

Figure 50  Traffic Flow for Outgoing Traffic from the Compute to Internet

Under the Firewall column of the NAT rule, the default is Match Internal
Address. By clicking on Match Internal Address, as shown in Figure 51,
it is possible to change the default behavior so that the match is made on
the external address or public IP.

Figure 51  CGW Firewall Behavior Configuration

Figure 52 displays the rule created on the CGW firewall to allow access to the web server from the Internet. The term Web in the destination field is a Grouping Object with membership based on the IP address of the web server.

Figure 52  Applied CGW Firewall Rule on Internet Interface



NSX Connectivity
Direct Connect
AWS Direct Connect (DX) establishes a private, dedicated network
connection from on premises to AWS. It is a private circuit from the
customer data center to the AWS data center in the specific region.

AWS DX routers or other routers connected directly to the AWS backbone are located at specific DX locations/colocation/ISP facilities. An example is shown in Figure 53 where a customer is connecting from on premises directly into the AWS DX router at the DX location.

Figure 53  Hybrid Cloud Design with Direct Connect

As shown in Figure 54, customers may also have a switch/router sitting at the DX location. In this case, a cross-connect can be made between the customer and AWS devices. An ISP is providing connectivity (e.g., MPLS/VPLS) from the customer's premises to the DX location where the cross-connect hooks into the AWS backbone via Direct Connect.

Figure 54  Hybrid Cloud Design with Direct Connect and Customer Router at DX Location

To start deploying Direct Connect, log into the AWS portal and request
a Direct Connect connection. Select the origin DX location/facility for the
Direct Connect connection and set the port speed. This will create a
Letter of Authorization (LOA) which can be provided to the DX location
facility to run the cross-connect.

Direct Connect benefits include:

• increased bandwidth throughput

• decreased latency

• a more consistent network experience than Internet-based connections

• reduced egress data charges

Direct Connect can be established with either a public or private VIF. Once the Direct Connect connection is established, select the VIF type from the AWS console. Specific information is required for the VIF configuration.

Details for each VIF type are explained below. In this context, a connection means the physical circuit, while a virtual interface (VIF) means a logical interface on an AWS virtual router.

Public Virtual Interface (Public VIF)

• Private dedicated connection to the AWS backbone

• Uses public IP address space and terminates at the AWS region level

• Reliable, consistent connectivity with dedicated network performance to connect to AWS public endpoints (e.g., EC2, S3, RDS)

• Receive Amazon's global IP routes via BGP and access publicly-routable Amazon services

Private Virtual Interface (Private VIF)

• Private dedicated connection to the AWS backbone

• Uses private IP address space and terminates at the customer VPC level

• Reliable, consistent connectivity with dedicated network performance to connect directly to the customer VPC

• AWS advertises the entire customer VPC CIDR via BGP

• AWS public endpoint services are not accessible over a private VIF



Customers typically prefer a Direct Connect private VIF because it
terminates directly within the VPC of the VMware Cloud on AWS SDDC.
This allows all traffic – ESXi management, vMotion, cold migration,
management appliance, and workload traffic – to be carried natively
over a Direct Connect private VIF.

When using a Direct Connect public VIF to send traffic to VMware Cloud
on AWS SDDC, a VPN is required on the connection since it terminates at
the region level.

Direct Connect physical ports can be either 1G or 10G links, but this bandwidth can be broken up or aggregated as desired. Two different contract methods are supported:

Dedicated Ports

• Full port speed reserved for customers

• Ordered from AWS

• Supports multiple virtual interfaces

Hosted connections

• Provided on a partner interconnect

• Hosted connection has a defined bandwidth and VLAN. Traditional offerings provide a slice of a 1G pipe (e.g., 50 Mbps, 100 Mbps, etc.) to an end customer, with the provider breaking down a 1G or 10G pipe into smaller chunks for multiple customers.

• AWS recently announced that some selected AWS Direct Connect partners can offer higher capacity hosted connections – 1, 2, 5, or 10 Gbps.

• Hosted connection supports a single virtual interface

The last point is important in the context of VMware Cloud on AWS. As a hosted connection only supports a single VIF and a dedicated VIF is required for VMware Cloud on AWS, a hosted connection will only work for either native AWS or VMware Cloud on AWS, not both. This limitation does not apply to dedicated ports as multiple VIFs can reside on top of a dedicated port.

It is also possible to use link aggregation groups (LAGs) with AWS; more
details on this are provided below.

Link Aggregation Group (LAG)

• Multiple 1G or 10G ports can be bundled into a single managed connection

• Traffic will load balance across these links, per flow

• 4 links are supported per LAG
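Per-flow load balancing as described above typically hashes flow identifiers so that all packets of one flow stay on one member link. The sketch below shows the general technique with a stable hash; it is illustrative only and is not how AWS implements LAG hashing.

```python
import hashlib

# Illustrative per-flow link selection for a 4-link LAG. All packets of
# the same flow hash to the same link; this shows the general technique,
# not AWS's actual hashing algorithm.
NUM_LINKS = 4

def pick_link(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Map a 5-tuple flow to a member link index, deterministically."""
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(flow).digest()
    return int.from_bytes(digest[:4], "big") % NUM_LINKS

# The same 5-tuple always maps to the same member link.
a = pick_link("10.3.1.5", "10.72.31.14", 51000, 443)
b = pick_link("10.3.1.5", "10.72.31.14", 51000, 443)
print(a == b)  # -> True: flow affinity preserved
```

One practical consequence of per-flow hashing is that a single large flow cannot exceed the bandwidth of one member link, even though the LAG aggregates several.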

Direct Connect locations have resilient and diverse paths back to AWS
backbone. Using Direct Connect with VMware Cloud on AWS provides high
bandwidth, low latency connectivity to an SDDC deployed in VMware Cloud
on AWS. It can provide connectivity from on premises directly into a VMware
Cloud on AWS SDDC, delivering consistent connectivity and performance
while enabling live migration/vMotion from on premises to the cloud.

AWS Direct Connect can carry all traffic from on premises to a VMware
Cloud on AWS SDDC, and vice-versa. This includes ESXi management,
vMotion, cold migration, management appliance, and workload traffic.

Figure 55 provides an example deployment using a Direct Connect private VIF. In this example, when a Direct Connect private VIF is established, a number of networks are automatically advertised on premises. These include:

• ESXi management subnet, management appliance network, and another infrastructure subnet used by VMware Cloud on AWS. These networks are the first three subnets in the box in Figure 55. They were carved out of the infrastructure subnet provided during SDDC deployment – in this case a /23.

• NSX network segments. These networks are the last three subnets in
the box at the bottom of the diagram.

Figure 55  Direct Connect Private VIF Design



This example also demonstrates that route filtering and aggregation
can be done on premises. At a minimum, the infrastructure subnet
(10.2.0.0/23) and the NSX network segment subnet (10.3.0.0/16) should
be allowed to communicate on premises. Any on-premises route filtering
or security should allow this communication.
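A minimal on-premises route-filter check consistent with the example above can be written with the `ipaddress` module. The prefixes are the example values from the text; a real router would express this as prefix lists rather than Python.

```python
import ipaddress

# Example prefixes from the text: routes advertised by the SDDC that an
# on-premises route filter should, at a minimum, permit.
permitted = [ipaddress.ip_network("10.2.0.0/23"),   # infrastructure subnet
             ipaddress.ip_network("10.3.0.0/16")]   # NSX network segments

def route_accepted(prefix_str: str) -> bool:
    """Accept a received route only if it falls within a permitted block."""
    prefix = ipaddress.ip_network(prefix_str)
    return any(prefix.subnet_of(allowed) for allowed in permitted)

print(route_accepted("10.3.10.0/24"))  # -> True: inside the segment block
print(route_accepted("10.9.0.0/16"))   # -> False: filtered out
```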

Direct Connect uses BGP to exchange routes, using private and public
virtual interfaces for BGP peering. In the context of a Direct Connect
connection, there will be an external BGP (eBGP) session between two
different ASNs: one belonging to the customer and one belonging to AWS.

Direct Connect sessions in a VMware Cloud on AWS environment use BGP private ASN 64512 as the default local ASN. The local ASN is editable, so any private ASN can be used (64512 to 65534). If ASN 64512 is already in use on premises, a different ASN must be selected.
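The private ASN constraint mentioned above is simple to validate programmatically. The 16-bit private range used here (64512 to 65534) is the one stated in the text; the helper function itself is an illustration, not a VMware tool.

```python
# Validate a candidate local ASN against the 16-bit private range the
# text describes (64512 to 65534), and against the ASN already used
# on premises. Illustrative helper only.
PRIVATE_ASN_MIN, PRIVATE_ASN_MAX = 64512, 65534

def valid_local_asn(candidate: int, onprem_asn: int) -> bool:
    in_private_range = PRIVATE_ASN_MIN <= candidate <= PRIVATE_ASN_MAX
    return in_private_range and candidate != onprem_asn

print(valid_local_asn(64512, 65001))  # -> True: default ASN, no overlap
print(valid_local_asn(64512, 64512))  # -> False: overlaps on-premises ASN
print(valid_local_asn(64000, 65001))  # -> False: outside the private range
```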

Figure 56 shows the basic BGP configuration that is entered from the AWS portal. This configuration is pushed down to the respective components on the AWS side, then must be propagated to complete the local router connection to AWS Direct Connect. Once connectivity is established, the entire VPC CIDR, management appliance network, and NSX network segments are automatically advertised on premises. As of version 1.5, 16 network segments are supported for advertisement over a Direct Connect private VIF to on premises. 100 networks can be learned from on premises to the VMware Cloud on AWS SDDC, and the effective number of reachable networks can be increased with appropriate use of on-premises route aggregation.

Figure 56  Direct Connect Private VIF Setup

Figure 57 is a screenshot of the Direct Connect tab in the VMware Cloud
on AWS console.

Items of note regarding Direct Connect setup include:

• The AWS Account ID. To use a Direct Connect private VIF in VMware
Cloud on AWS, create a hosted VIF in the AWS console and assign it
to the AWS account used by the associated VMware Cloud on AWS
SDDC. The only requirement in a VMware Cloud on AWS SDDC is to
accept the VIFs. Figure 57 shows two VIFs that were assigned to and
accepted by the respective AWS account used by the VMware Cloud
on AWS SDDC.

Figure 57  VMware Cloud on AWS SDDC - Direct Connect Private VIF Configuration



• It can take up to 30 minutes for BGP to come up; in most cases it is up within approximately 5 minutes.

• For all new SDDCs, the default BGP Local ASN is 64512. It should not
overlap with what is being used on premises and can be changed as
necessary, as shown in Figure 58.

• An earlier version of the offering used the public ASN 7224. This
configuration remains valid, but once it is changed to a private ASN it
cannot be changed back. VMware recommends use of a private ASN.

Figure 58  Direct Connect Private VIF - Editing BGP Local ASN

• Multiple Direct Connect private VIFs can be deployed in active/active
or active/standby mode, agnostic to the VMware Cloud on AWS SDDC.

• Traffic over Direct Connect is not encrypted. Where there is a
requirement to encrypt traffic, use an IPSEC VPN over the Direct
Connect connection. This is possible as a VMware Cloud on AWS
SDDC has both public and private IP addresses for use with a VPN.
If using a Direct Connect private VIF, the private IP address can be
used to set up the VPN over Direct Connect. In this configuration,
only compute and management appliance traffic will traverse the IPSEC
VPN; ESXi management and vMotion traffic always go over Direct
Connect private VIF, when present. This is illustrated in Figure 59.

Figure 59  Encrypting Traffic with Route Based IPSEC VPN Over Direct Connect Private VIF

• When using both a route-based IPSEC VPN and a Direct Connect
private VIF, routes learned over the route-based IPSEC VPN are
preferred. The exceptions are ESXi management traffic and vMotion
traffic, which will always go over the Direct Connect private VIF.
Enable the Use VPN as backup to Direct Connect option to prefer
routes learned via the Direct Connect private VIF over those from the
route-based IPSEC VPN. When this option is enabled, the VPN
becomes the backup for management appliance and compute traffic,
as illustrated in Figures 60 and 61.

Figure 60  Direct Connect Private VIF - Configuring VPN as Backup

Figure 61  Direct Connect Private VIF and VPN as Backup Design

With a VMware Cloud on AWS SDDC and a Direct Connect private VIF,
Direct Connect terminates on an AWS Virtual Private Gateway (VGW), not
on the NSX Edge. The VGW learns the routes from on premises and the
NSX Edge route table is updated via API calls. This is illustrated in the
diagram in Figure 62. For this reason, it is not possible to advertise
different BGP metrics over a Direct Connect private VIF to influence
the routes; the metrics are never passed to the NSX Edge.

Figure 62  Direct Connect Private VIF - BGP Behavior

For routing purposes with a Direct Connect private VIF, it is
important to remember the following points:

• there is no route control via BGP metric with Direct Connect

• when a Direct Connect private VIF is attached, ESXi management and
vMotion traffic will always go over the Direct Connect connection

• when there are multiple routes, the route with the longest prefix
match is always used

• by default, routes learned via a route-based IPSEC VPN are preferred
over routes learned from a Direct Connect private VIF

• by default, if the same network is advertised over a route-based IPSEC
VPN and a Direct Connect private VIF, the route learned from the
route-based IPSEC VPN will be used

• when the Use VPN as backup to Direct Connect option is enabled,
routes learned over the Direct Connect private VIF are preferred over
routes learned via the route-based IPSEC VPN; in this case, the
route-based IPSEC VPN becomes the backup for management appliance
and compute traffic
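These selection rules can be condensed into a short sketch. The function name and sample routes below are invented for illustration; the logic mirrors the points above (longest prefix match first, then path preference, flipped by the backup option). ESXi management and vMotion traffic always use the Direct Connect private VIF and are not modeled here.

```python
import ipaddress

# Sketch of the SDDC egress route selection described above.
# Each route is (prefix, source), where source is "vpn" or "dx".
def select_route(dest_ip, routes, vpn_as_backup=False):
    """Longest prefix match wins; on a tie, the route-based VPN is preferred
    unless 'Use VPN as backup to Direct Connect' is enabled."""
    ip = ipaddress.ip_address(dest_ip)
    candidates = [(ipaddress.ip_network(p), src) for p, src in routes
                  if ip in ipaddress.ip_network(p)]
    if not candidates:
        return None
    preferred = "dx" if vpn_as_backup else "vpn"
    # Sort by prefix length, then by whether the source is the preferred path.
    candidates.sort(key=lambda c: (c[0].prefixlen, c[1] == preferred))
    return candidates[-1]

routes = [("10.10.0.0/16", "vpn"), ("10.10.0.0/16", "dx"),
          ("10.10.1.0/24", "dx")]
print(select_route("10.10.1.5", routes))        # /24 via dx: longest prefix wins
print(select_route("10.10.9.9", routes))        # /16 via vpn: default preference
print(select_route("10.10.9.9", routes, True))  # /16 via dx: backup mode
```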

When accessing vCenter from on premises over Direct Connect, the
private IP address of vCenter is used; change the DNS resolution to
resolve to the private IP address of vCenter. See the Accessing VMware
Cloud on AWS SDDC section in Chapter 4 for more details; both public
and private access to vCenter are discussed.

From the VMware Cloud on AWS console, both networks advertised from
the SDDC to on premises and those learned from on premises to the
SDDC are visible. This is shown in Figure 63.

Figure 63  Direct Connect Private VIF - Advertised and Learned Routes

In this example, a default route is advertised from on premises. This
routing will prevent connections to vCenter over the Internet using the
public IP address. In this case, the only way to access vCenter is to
connect from on premises via the private IP address of vCenter.

Another valid design intentionally advertises a default route from on
premises to route all traffic back on premises, even for Internet access.
This is depicted in Figure 64. This may be done due to compliance
mandates or a desire to process all traffic through on-premises security
appliances.

Figure 64  Direct Connect Private VIF - Advertising Default Route from On Premises

Route-Based IPSEC VPN


VMware Cloud on AWS provides IPSEC VPN capabilities. Both policy-
based IPSEC VPN and route-based IPSEC VPN are supported. Route-
based IPSEC VPN is the preferred model. Reasons for this recommendation
are offered in this section, which is followed by a section discussing
policy-based IPSEC VPN design.

Route-based IPSEC VPN allows configuration of an IPSEC VPN from the
VMware Cloud on AWS SDDC to on premises or to another location (e.g.,
AWS VPC, VMware Cloud on AWS SDDC). Route-based IPSEC VPN runs
BGP over an IPSEC VPN tunnel. First the IPSEC VPN tunnel is established,
and then BGP is established on top over virtual tunnel interfaces (VTIs).
An example design is shown in Figure 65, where a route-based IPSEC
VPN is established from the VMware Cloud on AWS SDDC to the
Customer Data Center.

Figure 65  Route Based IPSEC VPN Design

In this example, the route-based IPSEC VPN tunnel can be established
over the Internet using a public IP address or via a private IP address
across a private connection (e.g., a Direct Connect private VIF). Figure 66
shows how to select either a public IP address or a private IP address
when configuring route-based IPSEC VPN.

Figure 66  Public and Private IP Address for Route Based IPSEC VPN

Figure 67 displays a fully configured route-based IPSEC VPN tunnel. The
IP addresses used for the BGP sessions can be selected as desired. In
this example, the BGP Local IP is the IP address assigned to the VTI
used by BGP. When configuring the BGP Local IP, a loopback interface is
created on the router connecting the CGW and MGW; BGP uses this
loopback interface to establish a session with its neighbor.

Figure 67  Route Based IPSEC VPN Configuration

The informational button next to the BGP Remote IP displays the details
shown in Figure 68. If the IP address range is within the 169.254.0.0/16
subnet, the appropriate Edge/Gateway firewall policies are automatically
created. This eliminates the need for manual creation of firewall rules on
the compute gateway firewall permitting BGP communication over the
VPN interface.
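The automatic firewall-rule behavior hinges on whether both BGP endpoints fall inside the link-local range; a minimal check, with a hypothetical helper name:

```python
import ipaddress

APIPA = ipaddress.ip_network("169.254.0.0/16")

def auto_firewall_rules(bgp_local_ip, bgp_remote_ip):
    """True when both BGP endpoints are in 169.254.0.0/16, in which case
    the Edge/Gateway firewall rules for BGP are created automatically."""
    return all(ipaddress.ip_address(ip) in APIPA
               for ip in (bgp_local_ip, bgp_remote_ip))

print(auto_firewall_rules("169.254.31.1", "169.254.31.2"))  # rules auto-created
print(auto_firewall_rules("10.0.0.1", "169.254.31.2"))      # manual rules needed
```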

Figure 68  Configuring BGP Local and Remote IP/Subnet

Using a route-based IPSEC VPN has a few advantages:

• Networks are automatically advertised and learned between the VMware
Cloud on AWS SDDC and the neighbor. This simplifies operations and
prevents errors; manual configuration is not needed since networks are
automatically added to or removed from the environment.

• A route-based IPSEC VPN provides redundancy; multiple VPNs can be
set up in either active/standby or ECMP mode. The behavior is
determined by BGP metric advertisements from the neighbor.
Figure 69 displays such a model.

There is only one public and one private IP address for VPN use in the
VMware Cloud on AWS SDDC. Even though there are two route-based
IPSEC VPN tunnels, the endpoint for the IPSEC VPN tunnel in the VMware
Cloud on AWS SDDC is the same. The two endpoints on premises are
different; the two tunnels each terminate on different hardware devices.

To create two tunnels to the on-premises data center, create two BGP
Local IPs or VTIs to peer with the two different on-premises neighbors.
Selection of the active and standby tunnels can be controlled via BGP
metrics from on premises.

Figure 69  Redundancy with Route Based IPSEC VPN Using Two BGP Local IPs/VTIs

It is also possible to achieve resiliency by using a hardware cluster on
premises, as shown in Figure 70. In this design, there is only one tunnel
and the hardware cluster on premises handles failover to other VPN
devices in the cluster.

Figure 70  Redundancy with Route Based IPSEC VPN Using Hardware Cluster On Premises

When the route-based IPSEC VPN tunnel to on premises is established, the
entire VPC CIDR is advertised along with the management appliance
network and NSX network segments, as shown in Figure 71. A default
route or specific subnets can be advertised from on premises.

If both a route-based IPSEC VPN and a Direct Connect private VIF are
configured, routes learned over the route-based IPSEC VPN are given a
better administrative distance than routes learned over the Direct Connect
private VIF, so routes learned over route-based IPSEC VPN will be
preferred. The exception is that ESXi management and vMotion traffic
always go over the Direct Connect private VIF connection.

Figure 71  Network Advertisements via Route Based IPSEC VPN from SDDC to On Premises

As mentioned in the Direct Connect section, if the Use VPN as backup to
Direct Connect option under the Direct Connect tab is enabled, routes
learned over the Direct Connect private VIF are preferred over routes learned
via route-based IPSEC VPN. With this option enabled, the route-based
VPN becomes the backup for management appliance and compute traffic.

As Figure 72 shows, a route-based IPSEC VPN can be used to connect to
any device with route-based IPSEC VPN capabilities. Standard BGP is
used to automate the learning and advertisement of networks over the
IPSEC VPN.

Figure 72  Route Based IPSEC VPN Connectivity to Different Endpoints/Devices

Figure 73 shows the DOWNLOAD CONFIG link used to download the
configuration parameters required for connecting both ends via the
route-based IPSEC VPN.

Figure 73  Route Based IPSEC VPN - DOWNLOAD CONFIG

The VIEW STATISTICS link, pictured in Figure 74, shows traffic statistics
and information useful in troubleshooting VPN connectivity issues.

Figure 74  Route Based IPSEC VPN - VIEW STATISTICS

There are also VIEW ROUTES and DOWNLOAD ROUTES links with
details on the advertised and learned routes. Figures 75 and 76 show
the advertised and learned routes from the VIEW ROUTES link. The
Advertised Routes tab displays the routes advertised from the SDDC.

Figure 75  Route Based IPSEC VPN - ADVERTISED ROUTES

The Learned Routes tab displays the routes learned by the SDDC.

Figure 76  Route Based IPSEC VPN - LEARNED ROUTES

Policy-based IPSEC VPN
In addition to route-based IPSEC VPN, VMware Cloud on AWS supports
policy-based IPSEC VPN. While a route-based IPSEC VPN is generally
preferred, policy-based IPSEC can be useful for cases where there is no
BGP configuration on the other end, as this prevents the use of a route-
based IPSEC VPN. However, since VPN devices usually sit at the edge of
data centers where BGP is typically running, this is rarely an issue.

The configuration for a policy-based IPSEC VPN is straightforward as
there is no BGP component. Enter the remote IP address and remote
networks, then select from a drop-down box all the local NSX network
segments which should be reachable.

Figure 77  Policy Based IPSEC Configuration

There are a few design considerations to keep in mind when using a
policy-based IPSEC VPN:

• Unlike a Direct Connect private VIF or a route-based IPSEC VPN, a
policy-based IPSEC VPN creates no entries in the route table. Route
table decisions do not take the policy-based IPSEC VPN into account.

• The policy-based IPSEC VPN configuration is applied at the global
level. If traffic matches the configured source and destination, the
policy-based IPSEC VPN path will be used, as the policy is matched
globally before route table lookup.
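This ordering (policy match first, route-table lookup second) can be sketched as follows; the networks, policies, and helper name are hypothetical:

```python
import ipaddress

def forwarding_decision(src_ip, dst_ip, policies, route_table):
    """Policy-based IPSEC VPN policies are evaluated globally before the
    route table; a source/destination match wins over any route entry."""
    s, d = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for local, remote in policies:
        if s in ipaddress.ip_network(local) and d in ipaddress.ip_network(remote):
            return "policy-based-vpn"
    # Fall back to longest-prefix-match route lookup.
    for prefix, next_hop in sorted(route_table,
                                   key=lambda r: ipaddress.ip_network(r[0]).prefixlen,
                                   reverse=True):
        if d in ipaddress.ip_network(prefix):
            return next_hop
    return "no-route"

policies = [("10.61.0.0/16", "192.168.10.0/24")]  # local segment -> remote network
route_table = [("192.168.10.0/24", "direct-connect"), ("0.0.0.0/0", "internet")]
print(forwarding_decision("10.61.1.5", "192.168.10.7", policies, route_table))  # policy wins
print(forwarding_decision("10.61.1.5", "8.8.8.8", policies, route_table))       # default route
```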

AWS Transit Gateway
At AWS re:Invent 2018, AWS introduced a regional construct called a
Transit Gateway (TGW). The AWS Transit Gateway allows customers to
easily connect multiple Virtual Private Clouds (VPCs). The TGW can be
seen as a hub with the VPCs as the spokes in a hub and spoke-type
model. Any-to-any communication is made possible by traversing the
TGW. The TGW can replace the popular AWS Transit VPC design
previously used to connect multiple VPCs. The TGW can also be used
with VMware Cloud on AWS SDDC.

TGW VPN attachments are used to connect to a VMware Cloud on AWS
SDDC. This VPN connectivity leverages the route-based IPSEC VPN. In
Figure 78, a TGW is deployed and connected to both VMware Cloud on
AWS SDDCs and native AWS VPCs. The connections from the TGW to the
VMware Cloud on AWS SDDCs use VPN attachments, leveraging the NSX
route-based IPSEC VPN; the connections from the TGW to native AWS
VPCs use VPC attachments, leveraging the native underlying ENI
connectivity to connect directly from the TGW to the VPC.

Figure 78  AWS Transit Gateway Design

Two clear advantages of TGW are evident:

1. Ease of creating any-to-any communication between native AWS VPCs
and VMware Cloud on AWS SDDCs

2. Ease of sharing on-premises connectivity with all connected VMware
Cloud on AWS SDDCs and native AWS VPCs

A TGW has a single route table by default; however, it is possible to
leverage multiple route tables to perform traffic engineering by
controlling routes between them. Attachments associated with different
TGW route tables can steer traffic either via static routes or route
propagation. Specific details of this advanced functionality are out of
scope for this book.
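At a high level, the association/propagation model can be sketched as below; the table and attachment names are invented for illustration:

```python
# Sketch of TGW route control: each attachment is associated with one TGW
# route table (used for its lookups) and may propagate its routes into
# one or more tables.
route_tables = {
    "shared": set(),    # table used by SDDC/VPC spokes
    "isolated": set(),  # table for an attachment that must not see the spokes
}

def propagate(table, attachment, prefixes):
    """Propagate an attachment's prefixes into a given TGW route table."""
    route_tables[table].update((p, attachment) for p in prefixes)

# Spokes propagate into the shared table only.
propagate("shared", "sddc-1", ["10.72.31.16/28"])
propagate("shared", "vpc-1", ["172.32.0.0/16"])

# An attachment associated with "isolated" sees none of the spoke routes.
print(sorted(route_tables["shared"]))
print(sorted(route_tables["isolated"]))
```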

In this example, there is a VPN connection from the TGW to the on-premises
data center. Because the on-premises VPN connection connects to a TGW,
it can be shared among all the spoke VMware Cloud on AWS SDDCs and
native AWS VPCs. This is depicted more clearly in the Figure 79.

Figure 79  AWS Transit Gateway Design - Shared Connectivity to On Premises

This design provides shared connectivity to on premises for all the
spokes while also supporting any-to-any communication.

It is possible that some spokes have Direct Connect connectivity but are
still connected to a TGW. This may be desired where workloads in a
VMware Cloud on AWS SDDC need to access shared services in multiple
other VPCs/SDDCs. This design is shown in Figure 80.

Figure 80  AWS Transit Gateway Design - Direct Connect Private VIF Used by Spoke SDDC

The VMware Cloud on AWS SDDCs in Figures 81 and 82 are attached to
a TGW via VPN. For these VPN attachments, a route-based IPSEC VPN
can be used in the VMware Cloud on AWS SDDC. In this design there are
two IPSEC VPN tunnels in active/standby mode per VMware Cloud on
AWS SDDC. Selection of active and standby tunnels is automatically
performed on the AWS side. ECMP is supported in VMware Cloud on
AWS SDDC starting in version 1.7.

Reflecting this design, the figures below show the creation of an AWS
Transit Gateway in the console. ECMP is automatically enabled when a
TGW is deployed; in this case, it was manually disabled on the TGW
because the design uses active/standby VPN and the VMware Cloud on
AWS SDDC was deployed before the 1.7 SDDC release. With the 1.7 SDDC
release, customers who want to leverage ECMP connectivity between the
TGW and a VMware Cloud on AWS SDDC should leave ECMP enabled.

Figure 81  AWS Console - Deployed Transit Gateway

Figure 82 details two VPN attachments, each connected to a different
VMware Cloud on AWS SDDC. There are also two VPC attachments,
each connected to a different native AWS VPC.

Figure 82  AWS Console - Transit Gateway Attachments

Selecting the Resource ID in the first row displays the VPN
connections established to the first VMware Cloud on AWS SDDC. There
is a customer gateway in AWS which has the public IP address for the
VMware Cloud on AWS SDDC 1 VPN. In Figure 83, the Tunnel Details tab
displays two AWS-provided public IP addresses and two admin-selected
inside IP CIDR blocks used for the virtual tunnel interfaces (VTIs).

BGP peers over these VTIs through an IPSEC VPN connection. This view
confirms that the tunnels are up and that the routes have been propagated
and learned automatically, both by the TGW and VMware Cloud on AWS
SDDC 1 via BGP.

Figure 83  AWS Console - Transit Gateway VPN Established to VMware Cloud on AWS SDDC 1

The same process is used to set up connectivity to VMware Cloud on
AWS SDDC 2. The customer gateway address is different because the
connection goes to a different VMware Cloud on AWS SDDC. Also note
that there are two different AWS public IPs and different inside IP address
CIDR blocks.

Figure 84  AWS Console - Transit Gateway VPN Established to VMware Cloud on AWS SDDC 2

In Figure 85, the subnet for VPC 1 is 172.32.0.0/16. For its attached VPC,
two route entries were manually created. To reach 10.72.31.16/28 – the
subnet of the App network segment in VMware Cloud on AWS SDDC 1
– traffic is sent through the TGW listed as the Target. To reach 10.61.4.0/28 –
the subnet of the Web network segment in VMware Cloud on AWS SDDC
2 – traffic is also sent through the TGW.

With VPC attachments, the routes are not propagated from the TGW to
the spoke VPCs; however, the routes from the VPCs are propagated to the
TGW as propagation is enabled by default on the TGW.

Figure 85  AWS Console - AWS VPC 1 Route Table

In Figure 86, the subnet of VPC 2 is 172.33.0.0/16, with VPC 2's route
table configured in the same manner as VPC 1's.

Figure 86  AWS Console - AWS VPC 2 Route Table

Figure 87 shows that all routes in the AWS Transit Gateway’s route table are
learned from the VMware Cloud on AWS SDDCs and native AWS VPCs.

Figure 87  AWS Console - Transit Gateway Route Table

The VMware Cloud on AWS SDDC also allows status verification of VPN
connections. Figure 88 is a screenshot from VMware Cloud on AWS
SDDC 1 showing VPN status.

Figure 88  VMware Cloud on AWS SDDC Route Based IPSEC Configuration to TGW

From the VMware Cloud on AWS console, users can also download or
view advertised routes and learned routes.

The routes in Figure 89 are learned by the VMware Cloud on AWS SDDC
from the AWS Transit Gateway via BGP over route-based IPSEC VPN.

Figure 89  VMware Cloud on AWS SDDC Learned Routes from TGW

Figure 90 displays another design where ECMP is used with a VMware
Cloud on AWS SDDC to allow for additional bandwidth and resiliency.

Figure 90  Route Based IPSEC VPN ECMP Design with TGW

It is easy to enable any-to-any communication by setting up connectivity
from a VMware Cloud on AWS SDDC to an AWS TGW. Further details and
a demo walkthrough of AWS Transit Gateway with VMware Cloud on AWS
are available at:

VMware Cloud on AWS with Transit Gateway: https://youtu.be/OJgGNCz4tVs

AWS Transit Gateway and multiple AWS accounts


To use a TGW with multiple accounts, log into the main AWS account
where the TGW was created and establish an AWS resource share.

Figure 91  AWS Console - AWS Resource Access Manager

After creating this resource share, add other AWS accounts. Resource
share invitations can be accepted manually or automatically; this is a
parameter set at TGW creation.

Once the resource share has been accepted, the VPC attachment
will appear.

Figure 92  AWS Console - Accepting Resource Share

At this stage, normal operations can proceed. New routes will be learned
from the newly-attached VPC and will appear in the TGW route table.

Figure 93  AWS Console - Transit Gateway Route Table Showing New Learned Route

L2VPN
VMware Cloud on AWS offers L2 extension via L2VPN. L2VPN
capabilities allow extension of L2 domains to the cloud to maintain
workload IP addresses and facilitate vMotion. Since IP addresses are
maintained via L2VPN, workloads can migrate to the cloud via vMotion
with no disruption. L2VPN can be used to extend L2 for on-premises
workloads that are on a VLAN or overlay; Figure 94 depicts an example.
Details of the specifics are explained in the remainder of this section.

Figure 94  L2VPN over Direct Connect Private VIF Design

Figure 95 displays L2VPN deployed with an NSX-V-based unmanaged
client on premises. The unmanaged client is an OVF appliance deployed
on an ESXi hypervisor to provide for L2VPN client functionality; it could
also be deployed in HA mode for extra resiliency. In this design, VLANs
are extended to the VMware Cloud on AWS SDDC and NSX is not present
or required on premises. L2VPN provides the bridging functionality between
the VLAN and the overlay in the VMware Cloud on AWS SDDC. The
bridging is facilitated by a Tunnel ID manually assigned to the VLAN/
overlay pair.
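Conceptually, each extension is a unique Tunnel ID mapped to a VLAN/overlay pair; a minimal sketch with hypothetical values:

```python
# Sketch: each extended network pairs an on-premises VLAN with an SDDC
# overlay segment via a manually assigned, unique Tunnel ID.
l2vpn_mappings = {}

def extend_network(tunnel_id, on_prem_vlan, sddc_segment):
    """Record a VLAN/overlay pairing, enforcing Tunnel ID uniqueness."""
    if tunnel_id in l2vpn_mappings:
        raise ValueError(f"Tunnel ID {tunnel_id} already in use")
    l2vpn_mappings[tunnel_id] = (on_prem_vlan, sddc_segment)

extend_network(10, on_prem_vlan=100, sddc_segment="Test")
print(l2vpn_mappings)
```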

Figure 95  Extending Networks to VMware Cloud on AWS SDDC with L2VPN
and No NSX On Premises

The OVF/appliance deployment uses a GUI. This tool presents a set
of questions to help extend the correct L2 domains upon deployment
without further configuration. Subsequent updates must be performed
via the CLI.

Figure 96 is a screenshot of the L2VPN server configured on a VMware
Cloud on AWS SDDC. In this example, the Test network segment is
extended to on premises.

Figure 96  Created L2VPN Server and Extended Network in VMware Cloud on AWS SDDC

As shown in Figure 97, when the Test extended network is created
through the VMware Cloud on AWS SDDC console, there is no option
for a gateway IP address as it is an extended segment; the gateway IP
address for extended networks is always on premises.

Figure 97  Creating an Extended Network

Figure 98 shows deployment of the unmanaged L2VPN client OVF. The
first step is to select an OVF deployment template.

Figure 98  Deploying Unmanaged L2VPN Client OVF On Premises

Upload the unmanaged L2VPN client OVF files to the on-premises ESXi
server. These files can be downloaded from the VMware software
download website.

Figure 99  Uploading the Unmanaged L2VPN Client OVF Files

In Figure 100, the following items are configured:

• Trunk: port group that trunks all the extended VLANs

• Public: port group with connectivity to VMware Cloud on AWS SDDC

• HA Interface: port group used by active and standby unmanaged
Edge clients for HA heartbeat

Figure 100  Selecting Networks During Unmanaged L2VPN Client Deployment

The next step of the OVF deployment configures additional settings
including default gateway, peer address, VLAN/Tunnel ID, and HA mode.
This configuration is shown in Figures 101 and 102.

Figure 101  Configuring IP Address/Subnet and Peer Address During Unmanaged L2VPN
Client Deployment

In Figure 102, the Enable TCP Loose Setting option is checked. It
should be enabled so that workloads with active TCP connections can be
moved via vMotion without incurring connection issues. The VLAN and
Tunnel ID are configured to the same value through a single field.

Figure 102  Configuring TCP Setting and VLAN/Tunnel ID During Unmanaged L2VPN Client Deployment

It is also possible to extend the overlay to the VMware Cloud on AWS
SDDC where NSX is installed on premises. In the design outlined in
Figure 103, NSX-V is deployed on premises and uses the L2VPN
capabilities of the managed Edge to extend VXLAN to the VMware Cloud
on AWS SDDC via L2VPN. In this deployment, all of the on-premises
configuration is API-based.

Figure 103  Extending Networks to VMware Cloud on AWS SDDC with L2VPN
and NSX-V On Premises

Connected VPC
During the SDDC deployment process, VMware creates an AWS account
for the customer and a dedicated VPC for the organization hosting the
SDDC. The customer must link its AWS account and provide a VPC with
one or more subnets to be linked to the VMware Cloud on AWS VPC. This
is referred to as the “connected VPC.” The connected VPC must meet a
few requirements:

• It must be in an availability zone (AZ) where VMware Cloud on AWS
resources are available. For example, in Virginia, there are six AZs,
but VMware Cloud on AWS is not available in all of them.

• The AWS-connected account should have enough capacity to create
17 Elastic Network Interfaces (ENIs).

• The AWS subnet(s) used for native AWS services should be associated
with the default route table.

Elastic Network Interface

Figure 104  Connected VPC via ENI Design

• Every ESXi host is a bare metal EC2 instance with a 25 Gbps network
interface.

• Every network interface has an associated ENI. In addition to the
maximum 16 ENIs – one for each host in a cluster – one additional
ENI is created for maintenance.

• Within a VMware Cloud on AWS cluster, one of the hosts runs the
active NSX Edge appliance; this is the host associated with the active ENI.

The active ENI is the only one with a secondary IP address. If there are
three nodes in the cluster, there will be three ENIs shown as "in-use,"
but only one ENI will be active. In Figure 105, the SDDC is a one-node
deployment.

Figure 105  AWS Console - Connected VPC Active ENI

AWS default route table


The CloudFormation stack executed during AWS account linkage gives VMware
Cloud on AWS access to the connected VPC's default route table. This allows
any new logical segments created on the CGW to automatically appear in
the route table. This also allows access to native AWS services like S3,
RDS, EC2, and EFS.

Figure 106  AWS Console - Connected VPC Route Table

Connected VPC Information
In the main VMware Cloud on AWS console, there is a tab detailing
information on the connected VPC.

Figure 107  VMware Cloud on AWS - Connected VPC Configuration

Figure 107 shows eu-central-1a as the availability zone of the
attached subnet in the VPC Subnet field. AWS resources should be
deployed in that subnet to avoid cross-AZ charges. Resources in other
subnets will still be able to access VMware Cloud on AWS SDDC
resources since the main route table is automatically updated and
includes all VMware Cloud on AWS SDDC subnets.

Chapter 6

NSX Security in VMware Cloud on AWS SDDC

The prior chapter explored the networking aspects of NSX in the VMware
Cloud on AWS SDDC. This chapter focuses on the security capabilities
NSX brings to the VMware Cloud on AWS SDDC. These security
capabilities include Role-Based Access Control (RBAC), Grouping Objects,
Security Tags, Edge/Gateway Firewall for North/South traffic, and
Distributed Firewall (DFW) for East/West traffic.

Role Based Access Control (RBAC)
RBAC allows users of the VMware Cloud on AWS portal to have different
levels of access.

Visibility of the Networking & Security tab, shown in Figure 108, requires
a user role of either NSX Cloud Auditor or NSX Cloud Admin.

Figure 108  VMware Cloud on AWS SDDC - Networking & Security Tab

Within Identity and Access Management, an admin can add users and
assign roles. NSX Cloud Auditor role and NSX Cloud Admin role are the
two roles specific to NSX networking and security; their permissions are
set as follows:

• NSX Cloud Auditor: read-only access to Networking and Security tab

• NSX Cloud Admin: read/write access to Networking and Security tab

Figure 109  VMware Cloud on AWS RBAC - Identity and Access Management

Grouping Objects
Grouping Objects, also called Security Groups, allow for criteria-based
static or dynamic identification of workloads in an SDDC. These grouping
objects can also be used in security policies. This capability is beneficial
as it allows creation of security policies that are not simply dependent on
IP address; policies can be created around more meaningful concepts or
metadata like VM Name or Security Tag.

Groups Based on IP Address


Groups can be based on a single IP address, a CIDR range, or both.
Figure 110 shows a group created with the subnet of 10.10.0.0/16 and the
unique IP 10.20.34.56. Any workload that has an IP address that falls
within the CIDR 10.10.0.0/16 will be a member of this group. Additionally,
if the workload has an IP address of 10.20.34.56, it will also become a
member of the group.
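The membership rule from Figure 110 can be expressed directly; a minimal sketch, with a hypothetical helper name:

```python
import ipaddress

GROUP_CRITERIA = ["10.10.0.0/16", "10.20.34.56"]  # CIDR and single IP from Figure 110

def in_group(vm_ip, criteria=GROUP_CRITERIA):
    """A workload is a member if its IP matches any CIDR or exact address;
    a bare address parses as a /32 network."""
    ip = ipaddress.ip_address(vm_ip)
    return any(ip in ipaddress.ip_network(c, strict=False) for c in criteria)

print(in_group("10.10.200.14"))  # member: inside 10.10.0.0/16
print(in_group("10.20.34.56"))   # member: exact match
print(in_group("10.20.34.57"))   # not a member
```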

Figure 110  NSX Grouping Object Based on IP Address

Groups Based on VM Instance
Groups can also be based on manually selected VMs, a process
sometimes referred to as statically selecting the VMs.

In Figure 111, a group with a single VM called WindowsM is created.
This VM was statically selected to be a member of the group.

Figure 111  NSX Grouping Object Based on VM Instance

Groups Based on VM Name


Groups can be created based upon the names of the VMs.

When using a consistent naming convention (e.g., COUNTRY-CITY-DC-PROD-APP-NUMBER),
a VM (e.g., UK-LONDON-DC-TEST-SQL-01A) could automatically, at creation,
become a member of a group and be assigned security policies.

In this example, as the name includes TEST, there may be a rule stating that
this group cannot talk to VMs that have names including PRODUCTION.
Additionally, as the name contains SQL, there may be a rule specifying that
only port 1433 for SQL is opened for that VM.

There are multiple advantages of building policies in this manner: VM
protection from day one, security consistency, and rules that read more
like business logic rather than simply containing IP addresses.

Figure 112 shows a grouping object named webSG. If a VM has web in its
name, it will automatically become part of this group. A security policy can
later be created using this group and only allow traffic on ports 443 or 80.

Figure 112  NSX Grouping Object Based on VM Name

Figure 113 shows that a group based on VM Name can have matching
criteria where the VM can either start with or simply contain a specific set
of characters.
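A sketch of these two matching criteria; the case-insensitive comparison is an assumption of this sketch, not a statement about the NSX implementation:

```python
def name_match(vm_name, value, criterion="contains"):
    """VM Name matching criteria: 'starts_with' or 'contains' (this sketch
    assumes case-insensitive matching, as with the webSG example above)."""
    vm, v = vm_name.lower(), value.lower()
    return vm.startswith(v) if criterion == "starts_with" else v in vm

print(name_match("UK-LONDON-DC-PROD-WEB-01A", "web"))                  # contains: match
print(name_match("UK-LONDON-DC-PROD-WEB-01A", "web", "starts_with"))   # starts with: no match
```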

Figure 113  Configuring NSX Grouping Object Based on VM Name

Groups Based on Security Tags


Tags have become very popular in the cloud era. They have been
especially useful in the AWS world to operate a cloud platform at scale.
Tags can be used to identify cloud entities as well as to verify costs and
charges based upon these tags. They can represent business units,
tenants, countries, or environments (e.g., test, dev, prod).

In VMware Cloud on AWS, tagging allows a user to assign tags to VMs.
Based on these tags, VMs can automatically be made part of a group used
for security policies. Creating groups based on security tags rather than
VM Name provides more control to the NSX Security Admin, and only
users with NSX Cloud Admin or read/write access can change the security
tag. With VM Name, it is possible for someone who has access to vCenter
to change the VM Name, circumventing security policies.

In Figure 114, VMs are tagged with the name of one of the authors, nico. VMs
are tagged from the GUI located at Inventory->Groups->Virtual Machines.

Figure 114  VMs and Associated NSX Security Tags

Figure 115 shows creation of a group with matching criteria stating that any
VM that has a tag of nico is a member.
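Tag-based membership amounts to a simple lookup; the inventory below is hypothetical, loosely mirroring Figure 114:

```python
def tag_members(vms, required_tag):
    """Return the VMs carrying the given NSX security tag."""
    return [name for name, tags in vms.items() if required_tag in tags]

# Hypothetical inventory: VM name -> set of security tags
vms = {"web-01": {"nico", "prod"}, "app-01": {"nico"}, "db-01": {"prod"}}
print(sorted(tag_members(vms, "nico")))  # both VMs tagged nico are members
```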

Figure 115  Configuring NSX Grouping Object Based on NSX Security Tag

Figure 116 displays the membership criteria of this group.

Figure 116  Configuring Matching Criteria for NSX Grouping Object Based on NSX Security Tag

Once the group is created, membership visibility is provided so users can
easily determine which workloads are members.

Clicking the three dots to the left of the grouping object brings up the
menu in Figure 117.

Figure 117  Feature to View Members of a NSX Grouping Object

View Members will display the VMs that are members of the group.
View References will show what security policies are using the group.

Figure 118 provides an example showing members of a group created based on a security tag.

Figure 118  Results of Viewing Members of a NSX Grouping Object

Figure 119 depicts all the options available when defining grouping objects.

Figure 119  Available Options for Defining NSX Grouping Objects

When the grouping objects are defined, they can then be used in security/
firewall policies, as shown in Figure 120.

Figure 120  Creating Security/Firewall Policies Using NSX Grouping Objects

Management Groups and Workload Groups


Figure 121 shows that Grouping Objects can be created under either
Management Groups or Workload Groups. Management Groups can be
used under MGW firewall policies, while Workload Groups can be used in
DFW and CGW firewall policies. Management Groups only support IP
addresses as they are infrastructure-based. Predefined Management
Groups already exist for vCenter, ESXi hosts, and NSX Manager. Users
can also create groups based on IP address for on-premises ESXi hosts,
vCenter, and other management appliances. Management Groups are
shown in Figure 121.

Figure 121  NSX Management Groups

Workload Groups, shown in Figure 122, can use all the matching criteria
previously discussed such as IP address, VM Instance, VM Name, and
Security Tags.

Figure 122  NSX Workload Groups

Edge Firewall
NSX Edge firewall (also called Gateway Firewall in VMware Cloud on AWS)
provides security for north/south traffic in and out of the VMware Cloud
on AWS SDDC. There are two Edge firewalls – the MGW firewall and the
CGW firewall. The MGW firewall protects management traffic and controls
access to management components like vCenter. The CGW firewall
protects compute traffic and controls access to compute workloads.

The logical diagram in Figure 123 shows that CGW firewall rules can be
selectively applied to specific interfaces, providing connectivity to the external
environment. These interfaces include Internet, Direct Connect private VIF,
customer-native AWS or connected VPC, and VPN. The MGW firewall rules
control who and what can access vCenter and the management network.

Figure 123  Applying CGW and MGW Firewall Rules and Design

Figure 124 presents rules implemented on the CGW. CGW firewall rules
can use Grouping Objects based on the different matching criteria discussed
earlier in the chapter: IP address, VM Instance, VM Name, and Security
Tag. Workload Groups are used in CGW firewall rules.

The screen shot also contains a Security Group called Connected VPC
Prefixes. This Security Group is automatically created and has the Connected
VPC prefixes as members. This allows easy creation of security policies for
access to AWS services running in the Connected VPC.

The Applied To field identifies which specific interfaces the policies are
applied to as described in Figure 123’s logical diagram.

Figure 124  Using ‘Apply To’ to Apply CGW Firewall Policies on Specific Interfaces

Figure 125 shows MGW firewall rules using Management Groups. MGW
firewall rules only use groups based on IP addresses. These rules are
typically based on infrastructure or access to management components
like vCenter. As mentioned earlier, predefined management groups
already exist for vCenter, ESXi hosts, and NSX Manager. Users can also
create groups based on IP address for on-premises ESXi hosts, vCenter,
and other management appliances.

Figure 125  MGW Firewall Rules Using Management Groups

There are predefined management groups and user defined management
groups. When selecting a Source or Destination for an MGW firewall rule,
there are three options as shown in Figure 126 – Any, System Defined
Groups, and User Defined Groups. System Defined Groups simplifies
creation of common MGW firewall rules. User Defined Groups allows
creation of custom groups based on IP address, where membership can
include on-premises workloads (e.g., those which require access to vCenter).

Figure 126  MGW System Defined Groups

Distributed Firewall (DFW)


This section provides a deep dive on the VMware Cloud on AWS distributed
firewall.

Distributed Firewall Concepts


The DFW is an essential feature of NSX Data Center. It provides the ability
to wrap a virtual firewall around virtual machines. The virtual firewall is a
stateful layer 4 firewall, capable of inspecting traffic up to layer 4 of
the OSI model. It can inspect and filter traffic based on IP address – both
source and destination – and TCP/UDP port.

The NSX DFW provides the capability to implement micro-segmentation. Micro-segmentation is the ability to segment the network and apply security policies extremely close to the application – at the vNIC level. Granular security policies can be applied at the vNIC level of VMs, allowing for segmentation within the same L2 network or across separate L3 networks. A picture of this is shown in Figure 127 along with several benefits of the practice.

Figure 127  NSX DFW - Micro-segmentation and Design

The NSX DFW presents a contextual view of the virtual data center. It
can secure workloads based on meaningful metadata rather than simply
source and destination IP addresses. This can be done using Grouping
Objects discussed earlier in this chapter. These Grouping Objects are
called Workload Groups and can be based off of IP address, VM Instance,
VM Name, or Security Tag.

Traditional firewalling is based on source and destination IPs, constructs that have no business logic or application context. Using Grouping Objects, an NSX DFW can secure workloads based on higher-level criteria such as VM Name or other metadata such as security tags. This enables building of security policies based on business logic using security tags or naming conventions. Common examples include physical location, business application, or workload type (e.g., test, dev, prod).

Figure 128  NSX DFW - East/West Traffic Firewalling and Design

The NSX DFW provides east/west traffic firewalling within the data center. It
accomplishes this through micro-segmentation, helping reduce the impact
of security breaches and achieve compliance targets. The NSX DFW’s
powerful capabilities enable use cases such as advanced security with
micro-segmentation, isolation, multi-tenancy, and DMZ Anywhere designs.

Advanced Security with Micro-segmentation


NSX DFW security policies are deployed at the vNIC level, securing
the network by segmenting it close to the applications being protected.
This micro-segmentation capability also prevents hair-pinning traffic to a
perimeter device, improving not only security but also performance by
decreasing latency.

Isolation
Isolation of different workloads is achieved by leveraging micro-segmentation
capabilities and preventing communication between different environments
(e.g., production, test).

Multi-tenancy
NSX DFW allows for migration of different tenants to the cloud while
preventing communication between them. Since NSX DFW permits
applying policies at the vNIC level of workloads, tenants can easily be
migrated to the cloud while maintaining separation. As of SDDC version
1.7, overlapping IP addresses are not supported due to the common
connection to a single CGW.

DMZ Anywhere
Since security policies can be applied granularly with the DFW, this allows for
creation of DMZs anywhere by applying the respective policies to workloads
in specific Grouping Objects. It is straightforward to control what is accessible
by the Internet and enforce rules on communication between zones.

Additional information is available on this concept in the Micro-Segmentation for Dummies book.

Distributed Firewall on VMware Cloud on AWS


Without a NSX DFW, the VMware Cloud on AWS SDDC is essentially one
flat zone:

• All VMs on a logical segment can talk to each other.

• A VM on a logical segment can talk to a VM on a different logical segment.

• There is no east/west security.

This might be acceptable for customers using a VMware Cloud on AWS
SDDC to host a specific application or for test/dev workloads, but as
customers evacuate entire data centers and migrate workloads to VMware
Cloud on AWS or want to run a DMZ within a VMware Cloud on AWS
SDDC, they often require more advanced security and some additional
level of segmentation.

The GUI presented in Figure 129, displaying the Distributed Firewall tab,
is available under the Networking and Security tab within the VMware
Cloud on AWS console.

Figure 129  NSX DFW GUI

Users do not have direct access to the NSX Manager on VMware Cloud on
AWS. All networking and security configuration is provided via the VMware
Cloud on AWS console using an NSX Policy Appliance and respective APIs.

Distributed Firewall Sections


The NSX DFW has four high-level sections for DFW rules – Emergency
Rules, Infrastructure Rules, Environment Rules, and Application Rules.
Each of these high-level sections can also have multiple sub-sections.

These high-level sections offer a way to build security logic and organize rules/policies. Users do not have to create security rules to fit this model; they may define all rules in a single section if desired. However, these pre-defined high-level sections offer a standardized approach to laying out policy.

Emergency Rules: Applies to temporary rules needed in emergency
situations. If a VM has been compromised, there may be a need to create a
rule to block traffic to and from it. That rule would be placed under Emergency
Rules. When the issue is resolved, the rule can easily be located and deleted.

Infrastructure Rules: Applies to infrastructure rules only. Infrastructure rules are global rules that define communications between workloads and core/common services (e.g., all applications need to talk to the set of AD and DNS servers).

Figure 130  NSX DFW Grouping Sections - Infrastructure Rules Example

Environment Rules: Applies to broad groups/separate zones. Rules here may prevent communication between production and test workloads.

Figure 131  NSX DFW Grouping Sections - Environment Rules Example

Application Rules: Applies to specific application rules. An example is allowing app-1 to talk to app-2 or blocking traffic between app-3 and app-4.

Figure 132  NSX DFW Grouping Sections - Application Rules Example

Default Rule: The default rule allows all traffic.

The rules are processed in top-down order, meaning traffic will hit the
emergency rules first and the application-specific rules last; therefore,
rules should be implemented from least specific to most specific. If no
rules are hit, the default rule is applied; this will allow all traffic.

The default rule – allow all traffic – cannot be changed from allow to deny. With this ‘allow all traffic’ rule at the end, users need to blacklist traffic (i.e., specify which traffic is specifically blocked) prior to reaching that rule.
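The top-down, first-match evaluation can be illustrated with a short sketch. The rule and flow structures below are simplified stand-ins invented for illustration, not the actual NSX data model.

```python
def evaluate(rules, flow, default_action="ALLOW"):
    """Return the action of the first rule matching the flow, top-down.
    Falls through to the default rule (allow all traffic) if nothing matches."""
    for rule in rules:
        if ((rule["src"] in ("any", flow["src"])) and
                (rule["dst"] in ("any", flow["dst"])) and
                (rule["port"] in ("any", flow["port"]))):
            return rule["action"]
    return default_action

rules = [
    # Emergency section first: block a compromised VM entirely.
    {"src": "10.0.0.66", "dst": "any", "port": "any", "action": "DROP"},
    # Application section last: allow HTTPS to the web tier.
    {"src": "any", "dst": "web", "port": 443, "action": "ALLOW"},
]
```

Appending a final catch-all deny entry to the list is what converts this blacklist behavior into a whitelist policy.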

If a whitelist policy is desired (i.e., specify which traffic is allowed to transit and block everything else), add a manual rule at the bottom of the rules in the Application Rules section to deny all the traffic.

Figure 133  Created ‘Deny All’ DFW Rule for Whitelisting policy

Sections and Rules
Within the high-level DFW sections, users can create sub-sections and rules. Each of these sub-section rules is a standard DFW rule as shown in Figure 134.

Figure 134  Created DFW Section

Figure 135 shows that each DFW rule contains a name, a source, a destination, services (e.g., ICMP, HTTP, HTTPS), an action (e.g., allow, drop, or reject), and an option for logging via VMware Log Intelligence, VMware’s cloud log platform.

Figure 135  Created NSX DFW Application Rule
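Those same fields map naturally onto the JSON body a DFW rule takes when created through the NSX-T Policy API (covered in Chapter 8). The sketch below is a hypothetical payload builder; the field names and group/service paths approximate the Policy API data model and should be checked against the current API reference.

```python
def build_dfw_rule(name, sources, destinations, services,
                   action="ALLOW", logged=False):
    """Assemble an illustrative DFW rule body: name, source, destination,
    services, action (ALLOW/DROP/REJECT), and the logging flag."""
    return {
        "display_name": name,
        "source_groups": sources,
        "destination_groups": destinations,
        "services": services,
        "action": action,
        "logged": logged,  # when True, rule hits are sent to the log platform
    }

# Example rule mirroring the Application Rules discussion: app-1 to app-2
# over HTTPS, with logging enabled (paths are illustrative).
rule = build_dfw_rule(
    name="app1-to-app2",
    sources=["/infra/domains/cgw/groups/app-1"],
    destinations=["/infra/domains/cgw/groups/app-2"],
    services=["/infra/services/HTTPS"],
    action="ALLOW",
    logged=True,
)
```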

Chapter 7

NSX Operational Tools in


VMware Cloud on AWS SDDC

This chapter focuses on some of the operational tools that can be leveraged
with NSX in VMware Cloud on AWS. Some of these tools like port mirroring
and IPFIX are included with NSX, while other applications/tools like VMware
Log Intelligence and vRealize Network Insight leverage NSX features/
capabilities to provide additional useful information and services.

Port Mirroring
Port Mirroring is a feature on a virtual or physical switch that allows
capture of packets from a port and redirection to a destination device or
appliance. This capability is useful to analyze or monitor specific traffic. It
is typically used for these reasons:

• copy traffic to an advanced firewall (IPS / IDS) to inspect the traffic.

• copy voice traffic to a voice recorder, often used in a call center environment to record conversations with customers.

• copy traffic for analysis in troubleshooting. Traffic is mirrored to packet capture software like Wireshark to understand packet loss or application issues.

With port mirroring, decision criteria include:

• what traffic to monitor (i.e., the source)

• in which directions to monitor – traffic to the source, from the source, or both

• where to send the traffic (e.g., a monitoring device, which might be local or remote)

CHAPTER 7 - NSX Operational Tools in VMware Cloud on AWS SDDC | 101


There are different types of port mirroring sessions: local Switch Port
Analyzer (SPAN), remote SPAN, and encapsulated remote SPAN.

VMware Cloud on AWS leverages encapsulated remote SPAN to:

• copy traffic leaving or entering a virtual port

• encapsulate the traffic in a GRE (Generic Routing Encapsulation) packet

• send it to a destination device – usually a host or workload running packet capture software like Wireshark or an IDS/IPS appliance for security analysis

VMware Cloud on AWS supports selection of one or multiple virtual machines as a source. When selecting a VM, all of its vNICs will be added to a port mirroring session; as of SDDC version 1.7, a single vNIC cannot be selected.

A port mirroring session can be created on the VMware Cloud on AWS console or through the API. A security policy must be created on the MGW firewall to allow traffic from the ESXi hosts to the destination device. In Figure 137, a user is mirroring traffic and sending it to a VM running Wireshark.

On the VMC console, select the Networking & Security tab and then
Security -> Edge Firewall -> Management Gateway to get to the MGW
firewall policies.

Figure 136 shows a rule called To Wireshark that allows communication from
the ESXi hosts in the SDDC to Wireshark. This rule is needed to allow port
mirroring traffic to be sent from the ESXi hosts directly to Wireshark. The
ESXi in the source field is a system-created group that identifies all the ESXi
hosts in the VMware Cloud on AWS SDDC. The Wireshark in the destination
is a user-created group that identifies the workload where Wireshark is
running. Chapter 6 provides additional detail on Grouping Objects.

Figure 136  MGW Firewall Rule to Allow Communication from the ESXi hosts
in the SDDC to Wireshark

In the example from Figure 137, the user is monitoring traffic to the web
VMs. All the traffic to the web VMs is being copied and sent to Wireshark
running on the VM with the 192.168.1.134/24 IP address.

Figure 137  VMware Cloud on AWS SDDC Design for Port Mirroring
to Wireshark

Creation of mirroring sessions is performed on the VMware Cloud on AWS console, as shown in Figure 138.

Figure 138  Configured Port Mirroring Session

In this screen shot, groups have been selected for Source and Destination.
For the Direction field, there are three options: Ingress, Egress, and
Bi-directional:

• Ingress is the outbound network traffic from the VM to the logical network.

• Egress is the inbound network traffic from the logical network to the VM.

• Bi-directional is both outbound and inbound traffic.

In this example, Egress was selected for the traffic to the WebFarm – a previously defined group that includes the web servers running on the 172.30.120.0/24 segment.

Once the traffic capture has begun, it can be examined as it is copied across onto the Wireshark VM.

Figure 139  Examining Wireshark Capture

The outer header uses GRE, with a source IP of 10.56.32.4 (the ESXi host) and a destination IP of 192.168.1.134 (the Wireshark VM). The inner header has a source IP of 172.31.26.221 (traffic coming from the Internet into the connected AWS VPC and over the ENI to the webserver), a destination IP of 172.30.120.18 (one of the web servers), and a destination port of 80 (HTTP).
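The outer/inner structure of these mirrored packets can also be decoded without Wireshark. The snippet below hand-parses a synthetic capture built from the addresses in this example; it assumes IPv4 headers without options and a basic 4-byte GRE header, so it is a teaching sketch rather than a general-purpose decoder.

```python
import socket
import struct

def ipv4_header(src, dst, proto):
    # Minimal 20-byte IPv4 header (no options, zeroed length/checksum).
    return struct.pack("!BBHHHBBH4s4s", 0x45, 0, 0, 0, 0, 64, proto, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

def parse_gre_mirror(packet):
    """Extract outer and inner addressing from a GRE-encapsulated mirror packet."""
    assert packet[9] == 47, "outer IP protocol must be GRE (47)"
    outer_src, outer_dst = struct.unpack_from("!4s4s", packet, 12)
    inner = packet[24:]          # skip 20-byte outer IP + 4-byte basic GRE header
    inner_src, inner_dst = struct.unpack_from("!4s4s", inner, 12)
    ihl = (inner[0] & 0x0F) * 4  # inner IP header length in bytes
    dport = struct.unpack_from("!H", inner, ihl + 2)[0]
    return {"outer_src": socket.inet_ntoa(outer_src),
            "outer_dst": socket.inet_ntoa(outer_dst),
            "inner_src": socket.inet_ntoa(inner_src),
            "inner_dst": socket.inet_ntoa(inner_dst),
            "inner_dport": dport}

# Rebuild the packet from Figure 139: ESXi host -> Wireshark VM, carrying
# an Internet client -> web server HTTP flow.
mirror = (ipv4_header("10.56.32.4", "192.168.1.134", 47)      # outer, proto GRE
          + struct.pack("!HH", 0, 0x0800)                     # GRE: flags, IPv4
          + ipv4_header("172.31.26.221", "172.30.120.18", 6)  # inner, proto TCP
          + struct.pack("!HH", 51234, 80))                    # TCP src/dst ports
```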

Figure 140  Configured Port Mirroring Session for Egress-Only Traffic

VMware Log Intelligence


Network and security administrators often need to log network packet traces.
This may be required for auditing, troubleshooting, or security analysis.

VMware Cloud on AWS integrates with VMware’s cloud logging platform, VMware Log Intelligence.

VMware Log Intelligence offers IT teams unified visibility across private, hybrid, and native public clouds by adding structure to unstructured log data, providing intuitive dashboards, and leveraging machine learning for faster troubleshooting.

VMware Log Intelligence provides unified visibility into VMware Cloud on
AWS NSX-T network packet logs. With this capability, customers can analyze
and troubleshoot application flows through visibility into packets matching
specific NSX firewall rules. Benefits of this feature include the ability to:

• Monitor firewall policy usage

• Analyze application traffic patterns

• Maintain security

VMware Log Intelligence is another service accessible over the VMware Cloud portal (console.cloud.vmware.com), which is also referred to as the VMware Cloud Services Platform (CSP).

Figure 141  VMware Log Intelligence Available for VMware Cloud on AWS

To connect to VMware Log Intelligence, enable NSX-T Firewall Logs Collection in Log Management – NSXT from the Log Intelligence management portal.

Figure 142  Enabling NSX-T Firewall Logs Collection

For additional insight into the NSX-T firewall logs, enable the VMware -
NSX-T for VMware Cloud on AWS content pack. These dashboards provide
visibility into packet count and application flows in a single pane of glass.

Figure 143  Enabling VMware - NSX-T for VMware Cloud on AWS
Content Pack

When a firewall rule is created on any of the gateways or on the DFW, logging can then be enabled directly on the firewall rule.

As shown in Figure 144, to monitor the traffic to the SDDC NSX Manager,
enable logging on the rule on the right-hand side. In this example, the
source is Any, but a specific source IP can be specified if desired.

Figure 144  Enabling Logging on MGW Firewall Rule

Logs will appear in VMware Log Intelligence, as shown in Figure 145.

Figure 145  VMware Log Intelligence NSX Firewall Logs

When drilling down on these logs, specific packet-level detail is available.
Figure 146 shows a packet sent from 172.30.0.177 (vRealize Network
Insight proxy on prem) to the VMware Cloud on AWS NSX Manager on
10.56.224.4 over HTTPS (443). As indicated by the firewall action (PASS),
traffic is allowed to go through.

Figure 146  Examining VMware Log Intelligence NSX Firewall Logs

It is also possible to forward these logs to a SIEM (Security Information and Event Management) platform – with Splunk being the most common and widely deployed.

For Splunk to accept logs over HTTP/HTTPS from VMC, set up the data
inputs and configure the HTTP Event Collector.

Figure 147  Configuring Splunk Data Inputs

The Splunk administrator must create a new token, which will be used for
authentication. In this Splunk example, the token value is e0b3fac1-6b83-
4110-a51c-3d6ddccfc213.

Figure 148  Configuring Splunk HTTP Event Collector

Log forwarding on VMware Log Intelligence requires the use of a cloud proxy – a virtual appliance – through which Log Intelligence communicates to forward logs to a SIEM. The installation and configuration of this proxy are outside the scope of this book.

In the Log Intelligence portal, go to the Log Forwarding section, select the Destination (On Premise), then select the Cloud Proxy and the Endpoint Type (Splunk in this example).

The endpoint URL should be the Splunk collector (https://splunk-url:8088/services/collector) with the authorization header following the model in Figure 149 (Splunk <API token>, where the API token is the one previously configured).

Either a subset or the full set of logs can be forwarded to Splunk.
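For reference, the sketch below shows the general shape of a Splunk HTTP Event Collector request: events are POSTed to the collector endpoint with an Authorization header of the form Splunk <token>. The URL and token values are the placeholders from this example; in this setup the actual forwarding is performed by Log Intelligence through the cloud proxy, not by user code, so this is purely illustrative.

```python
def build_hec_request(splunk_url, token, event):
    """Build the URL, headers, and JSON body for a Splunk HTTP Event
    Collector post (illustrative sketch; nothing is sent here)."""
    return {
        "url": splunk_url.rstrip("/") + "/services/collector",
        "headers": {"Authorization": "Splunk " + token},
        "json": {"event": event, "sourcetype": "nsx_firewall"},
    }

# Placeholders taken from the chapter's example values.
req = build_hec_request(
    "https://splunk-url:8088",
    "e0b3fac1-6b83-4110-a51c-3d6ddccfc213",
    {"action": "PASS", "src": "172.30.0.177", "dst": "10.56.224.4", "dport": 443},
)
# The request could then be sent with:
# requests.post(req["url"], headers=req["headers"], json=req["json"])
```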

Figure 149  VMware Log Intelligence Log Forwarding to Splunk

Once set up, the system will show how many events and logs are forwarded.

Figure 150  VMware Log Intelligence Log Forwarding Configuration

In the example from Figure 151, Splunk is running on native AWS in an EC2 instance from the AWS Marketplace. Ensure traffic is allowed on port 8088 to that instance in the AWS Security Group or the traffic will be dropped.

Figure 151  VMware Log Intelligence - Example NSX Firewall Logs

IPFIX on VMware Cloud on AWS


Internet Protocol Flow Information Export (IPFIX) and its predecessor
NetFlow are used by network and security administrators for troubleshooting
and auditing. In the past, many network operators have relied on Cisco’s
proprietary NetFlow technology for traffic flow information export. NetFlow
is the basis for IPFIX, which is a modern IETF-standard protocol for
exporting traffic flow information.

A flow is defined as a set of packets transmitted in a specific time slot that share the same 5-tuple values: source IP address, source port, destination IP address, destination port, and protocol. The flow information may include properties such as timestamps, packets/bytes count, input/output interfaces, TCP flags, or encapsulated flow information.
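This flow definition is easy to see in code. The sketch below aggregates a few hand-made packet records into flows keyed by the 5-tuple, accumulating the packet and byte counts an exporter would report; the record layout is invented for illustration.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packet records into flows keyed by the 5-tuple,
    accumulating per-flow packet and byte counts."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src_ip"], p["src_port"],
               p["dst_ip"], p["dst_port"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["length"]
    return dict(flows)

# Two packets of the same HTTPS conversation collapse into one flow.
packets = [
    {"src_ip": "172.30.110.11", "src_port": 2000,
     "dst_ip": "172.30.120.10", "dst_port": 443, "proto": "TCP", "length": 1500},
    {"src_ip": "172.30.110.11", "src_port": 2000,
     "dst_ip": "172.30.120.10", "dst_port": 443, "proto": "TCP", "length": 400},
]
```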

An IPFIX architecture requires identification of exporters – network entities that observe the traffic leaving/entering and export the traffic flow in the IPFIX model – and collectors – systems that receive and process the exported flow records.

Table 1 shows typical information in an IPFIX packet.

Source IP       Destination IP   Source Port   Destination Port   Packets
172.30.110.11   172.30.120.10    2000          443                3454

Table 1  Typical Information in an IPFIX Packet

An IPFIX collector could reside anywhere on the network as long as it is IP-reachable from VMware Cloud on AWS. It could be on premises, on a network segment within a VMware Cloud on AWS SDDC, or elsewhere in the cloud.

Once the collector is created, IPFIX can be set up on the VMware Cloud
on AWS console. IPFIX data can be sent to up to four different collectors
at a time.

In the example in Figure 152, IPFIX data will be sent to IPFIX collector
software with an IP of 172.31.38.7 over port 2055.

Figure 152  IPFIX Collector Configuration

This IPFIX collector is running in an EC2 instance, deployed natively on AWS in the connected VPC. The security group has been configured to permit IPFIX flows through to the IPFIX collector.

Figure 153  VMware Cloud on AWS Design for IPFIX with Collector in Connected VPC

Once the collector has been created, an IPFIX session can be created as
shown in Figure 154.

Figure 154  Configuring IPFIX Session

Several IPFIX settings can be customized.

Individual network segments can be selected for flow monitoring. All the
flows from the VMs connected to those network segments are captured
and sent to the IPFIX collector.

To control the granularity of data captured, controls are available for sampling rate and timeout parameters. Where a large number of flows exist, it is possible to lower the sampling rate. Lowering the sampling rate reduces the number of packets processed, reducing the load on the underlying platform.

A sampling rate of 2% means only 2% of the exported data is captured while the rest of the data packets are ignored. A sampling rate of 100% means all the exported data packets are captured. The sampling rate can be set between 0.1% and 100% for VMware Cloud on AWS.
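A sampling rate is essentially a per-packet probability. The sketch below applies it in the naive way; the actual sampling algorithm inside NSX is not published, so this only illustrates the arithmetic (e.g., a rate of 0.02 for 2%).

```python
import random

def sample_packets(packets, rate):
    """Keep each packet with probability `rate`; 0.001–1.0 corresponds to
    the 0.1%–100% range configurable in VMware Cloud on AWS."""
    if not 0.001 <= rate <= 1.0:
        raise ValueError("sampling rate must be between 0.1% and 100%")
    return [p for p in packets if random.random() < rate]
```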

Users can also specify the Active Timeout – the length of time after which a
flow will time out, even if more packets associated with the flow are received
– and the Idle Timeout – the length of time after which a flow will time out if
no more packets associated with the flow are received. Both values can be
between 60 and 3600 seconds for VMware Cloud on AWS.
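The two timeouts interact as follows: the idle timeout expires a quiet flow, while the active timeout forces a long-lived flow to be exported periodically even while traffic continues. A minimal sketch of that decision, with times in seconds and structures invented for illustration:

```python
def flow_expired(now, first_seen, last_seen,
                 active_timeout=60, idle_timeout=60):
    """Return 'idle', 'active', or None depending on which timeout
    (if any) expires the flow at time `now`."""
    if now - last_seen >= idle_timeout:
        return "idle"      # no packets seen recently
    if now - first_seen >= active_timeout:
        return "active"    # still busy, but due for a periodic export
    return None
```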

Figure 155 shows a configuration with an IPFIX session called IPFIX Export.
It has a 60-second Active Timeout, a 60-second Idle Timeout, and is
capturing flows on several network segments.

Figure 155  Configured IPFIX Session

With this configuration complete, flows will start appearing on the IPFIX
Collector and can be used for analysis, troubleshooting, and reporting.

The example in Figure 156 leverages the FlowMon software, but other
IPFIX platforms can also be used. In this example, traffic statistics are
stacked up by service.

Figure 156  IPFIX Collector Example (FlowMon)

VMware also has its own network and security monitoring platform –
vRealize Network Insight. The next section will walk through its integration
with VMware Cloud on AWS.

vRealize Network Insight


VMware Network Insight is a network and security analysis service purpose-built for software-defined data centers and public clouds.

VMware Network Insight provides comprehensive network visibility and granular understanding of traffic flows between applications to enable cloud security planning and network troubleshooting.

To provide this deep level of traffic flow granularity, VMware Network Insight collects network flow data through NetFlow, IPFIX, and VPC flow logs, as well as over SSH/SNMP and HTTPS.

VMware Network Insight is available via two models:

• deployed as an appliance in a vSphere environment (vRealize Network Insight)

• consumed as a service (VMware Network Insight)

With these two solutions, VMware offers customers a choice of software
consumption model. Customers can consume VMware Network Insight as
a service or deploy vRealize Network Insight within a data center to manage
their multi-cloud environment.

The following three examples are the most common VMware Network
Insight use cases for customers of VMware Cloud on AWS.

Path from a VMware Cloud on AWS VM to/from an On‑premises VM
Visualize the traffic path from an on-premises VM to a VMware Cloud on AWS VM – and vice-versa – using VMware Network Insight’s ability to analyze and display traffic flows.

Figure 157  VMware Network Insight Example - Analyze and Display Traffic Flows Between
On-Premises Workloads and SDDC Workloads

Path within VMware Cloud on AWS SDDC
Understand the connectivity between VMs running in the same SDDC. The
VMs in the SDDC in Figure 158 are on two different network segments.

Figure 158  VMware Network Insight Example - Analyze and Display Traffic Flows Between
Workloads in SDDC

Micro-segmentation Planning
Analyze application traffic patterns and receive firewall rule recommendations
– based upon traffic observed – to implement on the DFW. VMware
Network Insight analytics can be applied before or after migrating
applications to the cloud.

Figure 159  VMware Network Insight Example - Analyze application traffic patterns and receive
firewall rule recommendations

Chapter 8

VMware Cloud on AWS APIs


and Automation

The use of application programming interfaces (APIs) is common in the networking industry. APIs enable use cases such as automation, custom tool and dashboard creation, and troubleshooting. They allow applications to easily communicate with one another. Publicly-available web-based APIs return data in JSON or XML, standard formats that facilitate parsing of data for further action.

Representational State Transfer (REST) API is an increasingly popular programming interface that builds on the standard HTTP and HTTPS protocols, using ports 80 and 443. REST APIs use HTTP requests to GET, PUT, POST, and DELETE data. GET and DELETE are self-explanatory. PUT and POST are similar; POST creates something, whereas PUT modifies it. VMware Cloud on AWS and NSX also leverage the REST API architecture.

The VMware Cloud on AWS REST APIs can be consumed through programming language SDKs. APIs form the foundation, and the API calls are often aggregated around wrapper functions and methods created in programming languages like Python, Perl, PowerShell, or Java. Popular programming languages already have REST API libraries, enabling users to easily call REST APIs.

VMware Cloud on AWS has four different groups of APIs:

• Cloud Services Platform APIs

• VMware Cloud on AWS APIs

• NSX-T Policy APIs

• vSphere APIs

CHAPTER 8 - VMWARE CLOUD ON AWS APIs AND AUTOMATION | 116


Each has a different role; a detailed description for each group is provided
later in the chapter.

Figure 160  VMware Cloud on AWS Solution and APIs

VMware Cloud on AWS provides a REST programming interface. The respective REST APIs operate on the VMware Cloud on AWS SDDC.

The following list of tools and programming languages will be used with
the VMware Cloud on AWS REST APIs for examples in this chapter.

• cURL
–– cURL stands for client URL. It sends commands to a URL, in this
case sending REST commands with header, body, and parameters.
One downside of using cURL with VMware Cloud on AWS APIs
is that each cURL command must include a long authorization
token. The cURL command is available on Linux and Mac OS X
machines. It can be downloaded from https://curl.haxx.se

• Python
–– Python programming is a good way to use the REST interfaces
with VMware Cloud on AWS. The program can store the
authorization token, minimizing copy and paste overhead. The
requests package is required for REST transmission, and the json
or simplejson package to form JSON headers. Python program
examples are available at https://code.vmware.com in the
code samples section.

• PowerCLI
–– It is also possible to control VMware Cloud on AWS using PowerCLI
cmdlets in Windows PowerShell. More information is
provided later in this book. Examples are available at the VMware
Cloud on AWS console under Developer Center > Downloads.

• Postman
–– The Postman REST client is available as a free stand-alone application
for Windows, Mac OS X, and Linux. It simplifies making REST API
calls in general and to the VMware Cloud on AWS SDDC in particular.

The following section reviews the four different API families, providing
examples using different tools and programming languages.

Cloud Services Platform (CSP) APIs


The Cloud Services Platform (CSP) APIs are common across all VMware
Cloud services, including VMware Cloud on AWS. These APIs include
identity and access management, billing, service lifecycle, message bus,
email services, etc. This API is the main point of focus for VMware Cloud
on AWS organization and user management.

The base URL for the CSP APIs is https://console.cloud.vmware.com/csp/gateway

API Token and Access Token
To start using the CSP APIs, an API token for authentication is needed.
This API token is also used with VMware Cloud on AWS API calls.

Create the API Token


API tokens are user and organization specific. Once a VMware Cloud on
AWS organization is created, log in, navigate to the main console page,
and select My Account under the user's name in the top-right corner.

Figure 161  Navigating to VMware Cloud on AWS User Account

In My Account page, select API Tokens and generate one. Tokens are
valid for six months.

Figure 162  Generating API Token

Once the API token is available, it is used to generate a session token
called an access token.

Get the Access Token


The access token is valid for the duration of a session and is used for
every API call. To create an access token, use the specific API token from
the cloud console as a parameter and call a POST function at the following link:

https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/
authorize

This will create the access token.

Python Example to Create the Access Token:
The following parameters are used:
myKey = your API token
CSPURL = https://console.cloud.vmware.com

import requests

def getAccessToken(myKey):
    params = {'refresh_token': myKey}
    headers = {'Content-Type': 'application/json'}
    response = requests.post(
        CSPURL + '/csp/gateway/am/api/auth/api-tokens/authorize',
        params=params,
        headers=headers)
    json_response = response.json()
    access_token = json_response['access_token']
    return access_token

The Python function above makes a POST request with the API token as an
input parameter and returns the access token.

VMware Cloud on AWS APIs


The VMware Cloud on AWS APIs are the main entry point for actions related to
the VMware Cloud on AWS console, such as SDDC creation, host addition or
removal, and networking configuration.

Access to the VMware Cloud on AWS APIs is available through multiple
interfaces. It is a REST-compatible API and can be called using languages
like Python, Java, or PowerShell. There are also PowerCLI modules available.

The base URL for the VMware Cloud on AWS APIs is
https://vmc.vmware.com/vmc/api. Full documentation of the available
VMware Cloud on AWS NSX Networking and Security APIs can be found by
selecting VMware Cloud on AWS at the following link: https://code.vmware.com/apis.

Developer Center
In every VMware Cloud on AWS organization, there is a Developer
Center page.

Figure 163  VMware Cloud on AWS Developer Center

This is a generic page where code samples, downloads, and the API
Explorer can be found. The API Explorer is available for both Cloud
Services Platform and VMware Cloud on AWS APIs.

Execution is performed online and applies to the selected SDDC. Use
this with caution, as API calls are applied to a live environment and not
to a simulation.

With the 1.7 SDDC release, NSX-T APIs are now available with API
Explorer. Two NSX-T APIs are available: NSX VMC Policy API and NSX
VMC AWS Integrations API. These are shown in Figure 164.

Figure 164  VMware Cloud on AWS Developer Center - API Explorer

NSX-T Policy APIs


With the introduction of NSX-T networking within VMware Cloud on AWS,
the new NSX-T Policy API is leveraged. Users can now use the NSX-T Policy
API both on premises and in VMware Cloud on AWS, providing the same
operational model.

In order to execute API calls specific to networking and security in VMware
Cloud on AWS, either the NSX-T Proxy URL or the private IP address of the
NSX Manager can be used.

There are two methods for interacting with NSX-T policy APIs on VMware
Cloud on AWS: VMC Console (via NSX-T Proxy URL) or NSX Manager (via
NSX Manager private IP address).

Connections performed through the NSX-T Manager follow the same process
as on premises. This approach is only possible after creating a VPN or Direct
Connect link and an initial firewall rule that allows incoming connections to the
NSX-T Manager in the VMware Cloud on AWS SDDC. Setting this up can be
problematic since the NSX firewall blocks all access by default, including
access to the NSX-T Manager.

NSX-T Manager
Clients interact with the NSX-T Manager policy API using RESTful web
service calls over the HTTPS protocol. Once the proper firewall rule
provides access to the NSX-T Manager, API calls can use its IP address
with the correct basic or session-based authentication.

For example, the base URL for the API call is:

https://<nsx-mgr-ip>/policy/api/v1/

An example NSX-T Policy API call listing all network segments is:

https://<nsx-mgr-ip>/policy/api/v1/infra/tier-1s/cgw/segments
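As a brief sketch, the call above can be scripted in Python. The IP address, credentials, and helper names below are placeholders, and the requests package used by the chapter's other examples is assumed:

```python
def manager_url(nsx_mgr_ip, path):
    # Compose the base policy API URL with a specific resource path
    return "https://{}/policy/api/v1/{}".format(nsx_mgr_ip, path)

def get_segments(nsx_mgr_ip, username, password):
    # Placeholder credentials; basic authentication is shown here, but
    # session-based authentication works as well
    import requests  # same package used by the chapter's other examples
    response = requests.get(
        manager_url(nsx_mgr_ip, "infra/tier-1s/cgw/segments"),
        auth=(username, password),
        verify=False)  # the NSX Manager may present a self-signed certificate
    return response.json()
```

This only works once the firewall rule allowing inbound access to the NSX-T Manager, described above, is in place.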

NSX-T Proxy URL


Use of a reverse proxy will also allow access to VMware Cloud on AWS
NSX-T APIs. An API is provided to retrieve the reverse proxy URL.

In VMware Cloud on AWS, the underlying infrastructure is already set up
and fixed. While MGW and CGW details are set, values in the NSX-T policy
API for properties like <vmc-domain> will need to be replaced with the
proper infrastructure domain. To retrieve the reverse proxy URL, execute a
GET function at the following link (where <orgId> and <sddcId> are
replaced by the user's own respective Org and SDDC IDs):

https://vmc.vmware.com/vmc/api/orgs/<orgId>/sddcs/<sddcId>

and look for a new property called nsx_api_public_endpoint_url (under
resource_config).

This parameter is the base NSX-T reverse proxy URL. The URL looks like:

https://nsx-A-B-C-D.rp.vmwarevmc.com/vmc/reverse-proxy/api/
orgs/<orgId>/sddcs/<sddcId>/sks-nsxt-manager

Once the reverse proxy URL is retrieved, it will be used in every API call.
For any operation using the NSX-T policy API, the access token is needed,
with the reverse proxy URL prepended to the specific API path.

Python Example to Retrieve the NSX-T Reverse Proxy URL


The following parameters are used:
sessiontoken = your access token
ProdURL = https://vmc.vmware.com
org_id = Your Organization ID (long format)
sddc_id = Your SDDC ID

The Org ID and SDDC ID can be found under the Support tab in the SDDC.

Figure 165  Org ID and SDDC ID in Support Tab

import requests

def getNSXTproxy(org_id, sddc_id, sessiontoken):
    myHeader = {'csp-auth-token': sessiontoken}
    myURL = '{}/vmc/api/orgs/{}/sddcs/{}'.format(ProdURL, org_id, sddc_id)
    response = requests.get(myURL, headers=myHeader)
    json_response = response.json()
    proxy_url = json_response['resource_config']['nsx_api_public_endpoint_url']
    return proxy_url

This Python function receives the Org ID, the SDDC ID, and the session
token as input and returns the reverse proxy URL.

The NSX-T proxy URL allows the use of all VMware Cloud on AWS
networking and security functions via API.

Network Segments Example


The example below shows how to obtain a list of all logical segments by
executing a GET function at the following link:

/policy/api/v1/infra/tier-1s/<networkId>/segments

The reverse proxy URL is prepended to the base path in the
following way:

https://nsx-A-B-C-D.rp.vmwarevmc.com/vmc/reverse-proxy/api/
orgs/<orgId>/sddcs/<sddcId>/sks-nsxt-manager/policy/api/v1/infra/
tier-1s/cgw/segments

Note that <networkId> is replaced by cgw since this is where all network
segments are connected.
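The same GET can be issued from Python, as a sketch; it assumes the reverse proxy URL and access token retrieved earlier in this chapter, and the function names are illustrative:

```python
def segments_url(reverse_proxy_url):
    # Append the policy API path for CGW segments to the reverse proxy base URL
    return reverse_proxy_url + "/policy/api/v1/infra/tier-1s/cgw/segments"

def list_segment_names(reverse_proxy_url, access_token):
    import requests  # same package used by the chapter's other examples
    response = requests.get(segments_url(reverse_proxy_url),
                            headers={"csp-auth-token": access_token})
    # Segments are returned under "results", each with a display name
    return [seg["display_name"] for seg in response.json()["results"]]
```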

Management Gateway Firewall Rules


This example sets a vCenter inbound rule on the management gateway.

Execute a PUT function with the following URL:

https://nsx-A-B-C-D.rp.vmwarevmc.com/vmc/reverse-proxy/api/
orgs/<orgId>/sddcs/<sddcId>/sks-nsxt-manager/policy/api/v1/infra/
domains/mgw/gateway-policies/default/rules/ID

and the following body:


{
    "logged": false,
    "display_name": "vCenter Inbound",
    "sequence_number": 0,
    "action": "ALLOW",
    "source_groups": [
        "ANY"
    ],
    "services": [
        "/infra/services/HTTPS",
        "/infra/services/ICMP-ALL",
        "/infra/services/SSO"
    ],
    "resource_type": "CommunicationEntry",
    "scope": [
        "/infra/labels/mgw"
    ],
    "destination_groups": [
        "/infra/domains/mgw/groups/VCENTER"
    ]
}

This API call needs a name (vCenter Inbound), a sequence number (0,
meaning at the top of the existing list), a source (any), a destination (the
vCenter), and an action (allow). It will create the MGW rule shown in
Figure 166.

Figure 166  Configured MGW Rule for vCenter Access
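The PUT call above can also be sketched in Python. This assumes the reverse proxy URL and access token from earlier in the chapter; the helper names are illustrative:

```python
def vcenter_inbound_rule():
    # Body of the MGW rule shown above
    return {
        "logged": False,
        "display_name": "vCenter Inbound",
        "sequence_number": 0,
        "action": "ALLOW",
        "source_groups": ["ANY"],
        "services": ["/infra/services/HTTPS",
                     "/infra/services/ICMP-ALL",
                     "/infra/services/SSO"],
        "resource_type": "CommunicationEntry",
        "scope": ["/infra/labels/mgw"],
        "destination_groups": ["/infra/domains/mgw/groups/VCENTER"],
    }

def put_mgw_rule(reverse_proxy_url, access_token, rule_id, body):
    # PUT the rule body to the default MGW gateway policy under the given rule ID
    import requests  # same package used by the chapter's other examples
    url = (reverse_proxy_url +
           "/policy/api/v1/infra/domains/mgw/gateway-policies/default/rules/" +
           rule_id)
    return requests.put(url, json=body,
                        headers={"csp-auth-token": access_token})
```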

Next, an example is shown on how to create a CGW firewall rule.

Compute Gateway Firewall Rules


CGW firewall rules can be created as shown in the example Internet-out
rule below.

The CGW firewall rules can also use predefined groups. The source/
destination can be a predefined group like Connected VPC or S3
prefixes. The Applied To field defines the scope: VPC Interface,
Internet Interface, VPN Tunnel, Direct Connect, or All Uplinks.

To accomplish this, execute a PUT function with the following URL:

https://nsx-A-B-C-D.rp.vmwarevmc.com/vmc/reverse-proxy/api/
orgs/<orgId>/sddcs/<sddcId>/sks-nsxt-manager/policy/api/v1/infra/
domains/cgw/gateway-policies/default/rules/LS1

and the following body:


{
    "logged": false,
    "display_name": "Internet-out",
    "sequence_number": 2,
    "action": "ALLOW",
    "source_groups": [
        "/infra/domains/cgw/groups/LS1"
    ],
    "services": [
        "ANY"
    ],
    "resource_type": "CommunicationEntry",
    "scope": [
        "/infra/labels/cgw-public"
    ],
    "destination_groups": [
        "ANY"
    ]
}

This API call needs a name (Internet-out), a sequence number (2, meaning
on top of the VTI rule), a source (LS1, the respective network segment),
a destination (any), an action (allow), and an Applied To (here the scope
referring to the Internet interface). It will create the CGW rule shown in
Figure 167.

Figure 167  CGW Firewall Rule Created from REST API Call

SDDC vSphere APIs


In VMware Cloud on AWS, a user is no longer a vSphere administrator
but a cloud administrator. There is no need to worry about managing the
underlying infrastructure; users only need to focus on deploying and
managing the workloads.

The SDDC vSphere APIs are the same ones consumed on premises.
Since the SDDC is a managed environment, not all vSphere APIs are
applicable, and users should note the managed model and structure of the
VMware Cloud on AWS SDDC. For example, virtual machines should be
deployed in the compute resource pool, as shown in Figure 168.

Figure 168  vSphere Web Client Displaying Compute Resource Pool

Storage should use the workload datastore.

Figure 169  vSphere Web Client Displaying Datastore for Workloads

Automation with PowerShell and PowerCLI


Microsoft has invested significant effort in developing new releases of
PowerShell with PowerShell Core. This code is freely downloadable from
GitHub and brings support for Windows, Linux, and Mac OS.

VMware integrates PowerShell with VMware's PowerCLI, allowing
management of vSphere environments from Linux or Mac OS.

PowerCLI is a strong tool that helps with automation of installation,
provisioning, or any regularly-performed tasks.

What is PowerCLI?
PowerCLI is the VMware PowerShell module that enables interaction with
VMware environments. When PowerCLI is installed, a series of cmdlets
is deployed. These can be used in a Windows or Mac OS PowerShell
environment.

Install PowerCLI
In a PowerShell window, run the command Find-Module -Name VMware.PowerCLI,
then Install-Module -Name VMware.PowerCLI.

Figure 170  Search for PowerCLI Module

Check the installation with Get-Module VMware.*

Figure 171  Check PowerCLI Installation

Install VMware Cloud on AWS Module


Search for the VMC module with Find-Module -Name VMware.VMC

Figure 172  Search for VMware Cloud on AWS Module

Install it with Install-Module -Name VMware.VMC

Install the NSX-T Module
Look in William Lam's GitHub repository for the specific VMware.VMC.NSXT
and VMware.VMC modules that contain all cmdlets for VMware Cloud on AWS.

https://github.com/lamw/PowerCLI-Example-Scripts/tree/master/Modules/VMware.VMC.NSXT

Download and import the modules using:


Import-Module ./VMware.VMC.NSXT.psd1
Import-Module ./VMware.VMC.psd1

VMware Cloud on AWS PowerCLI Functions


To list all VMware Cloud on AWS NSX-T specific functions, use
Get-Command -Module VMware.VMC.NSXT

Figure 173  List all VMware Cloud on AWS NSX-T specific functions

Start Using PowerCLI for VMware Cloud on AWS
At this stage the VMware Cloud on AWS PowerCLI is ready to automate
the SDDC. To do this, there are three mandatory parameters:

• $APIToken = "2fa2ef9b-xxxx-xxxx-xxxx-d9457ebae5c8"

• $OrgName = "Your Org Name"

• $SDDCName = "Your SDDC Name"

The module import commands can be integrated in the PowerCLI script
as follows:

Import-Module ./VMware.VMC.NSXT.psd1
Import-Module ./VMware.VMC.psd1

Step 1 – Connect to VMC
Connect-Vmc -RefreshToken $APIToken

Figure 174  Using PowerCLI, Connect to VMware Cloud on AWS

Step 2 – Retrieve the NSX-T reverse proxy URL

Connect-NSXTProxy -RefreshToken $APIToken -OrgName $OrgName -SDDCName $SDDCName

Figure 175  Retrieve the NSX-T Reverse Proxy URL

Step 3 – Create the basic networking segments, groups, MGW rules, and
CGW rules.

# Create logical networks
for($i = 2; $i -lt 6; $i++)
{
    Write-Output $i
    New-NSXTSegment -Name "sddc-cgw-network-$i" -Gateway "192.168.$i.1/24" -DHCP -DHCPRange "192.168.$i.2-192.168.$i.254"
}

# Create groups
New-NSXTGroup -GatewayType CGW -Name LS1 -IPAddress @("192.168.1.0/24")
New-NSXTGroup -GatewayType CGW -Name LS2 -IPAddress @("192.168.2.0/24")
New-NSXTGroup -GatewayType CGW -Name VPC1 -IPAddress @("172.201.0.0/16")
New-NSXTGroup -GatewayType CGW -Name VPC2 -IPAddress @("172.202.0.0/16")
New-NSXTGroup -GatewayType CGW -Name VPC3 -IPAddress @("172.203.0.0/16")

# Create MGW vCenter inbound rule
New-NSXTFirewall -GatewayType MGW -Name "vCenter Inbound" -SourceGroup @("ANY") -DestinationGroup @("VCENTER") -Service @("HTTPS","ICMP-ALL","SSO") -Logged $false -SequenceNumber 0 -Action ALLOW

# Create CGW rules
New-NSXTFirewall -GatewayType CGW -Name "vmc2aws" -SourceGroup @("ANY") -DestinationInfraGroup @("Connected VPC Prefixes", "S3 prefixes") -Service @("ANY") -Logged $false -SequenceNumber 0 -Action ALLOW -InfraScope @("VPC Interface")
New-NSXTFirewall -GatewayType CGW -Name "aws2vmc" -SourceInfraGroup @("Connected VPC Prefixes", "S3 prefixes") -DestinationGroup @("ANY") -Service @("ANY") -Logged $false -SequenceNumber 1 -Action ALLOW -InfraScope @("VPC Interface")
New-NSXTFirewall -GatewayType CGW -Name "Internet-out" -SourceGroup LS1 -DestinationGroup @("ANY") -Service ANY -Logged $false -SequenceNumber 2 -Action ALLOW -InfraScope "Internet Interface"

# List the user firewall rules
Get-NSXTFirewall -GatewayType MGW
Get-NSXTFirewall -GatewayType CGW

Chapter 9

Deploying Solutions in VMware Cloud on AWS SDDC

As discussed in Chapter 3, there are many use cases for VMware Cloud on
AWS. This chapter discusses some specific use cases and solutions, including
hybrid cloud, disaster recovery, workload mobility/migration, access to native
AWS services, load balancing applications, and advanced third-party security.

Hybrid Cloud with Workload Mobility Between On Prem and Cloud
Common use cases for VMware Cloud on AWS include data center
consolidation – consolidating several discrete data centers into fewer –
and data center exit – migrating all workloads and services from one data
center to a VMware Cloud on AWS SDDC. With the benefits described in
previous sections, this chapter reviews the tool used to provide the bulk
migration of services. This tool is VMware Hybrid Cloud Extension (HCX),
and it is bundled with VMware Cloud on AWS.

VMware HCX delivers secure, seamless app mobility and infrastructure
hybridity across vSphere 5.0+ versions, both on premises and in the cloud.

HCX offers bi-directional application mobility and data center extension
capabilities between any vSphere version. HCX includes vMotion, bulk
migration, high-throughput network extension, WAN optimization, traffic
engineering, load balancing, automated VPN with strong encryption (Suite
B), and secured data center interconnectivity.

HCX provides the ability to extend networks. Through its incorporated WAN
optimization engine, it enables customers to vMotion workloads over the Internet.

HCX is backward compatible with vSphere versions as early as 5.0. Customers do not need
to upgrade their VMware estate to connect to VMware Cloud on AWS; they can
simply lift and shift applications, migrating hundreds of VMs out of their data
center to the cloud – and back if necessary – in a matter of days, if not hours.

CHAPTER 9 - Deploying Solutions in VMware Cloud on AWS SDDC | 134


HCX Details
During the initial installation, a single HCX Manager appliance is installed
at the source. It will install one or more of the following appliances as
required by the environment:

• Hybrid Interconnect Appliance: This gateway provides a secure hybrid
interconnect to the remote site with intelligent routing to avoid
networking middle-mile problems.

• WAN Optimization Appliance: This appliance improves performance
by utilizing WAN optimization techniques such as data de-duplication
and line conditioning.

• Network Extension Service Appliance: This appliance extends L2
networks to the remote site. This enables VM movement to the cloud
without IP and MAC address changes.

The HCX Manager is installed on the cloud side when HCX is activated on
the VMware Cloud on AWS Add-Ons tab.

Figure 176 illustrates the overall HCX architecture.

Figure 176  HCX Architecture

The HCX Network Extension provides a high-performance (4–6 Gbps)
service to extend the VM networks to VMware Cloud on AWS.

VMs migrated to or created on the extended segment at the remote site
are L2-adjacent to VMs placed on the origin network.

Currently with the 1.7 SDDC release, with HCX Network Extension, the
default gateway for the extended network only exists at the origin site.
Routed traffic from VMs on remote extended networks returns to the
origin site gateway.

This also applies to a network stretched with an NSX L2VPN, as previously
described in Chapter 5.

Figure 177  Gateway On Premises for HCX Extended Networks


No local egress with HCX L2VPN

Using HCX Network Extension with HCX Migration allows retention of the IP
and MAC addresses of VMs as they are migrated to the cloud.

HCX enables extension of VLAN networks from VMware's vSphere Distributed
Switch, NSX overlay networks, and Cisco Nexus 1000v networks.
As a best practice, it is recommended not to extend the management VLAN
under any circumstance.

Migration can begin once the secure tunnel has been established between
the HCX source and destination sites. Using the HCX Network Extension
appliance is not compulsory; HCX can still be used – along with its benefits
such as backward compatibility with vSphere 5.0 and its WAN optimization engine
– to migrate workloads when maintaining the IP address is not a strict
requirement.

Figure 178 depicts the overall architecture of HCX.

Figure 178  HCX Network Architecture

Web Applications in the Cloud
and Load Balancing
Some business applications may require scale and resiliency via a load
balancer. Both customers migrating applications to VMware Cloud on
AWS and those spinning up new ones from scratch may need load
balancing capabilities for applications running on VMware Cloud on AWS.

Figure 179  Load Balancing Use Cases

While VMware Cloud on AWS leverages network virtualization through
NSX-T to provide virtual networking, Edge firewall, DFW, L3 VPN, and L2
VPN, the load balancing feature of NSX-T is not available as of VMware
Cloud on AWS SDDC version 1.7.

In order to offer load balancing within VMware Cloud on AWS, two
options are currently available:

• Use AWS Elastic Load Balancing

• Deploy a 3rd-party load balancer (e.g., F5, AVI Networks) in a
VM form factor, attached in one-arm mode

The example in Figure 180 shows a simple web farm with three Windows
web servers in VMware Cloud on AWS.

The first design leverages AWS Elastic Load Balancing to balance traffic
towards the web farm through the Elastic Network Interface. The health
of the web farm is monitored with health checks verifying the
responsiveness of each real server over TCP 80.
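The behavior of such a TCP health check can be sketched in a few lines of Python. This is illustrative only; the actual checks are configured on the AWS load balancer, and the function name is an assumption:

```python
import socket

def tcp_health_check(host, port=80, timeout=2.0):
    # Returns True if a TCP connection to host:port succeeds within the
    # timeout, mimicking the ELB-style TCP health check described above
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```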

Figure 180  Load-Balancing with the AWS Load-Balancer

When browsing to http://elbvip.vmc.ninja, one of the three web servers is hit.

AWS ELB leverages the high-throughput, free-of-charge connection
through the ENI (note that charges apply when crossing availability zones),
is simple to set up, and provides DDoS and WAF protection.

Figure 181  Network Load-Balancer on AWS Pointing to a Target Group

Figure 182  Target Group (aka Server Pool) Pointing to 3 Web Servers in Healthy Status

The alternative is implementing a load balancer as a virtual appliance
within the VMware Cloud on AWS SDDC. With this approach, traffic does
not need to hairpin between the native AWS side and VMware Cloud on AWS.
Note that the virtual LB needs to be deployed in One-Arm mode.

Figure 183  Virtual LB Needs to be Deployed in One-Arm mode

One such load balancer is available from AVI Networks. This solution
not only provides enterprise-grade load balancing but also the ability to
perform multi-cloud load-balancing across VMs, native cloud instances,
and containers.

Leveraging AVI requires installing the AVI Controller – where all the
intelligence lives – and the AVI Service Engine – essentially the load
balancer that does all the work – within VMware Cloud on AWS.

Figure 184  Load Balancing with a Virtual LB Appliance

In Figure 184, a virtual AVI load balancer was deployed with one of its
network interfaces attached to the web segment where the web farm is
based. Additionally, the management network interface of the load balancer
is connected to a dedicated management segment. The AVI load balancer
connects to the AVI Controller to receive its configuration. A pool of servers
and a virtual service – designated by the virtual IP – that monitors and
load balances the traffic are also depicted.

The example illustrated in Figure 184 – http://avivip.vmc.ninja/ – is live
and accessible over the Internet. It redirects to a web farm load-balanced
by an AVI controller on a VMware Cloud on AWS SDDC.

Application performance and visibility through the AVI Service Engine is
monitored by the AVI Controller.

Figure 185  AVI Controller - Application Performance and Visibility

VMware Cloud on AWS Applications Using Native AWS Services
One of the unique attributes of VMware Cloud on AWS is its deep integration
with native AWS services. This required deep engineering collaboration
between VMware and AWS, with the results enabling customers to build
hybrid applications leveraging the best of both VMware and AWS.

As explained previously, the VMware Cloud on AWS SDDC is deployed on
EC2. This is not done through nested virtualization – the VMware
hypervisor is deployed directly on top of bare-metal servers.

VMware Cloud on AWS leverages the elasticity and scale of AWS. A
VMware Cloud on AWS SDDC can scale up and down at speed by adding
or removing hosts, drawing on AWS' huge pool of compute capacity.

The VMware Cloud on AWS SDDC is deployed in a shadow VPC belonging
to a shadow AWS account. This account is managed by VMware on behalf
of the VMware Cloud on AWS customer.

The shadow AWS account is linked to the customer’s own AWS account.
At creation of the SDDC, an AWS identity and access management (IAM)
role is created in the customer’s AWS account to allow the shadow AWS
account to update its connected VPC routing table. This enables
connectivity between the SDDC and the connected VPC.

The previous Connected VPC section in Chapter 5 explained the technology
used to enable connectivity between the VMware Cloud on AWS SDDC
and the connected VPC.

Figure 186  Connected VPC Design

This connectivity provides the opportunity to leverage AWS services
within a VMware Cloud on AWS SDDC.

The following architectures detail a number of common use cases.

Integrating AWS Native Storage Solutions with VMware Cloud on AWS
As part of the process for evaluating cloud migration, customers often
struggle with how to move their most critical asset – their data – securely
to the cloud.

While VMware Cloud on AWS makes cloud migration easier than ever
before, it is still worthwhile to consider how best to move data, especially for
those with a significant volume of it. It is worth breaking down terabytes
or petabytes of data into different buckets, then transferring that data to
where it is most appropriate.

For example, customers might want to:

• Migrate VMs and their associated virtual hard drives to VMware Cloud
on AWS SDDC.

• Migrate object-based storage to S3 and attach it to the VMware Cloud on
AWS SDDC over the ENI.

• Leverage S3 for backup and snapshots of VMs running on the VMware
Cloud on AWS SDDC.

• Migrate database workloads to the VMware Cloud on AWS SDDC or to
RDS depending on performance and licensing requirements. When
using RDS, a workload can directly connect to the VMware Cloud on AWS
SDDC over the ENI.

• Migrate file services to EFS for Linux VMs or FSx for Windows VMs.

Accessing Managed File Services from VMware Cloud on AWS

EFS is an AWS-managed file server for Linux VMs; FSx is the AWS-managed
file server for Windows VMs. As these services reside within a VPC,
they can easily be attached to and accessed over the connected VPC.

Figure 187  Accessing Managed File Services from VMware Cloud on AWS SDDC

Accessing Object Storage from VMware Cloud on AWS

AWS Simple Storage Service (S3) was the first AWS service launched in
2006. S3 provides the following benefits:

• Highly available object storage for the Internet

• HTTP/HTTPS endpoint to store and retrieve any amount of data,
at any time, from anywhere on the web

• Highly scalable, reliable, fast, and inexpensive

• Annual durability of 99.999999999%; designed for 99.99% availability

Leveraging S3 buckets for back-ups and snapshots


Protecting VMware Cloud on AWS VMs with backup software is as
essential as it would be for on-premises VMware workloads.

An advantage of VMware Cloud on AWS is that SDDC VMs can connect
directly to S3 buckets via a VPC endpoint; they can also access the
buckets over the Internet if necessary.

Figure 188  Leveraging S3 buckets for Back-ups and Snapshots

Many backup vendors have certified their solutions on VMware Cloud on
AWS. These can be set up within VMware Cloud on AWS to send snapshots
to an EC2 instance with a large EBS volume or to an S3 bucket.

In addition, restoration of a VM from a snapshot can be done at speed
over the ENI without incurring traffic charges so long as it is configured to
stay within the same availability zone.

Direct Access to S3 buckets from VMware Cloud on AWS VMs
VMware Cloud on AWS administrators may notice improvements in
performance when accessing S3 buckets. If on-premises VMs are already
accessing AWS S3 buckets to store and access objects, migrating them to
the VMware Cloud on AWS SDDC should improve performance. The
VMware Cloud on AWS SDDC will be able to connect to the buckets
across the private AWS network via the VPC endpoint.

Accessing an RDS Database from VMware Cloud on AWS


After migrating a multi-tier application to VMware Cloud on AWS, it may be
desirable to break down the app into smaller services. This could involve
leaving the web and middleware tiers on VMware Cloud on AWS while
migrating the DB to AWS Relational Database Services (RDS) to benefit
from managed services such as automatic backups and managed patching.

While there are other criteria to consider – including licensing, cost, and
performance – the benefits of managed database administration and
maintenance remain attractive.

As the RDS service lives within a VPC, setting up connectivity between
VMware Cloud on AWS and the RDS DB is straightforward as it goes
over the ENI.

Figure 189  Accessing a RDS Database from VMware Cloud on AWS SDDC

Alternatively, multi-tier applications may be split to accomplish the
opposite – running the DB tier on VMware Cloud on AWS and the web front
end on native AWS.

Leveraging AWS Web-Facing Apps with the VMware Cloud
on AWS SDDC
Organizations often start with the web tier when modernizing an application.
This is the simplest path and benefits can be immediately achieved from
leveraging auto-scaling web-facing servers.

In the context of VMware Cloud on AWS, this involves:

• keeping the DB tier on VMware Cloud on AWS unchanged, benefiting
from performance and licensing advantages.

• transforming the web tier, leveraging:
–– AWS EC2 instances running across multiple availability zones for
resiliency, or alternatively containers via the Amazon ECS service.
–– AWS Elastic Load Balancing to balance traffic between the EC2
instances. Likewise, AWS ELB could also balance the traffic
between ECS containers.
–– AWS Web Application Firewall (WAF) services, offered by the
AWS Shield service, to protect web-facing applications from Denial
of Service (DoS) and Distributed Denial of Service (DDoS) attacks.

Communications from the EC2 web servers back to the SDDC DB tier
would be across the ENI.

Figure 190  Communication from the EC2 Web Servers Back to the SDDC DB Tier Using ENI

Accessing Managed Directory Services from VMware Cloud
on AWS
AWS offers a managed Microsoft Active Directory (AD) platform called
AWS Directory Service.

Figure 191  AWS Directory Service

This managed directory service currently supports Microsoft AD, Simple
AD, AD Connector, and Amazon Cognito. As this service is essentially a
managed EC2 instance hosted within a VPC and protected by Security
Groups, it can be accessed by VMs in VMware Cloud on AWS over the ENI.

Figure 192  Accessing AWS Directory Service Using ENI

Integration with Next-Gen Firewalls
VMware Cloud on AWS includes two Edge firewalls – the Management
Gateway Firewall (MGW FW) and the Compute Gateway Firewall
(CGW FW).

The management domain is protected by the MGW FW, which is an NSX
Edge security gateway that provides north/south network connectivity for
the vCenter Server and NSX Manager running in the SDDC.

The compute domain is protected by the CGW FW. The CGW FW provides
north/south network connectivity for VMs running in the SDDC.

Both the CGW and MGW provide firewall capabilities. The CGW FW and
MGW FW offer layer 4 (L4) firewalling – inspecting traffic up to layer 4 of
the OSI model. They look at IP addresses – both source and destination
– as well as TCP/UDP ports, filtering traffic based upon these criteria.

For Internet-facing applications or for Internet-bound traffic, security
architects might want to leverage an L7 firewall – a firewall capable of
inspecting packet payload and URL, then dropping traffic if its content
or its URL destination does not adhere to a defined security policy.

L7 firewalls are sometimes referred to as IPS/IDS, context-aware firewalls,
next-gen firewalls, or application firewalls. This book does not explore the
conceptual differences between individual approaches, but this definition
of firewall refers to common products from vendors such as Palo Alto
Networks, Check Point, Cisco, Juniper, and Fortinet.

There are a number of options for integrating next-generation firewalls
with VMware Cloud on AWS; two options are explored below.

Option 1: Inspect VMC traffic via the On-premises next-gen firewall

When using VMware Cloud on AWS as a data center extension while maintaining a presence on premises, it may be desirable for traffic to be inspected by an on-premises web proxy and Internet L7 firewall.

To accomplish this, advertise the default route over the VPN or Direct
Connect. All Internet-bound traffic from the VMware Cloud on AWS VMs
will transit the on-premises L7 appliance.

Figure 193  Inspecting VMware Cloud on AWS Traffic via On-Premises Next-gen Firewall

If there is a need to expose web-facing applications on VMware Cloud on AWS, advertise the public IPs of these VMs from the Internet-facing router, then NAT the public IPs to the private IP of VMC-VM. This setup is pictured in Figure 194.

Inbound traffic from the Internet will go through the on-premises Internet firewall, where the destination IP will be NATted to the private IP of VMC-VM, then forwarded across DX/VPN to VMC-VM.

Figure 194  Using Next-gen Firewall On Premises and Exposing SDDC Applications to Internet
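The NAT step can be pictured as a simple destination-translation table on the firewall. The addresses below are hypothetical documentation-range examples, not values from this book:

```python
# Hypothetical DNAT table on the on-premises Internet firewall:
# advertised public IP -> private IP of the corresponding VM in the SDDC.
DNAT_TABLE = {
    "192.0.2.10": "10.72.5.20",
    "192.0.2.11": "10.72.5.21",
}

def translate_inbound(dst_ip: str) -> str:
    """Rewrite an inbound packet's destination, as the firewall would
    before forwarding it across DX/VPN into the SDDC."""
    return DNAT_TABLE.get(dst_ip, dst_ip)  # no mapping: leave unchanged
```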

This is a common option which works well. For customers who have no
presence on premises, a different option is required – a firewall hosted in
the cloud.



Option 2: Next-Gen Firewall Deployed within a Transit VPC in Native AWS

This option leverages the concept of an AWS transit VPC. A transit VPC is
a common strategy for connecting multiple, geographically-dispersed
VPCs and remote networks to create a global network transit center.

A transit VPC simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks.

Figure 195  Next-gen Firewall Deployed within a Transit VPC in Native AWS

The AWS transit VPC serves as a hub VPC that connects to spoke VPCs
via a VPN. A next-gen firewall would be deployed within the transit VPC
as an EC2 instance. All traffic leaving the spoke VPCs is directed to the
hub/transit VPC for inspection by the next-gen firewall. In this setup,
VMware Cloud on AWS SDDC is just another spoke VPC.
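The routing intent of this design can be reduced to one rule: every spoke – including the SDDC – points its default route at the hub, so all inter-spoke and Internet-bound traffic passes the firewall. A toy model with hypothetical VPC names:

```python
SPOKES = {"spoke-vpc-1", "spoke-vpc-2", "vmc-sddc"}  # the SDDC is just another spoke
HUB = "transit-vpc"

def next_hop(src: str, dst: str) -> str:
    """Toy next-hop lookup for a hub-and-spoke transit VPC design."""
    if src in SPOKES:
        return HUB   # the spoke's default route points at the hub firewall
    if src == HUB:
        return dst   # the hub forwards on after inspection
    raise ValueError(f"unknown source {src}")
```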

The ENI-connected VPC is accessed via the ENI when the SDDC is
deployed. It is typically used for services such as Active Directory,
File Services (FSx) or backups. The ENI-connected VPC would not be
connected to the transit VPC – it would remain reachable from the
VMware Cloud on AWS SDDC via the ENI.

Figure 196  Connected VPC Design with Transit VPC

As an example, Figure 197 shows Palo Alto Networks appliances deployed in a Transit VPC.

Figure 197  Palo Alto Networks Appliances Deployed in a Transit VPC



This is the ideal option for customers already using transit VPCs, as
VMware Cloud on AWS SDDC would simply be another spoke. All traffic
from VMware Cloud on AWS SDDC to spoke VPCs or to the Internet
would transit through the secure transit VPC.

Disaster Recovery
As introduced in the use case section in Chapter 3, VMware Cloud on
AWS can offer a highly-reliable disaster recovery (DR) solution.

Building a reliable DR solution is not an easy task. Traditionally, many companies create a secondary DR site that is infrequently tested and sits idle until a disaster happens.

VMware Site Recovery uses the public cloud to help enterprises solve the DR problem. VMware Site Recovery is a disaster recovery as a service (DRaaS) offering on VMware Cloud on AWS. It is built on three key components:

• The ability to quickly and easily spin up a VMware environment on AWS instead of building a secondary DR target on premises.

• vSphere Replication to protect data with a five-minute RPO, regardless of the underlying storage solution.

• Site Recovery Manager (SRM) to orchestrate recovery in case of a disaster, to run non-disruptive testing to verify a DR plan, and to provide the detailed reporting needed for DR compliance.

These capabilities allow VMware Site Recovery customers to easily deploy a DR solution. For customers that already have a DR solution, VMware Site Recovery provides a way to easily protect more workloads or to save money by completely replacing the existing DR solution. It also enables workload protection across multiple AWS regions. Figure 198 summarizes these use cases.

Figure 198  VMware Site Recovery Use Cases

VMware Site Recovery can be used between an on-premises data center and an SDDC deployed on VMware Cloud on AWS, or between two SDDCs deployed to different AWS availability zones or regions. It also supports a fan-in topology that protects multiple source sites – either on premises or in VMware Cloud on AWS SDDCs – to a single VMware Cloud on AWS SDDC.

Figure 199  VMware Site Recovery Fan In Topology

VMware Site Recovery is also supported in the fan-out topology portrayed in Figure 200.



Figure 200  VMware Site Recovery Fan Out Topology

In this scenario, a main on-premises site is already protected by one or more on-premises DR sites. VMware Cloud on AWS can be leveraged as an additional DR site for an extra level of protection.

Migration of protected inventory and services from one site to the other
is controlled by a recovery plan that specifies the order in which virtual
machines are shut down and started up, the resource pools to which they
are allocated, and the networks they can access.
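The ordering aspect of a recovery plan can be sketched as priority groups started in sequence, each group powered on only after the previous one. The VM names and group assignments below are hypothetical illustrations, not SRM's actual API:

```python
from collections import defaultdict

# Hypothetical recovery plan: VM name -> priority group (group 1 starts first,
# e.g. databases before application servers before web tiers).
RECOVERY_PLAN = {"db-01": 1, "db-02": 1, "app-01": 2, "web-01": 3, "web-02": 3}

def startup_order(plan: dict) -> list:
    """Return VMs grouped by priority, in the order they would be started."""
    groups = defaultdict(list)
    for vm, priority in plan.items():
        groups[priority].append(vm)
    return [sorted(groups[p]) for p in sorted(groups)]
```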

VMware Site Recovery enables the testing of recovery plans, using a temporary copy of the replicated data and isolated networks in a way that does not disrupt ongoing operations at either site.

Multiple recovery plans can be configured to migrate individual applications or entire sites, providing a fine degree of control over which virtual machines are failed over and failed back. This also enables flexible testing schedules.

Summary
VMware Cloud on AWS enables several use cases around data center extension, migration, and disaster recovery. VMware Cloud on AWS with NSX networking and security features provides robust capabilities and solutions to existing data center problems. Customers can easily deploy a cloud or hybrid cloud solution leveraging many of the technologies they already use on premises. VMware Cloud on AWS offers a consistent look-and-feel on premises and in the cloud while following similar operational and management models. Customers can also leverage existing investments in technology and training. It is extremely easy to get started with VMware Cloud on AWS and the respective networking and security features; more information is available at: https://cloud.vmware.com/vmc-aws.

Appendix A: Account Structure and
Data Charges
With the success of the VMware Cloud on AWS service, there are often questions about the AWS account structure and data charges. The charges discussed below are in addition to the cost of deploying a VMware Cloud on AWS SDDC, which is available with on-demand, 1-year, and 3-year terms. The latest pricing and details are available at https://cloud.vmware.com/vmc-aws/pricing.

It is important to understand who owns what and how AWS charges will
be presented.

The VMware Cloud SDDC Account


Each VMware Cloud on AWS SDDC has an associated AWS account/VPC.
This AWS/VPC account is owned by VMware. The VMware Cloud on AWS
SDDC will be deployed using this respective AWS/VPC account on the back
end. The cluster nodes are EC2 bare metal instances deployed in the VPC
and belong to VMware; this is paid for directly to AWS by VMware. The
associated resources/SDDCs are dedicated to the customer. VMware has a
different account per customer, and the resources deployed are not shared.
These costs are covered by VMware and passed on to the customer.

The AWS Customer Account/Connected VPC


The AWS customer account/connected VPC is owned by the customer
and will be linked to the VMware Cloud on AWS SDDC. AWS resources in
this account are paid directly by the customer to AWS.

There is a private networking connection between the VMware Cloud on AWS SDDC and the customer/connected VPC. This high-speed, low-latency construct, developed and enabled jointly by VMware and AWS, allows VMs in the VMware Cloud on AWS SDDC to access native resources in the AWS customer/connected VPC like S3, EC2, RDS, etc.

The customer will receive two bills: one from VMware and one from AWS.
The AWS bill is paid directly by the customer. It includes charges related
to the customer VPC like Internet out, cross-region, cross-AZ, EC2, S3,
RDS, etc.

The VMware bill includes charges related to the AWS VPC used by the VMware Cloud on AWS SDDC. This bill includes egress charges, cross-AZ charges, Elastic IPs, etc., as described at the following link:
https://cloud.vmware.com/vmc-aws/pricing

Prices are subject to change and are specific to AWS regions, as described at the following link:
https://aws.amazon.com/ec2/pricing/on-demand/

Pricing examples include:

• Internet out is $0.05/GB

• Direct Connect out is $0.02/GB

• Incoming traffic is free

• ENI connection traffic is free if in the same region and availability zone. Note, when deploying a stretched cluster solution, there will be cross-AZ charges if resources accessed in the connected VPC are in a different AZ than the respective source VM/workload.

• For S3 storage, data out to S3 is free and data in from S3 in the same region is also free. The AWS S3 service itself is not free.

• Egress to the Internet is $0.09/GB for US and Europe (up to 10 TB/month)
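As a back-of-the-envelope aid, the per-GB rates can be plugged into a small estimator. The rates below are illustrative values drawn from the list above; they vary by region and over time, so always check the pricing pages:

```python
# Illustrative per-GB rates in USD (region-dependent; see the pricing links).
RATES = {
    "direct_connect_out": 0.02,
    "internet_out": 0.09,   # US/Europe, up to 10 TB/month
    "incoming": 0.00,
    "eni_same_az": 0.00,    # cross-AZ traffic is billed separately
}

def monthly_estimate(gb_by_type: dict) -> float:
    """Sum the per-GB charges for a month of traffic, broken out by type."""
    return round(sum(RATES[t] * gb for t, gb in gb_by_type.items()), 2)

# e.g. 500 GB out over Direct Connect plus 200 GB out to the Internet
# monthly_estimate({"direct_connect_out": 500, "internet_out": 200}) -> 28.0
```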



Figure 201 provides a visual depiction of the different data charges
discussed above.

Figure 201  AWS Data Charges Diagram

Index
A
APIs  7, 17, 97, 116, 117, 118, 119, 121, 122, 123, 124, 128
ASN  49, 51
AWS EC2  25, 145
AWS S3  144, 156
AWS Transit Gateway  16, 64, 65, 66, 70, 71, 72

B
BGP  16, 46, 49, 51, 54, 56, 57, 58, 59, 60, 63, 68, 71

C
CGW Firewall  16, 21, 22, 39, 40, 44, 90, 92, 93, 127, 128
Connected VPC  21, 22, 24, 25, 80, 81, 82, 92, 110, 127, 133, 141, 142, 150, 155, 156

D
DHCP  15, 33, 35, 36, 133
Direct Connect  4, 15, 21, 22, 25, 28, 32, 45, 46, 47, 48, 49, 50, 52, 53, 54, 55, 60, 65, 123, 127, 147
Direct Connect Private VIF  24, 28, 29, 32, 38, 39, 45, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59, 60, 63, 66, 74, 92, 156
Direct Connect Public VIF  47
Disaster Recovery  XIII, 8, 11, 12, 134, 151, 154
Distributed Firewall (DFW)  16, 83, 90, 94, 95, 96, 97, 98, 99, 100, 106, 115, 137
DNS  15, 29, 37, 38, 55, 98
DX  45, 46, 148

E
Edge Firewall  13, 15, 16, 17, 22, 25, 92, 102, 137, 147

G
Grouping Objects  17, 27, 29, 44, 83, 85, 86, 87, 88, 89, 90, 92, 95, 96, 102

H
HCX  11, 16, 34, 134, 135, 136
Hybrid Linked Mode  2, 29

I
IGW  21, 22, 31, 32
IPFIX  17, 101, 109, 110, 111, 112

L
L2VPN  4, 16, 34, 74, 75, 76, 77, 78, 79, 136
LAG  47, 48
Load Balancing  134, 137, 138, 139, 145

M
MGW Firewall  16, 21, 22, 26, 27, 29, 90, 92, 93, 94, 102, 106
Micro-segmentation  XVII, 16, 21, 25, 94, 95, 96, 115

N
NAT  13, 15, 39, 41, 42, 44, 148
Network Segments  15, 19, 20, 21, 22, 33, 34, 35, 36, 40, 48, 49, 59, 63, 111, 114, 124, 126
NSX  XIII, XIV, XV, XVI, XVII, XVIII, 1, 3, 4, 7, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 37, 40, 42, 45, 48, 49, 54, 59, 63, 64, 74, 79, 80, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 105, 106, 107, 109, 116, 121, 122, 123, 124, 125, 131, 132, 136, 137, 147, 154

P
Policy-Based IPSEC VPN  4, 15, 56, 63
Port Mirroring  17, 25, 101, 102, 103, 104
Public IPs  15, 26, 38, 39, 40, 41, 42, 44, 46, 55, 57, 67, 68, 148

R
Role Based Access Control (RBAC)  17, 24, 83, 84, 85
Route-Based IPSEC VPN  4, 16, 32, 52, 53, 54, 56, 57, 58, 59, 60, 61, 62, 63, 64, 66, 71
Routing  13, 15, 25, 30, 31, 54, 55, 102, 135, 141

S
SDDC  XVI, 1, 2, 3, 4, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 30, 31, 32, 33, 37, 38, 39, 42, 47, 48, 49, 50, 51, 52, 54, 55, 56, 58, 60, 62, 64, 65, 66, 67, 68, 69, 70, 71, 72, 74, 75, 77, 79, 80, 81, 82, 83, 84, 85, 92, 96, 97, 101, 102, 103, 106, 110, 113, 114, 117, 118, 121, 122, 123, 124, 125, 128, 132, 134, 135, 137, 139, 140, 141, 142, 143, 144, 145, 147, 148, 149, 150, 151, 152, 155, 156
Security Groups  85, 92, 109, 110, 146
Security Tags  17, 83, 85, 87, 88, 89, 91, 92, 95
Site Recovery Manager (SRM)  12, 151
SNAT  15, 22, 39, 40

T
Transit VPC  64, 149, 150, 151

V
vCenter  XIII, XVI, 2, 3, 7, 19, 20, 21, 22, 24, 26, 27, 28, 29, 38, 39, 55, 87, 90, 92, 93, 94, 126, 127, 133, 147
VIF  46, 47, 50
VPC  15, 16, 18, 46, 47, 49, 56, 59, 64, 65, 67, 69, 70, 73, 80, 82, 104, 110, 112, 127, 133, 141, 142, 143, 144, 146, 149, 151, 155, 156
vRealize Network Insight  101, 107, 112, 113
vSphere ESXi  1, 2, 3, 7

Companies are under tremendous pressure to implement cloud strategies –
ranging from establishing cloud-based development environments to
completely evacuating data centers and moving to the cloud. Traditional
cloud solutions hold many challenges; these include application redesign,
workload migration, investment protection, and operational model
consistency between on premises and cloud. VMware Cloud on AWS –
an offering jointly developed by VMware and AWS – addresses these
challenges with comprehensive solutions. VMware Cloud on AWS
simplifies the experience of enterprise customers looking to implement
a cloud strategy, allowing them to extend and migrate their data center
solutions to the cloud with ease.

By leveraging the same technologies in the cloud that enterprise customers already use on premises, VMware Cloud on AWS helps accelerate migration of VMware vSphere-based workloads to the cloud while providing robust capabilities and hybrid cloud solutions. VMware Cloud on AWS also allows organizations to take immediate advantage of the scalability, availability, security, and global reach of the AWS infrastructure.

This book dives into the features and capabilities of VMware NSX
networking and security in VMware Cloud on AWS. NSX is VMware’s
network virtualization and security stack that provides a full set of logical
network and security services decoupled from the underlying physical
infrastructure. These enhanced NSX distributed networking and security
functions enable much of the functionality in VMware Cloud on AWS,
providing a robust cloud and hybrid cloud solution. Customers can now
immediately utilize VMware Cloud on AWS SDDCs, deploying and
migrating workloads to the cloud while leveraging technologies they
already use on premises like VMware ESXi™, VMware vSAN™, VMware NSX® networking and security, and VMware vCenter®.

With VMware Cloud on AWS, extending or moving to the cloud is no longer a daunting task. In this book, VMware NSX and VMware Cloud on AWS experts Humair Ahmed, Gilles Chekroun, and Nico Vibert discuss use cases and solutions while also providing a detailed walkthrough of the NSX networking and security capabilities in VMware Cloud on AWS.

ISBN-13: 978-0-9986104-9-8
ISBN-10: 0-9986104-9-6

www.vmware.com/go/run-nsx
www.cloud.vmware.com/vmc-aws
