
DELL EMC VxRAIL NETWORK GUIDE

Physical and Logical Network Considerations and


Planning

ABSTRACT
This is a planning and consideration guide for VxRail Appliances. It
can be used to better understand the networking required for a VxRail
implementation. This white paper does not replace the requirement for
implementation services with VxRail Appliances and should not be
used in an attempt to implement the required networking for VxRail
Appliances.

December 2016

The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind
with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright 2016 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks
of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the
USA. 12/16 Technical White Paper H15300.1

Dell EMC believes the information in this document is accurate as of its publication date. The information
is subject to change without notice.

Contents

Intended Use and Audience
Introduction to VxRail
    Decision Regarding vCenter Server
Planning Your Network
    Physical Network
        VxRail Clusters, Appliances and Nodes
        Network Switch
        Topology and Connections
        Workstation/Laptop
        Out-of-Band Management (optional)
Before Cabling VxRail Appliances
    Step 1: Plan Logical Network
        Step 1A. Reserve VLANs (Best Practice)
        Step 1B. System
            Time Zone, NTP Server
            DNS Server
        Step 1C. Management
            ESXi Hostnames and IP Addresses
            vCenter Server
            VxRail Manager and Networking
            Passwords
        Step 1D. vMotion and vSAN
        Step 1E. Solutions
        Step 1F. Workstation/Laptop
    Step 2: Set Up Switch
        Step 2A. Understanding Switch Configuration
            Network Traffic
            Multicast Traffic
            Inter-switch Communication
            Disable Link Aggregation
            vSphere Security Recommendations
        Step 2B. Configure VLANs on your switch(es)
        Step 2C. Confirm Your Configuration
After Planning and Switch Setup
    Unassigned Physical Ports
    Network Segregation
VxRail Network Configuration Table
VxRail Setup Checklist
Appendix A: NSX Support on VxRail

Intended Use and Audience
This guide discusses the essential network details for VxRail deployment planning purposes only. It also introduces
best practices, recommendations, and requirements for both physical and virtual network environments. The guide
has been prepared for anyone involved in planning, installing, and maintaining VxRail, including Dell EMC field
engineers and customer system and network administrators. This guide should not be used to perform the actual
installation and set-up of VxRail. Please work with your Dell EMC service representative to perform the actual
installation.

Introduction to VxRail
Dell EMC VxRail Appliances are a hyper-converged infrastructure (HCI) solution that consolidates compute and
storage into a single, highly available, network-ready unit. With careful planning, VxRail Appliances can be rapidly
deployed into an existing environment and the infrastructure is immediately available to deploy applications and
services.

VxRail is not a server; it is an appliance. The G Series consists of up to four nodes in a single appliance, while all other
models, based on Dell PowerEdge servers, have a single node per appliance. A 10GbE switch (or a 1GbE switch for
certain models of VxRail) is required. A workstation/laptop for the VxRail user interface is also required.

VxRail has a simple, scale-out architecture, leveraging VMware vSphere and vSAN to provide server virtualization
and software-defined storage. Fundamental to the VxRail clustered architecture is network connectivity. It is through
the logical and physical networks that individual nodes act as a single system providing scalability, resiliency and
workload balance.

The VxRail software bundle is preloaded onto hardware and consists of the following components (specific software
versions not shown):

VxRail Manager
VMware vCenter Server
VMware vRealize Log Insight
VMware vSAN
EMC Secure Remote Support (ESRS)/VE
EMC RecoverPoint for Virtual Machines (RP4VM): 15 full licenses per G Series appliance chassis, or 5 full licenses per appliance
for all other (single-node-per-chassis) VxRail series appliances
EMC CloudArray: 1 TB local cache / 10 TB cloud storage license (per appliance chassis)
VMware vSphere licenses are also required and can be purchased through Dell EMC, VMware, or your preferred VMware
reseller partner
VxRail is fully compatible with other software in the VMware ecosystem, including VMware NSX.

Decision Regarding vCenter Server


The VxRail virtual infrastructure is managed by a single vCenter Server instance. Either VxRail connects to an
existing vCenter Server, or vCenter Server is installed and configured during VxRail initial configuration. Whether to
use an internal vCenter Server that is part of the VxRail infrastructure or to connect to an existing external vCenter
Server instance is an important decision point. If your VxRail cluster is in a standalone environment, then configuring
an internal vCenter Server is the easiest approach. On the other hand, if the new VxRail cluster will be added to an
existing VMware environment, integrating into the existing vCenter Server offers a consolidated view and
management point for the virtualized environment.

Please note that the included vCenter Server license is for the internal vCenter Server only and is not transferable for
use as an external vCenter Server.

Planning Your Network
The network considerations are no different from those of any enterprise IT infrastructure: availability, performance,
and extensibility. Generally, VxRail Appliances are delivered ready to deploy and attach to any 10GbE network
infrastructure and use IPv4 and IPv6. Some models with single processors are available for 1GbE networks. Most
production VxRail network topologies use dual top-of-rack (ToR) switches to eliminate the switch as a single point
of failure.

Follow all of the network prerequisites described in this document; otherwise VxRail will not install properly, and it will
not function correctly in the future. If you have separate teams for networking and servers in your data center, they will
need to work together to design the network and configure the switch(es).

Physical Network
This section describes the physical components found in a VxRail cluster:

VxRail clusters, appliances and nodes


Network switch
Topology and connections
Workstation/laptop
Out-of-band management (optional)

VxRail Clusters, Appliances and Nodes


VxRail starts with a minimum of 3 nodes (either in a single G-series chassis or three individual appliance nodes for all
other models) connected to one or more network switches, deployed to form a VxRail cluster that contains the vSAN
environment. Up to 64 VxRail nodes can be added to the cluster. The internal disks on each node combine to create
a VxRail datastore that is shared across all the nodes in the cluster, whether it is a cluster of three nodes or 64 nodes.
Within the cluster, multiple networks may service different functions or types of traffic.

The cluster is managed by a single instance of VxRail Manager and vCenter Server. A logical tag in each node and
chassis is used to display the identity of the appliance in VxRail Manager. These tags are 11 alphanumeric
characters that uniquely identify the appliance.

Please review the physical power, space and cooling requirements for your expected resiliency level.

The following illustrations show possible configurations of a VxRail Appliance with four nodes:

Figure 1. VxRail G Series appliance with four nodes, showing the 10GbE ports on each node

Figure 2. VxRail G Series appliance with four nodes, showing the 1GbE ports on each node (callouts: BMC port, 1GbE ports)

Note: The 2x10GbE ports will auto-negotiate to 1GbE when used with 1GbE networking.

Figure 3. VxRail E, P, S and V Series node, showing the 10GbE and 1GbE ports (callouts: iDRAC port, 10GbE ports, 1GbE ports)

Network Switch
VxRail is broadly compatible with most customer networks and switches. VxRail nodes communicate over one or
more customer-provided network switch(es), typically a top-of-rack switch. One example is the Dell EMC Connectrix
VDX-6740 switch, available through your Dell EMC representative.

Switch requirements:

The switch(es) connected directly to VxRail Appliances must support IPv4 and IPv6 multicast on 10GbE ports for all models of
VxRail except for the models that utilize 1GbE for their primary networking.
Be sure to have access to the manufacturer's documentation for your specific switch(es).
Keep in mind that while one switch can work, it is a potential single point of failure.

Port availability:

G Series
Each VxRail node with 10GbE ports ships with either two SFP+ or RJ-45 NIC ports. Two corresponding ports are required
for each VxRail node on one or more 10GbE switch(es). Six ports are needed for a three-node initial configuration.
Each VxRail node with 1GbE ports ships with four RJ-45 NIC ports. Four corresponding ports are required for each VxRail
node on one or more 1GbE switch(es). Twelve ports are needed for a three-node initial configuration.
One additional port on the switch or one logical path on the VxRail management VLAN is required for a workstation/laptop
to access the VxRail user interface for the cluster.
E, P, S and V Series
Each VxRail node comes with a Network Daughter Card (NDC) consisting of 2x10GbE + 2x1GbE in either SFP+ or RJ-45
NIC ports.
The 2x10GbE ports will auto-negotiate to 1GbE when used with 1GbE networking.
Two corresponding ports are required for each VxRail node on one or more 10GbE switch(es), when utilizing 10GbE as
the primary networking speed.
Four corresponding ports are required for each VxRail node on one or more 1GbE switch(es), when utilizing 1GbE
networking on the single-processor models. (Note: The P and V Series do not offer any single-processor configurations.)
One additional port on the switch or one logical path on the VxRail management VLAN is required for a workstation/laptop
to access the VxRail user interface for the cluster.
One additional PCI-e NIC can be added to the node, except on the single-processor E460.
- The PCI-e NIC provides two 10GbE ports, with either SFP+ or RJ-45 interfaces.
- The VxRail initialization process does not touch the PCI-e NIC. Customers can use these ports for their own purposes,
such as VM networks, iSCSI, or NFS.
In E, P, S, and V Series VxRail Appliances utilizing 10GbE, the 1GbE NIC ports must be disconnected
during VxRail initialization, node addition, and node replacement. Contact your Dell EMC sales or support
team before using the 1GbE NIC ports when the 2x10GbE ports on the NDC are connected to a 10GbE switch.
Special approval is required for this use case.

Cable requirements:

VxRail nodes with RJ-45 ports require CAT5 or CAT6 cables. CAT6 cables are included with every VxRail Appliance.
VxRail nodes with SFP+ ports require optics modules (transceivers) and optical cables, or Twinax Direct-Attach Copper (DAC)
cables. These cables and optics are not included; you must supply your own. The optics on the NIC and on the switch must
use the same wavelength.

Please review the logical switch configuration requirements in the next section of this document.

Topology and Connections


Various network topologies for switch(es) and VLANs are possible with VxRail Appliances. Complex production
environments will have multiple core switches and VLANs. A site diagram showing the proposed network
components and connectivity is highly recommended before cabling and powering on VxRail Appliances.

Be sure to follow your switch vendor's best practices for performance and availability. For example, packet buffer
banks may provide a way to optimize your network for your wiring layout.

Decide if you plan to use one or two switches for VxRail. One switch is acceptable and is often seen in
test/development or remote/branch office (ROBO) environments. However, two or more switches are used for high
availability and failover in production environments. Because VxRail is an entire software-defined data center in a
box, if one switch fails you are at risk of losing availability of hundreds of virtual machines.

Figure 4. Rear view of one deployment of a VxRail Appliance connected to two 10GbE
switches and a separate switch for out-of-band management

Workstation/Laptop
A workstation/laptop with a web browser for the VxRail user interface is required. It must be either plugged into the
switch or able to logically reach the VxRail management VLAN from elsewhere on your network, for example from a jump
server (https://en.wikipedia.org/wiki/Jump_server).

Don't try to plug your workstation/laptop directly into a server node on a VxRail Appliance; plug it into your
network or switch and make sure that it is logically configured to reach VxRail.

You will use a browser for the VxRail user interface. The latest versions of Firefox, Chrome, and Internet Explorer 10+
are all supported. If you are using Internet Explorer 10+ and an administrator has set your browser to compatibility
mode for all internal websites (local web addresses), you will get a warning message from VxRail. Contact your
administrator to whitelist URLs mapping to the VxRail user interface.

Out-of-Band Management (optional)


If the VxRail Appliances will be located at a data center that you cannot access easily, we recommend setting up an
out-of-band management switch to facilitate direct communication with each node.

For G-Series (and VxRail models prior to VxRail 4.0):


To use out-of-band management, connect the BMC port on each node to a separate switch to provide physical
network separation.

Default values, capabilities, and recommendations for out-of-band management are provided with server hardware
information. The default configuration is via DHCP with:

Username: UserId    Password: Passw0rd!

NOTE: The credentials are case sensitive, and the password uses a zero in place of the letter o.
The <ApplianceID> can be found on a pull-out tag located on the front of the chassis. The default hostnames should be as
follows:

BMC interface node 1: hostname = <ApplianceID>-01


BMC interface node 2: hostname = <ApplianceID>-02
BMC interface node 3: hostname = <ApplianceID>-03
BMC interface node 4: hostname = <ApplianceID>-04

For E, P, S and V Series based on Dell PowerEdge Servers:


To use out-of-band management, connect the integrated Dell Remote Access Controller (iDRAC) port to a separate
switch to provide physical network separation.

Default values, capabilities, and recommendations for out-of-band management are provided with server hardware
information. The default configuration is:

Username: root Password: calvin

You will need to reserve an IP address for each iDRAC in your VxRail cluster (one per node).

Before Cabling VxRail Appliances

Step 1: Plan Logical Network


VxRail is not a simple server but an entire data center in a box. Consequently, the network and virtualization teams
need to meet in advance to plan VxRail's network architecture.

Use the VxRail Setup Checklist and the VxRail Network Configuration Table to help create your network plan.
References to rows in this document are to rows in the VxRail Network Configuration Table.

Once you set up VxRail Appliances, the configuration cannot be changed easily. Consequently, we
strongly recommend that you take care during this planning phase to decide on the configurations that
will work most effectively for your organization.

A VxRail cluster consists of three or more VxRail nodes in VxRail 4.0 and four or more VxRail nodes in earlier
releases. VxRail clusters can scale out to 64 ESXi hosts all on one vSAN datastore, backed by a single vCenter
Server and VxRail Manager. Deployment, configuration, and management are handled by VxRail, allowing the
compute capacity and the vSAN datastore to grow automatically. VxRail Manager automatically discovers each new
node, configures it, and adds it to the default vSphere Distributed Switch (VDS); vCenter Server propagates the port
groups of the default VDS to the new node. However, if customers manually add a new VDS or vSphere Standard Switch
(VSS), or add unused physical network adapters to the default or a new VDS/VSS, they must manually configure the
network on the new node accordingly.

You will be making decisions in the following areas:

Step 1A. Reserve VLANs (best practice)


Step 1B. System
Step 1C. Management
Step 1D. vMotion and vSAN
Step 1E. Solutions
Step 1F. Workstation/laptop

Step 1A. Reserve VLANs (Best Practice)


VxRail groups traffic in the following categories: management, vSphere vMotion, vSAN, and Virtual Machine. Traffic
isolation on separate VLANs is highly recommended (but not required) in VxRail. If you are using multiple switches,
connect them via VLAN trunked interfaces and ensure that all VLANs used for VxRail are carried across the trunk
following the requirements in this user guide.

Management traffic includes all VxRail, vCenter Server, and ESXi communication. The management VLAN also
carries traffic for vRealize Log Insight. By default, all management traffic is untagged and must be able to go over a
Native VLAN on your switch or you will not be able to build VxRail and configure the ESXi hosts. However, you can
tag management traffic in one of two ways:

1. Configure each VxRail port on your switch to tag the management traffic and route it to the desired VLAN.

2. Alternatively, you can configure a custom management VLAN to allow tagged management traffic; this is done after you
power on each node but before you run VxRail initial configuration. Your Dell EMC service representative will take care of
this during installation.

vSphere vMotion and vSAN traffic cannot be routed. This traffic will be tagged for the VLANs you specify in VxRail
initial configuration.

Dedicated VLANs are preferred to divide virtual machine traffic. VxRail will create one or more VM Networks for
you, based on the name and VLAN ID pairs that you specify. Then when you create VMs in vSphere Web Client, you
can easily assign the virtual machine to the VM Network(s) of your choice. For example, you could have one VLAN
for Development, one for Production, and one for Staging.

Network Configuration Table, Row 1: Enter the management VLAN ID for VxRail, ESXi, and vCenter Server. If you do
not plan to have a dedicated management VLAN and will accept this traffic as untagged, enter 0 or "Native VLAN".

Network Configuration Table, Row 34: Enter a VLAN ID for vSphere vMotion. (Enter 0 in the VLAN ID field for untagged traffic.)

Network Configuration Table, Row 38: Enter a VLAN ID for vSAN. (Enter 0 in the VLAN ID field for untagged traffic.)

Network Configuration Table, Rows 39-41: Enter a name and VLAN ID pair for each VM Network you want to create.
You must create at least one VM Network. (Enter 0 in the VLAN ID field for untagged traffic.)

NOTE: If you have multiple independent VxRail clusters, we recommend using different VLAN IDs for vSAN traffic
and for management across multiple VxRail clusters. Otherwise, all VxRail nodes on the same network will
see all multicast traffic.
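If it helps to capture the plan before any switch work begins, the minimal Python sketch below records the VLAN IDs from the Network Configuration Table rows above and flags obvious reuse of a tagged VLAN ID across traffic types. The VLAN numbers shown are placeholders chosen for illustration, not recommendations.

```python
# Illustrative planning sketch only: record the planned VLAN IDs and flag
# collisions. VLAN numbers below are placeholders, not recommendations.
vlan_plan = {
    "management": 0,        # 0 / "Native VLAN" = untagged management traffic
    "vmotion": 100,
    "vsan": 200,
    "vm_networks": {"Development": 110, "Production": 120, "Staging": 130},
}

def check_vlan_plan(plan: dict) -> list:
    """Return warnings for duplicate tagged VLAN IDs within one cluster plan."""
    warnings = []
    ids = [plan["management"], plan["vmotion"], plan["vsan"],
           *plan["vm_networks"].values()]
    tagged = [v for v in ids if v != 0]
    dupes = {v for v in tagged if tagged.count(v) > 1}
    if dupes:
        warnings.append(f"VLAN IDs reused across traffic types: {sorted(dupes)}")
    return warnings

print(check_vlan_plan(vlan_plan) or "No obvious VLAN collisions in this plan")
```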

Step 1B. System
VxRail can configure connections to external servers in your network.

Time Zone, NTP Server


A time zone is required. It is configured on vCenter Server and each ESXi host.

An NTP server is not required, but it is recommended. If you provide an NTP server, vCenter Server will be
configured to use it. If you do not provide at least one NTP server, VxRail uses the time that is set on ESXi host #1
(regardless of whether the time is correct or not).

A proxy server is optional and applies only to VxRail versions prior to 3.5. If you have a proxy server on your network
and vCenter Server needs to access services outside of your network, supply the proxy's IP address, port, username, and
password.

Network Configuration Table, Row 3: Enter your time zone.

Network Configuration Table, Row 4: Enter the hostname(s) or IP address(es) of your NTP server(s).

Network Configuration Table, Rows 6 and 7: Enter the proxy server IP address, port, username, and password.

DNS Server
One or more external DNS servers are required for production use (a DNS server is not required in a completely isolated
environment). DNS is used for some VxRail management operations, such as importing an OVA file, which requires an
FQDN for direct host access. During initial configuration, VxRail sets up the internal vCenter Server to resolve
hostnames against the DNS server. If you are using an external vCenter Server, the DNS server must be correct and
able to resolve all ESXi hostnames.

If you are in an isolated environment, you will need to use the DNS server that is built into vCenter Server. To
manage VxRail from your workstation/laptop, configure the laptop's network settings to use the vCenter Server IP
address (Row 15) for DNS. VxRail's IP addresses and hostnames are configured for you.

Make sure that the DNS IP address is accessible from the network to which VxRail is connected and that the DNS
server is functioning properly. If the DNS server requires access via a gateway that is not reachable during initial
configuration, do not enter a DNS IP address. Instead, add a DNS server after you have configured
VxRail by following VMware KB 2107249 (http://kb.vmware.com/kb/2107249).

Network Configuration Table, Row 5: Enter the IP address(es) for your DNS server(s). Leave blank if you are in an
isolated environment. Required when an external vCenter Server is used.

If you are using your corporate DNS server(s) for VxRail, be sure to add the hostnames and IP addresses for VxRail
Manager, vCenter Server, Log Insight, and each ESXi host (see the naming scheme in ESXi Hostnames and IP
Addresses). vMotion and vSAN IP addresses are not configured for routing by VxRail and have no hostnames.

Example of VxRail hostnames and IP addresses configured on a DNS server:

esxi-host01.localdomain.local 192.168.10.1
esxi-host02.localdomain.local 192.168.10.2
esxi-host03.localdomain.local 192.168.10.3
esxi-host04.localdomain.local 192.168.10.4
vxrail.localdomain.local 192.168.10.100
vcserver.localdomain.local 192.168.10.101
loginsight.localdomain.local 192.168.10.102
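
Before initial configuration, it can be useful to confirm that these records resolve in both directions. The Python sketch below uses only the standard library and assumes the example hostnames and addresses shown above; substitute your own values.

```python
# Pre-installation DNS check (example hostnames/IPs from this guide).
import socket

expected = {
    "esxi-host01.localdomain.local": "192.168.10.1",
    "esxi-host02.localdomain.local": "192.168.10.2",
    "esxi-host03.localdomain.local": "192.168.10.3",
    "esxi-host04.localdomain.local": "192.168.10.4",
    "vxrail.localdomain.local": "192.168.10.100",
    "vcserver.localdomain.local": "192.168.10.101",
    "loginsight.localdomain.local": "192.168.10.102",
}

for name, ip in expected.items():
    try:
        forward = socket.gethostbyname(name)   # A record lookup
        ok = "OK" if forward == ip else f"MISMATCH (A record returned {forward})"
    except socket.gaierror as err:
        ok = f"FORWARD LOOKUP FAILED ({err})"
    try:
        reverse = socket.gethostbyaddr(ip)[0]  # PTR record lookup
    except (socket.herror, socket.gaierror):
        reverse = "no PTR record"
    print(f"{name:32} {ip:15} {ok}; reverse -> {reverse}")
```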

Step 1C. Management


VxRail does not have a single hostname. You must configure the hostnames for each ESXi host, VxRail Manager,
and vCenter Server.

You must configure the IP addresses for VxRail, vCenter Server, and your ESXi hosts. When selecting your IP
addresses, you must make sure that none of them conflict with existing IP addresses in your network. Also make sure
that these IP addresses can reach other hosts in your network.

You cannot easily change the IP addresses after you have configured VxRail.

ESXi Hostnames and IP Addresses


All ESXi hostnames in a VxRail cluster are defined by a naming scheme that comprises an ESXi hostname prefix (an
alphanumeric string), a separator ("None" or a dash "-"), an iterator ("Alpha", "Num X", or "Num 0X"), and a domain. The
Preview field shown during VxRail initial configuration is an example of the hostname of the first ESXi host. For
example, if the prefix is "host", the separator is "None", the iterator is "Num 0X", and the domain is "local", the first
ESXi hostname would be "host01.local". The domain is also automatically applied to the vCenter Server and VxRail
virtual machines. (Example: my-vcenter.local)

Examples:

                      Example 1       Example 2               Example 3
Prefix                host            myname                  esxi-host
Separator             None            -                       -
Iterator              Num 0X          Num X                   Alpha
Domain                local           college.edu             company.com
Resulting hostname    host01.local    myname-1.college.edu    esxi-host-a.company.com
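
As an illustration of how the scheme expands, the short Python sketch below reproduces the iterator styles described above. The function and its output are illustrative only and are not part of VxRail.

```python
# Sketch of the naming scheme: prefix + separator + iterator + domain.
import string

def esxi_hostnames(prefix, separator, iterator, domain, count):
    """Generate ESXi hostnames for a cluster of `count` nodes."""
    names = []
    for i in range(count):
        if iterator == "Num 0X":
            suffix = f"{i + 1:02d}"             # 01, 02, 03, ...
        elif iterator == "Num X":
            suffix = str(i + 1)                 # 1, 2, 3, ...
        elif iterator == "Alpha":
            suffix = string.ascii_lowercase[i]  # a, b, c, ...
        else:
            raise ValueError(f"Unknown iterator style: {iterator}")
        sep = "" if separator == "None" else separator
        names.append(f"{prefix}{sep}{suffix}.{domain}")
    return names

# Reproduces Example 1 and Example 3 from the table above.
print(esxi_hostnames("host", "None", "Num 0X", "local", 4))
print(esxi_hostnames("esxi-host", "-", "Alpha", "company.com", 4))
```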

There are three or more ESXi hosts in your initial cluster and each requires an IP address. If you plan to scale out
with additional nodes in this VxRail cluster within the first few weeks after installation, we recommend you allocate
extra IP addresses for each of the ESXi, vMotion, and vSAN IP pools when you initially configure VxRail (three extra
IP addresses per node). Then when you add nodes to a cluster, you will only need to enter the ESXi and VxRail /
vCenter Server passwords.

Network Configuration Table, Rows 8-11: Enter an example of your desired ESXi host-naming scheme. Be sure to show
your desired prefix, separator, iterator, and domain.

Network Configuration Table, Rows 12 and 13: Enter the starting and ending IP addresses for the ESXi hosts; a continuous
IP range is required, with a minimum of 4 IPs.
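
A small planning aid, not part of the product: the Python sketch below expands a starting/ending address pair (Rows 12 and 13) into the contiguous pool and checks that it covers the nodes you expect, including planned growth. The sample range and node count are assumptions for illustration.

```python
# Expand a contiguous ESXi IP pool and verify its size against planned nodes.
import ipaddress

def ip_pool(start: str, end: str) -> list:
    """Return every address from start to end inclusive (a contiguous range)."""
    first, last = ipaddress.IPv4Address(start), ipaddress.IPv4Address(end)
    if last < first:
        raise ValueError("Ending address precedes starting address")
    return [str(ipaddress.IPv4Address(i)) for i in range(int(first), int(last) + 1)]

pool = ip_pool("192.168.10.1", "192.168.10.6")   # example range from this guide
planned_nodes = 4 + 2                            # four initial nodes + two planned
print(pool)
print("Pool large enough:", len(pool) >= planned_nodes)
```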

vCenter Server
A new feature introduced in VxRail 3.5 is the ability to join an existing vCenter Server instead of deploying a new
vCenter Server for the VxRail cluster you will build. This allows a remote, central vCenter Server to manage multiple
VxRail clusters from a single pane of glass.

If you want VxRail to create a new vCenter Server, you will need to specify a hostname and IP address for your new
vCenter Server and Platform Services Controller (PSC) virtual machines. (Rows 14-17)

If you want VxRail to join an existing vCenter Server, you will need to:

Know whether your external vCenter Server has an embedded or external Platform Services Controller. If the PSC is external,
enter the PSC FQDN (Row 18).
Know the external vCenter Server FQDN (Row 19), the Single Sign-On (SSO) tenant (Row 20), and the administrative username
and password (Row 21).
Create a VxRail management user and password (Row 22) for this VxRail cluster on the external vCenter Server. This user
must be created with no permissions and it must be unique for each VxRail cluster on this external vCenter Server.
Create or select an existing datacenter (Row 23) on the external vCenter Server.
Specify the name of the cluster (Row 24) that will be created by VxRail in the selected datacenter when the cluster is built. This
name must be unique and not used anywhere in the datacenter on the external vCenter Server.
Modify the default IP address for VxRail initial configuration. This will be done by your Dell EMC service representative.

Starting with Release 3.5, the top-level domain of the external vCenter Server and PSC must be publicly
known, such as .com, .net, .edu, .local, and many country-specific suffixes. Most of those listed in this
reference are supported: https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains

Internal vCenter Server (deployed when VxRail is built)

Network Configuration Table, Row 14: Enter an alphanumeric string for the new vCenter Server hostname. The domain
specified in Row 11 will be appended.

Network Configuration Table, Row 15: Enter the IP address for the new vCenter Server.

Network Configuration Table, Row 16: Enter an alphanumeric string for the new Platform Services Controller hostname.
The domain specified in Row 11 will be appended.

Network Configuration Table, Row 17: Enter the IP address for the new Platform Services Controller.

External vCenter Server

Network Configuration Table, Row 18: Enter the FQDN of the external Platform Services Controller (PSC) in the hostname
field. In the user interface, there is a checkbox for an external PSC. Leave this row blank if the PSC is embedded in the
external vCenter Server.

Network Configuration Table, Row 19: Enter the FQDN of the external vCenter Server in the hostname field.

Network Configuration Table, Row 20: Enter the Single Sign-On (SSO) tenant for the external vCenter Server.
(For example, vsphere.local)

Network Configuration Table, Row 21: Enter the full administrative username and password for the external vCenter Server.
(For example, administrator@vsphere.local)

Network Configuration Table, Row 22: Go to the external vCenter Server and create a new, unique user and password with
no permissions for this cluster. (For example, cluster1-manager@vsphere.local) Enter the full VxRail management username
and password that you created.

Network Configuration Table, Row 23: Go to the external vCenter Server and select or create a datacenter. Enter the name
of that datacenter.

Network Configuration Table, Row 24: Enter the name of the cluster that will be created by VxRail.

VxRail Manager and Networking


You must specify the hostname and IP address for the VxRail Manager virtual machine. In addition, you must specify
the subnet mask and gateway that VxRail Manager, vCenter Server, and the ESXi hosts all share.

We do not recommend using the default VxRail initial IP address (192.168.10.200/24) as your permanent
VxRail IP address (Row 26). If you later add more nodes to the VxRail cluster, or if you create
more clusters, the initial IP address would conflict with the existing cluster's IP address.

Network Configuration Table, Row 25: Enter an alphanumeric string for the VxRail Manager hostname.

Network Configuration Table, Row 26: Enter the IP address for VxRail Manager after it is configured. We recommend that
you do not use the default 192.168.10.200/24.

Network Configuration Table, Rows 27 and 28: Enter the subnet mask and gateway for all management IP addresses.

Passwords
You must specify one root password for all ESXi hosts in the cluster. You must also specify one password for the
VxRail Manager virtual machine. Unless you are using an external vCenter Server, the VxRail Manager and vCenter
Server virtual machines will have the same administrative password.

Passwords must contain between 8 and 20 characters with at least one lowercase letter, one uppercase letter, one
numeric character, and one special character. For more information about password requirements, see the vSphere
password documentation and vCenter Server password documentation.
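
If you want a quick pre-check of candidate passwords against the rules stated above, the Python sketch below applies them literally. The set of accepted special characters is an assumption here; the vSphere and vCenter Server password documentation remains authoritative.

```python
# Rough check of the stated policy: 8-20 characters, at least one lowercase,
# one uppercase, one digit, and one special character.
import re

def meets_basic_policy(password: str) -> bool:
    checks = [
        8 <= len(password) <= 20,
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),  # "special" = non-alphanumeric here
    ]
    return all(bool(c) for c in checks)

print(meets_basic_policy("Passw0rd!"))  # True: matches the documented pattern
print(meets_basic_policy("password"))   # False: no uppercase, digit, or special
```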

For ESXi hosts, the username is root; the pre-configuration password is Passw0rd! and the post-configuration
password is the one you set in VxRail initial configuration (Row 29).

For VxRail Manager and the internal vCenter Server, the username for both user interfaces is
administrator@vsphere.local and the console username is root. The pre-configuration password for VxRail is
Passw0rd! and the post-configuration password is the one you set in VxRail initial configuration (Row 30).

Network Configuration Table, Rows 29 and 30: Please check that you know the passwords for these rows, but for security
reasons, we suggest that you do not write them down.

Step 1D. vMotion and vSAN


vSphere vMotion and vSAN each require at least three IP addresses for the initial cluster.

Because VxRail supports up to 64 nodes in a cluster, you can allocate up to 64 vMotion IP addresses and 64 vSAN
IP addresses.

Network Configuration Table, Rows 31 and 32: Enter the starting and ending IP addresses for vSphere vMotion; a continuous
IP range is required, with a minimum of 4 IPs. Routing is not configured for vMotion.

Network Configuration Table, Row 33: Enter the subnet mask for vMotion.

Network Configuration Table, Rows 35 and 36: Enter the starting and ending IP addresses for vSAN; a continuous IP range
is required, with a minimum of 4 IPs. Routing is not configured for vSAN.

Network Configuration Table, Row 37: Enter the subnet mask for vSAN.

Step 1E. Solutions


VxRail is deployed with vRealize Log Insight. Alternatively, you may choose to use your own third-party syslog
server(s). If you choose to use vRealize Log Insight, it will always be available by pointing a browser to the configured
IP address and logging in with the username admin. (If you ssh to Log Insight instead of pointing your browser to it, the
username is root.) The password, in either case, is the same password that you specified for vCenter Server/VxRail (Row 30).

NOTE: The IP address for Log Insight must be on the same subnet as VxRail and vCenter Server.

Network Configuration Table, Rows 42 and 43, or Row 44: Enter the hostname and IP address for vRealize Log Insight, or
the hostname(s) of your existing third-party syslog server(s).
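
The same-subnet requirement in the note above can be sanity-checked during planning. The Python sketch below assumes the example management subnet and addresses used earlier in this guide; substitute your own values.

```python
# Confirm that VxRail Manager, vCenter Server, and Log Insight share one subnet.
import ipaddress

management_network = ipaddress.ip_network("192.168.10.0/24")   # assumed subnet
addresses = {
    "VxRail Manager (Row 26)": "192.168.10.100",
    "vCenter Server (Row 15)": "192.168.10.101",
    "vRealize Log Insight (Row 43)": "192.168.10.102",
}

for name, addr in addresses.items():
    in_subnet = ipaddress.ip_address(addr) in management_network
    print(f"{name:32} {addr:15} in {management_network}: {in_subnet}")
```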

Step 1F. Workstation/Laptop


To access VxRail for the first time, you must use the temporary VxRail initial IP address that was pre-configured,
typically 192.168.10.200/24. During VxRail initial configuration, you will change this IP address to the permanent
address you have chosen for your new VxRail cluster.

Configuration                    VxRail IP address/netmask   Workstation/laptop example (IP address / subnet mask / gateway)
Initial (temporary)              192.168.10.200/24           192.168.10.150 / 255.255.255.0 / 192.168.10.254
Post-configuration (permanent)   10.10.10.100/24             10.10.10.150 / 255.255.255.0 / 10.10.10.254

Your workstation/laptop will need to be able to reach both the VxRail initial IP address (Row 2) and your selected
permanent VxRail IP address (Row 26). VxRail initial configuration will remind you that you may need to reconfigure
your workstation/laptop network settings to access the new IP address.

It may be possible to give your workstation/laptop or your jump server two IP addresses, which allows for a smoother
experience. Depending on your workstation/laptop, this can be implemented in several ways (such as dual-homing or
multi-homing). Otherwise, change the IP address on your workstation/laptop when instructed to and then return to
VxRail Manager.
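
A hedged way to verify reachability from your workstation or jump server, before and after configuration, is a simple TCP connection test to port 443 on each address. The addresses below are the examples from the table earlier in this section.

```python
# Check that this workstation/jump server can reach the temporary and
# permanent VxRail Manager addresses over HTTPS (TCP 443).
import socket

def https_reachable(ip: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means the address answered on that port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for label, ip in [("initial (Row 2)", "192.168.10.200"),
                  ("permanent (Row 26)", "10.10.10.100")]:
    print(f"VxRail {label:20} {ip:15} reachable on 443: {https_reachable(ip)}")
```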

If you cannot reach the VxRail initial IP address, the Dell EMC support team can configure a custom IP address, subnet
mask, and gateway.

Furthermore, if a custom management VLAN ID other than VLAN 1 will be used for VxRail (VLAN 1 is the
default management VLAN ID on most switches), make sure that the workstation/laptop can also access
this management VLAN.

Network Configuration Table, Row 2: Enter the VxRail initial IP address. Enter 192.168.10.200/24 if you can reach this
address on your network; otherwise, enter your custom IP address, subnet mask, and gateway.

Step 2: Set Up Switch


In order for VxRail to function properly, you must configure the ports that VxRail will use on your switch before you
plug in VxRail nodes and turn them on.

Set up your switch by following these steps:

Step 2A. Understanding switch configuration


Step 2B. Configure VLANs on your switch(es)
Step 2C. Confirm your configuration

Step 2A. Understanding Switch Configuration


Be sure to follow your switch vendor's best practices for performance and availability. Ports on a switch operate in
one of the following modes:

Access mode: The port accepts only untagged packets and distributes them to all VLANs on that port. This is
typically the default mode for all ports.
Trunk mode: When this port receives a tagged packet, it passes the packet to the VLAN specified in the tag. To accept
untagged packets on a trunk port, you must first configure a single VLAN as a Native VLAN, that is, the one VLAN
designated to carry all untagged traffic.
Tagged-access mode: The port accepts only tagged packets.

Network Traffic
Each VxRail node will utilize either two 10GbE network ports or four 1GbE network ports (Note: the 10GbE ports on
the VxRail models in the E and S Series with single processors will auto-negotiate to 1GbE). Each port must be
connected to a switch that supports IPv4 multicast and IPv6 multicast. To ensure vSphere vMotion traffic does not
consume all available bandwidth on the port, VxRail limits vMotion traffic to 4Gbps.

10GbE Traffic Configuration

On the G Series and models prior to VxRail 4.0, VxRail traffic on the 10GbE NICs is separated as follows:

Traffic Type        Requirements      1st 10GbE NIC   2nd 10GbE NIC   NIOC Shares
Management          IPv6 multicast    Active          Standby         20
vSphere vMotion     -                 Active          Standby         50
vSAN                IPv4 multicast    Standby         Active          100
Virtual Machines    -                 Active          Standby         30

VxRail traffic on E, P, S and V Series 10GbE NICs is separated as follows:

Traffic Type        Requirements      UPLINK1 (10Gb)   UPLINK2 (10Gb)   UPLINK3     UPLINK4     NIOC Shares
                                      VMNIC0           VMNIC1           No VMNIC    No VMNIC
Management          IPv6 multicast    Active           Standby          Unused      Unused      20
vSphere vMotion     -                 Active           Standby          Unused      Unused      50
vSAN                IPv4 multicast    Standby          Active           Unused      Unused      100
Virtual Machines    -                 Active           Standby          Unused      Unused      30
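
The NIOC shares in these tables are relative weights that only matter when an uplink is congested. As a rough illustration (an assumed worst case in which one NIC has failed and all four traffic types share a single 10GbE uplink), the Python sketch below converts shares into approximate minimum bandwidth guarantees; remember that vMotion traffic is additionally capped at 4Gbps.

```python
# Back-of-the-envelope only: NIOC shares set *relative* bandwidth under
# congestion. Assumes all four traffic types contend on one 10GbE uplink.
shares = {"Management": 20, "vSphere vMotion": 50, "vSAN": 100, "Virtual Machines": 30}
link_gbps = 10
total = sum(shares.values())

for traffic, share in shares.items():
    guaranteed = link_gbps * share / total
    print(f"{traffic:17} share {share:3}  ~{guaranteed:.1f} Gbps minimum under full contention")
# Note: vMotion is also limited to 4Gbps by VxRail regardless of its share.
```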

1GbE Traffic Configuration

On models prior to VxRail 4.0, VxRail traffic on the 1GbE NICs is separated as follows:

Traffic Type        Requirements      UPLINK1 (1Gb)   UPLINK2 (1Gb)   UPLINK3 (1Gb)   UPLINK4 (1Gb)   NIOC Shares
                                      VMNIC0          VMNIC1          VMNIC2          VMNIC3
Management          IPv6 multicast    Standby         Active          Unused          Unused          40
vSphere vMotion     -                 Unused          Unused          Standby         Active          50
vSAN                IPv4 multicast    Unused          Unused          Active          Standby         100
Virtual Machines    -                 Active          Standby         Unused          Unused          60

VxRail traffic on G Series 1GbE NICs is separated as follows:

Traffic Type        Requirements      UPLINK1 (1Gb)   UPLINK2 (1Gb)   UPLINK3 (1Gb)   UPLINK4 (1Gb)   NIOC Shares
                                      VMNIC0          VMNIC1          VMNIC2          VMNIC3
Management          IPv6 multicast    Active          Standby         Unused          Unused          40
vSphere vMotion     -                 Unused          Unused          Standby         Active          50
vSAN                IPv4 multicast    Unused          Unused          Active          Standby         100
Virtual Machines    -                 Standby         Active          Unused          Unused          60

VxRail traffic on E and S Series 1GbE NICs is separated as follows:

Traffic Type        UPLINK1 (1Gb)   UPLINK2 (1Gb)   UPLINK3 (1Gb)   UPLINK4 (1Gb)   NIOC Shares
                    VMNIC2          VMNIC3          VMNIC0          VMNIC1
Management          Standby         Active          Unused          Unused          40
vMotion             Unused          Unused          Standby         Active          50
vSAN                Unused          Unused          Active          Standby         100
Virtual Machines    Active          Standby         Unused          Unused          60

Multicast Traffic
IPv4 multicast support is required for the vSAN VLAN. IPv6 multicast is required for the VxRail
management VLAN. The network switch(es) that connect to VxRail must allow for pass-through of
multicast traffic on these two VLANs. Multicast is not required on your entire network, just on the ports
connected to VxRail.

Why multicast? VxRail Appliances have no backplane, so communication between its nodes is facilitated via the
network switch. This communication between the nodes uses VMware Loudmouth auto-discovery capabilities, based
on the RFC-recognized "Zero Network Configuration" protocol. New VxRail nodes advertise themselves on a network
using the VMware Loudmouth service, which uses IPv6 multicast. This IPv6 multicast communication is strictly limited
to the management VLAN that the nodes use for communication.

VxRail creates very little IPv6 multicast traffic for autodiscovery and management. You can optionally limit this traffic
further on your switch by enabling MLD Snooping and MLD Querier.

There are two options to handle vSAN IPv4 multicast traffic. Either limit multicast traffic by enabling both IGMP
Snooping and IGMP Querier or disable both of these features. We recommend enabling both IGMP Snooping and
IGMP Querier, if your switch supports them.

IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces are
connected to hosts or other devices interested in receiving this traffic. Using the interface information, IGMP
Snooping can reduce bandwidth consumption in a multi-access LAN environment to avoid flooding an entire VLAN.
IGMP Snooping tracks ports that are attached to multicast-capable routers to help manage IGMP membership report
forwarding. It also responds to topology change notifications. Disabling IGMP Snooping may lead to additional
multicast traffic on your network.

IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP membership reports
from active members, and allows updates to group membership tables. By default, most switches enable IGMP
Snooping, but disable IGMP Querier.
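
If you want to verify that IPv4 multicast actually passes between two hosts placed on the vSAN VLAN before deploying, a simple probe such as the Python sketch below can help. The group address and port are arbitrary test values and are not the addresses vSAN itself uses; run a matching sender on a second host in the same VLAN.

```python
# Join an IPv4 multicast group and print any datagrams that arrive.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5007   # assumed test group/port, not used by vSAN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(f"Listening for multicast on {GROUP}:{PORT} (Ctrl+C to stop)")
while True:
    data, sender = sock.recvfrom(1024)
    print(f"received {data!r} from {sender}")
```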

Inter-switch Communication
In a multi-switch environment, configure the ports used for inter-switch communication to carry IPv6 multicast traffic
for the VxRail management VLAN. Likewise, carry IPv4 multicast traffic between switches for the vSAN VLAN.
Consult your switch manufacturer's documentation for how to do this.

Disable Link Aggregation


Do not use link aggregation, including protocols such as LACP and EtherChannel, on any ports directly connected to
VxRail Appliances. VxRail Appliances use active/standby configuration (NIC teaming) for network redundancy, as
discussed in the section on Network Traffic.

vSphere Security Recommendations


Security recommendations for vSphere are found here:

http://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-FA661AE0-C0B5-4522-951D-A3790DBE70B4.html

In particular, ensure that physical switch ports are configured with Portfast if spanning tree is enabled. Because
VMware virtual switches do not support STP, physical switch ports connected to an ESXi host must have Portfast
configured if spanning tree is enabled to avoid loops within the physical switch network. If Portfast is not set, potential
performance and connectivity issues might arise.

Step 2B. Configure VLANs on your switch(es)
Now that you understand the switch requirements, it is time to configure your switch(es).

The VxRail network can be configured with or without VLANs. For performance and scalability, it is highly
recommended to configure VxRail with VLANs. As listed in the VxRail Setup Checklist, you will be configuring the
following VLANs:

Management VLAN (default is untagged/native): make sure that IPv6 multicast is configured/enabled on the management
VLAN (regardless of whether tagged or native).
vSAN VLAN: make sure that IPv4 multicast is configured/enabled on the vSAN VLAN (enabling IGMP snooping and querier is
highly recommended).
vSphere vMotion VLAN
VM Networks VLANs

Figure 5. VxRail VLAN configuration, G Series shown.

Using the VxRail Network Configuration Table, configure each switch port that will be connected to a VxRail node:

Configure the Management VLAN (Row 1) on the switch ports. If you entered "Native VLAN", then set the ports on
the switch to accept untagged traffic and tag it to the custom management VLAN ID. Untagged management traffic is
the default management VLAN setting on VxRail. For example, on the Dell EMC Connectrix VDX-6740 switch, using
switchport trunk native VLAN <vid> configures VxRail management traffic to travel on the customer's management
VLAN.

Regardless of whether you are using an untagged Native VLAN or a tagged VLAN, you must set the management
VLAN to allow IPv6 multicast traffic to pass through. Depending on the type of switch you have, you may need to turn
on IPv6 and multicast directly on the port or on the VLAN. Be sure to review the previous section, Step 2A.
Understanding Switch Configuration, and consult the switch manufacturer for further instructions on how to
configure these settings.

Configure a vSphere vMotion VLAN (Row 34) on the switch ports.

Configure a vSAN VLAN (Row 38) on the switch ports, set to allow IPv4 multicast traffic to pass through.

Configure the VLANs for your VM Networks (Rows 39-41) on the switch ports.

Step 2C. Confirm Your Configuration


Some network configuration errors cannot be recovered from, and you will need VxRail support to reset to factory
defaults. When VxRail is reset to factory defaults, all data is lost. Please confirm your switch settings in this step.

Review your switch vendor's instructions for your switch:

a. Confirm that IPv4 multicast and IPv6 multicast are enabled for the VLANs described in this document.
b. If you have two or more switches, confirm that IPv4 multicast and IPv6 multicast traffic is transported between them.
c. Remember that management traffic will be untagged on the native VLAN on your switch, unless all ESXi hosts have been
customized for a specific management VLAN.
Network design and accessibility:

a. Confirm that you can ping or point to the VxRail initial IP address (Row 2).
b. Confirm that your DNS server(s) are reachable unless you are in an isolated environment (Row 5). The DNS server must
be reachable from the VxRail, vCenter Server, and ESXi network addresses. Then update your DNS server with all VxRail
hostnames and IP addresses.
c. Confirm that your management gateway IP address is accessible (Row 28). It is needed for vSphere High Availability
(HA) to work correctly. You can use a corporate gateway on your VxRail network segment, or you may be able to
configure your L3 switch as the gateway. When vSphere HA is not working, you will see a network isolation address
error. VxRail will continue to function, but it will not be protected by the vSphere HA feature.
http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.avail.doc/GUID-5432CA24-14F1-44E3-87FB-61D937831CF6.html
d. If you have configured NTP servers, proxy servers, or a third-party syslog server, confirm that you are able to reach them
from all of your configured VxRail IP addresses.

After Planning and Switch Setup


If you have successfully followed all of the previous steps, your network setup is complete and you are ready to
connect and initialize your VxRail Appliance. These steps are done by Dell EMC service representatives. They are
included here to help you understand the complete process.

Step 1. Rack and cable the VxRail Appliance. After the nodes are cabled, power on all three or four initial nodes
in your VxRail cluster.
Do not turn on any other VxRail nodes until you have completed the full configuration of the first three or
four nodes.
Step 2. Connect a workstation/laptop to access the VxRail initial IP address on your selected management VLAN.
It must be either plugged into the switch or able to logically reach the VxRail management VLAN from
elsewhere on your network.
Step 3. Use the VxRail Pre-Installation Site Checklist provided by the Dell EMC service representative to
automatically generate the JSON-formatted configuration file from the VxRail Network Configuration
Table (an illustrative sketch of such a file appears after these steps).
Step 4. Browse to the VxRail initial IP address (Row 2); for example, https://192.168.10.200.
Step 5. Click Get Started. Then if you agree, accept the VxRail End-User License Agreement (EULA).
Step 6. Click Configuration File to upload a JSON-formatted configuration file that you have created in Step 3.
Step 7. Click the Review First or Validate button. VxRail verifies the configuration data, checking for conflicts.

Step 8. After validation is successful, click the Build VxRail button.
Step 9. The new IP address for VxRail will be displayed.
Click Start Configuration. Ignore any browser messages about security (for example, by clicking
Advanced and Proceed.)

NOTE: You may need to manually change the IP settings on your workstation/laptop to be on the same
subnet as the new VxRail IP address (Row 26).
NOTE: If your workstation/laptop cannot connect to the new IP address that you configured, you will get
a message to fix your network and try again. If you are unable to connect to the new IP address
after 20 minutes, VxRail will revert to its un-configured state and you will need to re-enter your
configuration at the initial IP address (Row 2).
NOTE: After the build process starts, if you close your browser, you will need to browse to the new IP
address (Row 26).

Step 10. Progress is shown as VxRail is built. VxRail implements services, creates the new ESXi hosts, sets up
vCenter Server, vMotion, and vSAN.
When you see the Hooray! page, VxRail is built. Click the Manage VxRail button to continue to VxRail
management. You should also bookmark this IP address in your browser for future use.

Step 11. Configure your corporate DNS server for all VxRail hostnames and IP addresses unless you are in an
isolated environment.
Step 12. Connect to VxRail Manager using either the VxRail Manager IP address (Row 26) or the fully-qualified
domain name (FQDN) (Row 25) that you configured on your DNS server (e.g.
https://vxrail.yourcompany.com).
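
The JSON configuration file referenced in Step 3 is generated by Dell EMC tooling, and its exact schema is not documented here. Purely as an illustration of the kind of data it carries (drawn from the VxRail Network Configuration Table), the Python sketch below emits a made-up structure; every field name in it is an assumption.

```python
# Illustrative only: the real schema comes from the Dell EMC Pre-Installation
# Site Checklist tooling. This just shows the categories of data involved.
import json

config = {
    "global": {"timezone": "UTC", "ntp_servers": ["ntp1.localdomain.local"],
               "dns_servers": ["192.168.10.254"]},
    "management": {"vlan_id": 0, "subnet_mask": "255.255.255.0",
                   "gateway": "192.168.10.254"},
    "esxi_hosts": {"prefix": "esxi-host", "separator": "-", "iterator": "Num 0X",
                   "domain": "localdomain.local",
                   "ip_pool": ["192.168.10.1", "192.168.10.4"]},
    "vcenter": {"hostname": "vcserver", "ip": "192.168.10.101"},
    "vxrail_manager": {"hostname": "vxrail", "ip": "192.168.10.100"},
    "vmotion": {"vlan_id": 100, "ip_pool": ["192.168.10.111", "192.168.10.114"]},
    "vsan": {"vlan_id": 200, "ip_pool": ["192.168.10.121", "192.168.10.124"]},
    "vm_networks": [{"name": "Production", "vlan_id": 120}],
}

print(json.dumps(config, indent=2))  # write this out and review it before upload
```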

Unassigned Physical Ports


For VxRail models based on Dell PowerEdge servers, which include the E, S, P and V Series, VxRail Manager will
not manage the optional 2 x 10GbE ports on the PCI-e NIC or the 2 x 1GbE NDC ports if customers choose 10GbE as their
primary networking interface. Customers can configure the additional ports in vCenter Server for non-VxRail system traffic,
such as VM networks, iSCSI, or NFS.

The supported operations include:

Create a new vSphere Standard Switch (VSS), and connect unused ports to the VSS.

Connect unused ports to the default vSphere Distributed Switch.

Create a new vSphere Distributed Switch (VDS), add VxRail nodes to the new VDS, and connect their
unused network ports to the VDS.

Create new VMkernel adapters and enable services such as IP storage and vSphere Replication.

Create new VM Networks.

Customers need to follow VMware's official instructions and procedures for the above operations.

Note: Do NOT move VxRail system traffic to these ports. VxRail system traffic includes the Management,
vSAN, vCenter Server and vMotion Networks.

Unsupported Operations:

Renaming the default VDS.

Renaming the default port group.

Migrating VxRail system traffic to other port groups.

Network Segregation
Some customers may want to separate their VM networks from the vSphere management network. They can leverage those
unused ports to enforce network segregation.

Please be sure to work with your Dell EMC implementation and support teams to ensure these additional ports are
cabled and set up in the appropriate order as prescribed by Dell EMC.

VxRail Network Configuration Table
The Dell EMC service representative will use the VxRail Pre-Installation Site Checklist tool. This table lists the information
needed for the tool.

Row | Category | Description
1 | VxRail management VLAN ID | Set tagging for your management VLAN on your switch or on ESXi before you configure VxRail. Default is untagged traffic on the Native VLAN.
2 | VxRail initial IP address | If you cannot reach the default (192.168.10.200/24), set an alternate IP address.
3 | System: Global settings | Time zone
4 | System: Global settings | NTP server(s)
5 | System: Global settings | DNS server(s)
6 | System: Proxy settings | IP address and port
7 | System: Proxy settings | Username and password
8 | Management: ESXi hostnames and IP addresses | ESXi hostname prefix
9 | Management: ESXi hostnames and IP addresses | Separator
10 | Management: ESXi hostnames and IP addresses | Iterator
11 | Management: ESXi hostnames and IP addresses | Domain
12 | Management: ESXi hostnames and IP addresses | ESXi starting address for IP pool
13 | Management: ESXi hostnames and IP addresses | ESXi ending address for IP pool
14 | Management: vCenter Server (internal); leave blank if using an external VC | vCenter Server hostname
15 | Management: vCenter Server (internal) | vCenter Server IP address
16 | Management: vCenter Server (internal) | Platform Services Controller hostname
17 | Management: vCenter Server (internal) | Platform Services Controller IP address
18 | Management: vCenter Server (external); leave blank if using an internal VC | External Platform Services Controller (PSC) hostname (FQDN); leave blank if the PSC is internal
19 | Management: vCenter Server (external) | External vCenter Server hostname (FQDN)
20 | Management: vCenter Server (external) | External vCenter Server SSO tenant
21 | Management: vCenter Server (external) | Existing administrative username and password
22 | Management: vCenter Server (external) | New VxRail management username and password
23 | Management: vCenter Server (external) | Existing datacenter name
24 | Management: vCenter Server (external) | New cluster name
25 | Management: VxRail Manager | VxRail hostname
26 | Management: VxRail Manager | VxRail IP address
27 | Management: Networking | Subnet mask
28 | Management: Networking | Gateway
29 | Management: Passwords | ESXi root
30 | Management: Passwords | VxRail Manager and internal vCenter Server administrator@vsphere.local
31 | vMotion | Starting address for IP pool
32 | vMotion | Ending address for IP pool
33 | vMotion | Subnet mask
34 | vMotion | VLAN ID
35 | vSAN | Starting address for IP pool
36 | vSAN | Ending address for IP pool
37 | vSAN | Subnet mask
38 | vSAN | VLAN ID
39-41 | VM Networks | VM Network name and VLAN ID (one pair per VM Network; unlimited number)
42 | Solutions: Logging | vRealize Log Insight hostname
43 | Solutions: Logging | vRealize Log Insight IP address
44 | Solutions: Logging | Syslog server (instead of Log Insight)

VxRail Setup Checklist

Physical Network
- VxRail cluster: Decide if you want to plan for additional nodes beyond the initial three- or four-node cluster.
- Network switch: Two 10GbE ports (SFP+ or RJ-45) or four 1GbE ports for each VxRail node. VxRail initially has three or four
  nodes. You can have up to 64 nodes in a VxRail cluster. Check cable requirements.
- Topology: Decide if you will have a single or multiple switch setup for redundancy.
- Workstation/laptop: Any operating system with a browser to access the VxRail user interface. The latest versions of Firefox,
  Chrome, and Internet Explorer 10+ are all supported.
- Out-of-band management (optional): One available port that supports 100Mbps for each VxRail node.

Logical Network
Reserve VLANs
- One management VLAN with IPv6 multicast for traffic from VxRail, vCenter Server, and ESXi (default is untagged/native).
- One VLAN with IPv4 multicast for vSAN traffic.
- One VLAN for vSphere vMotion.
- One or more VLANs for your VM Network(s).
System
- Time zone.
- Hostname or IP address of the NTP server(s) on your network (recommended).
- IP address of the DNS server(s) on your network (required, except in isolated environments).
- Optional: IP address, port, username, and password of your proxy server.
Management
- Decide on your ESXi host naming scheme.
- Reserve three or more contiguous IP addresses for ESXi hosts.
- Decide if you will use a vCenter Server that is external or internal to your new VxRail cluster.
- Internal vCenter Server: Decide on hostnames for vCenter Server and the PSC, and reserve two IP addresses.
- External vCenter Server: Determine the PSC, hostname, administrative user, and datacenter. Create a new
  VxRail management user. Decide on a VxRail cluster name.
- Decide on a hostname and reserve one IP address for VxRail Manager.
- Determine the IP address of the default gateway and the subnet mask.
- Select a single root password for all ESXi hosts in the VxRail cluster.
- Select a single password for VxRail and vCenter Server.
vMotion and vSAN
- Reserve three or more contiguous IP addresses and a subnet mask for vSphere vMotion.
- Reserve three or more contiguous IP addresses and a subnet mask for vSAN.
Solutions
- To use vRealize Log Insight: Reserve one IP address and decide on the hostname.
- To use an existing syslog server: Get the hostname or IP address of your third-party syslog server.
Workstation
- Configure your workstation/laptop to reach the VxRail initial IP address.
- Make sure you also know how to configure it to reach the VxRail Manager IP address after configuration.
Set up Switch
- Configure your selected management VLAN (default is untagged/native). Confirm that IPv6 multicast is
  configured/enabled on the management VLAN (regardless of whether tagged or native).
- Configure your selected VLANs for vSAN, vSphere vMotion, and VM Networks.
- In multi-switch environments, configure the management and vSAN VLANs to carry the multicast traffic between switches.
- Confirm configuration and network access.

Appendix A: NSX Support on VxRail
VxRail supports VMware NSX software-defined networking (SDN) through vCenter Server. vCenter Server offers a
fully integrated option for SDN and network-layer abstraction with NSX. The NSX network-virtualization platform
delivers for networking what VMware delivers for compute and storage. In much the same way that server
virtualization allows operators to programmatically create, snapshot, delete, and restore software-based virtual
machines (VMs) on demand, NSX enables virtual networks to be created, saved, deleted, and restored on demand
without requiring reconfiguration of the physical network. The result fundamentally transforms the datacenter network
operational model, reduces network-provisioning time from days or weeks to minutes, and dramatically simplifies
network operations. NSX is a non-disruptive solution that is deployed on any IP network, including existing datacenter
network designs or next-generation fabric architectures from any networking vendor.

With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2
to Layer 7 networking services (e.g., switching, routing, access control, firewalling, QoS, and load balancing) in
software. Just as VMs are independent of the underlying x86 hardware platform and allow IT to treat physical hosts
as a pool of compute capacity, virtual networks are independent of the underlying IP network hardware and allow IT
to treat the physical network as a pool of transport capacity that can be consumed and repurposed on demand.

NSX coordinates ESXi's vSwitches and the network services pushed to them for connected VMs to effectively deliver
a platform (a "network hypervisor") for the creation of virtual networks. Similar to the way that a virtual machine is a
software container that presents logical compute services to an application, a virtual network is a software container
that presents logical network services (logical switches, logical routers, logical firewalls, logical load balancers,
logical VPNs, and more) to connected workloads. These network and security services are delivered in software and
require only IP packet forwarding from the underlying physical network.

To connected workloads, a virtual network looks and operates like a traditional physical network. Workloads see the
same Layer 2, Layer 3, and Layers 4-7 network services that they would in a traditional physical configuration. It is just
that these network services are now logical instances of distributed software modules running in the hypervisor on the
local host and applied at the vSwitch virtual interface.

The following NSX components are illustrated in Figure 6:

NSX vSwitch operates in ESXi server hypervisors to form a software abstraction layer between servers and the physical
network.

NSX Controller is an advanced, distributed state management system that controls virtual networks and overlay transport
tunnels. It is the central control point for all logical switches within a network and maintains information about all virtual
machines, hosts, logical switches, and VXLANs.

NSX Edge provides network-edge security and gateway services to isolate a virtualized network. You can install NSX Edge
either as a logical (distributed) router or as a services gateway.

NSX Manager is the centralized network management component of NSX, installed as a virtual appliance on an ESXi host.

Figure 6. NSX component information flow: NSX Manager, NSX Controller, NSX Edge, NSX
vSwitch
One NSX Manager maps to a single vCenter Server and to multiple NSX Edge, vShield Endpoint, and NSX Data
Security instances. Before you install NSX in your vCenter Server environment, consider your network configuration
and resources using the chart below.

NSX Resource Requirements:

Component                  Memory    Disk Space               vCPU
NSX Manager                12GB      60GB                     4
NSX Edge: Compact          512MB     512MB                    1
NSX Edge: Large            1GB       512MB                    2
NSX Edge: Extra Large      8GB       4.5GB (with 4GB swap)    6
NSX Edge: Quad Large       1GB       512MB                    4
vShield Endpoint           1GB       4GB                      2
NSX Data Security          512MB     6GB per ESXi host        1

In a VxRail cluster, the key benefits of NSX are consistent, simplified network management and operations, plus the
ability to leverage connected workload mobility and placement. With NSX, connected workloads can freely move
across subnets and availability zones. Their placement is not dependent on the physical topology and availability of
physical network services in a given location. Everything a VM needs from a networking perspective is provided by
NSX, wherever it resides physically. It is no longer necessary to over-provision server capacity within each
application/network pod. Instead, organizations can take advantage of available resources wherever they're located,
thereby allowing greater optimization and consolidation of resources. VxRail easily inserts into existing NSX
environments and provides NSX awareness so network administrators can leverage simplified network administration.
See the VMware NSX Design Guide for NSX best practices and design considerations.

For additional information related to NSX, refer to the following materials:

VMware NSX Network Virtualization Platform Technical White Paper:
http://www.vmware.com/files/pdf/products/nsx/VMware-NSX-Network-Virtualization-Platform-WP.pdf

Reference Design Guide: VMware NSX for vSphere:
https://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf

Hyperconverged Transformation White Paper:
http://www.vce.com/asset/documents/esg-whitepaper-hyperconverged-transformation-sddc.pdf
