
Field Installation Guide

Foundation 3.7
27-Feb-2017
Notice

Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface Target Username Password

Nutanix web console Nutanix Controller VM admin admin

vSphere Web Client ESXi host root nutanix/4u

vSphere client ESXi host root nutanix/4u

SSH client or console ESXi host root nutanix/4u

SSH client or console AHV host root nutanix/4u

SSH client or console Hyper-V host Administrator nutanix/4u

SSH client Nutanix Controller VM nutanix nutanix/4u

SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin

Version
Last modified: February 27, 2017 (2017-02-27 10:48:08 GMT-8)



Contents

1: Field Installation Overview..................................................................... 6

2: Creating a Cluster................................................................................... 7
Discovering Nodes and Launching Foundation...................................................................................8
Discovering Nodes That Are In the Same Broadcast Domain................................................. 8
Discovering Nodes in a VLAN-Segmented Network................................................................ 9
Launching Foundation.............................................................................................................10
Updating Foundation..........................................................................................................................11
Selecting the Nodes to Image........................................................................................................... 11
Defining the Cluster........................................................................................................................... 13
Setting Up the Nodes........................................................................................................................ 15
Selecting the Images......................................................................................................................... 17
Creating the Cluster...........................................................................................................................20
Configuring a New Cluster................................................................................................................ 23
Creating a Cluster (Single-Node Replication Target)........................................................................ 24

3: Imaging Bare Metal Nodes................................................................... 29


Preparing Installation Environment....................................................................................................30
Preparing a Workstation......................................................................................................... 30
Setting Up the Network...........................................................................................................35
Updating Foundation............................................................................................................... 37
Configuring Node Parameters........................................................................................................... 37
Configuring Global Parameters......................................................................................................... 42
Configuring Image Parameters..........................................................................................................44
Configuring Cluster Parameters........................................................................................................ 46
Monitoring Progress........................................................................................................................... 49
Cleaning Up After Installation............................................................................................................52

4: Downloading Installation Files.............................................................53


Foundation Files.................................................................................................................................54

5: Hypervisor ISO Images......................................................................... 56

6: Network Requirements......................................................................... 58

7: Controller VM Memory Configurations............................................... 61


CVM Memory and vCPU Configurations (G5/Broadwell)..................................................................61
Platform Workload Translation (G5/Broadwell).......................................................................62
CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge).................................................... 62
CVM Memory Configurations for Features........................................................................................63

8: Hyper-V Installation Requirements......................................................65

9: Setting IPMI Static IP Address.............................................................69

10: Troubleshooting...................................................................................71
Fixing IPMI Configuration Problems.................................................................................................. 71
Fixing Imaging Problems................................................................................................................... 72
Frequently Asked Questions (FAQ)...................................................................................................73

11: Appendix: Imaging a Node (Phoenix)................................................80


Installation ISO Images......................................................................................................................80
Phoenix ISO Image.................................................................................................................81
AHV ISO Image...................................................................................................................... 82
Installing a Hypervisor By Using Hypervisor Installation Media........................................................ 83
Nutanix NX Series Platforms.................................................................................................. 83
Lenovo Converged HX Series Platforms................................................................................90
Cisco UCS Platforms............................................................................................................ 94
Installing the Controller VM and Hypervisor by Using Phoenix.......................................................100

1
Field Installation Overview
Nutanix installs AHV and the Nutanix Controller VM at the factory before shipping a node to a customer.
To use a different hypervisor (ESXi or Hyper-V) on factory nodes or to use any hypervisor on bare metal
nodes, the nodes must be imaged in the field. This guide provides step-by-step instructions on how to
use the Foundation tool to do a field installation, which consists of installing a hypervisor and the Nutanix
Controller VM on each node and then creating a cluster. You can also use Foundation to create just a
cluster from nodes that are already imaged or to image nodes without creating a cluster.
Note: Use Foundation to image factory-prepared (or bare metal) nodes and create a new cluster
from those nodes. Use the Prism web console (in clusters running AOS 4.5 or later) to image
factory-prepared nodes and then add them to an existing cluster. See the "Expanding a Cluster"
section in the Web Console Guide for this procedure.
A field installation can be performed for either factory-prepared nodes or bare metal nodes.
See Creating a Cluster on page 7 to image factory-prepared nodes and create a cluster from those
nodes (or just create a cluster for nodes that are already imaged).
See Imaging Bare Metal Nodes on page 29 to image bare metal nodes and optionally configure
them into one or more clusters.

Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix
hardware models with some restrictions. Click here (or log into the Nutanix support portal and
select Documentation > Compatibility Matrix from the main menu) for a list of supported
configurations. To check a particular configuration, go to the Filter By fields and select the
desired model, AOS version, and hypervisor in the first three fields and then set the last field to
Foundation. In addition, check the notes at the bottom of the table.



2
Creating a Cluster
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on discovered
nodes and how to configure the nodes into a cluster. "Discovered nodes" are factory prepared nodes on
the same subnet that are not part of a cluster currently. This procedure runs the Foundation tool through
the Nutanix Controller VM (Controller VM-based Foundation).
Before you begin:
Make sure that the nodes that you want to image are factory-prepared nodes that have not been
configured in any way and are not part of a cluster.
Physically install the Nutanix nodes at your site. See the Physical Installation Guide for your model type
for installation instructions.
Your workstation must be connected to the network on the same subnet as the nodes you want to
image. Foundation does not require an IPMI connection or any special network port configuration
to image discovered nodes. See Network Requirements for general information about the network
topology and port access required for a cluster.

Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP
address), and node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed
for installation.

Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign
static IP addresses to Controller VMs.

Note: Nutanix uses an internal virtual switch to manage network communications between the
Controller VM and the hypervisor host. This switch is associated with a private network on the
default VLAN and uses the 192.168.5.0/24 address space. For the hypervisor, IPMI interface,
and other devices on the network (including the guest VMs that you create on the cluster), do
not use a subnet that overlaps with the 192.168.5.0/24 subnet on the default VLAN. If you want
to use an overlapping subnet for such devices, make sure that you use a different VLAN.
Download the following files, as described in Downloading Installation Files on page 53:
AOS installer named nutanix_installer_package-version#.tar.gz from the AOS (NOS) download
page.
Hypervisor ISO if you want to install Hyper-V or ESXi. You must provide the supported Hyper-V
or ESXi ISO (see Hypervisor ISO Images on page 56); Hyper-V and ESXi ISOs are not available
on the support portal.
It is not necessary to download an AHV ISO because both Foundation and the AOS bundle include
an AHV installation bundle. However, you have the option to download an AHV upgrade installation
bundle if you want to install a non-default version of AHV.
Make sure that IPv6 is enabled on the network to which the nodes are connected and that IPv6
broadcast and multicast are supported.
If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the
nodes. If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the
nodes contain both regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs
at any time during the lifetime of the cluster.



For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in
the Prism Web Console Guide.

Note: This method creates a single cluster from discovered nodes. This method is limited to
factory prepared nodes running AOS 4.5 or later. If you want to image discovered nodes without
creating a cluster, image factory prepared nodes running an earlier AOS (NOS) version, or image
bare metal nodes, see Imaging Bare Metal Nodes on page 29.
To image the nodes and create a cluster, do the following:

1. Run discovery and launch Foundation (see Discovering Nodes and Launching Foundation on
page 8).

2. (Optional) Upgrade Foundation to the latest version (see Updating Foundation on page 11).

3. Select the nodes to image and specify a redundancy factor (see Selecting the Nodes to Image on
page 11).

4. Define cluster parameters and, optionally, enable health tests, which run after the cluster is created (see
Defining the Cluster on page 13).

5. Configure the discovered nodes (see Setting Up the Nodes on page 15).

6. Select the AOS and hypervisor images to use (see Selecting the Images on page 17).

7. Start the process and monitor progress as the nodes are imaged and the cluster is created (see
Creating the Cluster on page 20).

8. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster on
page 23).

Discovering Nodes and Launching Foundation


The procedure to follow for node discovery depends on whether all the devices involved (the Nutanix
nodes and the workstation that you use for imaging) are in the same broadcast domain or in a VLAN-
segmented network.
If VLANs are not configured in the network, or if all the nodes and your workstation are in the same
broadcast domain, you can use the Foundation Java applet to discover the nodes. The applet also enables
you to start Controller VM-based Foundation on one of the discovered nodes.
If VLANs are configured, you can do the following:
1. Use the network configuration tool to assign one of the nodes to the production VLAN and to configure
IP addresses.
2. Run Controller VM-based Foundation on that node to discover nodes that are on other VLANs.

Discovering Nodes That Are In the Same Broadcast Domain


To discover nodes in a network that does not have VLANs, do one of the following:

If the workstation that you want to use for imaging has internet access, on that workstation, do the
following:

a. Open a browser and go to the Nutanix support portal (https://my.nutanix.com).



b. Browse to Downloads > Foundation, and then click FoundationApplet-online.zip.

c. Extract the contents of the downloaded bundle, and then double-click nutanix_foundation_applet.jnlp.

If the workstation that you want to use for imaging does not have internet access, on a workstation that
has internet access, do the following:

a. Open a browser and go to the Nutanix support portal (https://my.nutanix.com).

b. Browse to Downloads > Foundation, and then click FoundationApplet-offline.zip.

c. Copy the downloaded bundle to the workstation that you want to use for imaging.

d. Extract the contents of the downloaded bundle, and then double-click nutanix_foundation_applet.jnlp.

The discovery process begins and a window appears with a list of discovered nodes.
Note:
A security warning message may appear indicating that the application is from an unknown source. Click the
accept and run buttons to run the application.

Figure: Foundation Launcher Window

Discovering Nodes in a VLAN-Segmented Network


Nutanix nodes running AHV version 20160215 (or later) include a network configuration tool that you
can use to assign a VLAN tag to the public interface on the Controller VM and to one or more physical
interfaces on the host. You can also use the tool to assign an IP address to the Controller VM and
hypervisor. After network configuration is complete, you can use the Foundation service running on the
Controller VM of that host to discover and image other Nutanix nodes. Foundation uses a VLAN sniffer to
detect free Nutanix nodes, including nodes in other VLANs. The VLAN sniffer uses the Neighbor Discovery
protocol for IP version 6 and therefore requires that the physical switch to which the nodes are connected
supports IPv6 broadcast and multicast. During the imaging process, Controller VM-based Foundation



also assigns the specified VLAN tag (assumed to be that of the production VLAN) to the corresponding
interfaces on the selected nodes, eliminating the need to perform additional VLAN assignment tasks for
those nodes.
Before you begin: Connect the Nutanix nodes to a switch.
Note: Use the network configuration tool only on factory-prepared nodes that are not part of
a cluster. Use of the tool on a node that is part of a cluster makes the node inaccessible to the
other nodes in the cluster, and the only way to resolve the issue is to reconfigure the node to the
previous IP addresses by using the network configuration tool again.
To configure the network for a node, do the following:

1. Connect a console to one of the nodes and log on to the Acropolis host with root credentials.

2. Change your working directory to /root/nutanix-network-crashcart/, and then start the network
configuration utility.
root@ahv# ./network_configuration

3. In the network configuration utility, do the following:

a. Review the network card details to ascertain interface properties and identify connected interfaces.

b. Use the arrow keys to shift focus to the interface that you want to configure, and then use the
Spacebar key to select the interface.
Repeat this step for each interface that you want to configure.

c. Use the arrow keys to navigate through the user interface and specify values for the following
parameters:
VLAN Tag. VLAN tag to use for the selected interfaces.
Netmask. Network mask of the subnet to which you want to assign the interfaces.
Gateway. Default gateway for the subnet.
Controller VM IP. IP address for the Controller VM.
Hypervisor IP. IP address for the hypervisor.

d. Use the arrow keys to move the focus to Done, and then press Enter.
The network configuration utility configures the interfaces.
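For reference, the command sequence from step 2 is shown below, followed by a set of sample values. The VLAN tag and IP addresses are hypothetical and used only for illustration; substitute values that match your environment.
root@ahv# cd /root/nutanix-network-crashcart/
root@ahv# ./network_configuration
For example, you might enter a VLAN Tag of 10, a Netmask of 255.255.255.0, a Gateway of 10.1.80.1, a Controller VM IP of 10.1.80.11, and a Hypervisor IP of 10.1.80.21.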

Launching Foundation
How you launch Foundation depends on whether you used the Foundation Applet to discover nodes in the
same broadcast domain or the crash cart user interface to discover nodes in a VLAN-segmented network.
To launch the Foundation user interface, do one of the following:

If you used the Foundation Applet to discover nodes in the same broadcast domain, do the following:

a. Select the node on which you want to run Foundation.


The selected node will be imaged first and then be used to image the other nodes. Only nodes with
a status field value of Free can be selected, which indicates that the node is not currently part of a cluster. A
value of Unavailable indicates that the node is part of an existing cluster or otherwise unavailable. To rerun the
discovery process, click the Retry discovery button.

Note: A warning message may appear stating that this is not the highest Foundation version
found among the discovered nodes. If you select a node that runs an earlier Foundation
version (one that does not recognize one or more of the node models), installation may fail
when Foundation attempts to image a node of an unknown model. Therefore, select the
node with the highest Foundation version among the nodes to be imaged. (You can ignore
the warning and proceed if you do not intend to select any of the nodes that have the higher
Foundation version.)

b. Click the Launch Foundation button.


Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes
that are not part of a cluster) and then displays information about the discovered blocks and nodes in
the Discovered Nodes screen. (It does not display information about nodes that are powered off or in
a different subnet.) The discovery process normally takes just a few seconds.

Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.

If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in a browser
on your workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when using the
network configuration tool.
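For example, assuming a hypothetical Controller VM address of 10.1.80.11, you would enter the following URL:
http://10.1.80.11:8000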

Updating Foundation
You can use the Foundation user interface to update Foundation. Updating Foundation to the latest version
is optional but recommended.
To update Foundation, do the following:

1. From the Foundation download page on the Nutanix support portal (https://portal.nutanix.com/#/page/
foundation/list), download the Foundation tarball to the workstation that you plan to use for imaging.

2. Open the Foundation user interface. See Configuring Node Parameters on page 37 if you are using
standalone Foundation. See Discovering Nodes and Launching Foundation on page 8 if you are
using CVM Foundation.

3. In the gear icon menu at the top-right corner of the Foundation user interface, click Update
Foundation.

4. In the Update Foundation dialog box, do the following:

a. Click Browse, and then browse to and select the Foundation tarball you downloaded.

b. Click Install.

Selecting the Nodes to Image


When the Foundation user interface is launched, it displays information about the discovered blocks and
nodes. In AOS 4.5 and later releases, Foundation uses IPv6 link-local discovery to search across the
VLAN to which it is assigned. Therefore, it does not display information about nodes that are in a different
broadcast domain. In AOS 4.6 and later releases, Foundation uses a VLAN sniffer to discover nodes that
are on other VLANs.
Note: If you want Foundation to image nodes from an existing cluster, you must first either remove
the target nodes from the cluster or destroy the cluster.



To select the nodes to image, do the following:

1. Select the nodes to be imaged.


All discovered blocks and nodes are displayed by default, including those that are already in an existing
cluster. An exclamation mark icon is displayed for unavailable nodes (those already in a cluster); these nodes
cannot be selected. All available nodes are selected by default.

Note: A cluster requires a minimum of three nodes. Therefore, you must select at least three
nodes.

Note: If a discovered node has a VLAN tag, that tag is displayed. Foundation applies an
existing VLAN tag when imaging the node, but you cannot use Foundation to edit that tag or
add a tag to a node without one.
To display just the available nodes, select the Show only new nodes option from the pull-down
list on the right of the screen. (Blocks with unavailable nodes only do not appear, but a block with
both available and unavailable nodes does appear with the exclamation mark icon displayed for the
unavailable nodes in that block.)
To deselect nodes you do not want to image, uncheck the boxes for those nodes. Alternately, click
the Deselect All button to uncheck all the nodes and then select those you want to image. (The
Select All button checks all the nodes.)
If you want to select only those nodes that can form a cluster of two or more ESXi nodes and one
AHV node, with the AHV node being used for storage only, click select all capable nodes in the
message that indicates that Foundation detected such nodes.
If Foundation detects nodes that can form such a cluster, the Block & Node Config screen displays
a message prompting you to select those nodes. For information about which nodes can form a
mixed-hypervisor cluster, see "Product Mixing Restrictions" in the NX and SX Series Hardware
Administration and Reference.

Note: You can get help or reset the configuration at any time from the gear icon pull-down
menu (top right). Internet access is required to display the help pages, which are located in the
Nutanix support portal.

Figure: Discovery Screen

2. Set the redundancy factor for the cluster to be created.


The redundancy factor specifies the number of times each piece of data is replicated in the cluster.



Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of any
single node or drive.
Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure of
any two nodes or drives in different blocks. Redundancy factor 3 requires that the cluster have at
least five nodes, and it can be enabled only when the cluster is created. (In addition, containers must
have replication factor 3 for guest VM data to withstand the failure of two nodes.)
The default setting for a cluster is redundancy factor 2. To set it to redundancy factor 3, do the following:

a. Click the Change RF (2) button.


The Change Redundancy Factor window appears.

b. Click (check) the RF 3 button and then click the Save Changes button.
The window disappears and the RF button changes to Change RF (3) indicating the redundancy
factor is now set to 3.

Figure: Change Redundancy Factor Window

3. By default, the user interface displays only new nodes. To show all nodes, from the menu at the top-
right corner of the page, select Show all nodes.

4. Click the Next button at the bottom of the screen to configure cluster parameters (see Defining the
Cluster on page 13).

Defining the Cluster


The Define Cluster configuration screen appears. This screen allows you to define a new cluster and
configure global network parameters for the Controller VM, hypervisor, and (optionally) IPMI. It also allows
you to enable diagnostic and health tests after creating the cluster.



Figure: Cluster Screen

1. In the Cluster Information section, do the following in the indicated fields:

a. Cluster Name: Enter a name for the cluster.

b. IP Address (optional): Enter an external (virtual) IP address for the cluster.


This field sets a logical IP address that always points to an active Controller VM (provided the cluster
is up), which removes the need to enter the address of a specific Controller VM. This parameter is
required for Hyper-V clusters and is optional for ESXi and AHV clusters.

c. NTP Server Address (optional): Enter the NTP server IP address or (pool) domain name.

d. DNS Server IP (optional): Enter the DNS server IP address.

e. Time Zone (optional): Select the cluster's time zone.

2. (optional) Select Configure IPMI IP to specify an IPMI address.


When this button is enabled, fields for IPMI global network parameters appear below. Foundation does
not require an IPMI connection, so this information is not required. However, you can use this option to
configure IPMI for your use.

3. In the CVM and Hypervisor section of the Network Information area, do the following in the indicated
fields:

a. Netmask: Enter the netmask for the CVM and hypervisor subnet.

b. Gateway: Enter the gateway IP address.

c. CVM Memory (optional): Specify a memory size for the Controller VM from the pull-down list.
For more information about Controller VM memory configuration, see CVM Memory and vCPU
Configurations (G4/Haswell/Ivy Bridge) on page 62.



This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The
default setting represents the recommended amount for the model type. Assigning more memory
than the default might be appropriate in certain situations.

d. Note: The following fields appear only if you selected Configure IPMI in the previous step.

IPMI Netmask: Enter the IPMI netmask value.

e. IPMI Gateway: Enter the gateway IP address.

f. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.

g. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password.

4. (optional) Select Enable Testing to run Nutanix Cluster Check (NCC) after the cluster is created.
NCC is a test suite that checks a variety of health metrics in the cluster. The results are stored in the
~/foundation/logs/ncc directory.

5. Click the Next button at the bottom of the screen to configure the cluster nodes (see Setting Up the
Nodes on page 15).

Setting Up the Nodes


Before you begin: Complete Defining the Cluster on page 13.
The Setup Node configuration screen appears. This screen allows you to specify the Controller VM,
hypervisor, and (if enabled) IPMI IP addresses for each node.
The page consists of the following two sections:
The upper section includes a field in which you can specify a base name for each host being imaged.
When you enter the base name, the user interface assigns each host the base name along with a
suffix (-1 for the first host, -2 for the second host, and so on). The section also includes columns of IP
address fields for the Controller VMs, hypervisors, and, if enabled, IPMI interfaces.
You only need to enter the starting IP address in each starting field. The entered address is assigned
to the Controller VM of the first node, and consecutive IP addresses (sequentially from the entered
address) are assigned automatically to the remaining nodes. Discovered nodes are sorted first by
block ID and then by position, so IP assignments are sequential. If you do not want all addresses to
be consecutive, you can change the IP address for specific nodes by updating the address in the
appropriate fields for those nodes.
You can also specify the amount by which the last octet in the IP address range must increment. For
example, typing 10.1.80.10 +2 in the text field for the starting IP address results in the hosts being
assigned the IP addresses 10.1.80.10, 10.1.80.12, 10.1.80.14, and so on.

The lower section includes fields for the host names and IP addresses that are automatically generated
based on your entries in the upper section. You can edit the values in these fields if you want to make
changes.



Figure: Node Screen

1. If you want the blocks to be in a particular order before assigning addresses, click Reorder Blocks. In
the Reorder Blocks dialog box, drag blocks to their desired positions, and then click Done.
If you reorder blocks after manually assigning each IP address, the blocks retain their IP addresses,
even in their new positions. If you reorder blocks after the user interface generates IP addresses for
you, the reorder operation regenerates the IP addresses, and any changes that you make to individual
IP address assignments are lost.

2. In the Hostname and IP Range section, do the following in the indicated fields:

a. Hypervisor Hostname: Enter a base host name for the set of nodes. Host names should contain
only digits, letters, and hyphens.
The base name with a suffix of "-1" is assigned as the host name of the first node, and the base
name with "-2", "-3" and so on are assigned automatically as the host names of the remaining nodes.

b. CVM IP: Enter the starting IP address for the set of Controller VMs across the nodes.
Enter a starting IP address in the FROM/TO line of the CVM IP column. The entered address is
assigned to the Controller VM of the first node, and consecutive IP addresses (sequentially from
the entered address) are assigned automatically to the remaining nodes. Discovered nodes are
sorted first by block ID and then by position, so IP assignments are sequential. If you do not want
all addresses to be consecutive, you can change the IP address for specific nodes by updating the
address in the appropriate fields for those nodes.

c. Hypervisor IP: Repeat the previous step for this field.


This sets the hypervisor IP addresses for all the nodes.
Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.

d. IPMI IP (when enabled): Repeat the previous step for this field.



This sets the IPMI port IP addresses for all the nodes. This column appears only when IPMI is
enabled on the previous cluster setup screen.

3. In the Manual Input section, review the assigned host names and IP addresses. If any of the names or
addresses are not correct, enter the desired name or IP address in the appropriate field.
There is a section for each block with a line for each node in the block. The letter designation (A, B, C,
and D) indicates the position of that node in the block.

4. When all the host names and IP addresses are correct, click the Validate Network button at the bottom
of the screen.
This runs a ping test against each of the assigned IP addresses to check whether any of those addresses
is currently in use.
If there are no conflicts (none of the addresses return a ping), the process continues (see Selecting
the Images on page 17).
If there is a conflict (one or more addresses returned a ping), this screen reappears with the
conflicting addresses highlighted in red. Foundation will not continue until the conflict is resolved.
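If you want to check a suspected conflict yourself, you can ping the address from your workstation. This sketch assumes a Linux or macOS workstation and uses a hypothetical address taken from the earlier example range; a reply indicates that the address is already in use on the network.
user@host$ ping -c 3 10.1.80.10    # hypothetical address used for illustration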

Selecting the Images


Before you begin: Complete Setting Up the Nodes on page 15.
The Select Images configuration screen appears. This screen allows you to specify and upload the AOS
and hypervisor image files to use.
If all the nodes you selected are capable of forming a multi-hypervisor cluster running ESXi and AHV,
with the AHV node used for storage only, on the Select Images screen, Foundation first prompts you
to specify whether you want to create a single-hypervisor cluster or a multi-hypervisor cluster. Even if a
multi-hypervisor cluster that includes ESXi and AHV nodes is possible, you can choose to create a single-
hypervisor cluster.
If a multi-hypervisor cluster is not possible, you do not receive the prompt, and the default Select Images
screen is displayed.



Figure: Images Screen

1. If the prompt to choose between creating a single-hypervisor cluster and a multi-hypervisor cluster is
displayed, click the type of cluster you want, and then click Continue.
If you choose the multi-hypervisor option, an additional item named ESX+AHV is included in the
Hypervisor Type list and selected by default. If you choose the single-hypervisor option, Foundation
displays the default Image Selection screen shown earlier.

2. Select the AOS image as follows:



a. From the Available Packages list in the AOS section, select the AOS installer package that you
want to use.

Note: The list displays the packages that are available in the ~/foundation/nos directory
on the Foundation VM. If the desired AOS package does not appear in the list, or if the list is
empty, you must upload a package from your workstation to the ~/foundation/nos directory. If the list is empty
or the package you want to use is not available, perform the next step.

b. If you want to upload a package to the ~/foundation/nos directory, click Manage above the
Available Packages list. Click Add, and then click Choose File. Browse to and upload the package
that you want to use, and then click Close. Finally, from the Available Packages list, select the AOS
installer package that you uploaded.

3. From the Hypervisor Type list, select the hypervisor type that you want to install (for multi-hypervisor
clusters, the selected hypervisor is installed on the compute nodes and AHV is installed on the storage-
only nodes).
Selecting Hyper-V as the hypervisor displays a SKU list.



Caution: To install Hyper-V, the nodes must have a 64 GB DOM. Attempts to install
Hyper-V on nodes with less DOM capacity will fail. See Hyper-V Installation Requirements on
page 65 for additional considerations when installing a Hyper-V cluster.

4. From the Available Packages list, select the hypervisor.

Note: The hypervisor field is not active until the AOS installation bundle is selected (uploaded)
in the previous step. Foundation comes with an AHV image. If that is the correct image to use,
skip to the next step.
If the AHV ISO that you want to use is not listed, click Manage above the Available Packages list,
upload the ISO file that you want to use, and then select the file from the Available Packages list, as
described earlier for uploading an AOS package.
Only approved hypervisor versions are permitted; Foundation will not image nodes with an unapproved
version. To verify your version is on the approved list, click the See Whitelist link and select the
appropriate hypervisor tab in the pop-up window. Nutanix updates the list as new versions are
approved, and the current version of Foundation may not have the latest list. If your version does not
appear on the list, click the Update the whitelist link to download the latest whitelist from the Nutanix
support portal.

Figure: Whitelist Compatibility List Window

5. From the list of AHV bundles for the storage-only nodes, backup nodes, or multi-cluster nodes (based
on the mix of selected nodes, the list is named Available AHV Packages (Storage Only Nodes),
Available AHV Packages (Backup Nodes), or just Available AHV Packages), select the AHV package
that you want to use.
If the AHV ISO that you want to use is not listed, click Manage above the Available Packages list,
upload the ISO file that you want to use, and then select the file from the Available Packages list, as
described earlier for uploading an AOS package.

6. [Hyper-V only] From the Choose Hyper-V SKU list, select the SKU for the Hyper-V version to use.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, Datacenter
with GUI. This list appears only when you select Hyper-V.



Note: See Hyper-V Installation Requirements on page 65 for additional considerations
when installing a Hyper-V cluster.

7. When both images are uploaded and ready, do one of the following:
To image the nodes and then create the new cluster, click the Create button at the bottom of the
screen.
To create the cluster without imaging the nodes, click the Skip button (in either case see Creating
the Cluster on page 20).

Note: The Skip option requires that all the nodes have the same hypervisor and AOS
version. This option is disabled if they are not all the same (with the exception of any model
NX-6035C "cold" storage nodes in the cluster that run AHV regardless of the hypervisor
running on the other nodes).

Creating the Cluster


Before you begin: Complete Selecting the Images on page 17.
After clicking the Create Cluster or Skip Imaging button (in the Select Images screen), the Create Cluster
screen appears. This is a dynamically updated display that provides progress information about node
imaging and cluster creation. Foundation first images one of the other nodes, and then transfers the
imaging process to that node.

1. Monitor the node imaging and cluster creation progress.


The progress screen includes the following sections:
Progress bar at the top (blue during normal processing or red when there is a problem).
Cluster Creation Status section with a line for the cluster being created (status indicator, cluster
name, progress message, and log link).
Node Status section with a line for each node being imaged (status indicator, IPMI IP address,
progress message, and log link).



Figure: Foundation Progress Screen: Ongoing

The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. The selected node (see Launching Foundation on page 10) is imaged
first. When that imaging is complete, the remaining nodes are imaged in parallel. The imaging process
takes about 30 minutes, so the total time is about an hour (30 minutes for the first node and another 30
minutes for the other nodes imaged in parallel). You can monitor overall progress by clicking the Log
link at the top, which displays the foundation.out contents in a separate tab or window. Click on the
Log link for a node to display the log file for that node in a separate tab or window.

Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 20 nodes, add an extra 30 minutes of processing time for each additional group of up to 20 nodes.
When installation moves to cluster creation, the status message displays the percentage complete and
current step. Cluster creation happens quickly, but this step could take some time if you enabled the
post-creation tests. Click on the Log link for a cluster to display the log file for the cluster in a separate
tab or window.

2. When processing completes successfully, either open the Prism web console and begin configuring the
cluster (see Configuring a New Cluster on page 23) or exit from Foundation.
When processing completes successfully, a "Cluster creation successful" message appears. This
means imaging both the hypervisor and Nutanix Controller VM across all the nodes in the cluster was
successful (when imaging was not skipped) and cluster creation was successful.
To configure the cluster, click the Prism link. This opens the Prism web console (login required using
the default "admin" username and password). See Configuring a New Cluster on page 23 for
initial cluster configuration steps.
To download the log files, click the Export Logs link. This packages all the log files into a
log_archive.tar file and allows you to download that file to your workstation.

The Foundation service shuts down two hours after imaging. If you go to the cluster creation success
page after a long absence and the Export Logs link does not work (or your terminal went to sleep



and there is no response after refreshing it), you can point the browser to one of the Controller VM IP
addresses. If the Prism web console appears, installation completed successfully, and you can get the
logs from ~/data/logs/foundation on the node that was imaged first.

Note: If nothing loads when you refresh the page (or it loads one of the configuration pages),
the web browser might have missed the hand-off between the node that starts imaging and the
first node imaged. This can happen because the web browser went to sleep, you closed the
browser, or you lost connectivity for some other reason. In this case, enter http://cvm_ip for
any Controller VM, which should open the Prism GUI if imaging has completed. If this does not
work, enter http://cvm_ip:8000/gui on each of the Controller VMs in the cluster until you see
the progress screen, from which you can continue monitoring progress.

Figure: Foundation Progress Screen: Successful Installation

3. If processing does not complete successfully, review and correct the problem(s), and then restart the
process.
If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 72. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
Note: If an imaging problem occurs, it typically appears when imaging the first node. In that
case Foundation will not attempt to image the other nodes, so only the first node will be in an
unstable state. Once the problem is resolved, the first node can be re-imaged and then the
other nodes imaged normally.



Figure: Foundation Progress Screen: Unsuccessful Installation

Configuring a New Cluster


After creating the cluster, you can configure it through the Prism web console. A storage pool and a
container are created automatically when the cluster is created, but many other setup options require user
action. The following are common cluster setup steps typically done soon after creating a cluster. (All the
sections cited in the following steps are in the Prism Web Console Guide.)

1. Verify that the cluster has passed the latest Nutanix Cluster Check (NCC) tests.

a. Check the installed NCC version and update it if a later version is available (see the "Software and
Firmware Upgrades" section).

b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to any
Controller VM in the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before proceeding. If you
are unable to resolve the issues, contact Nutanix support for assistance.

c. Configure NCC so that the cluster checks are run and emailed according to your desired frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs

where num_hrs is a positive integer of at least 4 to specify how frequently NCC is run and results are
emailed. For example, to run NCC and email results every 12 hours, specify 12; or every 24 hours,
specify 24, and so on. For other commands related to automatically emailing NCC results, see
"Automatically Emailing NCC Results" in the Nutanix Cluster Check (NCC) Guide for your version of
NCC.
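For example, the following command (using a 12-hour interval purely for illustration) configures NCC to run and email its results every 12 hours:
nutanix@cvm$ ncc --set_email_frequency=12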

2. Specify the timezone of the cluster.


Specifying the timezone must be done from the Nutanix command line (nCLI). While logged in to the
Controller VM (see previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone



Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/
London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. Because a
cluster can tolerate only a single Controller VM unavailable at any one time, restart the Controller VMs
in a series, waiting until one has finished starting before proceeding to the next. See the Command
Reference for more information about using the nCLI.
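For example, to set the cluster to the America/Los_Angeles time zone (one of the time zones mentioned above):
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=America/Los_Angeles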

3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).

4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote
support tunnel (see the "Controlling Remote Connections" section).

Caution: Failing to enable remote support prevents Nutanix support from directly addressing
cluster issues. Nutanix recommends that all customers allow email alerts at minimum because
it allows proactive support of customer issues.

5. If the site security policy allows Nutanix support to collect cluster status information, enable the Pulse
feature (see the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide more informed
and proactive help.

6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You also have the option to specify email recipients for specific alerts (see the "Configuring Alert
Policies" section).

7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster
elements, enable that feature (see the "Software and Firmware Upgrades" section).

Note: Allow access to the following through your firewall to ensure that automatic download of
updates can function:
*.compute-*.amazonaws.com:80
release-api.nutanix.com:80
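To spot-check that the firewall permits this access, you can request one of the endpoints from a machine on the same network segment as the cluster. This sketch assumes that curl is installed on that machine and only verifies basic HTTP reachability of the release API endpoint:
user@host$ curl -sI http://release-api.nutanix.com    # checks HTTP reachability only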

8. License the cluster (see the "License Management" section).

9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
vCenter: See the Nutanix vSphere Administration Guide.
SCVMM: See the Nutanix Hyper-V Administration Guide.

Creating a Cluster (Single-Node Replication Target)


This procedure describes how to configure a single-node replication target cluster along with a primary cluster
and how to configure the nodes into clusters. "Discovered nodes" are factory-prepared nodes on the same
subnet that are not currently part of a cluster. This procedure runs the Foundation tool through the Nutanix
Controller VM.
Before you begin: Ensure that the primary and single-node replication target clusters are in the same broadcast
domain or VLAN.
With Controller VM-based Foundation, you can map the primary cluster nodes (three nodes or more) and the single-
node replication target cluster node (one node) during the Foundation process itself.

Note:



You can run ESXi, Hyper-V, or AHV on the primary cluster. The single-node replication target
cluster is supported on AHV only.
You can also create a standalone single-node replication target cluster. If you decide not to map
the primary cluster and replication target cluster, you need to add the replication target cluster
as a remote site to the primary cluster. For more information on adding the single-node
replication target cluster as a remote site, see the Prism Web Console Guide.

To image the nodes and create a cluster, do the following:

1. Download the required files, start the cluster creation GUI, and run discovery. Follow the steps in
Discovering Nodes and Launching Foundation on page 8 and Launching
Foundation on page 10 for complete information on how to download the files, start the
cluster creation GUI, and set the redundancy factor.

Note: You can configure only redundancy factor 2 in a single-node replication target
cluster. However, you can configure redundancy factor 2 or 3 on the primary cluster.
The step that is specific to creating the primary and single-node replication target clusters together is as
follows.
In the Discover Nodes tab, you have the option to create the primary and single-node replication
target clusters in parallel. Foundation informs you if it detects any single node.

a. Select the nodes that you want to image for the primary cluster (three nodes or more) and select the
node that you want to image as a single-node replication target.

Figure: Discover Nodes

2. Define the cluster parameters; specify Controller VM, hypervisor, and (optionally) IPMI global network
addresses; and (optionally) enable health tests after the cluster is created. See Defining the Cluster on
page 13 for a detailed description of the different fields that appear in the screen.
The steps that are specific to creating the primary and single-node replication target clusters together are as
follows.



a. Type the primary and replication target cluster names in the Primary Cluster Name and Backup
Cluster Name text boxes.

b. (Optional) Type the virtual IP addresses for the primary and replication target clusters.

c. In the Backup Configuration field, if you do not deselect the Remote Site Names Same As the
Cluster Names check box, both the primary and replication target clusters will be set up as
remote sites for each other after the cluster creation process is completed. By default this option is
selected. However, you can deselect the check box and provide the remote site name of the primary
cluster and the remote site name of the replication target cluster in the respective text boxes.

d. Perform the rest of the steps as described in Defining the Cluster on page 13.

Figure: Cluster Screen

3. Configure the discovered nodes. See Setting Up the Nodes on page 15 for a detailed description
of the different fields that appear in the screen.
The steps that are specific to creating the primary and single-node replication target clusters together are as
follows.

a. Type the hypervisor hostname, Controller VM IP address, and hypervisor IP address of the primary
and single-node replication target cluster in the respective fields.

b. Perform the rest of the steps as described in Setting Up the Nodes on page 15.



Figure: Node Screen

4. Select the AOS and hypervisor images to use. See Selecting the Images on page 17 for more
information on how to complete this screen.
For the single-node replication target cluster, you do not need to import the images because the AHV image is
bundled with the AOS release.

Figure: Images Screen

5. Start the process and monitor progress as the nodes are imaged for the primary and single-node
replication target clusters. See Creating the Cluster on page 20 for more information.



Figure: Foundation Progress Screen

6. After the cluster is created successfully, begin configuring the cluster. See Configuring a New Cluster on
page 23 for more information.
You can also create a standalone single-node replication target cluster and then add it as a replication
target (remote site) for the primary cluster. Perform the same procedure as described above, but fill in
the fields that are specific to creating the replication target cluster.
After creating the standalone single-node replication target cluster, you need to add it as a replication
target (remote site) for the primary cluster. For information on adding the single-node replication
target cluster as a remote site, see the Prism Web Console Guide.



3
Imaging Bare Metal Nodes
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on bare metal
nodes and optionally configure the nodes into one or more clusters. "Bare metal" nodes are those that
are not factory prepared or cannot be detected through discovery. You can also use this method to image
factory prepared nodes that you do not want to configure into a cluster.
Before you begin:
Note: Imaging bare metal nodes is restricted to Nutanix sales engineers, support engineers, and
partners. Contact Nutanix customer support or your partner for help with this procedure.

Physically install the nodes at your site. For installing Nutanix hardware platforms, see the NX and SX
Series Hardware Administration and Reference for your model type. For installing hardware from any
other manufacturer, see that manufacturer's documentation.
Set up the installation environment (see Preparing Installation Environment on page 30).

Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will
get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in
the BIOS.

Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to time out during the
imaging process. Therefore, disable STP before starting Foundation.

Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents
virtual media, such as a CD-ROM. Such a device could conflict with the Foundation installation when
Foundation tries to mount the virtual CD-ROM hosting the install ISO.
Have ready the appropriate global, node, and cluster parameter values needed for installation. The use
of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.

Note: If the Foundation VM IP address set previously was configured in one (typically public)
network environment and you are imaging the cluster on a different (typically private) network
in which the current address is no longer correct, repeat step 13 in Preparing a Workstation on
page 30 to configure a new static IP address for the Foundation VM.
If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the
nodes. If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the
nodes contain both regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs
at any time during the lifetime of the cluster.
For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in
the Prism Web Console Guide.

To image the nodes and create a cluster(s), do the following:

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)

1. Prepare the installation environment:



a. Download necessary files and prepare a workstation (see Preparing a Workstation on page 30).

b. Connect the workstation and nodes to be imaged to the network (Setting Up the Network on
page 35).

2. Start the Foundation VM and configure global parameters (see Configuring Global Parameters on
page 42).

3. Configure the nodes to image (see Configuring Node Parameters on page 37).

4. Select the images to use (see Configuring Image Parameters on page 44).

5. [optional] Configure one or more clusters to create and assign nodes to the clusters (see Configuring
Cluster Parameters on page 46).

6. Start the imaging process and monitor progress (see Monitoring Progress on page 49).

7. If a problem occurs during configuration or imaging, evaluate and resolve the problem (see
Troubleshooting on page 71).

8. [optional] Clean up the Foundation environment after completing the installation (see Cleaning Up After
Installation on page 52).

Preparing Installation Environment


Standalone (bare metal) imaging is performed from a workstation with access to the IPMI interfaces of the
nodes in the cluster. Imaging a cluster in the field requires first installing certain tools on the workstation
and then setting the environment to run those tools. This requires two preparation tasks:
1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to
installation. This includes downloading ISO images, installing Oracle VM VirtualBox, and using
VirtualBox to configure various parameters on the Foundation VM (see Preparing a Workstation on
page 30).
2. Set up the network. The nodes and workstation must have network access to each other through a
switch at the site (see Setting Up the Network on page 35).

Preparing a Workstation
A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the
following:
Note: You can perform these steps either before going to the installation site (if you use a portable
laptop) or at the site (if you can connect to the web).

1. Get a workstation (laptop or desktop computer) that you can use for the installation.
The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk
space (preferably SSD), and a physical (wired) network adapter.

2. Go to the Foundation download page in the Nutanix support portal (see Downloading Installation Files
on page 53) and download the following files to a temporary directory on the workstation.
Foundation_VM_OVF-version#.tar. This tar file includes the following files:



Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version#
release, for example Foundation_VM-3.1.ovf.
Foundation_VM-version#-disk1.vmdk. This is the Foundation VM VMDK file for the version#
release, for example Foundation_VM-3.1-disk1.vmdk.
VirtualBox-version#-[OSX|Win].[dmg|exe]. This is the Oracle VM VirtualBox installer for Mac
OS (VirtualBox-version#-OSX.dmg) or Windows (VirtualBox-version#-Win.exe). Oracle VM
VirtualBox is a free open source tool used to create a virtualized environment on the workstation.

Note: Links to the VirtualBox files may not appear on the download page for every
Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)
nutanix_installer_package-version#.tar.gz. This is the tar file used for imaging the desired AOS
release. Go to the AOS (NOS) download page on the support portal to download this file.
If you want to run the diagnostics test after creating a cluster, download the diagnostics test file(s) for
your hypervisor from the Tools & Firmware download page on the support portal:
AHV: diagnostic.raw.img.gz
ESXi: diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf
Hyper-V: diagnostics_uvm.vhd.gz

3. Go to the download location and extract Foundation_VM_OVF-version#.tar by entering the following command:
$ tar -xf Foundation_VM_OVF-version#.tar

Note: This assumes the tar command is available. If it is not, use the corresponding tar utility
for your environment.

4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.
See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).
Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment.
Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM
VirtualBox.

5. Create a new folder called VirtualBox VMs in your home directory.


On a Windows system this is typically C:\Users\user_name\VirtualBox VMs.

6. Copy the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files to the VirtualBox VMs folder that you created in step 5.

7. Start Oracle VM VirtualBox.



Figure: VirtualBox Welcome Screen

8. Click the File option of the main menu and then select Import Appliance from the pull-down list.

9. Find and select the Foundation_VM-version#.ovf file, and then click Next.

10. Click the Import button.

11. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.

12. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).

13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM,
install the VirtualBox Guest Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.

b. Click OK when prompted to Open Autorun Prompt and then click Run.

c. Enter the root password (nutanix/4u) and then click Authenticate.

d. After the installation is complete, press the return key to close the VirtualBox Guest Additions
installation window.

e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.

f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.
Note: A reboot is necessary for the changes to take effect.

g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on
the VirtualBox window for the Foundation VM.

14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to
get an IP address from the DHCP server.
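For example, the following command shows the address on eth0 (the usual interface name in the Foundation VM; adjust it if yours differs); look for an inet entry in the output:
$ ifconfig eth0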



If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as
follows:
Note: Normally, the Foundation VM needs to be on a public network in order to copy selected
ISO files to the Foundation VM in the next two steps. This might require setting a static IP
address now and setting it again when the workstation is on a different (typically private)
network for the installation (see Imaging Bare Metal Nodes on page 29).

a. Double click the set_foundation_ip_address icon on the Foundation VM desktop.

Figure: Foundation VM: Desktop

b. In the pop-up window, click the Run in Terminal button.

Figure: Foundation VM: Terminal Window

c. In the Select Action box in the terminal window, select Device Configuration.
Note: Selections in the terminal window can be made using the indicated keys only. (Mouse
clicks do not work.)



Figure: Foundation VM: Action Box

d. In the Select a Device box, select eth0.

Figure: Foundation VM: Device Configuration Box

e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by
default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and
then click the OK button.

Figure: Foundation VM: Network Configuration Box

f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action
box.
This saves the configuration and closes the terminal window.

15. Copy nutanix_installer_package-version#.tar.gz (downloaded in step 2) to the /home/nutanix/foundation/nos folder.
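For example, assuming the file is on the workstation and the Foundation VM has the IP address 192.0.2.50 (a placeholder; use your own address), you can copy it over the network with scp, or use drag-and-drop if you installed the Guest Additions in step 13:
$ scp nutanix_installer_package-version#.tar.gz nutanix@192.0.2.50:/home/nutanix/foundation/nos/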

16. If you intend to install ESXi or Hyper-V, you must provide a supported ESXi or Hyper-V ISO image
(see Hypervisor ISO Images on page 56). Therefore, download the hypervisor ISO image into the
appropriate folder for that hypervisor.



ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx
Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv

Note: You do not have to provide an AHV image because Foundation includes an AHV tar file
in /home/nutanix/foundation/isos/hypervisor/kvm. However, if you want to install a different
version of AHV, download the AHV tar file from the Nutanix support portal (see Downloading
Installation Files on page 53).

17. If you intend to run the diagnostics test after the cluster is created, download the diagnostic test file(s)
into the appropriate folder for that hypervisor:
AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/kvm
ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf): /home/nutanix/foundation/
isos/diags/esx
Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/diags/hyperv

Setting Up the Network


The network must be set up properly on site before imaging nodes through the Foundation tool. To set up
the network connections, do the following:
Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing
tables). A flat switch is often recommended to protect against configuration errors that could
affect the production environment. Foundation includes a multi-homing feature that allows you
to image the nodes using production IP addresses despite being connected to a flat switch (see
Configuring Global Parameters on page 42). See Network Requirements on page 58 for
general information about the network topology and port access required for a cluster.

1. Connect the first 1 GbE network interface of each node to a 1 GbE Ethernet switch. The IPMI LAN
interfaces of the nodes must be in failover mode (factory default setting).
The exact location of the port depends on the model type. See the hardware documentation for your
model to determine the port location.
(Nutanix NX Series) The following figure illustrates the location of the network ports on the back of
an NX-3050 (middle RJ-45 interface).



Figure: Port Locations (NX-3050)
(Lenovo Converged HX Series) Unlike Nutanix NX-series systems, which only require that you
connect the 1 GbE port, Lenovo HX-series systems require that you connect both the system
management (IMM) port and one of the 10 GbE ports. The following figure illustrates the location of
the network ports on the back of the HX3500 and HX5500.

Note: If you want to install AHV by using Foundation, connect both the IMM port and a 10
GbE port.

Figure: Port Locations (HX System)


(Dell XC series) Unlike Nutanix NX-series systems, which only require that you connect the 1 GbE
port, Dell XC-series systems require that you connect both the iDRAC port and one of the 1 GbE
ports.



Note: When using Foundation to image a Dell XC430-4 node with AHV, connect both the
iDRAC interface and, instead of a 1 GbE interface, a 10 GbE interface.

Figure: Port Locations (XC System)

2. Connect the installation workstation (see Preparing a Workstation on page 30) to the same 1 GbE
switch as the nodes.

Updating Foundation
You can use the Foundation user interface to update Foundation. Updating Foundation to the latest version
is optional but recommended.
To update Foundation, do the following:

1. From the Foundation download page on the Nutanix support portal (https://portal.nutanix.com/#/page/
foundation/list), download the Foundation tarball to the workstation that you plan to use for imaging.

2. Open the Foundation user interface. See Configuring Node Parameters on page 37 if you are using
standalone Foundation. See Discovering Nodes and Launching Foundation on page 8 if you are using
CVM Foundation.

3. In the gear icon menu at the top-right corner of the Foundation user interface, click Update
Foundation.

4. In the Update Foundation dialog box, do the following:

a. Click Browse, and then browse to and select the Foundation tarball you downloaded.

b. Click Install.

Configuring Node Parameters


Before you begin: Complete Imaging Bare Metal Nodes on page 29.

Note: During this procedure, you assign IP addresses to the hypervisor host, the Controller
VMs, and the IPMI interfaces. Do not assign IP addresses from a subnet that overlaps with the
192.168.5.0/24 address space on the default VLAN. Nutanix uses an internal virtual switch to
manage network communications between the Controller VM and the hypervisor host. This switch
is associated with a private network on the default VLAN and uses the 192.168.5.0/24 address
space. If you want to use an overlapping subnet, make sure that you use a different VLAN.

1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.

Note: See Preparing Installation Environment on page 30 if Oracle VM VirtualBox is not started or the Foundation VM is not running currently. You can also start the Foundation GUI by opening a web browser and entering http://localhost:8000/gui/index.html. Once you assign an IP to the Foundation VM, you can access it from outside VirtualBox.

Figure: Foundation VM Desktop

The Node Config screen appears. This screen allows you to configure discovered nodes and add other
(bare metal) nodes to be imaged. Upon opening this screen, Foundation searches the network for
unconfigured Nutanix nodes (that is, factory prepared nodes that are not part of a cluster) and then
displays information about the discovered blocks and nodes. The discovery process can take several
minutes if there are many nodes on the network. Wait for the discovery process to complete before
proceeding. The message "Searching for nodes. This may take a while" appears during discovery.
Note: Foundation discovers nodes on the same subnet as the Foundation VM only. Any nodes
to be imaged that reside on a different subnet must be added explicitly (see step 3). In addition,
Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a
preconfigured block with an existing cluster and you want Foundation to image those nodes,
you must first destroy the existing cluster in order for Foundation to discover those nodes.

Figure: Node Configuration Screen

If Foundation detects nodes that can form a mixed-hypervisor cluster running ESXi and AHV, with the
AHV node used for storage only, the Node Config screen displays a message prompting you to select
those nodes.



For information about which nodes can form a mixed-hypervisor cluster, see "Product Mixing
Restrictions" in the NX and SX Series Hardware Administration and Reference.

2. Review the list of discovered nodes.


The discovered nodes are displayed in alphabetical order in tables. Nodes belonging to different series
(Xpress series, XCP series, and so on) are represented in separate tables. Within each table, each
block is displayed in a separate section along with information about contained nodes.
You can exclude a block by clicking the X on the far right of that block. The block disappears from
the display, and the nodes in that block will not be imaged. Clicking the X on the top line removes all
the displayed blocks.
To repeat the discovery process (search for unconfigured nodes again), click the Retry Discovery
button. You can reset all the global and node entries to the default state by selecting Reset
Configuration from the gear icon pull-down menu.

Note: Do not select the Switch to software-only installation check box.

3. To image additional (bare metal) nodes, click the Add Blocks button.
A window appears to add a new block. Do the following in the indicated fields:

Figure: Add Bare Metal Blocks Window

a. Number of Blocks: Enter the number of blocks to add.

b. Nodes per Block: Enter the number of nodes to add in each block.
All added blocks get the same number of nodes. To add multiple blocks with differing nodes per
block, add the blocks as separate actions.

c. Click the Create button.

The window closes and the new blocks appear at the end of the discovered blocks table.

4. Configure the fields for each node as follows:

a. Block ID: Do nothing in this field because it is a unique identifier for the block that is assigned
automatically.

b. Position: Uncheck the boxes for any nodes you do not want to be imaged.



The value (A, B, and so on) indicates the node placement in the block such as A, B, C, D for a four-
node block. You can exclude the node in that block position from being imaged by unchecking the
appropriate box. You can check (or uncheck) all boxes by clicking Select All or (Unselect All) above
the table on the right.

c. IPMI MAC Address: For any nodes you added in step 3, enter the MAC address of the IPMI
interface in this field.
Foundation requires that you provide the MAC address for nodes it has not discovered. (This field is
read-only for discovered nodes and displays a value of "N/A" for those nodes.) The MAC address of
the IPMI interface normally appears on a label on the back of each node. (Make sure you enter the
MAC address from the label that starts with "IPMI:", not the one that starts with "LAN:".) The MAC
address appears in the standard form of six two-digit hexadecimal numbers separated by colons, for
example 00:25:90:D9:01:98.

Caution: Any existing data on the node will be destroyed during imaging. If you are using
the add node option to re-image a previously used node, do not proceed until you have
saved all the data on the node that you want to keep.

Figure: IPMI MAC Address Label

d. IPMI IP, Hypervisor IP, and CVM IP: Do one of the following in each of these fields:

Note: If you are using a flat switch, the IPMI IP addresses must be on the same subnet as
the Foundation VM unless you configure multi-homing (see Configuring Global Parameters
on page 42).

Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.
Specify the IP addresses for each node manually.
Specify the starting address in the row immediately below the table header. The user interface
assigns the IP address to the first node, and it generates consecutive IP addresses for the
remaining nodes. Discovered nodes are sorted first by block ID and then by position, so
IP address assignments are sequential. You can modify these generated IP addresses as
necessary.
Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255
because such addresses are commonly reserved by network administrators.

You can specify a step increment or decrement value along with the starting IP address. IP
addresses for the second and subsequent nodes are generated by adding the value to the IP
address in the previous row. For example, if you enter 192.0.2.10 +3, nodes are assigned IP
addresses 192.0.2.10, 192.0.2.13, 192.0.2.16, and so on. If you enter 192.0.2.20 -2, nodes are
assigned IP addresses 192.0.2.20, 192.0.2.18, 192.0.2.16, and so on.
If you want the blocks to be in a particular order before assigning addresses, click Reorder
Blocks. In the Reorder Blocks dialog box, drag blocks to their desired positions, and then click
Done.



If you reorder blocks after manually assigning each IP address, the blocks retain their IP
addresses, even in their new positions. If you reorder blocks after the user interface generates
IP addresses for you, the reorder operation regenerates the IP addresses, and any changes that
you make to individual IP address assignments are lost.

e. Hypervisor Hostname: Do one of the following in this field:


A host name is automatically generated for each host (NTNX-unique_identifier). If these names
are acceptable, do nothing in this field.
Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The
automatically generated names might be longer than 15 characters, which would result
in the same truncated name for multiple hosts in a Windows environment. Therefore, do
not use automatically generated names longer than 15 characters when the hypervisor is
Hyper-V.
To specify the host names manually, go to the line for each node and enter the desired name in
that field. Host names should contain only digits, letters, and hyphens.
To specify the host names automatically, enter a base name in the top line of the Hypervisor
Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first
node, and the base name with "-2", "-3" and so on are assigned automatically as the host names
of the remaining nodes. You can specify different names for selected nodes by updating the entry
in the appropriate field for those nodes.

f. NX-6035C: Check this box for any node that is a model NX-6035C.
Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs
are not allowed. NX-6035C nodes run AHV (and so will be imaged with AHV) regardless of what
hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 44).

g. If a platform that functions as a single-node replication target is discovered on the network and you
want to image the node, select its check box in the Backup Node column.

5. To check which IP addresses are active and reachable, click Ping Scan.
This does a ping test to each IP address in the IPMI, hypervisor, and CVM IP address fields. An
icon indicating a returned response or no response appears next to that field to show the ping test result
for each node. This feature is most useful when imaging a previously unconfigured set of nodes. None
of the selected IPs should be pingable. Successful pings usually indicate a conflict with the existing
infrastructure.
Note: When re-imaging a configured set of nodes using the same network configuration, failure
to ping indicates a networking issue.
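You can also spot-check any of these addresses manually from a terminal on the Foundation VM, for example (192.0.2.10 is a placeholder address):
$ ping -c 3 192.0.2.10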



6. Click the Next button at the bottom of the screen to specify global parameters (see Configuring Global
Parameters on page 42).

Configuring Global Parameters


Use the Global Config page to specify globally applicable network information such as netmasks,
gateways, and DNS servers.



Figure: Global Configuration Screen



1. In the IPMI section, enter appropriate values for the IPMI interfaces in the indicated fields:

a. Netmask: Enter the IPMI netmask value.

b. Gateway: Enter an IP address for the gateway.

c. Username: Enter the IPMI user name. The default user name is ADMIN.

d. Password: Enter the IPMI password. The default password is ADMIN.


Check the show password box to display the password as you type it.

2. In the CVM and Hypervisor section, specify global information for the CVM and hypervisor network:

a. Netmask: Enter the netmask.

b. Gateway: Enter the gateway IP address.

c. DNS Server IP: Enter a DNS server IP address.

d. CVM Memory: Specify a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configuration, see CVM Memory and vCPU Configurations
(G4/Haswell/Ivy Bridge) on page 62.
This field is set initially to default. The default value varies by platform, and represents the
recommended value for the platform. The options are 16 GB, 24 GB, 32 GB, and 64 GB. Assigning
more than the default might be appropriate in certain situations.

3. Select the cluster's time zone from the Time Zone list. Time zone selection is optional.

4. If you are using a flat switch (no routing tables) for installation and require access to multiple subnets,
check the Multi-Homing box in the bottom section of the screen.
This section includes fields that you can use to assign the Foundation VM IP addresses in each of
the subnets configured in the upper half of this page, namely the subnet of the IPMI IP address and
the subnet of the CVMs and hypervisor hosts. Multi-homing enables the Foundation VM to configure
production IP addresses when connected to a flat switch. If you do not configure multi-homing, all IP
addresses must be either on the same subnet or routable.
In IPMI IP, enter an unused IP address in the same subnet as the IPMI interfaces of the Nutanix
hosts.
In CVM AND Hypervisor IP, enter an unused IP address in the same subnet as the CVMs and
hypervisor hosts.

5. Click Next.

Configuring Image Parameters


Use the Image Selection page to select the AOS package and hypervisor image to use for imaging the
nodes.
If all the nodes you selected are capable of forming a mixed-hypervisor cluster running ESXi and AHV,
with the AHV node used for storage only, on the Image Selection screen, Foundation first prompts you
to specify whether you want to create a single-hypervisor cluster or a multi-hypervisor cluster. Even if a
multi-hypervisor cluster that includes ESXi and AHV nodes is possible, you can choose to create a single-
hypervisor cluster.



If a multi-hypervisor cluster is not possible, you do not receive the prompt, and the default Node Imaging
screen is displayed.

Figure: Node Imaging Screen

1. If the prompt is displayed, click the type of cluster you want, and then click Continue.
One of the following occurs:
If you choose the multi-hypervisor option, an additional item named ESX+AHV is included in the
Hypervisor Type list and selected by default.
If you choose the single-hypervisor option, Foundation displays the default Image Selection screen
shown earlier.

2. From the Available Packages list, select the AOS installer package that you want to use.
Note: The list displays the packages that are available in the ~/foundation/nos directory
on the Foundation VM. If the desired AOS package does not appear in the list, or if the list is
empty, you must upload a package from your workstation to the ~/foundation/nos directory. To
upload the package to the directory, perform the next step.

3. If you want to upload a package to the ~/foundation/nos directory, do the following:

a. Click Manage above the Available Packages list.

b. Click Add, and then click Choose File.

c. Browse to and upload the package that you want to use, and then click Close.
The package you uploaded appears in the Available Packages list.



d. From the Available Packages list, select the AOS installer package that you uploaded.

4. From the Hypervisor Type list, select the hypervisor type that you want to install (for multi-hypervisor
clusters, the selected hypervisor is installed on the compute nodes and AHV is installed on the storage-
only nodes).
Selecting Hyper-V as the hypervisor displays a SKU list.

Caution: To install Hyper-V, the nodes must have a 64 GB DOM. Attempts to install Hyper-
V on nodes with less DOM capacity will fail. See Hyper-V Installation Requirements on
page 65 for additional considerations when installing a Hyper-V cluster.

5. From the Available Packages list, select the hypervisor ISO image that you want to use.
If the hypervisor ISO that you want to use is not listed, click Manage above the Available Packages
list, upload the ISO file that you want to use, and then select the file from the Available Packages list,
as described earlier for uploading an AOS package.

6. From the list of AHV bundles for the storage-only nodes, backup nodes, or multi-cluster nodes (based
on the mix of selected nodes, the list is named Available AHV Packages (Storage Only Nodes),
Available AHV Packages (Backup Nodes), or just Available AHV Packages), select the AHV package
that you want to use.
If the AHV ISO that you want to use is not listed, click Manage above the Available Packages list,
upload the ISO file that you want to use, and then select the file from the Available Packages list, as
described earlier for uploading an AOS package.

7. [Hyper-V only] From the Choose Hyper-V SKU list, select the SKU for the Hyper-V version to use.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, and Datacenter
with GUI. This list appears only if you select Hyper-V.

Note: See Hyper-V Installation Requirements on page 65 for additional considerations when installing a Hyper-V cluster.

8. When all the settings are correct, do one of the following:


To create a new cluster, click the Next button at the bottom of the screen (see Configuring Cluster
Parameters on page 46).
To start imaging immediately (bypassing cluster configuration), click the Verify and Install button at
the top of the screen (see Monitoring Progress on page 49).

Configuring Cluster Parameters


Before you begin: Complete Configuring Image Parameters on page 44.
The Clusters configuration screen appears. This screen allows you to create one or more clusters and
assign nodes to those clusters. It also allows you to enable diagnostic and health tests after creating the
cluster(s).



Figure: Cluster Configuration Screen

1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster in the
Cluster Creation section at the top of the screen.
This section includes a table that is empty initially. A blank line appears in the table for the new cluster.
Enter the following information in the indicated fields:

a. Cluster Name: Enter a cluster name.

b. External IP: Enter an external (virtual) IP address for the cluster.


This field sets a logical IP address that always points to an active Controller VM (provided the cluster
is up), which removes the need to enter the address of a specific Controller VM. This parameter is
required for Hyper-V clusters and is optional for ESXi and AHV clusters. (This applies to NOS 4.0 or
later; it is ignored when imaging an earlier NOS release.)

c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL.
Enter a comma separated list to specify multiple server addresses in this field (and the next two
fields).

d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL.
You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable
or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start.

Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active
Directory domain controller.

e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL.

f. Max Redundancy Factor: Select a redundancy factor (2 or 3) for the cluster from the pull-down list.
This parameter specifies the number of times each piece of data is replicated in the cluster (either 2
or 3 copies). It sets how many simultaneous node failures the cluster can tolerate and the minimum
number of nodes required to support that protection.



Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of
any single node or drive.
Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure
of any two nodes or drives in different blocks. A redundancy factor of 3 requires that the cluster
have at least five nodes, and it can be enabled only when the cluster is created. It is an option on
NOS release 4.0 or later. (In addition, containers must have replication factor 3 for guest VM data
to withstand the failure of two nodes.)

Note: For single-node replication target clusters, you can configure only a redundancy factor
of 2.
In single-node replication target clusters, all the nodes are added automatically; you
need to enter just the name of the cluster.


2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in
the Post Image Testing section.



Check the Diagnostics box to run a diagnostic utility on the cluster. The diagnostic utility analyzes
several performance metrics on each node in the cluster. These metrics indicate whether the cluster
is performing properly. The results are stored in the ~/foundation/logs/diagnostics directory.

Note: You must download the appropriate diagnostics test file(s) from the support portal to
run this test (see Preparing a Workstation on page 30).
Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of
tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/
logs/ncc directory.

3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes
field to be included in that cluster.
A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes
to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot
be assigned to more than one cluster.
Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to
add to an existing cluster, which can be done through the web console or nCLI at a later time.

4. When all settings are correct, click the Run Installation button at the top of the screen to start the
installation process (see Monitoring Progress on page 49).

Monitoring Progress
Before you begin: Complete Configuring Cluster Parameters on page 46 (or Configuring Image
Parameters on page 44 if you are not creating a cluster).
When all the global, node, and cluster settings are correct, do the following:

1. Click the Run Installation button at the top of the screen.


This starts the installation process. First, the IPMI port addresses are configured. The IPMI port
configuration processing can take several minutes depending on the size of the cluster.

Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation
process stops before imaging any of the nodes. To correct a port configuration problem, see
Fixing IPMI Configuration Problems on page 71.

2. Monitor the imaging and cluster creation progress.


If IPMI port addressing is successful, Foundation moves to node imaging and displays a progress
screen. The progress screen includes the following sections:
Progress bar at the top (blue during normal processing or red when there is a problem).
Cluster Creation Status section with a line for each cluster being created (status indicator, cluster
name, progress message, and log link).
Node Status section with a line for each node being imaged (status indicator, IPMI IP address,
progress message, and log link).



Figure: Foundation Progress Screen: Ongoing Installation

The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45
minutes. You can monitor overall progress by clicking the Log link at the top, which displays the
service.log contents in a separate tab or window. Click on the Log link for a node to display the log file
for that node in a separate tab or window.

Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.
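For example, a 48-node installation is processed in three batches (20, 20, and 8 nodes), so the imaging phase takes roughly 3 x 45 minutes, or about two and a quarter hours.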
When installation moves to cluster creation, the status message for each cluster (in the Cluster
Creation Status section) displays the percentage complete and current step. Cluster creation
happens quickly, but this step could take some time if you selected the diagnostic and NCC post-
creation tests. Click on the Log link for a cluster to display the log file for that cluster in a separate
tab or window. (The log file is not available until after cluster creation begins, so wait for cluster
progress reporting to start before clicking this link.)
When processing completes successfully, an "Installation Complete" message appears, along with
a green check mark in the Status field for each node and cluster. This means IPMI configuration
and imaging (both hypervisor and Nutanix Controller VM) across all the nodes in the cluster was
successful, and cluster creation was successful (if enabled).



Figure: Foundation Progress Screen: Successful Installation

3. If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 72. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.

Figure: Foundation Progress Screen: Failed Installation



Cleaning Up After Installation
Some information persists after imaging a cluster using Foundation. If you want to use the same
Foundation VM to image another cluster, the persistent information must be removed before attempting
another installation.

To remove the persistent information after an installation, go to a configuration screen and then click the
Reset Configuration option from the gear icon pull-down list in the upper right of the screen.
Clicking this option reinitializes the progress monitor, destroys the persisted configuration data, and
returns the Foundation environment to a fresh state.

Figure: Reset Configuration



4
Downloading Installation Files
This procedure describes how to download the files required for installing Nutanix software by using
Foundation.
Nutanix maintains a support portal where you can download the Foundation and AOS files required to do a
field installation. To download the required files, do the following:

1. Open a web browser and log in to the Nutanix Support portal: http://portal.nutanix.com.

2. Click Downloads from the main menu (at the top) and then select the desired page: AOS (NOS) to
download AOS files and Foundation to download Foundation files.

Figure: Nutanix Support Portal Main Screen

3. To download a Foundation installation bundle (see Foundation Files on page 54), go to the
Foundation page and do one (or more) of the following:
To download the Java applet used in discovery (see Creating a Cluster on page 7), click link to
jnlp from online bundle. This downloads nutanix_foundation_applet.jnlp and allows you to start
discovery immediately.
To download an offline bundle containing the Java applet, click offline bundle. This downloads an
installation bundle that can be taken to environments which do not allow Internet access.
To download the standalone Foundation bundle (see Imaging Bare Metal Nodes on page 29), click
Foundation_VM-version#.ovf.tar. (The exact file name varies by release.) This downloads an
installation bundle that includes OVF and VMDK files.
To download an installation bundle used to upgrade standalone Foundation, click
foundation-version#.tar.gz.
To download the current hypervisor ISO whitelist, click iso_whitelist.json.

Note: Use the filter option to display the files for a specific Foundation release.



Figure: Foundation Download Screen

4. To download an AOS release bundle, go to the AOS (NOS) page and click the button or link for the
desired release.
Clicking the Download version# button in the upper right of the screen downloads the latest
AOS release. You can download an earlier AOS release by clicking the appropriate Download
version# link under the ADDITIONAL RELEASES heading. The tar file to download is named
nutanix_installer_package-version#.tar.gz.

5. To download AHV, go to the Hypervisor Details page and do one of the following:
Download an AHV tar archive. AHV tar archives are named host-bundle-el6.nutanix.version#.tar.gz, where version# is the AHV version.

Download an AHV tar archive if the tar archive that is included by default in the standalone
Foundation VM is not of the desired version. If you want to use a downloaded AHV tar archive to
image a node, you might also have to download the latest hypervisor ISO whitelist when using
Foundation.
In the Foundation VM, you can also generate an AHV ISO file from the downloaded tar archive and
use the ISO file to manually image nodes.

Download an AHV ISO file. AHV ISO files are named installer-el6.nutanix.version#.iso, where
version# is the AHV version.
Download the ISO file if you want to manually image each node.

For information about using the Foundation VM to image nodes, see Imaging Bare Metal Nodes on
page 29. For information about manually imaging nodes, see Appendix: Imaging a Node (Phoenix) on
page 80.

Foundation Files
The following table describes the files required to install Foundation. Use the latest Foundation version
available unless instructed by Nutanix customer support to use an earlier version.



File Name Description

nutanix_foundation_applet.jnlp This is the Foundation Java applet. This is the file needed for doing a Controller VM-based installation (see Creating a Cluster on page 7) supported in Foundation 3.0 and later releases.
FoundationApplet-offline.zip This is an installation bundle that includes the
Foundation Java applet. Download and extract this
bundle for environments where Internet access is
not allowed.
Foundation_VM-version#.ovf This is the Foundation VM OVF configuration file
where version# is the Foundation version number.
Foundation_VM-version#-disk1.vmdk This is the Foundation VM VMDK file.
Foundation_VM-version#-disk1.qcow2 This is the Foundation VM data disk in qcow2
format.
Foundation_VM-version#.ovf.tar This is a Foundation tar file that contains
the Foundation_VM-version#.ovf and
Foundation_VM-version#-disk1.vmdk files.
Foundation 2.1 and later releases package the OVF
and VMDK files into this TAR file.
Foundation-version#.tar.gz This is a tar file used for upgrading when
Foundation is already installed.
host-bundle-el6.nutanix.version#.tar.gz This is a tar file used to generate an AHV ISO
image.
nutanix_installer_package-version#.tar.gz This is the tar file used for imaging the desired
AOS release where version# is a version and build
number. Go to the Acropolis (NOS) download
page on the support portal to download this file.
(You can download all the other files from the
Foundation download page.)
iso_whitelist.json This file contains a list of supported ISO images.
Foundation uses the whitelist to validate an ISO
file before imaging (see Selecting the Images on
page 17).
VirtualBox-version#-OSX.dmg This is the Oracle VM VirtualBox installer for Mac
OS where version# is a version and build number.
VirtualBox-version#-Win.exe This is the Oracle VM VirtualBox installer for
Windows.



5
Hypervisor ISO Images
An AHV ISO image is included as part of Foundation. However, customers must provide an ESXi or Hyper-
V ISO image for those hypervisors. Check with your VMware or Microsoft representative, or download an
ISO image from an appropriate VMware or Microsoft support site:
VMware Support: http://www.vmware.com/support.html
Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available
on the VMware website (www.vmware.com) at Downloads > Product Downloads >
vSphere > Custom ISOs.
Microsoft Technet: http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx
Microsoft EA portal: http://www.microsoft.com/licensing/licensing-options/enterprise.aspx
MSDN: http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052
The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate ISO
images. ISO files are identified in the whitelist by their MD5 value (not file name), so verify that the MD5
value of the ISO you want to use matches the corresponding one in the whitelist. You can download the
current whitelist from the Foundation page on the Nutanix support portal: https://portal.nutanix.com/#/page/
foundation/list
Note: The ISO images in the whitelist are the ones supported in Foundation, but some might no
longer be available from the download sites.
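For example, assuming the ISO and iso_whitelist.json are in the current directory (the ISO file name below is a placeholder), you can compute the MD5 value on the Foundation VM or any Linux system and check whether it appears in the whitelist; on Mac OS, use the md5 command instead of md5sum:
$ md5sum esxi-installer.iso
$ grep "$(md5sum esxi-installer.iso | awk '{print $1}')" iso_whitelist.json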

The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.

iso_whitelist.json Fields

Name Description

(n/a) Displays the MD5 value for that ISO image.


min_foundation Displays the earliest Foundation version that
supports this ISO image. For example, "2.1"
indicates you can install this ISO image using
Foundation version 2.1 or later (but not an earlier
version).
hypervisor Displays the hypervisor type (esx, hyperv, or kvm).
The "kvm" designation means AHV. Entries with
a "linux" hypervisor are not available; they are for
Nutanix internal use only.
min_nos Displays the earliest AOS version compatible with
this hypervisor ISO. A null value indicates there are
no restrictions.
friendly_name Displays a descriptive name for the hypervisor
version, for example "ESX 6.0" or "Windows
2012r2".



Name Description
version Displays the hypervisor version, for example "6.0"
or "2012r2".
unsupported_hardware Lists the Nutanix models on which this ISO cannot
be used. A blank list indicates there are no model
restrictions. However, conditional restrictions such
as the limitation that Haswell-based models support
only ESXi version 5.5 U2a or later may not be
reflected in this field.
skus (Hyper-V only) Lists which Hyper-V types (datacenter, standard,
and free) are supported with this ISO image. In
most cases, only datacenter and standard are
supported.
compatible_versions Reflects through regular expressions the hypervisor
versions that can co-exist with the ISO version in an
Acropolis cluster (primarily for internal use).
deprecated (optional field) Indicates that this hypervisor image is not
supported by the mentioned Foundation version
and higher versions. If the value is null, the image
is supported by all Foundation versions to date.

The following are sample entries from the whitelist for an ESX and an AHV image.
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"unsupported_hardware": [],
"compatible_versions": {
"esx": ["^6\\.0.*"]
},

"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},



6
Network Requirements
When configuring a Nutanix block, you will need to ask for the IP addresses of components that should
already exist in the customer network, as well as IP addresses that can be assigned to the Nutanix cluster.
You will also need to make sure to open the software ports that are used to manage cluster components
and to enable communication between components such as the Controller VM, Web console, Prism
Central, hypervisor, and the Nutanix hardware.

Existing Customer Network

You will need the following information during the cluster configuration:
Default gateway
Network mask
DNS server
NTP server
You should also check whether a proxy server is in place in the network. If so, you will need the IP address
and port number of that server when enabling Nutanix support on the cluster.

New IP Addresses

Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:
IPMI interface
Hypervisor host
Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller
VMs and hypervisor hosts can be on this network, which must be isolated and protected.

Software Ports Required for Management and Communication

The following Nutanix network port diagrams show the ports that must be open for supported hypervisors.
The diagrams also show the ports that must be opened for infrastructure services.



Figure: Nutanix Network Port Diagram for VMware ESXi

Figure: Nutanix Network Port Diagram for AHV



Figure: Nutanix Network Port Diagram for Microsoft Hyper-V



7
Controller VM Memory Configurations
Controller VM memory allocation requirements differ depending on the models and the features that are
being used.

CVM Memory and vCPU Configurations (G5/Broadwell)


This topic lists the recommended Controller VM memory allocations for workload categories.

Note: Nutanix Engineering has determined that memory requirements for each Controller VM in
your cluster are likely to increase for subsequent releases. Nutanix recommends that you plan to
upgrade memory.

Controller VM Memory Configurations for Base Models

Platform Default

Platform    Recommended Memory (GB)    Default Memory (GB)    vCPUs
Default configuration for all platforms unless otherwise noted    16    16    8

The following table shows the minimum amount of memory required for the Controller VM on each node for
platforms that do not follow the default. For the workload translation into models, see Platform Workload
Translation (G5/Broadwell) on page 62.
Note: To calculate the number of vCPUs for your model, use the number of physical cores per
socket in your model. The minimum number of vCPUS your Controller VM can have is eight and
the maximum number is 12.
If your CPU has less than eight logical cores, allocate a maximum of 75 percent of the cores of a
single CPU to the Controller VM. For example, if your CPU has 6 cores, allocate 4 vCPUs.

Nutanix Broadwell Models

The following table displays the categories for the platforms.


Platform Default Memory (GB)
VDI, server virtualization 16
Storage Heavy 24
Storage Only 24

Large server, high-performance, all-flash 32

Platform Workload Translation (G5/Broadwell)


The following table maps workload types to the corresponding Nutanix and Lenovo models.

Workload Features    Nutanix NX Model    Lenovo HX Model

VDI NX-1065S-G5 HX3310


SX-1065-G5 HX3310-F
NX-1065-G5 HX2310-E
NX-3060-G5 HX3510-G
NX-3155G-G5 HX3710
NX-3175-G5 HX3710-F
- HX2710-E

Storage Heavy NX-6155-G5 HX5510


NX-8035-G5 -
NX-6035-G5 -

Storage Only nodes NX-6035C-G5 HX5510-C

High Performance and All-Flash NX-8150-G5 HX7510


NX-9060-G5 -

CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge)


This topic lists the recommended Controller VM memory allocations for models and features.
Note: Nutanix Engineering has determined that memory requirements for each Controller VM in
your cluster are likely to increase for subsequent releases. Nutanix recommends that you plan to
upgrade memory.

Controller VM Memory Configurations for Base Models

Platform Default

Platform    Recommended Memory (GB)    Default Memory (GB)    vCPUs
Default configuration for all platforms unless otherwise noted    16    16    8

The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.



Nutanix Platforms

Platform    Recommended Memory (GB)    Default Memory (GB)    vCPUs
NX-1020 12 12 4
NX-6035C 24 24 8
NX-6035-G4 24 16 8
NX-8150 32 32 8
NX-8150-G4 32 32 8
NX-9040 32 16 8
NX-9060-G4 32 16 8

Dell Platforms

Platform    Recommended Memory (GB)    Default Memory (GB)    vCPUs

XC730xd-24    32    16    8
XC6320-6AF    32    16    8
XC630-10AF    32    16    8

Lenovo Platforms

Platform Default Memory (GB) vCPUs

HX-3500    24    8
HX-5500    24    8
HX-7500    24    8

CVM Memory Configurations for Features


The following table lists the minimum amount of memory required when enabling features.
The memory size requirements are in addition to the default or recommended memory available for your
platform. The maximum additional memory required is 16 GB even if the total indicated for the features is
more than that.
Note: Total CVM memory required = recommended platform memory + memory required for each
enabled feature (max 16 GB)

Features Memory (GB)


Capacity tier deduplication (includes performance tier deduplication) 16
Redundancy factor 3 8
Performance tier deduplication 8
Cold-tier nodes + capacity tier deduplication 4

Capacity tier deduplication + redundancy factor 3 16
Self-service portal (AHV only) Variable

Note:
SSP requires a minimum of 24 GB of memory for the CVM. If the CVMs
already have 24 GB of memory, no additional memory is necessary to run
SSP.
If the CVMs have less than 24 GB of memory, increase the memory to 24 GB
to use SSP.
If the cluster is using any other features that require additional CVM memory,
add 4 GB for SSP in addition to the amount needed for the other features.
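
As a worked example of the formula in the note above (values taken from the tables in this chapter): an
NX-8150 has a recommended Controller VM memory of 32 GB. If you enable capacity tier deduplication
together with redundancy factor 3 (16 GB per the table above), the total Controller VM memory required is
32 GB + 16 GB = 48 GB. If the sum for all enabled features works out to more than 16 GB, only 16 GB of
additional memory is required.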



8: Hyper-V Installation Requirements
Ensure that the following requirements are met before installing Hyper-V.

Windows Active Directory Domain Controller

Requirements:
The primary domain controller must be running at least Windows Server 2008 R2.
Note: If you use a Volume Shadow Copy Service (VSS) based backup tool (for example, Veeam),
the functional level of Active Directory must be 2008 or higher.
Active Directory Web Services (ADWS) must be installed and running. By default, connections are
made over TCP port 9389, and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on with a domain
administrator account to a Windows host (other than the domain controller) that is joined to the
same domain and has the RSAT-AD-PowerShell feature installed, and run the following PowerShell
command. If the command prints the name of the domain controller, then ADWS is installed and
the port is open.
> (Get-ADDomainController).Name

If the free version of Hyper-V is installed, the primary domain controller server must not block
PowerShell remoting.
To test this scenario, log on by using a domain administrator account in a Windows host and run the
following PowerShell command.
> Invoke-Command -ComputerName (Get-ADDomainController).Name -ScriptBlock {hostname}

If the command prints the name of the Active Directory server hostname, then PowerShell remoting to
the Active Directory server is not blocked.

The domain controller must run a DNS server.


Note: If any of the above requirements are not met, you need to manually create an Active
Directory computer object for the Nutanix storage in the Active Directory, and add a DNS entry
for the name.
Ensure that the Active Directory domain is configured correctly for consistent time synchronization.
Accounts and Privileges:
An Active Directory account with permission to create new Active Directory computer objects in the
container or Organizational Unit (OU) where the Nutanix nodes are placed. The credentials of this
account are not stored anywhere.
An account that has sufficient privileges to join a Windows host to a domain. The credentials of this
account are not stored anywhere. These credentials are only used to join the hosts to the domain.
Additional Information Required:
The IP address of the primary domain controller.



Note: The primary domain controller IP address is set as the primary DNS server on all
the Nutanix hosts. It is also set as the NTP server in the Nutanix storage cluster to keep the
Controller VM, host, and Active Directory time synchronized.
The fully qualified domain name to which the Nutanix hosts and the storage cluster is going to be joined.

SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:
The SCVMM version must be at least 2012 R2, and it must be installed on Windows Server 2012 or a
newer version.

Note: Currently, SCVMM 2016 and Windows Server 2016 are not supported.

The SCVMM server must allow PowerShell remoting.


To test this scenario, log on by using the SCVMM administrator account and run the following
PowerShell command on a Windows host other than the SCVMM host (for example, run the command
from the domain controller). If the command prints the name of the SCVMM server, then PowerShell
remoting to the SCVMM server is not blocked.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN
\username

Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain
name.

Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM
setup manually by using the SCVMM user interface.

The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the
following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username

Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory
domain name.
The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to
False. To verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration |
FL RequireSecuritySignature}

Replace scvmm_server_name with the SCVMM host name.


RequireSecuritySignature can be set to True by a domain policy; in that case, the domain policy should be
modified to set it to False. You can also change the value from True to False directly, but the change might
not persist if a policy reverts it to True. To change it, log on to the SCVMM host as a domain administrator
and run the following command in PowerShell.
Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

If you are changing the value from True to False, confirm that the policies applied to the SCVMM
host have the correct value. On the SCVMM host, run rsop.msc to review the resultant set of policy
details, and verify the value by navigating to Servername > Computer Configuration > Windows
Settings > Security Settings > Local Policies > Security Options > Microsoft network client:
Digitally sign communications (always). The value displayed in RSOP must be Disabled or Not
Defined for the change to persist. If RSOP shows the value as Enabled, update the group policies that
apply to the SCVMM server in the domain to change it to Disabled; otherwise, RequireSecuritySignature
changes back to True at a later time. After setting the policy in Active Directory and propagating it to the
domain controllers, refresh the SCVMM server policy by running the command gpupdate /force, and
confirm in RSOP that the value is Disabled.
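
As a minimal verification sketch (using only commands already shown in this section), you can refresh the
policy on the SCVMM host and then re-check the setting locally. The second command should report False
once the updated policy has been applied.
> gpupdate /force
> Get-SMBClientConfiguration | FL RequireSecuritySignature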

Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix cluster.
In this case, it is important to ensure that the time remains synchronized between the Active
Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and the
Controller VMs set their NTP server as the Active Directory server, so it should be sufficient to
ensure that the Active Directory domain is configured correctly for consistent time synchronization.

Accounts and Privileges:


When adding a host or a cluster to SCVMM, the run-as account that you specify for managing the
host or cluster must be different from the service account that was used to install SCVMM.
The run-as account must be a domain account and must have local administrator privileges on the Nutanix
hosts. This can be a domain administrator account. When the Nutanix hosts are joined to the domain,
domain administrator accounts automatically take administrator privileges on the hosts. If the
domain account used as the run-as account in SCVMM is not a domain administrator account, you
need to manually add it to the list of local administrators on each host by running sconfig.
An SCVMM domain account with administrator privileges on SCVMM and PowerShell remote execution
privileges.
If you want to install the SCVMM server, a service account with local administrator privileges on the
SCVMM server.

IP Addresses

One IP address for each Nutanix host.


One IP address for each Nutanix Controller VM.
One IP address for each Nutanix host IPMI interface.
One IP address for the Nutanix storage cluster.
One IP address for the Hyper-V failover cluster.

Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same
subnet.
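
For example, a four-node cluster requires (3*4) + 2 = 14 IP addresses: four for the Nutanix hosts, four for
the Controller VMs, four for the IPMI interfaces, one for the Nutanix storage cluster, and one for the
Hyper-V failover cluster.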

DNS Requirements

Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added
to the DNS server during domain joining.
The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which must be
added to the DNS server when the storage cluster is joined to the domain.
The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server when the failover cluster is created.
After the Hyper-V configuration, all names must resolve to an IP address on the Nutanix hosts, on the
SCVMM server (if applicable), and on any other host that needs access to the Nutanix storage (for
example, a host running the Hyper-V Manager). A quick resolution check follows below.
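
As a minimal spot check (the cluster name below is a placeholder for your own), you can verify name
resolution from any of these hosts with the standard Windows nslookup tool. The command should return
the IP address that you assigned to the Nutanix storage cluster.
> nslookup nutanix-storage-cluster-name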



Storage Access Requirements

Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by name, not the
external IP address. If you use the IP address, it directs all the I/O to a single node in the cluster and
thereby compromises performance and scalability.

Note: For external non-Nutanix hosts that need to access Nutanix SMB shares, see "Nutanix
SMB Shares Connection Requirements from Outside the Cluster" in the "Cluster Management"
chapter of Hyper-V Administration for Acropolis.

Host Maintenance Requirements

When applying Windows updates to the Nutanix hosts, restart the hosts one at a time, ensuring that the
Nutanix services come up fully in the Controller VM of the restarted host before updating the next host.
This can be accomplished by using Cluster Aware Updating and a Nutanix-provided script, which can be
plugged into the Cluster Aware Update Manager as a pre-update script. This pre-update script ensures
that the Nutanix services go down on only one host at a time, ensuring availability of storage throughout
the update procedure. For more information about cluster-aware updating, see
"Installing Windows Updates with Cluster-Aware Updating" in the "Cluster Management" chapter of
Hyper-V Administration for Acropolis.

Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the
domain policies.



9: Setting IPMI Static IP Address
You can assign a static IP address for an IPMI port by resetting the BIOS configuration.
To configure a static IP address for the IPMI port on a node, do the following:

1. Connect a VGA monitor and USB keyboard to the node.

2. Power on the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.

4. Click the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter key.

6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up window.

7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.



8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in
the pop-up window.

9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up
window.

10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network
gateway in the pop-up window.

11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup
mode.



10: Troubleshooting
This section provides guidance for fixing problems that might occur during a Foundation installation.
For help with IPMI configuration problems in a bare metal workflow, see Fixing IPMI Configuration
Problems on page 71.
For help with imaging problems, see Fixing Imaging Problems on page 72.
For answers to other common questions, see Frequently Asked Questions (FAQ) on page 73.

Fixing IPMI Configuration Problems


In a bare metal workflow, when the IPMI port configuration fails for one or more nodes in the cluster, or
when it succeeds but type detection fails because it cannot reach an IPMI IP address, the installation
process stops before imaging any of the nodes. (Foundation does not go to the imaging step after an IPMI
port configuration failure, but it tries to configure the port address on all nodes before stopping.) Possible
reasons for a failure include the following:
One or more IPMI MAC addresses are invalid or there are conflicting IP addresses. Go to the Block &
Node Config screen and correct the IPMI MAC and IP addresses as needed (see Configuring Node
Parameters on page 37).
There is a user name/password mismatch. Go to the Global Configuration screen and correct the IPMI
username and password fields as needed (see Configuring Global Parameters on page 42).
One or more nodes are connected to the switch through the wrong network interface. Go to the back of
the nodes and verify that the first 1GbE network interface of each node is connected to the switch (see
Setting Up the Network on page 35).
The Foundation VM is not in the same broadcast domain as the Controller VMs for discovered nodes
or the IPMI interface for added (bare metal or undiscovered) nodes. This problem typically occurs
because (a) you are not using a flat switch, (b) some node IP addresses are not in the same subnet as
the Foundation VM, and (c) multi-homing was not configured.
If all the nodes are in the Foundation VM subnet, go to the Block & Node Config screen and correct
the IP addresses as needed (see Configuring Node Parameters on page 37).
If the nodes are in multiple subnets, go to the Global Configuration screen and configure multi-
homing (see Configuring Global Parameters on page 42).
The IPMI interface is not set to failover. You can check for this through the BIOS (see Setting IPMI
Static IP Address on page 69 to access the BIOS setup utility).
To identify and resolve IPMI port configuration problems, do the following:

1. Go to the Block & Node Config screen and review the problem IP address for the failed nodes (nodes
with a red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting information. This
can help you diagnose the problem. See the service.log file (in /home/nutanix/foundation/log) and
the individual node log files for more detailed information.



Figure: Foundation: IPMI Configuration Error

2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button
at the top of the screen.

Figure: Configure IPMI Button

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at
the top of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those
nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case
you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP
Address on page 69).

Fixing Imaging Problems


When imaging fails for one or more nodes in the cluster, the progress bar turns red and a red mark
appears next to the hypervisor address field for any node that was not imaged successfully. Possible
reasons for a failure include the following:
A type detection failure occurred. Check connectivity to the IPMI interface (bare metal workflow).
There were network connectivity issues such as the following:
The connection is dropping intermittently. If intermittent failures persist, look for conflicting IPs.
[Hyper-V only] SAMBA is not up. If Hyper-V complains that it failed to mount the install share, restart
SAMBA with the command " sudo service smb restart ".
Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free up some
space by deleting extraneous ISO images. In addition, a Foundation crash could leave a /tmp/tmp*
directory that contains a copy of an ISO image which you can unmount (if necessary) and delete.
Foundation needs about 9 GB of free space for Hyper-V and about 3 GB for ESXi or AHV.
The host boots but complains it cannot reach the Foundation VM. The message varies per hypervisor.
For example, on ESXi you might see a "ks.cfg:line 12: "/.pre" script returned with an error" error
message. Make sure you have assigned the host an IP address on the same subnet as the Foundation
VM or you have configured multi-homing (see Configuring Global Parameters on page 42). Also
check for IP address conflicts.
To identify and resolve imaging problems, do the following:

1. See the individual log file for any failed nodes for information about the problem.
Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/
foundation.out[.timestamp]
Bare metal location for Foundation logs: /home/nutanix/foundation/log



2. When you have corrected the problems and are ready to try again, click the Image Nodes (bare metal
workflow) button.

Figure: Image Nodes Button (bare metal)

3. Repeat the preceding steps as necessary to fix all the imaging errors.
If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a
time (see Appendix: Imaging a Node (Phoenix) on page 80).

Frequently Asked Questions (FAQ)


This section provides answers to some common Foundation questions.

Installation Issues

What steps should I take when I encounter a problem?


Click the appropriate log link in the progress screen (see Creating the Cluster on page 20 or Monitoring
Progress on page 49) to view the relevant log file. In most cases the log file should provide some
information about the problem near the end of the file. If that information (plus the information in this
troubleshooting section) is sufficient to identify and solve the problem, fix the issue and then restart the
imaging process.
If you were unable to fix the problem, open a Nutanix support case. You can do this from the Nutanix
support portal (https://portal.nutanix.com/#/page/cases/form?targetAction=new). Upload relevant log
files as requested. The log files are in the following locations:
Standalone (bare metal) location for Foundation logs: /home/nutanix/foundation/log in your
Foundation VM. This directory contains a service.log file for Foundation-related log messages, a
log file for each node being imaged (named node_cvm_ip_addr.log), a log file for each cluster being
created (named cluster_cluster_name.log, cluster_1.log, and so on), http.access and http.error
files for server-related log messages, debug.log file that records every bit of information Foundation
outputs, and api.log file that records certain requests made to the Foundation API. Logs from
past installations are stored in /home/nutanix/foundation/log/archive. In addition, the state of the
current install process is stored in /home/nutanix/foundation/persisted_config.json. You can
download the entire log archive from http://foundation_ip:8000/foundation/log_archive_tar (see the
download example after this list).
Controller VM location for Foundation logs: ~/data/logs/foundation (see preceding content
description) and ~/data/logs/foundation.out[.timestamp], which corresponds to the service.log
file.
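
For example, the log archive can be fetched from a workstation that can reach the Foundation VM by using
a standard HTTP client (a minimal sketch; wget is assumed to be available, and foundation_ip is your
Foundation VM address):
$ wget http://foundation_ip:8000/foundation/log_archive_tar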

I want to troubleshoot the operating system installation during cluster creation.


Point a VNC console to the hypervisor host IP address of the target node at port 5901.

I need to restart Foundation on the Controller VM.


To restart Foundation, log on to the Controller VM with SSH and then run the following command:
nutanix@cvm$ pkill foundation && genesis restart

My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IP addresses are reachable from the Foundation VM. (On rare occasions the
IPMI IP assignment takes some time.) If you get a complaint about authentication, double-check your
password. If the problem persists, try resetting the BMC.



Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.
Verify that the LAN interface is set to failover mode in the IPMI settings for each node. You can find this
setting by logging into IPMI and going to Configuration > Network > Lan Interface. Verify that the
setting is Failover (not Dedicate).
The diagnostic box was checked to run after installation, but that test (diagnostics.py) does not
complete (hangs, fails, times out).
Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables might not
provide the performance necessary to run this test at a reasonable speed.
Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous hypervisor>
and the install hangs.
The boot order for one or more nodes might be set incorrectly to select the USB over SATA DOM as the
first boot device instead of the CDROM. To fix this, boot the nodes into BIOS mode and either select
"restore optimized defaults" (F3 as of BIOS version 3.0.2) or give the CDROM boot priority. Reboot the
nodes and retry the installation.

I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for
the call back function, and is there a way I can avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up the terminal
in the Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start

Refresh the Foundation web page. If the nodes are still stuck, reboot them.

I need to reset a block to the default state.


Using the bare metal imaging workflow, download the desired Phoenix ISO image for AHV from the
support portal (see https://portal.nutanix.com/#/page/phoenix/list). Boot each node in the block to that
ISO and follow the prompts until the re-imaging process is complete. You should then be able to use
Foundation as usual.

The cluster create step is not working.


If you are installing NOS 3.5 or later, check the service.log file for messages about the problem. Next,
check the relevant cluster log (cluster_X.log) for cluster-specific messages. The cluster create step
in Foundation is not supported for earlier releases and will fail if you are using Foundation to image a
pre-3.5 NOS release. You must create the cluster manually (after imaging) for earlier NOS releases.

I want to re-image nodes that are part of an existing cluster.


Do a cluster destroy prior to discovery. (Nodes in an existing cluster are ignored during discovery.)
My Foundation VM is complaining that it is out of disk space. What can I delete to make room?
Unmount any temporarily-mounted file systems using the following commands:
$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*

If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.

I keep seeing the message "tar: Exiting with failure status due to previous errors
'tar rf /home/nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./persisted_config.json' failed; error ignored."



This is a benign message. Foundation archives your persisted configuration file (persisted_config.json)
alongside the logs. Occasionally, there is no configuration file to back up. This is expected, and you may
ignore this message with no ill consequences.

Imaging fails after changing the language pack.


Do not change the language pack. Only the default English language pack is supported. Changing the
language pack can cause some scripts to fail during Foundation imaging. Even after imaging, character
set changes can cause problems for NOS.

[Hyper-V] I cannot reach the CVM console via ssh. How do I get to its console?
See KB article 1701 (https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008fJhCAI).

[ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it. See KB
article 1467 ( https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008dDxCAI).

Network and Workstation Issues

I am having trouble installing VirtualBox on my Mac.


Turning off the WiFi can sometimes resolve this problem. For help with VirtualBox issues, see https://
www.virtualbox.org/wiki/End-user_documentation.
There can be a problem when the USB Ethernet adapter is listed as a 10/100 interface instead of a 1G
interface. To support a 1G interface, it is recommended that MacBook Air users connect to the network
with a Thunderbolt network adapter rather than a USB network adapter.

I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM
on VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see https://
forums.virtualbox.org/viewtopic.php?f=8&t=58767.

I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically
creates a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run
the following commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now

This should reboot your machine and reset your adapter to eth0.

I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but
discovery is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6
link-local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in.
(If they are, the Controller VMs might choose to direct their traffic over that interface and never reach
your Foundation VM.) If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and
the 10G ports are connected.

The switch is dropping my IPMI connections in the middle of imaging.


If your network connection seems to be dropping out in the middle of imaging, try using an unmanaged
switch with spanning tree protocol disabled.

Foundation is stalled on the ping home phase.



The ping test will wait up to two minutes per NIC to receive a response, so a long delay in the ping
phase indicates a network connection issue. Check that your 10G cables are unplugged and your 1G
connection can reach Foundation.

How do I install on a 10/100 switch?


A 10/100 switch is not recommended, but it can be used for a few nodes. However, you may see
timeouts. It is highly recommended that you use a 1G or 10G switch if one is available to you.

Informational Topics

How can I determine whether a node was imaged with Foundation or standalone Phoenix?
A node imaged using standalone Phoenix will have the file /etc/nutanix/foundation_version in it,
but the contents will be unknown instead of a valid foundation version.
A node imaged using Foundation will have the file /etc/nutanix/foundation_version in it with a
valid foundation version.
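
For example, a minimal check from the Controller VM (the file path is as described above):
nutanix@cvm$ cat /etc/nutanix/foundation_version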
Does first boot work when run more than once?
First boot creates a failure marker file whenever it fails and a success marker file whenever it
succeeds. If the first boot script needs to be executed again, delete these marker files and manually
execute the script.
Do the first boot marker files contain anything?
They are just empty files.

Why might first boot fail?


Possible reasons include the following:
First boot may take more time than expected, in which case Foundation might time out.
NIC team creation fails.
The Controller VM has a kernel panic when it boots.
Hostd does not come up in time.

What is the timeout for first boot?


The timeout is 90 minutes. A node may restart several times (requirements from certain driver
installations) during the execution of the first boot script, and this can increase the overall first boot time.

How does the Foundation process differ on a Dell system?


Foundation uses a different tool called racadm to talk to the IPMI interface of a Dell system, and the files
which have the hardware layout details are different. However, the overall Foundation work flow (series
of steps) remains the same.

How does the Foundation service start in the Controller VM-based and standalone versions?
Standalone: Manually start the Foundation service using foundation_service start (in the ~/
foundation/bin directory).
Controller VM-based: Genesis service takes care of starting the Foundation service. If the
Foundation service is not already running, use the genesis restart command to start Foundation. If
the Foundation service is already running, a genesis restart will not restart Foundation. You must
manually kill the Foundation service that is running currently before executing genesis restart. The
genesis status command lists the services running currently along with their PIDs.
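
Putting the Controller VM-based steps together, a minimal sketch (commands as shown elsewhere in this
guide) looks like this:
nutanix@cvm$ genesis status
nutanix@cvm$ pkill foundation && genesis restart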

Why doesn't the genesis restart command stop Foundation?



Genesis restarts only the services required for a cluster to be up and running. Stopping Foundation
could cause failures to current imaging sessions. For example, when expanding a cluster Foundation
may be in the process of imaging a node, which should not be disrupted by restarting Genesis.

How is the installer VM created?


The Qemu library is part of Phoenix. The qemu command starts the VM by taking a hypervisor ISO and
disk details as input. This command is simply executed on Phoenix to launch the installer VM.

How do you validate that installation is complete and the node is ready with regards to firstboot?
This can be validated by checking the presence of a first boot success marker file. The marker file
varies per hypervisor:
ESXi: /bootbank/first_boot.log
AHV: /root/.firstboot_success
HyperV: D:\markers\firstboot_success
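
For example, on an AHV host a minimal check for the success marker (path from the list above) is:
root@host# ls -l /root/.firstboot_success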

Does Repair CVM re-create partitions?


Repair CVM images AOS alone and recreates the partitions on the SSD. It does not touch the
SATADOM, which contains the hypervisor.

Can I use older Phoenix ISOs for manual imaging?


Use a Phoenix ISO that contains the AOS installation bundle and hypervisor ISO in it. The Makefile has a
separate target for building such a standalone Phoenix ISO.

What are the pre-checks run when a node is added?


The hypervisor type and version should match between the existing cluster and the new node.
The AOS version should match between the existing cluster and the new node.

Can I get a map of percent completion to step?


No. The percent completion does not have a one-to-one mapping to the step. Percent completion
depends on the different tasks which actually get executed during imaging.

Do the log folders contain past imaging session logs?


Yes. All the previous imaging session logs are compressed (on a session basis) and archived in the
folder ~/foundation/log/archive.

If I have two clusters in my lab, can I use one to do bare-metal imaging on the other?
No. This is because the tools and packages which are required for bare-metal imaging are typically not
present in the Controller VM.

How do you add a new node that needs to be imaged to an existing cluster?
If the cluster is running AOS 4.5 or later and the node also has 4.5 or later, you can use the "Expand
Cluster" option in the Prism web console. This option employs Foundation to image the new node (if
required) and then adds it to the existing cluster. You can also add the node through the nCLI: ncli
cluster add-node node-uuid=<uuid>. The UUID value can be found in the factory_config.json file on
the node.

Is it required to supply IPMI details when using the Controller VM-based Foundation?
It is optional to provide IPMI details in Controller VM-based Foundation. If IPMI information is provided,
Foundation will try to configure the IPMI interface as well.

Is it valid to use a share to hold AOS installation bundles and hypervisor ISO files?
AOS installation bundles and hypervisor ISO files can be present anywhere, but there needs to be a
link (as appropriate) in ~/foundation/nos or ~/foundation/isos/hypervisor/[esx|kvm|hyperv]/ to
the appropriate share location. Foundation will pick up files in these locations only. As long as a file is
accessible from these standard locations inside Foundation using a link, Foundation will pick it up.
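
A minimal sketch, assuming a share is already mounted at the hypothetical path /mnt/isos on the
Foundation VM (the file names are placeholders):
$ ln -s /mnt/isos/nutanix_installer_package-version#.tar.gz ~/foundation/nos/
$ ln -s /mnt/isos/esxi_installer.iso ~/foundation/isos/hypervisor/esx/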

Where is Foundation located in the Controller VM?


/home/nutanix/foundation

How can I determine if a particular (standalone) Foundation VM can image a given cluster?
Execute the following command on the Foundation VM and see whether it returns successfully (exit
status 0):
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password fru

If this command is successful, the Foundation VM can be used to image the node. This is the command
used by Foundation to get hardware details from the IPMI interface of the node. The exact tool used for
talking to the SMC IPMI interface is the following:
java -jar SMCIPMITool.jar ipmi_ip username password shell

If this command is able to open a shell, imaging will not fail because of an IPMI issue. Any other errors
like violating minimum requirements will be shown only after Foundation starts imaging the node.

How do I determine whether a particular hypervisor ISO will work?


The md5 hashes of all qualified hypervisor ISO images are listed in the iso_whitelist.json file, which is
located in ~/foundation/config/. The latest version of the iso_whitelist.json file is available from the
Nutanix support portal (see Hypervisor ISO Images on page 56).
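
For example, a minimal sketch for checking a downloaded ISO against the whitelist (the ISO file name and
the md5 value are placeholders):
$ md5sum ~/foundation/isos/hypervisor/esx/esxi_installer.iso
$ grep -i computed_md5_value ~/foundation/config/iso_whitelist.json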

How does Foundation mount an ISO over IPMI?


For SMC, Foundation uses the following commands:
cd foundation/lib/bin/smcipmitool
java -jar SMCIPMITool.jar ipmi_ip ipmi_username ipmi_password shell
vmwa dev2iso <path to iso file>

The java command starts a shell with access to the remote IPMI interface. The vmwa command
mounts the ISO file virtually over IPMI. Foundation then opens another terminal and uses the
following commands to set the first boot device to CD ROM and restart the node.
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis bootdev cdrom
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis power reset

For Dell, Foundation uses the following commands:


racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -d
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -c -l nfs_share_path_to_iso_file -u nutanix -p nutanix/4u
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -s
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o cfgServerBootOnce 1
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o cfgServerFirstBootDevice vCD-DVD

The node can be rebooted using the following commands:


racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerdown
racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerup
Does Phoenix download using an IPv4 address?
Yes.
Is there an integrity check on the files Phoenix downloads?
Yes. The md5 sums of the files to be downloaded (AOS and hypervisor ISO) are passed to Phoenix
through a configuration file. (The HTTP path to the configuration file is passed as command-line
input.) Phoenix verifies the md5 sums of the files after downloading and retries the download if an md5
mismatch is detected.



What is pynfs?
It is a Python implementation of an NFS share that was used in the early days of Foundation. It is still used
on platforms with a 16 GB DOM.

Is there a reason for using port 8000?


No specific reason.



11: Appendix: Imaging a Node (Phoenix)
Phoenix is an ISO-based installer that you can use to perform the following installation tasks on bare-metal
hardware one node at a time:
Configuring hypervisor settings, virtual switches, and so on after you install a hypervisor on a
replacement host boot disk. This option does not require you to include AOS and hypervisor installers in
the Phoenix ISO image.
Installing the Controller VM, which runs AOS. This option requires you to include the AOS installer in
the Phoenix ISO image.
Installing the hypervisor on a new or replacement node. This is an alternative to installing the hypervisor
by using the hypervisor manufacturer's ISO, and it reduces the two-step procedure (first installing
the hypervisor and then installing AOS by using the Phoenix ISO image) to a single-step procedure of
installing both at once by using only the Phoenix ISO image. However, this option requires you to include
the hypervisor ISO image and the AOS installer files in the Phoenix ISO image.
Repairing the CVM.
For information about Phoenix ISO images, see Phoenix ISO Image on page 81. For information about
how to use a Phoenix ISO image, see Installing the Controller VM and Hypervisor by Using Phoenix on
page 100.

Note:
Phoenix is typically not necessary or recommended for imaging a single node. Use Phoenix to
image a single node only when using Prism or Foundation is not a viable option, such as in some
hardware replacement scenarios. The following are the recommended methods for imaging, even
when you want to image a single node:
Foundation. See Imaging Bare Metal Nodes on page 29.
Prism web console. This functionality requires AOS 4.5 or later. See the "Expanding a Cluster"
section in the Web Console Guide.
Both methods also install the hypervisor during the imaging process.
The following procedures describe how to obtain the ISO images you need and how to install a hypervisor
and AOS on a new or replacement node. The procedures also describe how to configure a hypervisor after
hypervisor boot disk replacement.
The procedures to perform depend on your hardware manufacturer. A summary is provided for each
hardware platform. Each summary outlines the tasks that you need to perform, and each step in the
summary includes a reference to the associated procedure. After performing a procedure that is referenced
in the summary, return to the summary and perform the next procedure.

Installation ISO Images


The installation ISO images you need depend on your requirement.



Hypervisor ISO Image
Enables you to install a hypervisor. If you want to install ESXi or Hyper-V, obtain the ISO image from
the hypervisor manufacturer. If you want to install AHV, either download the AHV ISO image from the
Nutanix support portal or generate one from a Foundation VM instance.
After you have obtained a hypervisor installer ISO image, follow the hypervisor installation
instructions documented here for your hardware platform.

Note: You can also include the hypervisor ISO in the Phoenix ISO image and perform all
installation tasks from the Phoenix ISO image.

Phoenix ISO Image


Enables you to install and configure a hypervisor and install or repair the Controller VM.

Phoenix ISO Image


The utility of a Phoenix ISO image depends on the software it includes. If the Phoenix ISO image includes
AOS installer files, you can use the ISO image to install AOS. If it includes a hypervisor ISO, you can use it
to install the hypervisor. There are two ways to obtain a Phoenix ISO: you can download the Phoenix ISO
image from the Nutanix support portal or generate a Phoenix ISO image on a Foundation VM instance.

Phoenix ISO Image Available On the Nutanix Support Portal

Starting with Phoenix 3.5, the Phoenix ISO image that is available on the Nutanix support portal does not
include an AOS bundle, and you cannot use it to install AOS. You can use the Phoenix ISO image only to
configure the hypervisor after you replace the host boot disk and install a hypervisor on it.
To configure the hypervisor, Phoenix uses the AOS files that are available on the Controller VM boot
drive. Because Phoenix uses those files, the AOS release in the Phoenix ISO image does not need to
match the AOS release on the Controller VM boot drive. Not including the AOS installer files in the
Phoenix ISO image also results in a smaller image file and a faster download from the Nutanix support
portal.

Phoenix ISO Image Generated On a Foundation VM Instance

Even though Foundation is the recommended software for installing AOS, you can use Phoenix to install
both AOS and a hypervisor on a single node if you include the AOS installer files and the hypervisor ISO
file in the Phoenix ISO image. The only way to obtain a Phoenix ISO image that includes these installation
files is to download the files to a Foundation VM instance and generate a Phoenix ISO image.
If the node will be added to a cluster after imaging, the version of the AOS installer file you include in the
Phoenix ISO image must match the version of AOS installed on the cluster.

Downloading a Phoenix ISO Image

This procedure describes how to download the Phoenix ISO image that you can use to configure the
hypervisor after a host boot disk replacement procedure. The ISO image is available on the Nutanix
support portal.
To download the Phoenix ISO image from the Nutanix support portal, do the following:

1. Open a web browser and log in to the Nutanix support portal: http://portal.nutanix.com.

2. In the Downloads menu, click Phoenix.

3. On the Phoenix page, click the name of the Phoenix ISO image in the Download column.



Generating a Phoenix ISO Image

This procedure describes how to generate a Phoenix ISO image that you can use to install AOS, install a
hypervisor, or install both on a single node. To generate a Phoenix ISO image, you need a Foundation VM
instance. In the Phoenix ISO image, you can include AOS installer files, a hypervisor ISO image, or both.

Note: (For Nutanix Support and Systems Engineers) If you include the ISO image of a hypervisor
other than AHV, do not share or distribute the Phoenix ISO image or otherwise make it available to
other users in any form. Delete the ISO image after it has served its purpose.
To generate a Phoenix ISO image, do the following:

1. Prepare the environment as described in the Foundation documentation at Preparing Installation


Environment. (You do not have to perform the steps described in Setting Up the Network, because you
need the Foundation VM only to create the images you need.)

2. Kill the Foundation service running on the Foundation VM.


$ sudo pkill -9 foundation

3. Navigate to the /home/nutanix/foundation/nos directory and unpack the compressed AOS tar archive.
$ gunzip nutanix_installer_package-version#.tar.gz

4. Generate the Phoenix ISO file.


$ cd /home/nutanix/foundation/bin
$ ./generate_iso phoenix [-h] [--aos-package AOS_PACKAGE] \
[--temp-dir TEMP_DIR] [--skip-space-check] \
[--kvm KVM | --esx ESX | --hyperv HYPERV]

Replace AOS_PACKAGE with the full path to the AOS tar archive,
nutanix_installer_package-version#.tar.

Replace TEMP_DIR with the path to the temporary directory to use when generating the Phoenix ISO
image. Default: /home/nutanix/foundation/tmp/.
Replace KVM with the full path to the AHV ISO image or bundle.
Replace ESX with the full path to the ESXi ISO image.
Replace HYPERV with the full path to the Hyper-V ISO image.
The Phoenix ISO is created in the temporary directory, and is the file to use when Installing the
Controller VM and Hypervisor by Using Phoenix on page 100.
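
For example, a hypothetical invocation that bundles both an AOS package and an AHV bundle (both paths
are placeholders for the files you downloaded):
$ cd /home/nutanix/foundation/bin
$ ./generate_iso phoenix --aos-package /home/nutanix/foundation/nos/nutanix_installer_package-version#.tar \
    --kvm /home/nutanix/foundation/isos/hypervisor/kvm/host_bundle_el6.nutanix.version#.tar.gz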

AHV ISO Image


If you want to install AHV on a single node, you need an AHV ISO. You can either download an AHV ISO
image from the Nutanix support portal or generate an ISO image from a Foundation VM instance.
The following procedure describes how to generate a Phoenix ISO image on a Foundation VM instance.
For information about downloading an AHV ISO image from the Nutanix support portal, see Downloading
Installation Files on page 53.

Generating an AHV ISO Image

This task describes how to generate an ISO image from a particular version of AHV on a Foundation VM
instance. Use this procedure if an ISO image of the required version is not available on the Nutanix support
portal.
Note: The command for generating an AHV ISO is supported only in Foundation 3.1 or later.



To generate an ISO image from a particular version of AHV, do the following:

1. Prepare the installation environment as described in the Foundation documentation at Preparing


Installation Environment. (You do not have to perform the steps described in Setting Up the Network,
because you need the Foundation VM only to create the images you need.)

2. Kill the Foundation service running on the Foundation VM.


$ sudo pkill -9 foundation

3. Create an AHV ISO file:


$ cd /home/nutanix/foundation/bin
$ ./foundation --generate_kvm_iso --kvm=ahv_tar_archive

Replace ahv_tar_archive with the full path to the AHV tar archive,
host_bundle_el6.nutanix.version#.tar.gz. The command generates an AHV ISO file named
kvm.version#.iso in the current directory.

Installing a Hypervisor By Using Hypervisor Installation Media


This section and the procedures within this section describe how to use the hypervisor manufacturer's ISO
to install the hypervisor on various hardware platforms.
With the ISO provided by the hypervisor manufacturer, installation is a two-step process. You must first use
the manufacturer's ISO to install the hypervisor on the node and then use the Phoenix ISO to configure
the hypervisor to get the hypervisor working with the Controller VM. For information about how to use the
manufacturer's ISO to install the hypervisor, from the sets of procedures that follow, choose the set that
applies to your hardware platform. After you install the hypervisor, to install the Controller VM and configure
the hypervisor, see Installing the Controller VM and Hypervisor by Using Phoenix on page 100.
Note: With a Phoenix ISO image that includes both AOS installer files and the hypervisor ISO,
installation is a one-step process. For information about how to generate such a Phoenix ISO
image, see Generating a Phoenix ISO Image on page 82. For information about how to perform
installation tasks with the Phoenix ISO image, see Installing the Controller VM and Hypervisor by
Using Phoenix on page 100.

Nutanix NX Series Platforms


Summary: Imaging Nutanix NX Series Nodes

Before you begin: If you are adding a new node, physically install that node at your site. See the NX and
SX Series Hardware Administration and Reference for your model type. Imaging a new or replacement
node can be done either through the system management interface, which requires a network connection,
or through a direct attached USB. These instructions assume you are installing through the system
management interface.

Note: Imaging a node using Phoenix is restricted to Nutanix sales engineers, support engineers,
and partners. Contact Nutanix customer support or your partner for help with this procedure.

1. Obtain the ISO images you need (see Installation ISO Images on page 80).

2. Attach the hypervisor ISO image (see Attaching an ISO Image (Nutanix NX Series Platforms) on
page 84).

3. Install the hypervisor.



Installing ESXi (Nutanix NX Series Platforms) on page 86
Installing Hyper-V (Nutanix NX Series Platforms) on page 87
Installing AHV (Nutanix NX Series Platforms) on page 90

4. Attach the Phoenix ISO image (see Attaching an ISO Image (Nutanix NX Series Platforms) on
page 84).

5. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM and
Hypervisor by Using Phoenix on page 100).

Attaching an ISO Image (Nutanix NX Series Platforms)

This procedure describes how to attach an ISO image on Nutanix NX series platforms.
Before you begin: Obtain the ISO image you need. See Installation ISO Images on page 80.

Caution: The node must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on a
node with less DOM capacity will fail.
To attach an ISO image on a Nutanix NX series platform, do the following:

1. Verify that you have access to the IPMI interface on the node.

a. Connect the IPMI port on that node to the network if it is not already connected.
A 1 GbE or 10 GbE port connection is not required for imaging the node.

b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.
To assign a static address, see Setting IPMI Static IP Address on page 69.

2. Open a Web browser to the IPMI IP address of the node to be imaged.

3. Enter the IPMI login credentials in the login screen.


The default value for both user name and password is ADMIN (upper case).

Figure: IPMI Console Login Screen

The IPMI console main screen appears.


Note: The following steps might vary depending on the IPMI version on the node.



Figure: IPMI Console Screen

4. Select Console Redirection from the Remote Console drop-down list of the main menu, and then
click the Launch Console button.

Figure: IPMI Console: Remote Control Menu

5. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.

Figure: IPMI Remote Console: Virtual Media Menu

6. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type
field drop-down list, and click the Open Image button.

Figure: IPMI Virtual Storage Window

7. In the browse window, go to where the ISO image is located, select the image, and then click the Open
button.

8. Click the Plug In button and then the OK button to close the Virtual Storage window.



9. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.
This causes the system to reboot to the selected image.

Figure: IPMI Remote Console: Power Control Menu

What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.

Installing ESXi (Nutanix NX Series Platforms)

1. Click Continue at the installation screen and then accept the end user license agreement on the next
screen.

Figure: ESXi Installation Screen

2. On the Select a Disk screen, select the SATADOM as the storage device, click Continue, and then click
OK in the confirmation window.

Figure: ESXi Device Selection Screen

3. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.



4. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation will fail.

5. Review the information on the Install Confirm screen and then click Install.

Figure: ESXi Installation Confirmation Screen

The installation begins and a dynamic progress bar appears.

6. When the Installation Complete screen appears, go back to the Virtual Storage screen, click the Plug
Out button, and then return to the Installation Complete screen and click Reboot.

Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so
can cause the Controller VM install to fail.

Installing Hyper-V (Nutanix NX Series Platforms)

1. Start the installation.

2. At the Press any key to boot from CD or DVD prompt, press SHIFT+F10.
A command prompt appears.

3. Partition and format the DOM.

a. Start the disk partitioning utility.


> diskpart

b. List the disks to determine which one is the 60 GB SATA DOM.


list disk

c. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk
and then run the clean command:
select disk number
clean

d. Create and format a primary partition (size 1024 and file system fat32).
create partition primary size=1024
select partition 1
format fs=fat32 quick

e. Create and format a second primary partition (default size and file system ntfs).
create partition primary
select partition 2
format fs=ntfs quick



f. Assign the drive letter "C" to the DOM install partition volume.
list volume
list partition

This displays a table of logical volumes and their associated drive letter, size, and file system type.
Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which
is the DOM install partition) is drive letter "C", go to the next step.
Otherwise, do one of the following:
If drive letter "C" is assigned currently to another volume, enter the following commands to
remove the current "C" drive volume and reassign "C" to the DOM install partition volume:
select volume cdrive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c

If drive letter "C" is not assigned currently, enter the following commands to assign "C" to the
DOM install partition volume:
select volume dom_install_volume_id#
assign letter=c

g. Exit the diskpart utility.


exit

4. Continue installation of the hypervisor.

a. Start the server setup utility.


> setup.exe

b. In the language selection screen that reappears, again just click the Next button.

c. In the install screen that reappears click the Install now button.

d. In the operating system screen, select Windows Server 2012 Datacenter (Server Core
Installation) and then click the Next button.

Figure: Hyper-V Operating System Screen

e. In the license terms screen, check the I accept the license terms box and then click the Next
button.

f. In the type of installation screen, select Custom: Install Windows only (advanced).



Figure: Hyper-V Install Type Screen

g. In the where to install screen, select Partition 2 (the NTFS partition) of the DOM disk you just
formatted and then click the Next button.
Ignore the warning about free space. The installation location is Drive 6 Partition 2 in the example.

Figure: Hyper-V Install Disk Screen

The installation begins and a dynamic progress screen appears.

Figure: Hyper-V Progress Screen

h. After the installation is complete, manually boot the host.

5. After Windows boots up, click Ctrl-Alt-Delete and then log in as Administrator when prompted.

6. Change your password when prompted to nutanix/4u.

7. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM and
Hypervisor by Using Phoenix on page 100).

8. Open a command prompt and enter the following two commands:


> schtasks /create /sc onstart /ru Administrator /rp "nutanix/4u" /tn `
firstboot /tr D:\firstboot.bat
> shutdown /r /t 0



This causes a reboot and the firstboot script to run, after which the host will reboot two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To monitor
progress, log into the VM after the initial reboot and enter the command
notepad "C:\Program Files\Nutanix\Logs\first_boot.log". This displays a (static) snapshot of the log file.
Repeat this command as desired to see an updated version of the log file.

Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the
process is continuing (if slowly).

Installing AHV (Nutanix NX Series Platforms)

1. Monitor the installation process.


Installation starts automatically after the AHV ISO is mounted and the node reboots. A welcome screen
appears and then status messages appear as the installation progresses. When installation completes,
the node shuts down.

Figure: Welcome (Install Options) Screen

2. Power on the node.

3. After the system reboots, log back into the IPMI console, go to the CDROM&ISO tab in the Virtual
Storage window, select the AHV ISO file, and click the Plug Out button (and then the OK button) to
unmount the ISO (see Attaching an ISO Image (Nutanix NX Series Platforms) on page 84).

Lenovo Converged HX Series Platforms


Summary: Imaging Lenovo Converged HX Series Nodes

Before you begin: Physically prepare the node to be imaged (as needed). See the Lenovo Converged HX
Series Hardware Replacement Guide (https://support.lenovo.com/us/en/docs/um104436) for information
on how to replace the boot drive.
Note: Imaging a node using Phoenix is restricted to Lenovo sales engineers, support engineers,
and partners.

1. Obtain the ISO images you need (see Installation ISO Images on page 80).

2. Attach the hypervisor ISO image (see Attaching an ISO Image (Lenovo Converged HX Series
Platforms) on page 91).

3. Install the hypervisor.


Installing ESXi (Lenovo Converged HX Series Platforms) on page 93



Installing AHV (Lenovo Converged HX Series Platforms) on page 94

4. Attach the Phoenix ISO image (see Attaching an ISO Image (Lenovo Converged HX Series Platforms)
on page 91).

5. Install the Controller VM and provision the hypervisor (see Installing the Controller VM and Hypervisor
by Using Phoenix on page 100).

Attaching an ISO Image (Lenovo Converged HX Series Platforms)

This procedure describes how to attach an ISO image.


To install a hypervisor on a new or replacement node in the field, do the following:

1. Verify you have access to the IMM interface for the node.

a. Connect the IMM port on that node to the network if it is not already connected.
A data network connection is not required for imaging the node.

b. Assign an IP address (static or DHCP) to the IMM interface on the node if it is not already assigned.
Refer to the Lenovo Converged HX series documentation for instructions on setting the IMM IP
address.
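
Before opening a browser to the IMM address, you can optionally confirm from your workstation that the interface responds on the network; replace imm_ip_address with the IMM IP address you assigned:

ping imm_ip_address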

2. Open a Web browser to the IMM IP address of the node to be imaged.

3. Enter the IMM login credentials in the login screen.


The default values for User name/Password are USERID/PASSW0RD (all uppercase, with a zero in place of the letter O).

Figure: IMM Console Login Screen

The IMM console main screen appears.



Figure: IMM Console Screen

4. Select Remote Control then click the Start remote control in single-user mode button.

Figure: IMM Console: Remote Control Screen

Click Continue in the Security Warning dialog box and then click Run in the Java permission dialog box
when they appear.



5. In the virtual console, select Virtual Media > Activate.

6. Select Virtual Media > Select Devices to Mount.


The Select Devices to Mount window appears.

7. Click the Add Image button, go to where the ISO image is located, select that file, and then click Open.

8. Check Mapped next to the ISO and click Mount Selected.

9. Select Tools > Power > Reboot (if the system is running) or On (if the system is turned off).
The system starts using the selected image.

What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.

Installing ESXi (Lenovo Converged HX Series Platforms)

Before you begin: Attach the hypervisor installation media (see Attaching an ISO Image (Lenovo
Converged HX Series Platforms) on page 91).

1. On the boot menu, select the installer and wait for it to load.

2. Press Enter to continue, and then press F11 to accept the end-user license agreement.

3. On the Select a Disk screen, select the appropriate storage device, click Continue, and then click OK in
the confirmation window.
For products with Broadwell processors, select the approximately 60 GiB DOM as the storage
device.

Figure: ESXi Device Selection Screen (Broadwell)


For products with Haswell processors, select the approximately 100 GiB ServerRAID as the storage
device.

Figure: ESXi Device Selection Screen (Haswell)

4. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.



5. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation will fail.

6. Review the information on the Install Confirm screen and then click Install.
The installation begins and a dynamic progress bar appears.

7. When the Installation Complete screen appears, in the Video Viewer window click Virtual Media >
Unmount All and accept the warning message.

8. On the Installation Complete screen, select Reboot.


The system restarts.

Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so
can cause the Controller VM install to fail.

Installing AHV (Lenovo Converged HX Series Platforms)

1. Monitor the installation process.


Installation starts automatically after the AHV ISO is mounted and the node reboots. A welcome screen
appears and then status messages appear as the installation progresses. When installation completes,
the node shuts down.

Figure: Welcome (Install Options) Screen

2. Power on the node.

3. After the system reboots, log back into the IMM console and unmount the AHV ISO. In the Video Viewer
window click Virtual Media > Unmount All and accept the warning message.

Cisco UCS Platforms


Summary: Imaging Cisco UCS Nodes

Before you begin: Physically prepare the node to be imaged (as needed). See the Cisco UCS
documentation for information on how to replace the boot drive.

Note: Imaging a node using Phoenix is restricted to Nutanix sales engineers, support engineers,
and partners.

1. Obtain the ISO images you need (see Installation ISO Images on page 80).



2. Attach the hypervisor ISO image (see Attaching an ISO Image on Cisco UCS Platforms on
page 95).

3. Install the hypervisor.


Installing ESXi on Cisco UCS Platforms on page 96
Installation of AHV on Cisco UCS Platforms on page 98

4. Attach the Phoenix ISO image (see Attaching an ISO Image on Cisco UCS Platforms on page 95).

5. Install the Controller VM and provision the hypervisor (see Installing the Controller VM and Hypervisor
by Using Phoenix on page 100).

Attaching an ISO Image on Cisco UCS Platforms

This procedure describes how to attach an ISO image on Cisco UCS platforms.

Attaching an ISO Image to a Server Running in Standalone Mode

Before you begin: Download the hypervisor ISO file to a workstation and connect the workstation to the
network.
To install a hypervisor on a new or replacement drive on a server running in standalone mode, do the
following:

1. From the workstation, log in to the CIMC web console by entering the CIMC IP address of the server
and the user name and password that you specified during the initial configuration of the CIMC.

2. In the toolbar, click Launch KVM Console to launch the KVM Console.

3. In the KVM Console, do the following:

a. Click Virtual Media.

b. Click Activate Virtual Devices to start a vMedia session that allows you to attach the ISO image file
from your local computer. If you have not allowed unsecured connections, the CIMC web console
prompts you to accept the session. Click Accept this session.

c. Click Map CD/DVD in Virtual Media and select the ISO image file.

d. Start the server and press F6 when prompted to open the boot menu screen. On the boot menu
screen, choose Cisco vKVM-Mapped vDVD1.22 and press Enter.
The server boots from the ISO image.

What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.

Attaching an ISO Image to a Server Running in UCS Domain Mode

Before you begin:


Download the hypervisor ISO file to a workstation and connect the workstation to the network.
Install Java Web Start on the workstation.



To install a hypervisor on a new or replacement drive on a server running in UCS Domain mode, do the
following:

1. From the workstation, log in to UCS Manager.

2. Start the KVM Console on the server on which you replaced the hypervisor boot drive.

a. Click Equipment > Servers.

b. Right-click the server and click KVM Console.


If you have not allowed unsecured connections, UCS Manager prompts you to accept the session.
Click Accept this session.

3. In the KVM Console, do the following:

a. Click Virtual Media.

b. Click Activate Virtual Devices to start a vMedia session that allows you to attach the ISO image file
from your local computer. If you have not allowed unsecured connections, the CIMC web console
prompts you to accept the session. Click Accept this session.

c. Click Map CD/DVD in Virtual Media and select the ISO image file.

d. Start the server and press F6 when prompted to open the boot menu screen. On the boot menu
screen, choose Cisco vKVM-Mapped vDVD1.22 and press Enter.
The server boots from the ISO image.

What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.

Installing ESXi on Cisco UCS Platforms

1. On the boot menu, select the installer and wait for it to load.

2. Press Enter to continue, and then press F11 to accept the end-user license agreement.

3. On the Select a Disk screen, select the appropriate storage device, click Continue, and then click OK in
the confirmation window.
For C240 servers, select the 120 GB internal SSD as the storage device.



Figure: ESXi Device Selection Screen (C240 Servers)
For C220 servers, select the 64 GB internal SD card as the storage device.

Figure: ESXi Device Selection Screen (C220 Servers)

4. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.

5. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation fails.

6. Review the information on the Install Confirm screen and then click Install.
The installation begins and a dynamic progress bar appears.

7. When the Installation Complete screen appears, in the Video Viewer window click Virtual Media >
Unmount All and accept the warning message.

8. On the Installation Complete screen select Reboot.


The system restarts.
Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so
can cause the Controller VM install to fail.



Installation of AHV on Cisco UCS Platforms

Use the automatic installation option on C240 servers and the graphical installation option on C220
servers.

Installing AHV on C240 Servers

1. Monitor the installation process.


Installation starts automatically after the AHV ISO is mounted and the node reboots. A welcome screen
appears and then status messages appear as the installation progresses. When installation completes,
the node shuts down.

Figure: Welcome (Install Options) Screen

2. Power on the node.

3. After the system reboots, from the KVM console, unmount the AHV ISO.

What to do next:
1. Attach the Controller VM ISO image, install the Nutanix Controller VM, and configure the hypervisor
(see Attaching an ISO Image on Cisco UCS Platforms on page 95 and Installing the Controller VM
and Hypervisor by Using Phoenix on page 100).
2. Configure the network on the hypervisor host.
For information about configuring the network on an AHV host, see "Changing the IP Address of an
Acropolis Host" in the "Host Network Management" chapter of the AHV Administration Guide.
For information about configuring the network on an ESXi host, see "Configuring Host Networking
(ESXi)" in the "vSphere Networking" chapter of the vSphere Administration Guide for Acropolis (Using
vSphere Web Client).

Installing AHV on C220 Servers

1. Select Graphical Installer on the welcome screen before the countdown for automatic installation
expires.

2. Click Next on the CentOS 6 logo screen.

3. On the partition layout screen, modify the existing partition format to make the Cisco FlexFlash drives
available.

a. Press Alt+Ctrl+F2 to switch from graphical mode to text mode.



b. Start the parted utility to edit the FlexFlash drive.
root@hostname# parted /dev/sda

c. Display the current partition table.


(parted) print

d. Create a GUID partition table.


(parted) mklabel gpt

e. At the prompt, confirm that you want to destroy the existing partition table.
Use the print command to verify that the partition table was created.

f. Quit the parted utility.


(parted) quit

g. Press Alt+Ctrl+F6 to exit text mode and return to graphical mode. (A sample of the combined parted session from steps b through f follows.)
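
Taken together, steps b through f look roughly like the following session. The exact warning text varies by parted version, and the confirmation prompt must be answered to destroy the old partition table:

root@hostname# parted /dev/sda
(parted) print
(parted) mklabel gpt
Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) print
(parted) quit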

4. On the language and keyboard selection screens, select the desired language and keyboard layout,
respectively.

5. On the storage devices screen, click Basic Storage Devices and click Next.

6. Click Fresh Installation if you are prompted to choose between upgrading and starting a fresh installation,
and then click Next.

7. On the next two screens, specify a host name and the time zone.

8. On the root password screen, enter nutanix/4u.

Note: The root password must be nutanix/4u or the installation fails.

9. On the installation type screen, click Create Custom Layout and then click Next.

10. Select the FlexFlash drive and click Create.

11. In Create Storage, select Standard partition.

12. In the Add Partition dialog box, specify the following details:
Mount Point. Enter a forward slash (/).
File System Type. Select ext4.
Allowable Drives. Select sda. Make sure that none of the other check boxes are selected.
Size. Retain the default value.
Additional Size Options. Select Fill to maximum allowable size.

13. In the boot loader window, click Change device.

14. Select sda from the First BIOS drive list in the Boot loader device dialog box. Click Next.
Installation begins. Progress is indicated by a progress bar.

15. After the installation completes, do the following:

a. Go to the Virtual Storage screen and click Plug Out.



b. Return to the Installation Complete screen and click Reboot.

16. After the system restarts, log on to the hypervisor host with SSH, using the user name root and the
password nutanix/4u.

17. Specify /root as the directory to which log messages must be sent.
root@ahv# puppet apply --logdest /root/post-install-puppet.log -e "include kvm"
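
When the command finishes, you can scan the log it produced for problems. This is an optional convenience check, not part of the documented procedure:

root@ahv# grep -iE "error|fail" /root/post-install-puppet.log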

What to do next:
1. Attach the Controller VM ISO image, install the Nutanix Controller VM, and configure the hypervisor
(see Attaching an ISO Image on Cisco UCS Platforms on page 95 and Installing the Controller VM
and Hypervisor by Using Phoenix on page 100).
2. Configure the network on the hypervisor host.
For information about configuring the network on an AHV host, see "Changing the IP Address of an
Acropolis Host" in the "Host Network Management" chapter of the AHV Administration Guide.
For information about configuring the network on an ESXi host, see "Configuring Host Networking
(ESXi)" in the "vSphere Networking" chapter of the vSphere Administration Guide for Acropolis (Using
vSphere Web Client).

Installing the Controller VM and Hypervisor by Using Phoenix


This procedure describes how to use the Phoenix ISO image to install and configure a hypervisor on a
single node after hypervisor boot disk replacement and how to install the Nutanix Controller VM on a single
node. It also describes options to repair the Controller VM, which must be performed under the supervision
of Nutanix customer support.
Before you begin: Attach the Phoenix ISO to the system.
To install a node by using Phoenix, do the following:

1. Do the following in the Nutanix Installer configuration screen:

a. Review the values in the upper eight fields to verify they are correct, or update them if necessary.
Only the Block ID, Node Serial, Node Position, and Node Cluster ID fields can be edited in this
screen. Node Position must be A for all single-node blocks.

b. Do one of the following in the next three fields (check boxes):


If you are imaging a new node (fully populated with drives and other components) and want to
either install only the Controller VM or install both the hypervisor and the Controller VM (and
additionally configure the hypervisor), select both Configure Hypervisor (to install and provision
the hypervisor) and Clean CVM (to install the Controller VM).

Note: Even if you want to only install the Controller VM, you must select both options.
Selecting Clean CVM by itself will fail. Also, for this step to work, you must use a Phoenix
ISO image that includes the requisite installer files. That is, if you want to install AOS,
the Phoenix ISO image must include the AOS installer files, and if you want to install
the hypervisor, the Phoenix ISO image must include the hypervisor ISO. For information
about Phoenix ISO files, see Phoenix ISO Image on page 81.
If you are imaging a replacement hypervisor boot drive or node (using existing drives), select
Configure Hypervisor only. This option configures the hypervisor without installing a new
Controller VM.



For information about the Phoenix ISO files that you can use for this task, see Phoenix ISO
Image on page 81.
Caution: Do not select Clean CVM if you are replacing a node or SATA DOM because
this option cleans the disks as part of the process, which means existing data will be lost.

If you are instructed to do so by Nutanix customer support, select Repair CVM. This option is for
repairing certain problem conditions. Ignore this option unless Nutanix customer support instructs
you to select it.

c. When all the fields are correct, click the Start button.

Figure: Nutanix Installer Screen

Installation begins and takes about 30 minutes.

2. After installation completes, unmount the ISO.


(Nutanix NX series) In the Virtual Storage window, click CDROM&ISO > Plug Out.
(Lenovo Converged HX series) In the Video Viewer window, click Virtual Media > Unmount All and
accept the warning message.

3. At the reboot prompt in the console, type Y to restart the node.



Figure: Installation Messages

On ESXi and AHV, the node restarts with the new image, additional configuration tasks run, and then
the host restarts again. Wait until this stage completes (typically 15-30 minutes depending on the
hypervisor) before accessing the node. No additional steps need to be performed.

Caution: Do not restart the host until the configuration is complete.

On Hyper-V, the node restarts, and a login prompt is displayed.

4. On Hyper-V, do the following:

a. At the login prompt, log in with administrator credentials.


A command prompt appears.

b. Schedule the firstboot script to run and restart the host.


> schtasks /create /sc onstart /ru Administrator /rp "nutanix/4u" /tn firstboot /tr D:\firstboot.bat
> shutdown /r /t 0

This causes a reboot and the firstboot script to run, after which the host reboots two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To
monitor progress, log in to the host after the initial reboot and enter the command notepad C:\Program
Files\Nutanix\Logs\first_boot.log. This displays a (static) snapshot of the log file. Repeat this
command as desired to see an updated version of the log file.
Note: A d:\firstboot_fail file appears if this process fails. If that file is not present, the
process is continuing (if slowly).
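
If you want to confirm that the firstboot task was registered before you issue the shutdown command, you can query it by name. This is an optional check, not part of the documented procedure:

> schtasks /query /tn firstboot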
