
Foundation 4.

Field Installation Guide


November 15, 2019
Contents

1. Field Installation Overview...........................................................................4

2. Foundation Guidelines................................................................................. 5
Platform-Specific Imaging Guidelines......................................................................................................... 5
Foundation Use Case Matrix...................................................................................................................... 5
CVM vCPU and vRAM Allocation...............................................................................................................6

3. Prepare Factory-Imaged Nodes for Foundation........................................ 8


Discover Nodes and Launch Foundation....................................................................................................9
Discover Nodes in the Same Broadcast Domain............................................................................ 9
Discover Nodes in a VLAN-Segmented Network...........................................................................10
Launch Foundation......................................................................................................................... 11
Verify Hypervisor Support......................................................................................................................... 12
Upgrade CVM Foundation by Using the Foundation Java Applet............................................................ 12

4. Prepare Bare Metal Nodes for Foundation...............................................14


Prepare the Installation Environment........................................................................................................15
Prepare Workstation....................................................................................................................... 15
Set Up the Foundation VM............................................................................................................ 16
Upload Installation Files to the Foundation VM............................................................................. 21
Verify Hypervisor Support.............................................................................................................. 21
Set Up the Network........................................................................................................................22
Update the Foundation Service......................................................................................................24

5. Foundation App (Beta Release)................................................................ 25


Requirements.............................................................................................................................................25
Limitations..................................................................................................................................................25
Install Foundation App on macOS............................................................................................................25
Install Foundation App on Windows......................................................................................................... 26
Uninstall Foundation App on macOS....................................................................................................... 26
Uninstall Foundation App on Windows..................................................................................................... 27

6. Upgrade Foundation................................................................................... 28

7. Run Foundation...........................................................................................30
Automate filling Foundation UI..................................................................................................................30
Configure Nodes with Foundation.............................................................................................................30

8. Post Installation Steps............................................................................... 34


Configure a New Cluster in Prism............................................................................................................ 34

9. Hypervisor ISO Images.............................................................................. 36

10. Network Requirements............................................................................. 38

11. Hyper-V Installation Requirements......................................................... 40

12. Set IPMI Static IP Address.......................................................................44

13. Troubleshooting........................................................................................ 46
Fix IPMI Configuration Problems.............................................................................................................. 46
Fix Imaging Problems............................................................................................................................... 47
Frequently Asked Questions (FAQ).......................................................................................................... 48

Appendix A: Single-Node Configuration (Phoenix)..................................... 55

Copyright...................................................................................................... 56
License.......................................................................................................................................................56
Conventions............................................................................................................................................... 56
Default Cluster Credentials....................................................................................................................... 56
Version.......................................................................................................................................................57

1
FIELD INSTALLATION OVERVIEW
Nutanix installs AHV and the Nutanix Controller VM at the factory before shipping a node to a customer. To use a
different hypervisor (ESXi or Hyper-V) on factory nodes or to use any hypervisor on bare metal nodes, the nodes
must be imaged in the field. This guide provides step-by-step instructions on how to use the Foundation tool to do
a field installation, which consists of installing a hypervisor and the Nutanix Controller VM on each node and then
creating a cluster. You can also use Foundation to create just a cluster from nodes that are already imaged or to image
nodes without creating a cluster.
Use the procedures described in this document with Nutanix nodes and nodes from OEM partners. These nodes are
pre-imaged with a hypervisor and AOS at the factory, and you can either create a cluster from these nodes or re-
image them with the desired AOS and hypervisor and then create a cluster. Nodes from other vendors do not have a
hypervisor and AOS installed on them, and you must perform the bare-metal imaging procedures specifically adapted
for those nodes. For those procedures, see the vendor-specific field installation guides, a complete listing of which
is available on the Nutanix support portal. On the portal, click the hamburger menu icon, go to Documentation >
Hardware Compatibility Lists, and then select the vendor from the Platform filter available on the page.

Note: Use Foundation to image factory-prepared (or bare metal) nodes and create a new cluster from those nodes. Use
the Prism web console (in clusters running AOS 4.5 or later) to image factory-prepared nodes and then add them to an
existing cluster. See the "Expanding a Cluster" section in the Web Console Guide for this procedure.

A field installation can be performed for either factory-prepared nodes or bare metal nodes.

• See Prepare Factory-Imaged Nodes for Foundation on page 8 to image factory-prepared nodes or create a
cluster from these nodes or both.
• See Prepare Bare Metal Nodes for Foundation on page 14 to image bare metal nodes and optionally configure
them into one or more clusters.

Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix hardware
models with some restrictions. Click here (or log into the Nutanix support portal and select Documentation >
Compatibility Matrix from the main menu) for a list of supported configurations. To check a particular configuration,
go to the Filter By fields and select the desired model, AOS version, and hypervisor in the first three fields. In addition,
check the notes at the bottom of the table.
2
FOUNDATION GUIDELINES
This section describes the guidelines, compatibility information, limitations, and capabilities of Foundation.

Platform-Specific Imaging Guidelines


Use the following guidelines to determine the correct Foundation version and imaging mode for your
hardware platform:

Table 1: Guidelines to Determine the Correct Foundation Version and Imaging Mode for Hardware Platforms

| Platform | Guideline |
| --- | --- |
| Nutanix platforms | Use Controller VM–based Foundation or the portable Foundation app. |
| Lenovo platforms | Use Controller VM–based Foundation. |
| Dell platforms | Dell Services should perform deployment. If necessary, use Controller VM–based Foundation. Refer to the Dell Hardware Compatibility List and use the qualified Foundation version. |
| All other platforms | Use standalone Foundation 4.3.x. |

Note: When installing AHV on Lenovo platforms or the Dell XC430-4 platform, do not use the 1 GbE interface for
imaging. Instead, install a supported gigabit interface converter (GBIC) on the 10 GbE interface and connect that
10 GbE interface to a 1 GbE switch for imaging. For more information, see KB 2422.

Foundation Use Case Matrix


The following matrix lists the use cases and their support across the various Foundation distributions available for download.

Table 2: Foundation Use Case Matrix

| | CVM Foundation | Portable Foundation (Windows, macOS) | Standalone Foundation VM |
| --- | --- | --- | --- |
| Function | Factory-imaged nodes | Factory-imaged nodes; bare-metal nodes | Factory-imaged nodes; bare-metal nodes |
| Hardware | Any | Nutanix (NX) | Any |
| If IPv6 is disabled | Cannot image nodes | IPMI IPv4 required on the nodes | IPMI IPv4 required on the nodes |
| VLAN support | No | No | Yes |
| LACP support | Yes | Yes | Yes |
| Multi-homing support | N/A | Yes | Yes |
| RDMA support | Yes | Yes | Yes |
| How to use | Use the Java applet, which redirects to the Foundation service on the CVM, or access http://CVM_IP:8000/ | Launch the executable (Windows 10+ or macOS 10.13.1+) | Deploy as a VM on VirtualBox, Fusion, Workstation, AHV, ESX, etc. |

Foundation | Foundation Guidelines | 5

CVM vCPU and vRAM Allocation


vCPU Allocation
All vCPUs are pinned to the NUMA node that is connected to the storage controller. Pinning vCPUs to a single
NUMA node enables the CVM to maximize its performance; under heavy CVM-bound load, this pinning improves
I/O performance.

Table 3: vCPU Allocation

| Platform | vCPU |
| --- | --- |
| General | Between 8 and 12 (inclusive) |
| Systems with at least 1 NVMe drive | Between 8 and 12 (inclusive) |
| Low-end systems | 0.75 × total number of logical cores |

Note:

• If there are fewer than 8 logical cores per socket (for example, a 6-core CPU without hyper-threading),
allocate 8 vCPUs and do not pin them to any NUMA node.
• On low-end platforms, allocate as many vCPUs as possible without exceeding 75% of the number of
logical cores. For example, on an 8-core CPU without hyper-threading, allocate 6 vCPUs.
• Neither the Foundation GUI nor Prism provides an option to override the vCPU defaults.
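The low-end sizing rule in the note above can be sketched as a small shell helper. The function name is illustrative, not part of any Nutanix tooling; integer arithmetic gives the rounding-down behavior the rule requires.

```shell
# Sizing sketch for low-end platforms: the largest vCPU count that does
# not exceed 75% of the logical core count. Integer division rounds down.
low_end_cvm_vcpus() {
  cores=$1
  echo $(( cores * 3 / 4 ))
}
```

For an 8-core CPU without hyper-threading this yields 6 vCPUs, matching the example in the note.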



vRAM Allocation
Every platform's layout module in Phoenix defines a hardware attribute called "default_workload", which classifies
the default purpose of that platform into one of the following categories:

Table 4: vRAM Allocation

| Platform | vRAM |
| --- | --- |
| General / VDI / Server virtualization | 20 GB |
| Storage-only | 28 GB |
| Large server / High performance / All flash | 32 GB |

Features like deduplication and compression require vRAM beyond the defaults. In such cases, you can override the
default values by performing one of the following tasks:

• Manually change the value in the Foundation GUI.
• After installation, change the allocation in Prism by going to Configure CVM Options from the gear icon in the
web console.

Note: vRAM allocation is not pinned to any NUMA node.



3
PREPARE FACTORY-IMAGED NODES
FOR FOUNDATION
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on discovered
nodes and how to configure the nodes into a cluster. "Discovered nodes" are factory-prepared nodes on
the same subnet that are not currently part of a cluster. This procedure runs the Foundation tool through
the Nutanix Controller VM (Controller VM–based Foundation).

Before you begin

• Make sure that the nodes that you want to image are factory-prepared nodes that have not been configured in any
way and are not part of a cluster.
• Physically install the Nutanix nodes at your site. For general installation instructions, see "Mounting the Block"
in the Getting Started Guide. For installation instructions specific to your model type, see "Rack Mounting" in the
NX and SX Series Hardware Administration Guide.
• Your workstation must be connected to the network on the same subnet as the nodes you want to image.
Foundation does not require an IPMI connection or any special network port configuration to image discovered
nodes. See Network Requirements for general information about the network topology and port access required
for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP address), and
node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed for installation.

Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.

Note: Nutanix uses an internal virtual switch to manage network communications between the Controller VM
and the hypervisor host. This switch is associated with a private network on the default VLAN and uses the
192.168.5.0/24 address space. For the hypervisor, IPMI interface, and other devices on the network (including
the guest VMs that you create on the cluster), do not use a subnet that overlaps with the 192.168.5.0/24 subnet on
the default VLAN. If you want to use an overlapping subnet for such devices, make sure that you use a different
VLAN.

• Download the following files from Nutanix Portal:

• AOS installer named nutanix_installer_package-version#.tar.gz from the AOS (NOS) download page.
• Hypervisor ISO if you want to install Hyper-V or ESXi. You must provide a supported Hyper-V or ESXi
ISO (see Hypervisor ISO Images on page 36); Hyper-V and ESXi ISOs are not available on the support
portal.
It is not necessary to download AHV because the AOS bundle includes an AHV installation bundle. However,
you can download an AHV installation bundle if you want to install a non-default version.
• Make sure that IPv6 is enabled on the network to which the nodes are connected and IPv6 multicast is supported.
• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the nodes.
If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the nodes contain both



regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any time during the lifetime
of the cluster.

Note: Disable encryption on the SEDs only when the nodes require re-imaging.

For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in the Prism
Web Console Guide.
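One of the cautions above concerns subnets that overlap the internal 192.168.5.0/24 network. As a rough sketch of that check, the prefix test below flags an individual host address that would collide with the internal virtual switch. The function name is illustrative; a real check for subnets wider than /24 would need proper CIDR arithmetic.

```shell
# Rough overlap check against the internal 192.168.5.0/24 network.
# Only a prefix match on a single host address; wider subnets need CIDR math.
overlaps_internal_subnet() {
  case "$1" in
    192.168.5.*) return 0 ;;  # would collide with the internal virtual switch
    *)           return 1 ;;
  esac
}
```

For example, `overlaps_internal_subnet 192.168.5.20` succeeds (a collision), while a hypervisor address such as 10.10.0.5 passes cleanly.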

About this task

Note: This method can image discovered nodes or create a single cluster from discovered nodes or both. This method
is limited to factory prepared nodes running AOS 4.5 or later. If you want to image factory prepared nodes running
an earlier AOS (NOS) version, or image bare metal nodes, see Prepare Bare Metal Nodes for Foundation on
page 14.

To image the nodes and create a cluster, do the following:

Procedure

1. Run discovery and launch Foundation (see Discover Nodes and Launch Foundation on page 9).

2. (Optional) Update Foundation to the latest version (see Upgrade CVM Foundation by Using the Foundation Java
Applet on page 12).

3. Run CVM Foundation (see Configure Nodes with Foundation on page 30).

4. After the cluster is created successfully, begin configuring the cluster (see Configure a New Cluster in Prism on
page 34).

Discover Nodes and Launch Foundation


The procedure to follow for node discovery depends on whether all the devices involved—the Nutanix nodes and the
workstation that you use for imaging—are in the same broadcast domain or in a VLAN-segmented network.

Discover Nodes in the Same Broadcast Domain

About this task


Perform the following steps to discover nodes in a network that does not use VLANs.

Procedure

1. Access the Nutanix support portal (https://my.nutanix.com).

2. Browse to Downloads > Foundation, and then click FoundationApplet-offline.zip.



3. Extract the contents of the downloaded ZIP file into the workstation that you want to use for imaging, and then
double-click nutanix_foundation_applet.jnlp.
The discovery process begins and a window appears with a list of discovered nodes.

Note:
A security warning may appear indicating that the application is from an unknown source. Click the
accept and run buttons to run the application.

Figure 1: Foundation Launcher Window

Discover Nodes in a VLAN-Segmented Network


Nutanix nodes running AHV version 20160215 (or later) include a network configuration tool that you
can use to assign a VLAN tag to the public interface on the Controller VM and to one or more physical
interfaces on the host. You can also use the tool to assign an IP address to the Controller VM and
hypervisor. After network configuration is complete, you can use the Foundation service running on the
Controller VM of that host to discover and image other Nutanix nodes. Foundation uses the VLAN sniffer
provided in the CVM to detect free Nutanix nodes, including nodes in other VLANs. The VLAN sniffer uses
the IPv6 Neighbor Discovery protocol and therefore requires that the physical switch to which
the nodes are connected supports IPv6 multicast. During the imaging process, Controller
VM–based Foundation also assigns the specified VLAN tag (assumed to be that of the production VLAN)
to the corresponding interfaces on the selected nodes, eliminating the need to perform additional VLAN
assignment tasks for those nodes.

Before you begin


Connect the Nutanix nodes to a switch.

About this task

Note: Use the network configuration tool only on factory-prepared nodes that are not part of a cluster. Use of the tool
on a node that is part of a cluster makes the node inaccessible to the other nodes in the cluster, and the only way to
resolve the issue is to reconfigure the node to the previous IP addresses by using the network configuration tool again.

To configure the network for a node, do the following:



Procedure

1. Connect a console to one of the nodes and log on to the Acropolis host with root credentials.

2. Change your working directory to /root/nutanix-network-crashcart/, and then start the network
configuration utility.
root@ahv# ./network_configuration

3. In the network configuration utility, do the following:

a. Review the network card details to ascertain interface properties and identify connected interfaces.
b. Use the arrow keys to shift focus to the interface that you want to configure, and then use the Spacebar key
to select the interface.
Repeat this step for each interface that you want to configure.
c. Use the arrow keys to navigate through the user interface and specify values for the following parameters:

• VLAN Tag. VLAN tag to use for the selected interfaces.


• Netmask. Network mask of the subnet to which you want to assign the interfaces.
• Gateway. Default gateway for the subnet.
• Controller VM IP. IP address for the Controller VM.
• Hypervisor IP. IP address for the hypervisor.
d. Use the arrow keys to move the focus to Done, and then press Enter.
The network configuration utility configures the interfaces.

Launch Foundation
How you launch Foundation depends on whether you used the Foundation Applet to discover nodes in the
same broadcast domain or the crash cart user interface to discover nodes in a VLAN-segmented network.

About this task


To launch the Foundation user interface, do one of the following:

Procedure

• If you used the Foundation Applet to discover nodes in the same broadcast domain, do the following:

a. Select the node on which you want to run Foundation.


The selected node is imaged first and is then used to image the other nodes. You can select only nodes
with a status field value of Free, which indicates that the node is not currently part of a cluster. A value of
Unavailable indicates that the node is part of an existing cluster or is otherwise unavailable. To rerun the
discovery process, click the Retry discovery button.

Note: A warning message may appear stating that the selected node does not have the highest Foundation
version found among the discovered nodes. If you select a node running an earlier Foundation version (one
that does not recognize one or more of the node models), installation may fail when Foundation attempts
to image a node of an unknown model. Therefore, select the node with the highest Foundation version
among the nodes to be imaged. (You can ignore the warning and proceed if you do not intend to select any
of the nodes that have the higher Foundation version.)

b. (Optional but recommended) Upgrade Foundation on the selected node to the latest version. See Upgrade
CVM Foundation by Using the Foundation Java Applet on page 12.
c. With the node having the latest Foundation version selected, click the Launch Foundation button.
Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes that are
not part of a cluster) and then displays information about the discovered blocks and nodes in the Discovered
Nodes screen. (It does not display information about nodes that are powered off or in a different subnet.) The
discovery process normally takes just a few seconds.

Note: If you want Foundation to image nodes from an existing cluster, you must first either remove the target
nodes from the cluster or destroy the cluster.

• If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in a browser on your
workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when using the network
configuration tool.
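Before opening the browser, you can confirm that the Foundation service answers at that URL. This is a minimal sketch; the helper names are illustrative, and the address you pass should be the Controller VM IP you assigned with the network configuration tool.

```shell
# Build the Foundation UI URL for a given CVM address and probe it with curl.
foundation_url() {
  printf 'http://%s:8000/' "$1"
}

probe_foundation() {
  # Prints the HTTP status code; 200 indicates the Foundation UI is reachable.
  curl -s -o /dev/null -w '%{http_code}' "$(foundation_url "$1")"
}
```

For example, `probe_foundation 10.1.1.50` (a placeholder address) prints the status code returned by the service.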

Verify Hypervisor Support


The list of supported ISO images appears in an iso_whitelist.json file that Foundation uses to validate
ISO images. ISO files are identified in the whitelist by their MD5 value (not file name), so verify that the
MD5 value of the ISO you want to use is listed in the whitelist file.

Before you begin


Download the latest whitelist file from the Foundation page on the Nutanix support portal (https://
portal.nutanix.com/#/page/foundation/list). For information about the contents of the whitelist file, see
Hypervisor ISO Images on page 36.

About this task

Note: AHV versions released after 20170830.270 are not listed in the iso_whitelist.json file; they are
deemed qualified by default.

To determine whether a hypervisor is supported, do the following:

Procedure

1. Obtain the MD5 checksum of the ISO that you want to use.

2. Open the downloaded whitelist file in a text editor and perform a search for the MD5 checksum.
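Steps 1 and 2 can be done from a shell in one pass. This sketch assumes a Linux workstation with GNU md5sum available (on macOS, substitute `md5 -q`); the function names are illustrative.

```shell
# Compute an ISO's MD5 and check whether it appears in iso_whitelist.json.
iso_md5() {
  md5sum "$1" | awk '{print $1}'
}

check_iso_whitelisted() {
  # $1 = path to the ISO, $2 = path to the downloaded whitelist file
  if grep -q "$(iso_md5 "$1")" "$2"; then
    echo "supported"
  else
    echo "not in whitelist"
  fi
}
```

If the result is "not in whitelist", proceed as described under "What to do next" below the procedure.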

What to do next
If the MD5 checksum is listed in the whitelist file, save the file to the workstation that hosts the Foundation
VM. If the whitelist file on the Foundation VM does not contain the MD5 checksum, you can replace that file
with the downloaded file before you begin installation.

Upgrade CVM Foundation by Using the Foundation Java Applet


A Foundation update is optional but recommended. The Foundation Java applet includes an option to
upgrade or downgrade Foundation on a discovered node. The node must not be configured already.
Upgrade Foundation on any one node, and that node will upgrade Foundation on the other nodes selected
for imaging. If the node is configured, do not use the Java applet. Instead, update Foundation by using



the Prism web console (see Cluster Management > Software and Firmware Upgrades > Upgrading
Foundation in the Prism Web Console Guide).

Before you begin


1. Download the Foundation tarball from the Nutanix support portal to the workstation on which you plan to run the
Foundation Java applet.
2. Download and start the Foundation Java applet.

About this task


To upgrade Foundation on a discovered node by using the Foundation Java applet, do the following:

Procedure

1. In the Foundation Java applet, select the node on which you want to upgrade Foundation, and then click Upgrade
Foundation.

2. Browse to the folder to which you downloaded the Foundation tarball and double-click the tarball.
The upgrade process begins. After the upgrade completes, Genesis is restarted on the node, and that in turn restarts
the Foundation service. After the Foundation service becomes available, the upgrade process reports success.
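Because Genesis restarts the Foundation service during the upgrade, a short poll of the service port is one way to know when it is available again. This is a sketch, not part of the applet; the function name and retry defaults are illustrative.

```shell
# Poll the Foundation service on a CVM until it answers or the retries run out.
# $1 = CVM IP, $2 = number of attempts (default 30), $3 = port (default 8000)
wait_for_foundation() {
  cvm_ip=$1; tries=${2:-30}; port=${3:-8000}
  while [ "$tries" -gt 0 ]; do
    if curl -s -o /dev/null "http://${cvm_ip}:${port}/"; then
      echo "foundation is up"
      return 0
    fi
    tries=$(( tries - 1 ))
    sleep 1
  done
  echo "timed out waiting for foundation"
  return 1
}
```

For example, `wait_for_foundation 10.1.1.50` (placeholder address) returns once the service responds or after the retries are exhausted.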

What to do next
Run Foundation on page 30



4
PREPARE BARE METAL NODES FOR
FOUNDATION
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on bare metal
nodes and optionally configure the nodes into one or more clusters. "Bare metal" nodes are those that
are not factory prepared or cannot be detected through discovery. You can also use this method to image
factory prepared nodes that you do not want to configure into a cluster.

Before you begin

Note: Imaging or configuring bare metal nodes should be performed only by Nutanix sales engineers and partners. For
any assistance with this procedure, sales engineers and partners should contact Nutanix support.

• Physically install the nodes at your site. For installing Nutanix hardware platforms, see the NX and SX
Series Hardware Administration and Reference for your model type. For installing hardware from any other
manufacturer, see that manufacturer's documentation.
• Set up the installation environment (see Prepare the Installation Environment on page 15).

Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will get a Foundation
timeout error if you do not change the boot order back to virtual CD-ROM in the BIOS.

Note: If STP (spanning tree protocol) is enabled on the ports that are connected to the Nutanix host, Foundation
might time out during the imaging process. Therefore, you must disable STP by using PortFast or an equivalent
feature on the ports that are connected to the Nutanix host before starting Foundation.

Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents virtual media, such
as a CD-ROM. Such a device can conflict with Foundation when it tries to mount the virtual CD-ROM that hosts
the installation ISO.

• Have ready the appropriate global, node, and cluster parameter values needed for installation. The use of a DHCP
server is not supported for Controller VMs, so make sure to assign static IP addresses to Controller VMs.

Note: If the Foundation VM IP address set previously was configured in one (typically public) network
environment and you are imaging the cluster on a different (typically private) network in which the current address
is no longer correct, repeat step 13 in Prepare Workstation on page 15 to configure a new static IP address for
the Foundation VM.

• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the nodes.
If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the nodes contain both
regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any time during the lifetime
of the cluster.

Note: Disable encryption on the SEDs when the nodes require re-imaging.

For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in the Prism
Web Console Guide.



About this task

• Prepare the Installation Environment on page 15


• Configure Nodes with Foundation on page 30

Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may not reflect the latest
features described in this section.)

Prepare the Installation Environment


About this task
Standalone (bare metal) imaging is performed from a workstation with access to the IPMI interfaces of the nodes in
the cluster. Imaging a cluster in the field requires first installing certain tools on the workstation and then setting the
environment to run those tools. This requires the following preparation tasks:
1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to installation. This
includes downloading ISO images, installing Oracle VM VirtualBox, and using VirtualBox to configure various
parameters on the Foundation VM (see Prepare Workstation on page 15).
2. Set up the network. The nodes and workstation must have network access to each other through a switch at the site
(see Set Up the Network on page 22).
3. Upgrade Foundation (see Update the Foundation Service on page 24).

Prepare Workstation
A workstation is needed to host the Foundation VM during imaging. You can perform these steps either
before going to the installation site (if you use a portable laptop) or at the site (if you can connect to the
web).

Before you begin

• Get a workstation (laptop or desktop computer) that you can use for the installation. The workstation must have
at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk space (preferably SSD), and a physical
(wired) network adapter.
• Go to the Nutanix support portal and download the following files to a temporary directory on the workstation.

Foundation_VM_OVF-version#.tar — the Foundation tar file. To download this file, go to Downloads >
Foundation. It includes the following files:

• Foundation_VM-version#.ovf. The Foundation VM OVF configuration file for the version# release, for
example Foundation_VM-3.1.ovf.
• Foundation_VM-version#-disk1.vmdk. The Foundation VM VMDK file for the version# release, for
example Foundation_VM-3.1-disk1.vmdk.

nutanix_installer_package-version#.tar.gz — the tar file used for imaging the desired AOS release. To
download this file, go to Downloads > AOS (NOS) on the support portal.



Diagnostics files for the hypervisor that you plan to install (if you want to run a diagnostics test after
creating a cluster). To download any of these files, go to Downloads > Tools & Firmware.

• AHV: diagnostic.raw.img.gz
• ESXi: diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf
• Hyper-V: diagnostics_uvm.vhd.gz

• If you intend to install a hypervisor other than AHV, you must provide the ISO image (see Hypervisor ISO Images
on page 36). Make sure that the hypervisor ISO image is available on the workstation.
• This procedure describes how to use Oracle VM VirtualBox, a free open source tool used to create a virtualized
environment on the workstation. Download the installer for Oracle VM VirtualBox and install it with the
default options. See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).

Note: You can also use a tool such as VMware vSphere instead of Oracle VM VirtualBox.

About this task


To prepare the workstation, do the following:

Procedure

1. Create a folder called VirtualBox VMs in your home directory.


On a Windows system, for example, this is typically C:\Users\user_name\VirtualBox VMs.

2. Go to the location to which you downloaded the Foundation tar file and extract its contents.
$ tar -xf Foundation_VM_OVF-version#.tar

Note: If the tar utility is not available, use the corresponding utility for your environment.

3. Copy the extracted files to the VirtualBox VMs folder that you created.
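The three steps above can be collected into a small shell helper. This is a hedged sketch only: the function name is ours, and `version#` stands for the actual release string (for example, 3.1).

```shell
# stage_foundation_vm: hedged sketch of steps 1-3 (function name is ours).
# Extracts the Foundation tarball in the current directory and copies the
# .ovf and .vmdk files into the "VirtualBox VMs" folder.
stage_foundation_vm() {
  tarball="$1"                        # e.g. Foundation_VM_OVF-3.1.tar
  dest="${2:-$HOME/VirtualBox VMs}"   # folder created in step 1
  mkdir -p "$dest" &&
  tar -xf "$tarball" &&
  cp ./*.ovf ./*.vmdk "$dest/"
}
# Example: stage_foundation_vm Foundation_VM_OVF-3.1.tar
```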

Set Up the Foundation VM

About this task


To install the Foundation VM on the workstation, do the following:



Procedure

1. Start Oracle VM VirtualBox.

Figure 2: VirtualBox Welcome Screen

2. Click the File option of the main menu and then select Import Appliance from the pull-down list.

3. Find and select the Foundation_VM-version#.ovf file, and then click Next.

4. Click the Import button.

5. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.

6. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).



7. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM, install
the VirtualBox Guest Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. Click OK when prompted to Open Autorun Prompt and then click Run.
c. Enter the root password (nutanix/4u) and then click Authenticate.

d. After the installation is complete, press the return key to close the VirtualBox Guest Additions installation
window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.

Note: A reboot is necessary for the changes to take effect.

g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on the
VirtualBox window for the Foundation VM.



8. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to get an
IP address from the DHCP server.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as follows:

Note: Normally, the Foundation VM needs to be on a public network in order to copy selected ISO files to the
Foundation VM in the next two steps. This might require setting a static IP address now and setting it again when
the workstation is on a different (typically private) network for the installation.

a. Double click the set_foundation_ip_address icon on the Foundation VM desktop.

Figure 3: Foundation VM: Desktop


b. In the pop-up window, click the Run in Terminal button.

Figure 4: Foundation VM: Terminal Window


c. In the Select Action box in the terminal window, select Device Configuration.

Note: Selections in the terminal window can be made using the indicated keys only. (Mouse clicks do not
work.)



Figure 5: Foundation VM: Action Box
d. In the Select a Device box, select eth0.

Figure 6: Foundation VM: Device Configuration Box


e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by default),
enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and then click the
OK button.

Figure 7: Foundation VM: Network Configuration Box


f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action box.
This saves the configuration and closes the terminal window.
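After the terminal window closes, you can confirm from a Foundation VM terminal that the static address took effect. A minimal sketch, assuming the iproute2 `ip` command is available (the helper name and the example addresses are ours, not from the guide):

```shell
# iface_has_ip: succeeds if the given interface carries the given IPv4
# address (helper name and example values are ours).
iface_has_ip() {
  ip -o -4 addr show dev "$1" 2>/dev/null | grep -q " $2/"
}
# Example: iface_has_ip eth0 10.1.1.50 && echo "static IP applied"
```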



Upload Installation Files to the Foundation VM
The file system on the Foundation VM includes hypervisor-specific directories to which you must copy the
files you downloaded from the Nutanix support portal or obtained from a hypervisor vendor.

About this task


To upload the installation and installation-related files to the Foundation VM, do the following:

Procedure

1. Copy nutanix_installer_package-version#.tar.gz to the /home/nutanix/foundation/nos directory.

2. If you are installing hypervisors other than AHV, copy the ISO files to the corresponding directory on the
Foundation VM.

» ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx

» Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv

Note: You do not have to provide an AHV image. Foundation includes an AHV tar file in /home/nutanix/
foundation/isos/hypervisor/kvm. However, if you want to install a different version of AHV, download the
AHV tar file from the Nutanix support portal and copy it to the directory.

3. If you downloaded diagnostics files for one or more hypervisors to your workstation, copy them to the appropriate
directory on the Foundation VM. The directories for the diagnostic files are as follows:

» Diagnostic file for AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/kvm

» Diagnostic files for ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf): /home/nutanix/foundation/isos/diags/esx

» Diagnostic file for Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/diags/hyperv
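If you prefer the command line over drag-and-drop, the copy step can be sketched as a small helper. The function name is ours, and FOUNDATION_HOME defaults to the path used in this guide:

```shell
# upload_hypervisor_iso: copies an ISO into the directory layout described
# above (function name is ours; hyp is one of esx, hyperv, or kvm).
upload_hypervisor_iso() {
  iso="$1"; hyp="$2"
  base="${FOUNDATION_HOME:-/home/nutanix/foundation}"
  mkdir -p "$base/isos/hypervisor/$hyp" &&
  cp "$iso" "$base/isos/hypervisor/$hyp/"
}
# Example: upload_hypervisor_iso VMware-ESXi-6.7.iso esx
```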

Verify Hypervisor Support


The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate
ISO images. ISO files are identified in the whitelist by their MD5 value (not file name), so verify that the
MD5 value of the ISO you want to use is listed in the whitelist file.

Before you begin


Download the latest whitelist file from the Foundation page on the Nutanix support portal (https://
portal.nutanix.com/#/page/foundation/list). For information about the contents of the whitelist file, see
Hypervisor ISO Images on page 36.

About this task

Note: AHV versions released after 20170830.270 are not listed in the iso_whitelist.json file and are
considered qualified by default.

To determine whether a hypervisor is supported, do the following:

Procedure

1. Obtain the MD5 checksum of the ISO that you want to use.

2. Open the downloaded whitelist file in a text editor and perform a search for the MD5 checksum.



What to do next
If the MD5 checksum is listed in the whitelist file, save the file to the workstation that hosts the Foundation
VM. If the whitelist file on the Foundation VM does not contain the MD5 checksum, you can replace that file
with the downloaded file before you begin installation.
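The two-step check above can be sketched as a small shell helper (the function name is ours; recall that ISOs are matched by MD5 checksum, not file name):

```shell
# check_iso_whitelist: prints whether an ISO's MD5 checksum appears in the
# downloaded iso_whitelist.json (function name is ours).
check_iso_whitelist() {
  iso="$1"; whitelist="$2"
  md5=$(md5sum "$iso" | awk '{print $1}')
  if grep -q "$md5" "$whitelist"; then
    echo "supported: $md5"
  else
    echo "not in whitelist: $md5"
  fi
}
# Example: check_iso_whitelist VMware-ESXi-6.7.iso iso_whitelist.json
```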

Set Up the Network

About this task


The network must be set up properly on site before imaging nodes through the Foundation tool. To set up the network
connections, do the following:

Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing tables). A flat switch is
often recommended to protect against configuration errors that could affect the production environment. Foundation
includes a multi-homing feature that allows you to image the nodes using production IP addresses despite being
connected to a flat switch. See Network Requirements on page 38 for general information about the network
topology and port access required for a cluster.

Procedure

1. Make sure that IPv6 is enabled on the network to which the nodes are connected and IPv6 multicast is supported.
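The IPv6 requirement in step 1 can be spot-checked from a Linux workstation on the same network. This is a hedged sketch (the helper name is ours; it assumes the iproute2 `ip` command and only confirms that the interface has an IPv6 address, not that the switch forwards IPv6 multicast):

```shell
# ipv6_enabled_on: succeeds if the interface has any IPv6 address, a
# prerequisite for Foundation node discovery (helper name is ours).
ipv6_enabled_on() {
  ip -6 addr show dev "$1" 2>/dev/null | grep -q inet6
}
# Example: ipv6_enabled_on eth0 || echo "enable IPv6 before imaging"
```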

2. If you are using a shared IPMI port to reinstall the hypervisor, follow the instructions in KB 3834.

3. (Nutanix NX Series) Connect the dedicated IPMI port and any one of the data ports to the switch. We highly
recommend that you use the dedicated IPMI port and a 10G data port. You may use a 1G port instead of a 10G
port at the cost of increased imaging time or imaging failure. If you use SFP+ 10G NICs and a 1G RJ45 switch
for imaging, connect the 10G port to the switch using one of our approved GBICs. You may also use the shared
IPMI/1G port in place of the dedicated port, as long as the BMC is configured to use it, but it is less reliable than
the dedicated port. Regardless of the ports you choose, physically disconnect all other ports. The IPMI LAN
interfaces of the nodes must be in failover mode (factory default setting).
Use the following guideline when connecting the IPMI port on G4 and later platforms: if you use the shared IPMI
port, make sure that the connected switch can auto-negotiate to 100 Mbps. This auto-negotiation capability is
required because the shared IPMI port can support 1 Gbps throughput only when the host is online. If the switch
cannot auto-negotiate to 100 Mbps when the host goes offline, make sure to use the dedicated IPMI port instead
of the shared port (the dedicated IPMI ports support 1 Gbps throughput at all times). Older platforms support only
10/100 Mbps throughput.

Note:

• Foundation does not support imaging nodes in an environment using LACP without fallback
enabled.
• Foundation does not support configuring nodes' virtual switches to use LACP. This has to be done
manually post imaging.
• Foundation does not support configuring network adapters to use jumbo frames during imaging. This
has to be done manually post imaging.

The exact location of the port depends on the model type. See the hardware documentation for your model to
determine the port location. The following figure illustrates the location of the network ports on the back of an
NX-3050 (middle RJ-45 interface).

Figure 8: Port Locations (NX-3050)

4. (Lenovo Converged HX Series) Lenovo HX-series systems require that you connect both the system management
(IMM) port and one of the 10 GbE ports. The following figure illustrates the location of the network ports on the
back of the HX3500 and HX5500.

Figure 9: Port Locations (HX System)

5. (Dell XC series) Connect the iDRAC port and one of the data ports. While some Dell XC Series systems, such as
the Dell XC430-4, support imaging over a 1 GbE network connection, other systems, such as the Dell XC640-10,
require a 10 GbE connection for imaging. Nutanix recommends that you use a 10 GbE port regardless of the
model of your appliance.

Figure 10: Port Locations (XC System)

6. (IBM POWER Servers) Connect the dedicated IPMI port and a data port to the network to which the Foundation
VM is connected.

7. Connect the installation workstation (see Prepare Workstation on page 15) to the same switch as the nodes.

Update the Foundation Service


The Foundation user interface enables you to perform one-click updates either over the air or from a tarball
that you manually upload to the Foundation VM. The over-the-air update process downloads and installs
the latest Foundation version from the Nutanix support portal. By design, the over-the-air update process
downloads and installs a tarball that does not include Lenovo packages. Therefore, for Lenovo platforms,
update Foundation by using an uploaded tarball.

Before you begin


If you want to install from a tarball of your choice (required for Lenovo platforms and optional for other
platforms), download the Foundation tarball to the workstation that you use to access or run Foundation.
Installers are available on the Foundation download page at https://portal.nutanix.com/#/page/foundation/list.

About this task


To update Foundation by using the graphical user interface, do the following:

Procedure

1. Open the CVM Foundation user interface.

2. In the gear icon menu at the top-right corner of the Foundation user interface, click Update Foundation.

3. In the Update Foundation dialog box, do one of the following:

» (Not to be used with Lenovo platforms) To perform a one-click, over-the-air update, click Update.
The dialog box displays the version to which you can update Foundation.
» (For Lenovo platforms; optional for other platforms) To update Foundation by using an installer that you
downloaded to the workstation, click Browse, browse to and select the tarball, and then click Install.
5
FOUNDATION APP (BETA RELEASE)
Foundation is available as a native Mac or Windows application that launches the Foundation UI in a browser. Unlike
the standalone Foundation VM, the app provides a simpler alternative that skips configuring and starting a VM.

Requirements
The Foundation app should be installed on a Mac or Windows computer that meets the following recommended
requirements:

• Windows 10 x64 / macOS 10.13.1 or higher.


• Latest version of 64-bit Java JDK.
• Quad-core CPU or higher.
• 8 GB RAM or higher.
• Wired connection between the computer and switch.

Limitations
• Only Nutanix nodes can be configured with the Foundation app.
• The app is currently in beta phase. Your experience may not be optimal.

Install Foundation App on macOS

About this task


Perform the following steps to install and launch Foundation app on macOS.

Procedure

1. Disable "stealth mode" of macOS Firewall.

2. Download the latest version of the 64-bit Java JDK from http://www.oracle.com/ and install it.

3. Download the Foundation .dmg file from Nutanix Portal.

4. Double-click the Foundation .dmg file.

5. Drag the Foundation app to the Applications folder. This installs the Foundation app.

6. Double-click the Foundation app in the Applications folder. The Foundation UI launches in the default browser,
or you can manually visit http://localhost:8000 in your preferred browser.

Note: To upgrade the app, download and install a higher version of app from Nutanix portal.

7. Allow the app to accept incoming connections when prompted by your Mac computer.

Foundation | Foundation App (Beta Release) | 25


8. To close the app, right-click the Foundation icon in the Launcher and click Force Quit.

Install Foundation App on Windows

About this task


Perform the following steps on the Windows PC to install and launch Foundation app.

Note: The installation kills any running Foundation process. If you have initiated Foundation with a previously
installed app, ensure that it is complete before launching the installation.

Procedure

1. Download the latest version of the 64-bit Java JDK from http://www.oracle.com/ and install it.

2. Ensure that the environment variable path is set for Java.

3. Ensure IPv6 is enabled on the network interface that connects the Windows PC to the switch.

4. Download the Foundation .msi installer file from Nutanix Portal.

5. You can perform the installation either by using the wizard or silently.

a. Double-click the downloaded MSI file to run the installation wizard.


OR
b. Run the command: msiexec.exe /i portable_foundation.msi /qb+ /l*v install.log
/qb+: Displays a basic user interface and a modal dialog box when installation completes.
/q: Performs a completely silent installation (use it in place of /qb+).
This command saves the installation log in the install.log file, which can help in debugging a failed
installation.

6. Double-click the Foundation icon on the Desktop or in the Start Menu. The Foundation UI launches in the default
browser, or you can manually visit http://localhost:8000/gui/index.html in your preferred browser.

Note: To upgrade the app, download a higher version of app from Nutanix portal and perform a fresh installation.
The fresh installation kills any running Foundation operation and updates the older version to the higher version. If
you have initiated Foundation with the older app, ensure that it is complete before doing a fresh installation of the
higher version.

Uninstall Foundation App on macOS

About this task


Perform the following steps to uninstall Foundation app on macOS.

Procedure

1. If the Foundation app is running, right-click the Foundation icon in the Launcher and click Force Quit.

2. Delete the downloaded Foundation .dmg file.

3. Drag the Foundation app from Application folder to Trash.



Uninstall Foundation App on Windows

About this task


Perform the following steps to uninstall Foundation app on Windows.

Note: Uninstallation does not remove the log and configuration files that the Foundation app creates. Therefore,
for a clean installation, it is recommended that you delete these files manually.

Procedure

1. Uninstall through Windows Apps & features.


OR

2. Run the command: msiexec.exe /X{BCD56AA1-664C-4EE8-8E01-AED3F0368234} /qb+ /l*v


uninstall.log
/qb+: Displays basic user interface, and a modal dialog box when uninstallation completes.
/q: Performs completely silent uninstallation.
This command saves the uninstallation log in the uninstall.log file, which can help in debugging a failed
uninstallation.



6
UPGRADE FOUNDATION
Upgrade from the Graphical User Interface

• You can upgrade CVM Foundation from version 3.12 or later to 4.3.x by using the Foundation Java applet
or the Prism web console. For information about upgrading Foundation by using the Java applet, see
Upgrade CVM Foundation by Using the Foundation Java Applet on page 12.
• For information about upgrading Foundation by using the Prism web console, see Cluster Management
> Software and Firmware Upgrades > Upgrading Foundation in the Prism Web Console Guide.

Note: The over-the-air update functionality for CVM Foundation is available in the Prism web console
only from AOS release 5.0 or higher. If you are running an earlier AOS release, upload a Foundation
tarball to the cluster and perform a one-click upgrade from the Prism web console.

• To upgrade CVM or standalone Foundation from the Foundation UI, click the version link in the
Foundation UI.
• To upgrade the Foundation app, download a higher version of app from Nutanix portal and perform a
fresh installation. The fresh installation kills any running Foundation operation and updates the older
version to the higher version. If you have initiated Foundation with the older app, ensure that it is
complete before doing a fresh installation of the higher version.
Upgrade from the Command-Line Interface

• To upgrade Foundation from version 3.1 or later to version 4.3.x by using the command line, do the
following:
1. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal to the
/home/nutanix/ directory.
2. Change your working directory to /home/nutanix/.
3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-version#.tar.gz

• To upgrade Foundation to version 4.3.x from a version earlier than 3.1 by using the command line, do the
following:
1. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal
to the /home/nutanix directory.
2. Copy the /home/nutanix/foundation/config/foundation_settings.json file to a safe
location. (You will copy it back in step 5.)
3. If you want to save the existing log files, copy the /home/nutanix/foundation/log directory to a
safe location or download the log archive from the following URL: http://foundation_ip:8000/
foundation/log_archive_tar.
4. If you want to preserve the existing ISO files (the contents of the isos and nos directories), enter the
following commands:
$ cd /home/nutanix

Foundation | Upgrade Foundation | 28


$ cp -r foundation/isos .
$ cp -r foundation/nos .

5. Do one of the following:

• If upgrading from 3.0.x to 4.3.x, enter the following commands:


$ cd /home/nutanix # If not already there
$ pkill -9 foundation
$ rm -rf foundation
$ tar xf foundation-version#.tar.gz
$ cp <path>/foundation_settings.json foundation/config/foundation_settings.json
# If step 4 done, save new AHV files and restore backed up ISO files
$ mv foundation/isos/hypervisor/kvm/* isos/hypervisor/kvm/
$ rm -rf foundation/isos foundation/nos
$ mv isos foundation/isos
$ mv nos foundation/nos
$ sudo service foundation_service restart

• If upgrading from 2.x to 4.3.x, enter the following commands:


$ cd /home/nutanix # If not already there
$ pkill -9 foundation
$ sudo fusermount -uz foundation/tmp/fuse # It is okay if this complains at you
$ rm -rf foundation
$ tar xf foundation-version#.tar.gz
$ cd /etc/init.d
$ sudo rm foundation_service
$ sudo ln -s /home/nutanix/foundation/bin/foundation_service
$ sudo yum -y install libunwind-devel
$ cd /home/nutanix
$ cp <path>/foundation_settings.json foundation/config/foundation_settings.json
# If step 4 done, save new AHV files and restore backed up ISO files
$ rm -rf foundation/isos
$ rm -rf foundation/nos
$ mv isos foundation/isos
$ mv nos foundation/nos
$ sudo service foundation_service restart



7
RUN FOUNDATION
• Automate filling Foundation UI on page 30
• Configure Nodes with Foundation on page 30

Automate filling Foundation UI


You can automate filling Foundation UI fields by using a configuration file. This file stores answers to most inputs
that the Foundation UI requests. To create or edit this configuration file, log in at https://install.nutanix.com with
your Nutanix Portal credentials. You can provide partial or complete Foundation UI configuration details. When you
run Foundation, import this file to load the configuration details. Using a configuration file has the following
benefits:

• Serves as a reusable baseline that saves you from re-entering configuration details in repeated or similar
Foundation operations.
• Lets you plan the configuration details in advance and import the file when you run Foundation later.
• Lets you invite others to review and edit your planned configuration.
• Imports NX nodes from a Salesforce order so that you do not have to add NX nodes manually.

Note: The configuration file only stores configuration settings and not AOS or hypervisor images. You can upload
images only in Foundation UI.

Configure Nodes with Foundation


Before you begin
Complete Prepare Factory-Imaged Nodes for Foundation on page 8 or Prepare Bare Metal Nodes for Foundation
on page 14.

Note:

• During this procedure, you assign IP addresses to the hypervisor host, the Controller VMs, and the
IPMI interfaces. Do not assign IP addresses from a subnet that overlaps with the 192.168.5.0/24 address
space on the default VLAN. Nutanix uses an internal virtual switch to manage network communications
between the Controller VM and the hypervisor host. This switch is associated with a private network on
the default VLAN and uses the 192.168.5.0/24 address space. If you want to use an overlapping subnet,
make sure that you use a different VLAN.
• Mixed-vendor clusters are not supported. For restrictions that apply to clusters of Nutanix nodes, see the
section Product Mixing Restrictions in the NX and SX Series Hardware Administration Guide.
• For a single imaged node that you need to re-image or use to form a 1-node cluster with CVM Foundation,
ensure that you launch CVM Foundation from another node. CVM Foundation can configure its own node
only if the operation includes one or more other nodes along with its own node.

Foundation | Run Foundation | 30


• You might need to update Foundation to a newer or specific version. For details, see Upgrade Foundation
on page 28.

Procedure

1. Launch the Foundation Start page in one of the following ways:


For Standalone Foundation:
1. On the Foundation VM desktop, double-click the Nutanix Foundation icon.

Note: See Prepare the Installation Environment on page 15 if Oracle VM VirtualBox is not started or the
Foundation VM is not running currently.

2. In a web browser inside the Foundation VM, visit the URL http://localhost:8000/gui/index.html.
3. After you have assigned an IP address to the Foundation VM, visit the URL
http://<foundation-vm-ip-address>:8000/gui/index.html from a web browser outside the VM.
For CVM Foundation:
1. In the Foundation Java applet, select a node and click the Launch Foundation button.
2. If you used the crash cart user interface to discover nodes in a VLAN-segmented network, visit the URL
http://<foundation-cvm-ip-address>:8000 from a browser on a workstation connected to the node's
network.
For Foundation app:
1. Double-click the Foundation executable file. For installation process and launch details, see the sections
Install Foundation App on macOS on page 25 or Install Foundation App on Windows on page 26.

2. On the Start page, you can:

• Import a Foundation configuration file created on the Portal https://install.nutanix.com. For details, see
Automate filling Foundation UI on page 30.
• (Optional) Configure LACP or LAG for network connections between the nodes and the switch.
• Assign VLANs to IPMI and CVM/host networks with standalone Foundation 4.3.2 or higher.
• Select the workstation network adapter that connects to the nodes' network.
• Specify the subnets and gateway addresses for the cluster and the IPMI network.
• Create and assign two IP addresses to Foundation app or standalone workstation for multi-homing.

3. The Nodes page discovers blocks of nodes and lists them in ascending order.
You can do the following:

• From the Tools menu, add nodes manually.

Note: You can manually add nodes only in standalone Foundation.

• If you manually add multiple blocks in a single operation, all added blocks get the same number of
nodes. To add blocks with different numbers of nodes, add all blocks with the highest number of nodes
and then delete nodes from each block as applicable. Alternatively, repeat the add process to
separately add blocks with different numbers of nodes.
• From the Tools menu, reorder or re-arrange the blocks to match the order of IP addresses and hypervisor
hostnames that you want to assign.
• To remove a node from the Foundation process, un-select a node row and click Remove Unselected Rows
on the Tools menu.
• Individually assign the IP addresses and hypervisor hostname for each node or click Range Autofill in the
Tools menu to bulk-assign these details using the autofill row.

Note: Unlike CVM Foundation, standalone Foundation does not validate these IP addresses by checking for
uniqueness. Hence, manually cross-check and ensure that the IP addresses are unique and valid.
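Because standalone Foundation does not check the addresses for uniqueness, a quick pre-check of your planned list can save a failed run. A hedged sketch (the helper name is ours, not part of Foundation):

```shell
# check_unique_ips: prints any address that appears more than once in the
# planned list (helper name is ours); empty output means no duplicates.
check_unique_ips() {
  printf '%s\n' "$@" | sort | uniq -d
}
# Example:
# dups=$(check_unique_ips 10.1.1.11 10.1.1.12 10.1.1.11)
# [ -n "$dups" ] && echo "duplicate IPs: $dups"
```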

4. The Cluster page lets you provide cluster details and either configure cluster formation or just image the nodes
without forming a cluster. You can also enable network segmentation to separate CVM network traffic from user
VM and hypervisor network traffic.

Note:

• The Cluster Virtual IP field is essential for Hyper-V clusters but optional for ESXi and AHV
clusters.
• To provide multiple DNS or NTP servers, enter a comma-separated list of IP addresses.
• For best practices in configuring NTP servers, see the section Recommendations for Time
Synchronization in Prism Web Console Guide.

5. On the AOS page, you can specify and upload AOS images and view the version of the AOS image currently
installed on the nodes. You can skip updating the CVMs with AOS if the CVMs of all discovered nodes already
have the AOS version that you want to use.

6. On the Hypervisor page, you can:

• Specify and upload hypervisor image files.
• View the versions of the hypervisors currently installed on the nodes.
• Upload the latest hypervisor whitelist JSON file, which you can download from the Nutanix support portal.
This file lists the supported hypervisors.

Note:

• You can select one or more nodes to be storage-only nodes, which host AHV only. Image the rest of
the nodes with another hypervisor to form a multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still re-image the hypervisors.
Hypervisor-only imaging is supported. However, imaging CVMs with AOS without imaging
hypervisors is not supported.
• [Hyper-V only] If you choose Hyper-V, from the Choose Hyper-V SKU list that is displayed,
select the SKU that you want to use.
Four Hyper-V SKUs are supported: Standard, Datacenter, Standard with GUI, and
Datacenter with GUI.



7. Standalone Foundation needs remote IPMI access to the nodes, so the standalone Foundation UI has an additional
IPMI page, where you provide the IPMI access credentials for each node.
To provide credentials for all nodes at once, open the Tools menu and either use the autofill row or assign a
vendor's default IPMI credentials to all nodes.

8. The Installation in Progress page displays the progress status and lets you view the individual Log for in-
progress or completed operations of all the nodes. You can click Back to Configuration to have a read-only
view of the configuration details while the installation is in progress.

Note: You can abort an ongoing installation in standalone Foundation but not in CVM Foundation.

Results
After all operations complete successfully, the Installation finished page is displayed.

Note: If you have missed something and want to reconfigure and redo the installation, you can click Reset to go back
to the Start page and redo the Foundation process.



8
POST INSTALLATION STEPS
Configure a New Cluster in Prism
About this task
After creating the cluster, you can configure it through the Prism web console. A storage pool and a container are
created automatically when the cluster is created, but many other set up options require user action. The following are
common cluster set up steps typically done soon after creating a cluster. (All the sections cited in the following steps
are in the Prism Web Console Guide.)

Procedure

1. Verify the cluster has passed the latest Nutanix Cluster Check (NCC) tests.

a. Check the installed NCC version and update it if a later version is available (see the "Software and Firmware
Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to any Controller VM in
the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before proceeding. If you are unable
to resolve the issues, contact Nutanix support for assistance.
c. Configure NCC so that the cluster checks are run and emailed according to your desired frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer of at least 4 to specify how frequently NCC is run and results are emailed.
For example, to run NCC and email results every 12 hours, specify 12; or every 24 hours, specify 24, and so
on. For other commands related to automatically emailing NCC results, see "Automatically Emailing NCC
Results" in the Nutanix Cluster Check (NCC) Guide for your version of NCC.

2. Specify the timezone of the cluster.


Specifying the timezone must be done from the Nutanix command line (nCLI). While logged in to the Controller
VM (see previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone

Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/London, or
Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. Because a cluster can tolerate
only one unavailable Controller VM at a time, restart the Controller VMs in series, waiting until each has finished
starting before proceeding to the next. See the Command Reference for more information about using the nCLI.
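Before running the set-timezone command, you can sanity-check the timezone string against the tzdata database. The following is an illustrative sketch, not a Nutanix tool; it assumes the standard Linux zoneinfo location, which is present on the Controller VM.

```shell
# Hypothetical pre-check: verify that a timezone string exists in the
# tzdata database before passing it to "ncli cluster set-timezone".
valid_timezone() {
  if [ -f "/usr/share/zoneinfo/$1" ]; then
    echo "valid"
  else
    echo "invalid"
  fi
}

valid_timezone "America/Los_Angeles"
```

A misspelled zone (for example, America/LosAngeles) prints "invalid" before you commit the change cluster-wide.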

3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).



4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote support tunnel
(see the "Controlling Remote Connections" section).

CAUTION: Failing to enable remote support prevents Nutanix support from directly addressing cluster issues.
Nutanix recommends that all customers enable email alerts at minimum, because email alerting enables proactive
support of customer issues.

5. If the site security policy allows Nutanix support to collect cluster status information, enable the Pulse feature (see
the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide more informed and
proactive help.

6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You also have the option to specify email recipients for specific alerts (see the "Configuring Alert Policies"
section).

7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster elements,
enable that feature (see the "Software and Firmware Upgrades" section).

Note: Allow access to the following through your firewall to ensure that automatic download of updates can
function:

• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80

8. License the cluster (see the "License Management" section).

9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.

• vCenter: See the Nutanix vSphere Administration Guide.


• SCVMM: See the Nutanix Hyper-V Administration Guide.



9
HYPERVISOR ISO IMAGES
An AHV ISO image is included as part of Foundation. However, customers must provide ISO images for other
hypervisors. Check with your hypervisor manufacturer's representative, or download an ISO image from their support
site:

Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available on the VMware website
(www.vmware.com) at Downloads > Product Downloads > vSphere > Custom ISOs.

Make sure that the MD5 checksum of the hypervisor ISO image is listed in the ISO whitelist file used by Foundation.
See Verify Hypervisor Support on page 12.
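You can perform the checksum comparison from a shell before starting the install. This is a sketch; the ISO and whitelist file names are placeholder examples, and only the standard md5sum and awk utilities are assumed.

```shell
# Compute an ISO image's MD5 checksum for comparison against the whitelist.
iso_md5() {
  md5sum "$1" | awk '{print $1}'
}

# Example usage (file names are placeholders):
#   grep -q "$(iso_md5 VMware-ESXi-6.0.iso)" iso_whitelist.json \
#     && echo "ISO is whitelisted" || echo "ISO not in whitelist"
```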
The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.

Table 5: iso_whitelist.json Fields

(n/a): The key of each entry is the MD5 value for that ISO image.
min_foundation: The earliest Foundation version that supports this ISO image. For example, "2.1" indicates that
you can install this ISO image using Foundation version 2.1 or later (but not an earlier version).
hypervisor: The hypervisor type (esx, hyperv, or kvm). The "kvm" designation means AHV. Entries with a "linux"
hypervisor are not available; they are for Nutanix internal use only.
min_nos: The earliest AOS version compatible with this hypervisor ISO. A null value indicates that there are no
restrictions.
friendly_name: A descriptive name for the hypervisor version, for example "ESX 6.0" or "Windows 2012r2".
version: The hypervisor version, for example "6.0" or "2012r2".
unsupported_hardware: The Nutanix models on which this ISO cannot be used. A blank list indicates that there are
no model restrictions. However, conditional restrictions, such as the limitation that Haswell-based models support
only ESXi version 5.5 U2a or later, may not be reflected in this field.
skus (Hyper-V only): The Hyper-V types (datacenter and standard) that are supported with this ISO image.
compatible_versions: Regular expressions describing the hypervisor versions that can coexist with the ISO version
in an Acropolis cluster (primarily for internal use).
deprecated (optional field): The Foundation version from which this hypervisor image is no longer supported. If the
value is "null", the image is supported by all Foundation versions to date.



The following are sample entries from the whitelist for an ESX and an AHV image.
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"unsupported_hardware": [],
"compatible_versions": {
"esx": ["^6\\.0.*"]
},
},

"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},
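Given the structure above, an entry can be looked up by its MD5 key. The following sketch assumes python3 is available on the workstation; the checksum is the ESX 6.0 example shown above.

```shell
# Look up a whitelist entry by its MD5 key and print its friendly_name.
whitelist_lookup() {  # $1 = path to iso_whitelist.json, $2 = MD5 checksum
  python3 -c '
import json, sys
entries = json.load(open(sys.argv[1]))["iso_whitelist"]
entry = entries.get(sys.argv[2])
print(entry["friendly_name"] if entry else "not in whitelist")
' "$1" "$2"
}

# whitelist_lookup iso_whitelist.json 478e2c6f7a875dd3dacaaeb2b0b38228
```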



10
NETWORK REQUIREMENTS
When configuring a Nutanix block, you will need to ask for the IP addresses of components that should already
exist in the customer network, as well as IP addresses that can be assigned to the Nutanix cluster. You will also need
to make sure to open the software ports that are used to manage cluster components and to enable communication
between components such as the Controller VM, Web console, Prism Central, hypervisor, and the Nutanix hardware.
Nutanix recommends that you specify information such as a DNS server and NTP server even if the cluster is not
connected to the Internet or not running in a production environment.

Existing Customer Network


You will need the following information during the cluster configuration:

• Default gateway
• Network mask
• DNS server
• NTP server
You should also check whether a proxy server is in place in the network. If so, you will need the IP address and port
number of that server when enabling Nutanix support on the cluster.

New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:

• IPMI interface
• Hypervisor host
• Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller VMs and
hypervisor hosts can be on this network, which must be isolated and protected.
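The same-subnet requirement can be checked with simple shell arithmetic before you fill in the Foundation configuration. This is an illustrative sketch (the addresses and netmask are examples), not part of Foundation.

```shell
# Convert a dotted-quad address to an integer, then compare network parts.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

same_subnet() {  # $1 = first IP, $2 = second IP, $3 = netmask
  mask=$(ip_to_int "$3")
  if [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]; then
    echo "same subnet"
  else
    echo "different subnets"
  fi
}

same_subnet 10.1.0.11 10.1.0.31 255.255.255.0
```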

Software Ports Required for Management and Communication


The following Nutanix network port diagrams show the ports that must be open for supported hypervisors. The
diagrams also show the ports that must be opened for infrastructure services.



Figure 11: Nutanix Network Port Diagram for VMware ESXi

Figure 12: Nutanix Network Port Diagram for AHV

Figure 13: Nutanix Network Port Diagram for Microsoft Hyper-V



11
HYPER-V INSTALLATION REQUIREMENTS
Ensure that the following requirements are met before installing Hyper-V.

Windows Active Directory Domain Controller


Requirements:

• The primary domain controller must run Windows Server 2008 R2 or later.

Note: If you use a Volume Shadow Copy Service (VSS)-based backup tool (for example, Veeam), the functional
level of Active Directory must be 2008 or higher.

• Active Directory Web Services (ADWS) must be installed and running. By default, connections are made over
TCP port 9389, and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, use a domain administrator account to log
on to a Windows host (other than the domain controller) that is joined to the same domain and has the
RSAT-AD-PowerShell feature installed, and run the following PowerShell command. If the command prints the
name of the domain controller, then ADWS is installed and the port is open.
> (Get-ADDomainController).Name

• The domain controller must run a DNS server.

Note: If any of the above requirements are not met, you need to manually create an Active Directory computer
object for the Nutanix storage in the Active Directory, and add a DNS entry for the name.

• Ensure that the Active Directory domain is configured correctly for consistent time synchronization.
Accounts and Privileges:

• An Active Directory account with permission to create new Active Directory computer objects for either a storage
container or Organizational Unit (OU) where Nutanix nodes are placed. The credentials of this account are not
stored anywhere.
• An account that has sufficient privileges to join a Windows host to a domain. The credentials of this account are
not stored anywhere. These credentials are only used to join the hosts to the domain.
Additional Information Required:

• The IP address of the primary domain controller.

Note: The primary domain controller IP address is set as the primary DNS server on all the Nutanix hosts. It is
also set as the NTP server in the Nutanix storage cluster to keep the Controller VM, host, and Active Directory time
synchronized.

• The fully qualified domain name to which the Nutanix hosts and the storage cluster are going to be joined.



SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:

• The SCVMM version must be at least 2012 R2, and it must be installed on Windows Server 2012 or a newer
version.
• The SCVMM server must allow PowerShell remoting.
To test this, log on to a Windows host other than the SCVMM host (for example, the domain controller) by
using the SCVMM administrator account, and run the following PowerShell command. If the command prints the
name of the SCVMM server, then PowerShell remoting to the SCVMM server is not blocked.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN\username

Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain name.

Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM setup manually by
using the SCVMM user interface.

• The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the following
command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential MYDOMAIN\username

Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory domain name.
• The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to False. To
verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration | FL RequireSecuritySignature}

Replace scvmm_server_name with the SCVMM host name.


A domain policy can force this setting to True. In that case, modify the domain policy to set it to False;
otherwise, a local change back to False can be reverted when the policy is reapplied. To change the setting, log
on to the SCVMM host as a domain administrator and run the following command in PowerShell.
Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

If you are changing the value from True to False, confirm that the policies applied to the SCVMM host have
the correct value. On the SCVMM host, run rsop.msc to review the resultant set of policy details, and verify
the value under Servername > Computer Configuration > Windows Settings > Security Settings >
Local Policies > Security Options > Policy Microsoft network client: Digitally sign
communications (always). The value displayed in RSOP must be Disabled or Not Defined for the change
to persist. If RSOP shows the value as Enabled, update the group policies that are configured in the domain to
apply to the SCVMM server to set it to Disabled; otherwise, RequireSecuritySignature changes back to True at
a later time. After setting the policy in Active Directory and propagating it to the domain controllers, refresh the
SCVMM server policy by running gpupdate /force, and confirm in RSOP that the value is Disabled.

Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix cluster. In this case, it is
important to ensure that the time remains synchronized between the Active Directory server, the Nutanix hosts,
and the Nutanix Controller VMs. The Nutanix hosts and the Controller VMs set their NTP server as the Active
Directory server, so it should be sufficient to ensure that Active Directory domain is configured correctly for
consistent time synchronization.

Accounts and Privileges:

• When adding a host or a cluster to SCVMM, the run-as account that you specify for managing the host or
cluster must be different from the service account that was used to install SCVMM.
• The run-as account must be a domain account and must have local administrator privileges on the Nutanix hosts.
This can be a domain administrator account. When the Nutanix hosts are joined to the domain, domain
administrator accounts automatically take administrator privileges on the hosts. If the domain account used as the
run-as account in SCVMM is not a domain administrator account, you need to manually add it to the list of local
administrators on each host by running sconfig.

• SCVMM domain account with administrator privileges on SCVMM and PowerShell remote execution
privileges.
• If you want to install SCVMM server, a service account with local administrator privileges on the SCVMM
server.

IP Addresses

• One IP address for each Nutanix host.


• One IP address for each Nutanix Controller VM.
• One IP address for each Nutanix host IPMI interface.
• One IP address for the Nutanix storage cluster.
• One IP address for the Hyper-V failover cluster.

Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same subnet.
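The (3*N + 2) rule above can be expressed as a quick calculation, counting one IPMI, one host, and one Controller VM address per node, plus one address each for the storage cluster and the Hyper-V failover cluster:

```shell
# Total IP addresses needed for an N-node Hyper-V cluster: 3*N + 2.
required_ips() {
  echo $(( 3 * $1 + 2 ))
}

required_ips 4   # a four-node cluster needs 14 addresses
```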

DNS Requirements

• Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added to the DNS
server during domain joining.
• The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which must be added to the DNS
server when the storage cluster is joined to the domain.
• The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets automatically added to
the DNS server when the failover cluster is created.
• After the Hyper-V configuration, all names must resolve to an IP address from the Nutanix hosts, the SCVMM
server (if applicable), and any other host that needs access to the Nutanix storage, for example, a host running
Hyper-V Manager.
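The 15-character limit above can be checked before joining the domain. A minimal sketch; the sample name is hypothetical.

```shell
# Check that a host, storage cluster, or failover cluster name fits the
# 15-character limit.
name_ok() {
  if [ "${#1}" -le 15 ]; then
    echo "ok"
  else
    echo "too long (${#1} chars)"
  fi
}

name_ok "NTNX-HV-CLUSTER"   # exactly 15 characters, so this prints "ok"
```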

Storage Access Requirements

• Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by name, not the external
IP address. If you use the IP address, it directs all the I/O to a single node in the cluster and thereby compromises
performance and scalability.

Note: For external non-Nutanix hosts that need to access Nutanix SMB shares, see Nutanix SMB Shares
Connection Requirements from Outside the Cluster.



Host Maintenance Requirements

• When applying Windows updates to the Nutanix hosts, restart the hosts one at a time, ensuring that the Nutanix
services come up fully in the Controller VM of the restarted host before updating the next host. You can
accomplish this by using Cluster Aware Updating with a Nutanix-provided script plugged into the Cluster Aware
Update Manager as a pre-update script. This pre-update script ensures that the Nutanix services go down on only
one host at a time, which ensures availability of storage throughout the update procedure. For more information
about cluster-aware updating, see Installing Windows Updates with Cluster-Aware Updating.

Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the domain policies.



12
SET IPMI STATIC IP ADDRESS
You can assign a static IP address to an IPMI port through the BIOS setup utility.

About this task


To configure a static IP address for the IPMI port on a node, do the following:

Procedure

1. Connect a VGA monitor and USB keyboard to the node.

2. Power on the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.

4. Select the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter key.

6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up window.



7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.

8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in the
pop-up window.

9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up window.

10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network gateway in the
pop-up window.

11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup mode.



13
TROUBLESHOOTING
This section provides guidance for fixing problems that might occur during a Foundation installation.

• For help with IPMI configuration problems in a bare metal workflow, see Fix IPMI Configuration Problems on
page 46.
• For help with imaging problems, see Fix Imaging Problems on page 47.
• For answers to other common questions, see Frequently Asked Questions (FAQ) on page 48.

Fix IPMI Configuration Problems


About this task
In a bare metal workflow, when the IPMI port configuration fails for one or more nodes in the cluster, or the
configuration succeeds but type detection fails because it cannot reach an IPMI IP address, the installation process
stops before imaging any of the nodes. (Foundation does not proceed to the imaging step after an IPMI port
configuration failure, but it does try to configure the port address on all nodes before stopping.) Possible reasons
for a failure include the following:

• One or more IPMI MAC addresses are invalid or there are conflicting IP addresses. Go to the IPMI screen and
correct the IPMI MAC and IP addresses as needed.
• There is a user name/password mismatch. Go to the IPMI page and correct the IPMI username and password
fields as needed.
• One or more nodes are connected to the switch through the wrong network interface. Go to the back of the nodes
and verify that the first 1GbE network interface of each node is connected to the switch (see Set Up the Network
on page 22).
• The Foundation VM is not in the same broadcast domain as the Controller VMs for discovered nodes or the IPMI
interface for added (bare metal or undiscovered) nodes. This problem typically occurs because (a) you are not
using a flat switch, (b) some node IP addresses are not in the same subnet as the Foundation VM, and (c) multi-
homing was not configured.

• If all the nodes are in the Foundation VM subnet, go to the Node page and correct the IP addresses as needed.
• If the nodes are in multiple subnets, go to the Cluster page and configure multi-homing.
• The IPMI interface is not set to failover. You can check for this through the BIOS.
To identify and resolve IPMI port configuration problems, do the following:

Procedure

1. Go to the Block & Node Config screen and review the problem IP address for the failed nodes (nodes with a red
X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting information. This can
help you diagnose the problem. See the service.log file (in /home/nutanix/foundation/log) and the
individual node log files for more detailed information.

Figure 14: Foundation: IPMI Configuration Error

2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button at the top
of the screen.

Figure 15: Configure IPMI Button

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at the top of
the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those nodes and
continue to the imaging step for the other nodes by clicking the Proceed button. In this case you must configure
the IPMI port address manually for each bypassed node (see Set IPMI Static IP Address on page 44).

Fix Imaging Problems


About this task
When imaging fails for one or more nodes in the cluster, the progress bar turns red and a red check appears next to
the hypervisor address field for any node that was not imaged successfully. Possible reasons for a failure include the
following:

• A type detection failure occurred. Check connectivity to the IPMI interface (bare metal workflow).
• There were network connectivity issues such as the following:

• The connection is dropping intermittently. If intermittent failures persist, look for conflicting IPs.
• [Hyper-V only] SAMBA is not up. If Hyper-V complains that it failed to mount the install share, restart
SAMBA with the command "sudo service smb restart".
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free up some space by
deleting extraneous ISO images. In addition, a Foundation crash could leave a /tmp/tmp* directory that contains
a copy of an ISO image which you can unmount (if necessary) and delete. Foundation needs about 9 GB of free
space for Hyper-V and about 3 GB for ESXi or AHV.

• The host boots but complains it cannot reach the Foundation VM. The message varies per hypervisor. For
example, on ESXi you might see a "ks.cfg:line 12: "/.pre" script returned with an error" error message. Make sure
you have assigned the host an IP address on the same subnet as the Foundation VM or you have configured multi-
homing. Also check for IP address conflicts.
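For the disk-space failure above, a quick check of free space on the Foundation VM confirms whether enough room is available (roughly 9 GB for Hyper-V, 3 GB for ESXi or AHV). A sketch using the standard df and awk utilities; the path argument is an example.

```shell
# Print whole gigabytes free on the filesystem containing the given path,
# for example /home/nutanix/foundation on the Foundation VM.
free_gb() {
  df -Pk "$1" | awk 'NR==2 { printf "%d\n", $4 / (1024 * 1024) }'
}

free_gb /
```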
To identify and resolve imaging problems, do the following:

Procedure

1. See the individual log file for any failed nodes for information about the problem.

• Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/foundation.out[.timestamp]
• Bare metal location for Foundation logs: /home/nutanix/foundation/log

2. When you have corrected the problems and are ready to try again, click the Image Nodes (bare metal workflow)
button.

Figure 16: Image Nodes Button (bare metal)

3. Repeat the preceding steps as necessary to fix all the imaging errors.
If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a time
(Contact Support for help).

Frequently Asked Questions (FAQ)


This section provides answers to some common Foundation questions.

Installation Issues

• How do I deploy a 1-node or 2-node cluster?


Run Foundation just as you would to deploy a cluster of three or more nodes.
• What steps should I take when I encounter a problem?
Click the appropriate log link on Foundation UI. In most cases the log file should provide some information about
the problem near the end of the file. If that information (plus the information in this troubleshooting section) is
sufficient to identify and solve the problem, fix the issue and then restart the imaging process.
If you were unable to fix the problem, open a Nutanix support case. You can do this from the Nutanix support
portal (https://portal.nutanix.com/#/page/cases/form?targetAction=new). Upload relevant log files as requested.
The log files are in the following locations:

• Standalone (bare metal) location for Foundation logs: /home/nutanix/foundation/log in your Foundation VM.
This directory contains a service.log file for Foundation-related log messages, a log file for each node
being imaged (named node_cvm_ip_addr.log), a log file for each cluster being created (named
cluster_cluster_name.log, cluster_1.log, and so on), http.access and http.error files for server-related
log messages, a debug.log file that records all information that Foundation outputs, and an api.log file that
records certain requests made to the Foundation API. Logs from past installations are stored in
/home/nutanix/foundation/log/archive. In addition, the state of the current install process is stored in
/home/nutanix/foundation/persisted_config.json. You can download the entire log archive from the
following URL: http://foundation_ip:8000/foundation/log_archive_tar
• Controller VM location for Foundation logs: ~/data/logs/foundation (see preceding content description)
and ~/data/logs/foundation.out[.timestamp], which corresponds to the service.log file.
• I want to troubleshoot the operating system installation during cluster creation.
Point a VNC console to the hypervisor host IP address of the target node at port 5901.
• I need to restart Foundation on the Controller VM.
To restart Foundation, log on to the Controller VM with SSH and then run the following command:
nutanix@cvm$ pkill foundation && genesis restart

• My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IPs are reachable through Foundation. (On rare occasion the IPMI IP assignment will
take some time.) If you get a complaint about authentication, double-check your password. If the problem persists,
try resetting the BMC.
• Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.
Verify that the LAN interface is set to failover mode in the IPMI settings for each node. You can find this setting
by logging into IPMI and going to Configuration > Network > Lan Interface. Verify that the setting is
Failover (not Dedicate).
• The diagnostic box was checked to run after installation, but that test (diagnostics.py) does not complete
(hangs, fails, times out).
Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables might not provide the
performance necessary to run this test at a reasonable speed.
• Foundation seems to be preparing the ISOs properly, but the nodes boot into the previous hypervisor and the
install hangs.
The boot order for one or more nodes might be set incorrectly to select the USB over SATA DOM as the first boot
device instead of the CDROM. To fix this, boot the nodes into BIOS mode and either select "restore optimized
defaults" (F3 as of BIOS version 3.0.2) or give the CDROM boot priority. Reboot the nodes and retry the
installation.
• I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for the
callback function, and is there a way I can avoid the wait?
The callback timeout is 60 minutes. To stop the Foundation process and restart it, open a terminal in the
Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start
Refresh the Foundation web page. If the nodes are still stuck, reboot them.
• I need to reset a block to the default state.
Using the bare metal imaging workflow, download the desired Phoenix ISO image for AHV from the support
portal (see https://portal.nutanix.com/#/page/phoenix/list). Boot each node in the block to that ISO and follow the
prompts until the re-imaging process is complete. You should then be able to use Foundation as usual.
• The cluster create step is not working.
If you are installing NOS 3.5 or later, check the service.log file for messages about the problem. Next, check
the relevant cluster log (cluster_X.log) for cluster-specific messages. The cluster create step in Foundation is
not supported for earlier releases and will fail if you are using Foundation to image a pre-3.5 NOS release. You
must create the cluster manually (after imaging) for earlier NOS releases.
• I want to re-image nodes that are part of an existing cluster.
Do a cluster destroy prior to discovery. (Nodes in an existing cluster are ignored during discovery.)
• My Foundation VM is complaining that it is out of disk space. What can I delete to make room?
Unmount any temporarily-mounted file systems using the following commands:
$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*
If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
• I keep seeing the message "tar: Exiting with failure status due to previous errors 'tar rf /home/nutanix/
foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./persisted_config.json'
failed; error ignored."
This is a benign message. Foundation archives your persisted configuration file (persisted_config.json) alongside
the logs. Occasionally, there is no configuration file to back up. This is expected, and you may ignore this
message with no ill consequences.
• Imaging fails after changing the language pack.
Do not change the language pack. Only the default English language pack is supported. Changing the language
pack can cause some scripts to fail during Foundation imaging. Even after imaging, character set changes can
cause problems for NOS.
• [ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it.

Network and Workstation Issues

• I am having trouble installing VirtualBox on my Mac.


Turning off the WiFi can sometimes resolve this problem. For help with VirtualBox issues, see
https://www.virtualbox.org/wiki/End-user_documentation.
There can be a problem when the USB Ethernet adapter is listed as a 10/100 interface instead of a 1G interface.
To support a 1G interface, it is recommended that MacBook Air users connect to the network with a Thunderbolt
network adapter rather than a USB network adapter.
• I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM on
VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see
https://forums.virtualbox.org/viewtopic.php?f=8&t=58767.
• I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically creates
a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run the following
commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now
This should reboot your machine and reset your adapter to eth0.

• I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but discovery
is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6 link-
local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in. (If they are,
the Controller VMs might choose to direct their traffic over that interface and never reach your Foundation VM.)
If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and the 10G ports are connected.
• The switch is dropping my IPMI connections in the middle of imaging.
If your network connection seems to be dropping out in the middle of imaging, try using an unmanaged switch
with spanning tree protocol disabled.
• Foundation is stalled on the ping home phase.
The ping test will wait up to two minutes per NIC to receive a response, so a long delay in the ping phase indicates
a network connection issue. Check that your 10G cables are unplugged and your 1G connection can reach
Foundation.
• How do I install on a 10/100 switch?
A 10/100 switch is not recommended, but it can be used for a few nodes; you may see timeouts, however. It is
highly recommended that you use a 1G or 10G switch if one is available to you.

Informational Topics

• How can I determine whether a node was imaged with Foundation or standalone Phoenix?

• A node imaged using standalone Phoenix will have the file /etc/nutanix/foundation_version in it, but
the contents will be “unknown” instead of a valid foundation version.
• A node imaged using Foundation will have the file /etc/nutanix/foundation_version in it with a valid
foundation version.
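The check above can be scripted. The following sketch uses the file path from the answer; the copy under /tmp only simulates the file for illustration (on a real node you would read /etc/nutanix/foundation_version directly):

```shell
# A Phoenix-imaged node has "unknown" in this file, while a Foundation-imaged
# node has a real version string. Simulate the file with a /tmp copy.
ver_file=/tmp/foundation_version            # on a real node: /etc/nutanix/foundation_version
echo "unknown" > "$ver_file"                # stand-in contents for a Phoenix-imaged node
if [ "$(cat "$ver_file")" = "unknown" ]; then
  echo "imaged with standalone Phoenix"
else
  echo "imaged with Foundation ($(cat "$ver_file"))"
fi
```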
• Does first boot work when run more than once?
First boot creates a failure marker file whenever it fails and a success marker file whenever it succeeds. If the
first boot script needs to be executed again, delete these marker files and manually execute the script.
• Do the first boot marker files contain anything?
They are just empty files.
• Why might first boot fail?
Possible reasons include the following:

• First boot may take more time than expected, in which case Foundation might time out.
• NIC team creation fails.
• The Controller VM has a kernel panic when it boots.
• Hostd does not come up in time.
• What is the timeout for first boot?
The timeout is 90 minutes. A node may restart several times during the execution of the first boot script (as
required by certain driver installations), and this can increase the overall first boot time.

• How does the Foundation process differ on a Dell system?
Foundation uses a different tool, called racadm, to talk to the IPMI interface of a Dell system, and the files that
contain the hardware layout details are different. However, the overall Foundation workflow (the series of steps)
remains the same.
• How does the Foundation service start in the Controller VM-based and standalone versions?

• Standalone: Manually start the Foundation service using foundation_service start (in the ~/
foundation/bin directory).
• Controller VM-based: Genesis service takes care of starting the Foundation service. If the Foundation service
is not already running, use the genesis restart command to start Foundation. If the Foundation service
is already running, a genesis restart will not restart Foundation. You must manually kill the Foundation
service that is running currently before executing genesis restart. The genesis status command lists
the services running currently along with their PIDs.
• Why doesn’t the genesis restart command stop Foundation?
Genesis restarts only the services required for a cluster to be up and running. Stopping Foundation could cause
failures to current imaging sessions. For example, when expanding a cluster Foundation may be in the process of
imaging a node, which should not be disrupted by restarting Genesis.
• How is the installer VM created?
The Qemu library is part of Phoenix. The qemu command starts the VM by taking a hypervisor ISO and disk
details as input. This command is simply executed on Phoenix to launch the installer VM.
• How do you validate that installation is complete and the node is ready with regards to firstboot?
This can be validated by checking the presence of a first boot success marker file. The marker file varies per
hypervisor:

• ESXi: /bootbank/first_boot.log
• AHV: /root/.firstboot_success
• HyperV: D:\markers\firstboot_success
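A minimal shell sketch of that validation, using the AHV marker path listed above (the /tmp file here only simulates the marker; on a real AHV host you would check /root/.firstboot_success):

```shell
# Check for the first boot success marker described above.
marker=/tmp/.firstboot_success              # on a real AHV host: /root/.firstboot_success
touch "$marker"                             # simulate a node that finished first boot
if [ -f "$marker" ]; then
  echo "first boot complete"
else
  echo "first boot not finished"
fi
```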
• Does Repair CVM re-create partitions?
Repair CVM images AOS alone and recreates the partitions on the SSD. It does not touch the SATADOM, which
contains the hypervisor.
• Can I use older Phoenix ISOs for manual imaging?
Use a Phoenix ISO that contains both the AOS installation bundle and the hypervisor ISO. The Makefile has a
separate target for building such a standalone Phoenix ISO.
• What are the pre-checks run when a node is added?

• The hypervisor type and version should match between the existing cluster and the new node.
• The AOS version should match between the existing cluster and the new node.
• Can I get a map of percent completion to step?
No. The percent completion does not have a one-to-one mapping to the step. Percent completion depends on the
different tasks which actually get executed during imaging.

• Do the log folders contain past imaging session logs?
Yes. All the previous imaging session logs are compressed (on a session basis) and archived in the folder ~/
foundation/log/archive.
• If I have two clusters in my lab, can I use one to do bare-metal imaging on the other?
No. This is because the tools and packages which are required for bare-metal imaging are typically not present in
the Controller VM.
• How do you add a new node that needs to be imaged to an existing cluster?
If the cluster is running AOS 4.5 or later and the node also has 4.5 or later, you can use the "Expand Cluster"
option in the Prism web console. This option employs Foundation to image the new node (if required) and then
adds it to the existing cluster. You can also add the node through the nCLI:
ncli cluster add-node node-uuid=<uuid>
The UUID value can be found in the factory_config.json file on the node.
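Extracting the UUID from factory_config.json can be scripted. In this sketch the "node_uuid" field name and the sample value are assumptions, and the /tmp file stands in for the real /etc/nutanix/factory_config.json on the node:

```shell
# Pull the UUID out of a factory_config.json so it can be passed to ncli.
# The "node_uuid" field name and sample value below are illustrative assumptions.
echo '{"node_uuid": "00000000-1111-2222-3333-444444444444"}' > /tmp/factory_config.json
uuid=$(sed -n 's/.*"node_uuid" *: *"\([^"]*\)".*/\1/p' /tmp/factory_config.json)
echo "$uuid"
# then: ncli cluster add-node node-uuid=$uuid
```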
• Is it required to supply IPMI details when using the Controller VM-based Foundation?
It is optional to provide IPMI details in Controller VM-based Foundation. If IPMI information is provided,
Foundation will try to configure the IPMI interface as well.
• Is it valid to use a share to hold AOS installation bundles and hypervisor ISO files?
AOS installation bundles and hypervisor ISO files can be present anywhere, but there needs to be a link (as
appropriate) in ~/foundation/nos or ~/foundation/isos/hypervisor/[esx|kvm|hyperv]/ to the
appropriate share location. Foundation will pick up files in these locations only. As long as a file is accessible
from these standard locations inside Foundation using a link, Foundation will pick it up.
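A sketch of setting up such a link (all paths here are illustrative stand-ins; on a real Foundation VM the target directory would be ~/foundation/nos, or ~/foundation/isos/hypervisor/... for hypervisor ISOs):

```shell
# Link an AOS bundle from a mounted share into the directory Foundation scans.
mkdir -p /tmp/foundation/nos /tmp/share                     # stand-ins for ~/foundation/nos and the share
touch /tmp/share/nutanix_installer_package-release.tar.gz   # stand-in for the AOS bundle
ln -sf /tmp/share/nutanix_installer_package-release.tar.gz /tmp/foundation/nos/
ls -l /tmp/foundation/nos/
```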
• Where is Foundation located in the Controller VM?
/home/nutanix/foundation
• How can I determine if a particular (standalone) Foundation VM can image a given cluster?
Execute the following command on the Foundation VM and see whether it returns successfully (exit status 0):
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password fru
If this command is successful, the Foundation VM can be used to image the node. This is the command used by
Foundation to get hardware details from the IPMI interface of the node. The exact tool used for talking to the
SMC IPMI interface is the following:
java -jar SMCIPMITool.jar ipmi_ip username password shell
If this command is able to open a shell, imaging will not fail because of an IPMI issue. Any other errors like
violating minimum requirements will be shown only after Foundation starts imaging the node.
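The exit-status check the answer describes can be wrapped in a small function. The ipmitool call is commented out here because no IPMI interface is reachable in this sketch (`true` stands in for a successful run), and the address and credentials are placeholders:

```shell
# Branch on the exit status of the "fru" probe, as the answer describes.
check_ipmi() {
  # ipmitool -H "$1" -U "$2" -P "$3" fru >/dev/null 2>&1
  true                                      # stand-in for a successful ipmitool run
}
if check_ipmi 10.0.0.10 ADMIN ADMIN; then   # placeholder address/credentials
  echo "Foundation VM can image this node"
else
  echo "IPMI unreachable; imaging would fail"
fi
```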
• How do I determine whether a particular hypervisor ISO will work?
The md5 hashes of all qualified hypervisor ISO images are listed in the iso_whitelist.json file, which is
located in ~/foundation/config/. The latest version of the iso_whitelist.json file is available from the
Nutanix support portal (see Hypervisor ISO Images on page 36).
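The lookup can be sketched as follows. The files under /tmp simulate the real ones (~/foundation/config/iso_whitelist.json and your hypervisor ISO), and the whitelist JSON shape shown is an illustrative assumption:

```shell
# Compute an ISO's md5 and check whether it appears in the whitelist file.
echo "fake iso contents" > /tmp/hypervisor.iso              # stand-in for a real ISO
md5=$(md5sum /tmp/hypervisor.iso | awk '{print $1}')
printf '{"iso_whitelist": {"%s": {"hypervisor": "kvm"}}}\n' "$md5" > /tmp/iso_whitelist.json
if grep -q "$md5" /tmp/iso_whitelist.json; then
  echo "ISO md5 $md5 is whitelisted"
else
  echo "ISO not in whitelist; it is not qualified"
fi
```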
• How does Foundation mount an ISO over IPMI?

• For SMC, Foundation uses the following commands:


cd foundation/lib/bin/smcipmitool
java -jar SMCIPMITool.jar ipmi_ip ipmi_username ipmi_password shell

vmwa dev2iso <path to iso file>

The java command starts a shell with access to the remote IPMI interface. The vmwa command mounts the
ISO file virtually over IPMI. Foundation then opens another terminal and uses the following commands to set
the first boot device to CD ROM and restart the node.
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis bootdev cdrom
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis power reset

• For Dell, Foundation uses the following commands:


racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -d
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -c -l nfs_share_path_to_iso_file -u nutanix -p nutanix/4u
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -s
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o
cfgServerBootOnce 1
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o
cfgServerFirstBootDevice vCD-DVD
The node can be rebooted using the following commands:
racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerdown
racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerup

• Does Phoenix download using an IPv4 address?
Yes.
• Is there an integrity check on the files phoenix downloads?
Yes. The md5 sums of the files to be downloaded (AOS and hypervisor ISO) are passed to Phoenix through a
configuration file. (The HTTP path to the configuration file is passed as command line input.) Phoenix verifies the
md5 sum of the files after downloading and retries the download if an md5 mismatch is detected.
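The verify-and-retry pattern the answer describes can be sketched as below. The "download" here is a local copy, and everything under /tmp is a stand-in for the real bundle and HTTP transfer:

```shell
# Compare the downloaded file's md5 against the expected sum; retry on mismatch.
src=/tmp/aos_bundle.src                     # stand-in for the file on the HTTP server
dst=/tmp/aos_bundle                         # stand-in for the downloaded copy
echo "bundle bits" > "$src"
expected=$(md5sum "$src" | awk '{print $1}')
for attempt in 1 2 3; do
  cp "$src" "$dst"                          # stands in for the HTTP download
  actual=$(md5sum "$dst" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum ok on attempt $attempt"
    break
  fi
done
```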
• What is Pynfs?
Pynfs is a Python implementation of an NFS share that was used in the early days of Foundation. It is still used
on platforms with a 16 GB DOM.
• Is there a reason for using port 8000?
No specific reason.
Appendix A: Single-Node Configuration (Phoenix)
To configure a single node, to reinstall the hypervisor after you replace a hypervisor boot drive, or to install
or repair a Nutanix Controller VM, use Phoenix, which is an ISO installer.
For more information about using Phoenix, see KB 5591. To use Phoenix, contact Nutanix Support.

Warning:

• Use of Phoenix to re-image or reinstall AOS with the Action titled "Install CVM" on a node that is
already part of a cluster is not supported by Nutanix and can lead to data loss.
• Use of Phoenix to repair the AOS software on a node with the Action titled "Repair CVM" is to be done
only with the direct assistance of Nutanix Support.
• Use of Phoenix to recover a node after a hypervisor boot disk failure is not necessary in most cases.
Please refer to the Hardware Replacement Documentation for your platform model and AOS version to
see how this recovery is automated through Prism.



COPYRIGHT
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or other
jurisdictions. All other brand and product names mentioned herein are for identification purposes only and may be
trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft patents with
respect to anything other than the file server implementation portion of the binaries for this software, including no
licenses or any other rights in any hardware or any devices or software that are used to communicate with or in
connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface Target Username Password

Nutanix web console Nutanix Controller VM admin Nutanix/4u

vSphere Web Client ESXi host root nutanix/4u

vSphere client ESXi host root nutanix/4u

SSH client or console ESXi host root nutanix/4u

SSH client or console AHV host root nutanix/4u


SSH client or console Hyper-V host Administrator nutanix/4u

SSH client Nutanix Controller VM nutanix nutanix/4u

SSH client Nutanix Controller VM admin Nutanix/4u

SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin

SSH client or console Xtract VM nutanix nutanix/4u

SSH client or console Xplorer VM nutanix nutanix/4u

Version
Last modified: November 15, 2019 (2019-11-15T19:05:19+05:30)
