
Foundation 4.5

Field Installation Guide


June 2, 2020
Contents

1. Field Installation Overview...................................................................................4

2. Foundation Considerations..................................................................................5
Foundation Use Case Matrix.................................................................................................................................5
CVM vCPU and vRAM Allocation....................................................................................................................... 6

3. Prepare Factory-Imaged Nodes for Foundation.........................................9


Discover Nodes and Launch Foundation.......................................................................................................10
Discover Nodes in the Same Broadcast Domain............................................................................10
Discover Nodes in a VLAN-Segmented Network............................................................................ 11
Launch Foundation......................................................................................................................................12
Verify Hypervisor Support....................................................................................................................................13
Upgrade CVM Foundation by Using the Foundation Java Applet.......................................................13

4. Prepare Bare Metal Nodes for Foundation..................................................15


Prepare the Installation Environment.............................................................................................................. 16
Prepare Workstation................................................................................................................................... 16
Set Up the Foundation VM......................................................................................................................17
Upload Installation Files to the Foundation VM.............................................................................22
Verify Hypervisor Support.......................................................................................................................22
Set Up the Network...................................................................................................................................23
Updating the Foundation Service........................................................................................................ 25

5.  Foundation App.................................................................................................... 26


Install Foundation App on macOS...................................................................................................................26
Install Foundation App on Windows.............................................................................................................. 26
Uninstall Foundation App on macOS............................................................................................................. 27
Uninstall Foundation App on Windows......................................................................................................... 27

6.  Upgrade Foundation........................................................................................... 29

7.  Run Foundation..................................................................................................... 30


Automate filling Foundation UI.........................................................................................................................30
Configure Nodes with Foundation.................................................................................................................. 30

8. Post Installation Steps........................................................................................ 35


Configuring a New Cluster in Prism................................................................................................................ 35

9. Hypervisor ISO Images.......................................................................................37

10. Network Requirements..................................................................................... 39

11. Hyper-V Installation Requirements................................................................ 41

12. Set IPMI Static IP Address...............................................................................45

13.  Troubleshooting................................................................................................... 47
Fix IPMI Configuration Problems...................................................................................................................... 47
Fix Imaging Problems........................................................................................................................................... 48
Frequently Asked Questions (FAQ)................................................................................................................ 49

Appendix A: Single-Node Configuration (Phoenix)............................................ 56

Copyright...................................................................................................................57
License......................................................................................................................................................................... 57
Conventions............................................................................................................................................................... 57
Default Cluster Credentials..................................................................................................................................57
Version......................................................................................................................................................................... 58

1. FIELD INSTALLATION OVERVIEW
For a node to join a Nutanix cluster, you must image it with a hypervisor and AOS combination that Nutanix
supports. AOS is the operating system of the Nutanix Controller VM, which is a VM that must be running in
the hypervisor to provide Nutanix-specific functionality. Find the complete list of supported hypervisor/AOS
combinations at https://portal.nutanix.com/page/documents/compatibility-matrix.
Foundation is the official deployment software of Nutanix. Foundation allows you to configure a pre-imaged node, or
image a node with a hypervisor and an AOS of your choice. Foundation also allows you to form a cluster out of nodes
whose hypervisor and AOS versions are the same, with or without re-imaging. Foundation is available for download
at https://portal.nutanix.com/#/page/Foundation.
If you already have a running cluster and want to add nodes to it, you must use the Expand Cluster option in Prism,
instead of using Foundation. Expand Cluster allows you to directly re-image a node whose hypervisor/AOS version
does not match the cluster's version, or a node that is only running DiscoveryOS. More details on DiscoveryOS are
provided later in this chapter.
Nutanix and its OEM partners install some software on a node at the factory, before shipping it to the customer. For
shipments inside the USA, this software is a hypervisor and an AOS. For Nutanix factory nodes, the hypervisor is
AHV. For OEM factory nodes, the vendor decides which hypervisor to ship to the customer. However,
the OEM factories always install AOS, regardless of the hypervisor.
For shipments outside the USA, Nutanix installs lightweight software called DiscoveryOS, which allows the node
to be discovered in Foundation or in the Expand Cluster option of Prism.
Because a node with DiscoveryOS is not pre-imaged with a hypervisor and an AOS, it must be imaged
before joining a cluster. Both Foundation and Expand Cluster allow you to image it directly with the correct
hypervisor and AOS.
Vendors who do not have an OEM agreement with Nutanix ship a node without any software (not even DiscoveryOS)
installed on it. Foundation supports imaging such nodes; this process is called bare-metal imaging. In contrast, Expand
Cluster does not support direct bare-metal imaging. Therefore, if you want to add a software-less node to an existing
cluster, you must first image it using Foundation, and then use Expand Cluster.
This document only explains procedures that apply to NX and OEM nodes. For non-OEM nodes, you must perform
the bare-metal imaging procedures specifically adapted for those nodes. For those procedures, see the vendor-specific
field installation guides, a complete listing of which is available on the Nutanix Support portal. On the portal, click
the hamburger menu icon, go to Documentation > Hardware Compatibility Lists, and then select the
vendor from the Platform filter available on the page.

• See Prepare Factory-Imaged Nodes for Foundation on page 9 to re-image factory-prepared nodes, or create a
cluster from these nodes, or both.
• See Prepare Bare Metal Nodes for Foundation on page 15 to image bare-metal nodes and optionally configure
them into a cluster.
2. FOUNDATION CONSIDERATIONS
This section provides guidelines, compatibility information, limitations, and capabilities of Foundation.

Foundation Use Case Matrix


The following matrix lists use cases and their support across the Foundation variants
available for download.

Table 1: Foundation Use Case Matrix

Function
• CVM Foundation: factory-imaged nodes.
• Portable Foundation (Windows, macOS): factory-imaged nodes; bare-metal nodes.
• Standalone Foundation VM: factory-imaged nodes; bare-metal nodes.

Hardware
• CVM Foundation: any.
• Portable Foundation: any, if you image discovered nodes. If you image nodes without discovery, hardware support is limited to the following: Nutanix (only G4 and above), Dell, HPE, and Lenovo (only Cascade Lake and above).
• Standalone Foundation VM: any.

If IPv6 is disabled
• CVM Foundation: cannot image nodes.
• Portable Foundation: IPMI IPv4 required on the nodes.
• Standalone Foundation VM: IPMI IPv4 required on the nodes.

Can configure the VLAN of Foundation
• CVM Foundation: no; manually configure in the vSwitch of the host.
• Portable Foundation: no; manually configure in Windows or macOS.
• Standalone Foundation VM: yes.

Can configure the VLAN of CVM/hosts
• All three: yes.

LACP support
• All three: yes.

Multi-homing support
• All three: yes.

RDMA support
• All three: yes.

How to use?
• CVM Foundation: access using http://CVM_IP:8000/.
• Portable Foundation: launch the executable for Windows 10+ or macOS 10.13.1+.
• Standalone Foundation VM: deploy as a VM on VirtualBox, Fusion, Workstation, AHV, ESX, and so on.

CVM vCPU and vRAM Allocation


vCPU Allocation
The number of vCPUs is set to the number of physical cores in the CVM NUMA node (the NUMA node that is
connected to the SSD storage controller), bounded by the following minimum and maximum values:

Table 2: vCPU Allocation

• AMD Naples, AMD Rome, or PowerPC: fixed to 8 for AMD Naples, 12 for AMD Rome, and 6 for PowerPC.
• Dense-storage platform, that is, 2+ NUMA nodes and either 32+ TB of HDD or 48+ TB of SSD (excluding NVMe): up to 14.
• High-performance platform, that is, 2+ NUMA nodes, 8+ physical cores in the CVM NUMA node, and either 2+ NVMe drives or RDMA enabled: between 8 and 12 (inclusive); or between 12 and 16 (inclusive) if hyperthreading is enabled or the CVM NUMA node has 12+ physical cores.
• Generic platform, that is, 8+ physical cores in the CVM NUMA node, or 6+ physical cores in the CVM NUMA node with hyper-threading enabled: between 8 and 12 (inclusive).
• Low-end platforms: equal to the number of physical cores in the CVM NUMA node.
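For example, a hypothetical two-socket node with 10 physical cores in the CVM NUMA node, two NVMe drives, and hyperthreading enabled matches the high-performance rule, and because hyperthreading is enabled, the CVM receives between 12 and 16 vCPUs.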

Note:

• Foundation GUI and Prism do not provide an option to override vCPU defaults.



vRAM Allocation
Every platform layout module in Phoenix defines a hardware attribute called "default_workload", which
classifies the default purpose of that platform into one of the following categories.

Table 3: vRAM Allocation

• "vdi" (general / VDI / server virtualization): 20 GB.
• "storage_heavy" or "minimal_compute_node": 28 GB.
• All-flash platforms: 32 GB. The two rules above are ignored on all-flash platforms unless the platform has the hardware attribute "all_flash_low_perf" (rare).
• "high_perf": 32 GB.
• "dense", or a node with 2+ NUMA nodes and either 32+ TB of HDD or 48+ TB of SSD and NVMe combined: 40 GB.
• If the platform has a hardware attribute called "default_cvm_memory_in_gb" (rare): all the rules above are ignored, and the value defined by default_cvm_memory_in_gb is used.
• If AOS is older than 5.1, or total memory is <62 GB: subtract 4 GB from the value determined above, but only if that value is <32 GB.
• If the determined value leaves <6 GB of remaining memory: all the rules above are ignored, and the CVM gets as much memory as leaves 6 GB remaining.
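For example, consider a hypothetical all-flash node with 192 GB of total memory, no "dense" or "default_cvm_memory_in_gb" attributes, running an AOS release newer than 5.1: the "vdi" and "storage_heavy" rules are ignored because the platform is all-flash, so the starting value is 32 GB; no 4 GB subtraction applies; and 192 GB leaves far more than 6 GB remaining, so the CVM is allocated 32 GB of vRAM.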

Features like dedupe and compression require vRAM beyond the defaults. For such cases, you can override the
default values by performing one of the following tasks:

• Manually changing the value in the Foundation GUI. If this value leaves <6 GB of remaining memory, it is
ignored and the default values mentioned earlier are used.
• Changing the allocation from Prism after installation by going to Configure CVM from the gear
icon in the web console.

Exceptions for Old Platforms


The preceding vCPU and vRAM policies do not apply to the following old platforms:

• All NX platforms before Gen 5


• All Dell platforms before Gen 14
• Lenovo HX3500, HX5500, HX7500
• HPE DL380p Gen8
For these platforms, vCPU allocation is fixed to 8.
The default vRAM allocation is 20 GB, except for the following platforms:



Table 4: vRAM Values

• NX-6035C, XC730xd-12C, HX5500, HX7500: 28 GB.
• NX-8150, NX-8150-G4, NX-9040: 32 GB.
• If AOS is older than 5.1, or total memory is <62 GB: subtract 4 GB from the previous value, but only if the value is <32 GB.
• If the determined value leaves <6 GB of remaining memory: all the above are ignored, and the CVM gets as much memory as leaves 6 GB remaining.

NUMA Pinning
Pinning is enabled only when both of the following conditions are met:

• vRAM allocation <= RAM of the CVM NUMA node


This is regardless of whether the allocation is one of the default values or one provided by you.
• vCPU allocation <= logical cores of the CVM NUMA node

Note:

• “CVM NUMA node” means the NUMA node that connects to the SSD storage controller.
• When pinning is enabled, all vCPUs are pinned to this single NUMA node. Pinning vCPUs to a NUMA
node enables the CVM to maximize its I/O performance under heavy load.
• vRAM allocation is not pinned to any NUMA node to ensure that enough memory is available for the
CVM when it starts after shutdowns like maintenance mode.
3. PREPARE FACTORY-IMAGED NODES FOR FOUNDATION
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on discovered
nodes and how to configure the nodes into a cluster. "Discovered nodes" are factory prepared nodes that
are not currently part of any cluster, and are reachable within the same subnet. This procedure runs the
Foundation tool through the Nutanix Controller VM (Controller VM–based Foundation).

Before you begin

• Make sure that the nodes that you want to image are factory-prepared nodes that have not been configured in any
way and are not part of a cluster.
• Physically install the Nutanix nodes at your site. For general installation instructions, see "Mounting the Block"
in the Getting Started Guide. For installation instructions specific to your model type, see "Rack Mounting" in the
NX and SX Series Hardware Administration Guide.
• Your workstation must be connected to the network on the same subnet as the nodes you want to image.
Foundation does not require an IPMI connection or any special network port configuration to image discovered
nodes. See Network Requirements for general information about the network topology and port access required
for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP address), and
node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed for installation.

Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.

Note: Nutanix uses an internal virtual switch to manage network communications between the Controller VM
and the hypervisor host. This switch is associated with a private network on the default VLAN and uses the
192.168.5.0/24 address space. For the hypervisor, IPMI interface, and other devices on the network (including
the guest VMs that you create on the cluster), do not use a subnet that overlaps with the 192.168.5.0/24 subnet on
the default VLAN. If you want to use an overlapping subnet for such devices, make sure that you use a different
VLAN.

• Download the following files from Nutanix Portal:

• AOS installer named nutanix_installer_package-version#.tar.gz from the AOS (NOS) download page.
• Hypervisor ISO if you want to install Hyper-V or ESXi. The user must provide the supported Hyper-V or ESXi
ISO (see Hypervisor ISO Images on page 37); Hyper-V and ESXi ISOs are not available on the support
portal.
It is not necessary to download AHV because the AOS bundle includes an AHV installation bundle. However,
you can download an AHV installation bundle if you want to install a non-default version.
• Make sure that IPv6 is enabled on the network to which the nodes are connected and IPv6 multicast is supported.
• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the nodes.
If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the nodes contain both



regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any time during the lifetime
of the cluster.
For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in the Prism
Web Console Guide.

About this task

Note: This method can image discovered nodes, create a single cluster from discovered nodes, or both. This method
is limited to factory prepared nodes running AOS 4.5 or later. If you want to image factory prepared nodes running
an earlier AOS (NOS) version, or image bare metal nodes, see Prepare Bare Metal Nodes for Foundation on
page 15.

To image the nodes and create a cluster, do the following:

Procedure

1. Run discovery and launch Foundation (see Discover Nodes and Launch Foundation on page 10).

2. Update Foundation to the latest version (see Upgrade CVM Foundation by Using the Foundation Java Applet on
page 13).

Note:

• This step is optional for platforms other than HPE DX.


• For HPE DX platform, the minimum supported version of Foundation is 4.4.1. If the Foundation
version installed on the discovered nodes is older than 4.4.1, upgrade Foundation to 4.4.1 or a newer
version. Perform the upgrade process with a Linux workstation. Do not use a Windows workstation
to perform the upgrade as it is not supported.

3. Run CVM Foundation (see Configure Nodes with Foundation on page 30).

4. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster in Prism on
page 35).

Discover Nodes and Launch Foundation


The procedure to follow for node discovery depends on whether all the devices involved—the Nutanix nodes and the
workstation that you use for imaging—are in the same broadcast domain or in a VLAN-segmented network.

Discover Nodes in the Same Broadcast Domain

About this task


Perform the following steps to discover nodes in a network that does not use VLANs.

Procedure

1. Access the Nutanix support portal (https://my.nutanix.com).

2. Browse to Downloads > Foundation, and then click FoundationApplet-offline.zip.



3. Extract the contents of the downloaded ZIP file into the workstation that you want to use for imaging, and then
double-click nutanix_foundation_applet.jnlp.
The discovery process begins and a window appears with a list of discovered nodes.

Note:
A security warning message may appear indicating this is from an unknown source. Click the accept and
run buttons to run the application.
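If double-clicking the .jnlp file does not launch the applet, you can typically extract and start it from a terminal instead. This is a sketch that assumes a Java 8 runtime with Java Web Start (javaws) on your PATH:

$ unzip FoundationApplet-offline.zip -d foundation-applet
$ cd foundation-applet
$ javaws nutanix_foundation_applet.jnlp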

Figure 1: Foundation Launcher Window

Discover Nodes in a VLAN-Segmented Network


Nutanix nodes running AHV version 20160215 (or later) include a network configuration tool that you
can use to assign a VLAN tag to the public interface on the Controller VM and to one or more physical
interfaces on the host. You can also use the tool to assign an IP address to the Controller VM and
hypervisor. After network configuration is complete, you can use the Foundation service running on the
Controller VM of that host to discover and image other Nutanix nodes. Foundation uses the VLAN sniffer
provided in the CVM to detect free Nutanix nodes, including nodes in other VLANs. The VLAN sniffer uses
the Neighbor Discovery protocol for IP version 6 and therefore requires that the physical switch to which
the nodes are connected supports IPv6 broadcast and multicast. During the imaging process, Controller
VM–based Foundation also assigns the specified VLAN tag (assumed to be that of the production VLAN)
to the corresponding interfaces on the selected nodes, eliminating the need to perform additional VLAN
assignment tasks for those nodes.

Before you begin


Connect the Nutanix nodes to a switch.

About this task

Note: Use the network configuration tool only on factory-prepared nodes that are not part of a cluster. Use of the tool
on a node that is part of a cluster makes the node inaccessible to the other nodes in the cluster, and the only way to
resolve the issue is to reconfigure the node to the previous IP addresses by using the network configuration tool again.

To configure the network for a node, do the following:



Procedure

1. Connect a console to one of the nodes and log on to the Acropolis host with root credentials.

2. Change your working directory to /root/nutanix-network-crashcart/, and then start the network
configuration utility.
root@ahv# cd /root/nutanix-network-crashcart/
root@ahv# ./network_configuration

3. In the network configuration utility, do the following:

a. Review the network card details to ascertain interface properties and identify connected interfaces.
b. Use the arrow keys to shift focus to the interface that you want to configure, and then use the Spacebar key
to select the interface.
Repeat this step for each interface that you want to configure.
c. Use the arrow keys to navigate through the user interface and specify values for the following parameters:

• VLAN Tag. VLAN tag to use for the selected interfaces.


• Netmask. Network mask of the subnet to which you want to assign the interfaces.
• Gateway. Default gateway for the subnet.
• Controller VM IP. IP address for the Controller VM.
• Hypervisor IP. IP address for the hypervisor.
d. Use the arrow keys to move the focus to Done, and then press Enter.
The network configuration utility configures the interfaces.

Launch Foundation
How you launch Foundation depends on whether you used the Foundation Applet to discover nodes in the
same broadcast domain or the crash cart user interface to discover nodes in a VLAN-segmented network.

About this task


To launch the Foundation user interface, do one of the following:

Procedure

• If you used the Foundation Applet to discover nodes in the same broadcast domain, do the following:

a. Select the node on which you want to run Foundation.


The selected node will be imaged first and then be used to image the other nodes. You can select only nodes
with a status field value of Free, which indicates it is not currently part of a cluster. A value of Unavailable
indicates it is part of an existing cluster or otherwise unavailable. To rerun the discovery process, click the
Retry discovery button.

Note: A warning message may appear stating this is not the highest available version of Foundation found in
the discovered nodes. If you select a node using an earlier Foundation version (one that does not recognize one
or more of the node models), installation may fail when Foundation attempts to image a node of an unknown
model. Therefore, select the node with the highest Foundation version among the nodes to be imaged. (You can



ignore the warning and proceed if you do not intend to select any of the nodes that have the higher Foundation
version.)

b. (Optional but recommended) Upgrade Foundation on the selected node to the latest version. See Upgrade
CVM Foundation by Using the Foundation Java Applet on page 13.
c. With the node having the latest Foundation version selected, click the Launch Foundation button.
Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes that are
not part of a cluster) and then displays information about the discovered blocks and nodes in the Discovered
Nodes screen. (It does not display information about nodes that are powered off or in a different subnet.) The
discovery process normally takes just a few seconds.

Note: If you want Foundation to image nodes from an existing cluster, you must first either remove the target
nodes from the cluster or destroy the cluster.

• If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in a browser on your
workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when using the network
configuration tool.

Verify Hypervisor Support


The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate
ISO image files. The files are identified in the whitelist by their MD5 value (not file name); therefore, verify
that the MD5 value of the ISO you want to use is listed in the whitelist file.

Before you begin


Download the latest whitelist file from the Foundation page on the Nutanix support portal (https://
portal.nutanix.com/#/page/Foundation). For information about the contents of the whitelist file, see Hypervisor
ISO Images on page 37.

About this task


To determine whether a hypervisor is supported, do the following:

Procedure

1. Obtain the MD5 checksum of the ISO that you want to use.

2. Open the downloaded whitelist file in a text editor and perform a search for the MD5 checksum.

What to do next
If the MD5 checksum is listed in the whitelist file, save the file to the workstation that hosts the Foundation
VM. If the whitelist file on the Foundation VM does not contain the MD5 checksum, you can replace that file
with the downloaded file before you begin installation.
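For example, on a Linux workstation you can compute the checksum and search the whitelist from a terminal; the ISO file name here is a placeholder:

$ md5sum hypervisor_installer.iso
$ grep -i "<md5-value-from-previous-command>" iso_whitelist.json

If grep prints a matching line, the ISO is supported by that version of the whitelist.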

Upgrade CVM Foundation by Using the Foundation Java Applet


A Foundation update is optional but recommended. The Foundation Java applet includes an option to
upgrade or downgrade Foundation on a discovered node. The node must not be configured already.
Upgrade Foundation on any one node, and that node will upgrade Foundation on the other nodes selected
for imaging. If the node is configured, do not use the Java applet. Instead, update Foundation by using the
Prism web console (see Cluster Management > Software and Firmware Upgrades > Upgrading
Foundation in the Prism Web Console Guide).



Before you begin
1. Download the Foundation tar file from the Nutanix support portal to the workstation on which you plan to run the
Foundation Java applet.
2. Download and start the Foundation Java applet.

About this task


To upgrade Foundation on a discovered node by using the Foundation Java applet, do the following:

Procedure

1. In the Foundation Java applet, select the node on which you want to upgrade Foundation, and then click
Upgrade Foundation.

2. Browse to the folder to which you downloaded the Foundation tar file and double-click the tar file.
The upgrade process begins. After the upgrade completes, Genesis is restarted on the node, and that in turn restarts
the Foundation service. After the Foundation service becomes available, the upgrade process reports success.
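If you want to confirm that the Foundation service is back up, you can check from the Controller VM; a minimal sketch, assuming standard CVM shell access:

nutanix@cvm$ genesis status | grep foundation

The output should list the foundation service along with its process IDs.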

What to do next
Run Foundation on page 30
4. PREPARE BARE METAL NODES FOR FOUNDATION
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on bare metal
nodes and optionally configure the nodes into one or more clusters. "Bare metal" nodes are those that
are not factory prepared or cannot be detected through discovery. You can also use this method to image
factory prepared nodes that you do not want to configure into a cluster.

Before you begin

Note: Nutanix recommends that imaging and configuration of bare metal nodes be performed or supervised by Nutanix
sales engineers or partners. If a Nutanix sales engineer or partner is unavailable and you need assistance with this
procedure, contact Nutanix support.

• Physically install the nodes at your site. For installing Nutanix hardware platforms, see the NX and SX
Series Hardware Administration and Reference for your model type. For installing hardware from any other
manufacturer, see that manufacturer's documentation.
• Set up the installation environment (see Prepare the Installation Environment on page 16).

Note: If you change the boot device order in the BIOS to boot from a USB flash drive, you will get a Foundation
timeout error; the first boot device must be set to CD-ROM in the BIOS boot order menu.

Note: If STP (spanning tree protocol) is enabled on the ports that are connected to the Nutanix host, Foundation
might time out during the imaging process. Therefore, you must disable STP by using PortFast or an equivalent
feature on the ports that are connected to the Nutanix host before starting Foundation.
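For example, on a Cisco IOS switch, enabling PortFast on a node-facing port typically looks like the following; the interface name is a placeholder, and the equivalent command differs on other vendors' switches:

switch(config)# interface GigabitEthernet1/0/1
switch(config-if)# spanning-tree portfast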

Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents virtual media,
such as a CD-ROM. This could conflict with the Foundation installation when it tries to mount the virtual CD-ROM
hosting the installation ISO.

• Have ready the appropriate global, node, and cluster parameter values needed for installation. The use of a DHCP
server is not supported for Controller VMs, so make sure to assign static IP addresses to Controller VMs.

Note: If the Foundation VM is configured with an IP address in a network that is different from that of the cluster
to be imaged (for example, the Foundation VM is configured with a public IP address while the cluster resides
in a private network), repeat step 8 in https://portal.nutanix.com/page/documents/details/?targetId=Field-
Installation-Guide-v4_5:v45-foundation-vm-install-on-workstation-t.html to configure a new static IP
address for the Foundation VM.

• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the nodes.
If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the nodes contain both
regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any time during the lifetime
of the cluster.
For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in the Prism
Web Console Guide.



About this task

• Prepare the Installation Environment on page 16


• Configure Nodes with Foundation on page 30

Prepare the Installation Environment


About this task
Standalone (bare metal) imaging is performed from a workstation with access to the IPMI interfaces of the nodes in
the cluster. Imaging a cluster in the field requires first installing certain tools on the workstation and then setting up the
environment to run those tools. This requires three preparation tasks:
1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to installation. This
includes downloading ISO images, installing Oracle VM VirtualBox, and using VirtualBox to configure various
parameters on the Foundation VM (see Prepare Workstation on page 16).
2. Set up the network. The nodes and workstation must have network access to each other through a switch at the site
(see Set Up the Network on page 23).
3. Upgrade Foundation (see Updating the Foundation Service on page 25).

Prepare Workstation
A workstation is needed to host the Foundation VM during imaging. You can perform these steps either
before going to the installation site (if you use a portable laptop) or at the site (if an active internet
connection is available).

Before you begin

• Get a workstation (laptop or desktop computer) that you can use for the installation. The workstation must have
at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk space (preferably SSD), and a physical
(wired) network adapter.
• Go to the Nutanix support portal and download the following files to a temporary directory on the workstation.

• Foundation_VM-version#.tar: the Foundation tar file. To download this file, go to Downloads > Foundation. It includes the following files:

  • Foundation_VM-version#.ovf: the Foundation VM OVF configuration file for the version# release, for example Foundation_VM-3.1.ovf.
  • Foundation_VM-version#-disk1.vmdk: the Foundation VM VMDK file for the version# release, for example Foundation_VM-3.1-disk1.vmdk.

• nutanix_installer_package-version#.tar.gz: the tar file used for imaging the desired AOS release. To download this file, go to Downloads > AOS (NOS).

• Diagnostics file for the hypervisor that you plan to install (if you want to run a diagnostics test after creating a cluster). To download any of these files, go to Downloads > Tools & Firmware.

  • AHV: diagnostic.raw.img.gz
  • ESXi: diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf
  • Hyper-V: diagnostics_uvm.vhd.gz

• If you intend to install a hypervisor other than AHV, you must provide the ISO image (see Hypervisor ISO Images
on page 37). Make sure that the hypervisor ISO image is available on the workstation.
• This procedure describes how to use Oracle VM VirtualBox, a free open source tool used to create a virtualized
environment on the workstation. Download the installer for Oracle VM VirtualBox and install it with the
default options. See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).

Note: You can also use a tool such as VMware vSphere instead of Oracle VM VirtualBox.

About this task


To prepare the workstation, do the following:

Procedure

1. Create a folder called VirtualBox VMs in your home directory.


On a Windows system, for example, this is typically C:\Users\user_name\VirtualBox VMs.

2. Go to the location to which you downloaded the Foundation tar file and extract its contents.
$ tar -xf Foundation_VM-version#.tar

Note: If the tar utility is not available, use the corresponding utility for your environment.

3. Copy the extracted files to the VirtualBox VMs folder that you created.

Set Up the Foundation VM

About this task


To install the Foundation VM on the workstation, do the following:



Procedure

1. Start Oracle VM VirtualBox.

Figure 2: VirtualBox Welcome Screen

2. Click the File option of the main menu and then select Import Appliance from the pull-down list.

3. Find and select the Foundation_VM-version#.ovf file, and then click Next.

4. Click the Import button.

5. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.
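Alternatively, if you prefer the command line, VirtualBox can import and start the appliance with VBoxManage; a sketch, assuming the VirtualBox tools are on your PATH:

$ VBoxManage import Foundation_VM-version#.ovf
$ VBoxManage startvm "Foundation_VM-version#" --type gui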

6. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).



7. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM, install
VirtualBox Guest Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. Click OK when prompted to Open Autorun Prompt and then click Run.

c. Enter the root password (nutanix/4u) and then click Authenticate.

d. After the installation is complete, press the return key to close the VirtualBox Guest Additions installation
window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.

f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.

Note: A reboot is necessary for the changes to take effect.

g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on the
VirtualBox window for the Foundation VM.



8. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to get an
IP address from the DHCP server.
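For example:

$ ifconfig eth0

Look for an inet entry with a valid address; eth0 is the interface name that the Foundation VM uses for its wired adapter.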
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as follows:

Note: Normally, the Foundation VM needs to be on a public network in order to copy selected ISO files to the
Foundation VM in the next two steps. This might require setting a static IP address now and setting it again when
the workstation is on a different (typically private) network for the installation.

a. Double-click the set_foundation_ip_address icon on the Foundation VM desktop.

Figure 3: Foundation VM: Desktop


b. In the pop-up window, click the Run in Terminal button.

Figure 4: Foundation VM: Terminal Window


c. In the Select Action box in the terminal window, select Device Configuration.

Note: Selections in the terminal window can be made using the indicated keys only. (Mouse clicks do not
work.)



Figure 5: Foundation VM: Action Box
d. In the Select a Device box, select eth0.

Figure 6: Foundation VM: Device Configuration Box


e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by default),
enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and then click the
OK button.

Figure 7: Foundation VM: Network Configuration Box


f. Use the spacebar to select the Save button in the Select a Device box, and then the Save & Quit button in the
Select Action box.
This saves the configuration and closes the terminal window.



Upload Installation Files to the Foundation VM
The file system on the Foundation VM includes hypervisor-specific directories to which you must copy the
files you downloaded from the Nutanix Support portal or obtained from a hypervisor vendor.

About this task


To upload the installation and installation-related files to the Foundation VM, do the following:

Procedure

1. Copy nutanix_installer_package-version#.tar.gz to the /home/nutanix/foundation/nos directory.

2. If you are installing hypervisors other than AHV, copy the ISO files to the corresponding directory on the
Foundation VM.

» ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx

» Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv

Note: You do not have to provide an AHV image. Foundation includes an AHV tar file in /home/nutanix/
foundation/isos/hypervisor/kvm. However, if you want to install a different version of AHV, download the
AHV tar file from the Nutanix Support portal and copy it to the directory.

3. If you downloaded diagnostics files for one or more hypervisors to your workstation, copy them to the appropriate
directory on the Foundation VM. The directories for the diagnostic files are as follows:

» Diagnostic file for AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/kvm

» Diagnostic file for ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf): /


home/nutanix/foundation/isos/diags/esx
» Diagnostic file for Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/diags/
hyperv
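If the workstation and the Foundation VM are reachable over the network, you can also copy the files with scp instead of drag-and-drop; a sketch, where <foundation-vm-ip> is a placeholder for the IP address you assigned to the Foundation VM:

$ scp nutanix_installer_package-version#.tar.gz nutanix@<foundation-vm-ip>:/home/nutanix/foundation/nos/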

Verify Hypervisor Support


The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate
ISO image files. The files are identified in the whitelist by their MD5 value (not file name); therefore, verify
that the MD5 value of the ISO you want to use is listed in the whitelist file.

Before you begin


Download the latest whitelist file from the Foundation page on the Nutanix support portal (https://
portal.nutanix.com/#/page/Foundation). For information about the contents of the whitelist file, see Hypervisor
ISO Images on page 37.

About this task


To determine whether a hypervisor is supported, do the following:

Procedure

1. Obtain the MD5 checksum of the ISO that you want to use.

2. Open the downloaded whitelist file in a text editor and perform a search for the MD5 checksum.



What to do next
If the MD5 checksum is listed in the whitelist file, save the file to the workstation that hosts the Foundation
VM. If the whitelist file on the Foundation VM does not contain the MD5 checksum, you can replace that file
with the downloaded file before you begin installation.

Set Up the Network

About this task


The network must be set up properly on site before imaging nodes through the Foundation tool. To set up the network
connections, do the following:

Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing tables). A flat switch is
often recommended to protect against configuration errors that could affect the production environment. Foundation
includes a multi-homing feature that allows you to image the nodes using production IP addresses despite being
connected to a flat switch. See Network Requirements on page 39 for general information about the network
topology and port access required for a cluster.

Procedure

1. Make sure that IPv6 is enabled on the network to which the nodes are connected and IPv6 multicast is supported.
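One quick sanity check from a Linux workstation on the same segment is to ping the all-nodes link-local multicast address; a sketch that assumes the iputils ping6 tool and uses eth0 as a placeholder interface name:

$ ping6 -c 3 -I eth0 ff02::1

Replies from link-local (fe80::) addresses indicate that IPv6 multicast is working between the workstation and its neighbors.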

2. If you are using a shared IPMI port to reinstall the hypervisor, follow the instructions in KB 3834.

3. (Nutanix NX Series) Connect the dedicated IPMI port and any one of the data ports to the switch. We highly
recommend that you use a 10G data port in addition to the dedicated IPMI port. Also, ensure that you use the
dedicated IPMI port instead of the shared IPMI port. You may use a 1G port instead of a 10G port at the cost of
increased imaging time or imaging failure. If you use SFP+ 10G NICs and a 1G RJ45 switch for imaging, connect
the 10G port to the switch using one of our approved GBICs. You may also use the shared IPMI/1G port in place
of the dedicated port, as long as the BMC is configured to use it, but it is less reliable than the dedicated port.
Regardless of the ports you choose, physically disconnect all other ports. The IPMI LAN interfaces of the nodes
must be in failover mode (factory default setting).
Use the following guideline when connecting the IPMI port on G4 and later platforms: if you use the shared IPMI
port, make sure that the connected switch can auto-negotiate to 100 Mbps. This auto-negotiation capability is
required because the shared IPMI port can support 1 Gbps throughput only when the host is online. If the switch
cannot auto-negotiate to 100 Mbps when the host goes offline, make sure to use the dedicated IPMI port instead



of the shared port (the dedicated IPMI ports support 1 Gbps throughput at all times). Older platforms support only
10/100 Mbps throughput.

Note:

• Foundation does not support imaging nodes in an environment using LACP without fallback
enabled.
• Foundation does not support configuring nodes' virtual switches to use LACP. This has to be done
manually post imaging.
• Foundation does not support configuring network adapters to use jumbo frames during imaging. This
has to be done manually post imaging.

The exact location of the port depends on the model type. See the hardware documentation for your model to
determine the port location. The following figure illustrates the location of the network ports on the back of an
NX-3050 (middle RJ-45 interface).

Figure 8: Port Locations (NX-3050)

4. (Lenovo Converged HX Series) Lenovo HX-series systems require that you connect both the system management
(IMM) port and one of the 10 GbE ports. The following figure illustrates the location of the network ports on the
back of the HX3500 and HX5500.

Figure 9: Port Locations (HX System)

5. (Dell XC series) Connect the iDRAC port and one of the data ports. While some Dell XC Series systems, such as
the Dell XC430-4, support imaging over a 1 GbE network connection, other systems, such as the Dell XC640-10,



require a 10 GbE connection for imaging. Nutanix recommends that you use a 10 GbE port regardless of the
model of your appliance.

Figure 10: Port Locations (XC System)

6. (IBM POWER Servers) Connect the dedicated IPMI port and a data port to the network to which the Foundation
VM is connected.

7. (HPE DX series) Connect the iLO port and the data ports to your network switch. Ensure that the connected data
ports are of the same data speed and not a combination of different data speeds. Also, ensure that the TCP port
8000 is open between Foundation and the iLO port.
In the DX360 and DX380 models, there are four data ports located to the right of the iLO port. Do not connect
these data ports as they are not supported.

8. Connect the installation workstation (see Prepare Workstation on page 16) to the same switch as the nodes.

Updating the Foundation Service


The Foundation user interface enables you to perform one-click updates either over the air or from a tar file
that you manually upload to the Foundation VM. The over-the-air update process downloads and installs
the latest Foundation version from the Nutanix support portal. By design, the over-the-air update process
downloads and installs a tar file that does not include Lenovo packages. Therefore, for Lenovo platforms,
update Foundation by using an uploaded tar file.

Before you begin


If you want to install from a tar file of your choice (required for Lenovo platforms and optional for other
platforms), download the Foundation tar file to the workstation that you use to access or run Foundation.
Installers are available on the Foundation download page (https://portal.nutanix.com/#/page/foundation/list).

About this task


To update Foundation by using the graphical user interface, do the following:

Procedure

1. Open the CVM Foundation user interface.

2. In the gear icon menu at the top-right corner of the Foundation user interface, click Update Foundation.

3. In the Update Foundation dialog box, do one of the following:

» (Not to be used with Lenovo platforms) To perform a one-click, over-the-air update, click Update.
The dialog box displays the version to which you can update Foundation.
» (For Lenovo platforms; optional for other platforms) To update Foundation by using an installer that you
downloaded to the workstation, click Browse, browse to and select the tar file, and then click Install.



5. FOUNDATION APP
Foundation is available as a native Mac or Windows application that launches the Foundation UI in a browser. Unlike
the standalone VM, the app provides a simpler alternative that skips configuring and running a VM.

Install Foundation App on macOS

About this task


Perform the following steps to install and launch Foundation app on macOS.

Procedure

1. Disable "stealth mode" of macOSfirewall.

2. Download the Foundation .dmg file from the Nutanix portal.

3. Double-click the Foundation .dmg file.

4. To install the Foundation app, drag the Foundation app to the Application folder.

5. Double-click the Foundation app in the Application folder.

Note: To upgrade the app, download and install a newer version of the app from the Nutanix portal.

The Foundation UI launches in the default browser, or you can manually visit http://localhost:8000 in your
preferred browser.

6. Allow the app to accept incoming connections when prompted by your Mac computer.

7. To close the app, right-click on the Foundation icon in the launcher, and click Force Quit.

Install Foundation App on Windows

About this task


To install and launch Foundation app, perform the following steps on the Windows PC.

Note: The installation stops any running Foundation process. If you have initiated Foundation with a previously
installed app, ensure that it is complete before launching the installation.

Procedure

1. Ensure that IPv6 is enabled on the network interface that connects the Windows PC to the switch.

2. Download the Foundation .msi installer file from Nutanix portal.



3. Install either through the wizard or through a silent installation.

a. Double-click the downloaded MSI file to run the installation wizard.


OR
b. Run the command: msiexec.exe /i portable_foundation.msi /qb /l*v install.log
/qb: Displays basic user interface, and a modal dialog box when installation completes.
/q: Performs a silent installation.
This command saves the log of the installation in the install.log file that helps in debugging a failed
installation.

4. Double-click the Foundation icon on the desktop or start menu.

Note: To upgrade the app, download a newer version of the app from the Nutanix portal and perform a fresh
installation. The fresh installation stops any running Foundation operation and updates the older version to the
newer version. If you have initiated Foundation with the older app, ensure that the operation is complete before
doing a fresh installation of the newer version.

The Foundation UI launches in the default browser, or you can manually visit http://localhost:8000/gui/index.html
in your preferred browser.

Uninstall Foundation App on macOS

About this task


Perform the following steps to uninstall Foundation app on macOS.

Procedure

1. If Foundation app is running, right-click on Foundation icon in Launcher and click Force Quit.

2. Delete the downloaded Foundation .dmg file.

3. Drag the Foundation app from Application folder to Trash.

Uninstall Foundation App on Windows

About this task


Perform the following steps to uninstall Foundation app on Windows.

Note: Uninstallation does not remove the log and configuration files that are created by the Foundation app. Therefore,
for a clean installation, delete these files manually.

Procedure

1. Uninstall through Windows Apps & features.


OR



2. Run the command: msiexec.exe /X{BCD56AA1-664C-4EE8-8E01-AED3F0368234} /qb+ /l*v
uninstall.log
/qb+: Displays basic user interface, and a modal dialog box when uninstallation completes.
/q: Performs completely silent uninstallation.
This command saves the uninstallation log in uninstall.log file, which can help in debugging a failed
uninstallation.
6. UPGRADE FOUNDATION
Upgrade from the Graphical User Interface

• You can upgrade CVM Foundation from version 3.12 or later to 4.5.x by using the Foundation Java
applet, the Prism web console, or the Foundation GUI. For information about upgrading Foundation by using
the Java applet, see " Upgrade CVM Foundation by Using the Foundation Java Applet" in the "Prepare
Factory-Imaged Nodes for Foundation" chapter of the Field Installation Guide.
• For information about upgrading Foundation by using the Prism web console, see Cluster
Management > Software and Firmware Upgrades > Upgrading Foundation in the Prism
Web Console Guide.
• To update by using the Foundation GUI, click the version link in the Foundation GUI. You can perform the
following updates:

• Upgrade CVM or standalone Foundation to a higher version.


• Update just the foundation-platforms sub-module. This enables Foundation to support the latest
hardware models or components qualified after the release of the installed Foundation version.
• To upgrade the Foundation app, download a newer version of the app from the Nutanix portal and perform a
fresh installation. The fresh installation stops any running Foundation operation and updates the older
version to the newer version. If you have initiated Foundation with the older app, ensure that the operation is
complete before doing a fresh installation of the newer version.
Upgrade from the Command-Line Interface

• To upgrade Foundation from version 3.1 or later to version 4.5.x by using the command line, do the
following:
1. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal to the
/home/nutanix/ directory.
2. Change your working directory to /home/nutanix/.
3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-version#.tar.gz



7. RUN FOUNDATION
• Automate filling Foundation UI on page 30
• Configure Nodes with Foundation on page 30

Automate filling Foundation UI


You can automate filling in Foundation UI fields by using a configuration file. This file stores answers to most inputs
that the Foundation UI requests. To create or edit this configuration file, log in at https://install.nutanix.com with
your Nutanix Portal credentials. You can provide partial or complete Foundation UI configuration details. When you
run Foundation, import this file to load these configuration details. Using a configuration file has several benefits:

• Serves as a reusable baseline that saves you from manually re-entering configuration details in repeated or
similar Foundation operations.
• Lets you plan the configuration details in advance and keep them ready in the file; when you run Foundation
later, import the configuration file.
• Invite others to review and edit your planned configuration.
• Import NX nodes from a Salesforce order to avoid manually adding NX nodes.

Note: The configuration file only stores configuration settings and not AOS or hypervisor images. You can upload
images only in Foundation UI.

Configure Nodes with Foundation


Before you begin
Complete Prepare Factory-Imaged Nodes for Foundation on page 9 or Prepare Bare Metal Nodes for Foundation
on page 15.

Note:

• During this procedure, you assign IP addresses to the hypervisor host, the Controller VMs, and the
IPMI interfaces. Do not assign IP addresses from a subnet that overlaps with the 192.168.5.0/24 address
space on the default VLAN. Nutanix uses an internal virtual switch to manage network communications
between the Controller VM and the hypervisor host. This switch is associated with a private network on
the default VLAN and uses the 192.168.5.0/24 address space. If you want to use an overlapping subnet,
make sure that you use a different VLAN.
• Mixed-vendor clusters are not supported. For restrictions on mixing models in a cluster of Nutanix nodes, see the
section Product Mixing Restrictions in the NX and SX Series Hardware Administration Guide.
• For a single imaged node that you need to re-image or form into a 1-node cluster using CVM Foundation,
ensure that you launch CVM Foundation from another node. CVM Foundation can configure its own node only
if the operation includes one or more other nodes along with its own node.



• You may need to upgrade Foundation to a higher or relevant version. You can also update the
foundation-platforms sub-module on Foundation. Updating the sub-module enables Foundation to
support the latest hardware models or components qualified after the release of the installed Foundation
version. For details, see Upgrade Foundation on page 29.

Procedure

1. Launch the Foundation Start page in one of the following ways:


For Standalone Foundation:
1. On the Foundation VM desktop, double-click the Nutanix Foundation icon.

Note: See Prepare the Installation Environment on page 16 if Oracle VM VirtualBox is not started or the
Foundation VM is not running currently.

2. In a web browser inside the Foundation VM, visit the URL http://localhost:8000/gui/index.html.
3. After you have assigned an IP address to the Foundation VM, visit the URL http://<foundation-vm-ip-
address>:8000/gui/index.html from a web browser outside the VM.
For CVM Foundation:
1. In the Foundation Java applet, select a node and click the Launch Foundation button.
2. If you used the crash cart user interface to discover nodes in a VLAN-segmented network, visit the URL
http://<foundation-cvm-ip-address>:8000 from a browser on a workstation connected to the node's
network.
For Foundation app:
1. Double-click the Foundation executable file. For installation process and launch details, see the sections
Install Foundation App on macOS on page 26 or Install Foundation App on Windows on page 26.



2. On the Start page, you can:

• Import a Foundation configuration file created on the Portal https://install.nutanix.com. For details, see
Automate filling Foundation UI on page 30.
• For RDMA, configure NICs to pass through to the CVM.
• (Optional) Configure LACP or LAG for network connections between the nodes and switch.

Note:

• From Hyper-V 2019 onward, if LACP/LAG is not chosen, SET is the default teaming mode in
Foundation. Hyper-V 2019 is supported on NX-G5 and later models.
• For Hyper-V 2016, if LACP/LAG is not chosen, the teaming mode is Switch Independent
LBFO teaming.
• For Hyper-V (2016 and 2019), if LACP/LAG is chosen, the teaming mode is Switch Dependent
LBFO teaming.
• For SET, only NICs of the same model and speed are teamed. Otherwise (LACP/LAG on 2019 and
LBFO on 2016), NICs of the same speed are teamed.

• Assign VLANs to IPMI and CVM/host networks with standalone Foundation 4.3.2 or higher.
• Select the workstation network adapter that connects to the nodes' network.
• Specify the subnets and gateway addresses for the cluster and the IPMI network.
• Create and assign two IP addresses to the Foundation app or standalone Foundation workstation for multi-homing.

3. The Nodes page discovers blocks of nodes and lists them in the table. It populates the following values.

Note:

If the nodes are not populated automatically, click adding compute only nodes to add compute-only
nodes, or click Click-here to add nodes (both compute and storage).

• Block Serial
• Node
• IPMI MAC
• IPMI IP
• Host IP
• CVM IP
• Hostname of Host
To make additional updates to the table, use the Tools menu, which has the following options:

• Add Nodes Manually: To manually add nodes to the list if they are not populated automatically.

Note: You can manually add nodes only in standalone Foundation.


If you are manually adding multiple blocks in a single instance, all added blocks get the same
number of nodes. To add blocks with different numbers of nodes, add multiple blocks with the
highest number of nodes and then delete nodes from each block as applicable. Alternatively, you can
repeat the add process to separately add blocks with different numbers of nodes.

• Add Compute-only Nodes: To add a compute-only node. For more information about compute-only
nodes, see Compute-Only Node Configuration (AHV Only) in the Prism Web Console Guide.
• Range Autofill: To bulk-assign the IP addresses and hostname for each node.

Note: Unlike CVM Foundation, standalone Foundation does not validate these IP addresses for
uniqueness. Manually cross-check that the IP addresses are unique and valid (see the sketch after this list for one way to check).

• Reorder Blocks: To match the order of IP addresses and hypervisor hostnames that you want to assign.
• Select Only Failed Nodes: To select all the failed nodes.
• Remove Unselected Rows: To remove a node from the Foundation process, clear the node's row selection and
then click Remove Unselected Rows.
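As a hedged sketch of such a cross-check, run from a bash shell on the Foundation VM or workstation (the 10.1.1.11-10.1.1.14 range is a hypothetical example), you can ping each planned address and flag any that already respond:

$ # Any address that answers is already in use and must not be assigned
$ for ip in 10.1.1.{11..14}; do
>   ping -c 1 -W 1 "$ip" > /dev/null 2>&1 && echo "$ip is already in use"
> done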

4. The Cluster page lets you provide cluster details and either configure cluster formation or just image the nodes
without forming a cluster. You can also enable network segmentation to separate CVM network traffic from user
VM and hypervisor network traffic.

Note:

• The Cluster Virtual IP field is required for Hyper-V clusters but optional for ESXi and AHV
clusters.
• To provide multiple DNS or NTP servers, enter a comma-separated list of IP addresses.
• For best practices in configuring NTP servers, see the section Recommendations for Time
Synchronization in Prism Web Console Guide.

5. On the AOS page, you can specify and upload AOS images and view the version of the AOS image
currently installed on the nodes. You can skip updating the CVMs with AOS if the CVMs of all discovered
nodes already have the AOS version that you want to use.



6. On the Hypervisor page, you can:

• Specify and upload hypervisor image file(s).


• View the versions of the hypervisors currently installed on the nodes.
• Upload the latest hypervisor whitelist JSON file, which you can download from the Nutanix Portal. This file lists
the supported hypervisors.

Note:

• You can select one or more nodes to be storage-only nodes, which host AHV only. Image the
rest of the nodes with another hypervisor to form a multi-hypervisor cluster.
• For discovered nodes, if you skip updating the CVMs with AOS, you can still re-image the hypervisors.
Hypervisor-only imaging is supported; however, imaging CVMs with AOS without imaging
hypervisors is not.
• [Hyper-V only] If you choose Hyper-V, from the Choose Hyper-V SKU list that is displayed,
select the SKU that you want to use.
Four Hyper-V SKUs are supported: Standard, Datacenter, Standard with GUI, and
Datacenter with GUI.

7. Standalone Foundation needs IPMI remote access to the nodes, so the standalone Foundation UI has an additional
IPMI page where you provide the IPMI access credentials for each node.
To provide credentials for all nodes at once, click the Tools menu and either use the autofill row or assign a
vendor's default IPMI credentials to all nodes.

8. The Installation in Progress page displays the progress status and lets you view the individual log for in-
progress or completed operations on each node. You can click Review Configuration for a read-only
view of the configuration details while the installation is in progress.

Note: You can abort an ongoing installation in standalone Foundation but not in CVM Foundation.

Results
After all operations complete successfully, the Installation finished page is displayed.

Note: If you missed something and want to reconfigure and redo the installation, you can click Reset to go back
to the Start page and redo the Foundation process.
8
POST INSTALLATION STEPS

Configuring a New Cluster in Prism


About this task
After creating the cluster, you can configure it through the Prism web console. A storage pool and a container are
created automatically when the cluster is created, but many other setup options require user action. The following are
common cluster setup steps typically performed soon after creating a cluster. (All the sections cited in the following steps
are in the Prism Web Console Guide.)

Procedure

1. Verify that the cluster has passed the latest Nutanix Cluster Check (NCC) tests.

a. Check the installed NCC version and update it if a later version is available (see the "Software and Firmware
Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Run NCC from a command line. Open a command window, log on to any Controller VM in the cluster with
SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before proceeding. If you are unable
to resolve the issues, contact Nutanix Support for assistance.
c. Configure NCC so that the cluster checks run and the results are emailed at your desired frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer of at least 4 that specifies how frequently NCC runs and emails results.
For other commands related to automatically emailing NCC results, see "Automatically Emailing NCC
Results" in the Nutanix Cluster Check (NCC) Guide for your version of NCC.

2. Specify the timezone of the cluster.


Specify the timezone from the Nutanix command line (nCLI). While logged on to the Controller VM (see
the previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone

Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/
London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. Because a
cluster can tolerate only a single unavailable Controller VM at any one time, restart the Controller VMs one at a
time, waiting for each to start before proceeding to the next. For more information about using the nCLI, see the
Command Reference.
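For example, a complete session that sets the cluster timezone to Europe/London looks like the following:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=Europe/London
ncli> exit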

3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).



4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote support tunnel
(see the "Controlling Remote Connections" section).

CAUTION: Failing to enable remote support prevents Nutanix Support from directly addressing cluster issues.
Nutanix recommends that all customers allow email alerts at minimum, because this allows proactive support of
customer issues.

5. If the site security policy allows Nutanix Support to collect cluster status information, enable the Pulse feature
(see the "Configuring Pulse" section).
This information is used by Nutanix Support to diagnose potential problems and provide more informed and
proactive help.

6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You can also specify email recipients for specific alerts (see the "Configuring Alert Policies" section).

7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster elements,
enable that feature (see the "Software and Firmware Upgrades" section).

Note: To ensure that automatic download of updates can function, allow access to the following URLs through
your firewall:

• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80
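As a hedged spot-check, assuming curl is available on the Controller VM, you can verify that the release API endpoint is reachable through your firewall (this tests TCP/HTTP connectivity only, not the full update workflow):

nutanix@cvm$ curl -s -o /dev/null -w "%{http_code}\n" http://release-api.nutanix.com:80/

Any HTTP status code other than 000 indicates that the connection was established; 000 suggests the URL is being blocked.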

8. License the cluster (see the "License Management" section).

9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.

• vCenter: See the Nutanix vSphere Administration Guide.


• SCVMM: See the Nutanix Hyper-V Administration Guide.
9
HYPERVISOR ISO IMAGES
An AHV ISO image is included as part of Foundation. However, customers must provide ISO images for other
hypervisors. Check with your hypervisor vendor's representative, or download an ISO image from the vendor's support
site.

Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available on the VMware website
(www.vmware.com) at Downloads > Product Downloads > vSphere > Custom ISOs.

Make sure that the MD5 checksum of the hypervisor ISO image is listed in the ISO whitelist file used by Foundation.
See Verify Hypervisor Support on page 13.
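As a hedged example of this check (the ISO filename is hypothetical), compute the checksum on the Foundation VM and look it up in the whitelist:

$ md5sum VMware-VMvisor-Installer-6.0.iso
$ grep -i "$(md5sum VMware-VMvisor-Installer-6.0.iso | awk '{print $1}')" \
    ~/foundation/config/iso_whitelist.json

If grep prints a matching entry, the image is qualified; no output means the image is not in the whitelist used by this Foundation version.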
The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.

Table 5: iso_whitelist.json Fields

Name Description

(n/a) The key of each entry is the MD5 checksum of the ISO image.


min_foundation Displays the earliest Foundation version that supports this ISO image. For
example, "2.1" indicates you can install this ISO image using Foundation
version 2.1 or later (but not an earlier version).
hypervisor Displays the hypervisor type (esx, hyperv, or kvm). The "kvm" designation
means AHV. Entries with a "linux" hypervisor are not available; they are
for Nutanix internal use only.
min_nos Displays the earliest AOS version compatible with this hypervisor ISO. A
null value indicates there are no restrictions.
friendly_name Displays a descriptive name for the hypervisor version, for example "ESX
6.0" or "Windows 2012r2".
version Displays the hypervisor version, for example "6.0" or "2012r2".
unsupported_hardware Lists the Nutanix models on which this ISO cannot be used. A blank list
indicates that there are no model restrictions. However, conditional restrictions,
such as Haswell-based models supporting only ESXi version 5.5 U2a or
later, may not be reflected in this field.
skus (Hyper-V only) Lists which Hyper-V types (datacenter and standard) are supported with
this ISO image.
compatible_versions Reflects, through regular expressions, the hypervisor versions that can co-
exist with the ISO version in an Acropolis cluster (primarily for internal
use).
deprecated (optional field) Indicates the Foundation version from which this hypervisor image is no
longer supported (that version and all later versions). If the value is null,
the image is supported by all Foundation versions to date.

filesize Displays the file size of the hypervisor ISO image.

The following are sample entries from the whitelist for an ESX and an AHV image.
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"filesize": 329611264,
"unsupported_hardware": [],
"compatible_versions": {
"esx": ["^6\\.0.*"]
},

"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},
10
NETWORK REQUIREMENTS
When configuring a Nutanix block, ensure that the IP addresses of the components exist in the customer network and
can be assigned to the Nutanix cluster. You must also open the required software ports to manage cluster components
and to enable communication between components such as the Controller VM, Web console, Prism Central,
hypervisor, and the Nutanix hardware. Nutanix recommends that you specify information such as a DNS server and
NTP server even if the cluster is not connected to the Internet or is not running in a production environment.

Existing Customer Network


You will need the following information during the cluster configuration:

• Default gateway
• Network mask
• DNS server
• NTP server
Also check whether a proxy server is in place in the network. If one is, you need the IP address and port
number of that server when enabling Nutanix Support on the cluster.

New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:

• IPMI interface
• Hypervisor host
• Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller VMs and
hypervisor hosts can be on this network.
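For example, a single block of four nodes consumes 12 new IP addresses: four for the IPMI interfaces and eight (four host plus four Controller VM addresses) on the shared cluster subnet.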

Software Ports Required for Management and Communication


The following Nutanix network port diagrams show the ports that must be open for supported hypervisors. The
diagrams also show ports to open for infrastructure services.



Figure 11: Nutanix Network Port Diagram for VMware ESXi

Figure 12: Nutanix Network Port Diagram for AHV

Figure 13: Nutanix Network Port Diagram for Microsoft Hyper-V


11
HYPER-V INSTALLATION REQUIREMENTS
Ensure that the following requirements are met before installing Hyper-V:

Windows Active Directory Domain Controller


Requirements:

• The primary domain controller version must be at least 2008 R2.

Note: If you have a Volume Shadow Copy Service (VSS) based backup tool (for example, Veeam), the functional level
of Active Directory must be 2008 or higher.

• Active Directory Web Services (ADWS) must be installed and running. By default, connections are made over TCP port 9389,
and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on with a domain administrator account
to a Windows host (other than the domain controller) that is joined to the same domain and has the
RSAT-AD-PowerShell feature installed, and run the following PowerShell command:
> (Get-ADDomainController).Name
If the command prints the name of the primary domain controller, ADWS is installed and the port is open.
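As an additional hedged check, you can confirm that TCP port 9389 is open from the same Windows host (dc01.mydomain.local is a hypothetical domain controller name):
> Test-NetConnection -ComputerName dc01.mydomain.local -Port 9389
If TcpTestSucceeded reads True in the output, the ADWS port is reachable.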

• The domain controller must run a DNS server.

Note: If any of the preceding requirements are not met, you must manually create an Active Directory computer
object for the Nutanix storage in Active Directory and add a DNS entry for the name.

• Ensure that the Active Directory domain is configured correctly for consistent time synchronization.
Accounts and Privileges:

• An Active Directory account with permission to create new Active Directory computer objects for either the storage
container or the Organizational Unit (OU) where Nutanix nodes are placed. The credentials of this account are not
stored anywhere.
• An account that has sufficient privileges to join a Windows host to the domain. The credentials of this account are
not stored anywhere; they are used only to join the hosts to the domain.
The following additional information is required:

• The IP address of the primary domain controller.

Note: The primary domain controller IP address is set as the primary DNS server on all the Nutanix hosts. It is
also set as the NTP server in the Nutanix storage cluster to keep the Controller VM, host, and Active Directory time
synchronized.

• The fully qualified domain name to which the Nutanix hosts and the storage cluster are to be joined.



SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:

• The SCVMM version must be at least 2012 R2, and it must be installed on Windows Server 2012 or a newer
version.
• The SCVMM server must allow PowerShell remoting.
To test this, log on to a Windows host other than the SCVMM host (for example, the domain controller)
by using the SCVMM administrator account, and run the following PowerShell command. If it prints the
name of the SCVMM server, PowerShell remoting to the SCVMM server is not blocked.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN
\username

Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain name.

Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM setup manually by
using the SCVMM user interface.

• The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the following
command:
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username

Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory domain name.
• The SMB client configuration on the SCVMM server must have RequireSecuritySignature set to False. To
verify, run the following command:
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration |
FL RequireSecuritySignature}

Replace scvmm_server_name with the SCVMM host name.


This configuration can be set to True by a domain policy; in that case, modify the domain policy to set it to False.
If the setting is currently True, you can set it back to False, but the change might not persist if a policy reverts it
to True. To change it, run the following command in PowerShell on the SCVMM host, logged on as a domain
administrator:
Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

If you are changing the setting from True to False, confirm that the policies on the SCVMM host have the correct
value. On the SCVMM host, run rsop.msc to review the resultant set of policy details, and verify the value under
Servername > Computer Configuration > Windows Settings > Security Settings > Local Policies >
Security Options: Policy Microsoft network client: Digitally sign communications (always). The value
displayed in RSOP must be Disabled or Not Defined for the change to persist. If RSOP shows the value as
Enabled, update the group policies configured in the domain that apply to the SCVMM server to Disabled;
otherwise, RequireSecuritySignature changes back to True at a later time. After setting the policy in Active
Directory and propagating it to the domain controllers, refresh the SCVMM server policy by running gpupdate /
force, and confirm in RSOP that the value is Disabled.

Note: If security signing is mandatory, you must enable Kerberos in the Nutanix cluster. In this case, it
is important to ensure that the time remains synchronized between the Active Directory server, the Nutanix
hosts, and the Nutanix Controller VMs. The Nutanix hosts and the Controller VMs set their NTP server to the
Active Directory server, so ensure that the Active Directory domain is configured correctly for consistent time
synchronization.

Accounts and Privileges:

• When adding a host or a cluster to SCVMM, the run-as account you specify for managing the host or
cluster must be different from the service account that was used to install SCVMM.
• The run-as account must be a domain account and must have local administrator privileges on the Nutanix hosts.
This can be a domain administrator account. When the Nutanix hosts are joined to the domain, domain
administrator accounts automatically take administrator privileges on the host. If the domain account used as the
run-as account in SCVMM is not a domain administrator account, you must manually add it to the list of local
administrators on each host by running sconfig.

• An SCVMM domain account with administrator privileges on SCVMM and PowerShell remote execution
privileges.
• If you want to install the SCVMM server, a service account with local administrator privileges on the SCVMM
server.

IP Addresses

• One IP address for each Nutanix host.


• One IP address for each Nutanix Controller VM.
• One IP address for each Nutanix host IPMI interface.
• One IP address for the Nutanix storage cluster.
• One IP address for the Hyper-V failover cluster.

Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same subnet.
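For example, a four-node cluster requires 3*4 + 2 = 14 IP addresses: 12 for the nodes (one each for IPMI, host, and Controller VM) plus one for the Nutanix storage cluster and one for the Hyper-V failover cluster.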

DNS Requirements

• Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added to the DNS
server during domain joining.
• The Nutanix storage cluster must be assigned a name of 15 characters or less. Add this name to the DNS
server when the storage cluster is joined to the domain.
• The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets automatically added to
the DNS server when the failover cluster is created.
• After the Hyper-V configuration, all names must resolve to an IP address from the Nutanix hosts, the SCVMM server
(if applicable), and any other host that needs access to the Nutanix storage, for example, a host running Hyper-V
Manager.

Storage Access Requirements

• Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by name, not by the external
IP address. If you use the IP address, all I/O is directed to a single node in the cluster, which compromises
performance and scalability.

Note: For external non-Nutanix hosts that must access Nutanix SMB shares, see Nutanix SMB Shares
Connection Requirements from Outside the Cluster.



Host Maintenance Requirements

• When applying Windows updates to the Nutanix hosts, restart the hosts one at a time, ensuring that Nutanix
services come up fully in the Controller VM of the restarted host before updating the next host. You can accomplish
this by using Cluster-Aware Updating with a Nutanix-provided script, which can be plugged into the Cluster-Aware
Update Manager as a pre-update script. This pre-update script ensures that the Nutanix services go down
on only one host at a time, ensuring availability of storage throughout the update procedure. For more information
about cluster-aware updating, see Installing Windows Updates with Cluster-Aware Updating.

Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the domain policies.
12
SET IPMI STATIC IP ADDRESS
You can assign a static IP address for an IPMI port by resetting the BIOS configuration.

About this task

Note: Do not perform the following procedure for the HPE DX series. The label on the HPE DX chassis contains the iLO
MAC address, which you can use for MAC-based imaging.

To configure a static IP address for the IPMI port on a node, do the following:

Procedure

1. Connect a VGA monitor and USB keyboard to the node.

2. Power on the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.

4. Click the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter key.

6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up window.



7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.

8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in the
pop-up window.

9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up window.

10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network gateway in
the pop-up window.

11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup mode.
13
TROUBLESHOOTING
This section provides guidance for fixing problems that might occur during a Foundation installation.

• For help with IPMI configuration problems in a bare metal workflow, see Fix IPMI Configuration Problems on
page 47.
• For help with imaging problems, see Fix Imaging Problems on page 48.
• For answers to other common questions, see Frequently Asked Questions (FAQ) on page 49.

Fix IPMI Configuration Problems


About this task
In a bare metal workflow, when the IPMI port configuration fails for one or more nodes in the cluster, or when the
configuration works but type detection fails and complains that it cannot reach an IPMI IP address, the installation
process stops before imaging any of the nodes. (Foundation does not go to the imaging step after an IPMI port
configuration failure, but it tries to configure the port address on all nodes before stopping.) Possible reasons for a
failure include the following:

• One or more IPMI MAC addresses are invalid, or there are conflicting IP addresses. Go to the IPMI screen and
correct the IPMI MAC and IP addresses as needed.
• There is a username/password mismatch. Go to the IPMI page and correct the IPMI username and password
fields as needed.
• One or more nodes are connected to the switch through the wrong network interface. Go to the back of the nodes
and verify that the first 1GbE network interface of each node is connected to the switch (see Set Up the Network
on page 23).
• The Foundation VM is not in the same broadcast domain as the Controller VMs of the discovered nodes or the IPMI
interfaces of the added (bare metal or undiscovered) nodes. This problem typically occurs because (a) you are not
using a flat switch, (b) some node IP addresses are not in the same subnet as the Foundation VM, or (c) multi-
homing was not configured.

• If all the nodes are in the Foundation VM subnet, go to the Node page and correct the IP addresses as needed.
• If the nodes are in multiple subnets, go to the Cluster page and configure multi-homing.
• The IPMI interface is not set to failover. You can check for this through the BIOS.
To identify and resolve IPMI port configuration problems, do the following:

Procedure

1. Go to the Block & Node Config screen and review the problem IP addresses for the failed nodes (nodes with a red
X next to the IPMI address field).
Hovering the cursor over an address displays a pop-up message with troubleshooting information, which can
help you diagnose the problem. See the service.log file (in /home/nutanix/foundation/log) and the
individual node log files for more detailed information.

Figure 14: Foundation: IPMI Configuration Error

2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button at the top
of the screen.

Figure 15: Configure IPMI Button

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at the top of
the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those nodes and
continue to the imaging step for the other nodes by clicking the Proceed button. In this case you must configure
the IPMI port address manually for each bypassed node (see Set IPMI Static IP Address on page 45).

Fix Imaging Problems


About this task
When imaging fails for one or more nodes in the cluster, the progress bar turns red and a red check mark appears next to
the hypervisor address field of any node that was not imaged successfully. Possible reasons for a failure include the
following:

• A type detection failure occurred. Check connectivity to the IPMI interface (bare metal workflow).
• There were network connectivity issues such as the following:

• The connection drops intermittently. If intermittent failures persist, look for conflicting IP addresses.
• [Hyper-V only] SAMBA is not up. If Hyper-V complains that it failed to mount the install share, restart
SAMBA with the command sudo service smb restart.
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free up some space by
deleting extraneous ISO images. In addition, a Foundation crash can leave a /tmp/tmp* directory that contains
a copy of an ISO image, which you can unmount (if necessary) and delete. Foundation needs about 9 GB of free
space for Hyper-V and about 3 GB for ESXi or AHV.

• The host boots but complains that it cannot reach the Foundation VM. The message varies per hypervisor; for
example, on ESXi you might see a "ks.cfg:line 12: "/.pre" script returned with an error" message. Make sure
you have assigned the host an IP address on the same subnet as the Foundation VM or have configured multi-
homing. Also check for IP address conflicts.
To identify and resolve imaging problems, do the following:

Procedure

1. See the individual log file for any failed nodes for information about the problem.

• Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/foundation.out[.timestamp]
• Bare metal location for Foundation logs: /home/nutanix/foundation/log

2. When you have corrected the problems and are ready to try again, click the Image Nodes (bare metal
workflow) button.

Figure 16: Image Nodes Button (bare metal)

3. Repeat the preceding steps as necessary to fix all the imaging errors.
If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a time
(Contact Support for help).

Frequently Asked Questions (FAQ)


This section provides answers to some common Foundation questions.

Installation Issues

• How do I deploy a one-node or two-node cluster?

Run Foundation just as you would to deploy a cluster of three or more nodes.
• What steps should I take when I encounter a problem?
Click the appropriate log link in the Foundation UI. In most cases, the log file provides some information about
the problem near the end of the file. If that information (plus the information in this troubleshooting section) is
sufficient to identify and solve the problem, fix the issue and then restart the imaging process.
If you were unable to fix the problem, open a Nutanix support case. You can do this from the Nutanix support
portal (https://portal.nutanix.com/#/page/cases/form?targetAction=new). Upload relevant log files as requested.
The log files are in the following locations:

• Standalone (bare metal) location for Foundation logs: /home/nutanix/foundation/log in your
Foundation VM. This directory contains a service.log file for Foundation-related log messages, a log
file for each node being imaged (named node_cvm_ip_addr.log), a log file for each cluster being created
(named cluster_cluster_name.log, cluster_1.log, and so on), http.access and http.error files
for server-related log messages, a debug.log file that records every bit of information Foundation outputs,
and an api.log file that records certain requests made to the Foundation API. Logs from past installations are
stored in /home/nutanix/foundation/log/archive. In addition, the state of the current install process
is stored in /home/nutanix/foundation/persisted_config.json. You can download the entire log
archive from the following URL: http://foundation_ip:8000/foundation/log_archive_tar
• Controller VM location for Foundation logs: ~/data/logs/foundation (see preceding content description)
and ~/data/logs/foundation.out[.timestamp], which corresponds to the service.log file.
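As a hedged convenience, assuming wget is available on your workstation, you can fetch the standalone log archive from that URL in one step (the output filename is arbitrary):

$ wget http://foundation_ip:8000/foundation/log_archive_tar -O foundation-log-archive.tar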
• I want to troubleshoot the operating system installation during cluster creation.
Point a VNC console to the hypervisor host IP address of the target node at port 5901.
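For example, with a TigerVNC-style client on your workstation (the client name and the host IP are assumptions for this sketch):

$ vncviewer 10.1.1.21::5901

Here 10.1.1.21 stands for the hypervisor host IP address of the target node, and the double colon addresses the raw port 5901.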
• I need to restart Foundation on the Controller VM.
To restart Foundation, log on to the Controller VM with SSH and then run the following command:
nutanix@cvm$ pkill foundation && genesis restart

• My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IPs are reachable from Foundation. (On rare occasions, the IPMI IP assignment
takes some time.) If you get a complaint about authentication, double-check your password. If the problem persists,
try resetting the BMC.
• Installation fails with an error where Foundation cannot ping the configured IPMI IP addresses.
Verify that the LAN interface is set to failover mode in the IPMI settings for each node. You can find this setting
by logging into IPMI and going to Configuration > Network > Lan Interface. Verify that the setting is
Failover (not Dedicate).
• The diagnostic box was checked to run after installation, but that test (diagnostics.py) does not complete
(hangs, fails, times out).
Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables might not provide the
performance necessary to run this test at a reasonable speed.
• Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous hypervisor> and the install
hangs.
The boot order for one or more nodes might be set incorrectly to select the USB over SATA DOM as the first boot
device instead of the CDROM. To fix this, boot the nodes into BIOS mode and either select "restore optimized
defaults" (F3 as of BIOS version 3.0.2) or give the CDROM boot priority. Reboot the nodes and retry the
installation.
• I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for the call
back function, and is there a way I can avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up the terminal in the
Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start
Refresh the Foundation web page. If the nodes are still stuck, reboot them.
• I need to reset a block to the default state.
Using the bare metal imaging workflow, download the desired Phoenix ISO image for AHV from the support
portal (see https://portal.nutanix.com/#/page/phoenix/list). Boot each node in the block to that ISO and follow the
prompts until the re-imaging process is complete. You should then be able to use Foundation as usual.
• The cluster create step is not working.
If you are installing NOS 3.5 or later, check the service.log file for messages about the problem. Next, check
the relevant cluster log (cluster_X.log) for cluster-specific messages. The cluster create step in Foundation is
not supported for earlier releases and will fail if you are using Foundation to image a pre-3.5 NOS release. You
must create the cluster manually (after imaging) for earlier NOS releases.
• I want to re-image nodes that are part of an existing cluster.
Do a cluster destroy prior to discovery. (Nodes in an existing cluster are ignored during discovery.)
• My Foundation VM is complaining that it is out of disk space. What can I delete to make room?
Unmount any temporarily-mounted file systems using the following commands:
$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*
If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
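As a hedged sketch for finding what is consuming space (the paths follow the standard Foundation VM layout described in this guide):

$ df -h /home
$ du -sh ~/foundation/isos ~/foundation/tmp

Delete or relocate the largest ISO images that you no longer need.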
• I keep seeing the message "tar: Exiting with failure status due to previous errors. 'tar rf /home/nutanix/foundation/
log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./persisted_config.json' failed; error
ignored."
This is a benign message. Foundation archives your persisted configuration file (persisted_config.json) alongside
the logs. Occasionally, there is no configuration file to back up. This is expected, and you can ignore the
message with no ill consequences.
• Imaging fails after changing the language pack.
Do not change the language pack. Only the default English language pack is supported. Changing the language
pack can cause some scripts to fail during Foundation imaging. Even after imaging, character set changes can
cause problems for NOS.
• [ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it.

Network and Workstation Issues

• I am having trouble installing VirtualBox on my Mac.


Turning off Wi-Fi can sometimes resolve this problem. For help with VirtualBox issues, see https://
www.virtualbox.org/wiki/End-user_documentation.
There can be a problem when the USB Ethernet adapter is listed as a 10/100 interface instead of a 1G interface.
To get a 1G interface, it is recommended that MacBook Air users connect to the network with a Thunderbolt
network adapter rather than a USB network adapter.
• I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM on
VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see https://forums.virtualbox.org/
viewtopic.php?f=8&t=58767.
• I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically creates
a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run the following
commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now
This should reboot your machine and reset your adapter to eth0.

• I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but discovery
is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6 link-
local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in. (If they are,
the Controller VMs might choose to direct their traffic over that interface and never reach your Foundation VM.)
If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and the 10G ports are connected.
• The switch is dropping my IPMI connections in the middle of imaging.
If your network connection seems to be dropping out in the middle of imaging, try using an unmanaged switch
with spanning tree protocol disabled.
• Foundation is stalled on the ping home phase.
The ping test will wait up to two minutes per NIC to receive a response, so a long delay in the ping phase indicates
a network connection issue. Check that your 10G cables are unplugged and your 1G connection can reach
Foundation.
• How do I install on a 10/100 switch?
A 10/100 switch is not recommended, but it can be used for a few nodes. However, you may see timeouts. It is
highly recommended that you use a 1G or 10G switch if one is available.

Informational Topics

• How can I determine whether a node was imaged with Foundation or standalone Phoenix?

• A node imaged using standalone Phoenix has the file /etc/nutanix/foundation_version, but its
contents are "unknown" instead of a valid Foundation version.
• A node imaged using Foundation has the file /etc/nutanix/foundation_version with a valid
Foundation version.
• Does first boot work when run more than once?
First boot creates a failure marker file whenever it fails and a success marker file whenever it succeeds. If
the first boot script needs to be executed again, delete these marker files and manually execute the script.
• Do the first boot marker files contain anything?
They are just empty files.
• Why might first boot fail?
Possible reasons include the following:

• First boot may take more time than expected, in which case Foundation might time out.
• NIC team creation fails.
• The Controller VM has a kernel panic when it boots.
• Hostd does not come up in time.
• What is the timeout for first boot?
The timeout is 90 minutes. A node may restart several times during the execution of the first boot script (as
required by certain driver installations), and this can increase the overall first boot time.

• How does the Foundation process differ on a Dell system?
Foundation uses a different tool called racadm to talk to the IPMI interface of a Dell system, and the files which
have the hardware layout details are different. However, the overall Foundation work flow (series of steps)
remains the same.
• How does the Foundation service start in the Controller VM-based and standalone versions?

• Standalone: Manually start the Foundation service using foundation_service start (in the ~/
foundation/bin directory).
• Controller VM-based: Genesis service takes care of starting the Foundation service. If the Foundation service
is not already running, use the genesis restart command to start Foundation. If the Foundation service
is already running, a genesis restart will not restart Foundation. You must manually kill the Foundation
service that is running currently before executing genesis restart. The genesis status command lists
the services running currently along with their PIDs.
• Why doesn’t the genesis restart command stop Foundation?
Genesis restarts only the services required for a cluster to be up and running. Stopping Foundation could cause
failures to current imaging sessions. For example, when expanding a cluster Foundation may be in the process of
imaging a node, which should not be disrupted by restarting Genesis.
• How is the installer VM created?
The Qemu library is part of Phoenix. The qemu command starts the VM by taking a hypervisor ISO and disk
details as input. This command is simply executed on Phoenix to launch the installer VM.
• How do you validate that installation is complete and the node is ready with regard to first boot?
Validate this by checking for the presence of a first boot success marker file. The marker file varies per
hypervisor:

• ESXi: /bootbank/first_boot.log
• AHV: /root/.firstboot_success
• Hyper-V: D:\markers\firstboot_success
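As a hedged example, you can check the AHV marker from the Foundation VM over SSH (the host IP is an assumption):

$ ssh root@10.1.1.21 ls -l /root/.firstboot_success

If the file is listed, first boot completed successfully on that node.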
• Does Repair CVM re-create partitions?
Repair CVM images AOS alone and recreates the partitions on the SSD. It does not touch the SATADOM, which
contains the hypervisor.
• Can I use older Phoenix ISOs for manual imaging?
Use a Phoenix ISO that contains the AOS installation bundle and the hypervisor ISO. The Makefile has a separate
target for building such a standalone Phoenix ISO.
• What are the pre-checks run when a node is added?

• The hypervisor type and version should match between the existing cluster and the new node.
• The AOS version should match between the existing cluster and the new node.
• Can I get a map of percent completion to step?
No. The percent completion does not have a one-to-one mapping to the step. Percent completion depends on the
different tasks which actually get executed during imaging.

• Do the log folders contain past imaging session logs?
Yes. All the previous imaging session logs are compressed (on a session basis) and archived in the folder ~/
foundation/log/archive.
• If I have two clusters in my lab, can I use one to do bare-metal imaging on the other?
No. This is because the tools and packages which are required for bare-metal imaging are typically not present in
the Controller VM.
• How do you add a new node that needs to be imaged to an existing cluster?
If the cluster is running AOS 4.5 or later and the node also has 4.5 or later, you can use the "Expand Cluster"
option in the Prism web console. This option employs Foundation to image the new node (if required) and then
adds it to the existing cluster. You can also add the node through the nCLI: ncli cluster add-node
node-uuid=<uuid>. The UUID value can be found in the factory_config.json file on the node.
• Is it required to supply IPMI details when using Controller VM-based Foundation?
Providing IPMI details in Controller VM-based Foundation is optional. If IPMI information is provided,
Foundation tries to configure the IPMI interface as well.
• Is it valid to use a share to hold AOS installation bundles and hypervisor ISO files?
AOS installation bundles and hypervisor ISO files can be present anywhere, but there needs to be a link (as
appropriate) in ~/foundation/nos or ~/foundation/isos/hypervisor/[esx|kvm|hyperv]/ to the
appropriate share location. Foundation will pick up files in these locations only. As long as a file is accessible
from these standard locations inside Foundation using a link, Foundation will pick it up.
• Where is Foundation located in the Controller VM?
/home/nutanix/foundation
• How can I determine if a particular (standalone) Foundation VM can image a given cluster?
Execute the following command on the Foundation VM and see whether it returns successfully (exit status 0):
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password fru
If this command is successful, the Foundation VM can be used to image the node. This is the command used by
Foundation to get hardware details from the IPMI interface of the node. The exact tool used for talking to the
SMC IPMI interface is the following:
java -jar SMCIPMITool.jar ipmi_ip username password shell
If this command is able to open a shell, imaging will not fail because of an IPMI issue. Any other errors like
violating minimum requirements will be shown only after Foundation starts imaging the node.
• How do I determine whether a particular hypervisor ISO will work?
The MD5 hashes of all qualified hypervisor ISO images are listed in the iso_whitelist.json file, which is
located in ~/foundation/config/. The latest version of the iso_whitelist.json file is available from the
Nutanix support portal (see Hypervisor ISO Images on page 37).
• How does Foundation mount an ISO over IPMI?

• For SMC, Foundation uses the following commands:


cd foundation/lib/bin/smcipmitool
java -jar SMCIPMITool.jar ipmi_ip ipmi_username ipmi_password shell
vmwa dev2iso <path to iso file>

The java command starts a shell with access to the remote IPMI interface. The vmwa command mounts the
ISO file virtually over IPMI. Foundation then opens another terminal and uses the following commands to set
the first boot device to CD ROM and restart the node.
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis bootdev cdrom
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis power reset

• For Dell, Foundation uses the following commands:


racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -d
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -c -l nfs_share_path_to_iso_file -u nutanix -p nutanix/4u
racadm -r ipmi_ip -u ipmi_username -p ipmi_password remoteimage -s
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o cfgServerBootOnce 1
racadm -r ipmi_ip -u ipmi_username -p ipmi_password config -g cfgServerInfo -o cfgServerFirstBootDevice vCD-DVD
The node can be rebooted using the following commands:
racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerdown
racadm -r ipmi_ip -u ipmi_username -p ipmi_password serveraction powerup

• Does Phoenix download using an IPv4 address? Yes.


• Is there an integrity check on the files phoenix downloads?
Yes. The MD5 sums of the files to be downloaded (AOS and the hypervisor ISO) are passed to Phoenix through a
configuration file. (The HTTP path to the configuration file is passed as command-line input.) Phoenix verifies the
MD5 sum of each file after downloading and retries the download if an MD5 mismatch is detected.
• What is pynfs?
It is a Python implementation of an NFS share that was used in the early days of Foundation. It is still used on
platforms with a 16 GB DOM.
• Is there a reason for using port 8000?
No specific reason.

Appendix

A
SINGLE-NODE CONFIGURATION (PHOENIX)
Use Phoenix, an ISO installer, to configure a single node, to reinstall the hypervisor after you replace a hypervisor
boot drive, or to install or repair a Nutanix Controller VM.
For more information about using Phoenix, see KB 5591. To use Phoenix, contact Nutanix Support.

Warning:

• Use of Phoenix to re-image or reinstall AOS with the Action titled "Install CVM" on a node that is
already part of a cluster is not supported by Nutanix and can lead to data loss.
• Use of Phoenix to repair the AOS software on a node with the Action titled "Repair CVM" must be done
only with the direct assistance of Nutanix Support.
• Use of Phoenix to recover a node after a hypervisor boot disk failure is not necessary in most cases.
Refer to the Hardware Replacement Documentation for your platform model and AOS version to
see how this recovery is automated through Prism.
COPYRIGHT
Copyright 2020 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or other
jurisdictions. All other brand and product names mentioned herein are for identification purposes only and may be
trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft patents with
respect to anything other than the file server implementation portion of the binaries for this software, including no
licenses or any other rights in any hardware or any devices or software that are used to communicate with or in
connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface Target Username Password

Nutanix web console Nutanix Controller VM admin Nutanix/4u

vSphere Web Client ESXi host root nutanix/4u

vSphere Client ESXi host root nutanix/4u

SSH client or console ESXi host root nutanix/4u

SSH client or console AHV host root nutanix/4u


SSH client or console Hyper-V host Administrator nutanix/4u

SSH client Nutanix Controller VM nutanix nutanix/4u

SSH client Nutanix Controller VM admin Nutanix/4u

IPMI web interface or ipmitool Nutanix node ADMIN ADMIN

SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin

SSH client or console Xtract VM nutanix nutanix/4u

SSH client or console Xplorer VM nutanix nutanix/4u

Version
Last modified: June 2, 2020 (2020-06-02T17:42:45+05:30)
