Foundation 3.7
27-Feb-2017
Notice
Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention              Description
user@host$ command      The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command      The commands are executed as the root user in the vSphere or Acropolis host shell.
> command               The commands are executed in the Hyper-V host shell.
Version
Last modified: February 27, 2017 (2017-02-27 10:48:08 GMT-8)
2: Creating a Cluster................................................................................... 7
Discovering Nodes and Launching Foundation...................................................................................8
Discovering Nodes That Are In the Same Broadcast Domain................................................. 8
Discovering Nodes in a VLAN-Segmented Network................................................................ 9
Launching Foundation.............................................................................................................10
Updating Foundation..........................................................................................................................11
Selecting the Nodes to Image........................................................................................................... 11
Defining the Cluster........................................................................................................................... 13
Setting Up the Nodes........................................................................................................................ 15
Selecting the Images......................................................................................................................... 17
Creating the Cluster...........................................................................................................................20
Configuring a New Cluster................................................................................................................ 23
Creating a Cluster (Single-Node Replication Target)........................................................................ 24
6: Network Requirements......................................................................... 58
9: Setting IPMI Static IP Address.............................................................69
10: Troubleshooting...................................................................................71
Fixing IPMI Configuration Problems.................................................................................................. 71
Fixing Imaging Problems................................................................................................................... 72
Frequently Asked Questions (FAQ)...................................................................................................73
1: Field Installation Overview
Nutanix installs AHV and the Nutanix Controller VM at the factory before shipping a node to a customer.
To use a different hypervisor (ESXi or Hyper-V) on factory nodes or to use any hypervisor on bare metal
nodes, the nodes must be imaged in the field. This guide provides step-by-step instructions on how to
use the Foundation tool to do a field installation, which consists of installing a hypervisor and the Nutanix
Controller VM on each node and then creating a cluster. You can also use Foundation to create just a
cluster from nodes that are already imaged or to image nodes without creating a cluster.
Note: Use Foundation to image factory-prepared (or bare metal) nodes and create a new cluster
from those nodes. Use the Prism web console (in clusters running AOS 4.5 or later) to image
factory-prepared nodes and then add them to an existing cluster. See the "Expanding a Cluster"
section in the Web Console Guide for this procedure.
A field installation can be performed for either factory-prepared nodes or bare metal nodes.
See Creating a Cluster on page 7 to image factory-prepared nodes and create a cluster from those
nodes (or just create a cluster for nodes that are already imaged).
See Imaging Bare Metal Nodes on page 29 to image bare metal nodes and optionally configure
them into one or more clusters.
Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix
hardware models with some restrictions. Click here (or log into the Nutanix support portal and
select Documentation > Compatibility Matrix from the main menu) for a list of supported
configurations. To check a particular configuration, go to the Filter By fields and select the
desired model, AOS version, and hypervisor in the first three fields and then set the last field to
Foundation. In addition, check the notes at the bottom of the table.
Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP
address), and node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed
for installation.
Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign
static IP addresses to Controller VMs.
Note: Nutanix uses an internal virtual switch to manage network communications between the
Controller VM and the hypervisor host. This switch is associated with a private network on the
default VLAN and uses the 192.168.5.0/24 address space. For the hypervisor, IPMI interface,
and other devices on the network (including the guest VMs that you create on the cluster), do
not use a subnet that overlaps with the 192.168.5.0/24 subnet on the default VLAN. If you want
to use an overlapping subnet for such devices, make sure that you use a different VLAN.
Download the following files, as described in Downloading Installation Files on page 53:
AOS installer named nutanix_installer_package-version#.tar.gz from the AOS (NOS) download
page.
Hypervisor ISO if you want to install Hyper-V or ESXi. The user must provide the supported Hyper-V
or ESXi ISO (see Hypervisor ISO Images on page 56); Hyper-V and ESXi ISOs are not available
on the support portal.
It is not necessary to download an AHV ISO because both Foundation and the AOS bundle include
an AHV installation bundle. However, you have the option to download an AHV upgrade installation
bundle if you want to install a non-default version of AHV.
Make sure that IPv6 is enabled on the network to which the nodes are connected and that IPv6
broadcast and multicast are supported.
If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the
nodes. If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the
nodes contain both regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs
at any time during the lifetime of the cluster.
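The IPv6 prerequisite above can be sanity-checked from a Linux workstation on the same network. The following is a minimal sketch (the helper name check_ipv6 and the interface name eth0 are illustrative, not part of Foundation):

```shell
# check_ipv6: succeeds when IPv6 is not disabled system-wide on Linux.
# (A value of 0 in this /proc entry means IPv6 is enabled.)
check_ipv6() {
  [ "$(cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo 1)" = "0" ]
}

check_ipv6 && echo "IPv6 enabled" || echo "IPv6 disabled or not available"

# To confirm that IPv6 multicast traffic passes on your segment, ping the
# all-nodes link-local multicast address (substitute your interface for eth0):
# ping6 -c 2 ff02::1%eth0
```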
Note: This method creates a single cluster from discovered nodes. This method is limited to
factory prepared nodes running AOS 4.5 or later. If you want to image discovered nodes without
creating a cluster, image factory prepared nodes running an earlier AOS (NOS) version, or image
bare metal nodes, see Imaging Bare Metal Nodes on page 29.
To image the nodes and create a cluster, do the following:
1. Run discovery and launch Foundation (see Discovering Nodes and Launching Foundation on
page 8).
2. (Optional) Upgrade Foundation to the latest version (see Updating Foundation on page 11).
3. Select the nodes to image and specify a redundancy factor (see Selecting the Nodes to Image on
page 11).
4. Define cluster parameters and, optionally, enable health tests, which run after the cluster is created (see
Defining the Cluster on page 13).
5. Configure the discovered nodes (see Setting Up the Nodes on page 15).
6. Select the AOS and hypervisor images to use (see Selecting the Images on page 17).
7. Start the process and monitor progress as the nodes are imaged and the cluster is created (see
Creating the Cluster on page 20).
8. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster on
page 23).
If the workstation that you want to use for imaging has internet access, on that workstation, do the
following:
If the workstation that you want to use for imaging does not have internet access, on a workstation that
has internet access, do the following:
c. Copy the downloaded bundle to the workstation that you want to use for imaging.
The discovery process begins and a window appears with a list of discovered nodes.
Note:
A security warning message may appear indicating this is from an unknown source. Click the
accept and run buttons to run the application.
1. Connect a console to one of the nodes and log on to the Acropolis host with root credentials.
2. Change your working directory to /root/nutanix-network-crashcart/, and then start the network
configuration utility.
root@ahv# ./network_configuration
a. Review the network card details to ascertain interface properties and identify connected interfaces.
b. Use the arrow keys to shift focus to the interface that you want to configure, and then use the
Spacebar key to select the interface.
Repeat this step for each interface that you want to configure.
c. Use the arrow keys to navigate through the user interface and specify values for the following
parameters:
VLAN Tag. VLAN tag to use for the selected interfaces.
Netmask. Network mask of the subnet to which you want to assign the interfaces.
Gateway. Default gateway for the subnet.
Controller VM IP. IP address for the Controller VM.
Hypervisor IP. IP address for the hypervisor.
d. Use the arrow keys to move the focus to Done, and then hit Enter.
The network configuration utility configures the interfaces.
Launching Foundation
How you launch Foundation depends on whether you used the Foundation Applet to discover nodes in the
same broadcast domain or the crash cart user interface to discover nodes in a VLAN-segmented network.
To launch the Foundation user interface, do one of the following:
If you used the Foundation Applet to discover nodes in the same broadcast domain, do the following:
Note: A warning message may appear stating this is not the highest available version of
Foundation found in the discovered nodes. If you select a node using an earlier Foundation
version (one that does not recognize one or more of the node models), installation may fail.
Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.
If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in a browser
on your workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when using the
network configuration tool.
Updating Foundation
You can use the Foundation user interface to update Foundation. Updating Foundation to the latest version
is optional but recommended.
To update Foundation, do the following:
1. From the Foundation download page on the Nutanix support portal (https://portal.nutanix.com/#/page/
foundation/list), download the Foundation tarball to the workstation that you plan to use for imaging.
2. Open the Foundation user interface. See Configuring Node Parameters on page 37 if you are using
standalone Foundation. See Discovering Nodes and Launching Foundation on page 8 if you are
using CVM Foundation.
3. In the gear icon menu at the top-right corner of the Foundation user interface, click Update
Foundation.
a. Click Browse, and then browse to and select the Foundation tarball you downloaded.
b. Click Install.
Note: A cluster requires a minimum of three nodes. Therefore, you must select at least three
nodes.
Note: If a discovered node has a VLAN tag, that tag is displayed. Foundation applies an
existing VLAN tag when imaging the node, but you cannot use Foundation to edit that tag or
add a tag to a node without one.
To display just the available nodes, select the Show only new nodes option from the pull-down
list on the right of the screen. (Blocks that contain only unavailable nodes do not appear, but a block
with both available and unavailable nodes does appear, with the exclamation mark icon displayed for
the unavailable nodes in that block.)
To deselect nodes you do not want to image, uncheck the boxes for those nodes. Alternately, click
the Deselect All button to uncheck all the nodes and then select those you want to image. (The
Select All button checks all the nodes.)
If you want to select only those nodes that can form a cluster of two or more ESXi nodes and one
AHV node, with the AHV node being used for storage only, click select all capable nodes in the
message that indicates that Foundation detected such nodes.
If Foundation detects nodes that can form such a cluster, the Block & Node Config screen displays
a message prompting you to select those nodes. For information about which nodes can form a
mixed-hypervisor cluster, see "Product Mixing Restrictions" in the NX and SX Series Hardware
Administration and Reference.
Note: You can get help or reset the configuration at any time from the gear icon pull-down
menu (top right). Internet access is required to display the help pages, which are located in the
Nutanix support portal.
b. Click (check) the RF 3 button and then click the Save Changes button.
The window disappears and the RF button changes to Change RF (3) indicating the redundancy
factor is now set to 3.
3. By default, the user interface displays only new nodes. To show all nodes, from the menu at the top-
right corner of the page, select Show all nodes.
4. Click the Next button at the bottom of the screen to configure cluster parameters (see Defining the
Cluster on page 13).
c. NTP Server Address (optional): Enter the NTP server IP address or (pool) domain name.
3. In the CVM and Hypervisor section of the Network Information area, do the following in the indicated
fields:
a. Netmask: Enter the netmask for the CVM and hypervisor subnet.
c. CVM Memory (optional): Specify a memory size for the Controller VM from the pull-down list.
For more information about Controller VM memory configuration, see CVM Memory and vCPU
Configurations (G4/Haswell/Ivy Bridge) on page 62.
d. Note: The following fields appear only if you selected Configure IPMI in the previous step.
f. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
g. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password.
4. (optional) Select Enable Testing to run Nutanix Cluster Check (NCC) after the cluster is created.
NCC is a test suite that checks a variety of health metrics in the cluster. The results are stored in the ~/
foundation/logs/ncc directory.
5. Click the Next button at the bottom of the screen to configure the cluster nodes (see Setting Up the
Nodes on page 15).
The lower section includes fields for the host names and IP addresses that are automatically generated
based on your entries in the upper section. You can edit the values in these fields if you want to make
changes.
1. If you want the blocks to be in a particular order before assigning addresses, click Reorder Blocks. In
the Reorder Blocks dialog box, drag blocks to their desired positions, and then click Done.
If you reorder blocks after manually assigning each IP address, the blocks retain their IP addresses,
even in their new positions. If you reorder blocks after the user interface generates IP addresses for
you, the reorder operation regenerates the IP addresses, and any changes that you make to individual
IP address assignments are lost.
2. In the Hostname and IP Range section, do the following in the indicated fields:
a. Hypervisor Hostname: Enter a base host name for the set of nodes. Host names should contain
only digits, letters, and hyphens.
The base name with a suffix of "-1" is assigned as the host name of the first node, and the base
name with "-2", "-3" and so on are assigned automatically as the host names of the remaining nodes.
b. CVM IP: Enter the starting IP address for the set of Controller VMs across the nodes.
Enter a starting IP address in the FROM/TO line of the CVM IP column. The entered address is
assigned to the Controller VM of the first node, and consecutive IP addresses (sequentially from
the entered address) are assigned automatically to the remaining nodes. Discovered nodes are
sorted first by block ID and then by position, so IP assignments are sequential. If you do not want
all addresses to be consecutive, you can change the IP address for specific nodes by updating the
address in the appropriate fields for those nodes.
d. IPMI IP (when enabled): Repeat the previous step for this field.
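The automatic suffixing described above for host names can be sketched as follows (gen_hostnames is a hypothetical helper shown only to illustrate the -1, -2, -3 naming pattern; it is not part of Foundation):

```shell
# gen_hostnames: expand a base name into per-node host names,
# mirroring how Foundation appends -1, -2, -3, ... to the base name.
gen_hostnames() {
  base=$1; count=$2
  i=1
  while [ "$i" -le "$count" ]; do
    echo "${base}-${i}"
    i=$((i + 1))
  done
}

gen_hostnames prod-cluster 3
# prod-cluster-1
# prod-cluster-2
# prod-cluster-3
```

The CVM, hypervisor, and IPMI IP columns follow the same pattern, incrementing the starting address once per node in block/position order.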
3. In the Manual Input section, review the assigned host names and IP addresses. If any of the names or
addresses are not correct, enter the desired name or IP address in the appropriate field.
There is a section for each block with a line for each node in the block. The letter designation (A, B, C,
and D) indicates the position of that node in the block.
4. When all the host names and IP addresses are correct, click the Validate Network button at the bottom
of the screen.
This does a ping test to each of the assigned IP addresses to check whether any of those addresses
are being used currently.
If there are no conflicts (none of the addresses return a ping), the process continues (see Selecting
the Images on page 17).
If there is a conflict (one or more addresses returned a ping), this screen reappears with the
conflicting addresses highlighted in red. Foundation will not continue until the conflict is resolved.
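The Validate Network step above is essentially a ping sweep over the assigned addresses. A rough equivalent, runnable from the imaging workstation, looks like this (check_range and the example range are assumptions for illustration; the range is assumed to stay within the final octet):

```shell
# check_range: ping COUNT consecutive IPv4 addresses starting at START.
# An address that answers is already in use and would be flagged as a
# conflict by Validate Network.
check_range() {
  start=$1; count=$2
  base=${start%.*}; last=${start##*.}
  i=0
  while [ "$i" -lt "$count" ]; do
    ip="$base.$((last + i))"
    if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
      echo "CONFLICT: $ip responded to ping"
    fi
    i=$((i + 1))
  done
}

# Example: check the ten addresses 10.1.1.20 through 10.1.1.29
# check_range 10.1.1.20 10
```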
1. If the prompt to choose between creating a single-hypervisor cluster and a multi-hypervisor cluster is
displayed, click the type of cluster you want, and then click Continue.
If you choose the multi-hypervisor option, an additional item named ESX+AHV is included in the
Hypervisor Type list and selected by default. If you choose the single-hypervisor option, Foundation
displays the default Image Selection screen shown earlier.
a. From the Available Packages list in the AOS section, select the AOS installer package that you
want to use.
Note: The list displays the packages that are available in the ~/foundation/nos directory
on the Foundation VM. If the list is empty or the package you want to use is not available,
perform the next step.
b. If you want to upload a package to the ~/foundation/nos directory, click Manage above the
Available Packages list. Click Add, and then click Choose File. Browse to and upload the package
that you want to use, and then click Close. Finally, from the Available Packages list, select the AOS
installer package that you uploaded.
3. From the Hypervisor Type list, select the hypervisor type that you want to install (for multi-hypervisor
clusters, the selected hypervisor is installed on the compute nodes, and AHV is installed on the storage-
only nodes).
Selecting Hyper-V as the hypervisor displays a SKU list.
Note: The hypervisor field is not active until the AOS installation bundle is selected (uploaded)
in the previous step. Foundation comes with an AHV image. If that is the correct image to use,
skip to the next step.
If the AHV ISO that you want to use is not listed, click Manage above the Available Packages list,
upload the ISO file that you want to use, and then select the file from the Available Packages list, as
described earlier for uploading an AOS package.
Only approved hypervisor versions are permitted; Foundation will not image nodes with an unapproved
version. To verify your version is on the approved list, click the See Whitelist link and select the
appropriate hypervisor tab in the pop-up window. Nutanix updates the list as new versions are
approved, and the current version of Foundation may not have the latest list. If your version does not
appear on the list, click the Update the whitelist link to download the latest whitelist from the Nutanix
support portal.
5. From the list of AHV bundles for the storage-only nodes, backup nodes, or multi-cluster nodes (based
on the mix of selected nodes, the list is named Available AHV Packages (Storage Only Nodes),
Available AHV Packages (Backup Nodes), or just Available AHV Packages), select the AHV package
that you want to use.
If the AHV ISO that you want to use is not listed, click Manage above the Available Packages list,
upload the ISO file that you want to use, and then select the file from the Available Packages list, as
described earlier for uploading an AOS package.
6. [Hyper-V only] From the Choose Hyper-V SKU list, select the SKU for the Hyper-V version to use.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, Datacenter
with GUI. This list appears only when you select Hyper-V.
7. When both images are uploaded and ready, do one of the following:
To image the nodes and then create the new cluster, click the Create button at the bottom of the
screen.
To create the cluster without imaging the nodes, click the Skip button (in either case see Creating
the Cluster on page 20).
Note: The Skip option requires that all the nodes have the same hypervisor and AOS
version. This option is disabled if they are not all the same (with the exception of any model
NX-6035C "cold" storage nodes in the cluster that run AHV regardless of the hypervisor
running on the other nodes).
The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. The selected node (see Launching Foundation on page 10) is imaged
first. When that imaging is complete, the remaining nodes are imaged in parallel. The imaging process
takes about 30 minutes, so the total time is about an hour (30 minutes for the first node and another 30
minutes for the other nodes imaged in parallel). You can monitor overall progress by clicking the Log
link at the top, which displays the foundation.out contents in a separate tab or window. Click on the
Log link for a node to display the log file for that node in a separate tab or window.
2. When processing completes successfully, either open the Prism web console and begin configuring the
cluster (see Configuring a New Cluster on page 23) or exit from Foundation.
When processing completes successfully, a "Cluster creation successful" message appears. This
means imaging both the hypervisor and Nutanix Controller VM across all the nodes in the cluster was
successful (when imaging was not skipped) and cluster creation was successful.
To configure the cluster, click the Prism link. This opens the Prism web console (login required using
the default "admin" username and password). See Configuring a New Cluster on page 23 for
initial cluster configuration steps.
To download the log files, click the Export Logs link. This packages all the log files into a
log_archive.tar file and allows you to download that file to your workstation.
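Once downloaded, the bundle can be unpacked on the workstation with standard tools; a minimal example (the directory name is arbitrary):

```shell
# Unpack the downloaded log bundle into its own directory and list the
# per-node log files it contains. Skips quietly if the archive has not
# been downloaded to the current directory yet.
if [ -f log_archive.tar ]; then
  mkdir -p foundation_logs
  tar -xf log_archive.tar -C foundation_logs
  ls -R foundation_logs
fi
```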
The Foundation service shuts down two hours after imaging. If you return to the cluster creation success
page after a long absence (or your terminal went to sleep) and the Export Logs link does not work, the
service has likely shut down.
Note: If nothing loads when you refresh the page (or it loads one of the configuration pages),
the web browser might have missed the hand-off between the node that starts imaging and the
first node imaged. This can happen because the web browser went to sleep, you closed the
browser, or you lost connectivity for some other reason. In this case, enter http://cvm_ip for
any Controller VM, which should open the Prism GUI if imaging has completed. If this does not
work, enter http://cvm_ip:8000/gui on each of the Controller VMs in the cluster until you see
the progress screen, from which you can continue monitoring progress.
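If you need to hunt for the node that is serving the progress page, a loop like the following can probe each Controller VM (probe_cvms and the sample addresses are illustrative assumptions; substitute your own CVM IPs):

```shell
# probe_cvms: check each given Controller VM for the Foundation
# progress page on port 8000 and report the ones that respond.
probe_cvms() {
  for ip in "$@"; do
    if curl -s -o /dev/null --max-time 2 "http://$ip:8000/gui"; then
      echo "Progress page reachable: http://$ip:8000/gui"
    fi
  done
}

# Example:
# probe_cvms 10.1.1.20 10.1.1.21 10.1.1.22
```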
3. If processing does not complete successfully, review and correct the problem(s), and then restart the
process.
If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 72. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
Note: If an imaging problem occurs, it typically appears when imaging the first node. In that
case Foundation will not attempt to image the other nodes, so only the first node will be in an
unstable state. Once the problem is resolved, the first node can be re-imaged and then the
other nodes imaged normally.
1. Verify the cluster has passed the latest Nutanix Cluster Check (NCC) tests.
a. Check the installed NCC version and update it if a later version is available (see the "Software and
Firmware Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to any
Controller VM in the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before proceeding. If you
are unable to resolve the issues, contact Nutanix support for assistance.
c. Configure NCC so that the cluster checks are run and emailed according to your desired frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer of at least 4 to specify how frequently NCC is run and results are
emailed. For example, to run NCC and email results every 12 hours, specify 12; or every 24 hours,
specify 24, and so on. For other commands related to automatically emailing NCC results, see
"Automatically Emailing NCC Results" in the Nutanix Cluster Check (NCC) Guide for your version of
NCC.
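To scan a saved NCC run for anything other than PASS, a small filter can help (summarize_ncc is a hypothetical helper; NCC's exact output format varies by version, so the matched keywords are an assumption):

```shell
# summarize_ncc: read NCC output on stdin and print only the lines
# that indicate a failing or warning check.
summarize_ncc() {
  grep -E 'FAIL|WARN|ERR' || echo "No failing checks found"
}

# Example:
# ncc health_checks run_all | tee ncc_run.log
# summarize_ncc < ncc_run.log
```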
3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).
4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote
support tunnel (see the "Controlling Remote Connections" section).
Caution: Failing to enable remote support prevents Nutanix support from directly addressing
cluster issues. Nutanix recommends that all customers allow email alerts at minimum because
it allows proactive support of customer issues.
5. If the site security policy allows Nutanix support to collect cluster status information, enable the Pulse
feature (see the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide more informed
and proactive help.
6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You also have the option to specify email recipients for specific alerts (see the "Configuring Alert
Policies" section).
7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster
elements, enable that feature (see the "Software and Firmware Upgrades" section).
Note: Allow access to the following through your firewall to ensure that automatic download of
updates can function:
*.compute-*.amazonaws.com:80
release-api.nutanix.com:80
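A quick way to confirm that the firewall permits these downloads is to probe the endpoints from a Controller VM (check_endpoint is an illustrative helper, not a Nutanix tool):

```shell
# check_endpoint: report whether an HTTP endpoint is reachable
# through the firewall within a short timeout.
check_endpoint() {
  if curl -sI --max-time 5 "http://$1" >/dev/null 2>&1; then
    echo "$1 reachable"
  else
    echo "$1 BLOCKED"
  fi
}

check_endpoint release-api.nutanix.com
```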
9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
vCenter: See the Nutanix vSphere Administration Guide.
SCVMM: See the Nutanix Hyper-V Administration Guide.
1. Download the required files, start the cluster creation GUI, and run discovery. Follow the steps in the
Discovering Nodes and Launching Foundation on page 8 and Launching Foundation on page 10
sections for complete information on how to download the files, start the cluster creation GUI, and set
the redundancy factor.
Note: You can configure only a redundancy factor of 2 for single-node replication target
clusters. However, you can configure a redundancy factor of 2 or 3 for primary clusters.
The step that is specific to creating the primary and single-node replication target clusters together is as
follows.
In the Discovery Nodes tab, you have the option to create the primary and single-node replication
target clusters in parallel. If Foundation detects any single node, it displays that information.
a. Select the nodes that you want to image for the primary cluster (three nodes or more), and select the
node that you want to image as a single-node replication target.
2. Define the cluster parameters; specify Controller VM, hypervisor, and (optionally) IPMI global network
addresses; and (optionally) enable health tests after the cluster is created. See Defining the Cluster on
page 13 for a detailed description of the fields that appear on this screen.
The steps that are specific to creating the primary and single-node replication target clusters together are
as follows.
b. (Optional) Type the virtual IP address for the primary and replication target cluster.
c. In the Backup Configuration field, if you do not deselect the Remote Site Names Same As the
Cluster Names check box, both the primary and replication target clusters are set up as remote
sites for each other after the cluster creation process is completed. This option is selected by
default. However, you can deselect the check box and provide the remote site names of the primary
cluster and the replication target cluster in the respective text boxes.
d. Perform the rest of the steps as described in the Defining the Cluster on page 13 topic.
3. Configure the discovered nodes. See Setting Up the Nodes on page 15 for a detailed description
of the fields that appear on this screen.
The steps that are specific to creating the primary and single-node replication target clusters together are
as follows.
a. Type the hypervisor hostname, Controller VM IP address, and hypervisor IP address of the primary
and single-node replication target cluster in the respective fields.
b. Perform the rest of the steps as described in the Setting Up the Nodes on page 15 topic.
4. Select the AOS and hypervisor images to use. See Selecting the Images on page 17 for more
information on how to complete this screen.
For a single-node replication target cluster, you do not need to import the images because the AHV
image is bundled with the AOS release.
5. Start the process and monitor progress as the nodes are imaged for the primary and single-node
replication target clusters. See Creating the Cluster on page 20 for more information.
6. After the cluster is created successfully, begin configuring the cluster. See Configuring a New Cluster on
page 23 for more information.
You can also create a standalone single-node replication target cluster and then add it as a replication
target (remote site) for the primary cluster. Perform the same procedure as described above, but fill in
the fields that are specific to creating a replication target cluster.
After creating the standalone single-node replication target cluster, you need to add it as a replication
target (remote site) for the primary cluster. For information on adding the single-node replication
target cluster as a remote site, see the Prism Web Console Guide.
Physically install the nodes at your site. For installing Nutanix hardware platforms, see the NX and SX
Series Hardware Administration and Reference for your model type. For installing hardware from any
other manufacturer, see that manufacturer's documentation.
Set up the installation environment (see Preparing Installation Environment on page 30).
Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will
get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in
the BIOS.
Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to time out during the
imaging process. Therefore, disable STP before starting Foundation.
Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents
virtual media, such as a CD-ROM. Such a device could conflict with the Foundation installation
when Foundation tries to mount the virtual CD-ROM hosting the installation ISO.
Have ready the appropriate global, node, and cluster parameter values needed for installation. The use
of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.
Note: If the Foundation VM IP address set previously was configured in one (typically public)
network environment and you are imaging the cluster on a different (typically private) network
in which the current address is no longer correct, repeat step 13 in Preparing a Workstation on
page 30 to configure a new static IP address for the Foundation VM.
If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging the
nodes. If the nodes contain only SEDs, you can enable encryption after you image the nodes. If the
nodes contain both regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs
at any time during the lifetime of the cluster.
For information about enabling and disabling encryption, see the "Data-at-Rest Encryption" chapter in
the Prism Web Console Guide.
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)
b. Connect the workstation and nodes to be imaged to the network (Setting Up the Network on
page 35).
2. Start the Foundation VM and configure global parameters (see Configuring Global Parameters on
page 42).
3. Configure the nodes to image (see Configuring Node Parameters on page 37).
4. Select the images to use (see Configuring Image Parameters on page 44).
5. [optional] Configure one or more clusters to create and assign nodes to the clusters (see Configuring
Cluster Parameters on page 46).
6. Start the imaging process and monitor progress (see Monitoring Progress on page 49).
7. If a problem occurs during configuration or imaging, evaluate and resolve the problem (see
Troubleshooting on page 71).
8. [optional] Clean up the Foundation environment after completing the installation (see Cleaning Up After
Installation on page 52).
Preparing a Workstation
A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the
following:
Note: You can perform these steps either before going to the installation site (if you use a portable
laptop) or at the site (if you can connect to the web).
1. Get a workstation (laptop or desktop computer) that you can use for the installation.
The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk
space (preferably SSD), and a physical (wired) network adapter.
2. Go to the Foundation download page in the Nutanix support portal (see Downloading Installation Files
on page 53) and download the following files to a temporary directory on the workstation.
Foundation_VM_OVF-version#.tar. This tar file includes the following files:
Note: Links to the VirtualBox files may not appear on the download page for every
Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)
nutanix_installer_package-version#.tar.gz. This is the tar file used for imaging the desired AOS
release. Go to the AOS (NOS) download page on the support portal to download this file.
If you want to run the diagnostics test after creating a cluster, download the diagnostics test file(s) for
your hypervisor from the Tools & Firmware download page on the support portal:
AHV: diagnostic.raw.img.gz
ESXi: diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf
Hyper-V: diagnostics_uvm.vhd.gz
Note: This assumes the tar command is available. If it is not, use the corresponding tar utility
for your environment.
4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.
See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).
Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment.
Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM
VirtualBox.
8. Click the File option of the main menu and then select Import Appliance from the pull-down list.
9. Find and select the Foundation_VM-version#.ovf file, and then click Next.
11. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.
12. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).
13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM,
install VirtualBox Guest Additions as follows:
a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. Click OK when prompted to Open Autorun Prompt and then click Run.
d. After the installation is complete, press the return key to close the VirtualBox Guest Additions
installation window.
f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.
Note: A reboot is necessary for the changes to take effect.
g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on
the VirtualBox window for the Foundation VM.
14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to
get an IP address from the DHCP server.
c. In the Select Action box in the terminal window, select Device Configuration.
Note: Selections in the terminal window can be made using the indicated keys only. (Mouse
clicks do not work.)
e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by
default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and
then click the OK button.
f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action
box.
This saves the configuration and closes the terminal window.
16. If you intend to install ESXi or Hyper-V, you must provide a supported ESXi or Hyper-V ISO image
(see Hypervisor ISO Images on page 56). Therefore, download the hypervisor ISO image into the
appropriate folder for that hypervisor.
Note: You do not have to provide an AHV image because Foundation includes an AHV tar file
in /home/nutanix/foundation/isos/hypervisor/kvm. However, if you want to install a different
version of AHV, download the AHV tar file from the Nutanix support portal (see Downloading
Installation Files on page 53).
17. If you intend to run the diagnostics test after the cluster is created, download the diagnostic test file(s)
into the appropriate folder for that hypervisor:
AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/kvm
ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf): /home/nutanix/foundation/
isos/diags/esx
Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/diags/hyperv
1. Connect the first 1 GbE network interface of each node to a 1 GbE Ethernet switch. The IPMI LAN
interfaces of the nodes must be in failover mode (factory default setting).
The exact location of the port depends on the model type. See the hardware documentation for your
model to determine the port location.
(Nutanix NX Series) The following figure illustrates the location of the network ports on the back of
an NX-3050 (middle RJ-45 interface).
Note: If you want to install AHV by using Foundation, connect both the IMM port and a 10
GbE port.
2. Connect the installation workstation (see Preparing a Workstation on page 30) to the same 1 GbE
switch as the nodes.
Updating Foundation
You can use the Foundation user interface to update Foundation. Updating Foundation to the latest version
is optional but recommended.
To update Foundation, do the following:
1. From the Foundation download page on the Nutanix support portal (https://portal.nutanix.com/#/page/
foundation/list), download the Foundation tarball to the workstation that you plan to use for imaging.
2. Open the Foundation user interface. See Configuring Node Parameters on page 37 if you are using
standalone Foundation. See Discovering Nodes and Launching Foundation on page 8 if you are using
CVM Foundation.
3. In the gear icon menu at the top-right corner of the Foundation user interface, click Update
Foundation.
a. Click Browse, and then browse to and select the Foundation tarball you downloaded.
b. Click Install.
Note: During this procedure, you assign IP addresses to the hypervisor host, the Controller
VMs, and the IPMI interfaces. Do not assign IP addresses from a subnet that overlaps with the
192.168.5.0/24 address space on the default VLAN. Nutanix uses an internal virtual switch to
manage network communications between the Controller VM and the hypervisor host. This switch
is associated with a private network on the default VLAN and uses the 192.168.5.0/24 address
space. If you want to use an overlapping subnet, make sure that you use a different VLAN.
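As an illustration of the note above (not a Foundation feature), Python's ipaddress module can check whether a planned subnet overlaps the internal 192.168.5.0/24 network:

```python
import ipaddress

# The internal virtual switch uses this network on the default VLAN.
INTERNAL_NET = ipaddress.ip_network("192.168.5.0/24")

def overlaps_internal(subnet_cidr):
    """Return True if the planned subnet overlaps the internal 192.168.5.0/24 network."""
    return ipaddress.ip_network(subnet_cidr).overlaps(INTERNAL_NET)

print(overlaps_internal("192.168.5.0/25"))  # True: falls inside the internal range
print(overlaps_internal("10.1.0.0/16"))     # False: safe to use on the default VLAN
```

A planned subnet that reports True here must be placed on a non-default VLAN, per the note.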
1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.
The Node Config screen appears. This screen allows you to configure discovered nodes and add other
(bare metal) nodes to be imaged. Upon opening this screen, Foundation searches the network for
unconfigured Nutanix nodes (that is, factory prepared nodes that are not part of a cluster) and then
displays information about the discovered blocks and nodes. The discovery process can take several
minutes if there are many nodes on the network. Wait for the discovery process to complete before
proceeding. The message "Searching for nodes. This may take a while" appears during discovery.
Note: Foundation discovers nodes on the same subnet as the Foundation VM only. Any nodes
to be imaged that reside on a different subnet must be added explicitly (see step 2). In addition,
Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a
preconfigured block with an existing cluster and you want Foundation to image those nodes,
you must first destroy the existing cluster in order for Foundation to discover those nodes.
If Foundation detects nodes that can form a mixed-hypervisor cluster running ESXi and AHV, with the
AHV node used for storage only, the Node Config screen displays a message prompting you to select
those nodes.
3. To image additional (bare metal) nodes, click the Add Blocks button.
A window appears to add a new block. Do the following in the indicated fields:
b. Nodes per Block: Enter the number of nodes to add in each block.
All added blocks get the same number of nodes. To add multiple blocks with differing nodes per
block, add the blocks as separate actions.
The window closes and the new blocks appear at the end of the discovered blocks table.
a. Block ID: Do nothing in this field because it is a unique identifier for the block that is assigned
automatically.
b. Position: Uncheck the boxes for any nodes you do not want to be imaged.
c. IPMI MAC Address: For any nodes you added in step 2, enter the MAC address of the IPMI
interface in this field.
Foundation requires that you provide the MAC address for nodes it has not discovered. (This field is
read-only for discovered nodes and displays a value of "N/A" for those nodes.) The MAC address of
the IPMI interface normally appears on a label on the back of each node. (Make sure you enter the
MAC address from the label that starts with "IPMI:", not the one that starts with "LAN:".) The MAC
address appears in the standard form of six two-digit hexadecimal numbers separated by colons, for
example 00:25:90:D9:01:98.
Caution: Any existing data on the node will be destroyed during imaging. If you are using
the add node option to re-image a previously used node, do not proceed until you have
saved all the data on the node that you want to keep.
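As a quick sanity check when transcribing MAC addresses from the node labels, a pattern like the following can validate the colon-separated format described above (an illustrative snippet, not part of Foundation):

```python
import re

# Standard form: six two-digit hexadecimal numbers separated by colons,
# for example 00:25:90:D9:01:98.
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def is_valid_mac(mac):
    """Return True if the string matches the colon-separated MAC format."""
    return bool(MAC_RE.match(mac))

print(is_valid_mac("00:25:90:D9:01:98"))  # True
print(is_valid_mac("00-25-90-D9-01-98"))  # False: hyphens, not colons
```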
d. IPMI IP, Hypervisor IP, and CVM IP: Do one of the following in each of these fields:
Note: If you are using a flat switch, the IPMI IP addresses must be on the same subnet as
the Foundation VM unless you configure multi-homing (see Configuring Global Parameters
on page 42).
Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.
Specify the IP addresses for each node manually.
Specify the starting address in the row immediately below the table header. The user interface
assigns the IP address to the first node, and it generates consecutive IP addresses for the
remaining nodes. Discovered nodes are sorted first by block ID and then by position, so
IP address assignments are sequential. You can modify these generated IP addresses as
necessary.
Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255
because such addresses are commonly reserved by network administrators.
You can specify a step increment or decrement value along with the starting IP address. IP
addresses for the second and subsequent nodes are generated by adding the value to the IP
address in the previous row. For example, if you enter 192.0.2.10 +3, nodes are assigned IP
addresses 192.0.2.10, 192.0.2.13, 192.0.2.16, and so on. If you enter 192.0.2.20 -2, nodes are
assigned IP addresses 192.0.2.20, 192.0.2.18, 192.0.2.16, and so on.
If you want the blocks to be in a particular order before assigning addresses, click Reorder
Blocks. In the Reorder Blocks dialog box, drag blocks to their desired positions, and then click
Done.
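The address generation behavior described above can be sketched as follows (an illustrative approximation, not Foundation's actual implementation):

```python
import ipaddress

def generate_ips(start, count, step=1):
    """Generate `count` addresses beginning at `start`, adding `step` each time,
    mirroring the starting-address-plus-increment behavior described above."""
    base = ipaddress.ip_address(start)
    return [str(base + i * step) for i in range(count)]

def has_reserved_ending(ip):
    """Foundation skips auto-assigning addresses whose last octet is 0, 1, 254, or 255,
    because such addresses are commonly reserved by network administrators."""
    return int(ip.split(".")[-1]) in (0, 1, 254, 255)

print(generate_ips("192.0.2.10", 3, +3))  # ['192.0.2.10', '192.0.2.13', '192.0.2.16']
print(generate_ips("192.0.2.20", 3, -2))  # ['192.0.2.20', '192.0.2.18', '192.0.2.16']
print(has_reserved_ending("192.0.2.254")) # True: excluded from automatic assignment
```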
f. NX-6035C: Check this box for any node that is a model NX-6035C.
Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs
are not allowed. NX-6035C nodes run AHV (and so will be imaged with AHV) regardless of what
hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 44).
g. If a platform that functions as a single-node replication target is discovered on the network and you
want to image the node, select its check box in the Backup Node column.
5. To check which IP addresses are active and reachable, click Ping Scan.
This performs a ping test on each IP address in the IPMI, hypervisor, and CVM IP address fields. An
icon appears next to each field to indicate the ping test result (returned response or no response) for
that node. This feature is most useful when imaging a previously unconfigured set of nodes: none
of the selected IPs should be pingable, and successful pings usually indicate a conflict with the existing
infrastructure.
Note: When re-imaging a configured set of nodes using the same network configuration, failure
to ping indicates a networking issue.
c. Username: Enter the IPMI user name. The default user name is ADMIN.
2. In the CVM and Hypervisor section, specify global information for the CVM and hypervisor network:
d. CVM Memory: Specify a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configuration, see CVM Memory and vCPU Configurations
(G4/Haswell/Ivy Bridge) on page 62.
This field is set initially to default. The default value varies by platform, and represents the
recommended value for the platform. The options are 16 GB, 24 GB, 32 GB, and 64 GB. Assigning
more than the default might be appropriate in certain situations.
3. Select the cluster's time zone from the Time Zone list. Time zone selection is optional.
4. If you are using a flat switch (no routing tables) for installation and require access to multiple subnets,
check the Multi-Homing box in the bottom section of the screen.
This section includes fields that you can use to assign the Foundation VM IP addresses in each of
the subnets configured in the upper half of this page, namely the subnet of the IPMI IP address and
the subnet of the CVMs and hypervisor hosts. Multi-homing enables the Foundation VM to configure
production IP addresses when connected to a flat switch. If you do not configure multi-homing, all IP
addresses must be either on the same subnet or routable.
In IPMI IP, enter an unused IP address in the same subnet as the IPMI interfaces of the Nutanix
hosts.
In CVM AND Hypervisor IP, enter an unused IP address in the same subnet as the CVMs and
hypervisor hosts.
5. Click Next.
1. If the prompt is displayed, click the type of cluster you want, and then click Continue.
One of the following occurs:
If you choose the multi-hypervisor option, an additional item named ESX+AHV is included in the
Hypervisor Type list and selected by default.
If you choose the single-hypervisor option, Foundation displays the default Image Selection screen
shown earlier.
2. From the Available Packages list, select the AOS installer package that you want to use.
Note: The list displays the packages that are available in the ~/foundation/nos directory
on the Foundation VM. If the desired AOS package does not appear in the list, or if the list is
empty, you must upload a package from your workstation to the ~/foundation/nos directory. To
upload the package to the directory, perform the next step.
c. Browse to and upload the package that you want to use, and then click Close.
The package you uploaded appears in the Available Packages list.
4. From the Hypervisor Type list, select the hypervisor type that you want to install. (For multi-hypervisor
clusters, the selected hypervisor is installed on the compute nodes and AHV is installed on the storage-
only nodes.)
Selecting Hyper-V as the hypervisor displays a SKU list.
Caution: To install Hyper-V, the nodes must have a 64 GB DOM. Attempts to install Hyper-
V on nodes with less DOM capacity will fail. See Hyper-V Installation Requirements on
page 65 for additional considerations when installing a Hyper-V cluster.
5. From the Available Packages list, select the hypervisor ISO image that you want to use.
If the hypervisor ISO that you want to use is not listed, click Manage above the Available Packages
list, upload the ISO file that you want to use, and then select the file from the Available Packages list,
as described earlier for uploading an AOS package.
6. From the list of AHV bundles for the storage-only nodes, backup nodes, or multi-cluster nodes (based
on the mix of selected nodes, the list is named Available AHV Packages (Storage Only Nodes),
Available AHV Packages (Backup Nodes), or just Available AHV Packages), select the AHV package
that you want to use.
If the AHV ISO that you want to use is not listed, click Manage above the Available Packages list,
upload the ISO file that you want to use, and then select the file from the Available Packages list, as
described earlier for uploading an AOS package.
7. [Hyper-V only] From the Choose Hyper-V SKU list, select the SKU for the Hyper-V version to use.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, and Datacenter
with GUI. This list appears only if you select Hyper-V.
1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster in the
Cluster Creation section at the top of the screen.
This section includes a table that is empty initially. A blank line appears in the table for the new cluster.
Enter the following information in the indicated fields:
c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL.
Enter a comma separated list to specify multiple server addresses in this field (and the next two
fields).
d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL.
You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable
or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start.
Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active
Directory domain controller.
e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL.
f. Max Redundancy Factor: Select a redundancy factor (2 or 3) for the cluster from the pull-down list.
This parameter specifies the number of times each piece of data is replicated in the cluster (either 2
or 3 copies). It sets how many simultaneous node failures the cluster can tolerate and the minimum
number of nodes required to support that protection.
Note: For single-node replication target clusters, you can only configure redundancy factor
of 2.
In the single-node replication target clusters, all the nodes are automatically added. You
need to enter just the name of the cluster.
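The redundancy-factor trade-off described above can be summarized in a small sketch (an illustration only; the minimum cluster sizes shown reflect standard Nutanix sizing guidance of 3 nodes for RF2 and 5 nodes for RF3):

```python
def rf_properties(rf):
    """Summarize what a given Max Redundancy Factor implies: the number of data
    copies, how many simultaneous node failures the cluster tolerates, and the
    minimum number of nodes needed to support that protection."""
    if rf not in (2, 3):
        raise ValueError("Max Redundancy Factor must be 2 or 3")
    return {
        "data_copies": rf,
        "node_failures_tolerated": rf - 1,
        "minimum_nodes": {2: 3, 3: 5}[rf],
    }

print(rf_properties(2))  # {'data_copies': 2, 'node_failures_tolerated': 1, 'minimum_nodes': 3}
```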
2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in
the Post Image Testing section.
Note: You must download the appropriate diagnostics test file(s) from the support portal to
run this test (see Preparing a Workstation on page 30).
Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of
tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/
logs/ncc directory.
3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes
field to be included in that cluster.
A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes
to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot
be assigned to more than one cluster.
Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to
add to an existing cluster, which can be done through the web console or nCLI at a later time.
4. When all settings are correct, click the Run Installation button at the top of the screen to start the
installation process (see Monitoring Progress on page 49).
Monitoring Progress
Before you begin: Complete Configuring Cluster Parameters on page 46 (or Configuring Image
Parameters on page 44 if you are not creating a cluster).
When all the global, node, and cluster settings are correct, do the following:
Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation
process stops before imaging any of the nodes. To correct a port configuration problem, see
Fixing IPMI Configuration Problems on page 71.
The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45
minutes. You can monitor overall progress by clicking the Log link at the top, which displays the
service.log contents in a separate tab or window. Click on the Log link for a node to display the log file
for that node in a separate tab or window.
3. If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 72. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
To remove the persistent information after an installation, go to a configuration screen and then click the
Reset Configuration option from the gear icon pull-down list in the upper right of the screen.
Selecting this option reinitializes the progress monitor, destroys the persisted configuration data, and
returns the Foundation environment to a fresh state.
1. Open a web browser and log in to the Nutanix Support portal: http://portal.nutanix.com.
2. Click Downloads from the main menu (at the top) and then select the desired page: AOS (NOS) to
download AOS files and Foundation to download Foundation files.
3. To download a Foundation installation bundle (see Foundation Files on page 54), go to the
Foundation page and do one (or more) of the following:
To download the Java applet used in discovery (see Creating a Cluster on page 7), click link to
jnlp from online bundle. This downloads nutanix_foundation_applet.jnlp and allows you to start
discovery immediately.
To download an offline bundle containing the Java applet, click offline bundle. This downloads an
installation bundle that can be taken to environments which do not allow Internet access.
To download the standalone Foundation bundle (see Imaging Bare Metal Nodes on page 29), click
Foundation_VM-version#.ovf.tar. (The exact file name varies by release.) This downloads an
installation bundle that includes OVF and VMDK files.
To download an installation bundle used to upgrade standalone Foundation, click
foundation-version#.tar.gz.
To download the current hypervisor ISO whitelist, click iso_whitelist.json.
Note: Use the filter option to display the files for a specific Foundation release.
4. To download an AOS release bundle, go to the AOS (NOS) page and click the button or link for the
desired release.
Clicking the Download version# button in the upper right of the screen downloads the latest
AOS release. You can download an earlier AOS release by clicking the appropriate Download
version# link under the ADDITIONAL RELEASES heading. The tar file to download is named
nutanix_installer_package-version#.tar.gz.
5. To download AHV, go to the Hypervisor Details page and do one of the following:
Download an AHV tar archive. AHV tar archives are named host-bundle-
el6.nutanix.version#.tar.gz, where version# is the AHV version.
Download an AHV tar archive if the tar archive that is included by default in the standalone
Foundation VM is not of the desired version. If you want to use a downloaded AHV tar archive to
image a node, you might also have to download the latest hypervisor ISO whitelist when using
Foundation.
In the Foundation VM, you can also generate an AHV ISO file from the downloaded tar archive and
use the ISO file to manually image nodes.
Download an AHV ISO file. AHV ISO files are named installer-el6.nutanix.version#.iso, where
version# is the AHV version.
Download the ISO file if you want to manually image each node.
For information about using the Foundation VM to image nodes, see Imaging Bare Metal Nodes on
page 29. For information about manually imaging nodes, see Appendix: Imaging a Node (Phoenix) on
page 80.
Foundation Files
The following table describes the files required to install Foundation. Use the latest Foundation version
available unless instructed by Nutanix customer support to use an earlier version.
The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.
iso_whitelist.json Fields
Name Description
The following are sample entries from the whitelist for an ESX and an AHV image.
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"unsupported_hardware": [],
"compatible_versions": {
"esx": ["^6\\.0.*"]
},
"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},
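A sketch of how such an entry might be consumed programmatically (illustrative only; the top-level keys appear to be MD5 checksums of the ISO images, and the version-comparison helper is a hypothetical simplification):

```python
import json

# Excerpt of iso_whitelist.json (ESX entry only); a raw string keeps the JSON escapes intact.
WHITELIST_TEXT = r"""
{
  "iso_whitelist": {
    "478e2c6f7a875dd3dacaaeb2b0b38228": {
      "min_foundation": "2.1",
      "hypervisor": "esx",
      "min_nos": null,
      "friendly_name": "ESX 6.0",
      "version": "6.0",
      "unsupported_hardware": [],
      "compatible_versions": {"esx": ["^6\\.0.*"]}
    }
  }
}
"""

def iso_supported(entry, foundation_version):
    """Hypothetical check: does this Foundation version meet the entry's min_foundation?"""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(foundation_version) >= as_tuple(entry["min_foundation"])

whitelist = json.loads(WHITELIST_TEXT)["iso_whitelist"]
entry = whitelist["478e2c6f7a875dd3dacaaeb2b0b38228"]
print(entry["friendly_name"])       # ESX 6.0
print(iso_supported(entry, "3.7"))  # True: Foundation 3.7 meets min_foundation 2.1
```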
You will need the following information during the cluster configuration:
Default gateway
Network mask
DNS server
NTP server
You should also check whether a proxy server is in place in the network. If so, you will need the IP address
and port number of that server when enabling Nutanix support on the cluster.
New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:
IPMI interface
Hypervisor host
Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller
VMs and hypervisor hosts can be on this network, which must be isolated and protected.
The following Nutanix network port diagrams show the ports that must be open for supported hypervisors.
The diagrams also show ports that must be opened for infrastructure services.
Note: Nutanix Engineering has determined that memory requirements for each Controller VM in
your cluster are likely to increase for subsequent releases. Nutanix recommends that you plan to
upgrade memory.
Platform Default
The following table shows the minimum amount of memory required for the Controller VM on each node for
platforms that do not follow the default. For the workload translation into models, see Platform Workload
Translation (G5/Broadwell) on page 62.
Note: To calculate the number of vCPUs for your model, use the number of physical cores per
socket in your model. The minimum number of vCPUs your Controller VM can have is eight and
the maximum is 12.
If your CPU has fewer than eight logical cores, allocate a maximum of 75 percent of the cores of a
single CPU to the Controller VM. For example, if your CPU has 6 cores, allocate 4 vCPUs.
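The guideline above can be expressed as a small sketch (illustrative only, not an official sizing tool):

```python
def cvm_vcpus(physical_cores_per_socket):
    """Apply the vCPU guideline: between 8 and 12 vCPUs based on physical cores
    per socket; with fewer than 8 cores, at most 75% of a single CPU's cores."""
    if physical_cores_per_socket < 8:
        return int(physical_cores_per_socket * 0.75)  # e.g. 6 cores -> 4 vCPUs
    return min(max(physical_cores_per_socket, 8), 12)

print(cvm_vcpus(6))   # 4: 75% of 6 cores, rounded down
print(cvm_vcpus(10))  # 10: within the 8-12 range
print(cvm_vcpus(16))  # 12: capped at the maximum
```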
Platform Default
The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.
Dell Platforms
XC730xd-24 32 16 8
XC6320-6AF
XC630-10AF
Lenovo Platforms
HX-3500 24 8
HX-5500
HX-7500
Note:
SSP requires a minimum of 24 GB of memory for the CVM. If the CVMs
already have 24 GB of memory, no additional memory is necessary to run
SSP.
If the CVMs have less than 24 GB of memory, increase the memory to 24 GB
to use SSP.
If the cluster is using any other features that require additional CVM memory,
add 4 GB for SSP in addition to the amount needed for the other features.
Requirements:
The primary domain controller version must be at least 2008 R2.
Note: If you have a Volume Shadow Copy Service (VSS)-based backup tool (for example,
Veeam), the functional level of Active Directory must be 2008 or higher.
Active Directory Web Services (ADWS) must be installed and running. By default, connections are
made over TCP port 9389, and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on with a domain
administrator account to a Windows host (other than the domain controller host) that is joined to the
same domain and has the RSAT-AD-PowerShell feature installed, and run the following PowerShell
command. If the command prints the name of the primary domain controller, then ADWS is installed and
the port is open.
> (Get-ADDomainController).Name
If the free version of Hyper-V is installed, the primary domain controller server must not block
PowerShell remoting.
To test this scenario, log on by using a domain administrator account in a Windows host and run the
following PowerShell command.
> Invoke-Command -ComputerName (Get-ADDomainController).Name -ScriptBlock {hostname}
If the command prints the name of the Active Directory server hostname, then PowerShell remoting to
the Active Directory server is not blocked.
SCVMM
Requirements:
The SCVMM version must be at least 2012 R2, and it must be installed on Windows Server 2012 or a
newer version.
Note: Currently, SCVMM 2016 and Windows Server 2016 are not supported.
Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM
setup manually by using the SCVMM user interface.
The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the
following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username
Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory
domain name.
The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to
False. To verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration |
FL RequireSecuritySignature}
If you are changing it from True to False, confirm that the policies on the SCVMM host have the
correct value. On the SCVMM host, run rsop.msc to review the resultant set of policy details, and
verify the value by navigating to Servername > Computer Configuration > Windows Settings >
Security Settings > Local Policies > Security Options: Policy Microsoft
Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix cluster.
In this case, it is important to ensure that the time remains synchronized between the Active
Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and the
Controller VMs set their NTP server as the Active Directory server, so it should be sufficient to
ensure that Active Directory domain is configured correctly for consistent time synchronization.
IP Addresses
Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same
subnet.
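As a worked example of the note above; the breakdown (one host, one Controller VM, and one IPMI address per node, plus two cluster-wide addresses) is our reading of the requirement:

```shell
# Worked example of the (3*N + 2) rule above.
required_ips() {
  local n=$1   # number of nodes
  echo $(( 3 * n + 2 ))
}

required_ips 4   # a four-node cluster needs 14 addresses in the subnet
```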
DNS Requirements
Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added
to the DNS server during domain joining.
The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which must be
added to the DNS server when the storage cluster is joined to the domain.
The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server when the failover cluster is created.
After the Hyper-V configuration, all names must resolve to an IP address from the Nutanix hosts, the
SCVMM server (if applicable), and any other host that needs access to the Nutanix storage, for example,
a host running the Hyper-V Manager.
Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by name, not the
external IP address. If you use the IP address, it directs all the I/O to a single node in the cluster and
thereby compromises performance and scalability.
Note: For external non-Nutanix hosts that need to access Nutanix SMB shares, see "Nutanix
SMB Shares Connection Requirements from Outside the Cluster" in the "Cluster Management"
chapter of Hyper-V Administration for Acropolis.
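The 15-character limit in the naming requirements above can be verified with a quick length check (the helper and sample names are illustrative):

```shell
# Sanity check of the 15-character name limit for hosts, the storage cluster,
# and the failover cluster, as described in the DNS requirements above.
name_ok() {
  if [ ${#1} -le 15 ]; then echo yes; else echo no; fi
}

name_ok "NTNX-Cluster-01"           # 15 characters: yes
name_ok "NTNX-FailoverCluster-01"   # 23 characters: no
```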
When applying Windows updates to the Nutanix hosts, restart the hosts one at a time,
ensuring that Nutanix services come up fully in the Controller VM of the restarted host before updating
the next host. You can accomplish this by using Cluster-Aware Updating with a Nutanix-provided
script, which can be plugged into the Cluster-Aware Update Manager as a pre-update script. This pre-
update script ensures that the Nutanix services go down on only one host at a time, ensuring availability
of storage throughout the update procedure. For more information about cluster-aware updating, see
"Installing Windows Updates with Cluster-Aware Updating" in the "Cluster Management" chapter of
Hyper-V Administration for Acropolis.
Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the
domain policies.
3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.
6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up window.
7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.
9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up
window.
10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network
gateway in the pop-up window.
11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup
mode.
1. Go to the Block & Node Config screen and review the problem IP address for the failed nodes (nodes
with a red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting information. This
can help you diagnose the problem. See the service.log file (in /home/nutanix/foundation/log) and
the individual node log files for more detailed information.
2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button
at the top of the screen.
3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.
4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at
the top of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those
nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case
you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP
Address on page 69).
1. See the individual log file for any failed nodes for information about the problem.
Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/
foundation.out[.timestamp]
Bare metal location for Foundation logs: /home/nutanix/foundation/log
3. Repeat the preceding steps as necessary to fix all the imaging errors.
If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a
time (see Appendix: Imaging a Node (Phoenix) on page 80).
Installation Issues
My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IP addresses are reachable from Foundation. (On rare occasions, the IPMI IP
assignment takes some time.) If you get a complaint about authentication, double-check your
password. If the problem persists, try resetting the BMC.
I have misconfigured the IP addresses in the Foundation configuration page. How long is the timeout for
the call back function, and is there a way I can avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up the terminal
in the Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start
Refresh the Foundation web page. If the nodes are still stuck, reboot them.
If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
I keep seeing the message "tar: Exiting with failure status due to previous errors. 'tar rf /home/
nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./
persisted_config.json' failed; error ignored."
[Hyper-V] I cannot reach the CVM console via ssh. How do I get to its console?
See KB article 1701 (https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008fJhCAI).
[ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it. See KB
article 1467 ( https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008dDxCAI).
I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to boot the VM
on VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see https://
forums.virtualbox.org/viewtopic.php?f=8&t=58767.
I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox typically
creates a new interface (eth1, then eth2, and so on) to accommodate your new settings. To fix this, run
the following commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now
This should reboot your machine and reset your adapter to eth0.
I have plugged in the Ethernet cables according to the directions and I can reach the IPMI interface, but
discovery is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive their IPv6
link-local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables are not plugged in.
(If they are, the Controller VMs might choose to direct their traffic over that interface and never reach
your Foundation VM.) If you are installing on a 10G switch, ensure that only the IPMI 10/100 port and
the 10G ports are connected.
Informational Topics
How can I determine whether a node was imaged with Foundation or standalone Phoenix?
A node imaged using standalone Phoenix will have the file /etc/nutanix/foundation_version in it,
but the contents will be unknown instead of a valid foundation version.
A node imaged using Foundation will have the file /etc/nutanix/foundation_version in it with a
valid foundation version.
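The check described above can be sketched as a small shell helper. The function takes the path to the node's /etc/nutanix/foundation_version file as a parameter (the helper name is ours; on a node you would pass the real path):

```shell
# Classify how a node was imaged from the contents of its
# foundation_version marker file, per the description above.
imaged_by() {
  local f=$1   # path to the foundation_version file
  if [ ! -f "$f" ]; then
    echo "no marker file"
  elif grep -q "unknown" "$f"; then
    echo "standalone Phoenix"
  else
    echo "Foundation"
  fi
}
```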
Does first boot work when run more than once?
First boot creates a failure marker file whenever it fails and a success marker file whenever it
succeeds. If the first boot script needs to be executed again, delete these marker files and manually
execute the script.
Do the first boot marker files contain anything?
They are just empty files.
How does the Foundation service start in the Controller VM-based and standalone versions?
Standalone: Manually start the Foundation service using foundation_service start (in the ~/
foundation/bin directory).
Controller VM-based: The Genesis service starts the Foundation service. If the
Foundation service is not already running, use the genesis restart command to start it. If
the Foundation service is already running, genesis restart does not restart Foundation; you must
manually kill the currently running Foundation service before executing genesis restart. The
genesis status command lists the currently running services along with their PIDs.
How do you validate that installation is complete and the node is ready with regard to first boot?
Check for the presence of a first boot success marker file. The marker file location varies by
hypervisor:
ESXi: /bootbank/first_boot.log
AHV: /root/.firstboot_success
Hyper-V: D:\markers\firstboot_success
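The marker locations above can be captured as a small lookup helper (the helper name is ours; the paths come from the text):

```shell
# Map a hypervisor to its first-boot success marker file.
firstboot_marker() {
  case "$1" in
    esxi)   echo "/bootbank/first_boot.log" ;;
    ahv)    echo "/root/.firstboot_success" ;;
    hyperv) echo 'D:\markers\firstboot_success' ;;
    *)      echo "unknown hypervisor"; return 1 ;;
  esac
}

firstboot_marker ahv   # prints /root/.firstboot_success
```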
If I have two clusters in my lab, can I use one to do bare-metal imaging on the other?
No. The tools and packages that are required for bare-metal imaging are typically not
present in the Controller VM.
How do you add a new node that needs to be imaged to an existing cluster?
If the cluster is running AOS 4.5 or later and the node also has 4.5 or later, you can use the "Expand
Cluster" option in the Prism web console. This option employs Foundation to image the new node (if
required) and then adds it to the existing cluster. You can also add the node through the nCLI: ncli
cluster add-node node-uuid=<uuid>. The UUID value can be found in the factory_config.json file on
the node.
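The UUID lookup for the nCLI command above can be sketched as follows. Assumptions: factory_config.json stores the UUID under a "node_uuid" key, and the file path is passed as a parameter for illustration (on a node it is typically /etc/nutanix/factory_config.json):

```shell
# Extract the node UUID from a factory_config.json file
# (key name "node_uuid" is an assumption about the file format).
node_uuid_from() {
  sed -n 's/.*"node_uuid" *: *"\([^"]*\)".*/\1/p' "$1"
}

# Example use on a node (path assumed):
#   ncli cluster add-node node-uuid="$(node_uuid_from /etc/nutanix/factory_config.json)"
```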
Is it required to supply IPMI details when using the Controller VM-based Foundation?
It is optional to provide IPMI details in Controller VM-based Foundation. If IPMI information is provided,
Foundation will try to configure the IPMI interface as well.
Is it valid to use a share to hold AOS installation bundles and hypervisor ISO files?
AOS installation bundles and hypervisor ISO files can be present anywhere, but there needs to be a
link (as appropriate) in ~/foundation/nos or ~/foundation/isos/hypervisor/[esx|kvm|hyperv]/ pointing to
them.
How can I determine if a particular (standalone) Foundation VM can image a given cluster?
Execute the following command on the Foundation VM and see whether it returns successfully (exit
status 0):
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password fru
If this command is successful, the Foundation VM can be used to image the node. This is the command
used by Foundation to get hardware details from the IPMI interface of the node. The exact tool used for
talking to the SMC IPMI interface is the following:
java -jar SMCIPMITool.jar ipmi_ip username password shell
If this command is able to open a shell, imaging will not fail because of an IPMI issue. Any other errors
like violating minimum requirements will be shown only after Foundation starts imaging the node.
The java command starts a shell with access to the remote IPMI interface. The vmwa command
mounts the ISO file virtually over IPMI. Foundation then opens another terminal and uses the
following commands to set the first boot device to CD ROM and restart the node.
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis bootdev cdrom
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis power reset
Note:
Phoenix is typically not necessary or recommended for imaging a single node. Use Phoenix to
image a single node only when using Prism or Foundation is not a viable option, such as in some
hardware replacement scenarios. The following are the recommended tools for imaging, even
when you want to image a single node:
Foundation. See Imaging Bare Metal Nodes on page 29.
Prism web console. This functionality requires AOS 4.5 or later. See the "Expanding a Cluster"
section in the Web Console Guide.
Both tools also install the hypervisor during the imaging process.
The following procedures describe how to obtain the ISO images you need and how to install a hypervisor
and AOS on a new or replacement node. The procedures also describe how to configure a hypervisor after
hypervisor boot disk replacement.
The procedures to perform depend on your hardware manufacturer. A summary is provided for each
hardware platform. Each summary outlines the tasks that you need to perform, and each step in the
summary includes a reference to the associated procedure. After performing a procedure that is referenced
in the summary, return to the summary and perform the next procedure.
Note: You can also include the hypervisor ISO in the Phoenix ISO image and perform all
installation tasks from the Phoenix ISO image.
Starting with Phoenix 3.5, the Phoenix ISO image that is available on the Nutanix support portal does not
include an AOS bundle, and you cannot use it to install AOS. You can use the Phoenix ISO image only to
configure the hypervisor after you replace the host boot disk and install a hypervisor on it.
To configure the hypervisor, Phoenix uses the AOS files that are available on the Controller VM boot
drive. Because Phoenix uses these files, you do not need to match the AOS release in the Phoenix
ISO image with the AOS release on the Controller VM boot drive. Excluding the AOS installer files
from the Phoenix ISO image also results in a smaller image file and a faster download from the
Nutanix support portal.
Even though Foundation is the recommended software for installing AOS, you can use Phoenix to install
both AOS and a hypervisor on a single node if you include the AOS installer files and the hypervisor ISO
file in the Phoenix ISO image. The only way to obtain a Phoenix ISO image that includes these installation
files is to download the files to a Foundation VM instance and generate a Phoenix ISO image.
If the node will be added to a cluster after imaging, the version of the AOS installer file you include in the
Phoenix ISO image must match the version of AOS installed on the cluster.
This procedure describes how to download the Phoenix ISO image that you can use to configure the
hypervisor after a host boot disk replacement procedure. The ISO image is available on the Nutanix
support portal.
To download the Phoenix ISO image from the Nutanix support portal, do the following:
1. Open a web browser and log in to the Nutanix support portal: http://portal.nutanix.com.
3. On the Phoenix page, click the name of the Phoenix ISO image in the Download column.
This procedure describes how to generate a Phoenix ISO image that you can use to install AOS, install a
hypervisor, or install both on a single node. To generate a Phoenix ISO image, you need a Foundation VM
instance. In the Phoenix ISO image, you can include AOS installer files, a hypervisor ISO image, or both.
Note: (For Nutanix Support and Systems Engineers) If you include the ISO image of a hypervisor
other than AHV, do not share or distribute the Phoenix ISO image or otherwise make it available to
other users in any form. Delete the ISO image after it has served its purpose.
To generate a Phoenix ISO image, do the following:
3. Navigate to the /home/nutanix/foundation/nos directory and unpack the compressed AOS tar archive.
$ gunzip nutanix_installer_package-version#.tar.gz
Replace AOS_PACKAGE with the full path to the AOS tar archive,
nutanix_installer_package-version#.tar.
Replace TEMP_DIR with the path to the temporary directory to use when generating the Phoenix ISO
image. Default: /home/nutanix/foundation/tmp/.
Replace KVM with the full path to the AHV ISO image or bundle.
Replace ESX with the full path to the ESXi ISO image.
Replace HYPERV with the full path to the Hyper-V ISO image.
The Phoenix ISO is created in the temporary directory, and is the file to use when Installing the
Controller VM and Hypervisor by Using Phoenix on page 100.
This task describes how to generate an ISO image from a particular version of AHV on a Foundation VM
instance. Use this procedure if an ISO image of the required version is not available on the Nutanix support
portal.
Note: The command for generating an AHV ISO is supported only in Foundation 3.1 or later.
Replace ahv_tar_archive with the full path to the AHV tar archive,
host_bundle_el6.nutanix.version#.tar.gz. The command generates an AHV ISO file named
kvm.version#.iso in the current directory.
Before you begin: If you are adding a new node, physically install that node at your site. See the NX and
SX Series Hardware Administration and Reference for your model type. Imaging a new or replacement
node can be done either through the system management interface, which requires a network connection,
or through a direct attached USB. These instructions assume you are installing through the system
management interface.
Note: Imaging a node using Phoenix is restricted to Nutanix sales engineers, support engineers,
and partners. Contact Nutanix customer support or your partner for help with this procedure.
1. Obtain the ISO images you need (see Installation ISO Images on page 80).
2. Attach the hypervisor ISO image (see Attaching an ISO Image (Nutanix NX Series Platforms) on
page 84).
4. Attach the Phoenix ISO image (see Attaching an ISO Image (Nutanix NX Series Platforms) on
page 84).
5. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM and
Hypervisor by Using Phoenix on page 100).
This procedure describes how to attach an ISO image on Nutanix NX series platforms.
Before you begin: Obtain the ISO image you need. See Installation ISO Images on page 80.
Caution: The node must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-V on a
node with less DOM capacity will fail.
To attach an ISO image on a Nutanix NX series platform, do the following:
1. Verify that you have access to the IPMI interface on the node.
a. Connect the IPMI port on that node to the network if it is not already connected.
A 1 GbE or 10 GbE port connection is not required for imaging the node.
b. Assign an IP address (static or DHCP) to the IPMI interface on the node if it is not already assigned.
To assign a static address, see Setting IPMI Static IP Address on page 69.
4. Select Console Redirection from the Remote Console drop-down list of the main menu, and then
click the Launch Console button.
5. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.
6. Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical Drive Type
field drop-down list, and click the Open Image button.
7. In the browse window, go to where the ISO image is located, select the image, and then click the Open
button.
8. Click the Plug In button and then the OK button to close the Virtual Storage window.
What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.
1. Click Continue at the installation screen and then accept the end user license agreement on the next
screen.
2. On the Select a Disk screen, select the SATADOM as the storage device, click Continue, and then click
OK in the confirmation window.
3. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.
Note: The root password must be nutanix/4u or the installation will fail.
5. Review the information on the Install Confirm screen and then click Install.
6. When the Installation Complete screen appears, go back to the Virtual Storage screen, click the Plug
Out button, and then return to the Installation Complete screen and click Reboot.
Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so
can cause the Controller VM install to fail.
2. At the Press any key to boot from CD or DVD prompt, press SHIFT+F10.
A command prompt appears.
c. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk
and then run the clean command:
select disk number
clean
d. Create and format a primary partition (size 1024 and file system fat32).
create partition primary size=1024
select partition 1
format fs=fat32 quick
e. Create and format a second primary partition (default size and file system ntfs).
create partition primary
select partition 2
format fs=ntfs quick
This displays a table of logical volumes and their associated drive letter, size, and file system type.
Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which
is the DOM install partition) is drive letter "C", go to the next step.
Otherwise, do one of the following:
If drive letter "C" is assigned currently to another volume, enter the following commands to
remove the current "C" drive volume and reassign "C" to the DOM install partition volume:
select volume cdrive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c
If drive letter "C" is not assigned currently, enter the following commands to assign "C" to the
DOM install partition volume:
select volume dom_install_volume_id#
assign letter=c
b. In the language selection screen that reappears, again just click the Next button.
c. In the install screen that reappears click the Install now button.
d. In the operating system screen, select Windows Server 2012 Datacenter (Server Core
Installation) and then click the Next button.
e. In the license terms screen, check the I accept the license terms box and then click the Next
button.
f. In the type of installation screen, select Custom: Install Windows only (advanced).
g. In the where to install screen, select Partition 2 (the NTFS partition) of the DOM disk you just
formatted and then click the Next button.
Ignore the warning about free space. The installation location is Drive 6 Partition 2 in the example.
5. After Windows boots up, press Ctrl-Alt-Delete and then log in as Administrator when prompted.
7. Install the Nutanix Controller VM and provision the hypervisor (see Installing the Controller VM and
Hypervisor by Using Phoenix on page 100).
Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the
process is continuing (if slowly).
3. After the system reboots, log back into the IPMI console, go to the CDROM&ISO tab in the Virtual
Storage window, select the AHV ISO file, and click the Plug Out button (and then the OK button) to
unmount the ISO (see Attaching an ISO Image (Nutanix NX Series Platforms) on page 84).
Before you begin: Physically prepare the node to be imaged (as needed). See the Lenovo Converged HX
Series Hardware Replacement Guide (https://support.lenovo.com/us/en/docs/um104436) for information
on how to replace the boot drive.
Note: Imaging a node using Phoenix is restricted to Lenovo sales engineers, support engineers,
and partners.
1. Obtain the ISO images you need (see Installation ISO Images on page 80).
2. Attach the hypervisor ISO image (see Attaching an ISO Image (Lenovo Converged HX Series
Platforms) on page 91).
4. Attach the Phoenix ISO image (see Attaching an ISO Image (Lenovo Converged HX Series Platforms)
on page 91).
5. Install the Controller VM and provision the hypervisor (see Installing the Controller VM and Hypervisor
by Using Phoenix on page 100).
1. Verify you have access to the IMM interface for the node.
a. Connect the IMM port on that node to the network if it is not already connected.
A data network connection is not required for imaging the node.
b. Assign an IP address (static or DHCP) to the IMM interface on the node if it is not already assigned.
Refer to the Lenovo Converged HX series documentation for instructions on setting the IMM IP
address.
4. Select Remote Control then click the Start remote control in single-user mode button.
Click Continue in the Security Warning dialog box, and then click Run in the Java permission dialog
box that appears.
7. Click the Add Image button, go to where the ISO image is located, select that file, and then click Open.
9. Select Tools > Power > Reboot (if the system is running) or On (if the system is turned off).
The system starts using the selected image.
What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.
Before you begin: Attach the hypervisor installation media (see Attaching an ISO Image (Lenovo
Converged HX Series Platforms) on page 91).
1. On the boot menu, select the installer and wait for it to load.
2. Press Enter to continue then F11 to accept the end user license agreement.
3. On the Select a Disk screen, select the appropriate storage device, click Continue, and then click OK in
the confirmation window.
For products with Broadwell processors, select the approximately 60 GiB DOM as the storage
device.
4. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.
Note: The root password must be nutanix/4u or the installation will fail.
6. Review the information on the Install Confirm screen and then click Install.
The installation begins and a dynamic progress bar appears.
7. When the Installation Complete screen appears, in the Video Viewer window click Virtual Media >
Unmount All and accept the warning message.
Note: Do not assign host IP addresses before installing the Nutanix Controller VM. Doing so
can cause the Controller VM install to fail.
3. After the system reboots, log back into the IMM console and unmount the AHV ISO. In the Video Viewer
window click Virtual Media > Unmount All and accept the warning message.
Before you begin: Physically prepare the node to be imaged (as needed). See the Cisco UCS
documentation for information on how to replace the boot drive.
Note: Imaging a node using Phoenix is restricted to Nutanix sales engineers, support engineers,
and partners.
1. Obtain the ISO images you need (see Installation ISO Images on page 80).
4. Attach the Phoenix ISO image (see Attaching an ISO Image on Cisco UCS Platforms on page 95).
5. Install the Controller VM and provision the hypervisor (see Installing the Controller VM and Hypervisor
by Using Phoenix on page 100).
This procedure describes how to attach an ISO image on Cisco UCS platforms.
Before you begin: Download the hypervisor ISO file to a workstation and connect the workstation to the
network.
To install a hypervisor on a new or replacement drive on a server running in standalone mode, do the
following:
1. From the workstation, log in to the CIMC web console by entering the CIMC IP address of the server
and the user name and password that you specified during the initial configuration of the CIMC.
2. In the toolbar, click Launch KVM Console to launch the KVM Console.
b. Click Activate Virtual Devices to start a vMedia session that allows you to attach the ISO image file
from your local computer. If you have not allowed unsecured connections, the CIMC web console
prompts you to accept the session. Click Accept this session.
c. Click Map CD/DVD in Virtual Media and select the ISO image file.
d. Start the server and press F6 when prompted to open the boot menu screen. On the boot menu
screen, choose Cisco vKVM-Mapped vDVD1.22 and press Enter.
The server boots from the ISO image.
What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.
2. Start the KVM Console on the server on which you replaced the hypervisor boot drive.
b. Click Activate Virtual Devices to start a vMedia session that allows you to attach the ISO image file
from your local computer. If you have not allowed unsecured connections, the CIMC web console
prompts you to accept the session. Click Accept this session.
c. Click Map CD/DVD in Virtual Media and select the ISO image file.
d. Start the server and press F6 when prompted to open the boot menu screen. On the boot menu
screen, choose Cisco vKVM-Mapped vDVD1.22 and press Enter.
The server boots from the ISO image.
What to do next: If you booted to the hypervisor ISO image, complete installation by following the
installation steps for the hypervisor.
If you booted to the Phoenix ISO image, configure the hypervisor and, if the node does not have the
Controller VM installed, install the Controller VM.
1. On the boot menu, select the installer and wait for it to load.
2. Press Enter to continue, and then press F11 to accept the end-user license agreement.
3. On the Select a Disk screen, select the appropriate storage device, click Continue, and then click OK in
the confirmation window.
For C240 servers, select the 120 GB internal SSD as the storage device.
4. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.
6. Review the information on the Install Confirm screen and then click Install.
The installation begins and a dynamic progress bar appears.
7. When the Installation Complete screen appears, in the Video Viewer window, click Virtual Media >
Unmount All and accept the warning message.
Use the automatic installation option on C240 servers and the graphical installation option on C220
servers.
3. After the system reboots, from the KVM console, unmount the AHV ISO.
What to do next:
1. Attach the Controller VM ISO image, install the Nutanix Controller VM, and configure the hypervisor
(see Attaching an ISO Image on Cisco UCS Platforms on page 95 and Installing the Controller VM
and Hypervisor by Using Phoenix on page 100).
2. Configure the network on the hypervisor host.
For information about configuring the network on an AHV host, see "Changing the IP Address of an
Acropolis Host" in the "Host Network Management" chapter of the AHV Administration Guide.
For information about configuring the network on an ESXi host, see "Configuring Host Networking
(ESXi)" in the "vSphere Networking" chapter of the vSphere Administration Guide for Acropolis (Using
vSphere Web Client).
1. Select Graphical Installer on the welcome screen before the countdown for automatic installation
expires.
3. On the partition layout screen, modify the existing partition format to make the Cisco FlexFlash
drives available.
e. At the prompt, confirm that you want to destroy the existing partition table.
Use the print command to verify that the partition table was created.
4. On the language and keyboard selection screens, select the desired language and keyboard layout,
respectively.
5. On the storage devices screen, click Basic Storage Devices and click Next.
6. If you are prompted to choose between upgrading an existing installation and starting a fresh
installation, click Fresh Installation and then click Next.
7. On the next two screens, specify a host name and the time zone.
9. On the installation type screen, click Create Custom Layout and then click Next.
12. In the Add Partition dialog box, specify the following details:
- Mount Point: Enter a forward slash (/).
- File System Type: Select ext4.
- Allowable Drives: Select sda. Make sure that none of the other check boxes are selected.
14. Select sda from the First BIOS drive list in the Boot loader device dialog box. Click Next.
Installation begins. Progress is indicated by a progress bar.
16. After the system restarts, log on to the hypervisor host by using SSH. Log on with the user name
root and password nutanix/4u.
17. Specify /root as the directory to which log messages must be sent.
root@ahv# puppet apply --logdest /root/post-install-puppet.log -e "include kvm"
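After the Puppet run completes, it can be worth scanning the log file for failed resources before proceeding. The following is a minimal sketch, not part of the official procedure; it writes a sample log so the example is self-contained, whereas on the host you would point LOG at the /root/post-install-puppet.log file produced by the command above.

```shell
# Self-contained sketch: create a sample log, then scan it the way you
# would scan /root/post-install-puppet.log on the AHV host.
LOG=/tmp/post-install-puppet.log
printf 'Notice: Compiled catalog\nNotice: Applied catalog in 4.2 seconds\n' > "$LOG"
# Puppet prefixes failed resources with "Error:"; no matches means the
# run logged no errors.
if grep -qiE '^error' "$LOG"; then
    echo "puppet run reported errors; review $LOG"
else
    echo "no errors logged"
fi
```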
What to do next:
1. Attach the Controller VM ISO image, install the Nutanix Controller VM, and configure the hypervisor
(see Attaching an ISO Image on Cisco UCS Platforms on page 95 and Installing the Controller VM
and Hypervisor by Using Phoenix on page 100).
2. Configure the network on the hypervisor host.
For information about configuring the network on an AHV host, see "Changing the IP Address of an
Acropolis Host" in the "Host Network Management" chapter of the AHV Administration Guide.
For information about configuring the network on an ESXi host, see "Configuring Host Networking
(ESXi)" in the "vSphere Networking" chapter of the vSphere Administration Guide for Acropolis (Using
vSphere Web Client).
a. Review the values in the upper eight fields to verify they are correct, or update them if necessary.
Only the Block ID, Node Serial, Node Position, and Node Cluster ID fields can be edited in this
screen. Node Position must be A for all single-node blocks.
Note: Even if you want to install only the Controller VM, you must select both options.
Selecting Clean CVM by itself will fail. Also, for this step to work, you must use a Phoenix
ISO image that includes the requisite installer files. That is, if you want to install AOS,
the Phoenix ISO image must include the AOS installer files, and if you want to install
the hypervisor, the Phoenix ISO image must include the hypervisor ISO. For information
about Phoenix ISO files, see Phoenix ISO Image on page 81.
If you are imaging a replacement hypervisor boot drive or node (using existing drives), select
Configure Hypervisor only. This option configures the hypervisor without installing a new
Controller VM.
If you are instructed to do so by Nutanix customer support, select Repair CVM. This option is for
repairing certain problem conditions. Ignore this option unless Nutanix customer support instructs
you to select it.
c. When all the fields are correct, click the Start button.
On ESXi and AHV, the node restarts with the new image, additional configuration tasks run, and then
the host restarts again. Wait until this stage completes (typically 15-30 minutes depending on the
hypervisor) before accessing the node. No additional steps need to be performed.
This causes a reboot, after which the firstboot script runs and the host reboots two more times.
This process can take substantial time (possibly 15 minutes) without displaying any progress
indicators. To monitor progress, log in to the VM after the initial reboot and enter the command
notepad C:\Program Files\Nutanix\Logs\first_boot.log. This displays a (static) snapshot of the
log file. Repeat this command as needed to see an updated version of the log file.
Note: A d:\firstboot_fail file appears if this process fails. If that file is not present, the
process is continuing (if slowly).