Nutanix Best Practices
Copyright
Copyright 2018 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.
Docker Containers on AHV
Contents
1. Executive Summary................................................................................ 5
2. Introduction..............................................................................................6
2.1. Audience........................................................................................................................ 6
2.2. Purpose..........................................................................................................................6
8. Conclusion............................................................................................. 39
Appendix......................................................................................................................... 41
Nutanix Resources..............................................................................................................41
Additional Docker Ecosystem Components........................................................................41
Docker Best Practices........................................................................................................ 41
Further Research................................................................................................................ 41
About the Author.................................................................................................................42
About Nutanix......................................................................................................................42
List of Figures................................................................................................................43
List of Tables................................................................................................................. 44
1. Executive Summary
Nutanix supports Docker’s rapid application container “build, run, and deploy” paradigm with
a single virtualization platform defined by consumer-grade simplicity and elastic scale. The
Nutanix architecture moves beyond legacy three-tier virtualization infrastructures via automatic
scaling as well as pooling and tiering locally attached storage. The Nutanix driver for Docker
Machine allows cloud-like provisioning of virtual machines (VMs) that are then enabled to run
a Docker Engine immediately after startup. Rightsized, virtualized environments like those built
on Nutanix invisible infrastructure dispense with the need to manage large, oversized, individual
white box server estates. Hosting containers in VMs on Nutanix Acropolis allows for container
migration, persistent storage for containers using the Nutanix Docker Volume driver, and network
and security configuration. The Acropolis Distributed Storage Fabric (DSF) easily handles
mixed-workload environments that include both legacy apps (such as Oracle or Microsoft SQL
Server) and containerized applications. The DSF also ensures data colocation for VMs hosting
containers and continual service for those containers from the most performant SSD-backed
storage tiers.
Nutanix facilitates DevOps-style workflows with rapid VM snapshot and cloning technologies.
These features enable the "provision-manage-retire" cycles required across any deployment
scenario. Administrators can manage these cycles either programmatically with a REST API or
with Prism, a single, intuitive, browser-based GUI. Prism provides rich analytics to allow full stack
monitoring and alerting; single-click, no-downtime upgrades of the Nutanix appliance software
(AOS); VM-centric snapshot and backup; and technologies that facilitate the transfer between
hybrid cloud infrastructures. The Docker on Nutanix solution supports rapid deployment and
scale out, making it an ideal platform for any distributed or microservices architecture, from initial
development and QA through production.
2. Introduction
2.1. Audience
This best practices guide is a part of the Nutanix Solutions Library and is intended to provide
an overview of the combination of the native Nutanix hypervisor, AHV, with Docker container
technologies. It is intended for IT architects and administrators as a technical introduction to the
solution.
2.2. Purpose
This document covers the following subject areas:
• Overview of the Nutanix solution.
• Overview of Docker container technology.
• Guidelines for installing and optimizing the Docker container stack on AHV.
• The benefits of implementing the Docker container stack on AHV.
Table 1: Document Version History

Version Number | Published    | Notes
1.0            | January 2016 | Original publication.
1.1            | April 2016   | Updated platform overview.
1.2            | October 2016 | Updated for AOS 4.7.
1.3            | March 2017   | Updated the Provisioning Dockerized Virtual Machines section.
2.0            | April 2018   | Updated the Overview, Running Docker on AHV, and Docker Storage Considerations sections.
⁃ The ability to run the final image across hybrid cloud environments is the key feature of
application assembly and deployment that supports both continuous development and
integration. The Nutanix App Mobility Fabric allows all stakeholders in the DevOps delivery
chain to locate applications based on a requirement for either elasticity or predictability.
The App Mobility Fabric reduces associated OPEX costs as organizations move toward
adaptive infrastructures while using a more agile software approach to compress release
cycle times.
• Tiered storage pool and data locality.
⁃ By maintaining VM working sets on the most performant SSD-backed storage tiers, the
Nutanix platform can deliver high-performance I/O across all container-based application
workloads. Nutanix CVMs provide data locality using ILM. Reads are satisfied from memory
or SSD; writes go to SSD and then drain to spinning disks. All operations are performed
with a preference for data coming from local storage, on the same physical system where
the VM accessing it is located.
• Data services provide clone and snapshot functionality.
⁃ Nutanix Acropolis delivers a variety of VM-granular service levels with backups, efficient
disaster recovery, and nondisruptive upgrades. These features improve application
availability by providing nearly instantaneous crash-consistent backups via snapshot
capabilities. Snapshots also enable engineering and QA to deploy high-performance test
environments quickly with complete cloned copies of production datasets.
• Reduced infrastructure operational complexity.
⁃ Reduce administrative overhead by hundreds of hours per year by practically eliminating
the need for storage management, using intuitive, centralized, VM-centric management and
REST APIs or PowerShell toolkits.
• Deep performance insight.
⁃ Simplify performance troubleshooting, resolving problems in minutes to hours versus days
and weeks with end-to-end detailed visibility into application VMs and infrastructure.
• These instructions are valid for Docker EE for CentOS and for Docker EE for Linux, which
includes access to Docker EE for all Linux distributions. To install Docker EE, you need to
know the Docker EE repository URL associated with your trial or subscription. To get this
information:
⁃ Navigate to https://store.docker.com/my-content.
⁃ The list on your content page includes each subscription or trial you have access to. Click
Setup for Docker Enterprise Edition for CentOS.
⁃ Copy the URL from the field labeled Copy and paste this URL to download your
Edition.
⁃ Use this URL instead of the placeholder text <DOCKER-EE-URL>.
• On production systems using devicemapper, you must use direct-lvm mode, which requires
one or more dedicated block devices. Fast storage such as SSD is recommended.
• Update existing yum packages.
$ sudo yum update yum
• Store your Docker EE repository URL in a yum variable in /etc/yum/vars/. This command
relies on the variable you stored in the previous step.
$ sudo -E sh -c 'echo "$DOCKERURL/centos" > /etc/yum/vars/dockerurl'
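The yum variable step above presupposes that the repository URL is already exported in the shell; that earlier step is not shown in this excerpt. A minimal sketch, deliberately leaving the <DOCKER-EE-URL> placeholder unresolved:

```shell
# Hypothetical preceding step (not in this excerpt): export the repository
# URL so that `sudo -E` can pass it through to the subshell.
# <DOCKER-EE-URL> is the placeholder from the Docker Store page.
export DOCKERURL='<DOCKER-EE-URL>'

# The sudo -E command above then writes "$DOCKERURL/centos" into
# /etc/yum/vars/dockerurl; this is the value it would store:
echo "$DOCKERURL/centos"
```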
• Ensure that Docker starts when you boot the guest VM.
$ sudo systemctl enable docker
• If you want to avoid having to use sudo, create a user that has the appropriate sudo
permissions and add it to the docker group. Note that membership in the docker group has root equivalency.
⁃ Log on as that user.
⁃ Create the Docker group and add your user.
sudo groupadd docker
sudo usermod -aG docker <your_username>
• Consult the Docker website for post-installation tasks that you may need to complete.
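After logging back in, it is worth confirming that the new group membership is active before dropping sudo. A small sketch; the helper function is illustrative, not part of any Docker tooling:

```shell
# Illustrative helper: check whether a space-separated group list
# (as printed by `id -nG`) contains the docker group.
in_docker_group() {
  case " $1 " in
    *" docker "*) return 0 ;;
    *)            return 1 ;;
  esac
}

# Live usage after logging back in:
#   in_docker_group "$(id -nG)" && docker run --rm hello-world
in_docker_group "wheel docker users" && echo "ok: docker without sudo"
```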
The Nutanix driver within Docker Machine provisions VMs in a cloud-like fashion. These VMs
conform to the AHV format, and on boot they have the necessary Docker Engine installed and
enabled. We can then deploy containers on the Dockerized VM.
Figure 4: Docker Machine Can Now Provision Dockerized VMs on Nutanix AHV
For more details on how to download, configure, and install the required software, please refer to
the Acropolis Container Services documentation on the Nutanix support portal.
To provision Dockerized VMs on AHV:
• Ensure that you are running Acropolis Operating System (AOS) version 4.7 or later.
• Provide the cluster with a data services IP address—either via the Prism GUI or nCLI.
ncli> cluster set-external-ip-address external-ip-address=10.68.64.254
• Download the Docker Machine driver for Nutanix from the Nutanix support portal to your
laptop or workstation. We currently support Windows, Linux, and Mac/OSX as laptop or
workstation operating systems.
ls -l /usr/local/bin/*nutanix
-rwxr-xr-x 1 root root /usr/local/bin/docker-machine-driver-nutanix
• Download the Docker host VM image and use the Prism image service to upload it to the
container named ImageStore.
<acropolis> image.create Docker-Host-VM-Image
source_url=http://download.nutanix.com/utils/container-host-image-20160628.qcow2
container=ImageStore image_type=kIsoImage
• Create a Docker host VM from your laptop using the Docker CLI.
docker-machine create -d nutanix --nutanix-username admin \
--nutanix-password 'nutanix/4u' \
--nutanix-endpoint '10.68.64.55:9440' \
--nutanix-vm-image Docker-Host-VM-Image \
--nutanix-vm-network 'VM-Network' dbhost01
The Acropolis Container Services documentation presents additional options for the Docker
Machine when using the Nutanix driver. Alternatively, use the built-in command line help.
docker-machine create -d nutanix [Enter]
This command returns Nutanix driver-related options that allow you to create VMs with the
desired RAM (--nutanix-vm-mem) and CPU or core count (--nutanix-vm-cpus/--nutanix-vm-cores)
using the Docker Machine CLI.
docker-machine create -d nutanix --nutanix-username admin \
--nutanix-password 'nutanix/4u' \
--nutanix-endpoint '10.68.64.55:9440' \
--nutanix-vm-image Docker-Host-VM-Image \
--nutanix-vm-cpus 1 \
--nutanix-vm-cores 8 \
--nutanix-vm-mem 16384 \
--nutanix-vm-network 'VM-Network' dbhost01
Bear in mind that you can also update the VMs you’ve created via the Prism GUI. The following
screenshot shows VMs created via the Docker Machine CLI, which you can administer like any
other VMs.
Stateless Applications
In the example Dockerfile below for building an nginx application container, we have a series of
instructions. Each instruction, when run using docker build and subsequently committed, builds
the various layers of our Docker image.
Note: If the build fails at any stage, a usable image is still available.
FROM centos:centos7
MAINTAINER NGINX Docker Maintainers "docker-maint@nginx.com"
RUN yum install -y wget
# Download certificate and key from the customer portal (https://cs.nginx.com)
# and copy to the build context
ADD nginx-repo.crt /etc/ssl/nginx/
ADD nginx-repo.key /etc/ssl/nginx/
# Get other files required for installation
RUN wget -q -O /etc/ssl/nginx/CA.crt https://cs.nginx.com/static/files/CA.crt
RUN wget -q -O /etc/yum.repos.d/nginx-plus-7.repo \
    https://cs.nginx.com/static/files/nginx-plus-7.repo
# Install NGINX Plus
RUN yum install -y nginx-plus
# forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
The first instruction, FROM, tells us which base image to use for the container. In this example
we are using CentOS 7 as the base operating system. The MAINTAINER instruction gives us the
image’s author and their contact details. The ADD instruction copies files from the build context
or directory to the image. Invocations of RUN perform various commands on the container; here
we are installing the required packages. The CMD instruction tells us how to run the binary and
what options to enable. The EXPOSE instruction specifies which port the Docker container
listens on. Note that this instruction does not itself open or publish the port; the port is mapped
to a port on the underlying host only when the container actually runs.
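Because the host-side port is assigned only at run time (when publishing with -P), you typically look it up afterward, for example with docker port. A small sketch; the parsing helper is illustrative:

```shell
# `docker port <container> 80` prints a binding such as "0.0.0.0:32783".
# This illustrative helper strips everything up to the last colon to
# recover just the host port number.
host_port() {
  printf '%s\n' "${1##*:}"
}

# Live usage:  binding=$(docker port mynginxplus 80)
binding="0.0.0.0:32783"   # sample value matching the curl example in this section
host_port "$binding"      # prints the host port to pass to curl
```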
With the Dockerfile, nginx-repo.crt, and nginx-repo.key files in the same build context or
directory, run the following command to create a Docker image called nginxplus:
# docker build --no-cache -t nginxplus .
Note: The --no-cache option tells Docker to build the image from scratch and
ensures that the latest version of NGINX Plus is installed.
Next, we can run a container from that image. We give the container a specific name (--
name=mynginxplus), map the required ports (-P), and detach the container in order for it to be a
long-running process (-d):
# docker run --name mynginxplus -P -d nginxplus
1cc87a4623b0f10d35fd3df0a4961277efe631ff857c15f906cdd013adb005ed
The command returns the new container's ID. Next, obtain the mapping for port 80 (http)
between the container and the Docker host (for example, with docker ps), then verify that the
container is running as expected:
# curl http://localhost:32783
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Persistent Applications
Here is an example of an application running in a container that uses data volumes to persist
data:
FROM centos:centos6
MAINTAINER "ray hassan" ray@nutanix.com
RUN groupadd mongod && useradd mongod -g mongod
COPY mongodb3.2-repo /etc/yum.repos.d/mongodb.repo
RUN yum update -y yum && yum install -y mongodb-org
RUN mkdir -p /data/db && chown -R mongod:mongod /data/db
VOLUME ["/data/db"]
WORKDIR /data
EXPOSE 27017
CMD ["/usr/bin/mongod"]
One additional instruction is VOLUME, which creates a directory within the container that
bypasses the union filesystem and is accessible directly via the Docker host. We are building
an image from this Dockerfile. Ensure that the file containing the mongodb package repository
information (mongodb3.2-repo) is in the build context or same directory along with the Dockerfile:
# docker build -t mongodb-image .
To verify that an application is actually writing data to a data volume, the following command
finds the data volume mount point and lists its contents:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
7b6697b27b60 mongodb-image "/usr/bin/mongod" 3 seconds ago Up 2 seconds
PORTS NAMES
0.0.0.0:32770->27017/tcp mongo-dev3
# docker inspect 7b6697b27b60
<snip>
"Mounts": [
{
"Name": "7e77660537529923fc0f64cdf688b834a99aa84b96620a29b907cc51b59e562f",
"Source": "/var/lib/docker/
volumes/7e77660537529923fc0f64cdf688b834a99aa84b96620a29b907cc51b59e562f/_data",
"Destination": "/data/db",
"Driver": "local",
"Mode": "",
"RW": true
}
<snip>
The “Source” entry above shows the location of the application data with respect to the data
volume mount point. We can list the contents to show that the application is writing files to this
location:
# ls /var/lib/docker/
volumes/7e77660537529923fc0f64cdf688b834a99aa84b96620a29b907cc51b59e562f/_data
collection-0--3079353261234304984.wt journal sizeStorer.wt WiredTigerLAS.wt
WiredTiger.wt diagnostic.data mdb_catalog.wt storage.bson WiredTiger.lock
index-1--3079353261234304984.wt mongod.lock WiredTiger WiredTiger.turtle
In Docker version 1.9, data volumes became first-class citizens, so you can manage, create,
delete, and inspect them via their own separate command syntax:
# docker volume create --name=dblog
dblog
# docker volume create --name=dbdata
dbdata
The following command shows how container creation uses such Docker data volumes:
# docker run --name mongo-dev -P -d -v dblog:/var/log/mongodb \
-v dbdata:/data/db mongodb-image
5dbd6aa827657c52a817472dc409cb6ca6347e4afa4f76533187190242936d44
# docker ps
5dbd6aa82765 mongodb-image "/usr/bin/mongod" About an hour ago Up About an
hour 0.0.0.0:32768->27017/tcp mongo-dev
We can then inspect the volume itself directly, rather than look at it via the container data as we
did previously:
# docker volume inspect dbdata
[
{
"Name": "dbdata",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/dbdata/_data"
}
]
# ls /var/lib/docker/volumes/dbdata/_data
collection-0-6588193751102076236.wt index-1-6588193751102076236.wt _mdb_catalog.wt
sizeStorer.wt WiredTiger WiredTiger.lock WiredTiger.wt diagnostic.data journal
mongod.lock storage.bson WiredTigerLAS.wt WiredTiger.turtle
As our final example of how Docker containers can allow application persistence, we map a data
volume (/data/db) to a Docker host directory (/opt/mongodb)—this could be a mountpoint for a
logical volume if needed:
# docker run --name mongo-dev2 -P -d -v /opt/mongodb:/data/db mongodb-image
d3f30ba7ec265978108c57a4ac6c4a335b14eda35b513c0f40a801023dbfa407
# cd /opt/mongodb
# ls
collection-0--721115936743755716.wt index-1--721115936743755716.wt _mdb_catalog.wt
sizeStorer.wt WiredTiger WiredTiger.lock WiredTiger.wt diagnostic.data journal
mongod.lock storage.bson WiredTigerLAS.wt WiredTiger.turtle
As a test for data persistence, obtain the port mapping for a running container created from any
one of the above methods:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED
5dbd6aa82765 mongodb-image "/usr/bin/mongod" 7 minutes ago
STATUS PORTS NAMES
Up 7 minutes 0.0.0.0:32768->27017/tcp mongo-dev
Then connect to the running MongoDB instance via a mongo shell session:
# mongo localhost:32768
>
> use test
switched to db test
> for (var i=1; i<=25; i++) {
... db.test.insert( {x:i} )
... }
WriteResult({ "nInserted" : 1 })
> db.test.find()
{ "_id" : ObjectId("567309f3e88d05a7fa4d503b"), "x" : 1 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d503c"), "x" : 2 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d503d"), "x" : 3 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d503e"), "x" : 4 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d503f"), "x" : 5 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d5040"), "x" : 6 }
… <snipped for brevity>
{ "_id" : ObjectId("567309f3e88d05a7fa4d504f"), "x" : 21 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d5050"), "x" : 22 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d5051"), "x" : 23 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d5052"), "x" : 24 }
{ "_id" : ObjectId("567309f3e88d05a7fa4d5053"), "x" : 25 }
Next, restart your container and verify that the data written to the database files is still available
and has not been erased or overwritten:
# docker container restart 5dbd6aa82765
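A quick way to confirm persistence after the restart is to reconnect and count the documents inserted earlier. A sketch; the sample value stands in for the live mongo query, and the port mapping is assumed unchanged:

```shell
# In a live session, query the document count over the same mapped port:
#   actual=$(mongo --quiet localhost:32768 --eval 'db.getSiblingDB("test").test.count()')
# Here a sample value stands in for the live query result.
expected=25   # the number of documents inserted before the restart
actual=25

[ "$actual" -eq "$expected" ] && echo "data survived restart"
```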
Docker Compose
The native or local data volumes described above have a single point of failure (SPOF): if the
host running the container goes away for any reason, then we lose access to the data. By
implementing persistent data volumes via the Nutanix Docker volume plugin, we provide storage
that can exist independently of both the container and host runtime. This setup is analogous in
many ways to the independent volumes used in the public cloud.
The following examples show how to use Docker Compose to deploy a stateful container that
uses the Nutanix Docker volume driver to create self-contained storage that can run regardless
of what happens to the host.
• Install Docker Compose on your Dockerized VM.
• Create a YAML file to use with Docker Compose.
# cat mongo.yaml
mongodb:
  image: mongo
  volume_driver: nutanix
  volumes:
    - dbdata02:/data/db
  ports:
    - 27017:27017
  command: [/usr/bin/mongod]
  net: host
Note: Here, we are pulling the latest mongodb image: from the Docker Hub. We
request the Nutanix volume_driver. We name the resulting volumes: dbdata02
and map to the directory /data/db within the container. We are mapping the ports:
address in the container to match the port address on the host, and we specify
using host networking via the net: directive. For more information, see the Docker
Compose file reference.
• Start the service in the background:
# docker-compose -f ./mongo.yaml up -d
The -d option detaches the container from the terminal. Without it, the screen floods with the
service's startup and log messages, so leave this option out when debugging.
• Check that the container service is running using the following command:
# docker-compose -f ./mongo.yaml ps
• You can also create Docker volumes outside of the container runtime and simply use them
when the container instance is started. In this example, we create Docker volumes via the
command line, again using the Nutanix driver.
# docker volume create -d nutanix --name dbdata03
dbdata03
• The following command confirms that the volume has been created using the Nutanix-
supplied driver:
# docker volume inspect dbdata03
[
{
"Name": "dbdata03",
"Driver": "nutanix",
"Mountpoint": "/var/lib/nutanix/dbdata03/dbdata03",
"Labels": {}
}
]
• To use a precreated volume with Docker Compose, we must make some changes to our
original Compose file syntax.
# cat mongo.yaml
version: '2'
services:
  db:
    image: mongo
    volumes:
      - dbdata03:/data/db
    ports:
      - 27017:27017
    command: [/usr/bin/mongod]
    network_mode: "host"
volumes:
  dbdata03:
    external: true
• In the above Compose file we use version '2' syntax to specify that we are using an external
volume. We have broken the container out into its own services stanza (labeled db:), so there
is a separate top-level stanza for the volumes. In all other respects this Compose file is the
same as the one discussed earlier. We still use the underlying host networking via the
network_mode: directive as well.
• Invoke the above file using the same Docker Compose syntax we used before.
docker-compose -f ./mongo.yaml up -d
• You can also monitor, update, and manage the VGs created to support your Docker volumes
via Prism.
Allocate on Demand
As new data is written to a container backed by the devicemapper backend’s thin-provisioned
volumes, that data needs to have a new block allocated and mapped into the container. In
direct-lvm configuration, the default block size is 64 KB, so a write smaller than that still allocates
a 64 KB block. You could see a performance impact if your containers perform lots of small
writes.
Copy-on-Write Procedure
When overwriting data in a container, the devicemapper copy-on-write procedure copies data
from the image (or image snapshot) to the container (or container snapshot). This process
has the same 64 KB granularity as the block allocation scheme described above. Hence, for
example, updates of 32 KB to a 1 GB file result in copying a 64 KB block up to the container layer
or snapshot of that layer. Conversely, with filesystem-based storage drivers (AUFS, btrfs, ZFS),
the same copy-on-write procedure would copy the entire 1 GB file up to the container layer.
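The difference is easy to quantify. A back-of-envelope sketch of the copy-up cost for the 32 KB update described above:

```shell
# devicemapper copies one 64 KB block up to the container layer, while a
# filesystem-based driver (AUFS, btrfs, ZFS) copies the entire 1 GB file.
file_kb=$(( 1024 * 1024 ))  # 1 GB expressed in KB
dm_copy_kb=64               # one devicemapper block

# Ratio of data copied by a filesystem driver vs. devicemapper:
echo "filesystem drivers copy $(( file_kb / dm_copy_kb ))x more data"
```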
• Create additional Nutanix vDisks via the Acropolis management CLI (aCLI) on a CVM. We use
these disks to build logical volumes. Here, docker-directlvm is the VM acting as the Docker
host.
nutanix@CVM:~$ for i in {1..6}; do acli vm.disk_create docker-directlvm \
create_size=50g container=DEFAULT-CTR; done
DiskCreate: pending
DiskCreate: complete
DiskCreate: pending
DiskCreate: complete
DiskCreate: pending
DiskCreate: complete
DiskCreate: pending
DiskCreate: complete
DiskCreate: pending
DiskCreate: complete
DiskCreate: pending
DiskCreate: complete
# Verify vDisks exposed to Guest VM using lsscsi
[root@docker-directlvm ~]# lsscsi
[0:0:0:0] cd/dvd QEMU QEMU DVD-ROM 1.5. /dev/sr0
[2:0:0:0] disk NUTANIX VDISK 0 /dev/sda
[2:0:1:0] disk NUTANIX VDISK 0 /dev/sdb
[2:0:2:0] disk NUTANIX VDISK 0 /dev/sdc
[2:0:3:0] disk NUTANIX VDISK 0 /dev/sdd
[2:0:4:0] disk NUTANIX VDISK 0 /dev/sde
[2:0:5:0] disk NUTANIX VDISK 0 /dev/sdf
[2:0:6:0] disk NUTANIX VDISK 0 /dev/sdg
• Create a logical volume management (LVM) physical volume (PV) on all previously created
vDisks.
$ sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
• Create a volume group (VG) using the PV created in the previous step.
$ sudo vgcreate direct-lvm /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
• Create a striped LV called data in the direct-lvm VG that is limited to 95 percent of the VG
space—note that the Nutanix vDisks added to the VG are already redundant.
$ sudo lvcreate -i6 -n data direct-lvm -l 95%VG
• Create another striped LV called metadata in the direct-lvm VG limited to 5 percent of the VG
space.
$ sudo lvcreate -i6 -n metadata direct-lvm -l 5%VG
$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log
data direct-lvm -wi-a----- 265.41g
metadata direct-lvm -wi-a----- 13.97g
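The data and metadata LVs are not consumed directly; in Docker's documented direct-lvm setup they are combined into a thin pool that the daemon is then pointed at. A hedged sketch of those follow-on steps, which are not shown in this excerpt (names follow the LVs created above; exact options may vary by Docker version):

```shell
# Convert the two LVs into a thin pool (per Docker's devicemapper
# direct-lvm documentation; chunk size and zeroing settings are the
# commonly documented defaults, shown here as an assumption):
sudo lvconvert -y --zero n -c 512K \
  --thinpool direct-lvm/data --poolmetadata direct-lvm/metadata

# Then point the Docker daemon at the pool, e.g. in /etc/docker/daemon.json.
# Note the doubled hyphens in the device-mapper name for the "direct-lvm" VG:
#   {
#     "storage-driver": "devicemapper",
#     "storage-opts": ["dm.thinpooldev=/dev/mapper/direct--lvm-data"]
#   }
```

This is configuration of a privileged host, so it is shown as a sketch rather than something to run verbatim; consult the Docker devicemapper documentation for your engine version before applying it.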
To integrate a Docker Engine deployment with the Acropolis Distributed Storage Fabric (DSF),
Nutanix provides a volume driver plugin. This plugin is deployed in a separate container in a
sidekick or sidecar pattern.
The Nutanix volume driver plugin implements an HTTP server that the Docker Engine daemon
can discover. This server exposes a set of RPCs issued as HTTP POST requests with JSON
payloads. By registering itself with the Docker Engine, the plugin serves all requests from other
containers for volume creation, mount, removal, and so on. In order to create a persistent
volume, the driver calls out to the DSF and provides the requesting container with a mount point
or writeable path made from iSCSI block storage exposed via a Nutanix VG.
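The exchange is plain HTTP plus JSON. As an illustration (simplified; the endpoint and payload shapes follow the Docker volume plugin API, and the volume name is just an example), a create request and its success response look roughly like this:

```shell
# Request the Docker daemon would POST to the plugin's socket when a
# container asks for a volume:
cat <<'EOF'
POST /VolumeDriver.Create HTTP/1.1
Content-Type: application/json

{"Name": "dbdata02", "Opts": {}}
EOF

# On success, the driver replies with an empty error field:
echo '{"Err": ""}'
```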
Deploying the Nutanix Volume Driver and the Nutanix Docker Volume Plugin
The Nutanix Volume Driver received Docker’s Enterprise Technology Partner (ETP) status. To
use the Nutanix Volume Driver for Docker containers, ensure that you have created a Nutanix
storage container either in Prism or via the nCLI. Nutanix recommends creating a separate
container for ISOs and disk images, and running actual VMs within another container.
• Connect to your Dockerized VM, for example:
docker-machine ssh dbhost01
• Notes:
⁃ The "prism ip address" is usually the cluster virtual IP (VIP) address, which preserves
availability if a Controller VM fails.
⁃ If you need to redeploy the volume driver plugin for any reason, you can pull it directly from
the Docker Hub.
docker pull orionapps/vol-plugin
Figure 10: The Nutanix Volume Plugin, via DSF, Creates a Writable Path on the Host
Nutanix also offers a more recent volume driver based on the Docker Volume API v2. Along
with installation and configuration instructions, this driver is available in the Docker Store as
the Nutanix Docker Volume Plugin (DVP). You can use the DVP with or without the
docker-machine driver. We recommend using the DVP (which has been "Docker Certified") when
running containers that require persistent storage on Nutanix.
8. Conclusion
The Nutanix Enterprise Cloud Platform and AHV provide a powerful foundation for the proven
capabilities of the Docker technology stack. Nutanix streamlines and enhances both storage
infrastructure configuration and overall deployment. The Docker on Nutanix solution provides a
single, distributed platform that allows deployments to scale linearly, in a modular fashion, with
predictable performance. Nutanix eliminates the need for complex SAN or NAS environments.
The Nutanix Docker integration components for Docker Machine and persistent storage ensure
security and isolation by running your containers in VMs, thereby sandboxing your installed base.
Nutanix is now a recognized target for Docker Machine, and each VM is enabled with the Docker
Engine for immediate container deployment. Stateful containers running in VMs preserve data
persistence and mobility, as the Acropolis DSF becomes the source for all Docker data volumes.
Persistent storage on the DSF allows data volumes to exist independent of the container’s
runtime specification, so you can use them as first-class resources. Ensuring that you always
access VM working sets from the most performant storage tiers achieves high-performance
I/O throughput. A platform-wide hot tier maintains a low I/O latency and response profile for your
microservice container deployments.
Managing Docker host VMs via Prism streamlines snapshots and clones to create test and QA
environments with production-style data quickly and easily. Nutanix Prism also provides cluster
health overviews, full stack performance analytics, hardware and software alerting, storage
utilization, and automated remote support.
Together, Nutanix and AHV provide a zero-touch invisible infrastructure, allowing you to get the
most out of critical business applications when deploying them as containerized microservices
via Dockerized VMs. In this way, IT departments spend less time in the datacenter and more time
innovating to help the business.
Appendix
Nutanix Resources
1. The Intersection of Docker, DevOps, and Nutanix
2. Containers Enter the Acropolis Data Fabric
3. Nutanix Acropolis 4.7: Container Support
4. Stateful Containers on Nutanix Part 1
5. Stateful Containers on Nutanix Part 2: MongoDB
6. Stateful Containers on Nutanix Part 3: MongoDB Replica Set
Further Research
1. https://developers.redhat.com/blog/2014/09/30/overview-storage-scalability-docker/
2. https://blog.docker.com/2015/02/orchestrating-docker-with-machine-swarm-and-compose/
3. https://en.wikipedia.org/wiki/Microservices
About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix Enterprise Cloud OS leverages web-scale engineering and
consumer-grade design to natively converge compute, virtualization, and storage into a resilient,
software-defined solution with rich machine intelligence. The result is predictable performance,
cloud-like infrastructure consumption, robust security, and seamless application mobility for a
broad range of enterprise applications. Learn more at www.nutanix.com or follow us on Twitter
@nutanix.
List of Figures
Figure 1: Nutanix Enterprise Cloud................................................................................... 7
Figure 4: Docker Machine Can Now Provision Dockerized VMs on Nutanix AHV...........17
Figure 10: The Nutanix Volume Plugin, via DSF, Creates a Writable Path on the Host...38
List of Tables
Table 1: Document Version History.................................................................................. 6