Virtualization Blog
Information and announcements from Program Managers, Product Managers, Developers and Testers
in the Microsoft Virtualization team.
This document walks through the steps for setting up a containerized NGINX server and using it to load
balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for
setting up a three-node cluster and running two Docker services on that cluster; by completing this exercise,
you will become familiar with the general workflow required to use swarm mode and to load balance across
Windows Container endpoints using an NGINX load balancer.
The figure below provides a visualization of this three-node setup. Two of the nodes, the Swarm Manager
node and the Swarm Worker node, together form a two-node swarm mode cluster, running two Docker web
services, S1 and S2. A third node (the NGINX Host in the figure) is used to host a containerized NGINX
load balancer, and the load balancer is configured to route traffic across the container endpoints for the two
container services. The figure includes example IP addresses and port numbers for the two swarm hosts and
for each of the six container endpoints running on the hosts.
https://blogs.technet.microsoft.com/virtualization/2017/04/19/usenginxtoloadbalanceacrossyourdockerswarmcluster/
6/15/2017 | Use NGINX to load balance across your Docker Swarm cluster | Virtualization Blog
System requirements
Three* or more computer systems running either Windows 10 Creators Update or Windows Server 2016
with all of the latest updates*, set up as a container host (see the topic, Windows Containers on Windows 10 or
Windows Containers on Windows Server, for more details on how to get started with Docker containers on
Windows 10).+
Open ports: Swarm mode requires that the following ports be available on each host.
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
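Before initializing the swarm, it can be handy to confirm that none of these ports are already taken on a host. Below is a minimal, hypothetical Python sketch of such a check; the port list assumes Docker's default swarm ports, and `port_in_use` is an illustrative helper, not part of Docker.

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port.

    connect_ex returns 0 when the TCP connection succeeds (port occupied);
    any other code means nothing answered on that port.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Ports swarm mode needs free on every host (assumes Docker defaults)
SWARM_PORTS = [2377, 7946, 4789]

for p in SWARM_PORTS:
    print(p, "in use" if port_in_use(p) else "free")
```

Run this on each prospective swarm host; any port reported "in use" needs to be freed or the conflicting service moved before `docker swarm init` or `docker swarm join` will work reliably.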
Note: There is currently a known bug on Windows which prevents containers from accessing their hosts using localhost or even the
host's external IP address (for more background on this, see Caveats and Gotchas below). This
means that in order to access Docker services via their exposed ports on the swarm hosts, the
NGINX load balancer must not reside on the same host as any of the service container instances.
Put another way, if you use only two nodes to complete this exercise, one of them will need to be
dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container
host (i.e. you will have a single-host swarm cluster, plus a host dedicated to hosting your containerized
NGINX load balancer).
Note: To avoid having to transfer your container image later, complete the instructions in this section on
the container host that you intend to use for your NGINX load balancer.
NGINX is available for download from nginx.org. An NGINX container image can be built using a simple
Dockerfile that installs NGINX onto a Windows base container image and configures the container to run
an NGINX executable. The content of such a Dockerfile is shown below.
FROM microsoft/windowsservercore
RUN powershell Invoke-WebRequest http://nginx.org/download/nginx-1.10.3.zip -UseBasicParsing -OutFile c:\nginx.zip
RUN powershell Expand-Archive c:\nginx.zip -Dest c:\nginx
WORKDIR c:\nginx\nginx-1.10.3
ENTRYPOINT powershell .\nginx.exe
Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\nginx) on
your NGINX container host machine. From that location, build the image using the following command:
C:\temp\nginx>docker build -t nginx .
Now the image should appear with the rest of the Docker images on your system (check using the docker
images command).
C:\temp>docker run -it -p 80:80 nginx
Next, open a new cmd window and use the docker ps command to see that the container is running.
Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.
C:\temp>docker exec <CONTAINERID> ipconfig
For example, your container's IP address may be 172.17.176.155, as in the example output shown below.
Next, open a browser on your container host and put your container's IP address in the address bar. You
should see a confirmation page, indicating that NGINX is successfully running in your container.
Note: Complete the instructions in this section on one of the container hosts that you intend to use as a
swarm host.
FROM microsoft/windowsservercore
RUN dism.exe /online /enable-feature /all /featurename:iis-webserver /NoRestart
Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\iis) on one of
the host machines that you plan to use as a swarm node. From that location, build the image using the
following command:
C:\temp\iis>docker build -t iisweb .
C:\temp>docker run -it -p 80:80 iisweb
Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container
is the value of <CONTAINERID> in the next command.
C:\temp>docker exec <CONTAINERID> ipconfig
Now open a browser on your container host and put your container's IP address in the address bar. You
should see a confirmation page, indicating that the IIS web server role is successfully running in your
container.
First, on your host machine create a simple file called index_1.html. In the file, type any text. For example, your
index_1.html file might look like this:
Now create a second file, index_2.html. Again, type any text into the file. For example, your index_2.html file
might look like this:
Now we'll use these HTML documents to make two custom web service images.
If the iisweb container instance that you just built is not still running, run a new one, then get the ID of the
container using:
C:\temp>docker ps
Now, copy your index_1.html file from your host onto the IIS container instance that is running, using the
following command:
C:\temp>docker cp index_1.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
Next, stop and commit the container in its current state. This will create a container image for the first web
service. Let's call this first image web_1.
C:\>docker stop <CONTAINERID>
C:\>docker commit <CONTAINERID> web_1
Now, start the container again and repeat the previous steps to create a second web service image, this time
using your index_2.html file. Do this using the following commands:
C:\>docker start <CONTAINERID>
C:\>docker cp index_2.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
C:\>docker stop <CONTAINERID>
C:\>docker commit <CONTAINERID> web_2
You have now created images for two unique web services; if you view the Docker images on your host by
running docker images, you should see that you have two new container images: web_1 and web_2.
Option 1: Repeat the steps above to build the web_1 and web_2 containers on your second host.
Option 2 [recommended]: Push the images to your repository on Docker Hub then pull them onto
additional hosts.
Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all
of your machines, and to share your images with others. Visit the following Docker resources to
get started with pushing/pulling images with Docker Hub:
Create a Docker Hub account and repository
Tag, push and pull your image
Note: The containerized NGINX load balancer cannot run on the same host as
any container endpoints for which it is performing load balancing; the host with your nginx
container image must be reserved for load balancing only. For more background on this, see
Caveats and Gotchas below.
First, run the following command from any machine that you intend to use as a swarm host. The machine that
you use to execute this command will become a manager node for your swarm cluster.
C:\temp>docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377
Now run the following command from each of the other host machines that you intend to use as swarm
nodes, joining them to the swarm as worker nodes.
Replace <MANAGERIPADDRESS> with the public IP address of your host machine (i.e. the value of
<HOSTIPADDRESS> that you used to initialize the swarm from the manager node)
C:\temp>docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377
Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the
following command from your manager node:
C:\temp>docker node ls
C:\temp>docker stop <CONTAINERID>
C:\temp>docker rm <CONTAINERID>
Next, we're going to use the web_1 and web_2 container images that we created in previous steps of this
exercise to deploy two container services to our swarm cluster.
To create the services, run the following commands from your swarm manager node:
C:\>docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1
C:\>docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2
You should now have two services running, s1 and s2. You can view their status by running the following
command from your swarm manager node:
C:\>docker service ls
Additionally, you can view information on the container instances that define a specific service with the
following commands, where <SERVICENAME> is replaced with the name of the service you are inspecting (for
example, s1 or s2):
# List all services
C:\>docker service ls
# List info for a specific service
C:\>docker service ps <SERVICENAME>
C:\>docker service scale <SERVICENAME>=<REPLICAS>
# e.g. docker service scale s1=3
Of course, load balancers are generally used to balance traffic across instances of a single service,
not multiple services. For the purpose of clarity, this example uses two services so that the function
of the load balancer can be easily seen; because the two services serve different HTML
content, we'll clearly see how the load balancer is distributing requests between them.
You will need to adjust the file by adding the information for your hosts and container instances. The
template nginx.conf file provided contains the following section:
upstream appcluster {
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
}
To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the
config file. You will have an entry for each container endpoint that defines your web services. For any given
container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that
container is running. The value of <HOSTPORT> will be the port on the container host upon which the
container endpoint has been published.
When the services s1 and s2 were defined in the previous step of this exercise, the --publish
mode=host,target=80 parameter was included. This parameter specified that the container
instances for the services should be exposed via published ports on the container hosts. More
specifically, by including --publish mode=host,target=80 in the service definitions, each service
was configured to be exposed on port 80 of each of its container endpoints, as well as on a set of
automatically defined ports on the swarm hosts (i.e. one port for each container running on a
given host).
First, identify the host IPs and published ports for your
container endpoints
Before you can adjust your nginx.conf file, you must obtain the required information for the container
endpoints that define your services. To do this, run the following commands from your
swarm manager node:
C:\>docker service ps s1
C:\>docker service ps s2
The above commands will return details on every container instance running for each of your services, across
all of your swarm hosts.
One column of the output, the ports column, includes port information for each host of the form
*:<HOSTPORT>->80/tcp. The value of <HOSTPORT> will be different for each container instance, as
each container is published on its own host port.
Another column, the node column, will tell you which machine the container is running on. This is
how you will identify the host IP information for each endpoint.
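The ports values can also be parsed mechanically rather than copied by hand. The sketch below assumes the `*:<HOSTPORT>-><TARGETPORT>/tcp` format shown above; `host_port` is an illustrative helper, not a Docker tool.

```python
import re

def host_port(ports_field: str) -> int:
    """Extract the published host port from a 'docker service ps'
    ports value such as '*:21858->80/tcp'."""
    m = re.search(r"\*:(\d+)->\d+/tcp", ports_field)
    if not m:
        raise ValueError(f"unexpected ports format: {ports_field!r}")
    return int(m.group(1))

print(host_port("*:21858->80/tcp"))  # 21858
```

Feeding each row's ports column through a helper like this yields the <HOSTPORT> values you will need for the nginx.conf upstream entries.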
You now have the port information and node for each container endpoint. Next, use that information to
populate the upstream field of your nginx.conf file; for each endpoint, add a server to the upstream field of
the file, replacing the <HOSTIP> field with the IP address of each node (if you don't have this, run ipconfig on each host
machine to obtain it) and the <HOSTPORT> field with the corresponding host port.
For example, if you have two swarm hosts, with IP addresses 172.17.0.10 and 172.17.0.11, each running three
containers, your list of servers will end up looking something like this:
upstream appcluster {
    server 172.17.0.10:21858;
    server 172.17.0.11:64199;
    server 172.17.0.10:15463;
    server 172.17.0.11:56049;
    server 172.17.0.11:35953;
    server 172.17.0.10:47364;
}
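Since the upstream block is purely mechanical, it can be generated from a list of (host IP, host port) pairs instead of edited by hand. A minimal sketch, assuming the appcluster upstream name used in the template above (`upstream_block` is a hypothetical helper):

```python
def upstream_block(endpoints, name="appcluster"):
    """Render an nginx upstream block from (host_ip, host_port) pairs."""
    lines = [f"upstream {name} {{"]
    lines += [f"    server {ip}:{port};" for ip, port in endpoints]
    lines.append("}")
    return "\n".join(lines)

# Example endpoints matching the sample configuration above
endpoints = [("172.17.0.10", 21858), ("172.17.0.11", 64199)]
print(upstream_block(endpoints))
```

Regenerating the block this way is handy because, without routing mesh, the published ports change whenever the services are rescheduled or rescaled (see Caveats and Gotchas below).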
Once you have changed your nginx.conf file, save it. Next, we'll copy it from your host to the NGINX container
image itself.
C:\temp>docker run -it -p 80:80 nginx
C:\temp>docker exec <CONTAINERID> ipconfig
With the container running, use the following command to replace the default nginx.conf file with the file that
you just configured (run the following command from the directory in which you saved your adjusted version
of the nginx.conf on the host machine):
C:\temp>docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf
Now use the following command to reload the NGINX server running within your container:
C:\temp>docker exec <CONTAINERID> nginx.exe -s reload
If accessing from the NGINX host machine: Type the IP address of the nginx container running on the
machine into the browser address bar. This is the container IP address that you obtained with ipconfig above.
If accessing from another host machine with network access to the NGINX host machine: Type the IP
address of the NGINX host machine into the browser address bar.
Once you've typed the applicable address into the browser address bar, press enter and wait for the web page
to load. Once it loads, you should see one of the HTML pages that you created in step 2.
Now press refresh on the page. You may need to refresh more than once, but after just a few times you
should see the other HTML page that you created in step 2.
If you continue refreshing, you will see the two different HTML pages that you used to define the services,
web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load balancing
strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you
should see.
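The round-robin behavior described above can be modeled in a few lines: each incoming request is handed to the next endpoint in a fixed rotation. This sketch uses hypothetical endpoint labels, not real addresses from the exercise.

```python
from itertools import cycle

# Hypothetical endpoint labels standing in for the published endpoints
endpoints = ["web_1@10.0.0.1:21858", "web_2@10.0.0.2:64199"]

# Round-robin: hand each successive request to the next endpoint in turn
rr = cycle(endpoints)
served = [next(rr) for _ in range(6)]  # six consecutive "requests"
print(served)
```

With two endpoints serving distinct pages, consecutive requests alternate between them, which is exactly the alternation you observe when refreshing the browser.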
As a reminder, below is the full configuration with all three nodes. When you're refreshing your web page
view, you're repeatedly accessing the NGINX node, which is distributing your GET request to the container
endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the
opportunity to route you to a different endpoint, resulting in your being served a different web page,
depending on whether your request was routed to an S1 or S2 endpoint.
Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.
In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and
never on the same host as any services that it needs to access via exposed ports. Put another way, for the
containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2,
it cannot be running on a swarm node; if it were running on a swarm node, it would be unable to access any
containers on that node via host-exposed ports.
Of course, an additional caveat here is that containers do not need to be accessed via host-exposed ports. It
is also possible to access containers directly, using the container IP and published port. If this instead were
done for this exercise, the NGINX load balancer would need to be configured to access:
containers that share its host, by their container IP and published port
containers that do not share its host, by their host's IP and exposed port
There is no problem with configuring the load balancer in this way, other than the added complexity that it
introduces compared to simply putting the load balancer on its own machine, so that containers can be
uniformly accessed via their hosts.
GS 3 months ago
I thought Docker swarm mode is not supported on Windows Server 2016 due to overlay
networking not being available.
As of 4/11, Windows Server 2016 has had support for container overlay networks.
To enable this feature, ensure you've installed the latest Windows updates, including KB4015217. For more
info on how to get started, visit our main doc page for Docker Swarm.
What if the workers change? Or my swarm service scales up? Do I have to manually
update my nginx.conf every time?
Yes, that's right: right now, because routing mesh is not yet supported on
Windows, any external resources (in this case, your NGINX load balancer) will have to be reconfigured
whenever there is a change in how service tasks are exposed, i.e. whenever the host IP or port for a
container instance changes, or whenever more or fewer containers are exposed due to scaling.
When can we expect routing mesh support? It's not possible to use services in
production without it.
When will the localhost bug be fixed? Access to exposed ports over localhost would
make so many things easier.
TL 1 month ago
Hi,
What exactly is the point of this approach? By default, Docker swarm for Windows already supports overlay networking
and an internal load balancer (DNS round robin).
Thanks,