
Virtualization Blog
Information and announcements from Program Managers, Product Managers, Developers and Testers
in the Microsoft Virtualization team.

Use NGINX to load balance across your Docker Swarm cluster


April 19, 2017 by KallieBracken[msft] // 7 Comments


A practical walkthrough, in six steps


This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply
these concepts to your own configurations.

This document walks through several steps for setting up a containerized NGINX server and using it to load
balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for
setting up a three-node configuration and running two Docker services on that cluster; by completing this exercise,
you will become familiar with the general workflow required to use swarm mode and to load balance across
Windows Container endpoints using an NGINX load balancer.

The basic setup


This exercise requires three container hosts: two of which will be joined to form a two-node swarm cluster,
and one which will be used to host a containerized NGINX load balancer. In order to demonstrate the load
balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be
configured to load balance across the container instances that define those services. The services will both be
web services, hosting simple content that can be viewed via web browser. With this setup, the load balancer
will be easy to see in action, as traffic is routed between the two services each time the web browser view
displaying their content is refreshed.

The figure below provides a visualization of this three-node setup. Two of the nodes, the Swarm Manager
node and the Swarm Worker node, together form a two-node swarm mode cluster, running two Docker web
services, S1 and S2. A third node, the NGINX Host in the figure, is used to host a containerized NGINX
load balancer, and the load balancer is configured to route traffic across the container endpoints for the two
container services. This figure includes example IP addresses and port numbers for the two swarm hosts and
for each of the six container endpoints running on the hosts.


System requirements
Three* or more computer systems running either Windows 10 Creators Update or Windows Server 2016
with all of the latest updates*, set up as container hosts (see the topic Windows Containers on Windows 10 or
Windows Containers on Windows Server for more details on how to get started with Docker containers on
Windows 10).

*Note: Docker Swarm on Windows Server 2016 requires KB4015217

Additionally, each host system should be configured with the following (a brief setup sketch follows this list):

The microsoft/windowsservercore container image

Docker Engine v1.13.0 or later

Open ports: Swarm mode requires that the following ports be available on each host.
TCP port 2377 for cluster management communications

TCP and UDP port 7946 for communication among nodes

TCP and UDP port 4789 for overlay network traffic
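If you are preparing a host from scratch, the commands below are a minimal sketch of that setup, run from an elevated PowerShell session; the firewall rule names are arbitrary examples rather than values from the original post.

# Pull the Windows Server Core base image used by the Dockerfiles in this walkthrough
docker pull microsoft/windowsservercore

# Confirm that the Docker Engine version is 1.13.0 or later
docker version

# Open the ports required by swarm mode (rule names are illustrative)
New-NetFirewallRule -DisplayName "Swarm management" -Direction Inbound -Protocol TCP -LocalPort 2377 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node TCP" -Direction Inbound -Protocol TCP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node UDP" -Direction Inbound -Protocol UDP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm overlay TCP" -Direction Inbound -Protocol TCP -LocalPort 4789 -Action Allow
New-NetFirewallRule -DisplayName "Swarm overlay UDP" -Direction Inbound -Protocol UDP -LocalPort 4789 -Action Allow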

*Note on using two nodes rather than three:


These instructions can be completed using just two nodes. However, there is currently a known
bug on Windows which prevents containers from accessing their host using localhost or even the
host's external IP address (for more background on this, see Caveats and Gotchas below). This
means that in order to access docker services via their exposed ports on the swarm hosts, the
NGINX load balancer must not reside on the same host as any of the service container instances.
Put another way, if you use only two nodes to complete this exercise, one of them will need to be
dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container
host (i.e. you will have a single-host swarm cluster, plus a host dedicated to hosting your containerized
NGINX load balancer).

Step 1: Build an NGINX container image


In this step, we'll build the container image required for your containerized NGINX load balancer. Later we will
run this image on the host that you have designated as your NGINX container host.

Note: To avoid having to transfer your container image later, complete the instructions in this section on
the container host that you intend to use for your NGINX load balancer.

NGINX is available for download from nginx.org. An NGINX container image can be built using a simple
Dockerfile that installs NGINX onto a Windows base container image and configures the container to run
the NGINX executable. The content of such a Dockerfile is shown below.

FROM microsoft/windowsservercore
RUN powershell Invoke-WebRequest http://nginx.org/download/nginx-1.10.3.zip -UseBasicParsing -OutFile c:\\nginx.zip
RUN powershell Expand-Archive c:\\nginx.zip -Dest c:\\nginx
WORKDIR c:\\nginx\\nginx-1.10.3
ENTRYPOINT powershell .\\nginx.exe

Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\nginx) on
your NGINX container host machine. From that location, build the image using the following command:

C:\temp\nginx> docker build -t nginx .

Now the image should appear with the rest of the docker images on your system (check using the docker
images command).

(Optional) Confirm that your NGINX image is ready


First, run the container:


C:\temp> docker run -it -p 80:80 nginx

Next, open a new command window and use the docker ps command to see that the container is running.
Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container's IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

For example, your container's IP address may be 172.17.176.155, as in the example output shown below.

Next, open a browser on your container host and put your container's IP address in the address bar. You
should see a confirmation page, indicating that NGINX is successfully running in your container.

Step 2: Build images for two containerized IIS Web services

In this step, we'll build container images for two simple IIS-based web applications. Later, we'll use these
images to create two docker services.

Note: Complete the instructions in this section on one of the container hosts that you intend to use as a
swarm host.

Build a generic IIS Web Server image


Below are the contents of a simple Dockerfile that can be used to create an IIS Web server image. The
Dockerfile simply enables the Internet Information Services (IIS) Web server role within a
microsoft/windowsservercore container.

FROM microsoft/windowsservercore
RUN dism.exe /online /enable-feature /all /featurename:iis-webserver /NoRestart

Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\iis) on one of
the host machines that you plan to use as a swarm node. From that location, build the image using the
following command:

C:\temp\iis> docker build -t iis-web .

(Optional) Confirm that your IIS Web server image is ready


First, run the container:

C:\temp> docker run -it -p 80:80 iis-web

Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container
is the value of <CONTAINERID> in the next command.

Get the container's IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

Now open a browser on your container host and put your container's IP address in the address bar. You
should see a confirmation page, indicating that the IIS Web server role is successfully running in your
container.


Build two custom IIS Web server images


In this step, we'll be replacing the IIS landing/confirmation page that we saw above with custom HTML
pages: two different images, corresponding to two different web container images. In a later step, we'll be
using our NGINX container to load balance across instances of these two images. Because the images will be
different, we will easily see the load balancing in action as it shifts between the content being served by the
containers we'll define in this step.

First, on your host machine create a simple file called index_1.html. In the file, type any text. For example, your
index_1.html file might look like this:

Now create a second file, index_2.html. Again, in the file type any text. For example, your index_2.html file
might look like this:
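For illustration only (the screenshots from the original post are not reproduced here), the two files can be as simple as the following; the exact text is arbitrary, as long as the two pages are visibly different:

<!-- index_1.html -->
<html><body><h1>Hello from web_1</h1></body></html>

<!-- index_2.html -->
<html><body><h1>Hello from web_2</h1></body></html>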


Now we'll use these HTML documents to make two custom web service images.

If the iis-web container instance that you just built is not still running, run a new one, then get the ID of the
container using:

C:\temp> docker ps

Now, copy your index_1.html file from your host onto the IIS container instance that is running, using the
following command:

C:\temp> docker cp index_1.html <CONTAINERID>:C:\inetpub\wwwroot\index.html

Next, stop and commit the container in its current state. This will create a container image for the first web
service. Let's call this first image web_1.

C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_1

Now, start the container again and repeat the previous steps to create a second web service image, this time
using your index_2.html file. Do this using the following commands:

C:\> docker start <CONTAINERID>
C:\> docker cp index_2.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_2


You have now created images for two unique web services; if you view the Docker images on your host by
running docker images, you should see that you have two new container images: web_1 and web_2.

Put the IIS container images on all of your swarm hosts


To complete this exercise you will need the custom web container images that you just created to be on all of
the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto
additional machines:

Option 1: Repeat the steps above to build the web_1 and web_2 containers on your second host.
Option 2 [recommended]: Push the images to your repository on Docker Hub, then pull them onto
additional hosts (a sketch of this workflow follows the list below).

Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all
of your machines, and to share your images with others. Visit the following Docker resources to
get started with pushing/pulling images with Docker Hub:
Create a Docker Hub account and repository
Tag, push and pull your image
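As a rough sketch of that workflow (with <DOCKERHUBUSER> standing in for your own Docker Hub account name, which is an assumption here rather than part of the original post):

# On the host where the images were built
C:\> docker login
C:\> docker tag web_1 <DOCKERHUBUSER>/web_1
C:\> docker tag web_2 <DOCKERHUBUSER>/web_2
C:\> docker push <DOCKERHUBUSER>/web_1
C:\> docker push <DOCKERHUBUSER>/web_2

# On each additional swarm host
C:\> docker pull <DOCKERHUBUSER>/web_1
C:\> docker pull <DOCKERHUBUSER>/web_2

If you distribute the images this way, either retag them locally (for example, docker tag <DOCKERHUBUSER>/web_1 web_1) or use the fully qualified names when you create the services in step 4.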

Step 3: Join your hosts to a swarm


As a result of the previous steps, one of your host machines should have the nginx container image, and the
rest of your hosts should have the Web server images, web_1 and web_2. In this step, we'll join the latter
hosts to a swarm cluster.

Note: The containerized NGINX load balancer cannot run on the same host as any of the
container endpoints for which it is performing load balancing; the host with your nginx
container image must be reserved for load balancing only. For more background on this, see
Caveats and Gotchas below.

First, run the following command from any machine that you intend to use as a swarm host. The machine that
you use to execute this command will become a manager node for your swarm cluster.

Replace <HOSTIPADDRESS> with the public IP address of your host machine.

C:\temp> docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377

Now run the following command from each of the other host machines that you intend to use as swarm
nodes, joining them to the swarm as worker nodes.

Replace <MANAGERIPADDRESS> with the public IP address of the manager machine (i.e. the value of
<HOSTIPADDRESS> that you used to initialize the swarm from the manager node).


Replace <WORKERJOINTOKEN> with the worker join-token provided as output by the docker swarm
init command (you can also obtain the join-token by running docker swarm join-token
worker from the manager host).

C:\temp> docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377

Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the
following command from your manager node:

C:\temp> docker node ls
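For a two-node swarm, the output looks roughly like the following (the IDs and hostnames here are made-up placeholders):

ID                          HOSTNAME    STATUS   AVAILABILITY   MANAGER STATUS
a1b2c3d4e5f6 *              swarm-mgr   Ready    Active         Leader
f6e5d4c3b2a1                swarm-wkr   Ready    Active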

Step 4: Deploy services to your swarm


Note: Before moving on, stop and remove any NGINX or IIS containers running on your hosts.
This will help avoid port conflicts when you define services. To do this, simply run the following
commands for each container, replacing <CONTAINERID> with the ID of the container you are
stopping/removing:

C:\temp> docker stop <CONTAINERID>
C:\temp> docker rm <CONTAINERID>

Next, we're going to use the web_1 and web_2 container images that we created in previous steps of this
exercise to deploy two container services to our swarm cluster.

To create the services, run the following commands from your swarm manager node:

C:\> docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1

C:\> docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2
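If the s1 and s2 tasks exit immediately after starting (you can check with docker service ps s1), it usually means the web_1/web_2 images have no long-running foreground process. One workaround, which is an addition to this walkthrough rather than part of it, is to give each service an explicit long-running command, for example:

C:\> docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1 ping -t localhost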

You should now have two services running, s1 and s2. You can view their status by running the following
command from your swarm manager node:


C:\> docker service ls

Additionally, you can view information on the container instances that define a specific service with the
following commands (where <SERVICENAME> is replaced with the name of the service you are inspecting, for
example s1 or s2):

# List all services
C:\> docker service ls
# List info for a specific service
C:\> docker service ps <SERVICENAME>

(Optional) Scale your services


The commands in the previous step will deploy one container instance/replica for each service, s1 and s2. To
scale the services to be backed by multiple replicas, run the following command:

C:\> docker service scale <SERVICENAME>=<REPLICAS>
# e.g. docker service scale s1=3

Step 5: Configure your NGINX load balancer


Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic
across the container instances for those services.

Of course, generally load balancers are used to balance traffic across instances of a single service,
not multiple services. For the purpose of clarity, this example uses two services so that the function
of the load balancer can be easily seen; because the two services are serving different HTML
content, we'll clearly see how the load balancer is distributing requests between them.

The nginx.conf file


First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of
your swarm nodes and services. The NGINX package downloaded in step 1 (as part of building your NGINX
container image) includes an example nginx.conf file. For the purpose of this exercise, a version of
that file was copied and adapted to create a simple template for you to adapt with your specific
node/container information. Get the template file here [TODO: ADD LINK] and save it onto your NGINX
container host machine. In this step, we'll adapt the template file and use it to replace the default nginx.conf
file that was originally downloaded onto your NGINX container image.


You will need to adjust the file by adding the information for your hosts and container instances. The
template nginx.conf file provided contains the following section:

upstream appcluster {
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
    server <HOSTIP>:<HOSTPORT>;
}

To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the
config file. You will have an entry for each container endpoint that defines your web services. For any given
container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that
container is running. The value of <HOSTPORT> will be the port on the container host upon which the
container endpoint has been published.

When the services s1 and s2 were defined in the previous step of this exercise, the --publish
mode=host,target=80 parameter was included. This parameter specified that the container
instances for the services should be exposed via published ports on the container hosts. More
specifically, by including --publish mode=host,target=80 in the service definitions, each service
was configured to be exposed on port 80 of each of its container endpoints, as well as on a set of
automatically assigned ports on the swarm hosts (i.e. one port for each container running on a
given host).

First, identify the host IPs and published ports for your container endpoints

Before you can adjust your nginx.conf file, you must obtain the required information for the container
endpoints that define your services. To do this, run the following commands (again, from your
swarm manager node):

C:\> docker service ps s1
C:\> docker service ps s2

The above commands will return details on every container instance running for each of your services, across
all of your swarm hosts.


One column of the output, the PORTS column, includes port information for each container endpoint of the form
*:<HOSTPORT>->80/tcp. The values of <HOSTPORT> will be different for each container instance, as
each container is published on its own host port.
Another column, the NODE column, will tell you which machine the container is running on. This is
how you will identify the host IP information for each endpoint (a shortcut for extracting these two columns is sketched below).
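If your Docker CLI is new enough to support the --format flag on docker service ps (this is not assumed by the original walkthrough), you can extract just these two columns directly:

C:\> docker service ps s1 --format "{{.Node}} {{.Ports}}"
C:\> docker service ps s2 --format "{{.Node}} {{.Ports}}"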

You now have the port information and node for each container endpoint. Next, use that information to
populate the upstream section of your nginx.conf file: for each endpoint, add a server line to the upstream
section, replacing <HOSTIP> with the IP address of the node on which that endpoint is running (if you don't
have this, run ipconfig on each host machine to obtain it) and <HOSTPORT> with the corresponding host port.

For example, if you have two swarm hosts (IP addresses 172.17.0.10 and 172.17.0.11), each running three
containers, your list of servers will end up looking something like this:

upstream appcluster {
    server 172.17.0.10:21858;
    server 172.17.0.11:64199;
    server 172.17.0.10:15463;
    server 172.17.0.11:56049;
    server 172.17.0.11:35953;
    server 172.17.0.10:47364;
}
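Since the template file link above is not reproduced here, the following is a minimal sketch of what a complete nginx.conf built around this upstream block might look like; the worker and listen settings are illustrative assumptions, not the exact contents of the original template:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    upstream appcluster {
        server 172.17.0.10:21858;
        server 172.17.0.11:64199;
        server 172.17.0.10:15463;
        server 172.17.0.11:56049;
        server 172.17.0.11:35953;
        server 172.17.0.10:47364;
    }

    server {
        listen 80;

        location / {
            # Every incoming request is handed to one of the upstream endpoints (round-robin by default)
            proxy_pass http://appcluster;
        }
    }
}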

Once you have changed your nginx.conf file, save it. Next, we'll copy it from your host into the running NGINX
container.

Replace the default nginx.conf file with your adjusted file


If your nginx container is not already running on its host, run it now:

C:\temp> docker run -it -p 80:80 nginx

Get the ID of the container using:

C:\temp> docker ps

With the container running, use the following command to replace the default nginx.conf file with the file that
you just configured (run the following command from the directory in which you saved your adjusted version
of nginx.conf on the host machine):


C:\temp> docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf
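Optionally, you can verify the syntax of the copied file before reloading. Assuming docker exec starts in the image's working directory (c:\nginx\nginx-1.10.3, as set by the Dockerfile in step 1), NGINX's built-in configuration test can be run like this:

C:\temp> docker exec <CONTAINERID> nginx.exe -t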

Now use the following command to reload the NGINX server running within your container:

C:\temp> docker exec <CONTAINERID> nginx.exe -s reload

Step 6: See your load balancer in action


Your load balancer should now be fully configured to distribute traffic across the various instances of your
swarm services. To see it in action, open a browser and:

If accessing from the NGINX host machine: Type the IP address of the nginx container running on the
machine into the browser address bar (this is the address returned by docker exec <CONTAINERID> ipconfig, as in step 1).

If accessing from another host machine with network access to the NGINX host machine: Type the IP
address of the NGINX host machine into the browser address bar.

Once you've typed the applicable address into the browser address bar, press enter and wait for the web page
to load. Once it loads, you should see one of the HTML pages that you created in step 2.

Now press refresh on the page. You may need to refresh more than once, but after just a few times you
should see the other HTML page that you created in step 2.

If you continue refreshing, you will see the two different HTML pages that you used to define the services,
web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load-balancing
strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you
should see.
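If you prefer the command line to a browser, a quick way to watch the rotation is to issue repeated requests from another machine with PowerShell; this is a convenience sketch rather than part of the original walkthrough, and <NGINXHOSTIP> stands in for the address you identified above:

# Send ten requests and print the body of each response
1..10 | ForEach-Object { (Invoke-WebRequest -UseBasicParsing http://<NGINXHOSTIP>/).Content }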


As a reminder, below is the full configuration with all three nodes. When you're refreshing your web page
view, you're repeatedly accessing the NGINX node, which is distributing your GET request to the container
endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the
opportunity to route you to a different endpoint, resulting in your being served a different web page,
depending on whether your request was routed to an S1 or S2 endpoint.


Caveats and gotchas


Is there a way to publish a single port for my service, so that I can
load balance across just a few endpoints rather than all of my
container instances?
Unfortunately, we do not yet support publishing a single port for a service on Windows. This feature is swarm
mode's routing mesh: a feature that allows you to publish ports for a service, so that the service is
accessible to external resources via that port on every swarm node.

Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.

Why can't I run my containerized load balancer on one of my swarm nodes?
Currently, there is a known bug on Windows which prevents containers from accessing their host using
localhost or even the host's external IP address. This means containers cannot access their host's exposed
ports; they can only access exposed ports on other hosts.

In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and
never on the same host as any services that it needs to reach via exposed ports. Put another way, for the
containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2,
it cannot be running on a swarm node; if it were running on a swarm node, it would be unable to access any
containers on that node via host exposed ports.

Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It
is also possible to access containers directly, using the container IP and published port. If this were done
instead for this exercise, the NGINX load balancer would need to be configured to access:

containers that share its host by their container IP and port

containers that do not share its host by their host's IP and exposed port

There is no problem with configuring the load balancer in this way, other than the added complexity that it
introduces compared to simply putting the load balancer on its own machine, so that containers can be
uniformly accessed via their hosts.
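As a sketch of that alternative (all addresses and ports below are placeholders), an upstream block for a load balancer that shares a host with some of the containers might mix the two addressing styles:

upstream appcluster {
    # Containers on the same host as NGINX: container IP and container port
    server 172.30.142.5:80;
    server 172.30.142.6:80;
    # Containers on the other host: host IP and published host port
    server 172.17.0.11:64199;
    server 172.17.0.11:56049;
}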


Tags: containers, docker, IIS, NGINX, windows 10

Join the conversation



GS 3 months ago

I thought docker swarm mode is not supported on Windows 2016 due to overlay
network not being available.

KallieBracken[msft] 2 months ago

As of 4/11, Windows Server 2016 has had support for container overlay networks.
To enable this feature, ensure you've installed the latest Windows updates, including KB4015217. For more
info on how to get started, visit our main doc page for Docker Swarm.

Bharath Lohray 2 months ago

What if the workers change? Or my swarm service scales up? Do I have to manually
update my nginx.conf every time?

KallieBracken[msft] 2 months ago

Yes, that's right; right now, because routing mesh is not yet supported on
Windows, any external resources (in this case, your NGINX load balancer) will have to be reconfigured
whenever there is a change in how service tasks are exposed, i.e. whenever the host IP or port for a
container instance changes, or whenever more or fewer containers are exposed due to scaling.

gordon 1 month ago

When can we expect routing mesh support? It's not possible to use services in
production without it.


Michael 2 months ago

When will the localhost bug be fixed? Access to exposed ports over localhost would
make so many things easier.

TL 1 month ago

Hi,
What exactly is the benefit of this approach? By default, Docker swarm for Windows already supports overlay networking
and an internal load balancer (DNS round robin).

Thanks,
