Michael S. Fischer
Principal Engineer, DevOps + SRE. Erstwhile attorney.
Mar 3, 2017 · 9 min read

Making Docker and Consul Get Along
If you manage the technology stack for an Internet business of any
significant size, you’ve probably heard of Consul. Consul is a fantastic
solution for providing, among other things, powerful and reliable
service-discovery capability to your network. It’s not surprising that
you’d want to use it.

Let’s suppose you’ve also decided to run Docker containers in
production in your stack. Let’s also suppose that you want to publish
the services running in your containers into Consul’s service catalog.
How can you do so reliably, without driving yourself mad in the
process?

Summary
• Install Consul and dnsmasq either directly on your host, or in a
container using host networking ( --net=host ).

• Create a dummy network interface on the host with a link-local IP
address (for example, 169.254.1.1).

• Configure Consul to bind its HTTP and client RPC services to the
dummy network interface’s IP address.

• Configure dnsmasq to listen on the dummy IP address as well.

• Configure your containers to use the dummy IP address as their
DNS resolver and Consul server.

• Use a program such as Registrator to publish the containers’
services.

Consul and Containerized Applications
Suppose you’ve determined that you want to use Consul on your
Docker host, and you have the following requirements:

• Containerized applications must be able to accurately determine
the IP addresses and port numbers of other applications, whether
those applications are on the same host or on a different host.

• Containerized applications must be able to read from and write to
Consul’s key-value database, and reliably perform lock operations.

• Applications running on foreign hosts must be able to connect to
the containerized application.

• Applications that are failing health checks must be reported
accurately to the Consul service catalog.

• If a Docker host becomes unreachable, all applications on the host
will be marked as down, and/or unpublished from the Consul
service catalog.

Installing Consul on your Docker Host
It’s considered a best practice to install and run the Consul Agent on
every host in your network, including your Docker hosts. This has a few
important benefits:

First, it makes configuring services a cinch. Services running on the
host itself (i.e., those that are not containerized) can simply drop service
definitions, including health checks, into
/etc/consul.d/<service_name>.json , and the Consul agent will load
them at start or when signaled. The agent will then publish the services
into the catalog and perform any health checks you’ve designated at
the frequency you specify.
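
For example, a minimal service definition with an HTTP health check
might look like the following (the service name, port, and health-check
path here are illustrative). Save it as /etc/consul.d/webapp.json :

{
  "service": {
    "name": "webapp",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}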

Second, it provides for reliable failure detection. If your host becomes
unreachable due to termination or any other reason, the network of
Consul agents running on the other hosts in your network will quickly
notice, and any services registered on the host will be marked as
unavailable automatically.

Finally, it provides a host-local endpoint for accepting Consul DNS
queries and HTTP API requests. Those requests need not go onto the
network, which can help simplify security rules and reduce network
chatter.

The most controversial question is: should you install the Consul
Agent directly on the host, or install it in a container?

The answer is that it doesn’t matter, but the network configuration
does. Consul itself is a small, self-contained Linux binary; it has no
dependencies. You can certainly run it as a container if you wish, but
the benefit of runtime environment isolation that makes containers so
appealing is minimal when the application doesn’t need isolation
anyway. My personal preference is to run Consul as a first-class service
on the host, alongside essential system services such as the Docker
engine and the sshd daemon.

If you choose to run Consul in a container, that’s fine, too. HashiCorp
itself publishes an official image on Docker Hub. The important part
is that you must use the --net=host option when you run the
container:

$ sudo docker run -d --net=host consul:latest
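
If you also want the containerized agent to pick up configuration files
like the ones discussed below, one approach (a sketch; the join address
is an assumption about your cluster) is to mount your config directory
into the path the official image reads and pass the agent arguments
explicitly:

$ sudo docker run -d --net=host \
    -v /etc/consul.d:/consul/config \
    consul:latest agent -retry-join=10.0.0.10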

Consul and the Loopback Interface
When you run the Consul agent, it listens on six ports, all of which
serve different functions. The three ports essential to our discussion
are:

• HTTP API (default: 8500): handles HTTP API requests from clients

• CLI RPC (default: 8400): handles requests from CLI

• DNS (default: 8600): answers DNS queries

By default, Consul allows connections to these ports only from the
loopback interface (127.0.0.1). This is a reasonable default choice for
security, and poses few problems on legacy hosts that don’t run
containers. But it presents a difficulty for containerized applications,
since the loopback interface in a container is separate and distinct
from the loopback interface on the host. This is a consequence of
the private network namespace that every container operates in by
default under Docker. So if a containerized application tries to connect
to Consul by addressing it at http://127.0.0.1:8500 , it will fail.
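
You can see the problem for yourself. Assuming the agent is bound
only to the host’s loopback interface, an HTTP request from inside a
container is refused ( busybox is used here only as a convenient test
image):

$ sudo docker run --rm busybox \
    wget -qO- http://127.0.0.1:8500/v1/agent/self
wget: can't connect to remote host (127.0.0.1): Connection refused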

Ideas we considered, but rejected
• Configuring Consul to bind to all interfaces. That would make the
HTTP and CLI RPC ports open to the world unless we also craft
iptables(8) rules to block requests from foreign hosts. In
addition, we’d have to ensure the containers know the IP address
of the host they’re running on in order to communicate with its
Consul agent.

• Configuring Consul to bind to the Docker bridge IP address. The
routing would work properly, but: (a) typically, bridge interfaces
are assigned dynamically by Docker; (b) there may be more than
one bridge interface; (c) containers would have to know the IP
address of the selected bridge interface; and (d) the Consul agent
and dnsmasq (discussed below) would not be able to start until
the Docker engine has started. We don’t want to create any
unnecessary dependencies.

• Installing a Consul agent in every container. Consul’s architecture
anticipates a single agent per host IP address, and in most
environments, a Docker host has a single network-accessible IP
address. Running an agent in each container would cause multiple
agents to join the Consul cluster and claim responsibility for the
host, causing major instability in the cluster.

• Sharing the Consul agent container’s network with application
containers. A container can have exactly one network namespace.
So, if your application containers share the network namespace
with the Consul agent’s container, they will end up sharing
network namespaces with each other as well. This would deprive
us of the network isolation that is a prime benefit of using
containers in the first place.

The Dummy Interface Solution
Linux provides a little-known network interface type called a “dummy
interface.” It’s much like a loopback interface, but you can give it any IP
address you like, and you can create as many of them as you like (but
we only need one). Here’s an example:

$ sudo ip link add dummy0 type dummy
$ sudo ip link set dev dummy0 up
$ ip link show type dummy
$ ip link show type dummy
25: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
mode DEFAULT qlen 1000
link/ether 2a:bb:3e:f6:50:1c brd ff:ff:ff:ff:ff:ff

What IP address should we assign to it? A good choice is 169.254.1.1.
Addresses in the 169.254.0.0/16 network are reserved link-local
addresses, which means they are not routable, either on your local
network or on the Internet. This means they’re effectively private to
the host they’re assigned to. (A notable exception: Amazon EC2 uses
the 169.254.169.254 address for retrieving instance metadata, but what
we’re doing here won’t impact our ability to use that.)

$ sudo ip addr add 169.254.1.1/32 dev dummy0
$ sudo ip link set dev dummy0 up
$ ip addr show dev dummy0
$ ip addr show dev dummy0
25: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN qlen 1000
link/ether 2a:bb:3e:f6:50:1c brd ff:ff:ff:ff:ff:ff
inet 169.254.1.1/32 scope global dummy0
valid_lft forever preferred_lft forever
inet6 fe80::28bb:3eff:fef6:501c/64 scope link
valid_lft forever preferred_lft forever

Every host can use the same 169.254.1.1 address with its dummy
interface. This makes configuration a lot easier, since you don’t have to
write scripts to determine the IP address and provide it to the programs
that need it.

Configuring the interface
If your Linux distribution uses systemd , it’s easy to set up the dummy
interface at boot time by creating two files. (You may need to install
systemd-networkd via your distribution’s package manager, enable it,
and start it.)

Place the following file into /etc/systemd/network/dummy0.netdev :

[NetDev]
Name=dummy0
Kind=dummy

Then place the following file into
/etc/systemd/network/dummy0.network :

[Match]
Name=dummy0

[Network]
Address=169.254.1.1/32

Run sudo systemctl restart systemd-networkd and your new dummy0
interface should appear.

If you’re not using systemd , check the documentation provided with
your distribution to learn how to create a dummy interface on your
host.

Configuring Consul to use the dummy interface
Next, let’s configure the Consul agent to bind its HTTP, CLI RPC, and
DNS interfaces to the 169.254.1.1 address.

Assume the agent is started with the -config-dir=/etc/consul.d
option. We can simply create a file at /etc/consul.d/interfaces.json
with the following content, substituting your host’s IP address for the
value of HOST_IP_ADDRESS below.

{
  "client_addr": "169.254.1.1",
  "bind_addr": "HOST_IP_ADDRESS"
}

You’ll need to restart the Consul agent after doing this.
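
How you restart it depends on how you installed it. If it runs under a
systemd unit named consul (an assumption about your setup), this
will do it, and a quick curl confirms the new binding:

$ sudo systemctl restart consul
$ curl http://169.254.1.1:8500/v1/agent/self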

Configuring dnsmasq to use the dummy interface
dnsmasq is a fantastic piece of software. Among other things, it can act
as a local DNS cache on your host. It’s extremely flexible and can make
integration with Consul’s DNS service a snap. We’re going to install it
on our server; bind it to both our loopback and dummy interfaces;
make it pass queries ending in .consul to the Consul agent; and
configure /etc/resolv.conf on both the host and our containers to
dispatch DNS queries to it.

First, use your operating system’s package manager ( yum , apt-get ,
etc.) to install dnsmasq.
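
For example:

$ sudo apt-get install -y dnsmasq   # Debian/Ubuntu
$ sudo yum install -y dnsmasq       # RHEL/CentOS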

Next, configure dnsmasq to bind to the loopback and dummy
interfaces, and forward Consul queries to the agent. Create a file
/etc/dnsmasq.d/consul.conf with the following text:

server=/consul/169.254.1.1#8600
listen-address=127.0.0.1
listen-address=169.254.1.1

Then restart dnsmasq.
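
On the host itself, make sure /etc/resolv.conf lists nameserver
127.0.0.1 first, so host processes use dnsmasq as well. Then verify the
forwarding by querying the dummy address directly; the consul
service record is published by the Consul servers, so the answer should
be a server’s address (10.0.0.2 here matches the example node used
later):

$ dig +short @169.254.1.1 consul.service.consul
10.0.0.2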

Putting it together: Containers, Consul, and DNS
The key to making everything work at this point is to ensure that the
container itself, and the code running inside, point to the right address
when resolving DNS queries or connecting to Consul’s HTTP API.

When starting your Docker container, configure it so that it uses the
dnsmasq server as its resolver:

docker run --dns 169.254.1.1 ...

The containerized application will then be able to query addresses
ending in .consul , because dnsmasq will forward those queries to the
Consul agent.

How about Consul API access? The key is to set two standard
environment variables called CONSUL_HTTP_ADDR and
CONSUL_RPC_ADDR . Nearly all standard Consul client libraries use these
values to determine where to send queries. Be certain your code uses
these variables too — never hard-code Consul endpoints into your
applications!

$ sudo docker run --dns 169.254.1.1 \
  -e CONSUL_HTTP_ADDR=169.254.1.1:8500 \
  -e CONSUL_RPC_ADDR=169.254.1.1:8400 ...

Now, let’s see it in action!

Suppose we have a service already registered in Consul called myapp .
Can we find it in our container? Certainly:

$ sudo docker run --dns 169.254.1.1 \
  -e CONSUL_HTTP_ADDR=169.254.1.1:8500 \
  -e CONSUL_RPC_ADDR=169.254.1.1:8400 \
  -it \
  myImage:latest /bin/sh

# curl http://$CONSUL_HTTP_ADDR/v1/catalog/service/myapp?pretty
[
  {
    "ID": "6c542e7f-a68d-4de0-bcc0-7eb6b80b68e3",
    "Node": "vessel",
    "Address": "10.0.0.2",
    "ServiceID": "myapp",
    "ServiceName": "myapp",
    "ServiceTags": [],
    "ServiceAddress": "",
    "ServicePort": 80,
    "ServiceEnableTagOverride": false,
    "CreateIndex": 60,
    "ModifyIndex": 60
  }
]

# dig +short myapp.service.consul
10.0.0.2

It’s also a great idea to set CONSUL_HTTP_ADDR and CONSUL_RPC_ADDR as
default environment variables in all users’ shells. To do that, you can
simply edit the /etc/environment file on the host and set them there:

# /etc/environment
CONSUL_HTTP_ADDR=169.254.1.1:8500
CONSUL_RPC_ADDR=169.254.1.1:8400

Registering your containers
Now that we’ve shown that containers can access the Consul agent,
you’ll want to publish their services into the Consul catalog.

There are many tools and options for doing so. My favorite open-source
tool for this task is Registrator, which is available from Docker Hub.

Let’s install Registrator and have it publish a container. First:

$ sudo docker run -d --name=registrator --net=host \
  --volume=/var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest \
  consul://$CONSUL_HTTP_ADDR

Now, let’s start a simple container that runs Nginx:

$ sudo docker run -d --name=webservice \
  -e CONSUL_HTTP_ADDR=$CONSUL_HTTP_ADDR \
  -e SERVICE_NAME=webservice \
  --dns 169.254.1.1 -P nginx:latest

Registrator will notice and publish the service to Consul. (Since the
nginx image exposes two ports, Registrator will append -80 and -443
to the webservice service name when registering it with the catalog.
You can change that behavior if you like by setting other environment
variables.)
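
For instance, Registrator reads per-port metadata from the container’s
environment; a variation like this (a sketch using Registrator’s
SERVICE_<port>_* variables) registers only port 80, under the plain
webservice name:

$ sudo docker run -d --name=webservice \
  -e SERVICE_80_NAME=webservice \
  -e SERVICE_443_IGNORE=true \
  --dns 169.254.1.1 -P nginx:latest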

$ sudo docker logs registrator
2017/02/17 22:50:52 added: cd09c82f01ba vessel:webservice:443
2017/02/17 22:50:52 added: cd09c82f01ba vessel:webservice:80

$ curl http://$CONSUL_HTTP_ADDR/v1/catalog/service/webservice-80?pretty
[
  {
    "ID": "6c542e7f-a68d-4de0-bcc0-7eb6b80b68e3",
    "Node": "vessel",
    "Address": "10.0.0.2",
    "ServiceID": "vessel:webservice:80",
    "ServiceName": "webservice-80",
    "ServiceTags": [],
    "ServiceAddress": "",
    "ServicePort": 32772,
    "ServiceEnableTagOverride": false,
    "CreateIndex": 496,
    "ModifyIndex": 496
  }
]

When the container is shut down, Registrator will automatically
remove it from the Consul catalog.
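
You can watch the removal happen by stopping the container and
checking Registrator’s logs (timestamps here are illustrative):

$ sudo docker stop webservice
$ sudo docker logs registrator
2017/02/17 22:55:03 removed: cd09c82f01ba vessel:webservice:443
2017/02/17 22:55:03 removed: cd09c82f01ba vessel:webservice:80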

Conclusion
With clever use of dummy network interfaces, we can allow Docker
hosts to utilize a Consul agent without any difficult or confusing
configuration. And with Registrator, we can easily publish running
Docker containers to Consul with minimal effort.
