Loose coupling and discovery of services with Consul — Part 2

Posted in: Technical Track

Creating a Consul Client Docker Image

In my previous post, I demonstrated how to create a cluster of Consul servers using a pre-built Docker image. I was able to do this because our use case was simple: run Consul.

In this post, however, we will be creating one or more Consul clients that will register services they provide, which can then be queried using Consul’s DNS and / or HTTP interfaces. As we are now interested in running Consul and an application providing a service, things are going to get a bit more complicated.

Before proceeding, I’m going to need to explain a little bit about how Docker works. Docker images, such as progrium/consul we used in the previous post, are built using instructions from a special file called Dockerfile. There are two related instructions that can be specified in this file which control the container’s running environment: that is, the process or shell that is run in the container. They are ENTRYPOINT and CMD.

Only the last ENTRYPOINT instruction in a Dockerfile takes effect, and it has two forms: an array of strings, which is treated like an exec call, or a plain string, which is run via ‘/bin/sh -c’. When you specify an ENTRYPOINT, the whole container runs as if it were just that executable.

The CMD instruction is a bit different. Like ENTRYPOINT, only the last one in a Dockerfile takes effect, but it has three forms: the first two are the same as ENTRYPOINT’s, and the third is an array of strings that is passed as parameters to the ENTRYPOINT instruction. It’s important to note that parameters baked into an ENTRYPOINT instruction cannot be overridden at run time (short of replacing the entrypoint entirely with the --entrypoint flag), whereas those supplied via CMD can. Therefore, the main purpose of CMD is to provide defaults for an executing container.
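To make the interaction concrete, here is a minimal, hypothetical Dockerfile (not part of the image built in this post) where CMD supplies default arguments that docker run can override:

```Dockerfile
FROM ubuntu:14.04
# Exec-form ENTRYPOINT: the container always runs ping
ENTRYPOINT [ "ping" ]
# CMD provides default arguments for the ENTRYPOINT above
CMD [ "-c", "3", "localhost" ]
```

Running the image with no arguments executes ‘ping -c 3 localhost’, while ‘docker run <image> -c 1 example.com’ replaces only the CMD portion; the ENTRYPOINT still runs.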

It’s probably becoming clear to you by now that Docker images are designed to run one process or shell. We want to run two processes, however: the Consul agent and an application. Thankfully, there is an image available called phusion/baseimage that provides runit for service supervision and management, which will make it easy for me to launch Consul and another service (such as nginx) within my containers.

SIDEBAR: There is quite a bit of debate about the intended / recommended use of Docker, and whether the process run in the container should be your application or an init process that will spawn, manage and reap children. If you’re interested in reading more about the pros and cons of each of these approaches, please refer to Jérôme Petazzoni’s post, phusion’s baseimage-docker page, and / or Google the topics of ‘separation of concerns’ and ‘microservices’.

Now that I’ve provided some background, let’s get into the specifics of the Docker image for my Consul clients. I’ll begin with the full contents of the Dockerfile and then describe each section in detail.

FROM phusion/baseimage:latest
MAINTAINER Bill Fraser 

# Disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

# Install nginx
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx zip && \
  chown -R www-data:www-data /var/lib/nginx

# Clean up apt
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Define mountable directories.
VOLUME ["/data", "/etc/nginx/sites-enabled", "/var/log/nginx"]

# Add runit configuration for nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
ADD files/nginx/nginx-runit /etc/service/nginx/run
RUN chmod 755 /etc/service/nginx/run

# Install consul
RUN curl -s -L -O https://dl.bintray.com/mitchellh/consul/0.3.0_linux_amd64.zip
RUN unzip -d /usr/bin 0.3.0_linux_amd64.zip
RUN chmod 555 /usr/bin/consul

# Add service configuration
ADD files/consul/consul.json /etc/consul.d/consul.json
RUN chmod 644 /etc/consul.d/consul.json

# Add runit configuration for consul
ADD files/consul/consul-runit /etc/service/consul/run
RUN chmod 755 /etc/service/consul/run

# Expose nginx ports
EXPOSE 80 443

# Expose consul ports
EXPOSE 53/udp 8300 8301 8301/udp 8302 8302/udp 8400 8500

ENV HOME /root

ENTRYPOINT [ "/sbin/my_init" ]

The first section specifies that my image will be based on that of phusion/baseimage, and that I am the maintainer of my image. So far so good.

Next, I am removing the SSHD service from the container. This is part of phusion’s image and is not something I am interested in using for the purposes of my demonstration.

The next step is to install nginx and should look fairly straightforward. I have taken the liberty of installing zip at the same time, as I will be using it later on to install Consul.

The VOLUME instruction lets me define mount points that can be used for mounting volumes in the container, passed as arguments of the docker run command. I am not actually using this in my demonstration; it is just there to make you aware of the capability.
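As a sketch of how those mount points could be used (again, not something this demonstration does), a host directory can be bound over one of the declared volumes at run time. The host paths below are hypothetical:

```shell
# Bind host directories over two of the volumes declared in the Dockerfile
docker run -d \
  -v /srv/nginx/sites-enabled:/etc/nginx/sites-enabled \
  -v /srv/nginx/logs:/var/log/nginx \
  bfraser/consul-nginx
```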

Next I am telling nginx not to daemonize itself, and am adding an nginx configuration for runit. The ADD instruction adds a local file to the image in the specified path. The runit configuration I am adding is pretty simple and looks like this:

#!/bin/sh
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf 2>&1

Now that I am done with nginx, I want to install and configure Consul. I simply retrieve the binary package and extract it into /usr/bin in the image. I then use another ADD instruction to supply a configuration file for Consul. This file is in JSON format and tells Consul to register a service named ‘nginx’.

{
	"service": {
		"name": "nginx",
		"port": 80
	}
}

I then use an ADD instruction to supply a runit configuration for Consul in the same manner I did for nginx. Its content is as follows:

#!/bin/sh
if [ -f "/etc/container_environment.sh" ]; then
  . /etc/container_environment.sh
fi

# Make sure to use all our CPUs, because Consul can block a scheduler thread
export GOMAXPROCS=`nproc`

# Get the public IP
BIND=`ifconfig eth0 | grep "inet addr" | awk '{ print substr($2,6) }'`

exec /usr/bin/consul agent \
  -config-dir="/etc/consul.d" \
  -data-dir="/tmp/consul" \
  -bind=$BIND \
  ${CONSUL_FLAGS} \
  >>/var/log/consul.log 2>&1
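The awk expression that extracts the bind address is easy to sanity-check in isolation. A minimal sketch, using a sample line in the ‘inet addr:’ format that ifconfig prints on Ubuntu 14.04:

```shell
# Sample "inet addr" line as printed by ifconfig on Ubuntu 14.04
line='          inet addr:172.17.0.18  Bcast:172.17.255.255  Mask:255.255.0.0'

# Field 2 is "addr:172.17.0.18"; substr(..., 6) drops the leading "addr:"
BIND=$(echo "$line" | grep "inet addr" | awk '{ print substr($2,6) }')
echo "$BIND"    # prints 172.17.0.18
```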

With all of the hard stuff out of the way, I now define the nginx and Consul ports to EXPOSE to other containers running on the host, and to the host itself.

And last but not least, I set the HOME environment variable to /root and set the init process of /sbin/my_init as the ENTRYPOINT of my container.

This creates a good foundation. If we were to run the image as is, we would end up with nginx running and listening on port 80, and Consul would be running as well. However, we haven’t provided Consul with any details of the cluster to join. As you have probably guessed, that’s what CONSUL_FLAGS is for, and we’ll see it in action in the next section.

Creating Consul Clients With Vagrant

So far we’ve gone to the trouble of creating a Docker image that will run Consul and nginx, and we’ve supplied configuration to Consul that will have it register nginx as a service. Now we’ll want to create some clients with Vagrant and see querying of services in action.

Let’s start by modifying our Vagrantfile. Just as was done with the Consul servers, we’ll want to create an array for the nginx members and tell Vagrant to use the Docker provider. This time, however, some additional configuration will be necessary. The full Vagrantfile is now going to look like this:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  JOIN_IP = ENV['JOIN_IP']
  
  # An array of containers to define.
  # These will be the Consul cluster members.
  consul_members = [ "consul1", "consul2", "consul3" ]
  consul_members.each do |member|
    config.vm.define member do |consul_config|

      # Use Docker provider
      consul_config.vm.provider "docker" do |docker|
        docker.name = member
        docker.image  = 'progrium/consul'
        docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end

  # Create an nginx container running the consul agent
  nginx_members = [ "nginx1", "nginx2", "nginx3" ]
  nginx_members.each do |member|
    config.vm.define member do |nginx_config|
      nginx_config.vm.provider "docker" do |docker|
        docker.name = member
        docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]
        docker.build_dir = "."
        docker.cmd = [ "/bin/bash" ]
        docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]
        docker.env = { "CONSUL_FLAGS" => "-node=#{member} -join=#{JOIN_IP}" }
      end
    end
  end
end

Note that this time we are not using docker.image to supply the name of an existing Docker image to use for our containers. Instead, we are going to use docker.build_args and docker.build_dir to build our own.

docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]

This is a list of extra arguments to pass to the docker build command. Specifically, I am naming the image bfraser/consul-nginx and telling Docker to remove intermediate containers after a successful build.

docker.build_dir = "."

This should be fairly self-explanatory: I am simply telling Docker to use the current working directory as the build directory. However, I have some files (including the Vagrantfile) that I do not want to be part of the resulting image, so it is necessary to tell Docker to ignore them. This is accomplished with a file called .dockerignore and mine looks like this:

.git
.vagrant
Vagrantfile

Next, I am using docker.cmd to pass /bin/bash as an extra parameter to the image’s ENTRYPOINT, which allows me to have a shell in the container. A little later, I will show you how this can be useful.

The next line:

docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]

is a list of extra arguments to pass to the ‘docker run‘ command. Specifically, I am providing a custom DNS server and instructing Docker to allocate a TTY and keep STDIN open even if not attached to the container.

Lastly, I am supplying a hash to docker.env which will expose an environment variable named CONSUL_FLAGS to the container. The environment variable contains additional parameters to be used when starting Consul.
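For readers who prefer to see what Vagrant is doing under the hood, the nginx1 container definition above corresponds roughly to a docker run invocation like the following (a sketch; the exact arguments Vagrant generates may differ slightly):

```shell
docker run -d -t -i \
  --name nginx1 \
  --dns=172.17.42.1 \
  -e CONSUL_FLAGS="-node=nginx1 -join=172.17.42.1" \
  bfraser/consul-nginx /bin/bash
```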

With this configuration in place, we can now use Vagrant to create three additional containers, this time running Consul and nginx.

$ JOIN_IP=172.17.42.1 vagrant up --provider=docker

This time if we check the output of ‘consul members‘ we should see our host and six containers: three Consul servers and three nginx servers functioning as Consul clients.

$ consul members -rpc-addr=172.17.42.1:8400
Node     Address           Status  Type    Build  Protocol
nginx1   172.17.0.18:8301  alive   client  0.3.0  2
nginx2   172.17.0.19:8301  alive   client  0.3.0  2
laptop   172.17.42.1:8301  alive   server  0.3.0  2
consul2  172.17.0.9:8301   alive   server  0.3.0  2
consul3  172.17.0.10:8301  alive   server  0.3.0  2
consul1  172.17.0.8:8301   alive   server  0.3.0  2
nginx3   172.17.0.20:8301  alive   client  0.3.0  2

Querying Services

As I mentioned in ‘Where Does Consul Fit In?’ in my original post, Consul is a tool for enabling discovery of services, and it provides two interfaces for doing so: DNS and HTTP. In this section, I’ll show you how we can use each of these interfaces to query for details of services being provided.

First, let’s use the HTTP interface to query which services are being provided by members of the Consul cluster.

$ curl http://172.17.42.1:8500/v1/catalog/services | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    25  100    25    0     0  12722      0 --:--:-- --:--:-- --:--:-- 25000
{
    "consul": [],
    "nginx": []
}

This returns JSON-encoded data which shows that ‘consul’ and ‘nginx’ services are being provided. Great, now let’s query for details of the ‘nginx’ service.

$ curl http://172.17.42.1:8500/v1/catalog/service/nginx | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   362  100   362    0     0   210k      0 --:--:-- --:--:-- --:--:--  353k
[
    {
        "Address": "172.17.0.18",
        "Node": "nginx1",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "172.17.0.19",
        "Node": "nginx2",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "172.17.0.20",
        "Node": "nginx3",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    }
]

We can see here that there are three nodes providing the nginx service, and we have details of the IP address and port they are listening on. Therefore, if we were to open http://172.17.0.18 in a web browser, we would see the ‘Welcome to nginx!’ page.
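A consumer of this API only needs a few lines to turn that response into a connection target. A minimal sketch, run against an abridged copy of the response above (python3 is assumed here purely for JSON parsing):

```shell
# An abridged, saved copy of the catalog response shown above
cat > /tmp/nginx-catalog.json <<'EOF'
[
  {"Address": "172.17.0.18", "ServicePort": 80},
  {"Address": "172.17.0.19", "ServicePort": 80},
  {"Address": "172.17.0.20", "ServicePort": 80}
]
EOF

# Pick the first entry and format it as "address:port"
BACKEND=$(python3 -c '
import json
svc = json.load(open("/tmp/nginx-catalog.json"))[0]
print("%s:%d" % (svc["Address"], svc["ServicePort"]))
')
echo "$BACKEND"    # prints 172.17.0.18:80
```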

Notice how the REST endpoint changed between the first and second curl requests, from /v1/catalog/services to /v1/catalog/service/nginx. Consul provides extensive documentation of the various REST endpoints available via the HTTP API.
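The same pattern extends to the other catalog endpoints. For example, listing the known nodes rather than the services (the endpoint name is as documented for Consul of this vintage):

```shell
# List all nodes in the catalog, analogous to the service queries above
curl http://172.17.42.1:8500/v1/catalog/nodes | python -m json.tool
```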

While the HTTP API is the most powerful method of interacting with Consul, if we are only interested in querying for information about nodes and services, it is also possible to use its DNS server for simple name lookups. Querying for details of the nginx service via the DNS interface is as simple as running the following:

$ dig @172.17.42.1 -p 8600 nginx.service.consul

; <<>> DiG 9.9.5-3-Ubuntu <<>> @172.17.42.1 -p 8600 nginx.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3084
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;nginx.service.consul.		IN	A

;; ANSWER SECTION:
nginx.service.consul.	0	IN	A	172.17.0.19
nginx.service.consul.	0	IN	A	172.17.0.18
nginx.service.consul.	0	IN	A	172.17.0.20

;; Query time: 1 msec
;; SERVER: 172.17.42.1#8600(172.17.42.1)
;; WHEN: Sat Aug 16 22:35:51 EDT 2014
;; MSG SIZE  rcvd: 146

As you can see, while it is certainly possible to develop a client to tightly integrate with Consul through its API, it is also easy to take advantage of its DNS interface and not have to write a client at all.
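The A records above only carry addresses. Consul’s DNS interface can also return the service port via SRV records, which is worth knowing when services do not sit on well-known ports:

```shell
# An SRV lookup returns port information as well as the node addresses
dig @172.17.42.1 -p 8600 nginx.service.consul SRV
```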

Attaching To A Docker Container

I have one last tip, which is especially useful if you are new to Docker: how to attach to your containers.

I mentioned earlier in this post that I was including the following line in my Vagrantfile:

docker.cmd = [ "/bin/bash" ]

What this does is pass /bin/bash as an extra parameter to the image’s ENTRYPOINT instruction, resulting in the /sbin/my_init process spawning a bash shell.

I also instructed Vagrant, via the docker.create_args line, to have Docker allocate a TTY and keep STDIN open even if not attached to the container. This means I can attach to my containers and interact with them through a bash shell as follows (note: press the ‘Enter’ key following the command to get the prompt):

$ docker attach nginx1

root@4b5a98093740:/# 

Once you are done working with the container, you can detach from it by pressing ^P^Q (that’s CTRL-P followed by CTRL-Q).

Summary

With that, we have reached the end of my demonstration. Thanks for sticking with me!

First I described the importance of loose coupling and service discovery in modern service-oriented architectures, and how Consul is one tool that can be used for achieving these design goals.

Then I detailed, by way of a demonstration, how Vagrant and Docker can be used to form a Consul cluster, and how to create a custom Docker image that will run both your application and a Consul agent.

And, last but not least, I showed how you can make use of Consul’s HTTP API and DNS interface to query for information about services provided.

Hopefully you have found these posts useful and now have some ideas about how you can leverage these technologies for managing your infrastructure. I encourage you to provide feedback, and would be very interested in any tips, tricks or recommendations you may have!


About the Author

Bill Fraser is passionate about creating and managing automation because it allows him to focus on the bigger picture for his clients. Known for being a driven and approachable team player, Bill is confident that he can solve any problem related to enterprise infrastructure, and welcomes the challenge. He enjoys working with the leading-edge technology, particularly in the cloud and virtualization space. To have that expertise, and be the one that companies come to – It’s a real win. When he’s not working, Bill can be found playing video games and spending time with family.

4 Comments

Hi,

I think there’s a typo in your Dockerfile here:
RUN echo “ndaemon off

I think it should just say ‘daemon’.
Also, in files/consul/consul-runit I had to change the shell to /bin/bash as the source wouldn’t be effective when using /bin/sh
I also had to remove the line breaks from there (or add \), but that’s probably just a formatting thing.

Thanks for the otherwise excellent article, well written and just the right depth! Your efforts are much appreciated!

Do you intend to keep the series going, perhaps with enabling health checks, DNS forwarding (since currently the containers only use the host as DNS, which doesn’t listen on 53) etc.?

Hi Erik,

Thanks for the feedback!

I agree there must be a formatting issue of some sort. The echo command to turn off the nginx daemon is supposed to read echo “\ndaemon off;”. Also, the runit configuration for Consul should indeed have line continuation characters, and does for me when I view the page.

I would be happy to continue with another post about health checks and DNS forwarding. I should also point out that Jeff Lindsay, the creator of the progrium/consul image I used in part one, has written a tool called Registrator which eliminates the need to use a co-process for service registration. I have yet to work with it, but it looks pretty cool.

Again, thanks for your interest and feedback!

Thanks for the write up. I’m getting an error that is emanating from the consul-runit file. Fix is to replace ‘source’ with a period ‘.’

$ docker run --rm ec4 echo "testing"
*** Running /etc/rc.local…
*** Booting runit daemon…
*** Runit started as PID 7
*** Running echo testing…
testing
*** echo exited with status 0.
./run: 3: ./run: source: not found
*** Shutting down runit daemon (PID 7)…
*** Killing all processes…

Hi, can we monitor the services inside containers on the Consul dashboard? So far I have only seen stopping containers and checking container status available on the dashboard.
