Membrane Software: Technology development studio
Better sysadmin living through Docker
Wednesday, June 14, 2017 18:01 GMT
Posted by: Membrane
Tags: docker, sysadmin, programming
[Image: explosion] An artist's rendition of what happened to our web server. Luckily, the server was able to recover better than this island did.

Call it a fact of life: computer systems fail, sometimes catastrophically. One afternoon not too long ago, a web server of mine became the latest in a long line of systems throughout history to do just that. Midway through just another day of programming in the office, I noticed this server go completely offline without warning or apparent cause. Contacting the data center support staff, we soon discovered that the server had been accidentally wiped and reinstalled in what I can only assume was a bad click or fat-finger error. This server had been running several web sites, including membranesoftware.com and the forums we use for blog comments, but this mishap left it dead in the water, a purposeless brick.

Back in the old days, fixing our dead server would have meant carefully reinstalling and reconfiguring the many software packages involved in a set of web sites, including nginx, apache, PHP, and MongoDB, not to mention any custom software on the sites that we hope runs exactly the same way when transported into a shiny new system environment (hint: sometimes it doesn't). Dealing with all of this mess takes time and effort, which is not exactly ideal when there's other work to be done. In this case, however, we were prepared, and reduced an afternoon's worth of work to just a few commands. Today, we'll see how that was possible thanks to Docker, a containerization layer providing an efficient and reliable paradigm for deploying software applications. We'll also look at a working example project on GitHub that anyone could use to run a web server on any host able to start a Docker container.

The Docker basics
Using Docker, we package each web application into a container image, described in the Docker documentation as follows:
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.
In the case of our dead web server, this sounds like exactly what we need to get back up and running quickly. Traditional applications depend on their host system to provide a runtime environment and library dependencies, making them vulnerable if that host system happens to get wiped and reinstalled. With Docker, our applications can bring along their own environment and dependencies, allowing us to take the application image that worked on the old system and run it reliably on the new system without modification.
To build a container image, Docker reads from a Dockerfile, which we fill with commands for setup of our particular application. A very basic Dockerfile might contain only a single FROM line specifying a copy of the latest Ubuntu Linux environment:
Listing: Dockerfile
1  FROM ubuntu:latest
We can then execute commands with the docker utility to build an image and run it as a container.
01  $ docker build -t testapp:latest .
02  Sending build context to Docker daemon  45.06kB
03  Step 1/1 : FROM ubuntu:latest
04  latest: Pulling from library/ubuntu
05  bd97b43c27e3: Pull complete 
06  6960dc1aba18: Pull complete 
07  2b61829b0db5: Pull complete 
08  1f88dc826b14: Pull complete 
09  73b3859b1e43: Pull complete 
10  Digest: sha256:ea1d854d38be82f54d39efe2c67000bed1b03348bcc2f3dc094f260855dff368
11  Status: Downloaded newer image for ubuntu:latest
12   ---> 7b9b13f7b9c0
13  Successfully built 7b9b13f7b9c0
14  Successfully tagged testapp:latest
15  $ docker images
16  REPOSITORY  TAG     IMAGE ID      CREATED      SIZE
17  testapp     latest  7b9b13f7b9c0  11 days ago  118MB
18  ubuntu      latest  7b9b13f7b9c0  11 days ago  118MB
19  $ docker run -it --name=testapp_latest testapp:latest bash
20  root@d09919c1143d:/# ls
21  bin   boot  dev  etc  home  lib  lib64  media  mnt  opt
22  proc  root  run  sbin  srv  sys  tmp    usr    var
23  root@d09919c1143d:/# ps aux
24  USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
25  root         1  0.0  0.0  18232  1988 ?        Ss   23:40   0:00 bash
26  root        12  0.0  0.0  34416  1456 ?        R+   23:40   0:00 ps aux
27  root@d09919c1143d:/# echo "test file" > /etc/testfile
28  root@d09919c1143d:/# exit
29  exit
30  $ docker ps -a
31  CONTAINER ID   IMAGE            COMMAND   CREATED        STATUS                     PORTS  NAMES
32  d09919c1143d   testapp:latest   "bash"    8 minutes ago  Exited (0) 27 seconds ago         testapp_latest
33  $ docker cp d09919c1143d:/etc/testfile .
34  $ cat testfile
35  test file
36  $ docker rm d09919c1143d
37  d09919c1143d
38  $ docker ps -a
39  CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
40  $
  • 01 - The docker build command reads our Dockerfile and executes the FROM ubuntu:latest directive.
  • 15 - The docker images command shows our new "testapp:latest" image. Note that its IMAGE ID value is identical to that of the ubuntu:latest image, because the two images are indeed identical. Thanks to this fact, Docker is able to save resources by sharing storage layers; two images listed at 118MB consume only 118MB of underlying storage instead of 236MB as might be expected.
  • 19 - The docker run command executes a container based on the named image. The -it and bash arguments cause docker to run a bash command shell in our Ubuntu environment.
  • 20 - Our terminal descends into a bash session inside the docker container. We appear as the "root" user inside the container, and can browse inside a file system that looks like the real thing but actually exists only inside the container. Running ps inside the container shows our bash process as PID 1 and not much else.
  • 27 - We create a test file inside the container. This file is created only in the container environment and not in the host system. However, later on we'll be able to bring this file to the outside world.
  • 30 - After exiting the bash shell, we return to our base system shell, run the docker ps command, and see our container. Note that the container's status is listed as "Exited"; when the bash process started at line 19 ended, the container stopped as well. This is why we must provide the -a argument to docker ps; without it, docker shows only containers that are still running.
  • 33 - The docker cp command pulls a file from a container environment into the base system. In this case, we try this command with the "/etc/testfile" file we created earlier, and see that it does indeed come through and contain the expected content.
  • 36 - Now that we're done with our container run, we use the docker rm command to delete its environment. After container deletion, it's no longer possible to recover "/etc/testfile", and docker ps shows no items.
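As an aside, had we wanted to keep the container's changes rather than discard them, the docker commit command can capture a container's file system as a new image. A brief sketch, run before the docker rm step above (the "testapp:snapshot" tag name is our own invention for this example):

```
$ docker commit d09919c1143d testapp:snapshot
$ docker run --rm testapp:snapshot cat /etc/testfile
test file
```

The --rm argument tells docker to delete the container environment automatically once its command exits, saving us a manual docker rm afterward.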
Now add webs
Running bash to explore inside a base ubuntu:latest container is a fun exercise, but not a terribly useful one. To make an application container for apache/PHP web services, we add more lines to our Dockerfile.
Listing: Dockerfile
1  FROM ubuntu:latest
2  ENV DEBIAN_FRONTEND noninteractive
3  RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends apache2 php libapache2-mod-php
4  EXPOSE 80
5  CMD /usr/sbin/apachectl -D FOREGROUND
With these commands, we provide instructions to execute in sequence on top of the base image. In brief, we run apt-get to update system packages and then install apache and PHP, inform Docker that our application expects to listen on port 80/tcp, and specify /usr/sbin/apachectl -D FOREGROUND as the default command the container should run, thereby replacing bash as its primary process.
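Worth noting: Docker also accepts an exec form of CMD, written as a JSON array. With the shell form used above, docker wraps the command in /bin/sh -c, which is why that wrapper later shows up in the docker ps COMMAND column; the exec form runs apachectl directly as the container's PID 1, so signals from docker stop reach it without an intermediary. A sketch of the same Dockerfile using that form:

```
FROM ubuntu:latest
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends apache2 php libapache2-mod-php
EXPOSE 80
# Exec form: no shell wrapper, apachectl becomes PID 1
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
```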
Executing docker commands with our new Dockerfile lets us rebuild and run this container image, now a specialized environment for running apache/PHP instead of a generic Ubuntu system. A couple of curl commands verify that this container does indeed provide web service.
01  $ docker build -t testapp:latest .
02  Sending build context to Docker daemon  46.08kB
03  Step 1/5 : FROM ubuntu:latest
04   ---> 7b9b13f7b9c0
05  Step 2/5 : ENV DEBIAN_FRONTEND noninteractive
06   ---> Running in 8582e499ee6b
07   ---> 6b7287e8e5b5
08  Removing intermediate container 8582e499ee6b
09  Step 3/5 : RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends apache2 php libapache2-mod-php
10   ---> Running in 4c87460112c3
11  Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
12  Get:2 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
13  Get:3 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [35.8 kB]
14  ... (long output from apt-get as it downloads and installs packages)
15   ---> 129048e56fbe
16  Removing intermediate container 4c87460112c3
17  Step 4/5 : EXPOSE 80
18   ---> Running in e9b22e422412
19   ---> 49c8c1470d79
20  Removing intermediate container e9b22e422412
21  Step 5/5 : CMD /usr/sbin/apachectl -D FOREGROUND
22   ---> Running in 2f56815789f9
23   ---> e84f6dd433e5
24  Removing intermediate container 2f56815789f9
25  Successfully built e84f6dd433e5
26  Successfully tagged testapp:latest
27  $ docker run -d -p 8080:80 --name=testapp_latest testapp:latest 
28  8189661ca86d32129c76ead8381dde0a4764d0c3fe4ef10ac66b144376f5a44c
29  $ docker ps
30  CONTAINER ID  IMAGE           COMMAND                 CREATED        STATUS        PORTS                  NAMES
31  8189661ca86d  testapp:latest  "/bin/sh -c '/usr/..."  8 seconds ago  Up 8 seconds  0.0.0.0:8080->80/tcp   testapp_latest
32  $ curl -v http://localhost:8080/
33  * Connected to localhost (127.0.0.1) port 8080 (#0)
34  > GET / HTTP/1.1
35  > User-Agent: curl/7.35.0
36  > Host: localhost:8080
37  > Accept: */*
38  > 
39  < HTTP/1.1 200 OK
40  < Date: Wed, 14 Jun 2017 00:03:57 GMT
41  < Server: Apache/2.4.18 (Ubuntu)
42  < Last-Modified: Tue, 13 Jun 2017 23:59:32 GMT
43  < ETag: "2c39-551e03a0ce100"
44  < Accept-Ranges: bytes
45  < Content-Length: 11321
46  < Vary: Accept-Encoding
47  < Content-Type: text/html
48  < 
49  ... (content from the HTTP response)
50  $ docker stop 8189661ca86d
51  $ curl -v http://localhost:8080/
52  * connect to 127.0.0.1 port 8080 failed: Connection refused
53  * Failed to connect to localhost port 8080: Connection refused
54  * Closing connection 0
55  curl: (7) Failed to connect to localhost port 8080: Connection refused
  • 01 - The docker build command reads our Dockerfile and executes a new set of directives, resulting in a new image that is no longer simply identical to ubuntu:latest.
  • 27 - The docker run command uses different arguments this time. -d puts our container into the background, in contrast to the -it argument from the previous command that let us run a bash shell in the foreground. -p 8080:80 instructs docker to map host port 8080/tcp to port 80/tcp in the container. In other words, the container listens on port 80/tcp as usual, while Docker forwards connections arriving at host port 8080/tcp to it.
  • 32 - A curl command requesting "http://localhost:8080/" succeeds and is able to fetch the default index page from the Ubuntu webroot. Then, we stop the container and run curl again. As expected, it fails because the testapp:latest container is no longer around to receive this request.
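This port mapping can also be inspected after the fact with the docker port command, which reports each published container port and the host address it's bound to. A quick sketch, run while the container from the session above was still up:

```
$ docker port testapp_latest
80/tcp -> 0.0.0.0:8080
```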
Building in custom content
Our base apache/PHP server is now ready to serve content, but its webroot holds only the default index.html page that comes with Ubuntu. To finish the job and create a container image capable of serving a custom website, we again add lines to the Dockerfile.
Listing: Dockerfile
01  FROM ubuntu:latest
02  ENV DEBIAN_FRONTEND noninteractive
03  RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends apache2 php libapache2-mod-php
04  
05  # Source bundles are expected to be populated by the top-level make process
06  COPY www.tar.gz /var/www/
07  RUN cd /var/www && sync && rm -rf html && sync && tar zxvf www.tar.gz && mv www html && rm -f www.tar.gz && chown -R www-data:www-data html && chmod -R u=rX,g=rX,o=rX html
08  
09  EXPOSE 80
10  CMD /usr/sbin/apachectl -D FOREGROUND
The new COPY and RUN commands at lines 06-07 specify that a "www.tar.gz" file should be unpacked into the container's webroot, replacing the default /var/www/html content. Now all we need is a tar file packed with our web content. As the comment at line 05 indicates, we expect this bundle to be provided by some other step in the process.
But what might that process be? For a demonstration, we've created a sample docker-apache-php project on GitHub. By cloning this git repo and running make commands, we can build our desired container image, run it, and get web content from it with curl.
Listing: bash session
01  $ git clone https://github.com/membranesoftware/docker-apache-php.git
02  Cloning into 'docker-apache-php'...
03  remote: Counting objects: 42, done.
04  remote: Compressing objects: 100% (26/26), done.
05  remote: Total 42 (delta 11), reused 42 (delta 11), pack-reused 0
06  Unpacking objects: 100% (42/42), done.
07  Checking connectivity... done.
08  $ cd docker-apache-php
09  $ make
10  tar czf www.tar.gz www
11  cp -v www.tar.gz docker
12  ‘www.tar.gz’ -> ‘docker/www.tar.gz’
13  BASEPATH=/home/arbiter/temp/3/docker-apache-php BUILDTYPE=normal BUILDVERSION=latest make -C docker all
14  make[1]: Entering directory `/home/arbiter/temp/3/docker-apache-php/docker'
15  Executing build type: normal
16  docker build -t docker-apache-php:latest .
17  Sending build context to Docker daemon  6.144kB
18  Step 1/7 : FROM ubuntu:latest
19   ---> 7b9b13f7b9c0
20  Step 2/7 : ENV DEBIAN_FRONTEND noninteractive
21   ---> Using cache
22   ---> 6b7287e8e5b5
23  Step 3/7 : RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends apache2 php libapache2-mod-php
24   ---> Using cache
25   ---> 129048e56fbe
26  Step 4/7 : COPY www.tar.gz /var/www/
27   ---> ad0e0fb15b13
28  Removing intermediate container 0a3ed4743804
29  Step 5/7 : RUN cd /var/www && sync && rm -rf html && sync && tar zxvf www.tar.gz && mv www html && rm -f www.tar.gz && chown -R www-data:www-data html && chmod -R u=rX,g=rX,o=rX html
30   ---> Running in 30d2f03e565c
31  www/
32  www/index.php
33   ---> 3ea406f4a69c
34  Removing intermediate container 30d2f03e565c
35  Step 6/7 : EXPOSE 80
36   ---> Running in 6a74d67da3b2
37   ---> f99708af662d
38  Removing intermediate container 6a74d67da3b2
39  Step 7/7 : CMD /usr/sbin/apachectl -D FOREGROUND
40   ---> Running in 3fbb1067409c
41   ---> 83e162ecd1de
42  Removing intermediate container 3fbb1067409c
43  Successfully built 83e162ecd1de
44  Successfully tagged docker-apache-php:latest
45  make[1]: Leaving directory `/home/arbiter/temp/3/docker-apache-php/docker'
46  rm -f www.tar.gz
47  $ docker images
48  REPOSITORY                          TAG                 IMAGE ID            CREATED             SIZE
49  docker-apache-php                   latest              83e162ecd1de        36 seconds ago      266MB
50  $ make run
51  BASEPATH=/home/arbiter/temp/3/docker-apache-php BUILDTYPE=normal BUILDVERSION=latest make -C docker run
52  make[1]: Entering directory `/home/arbiter/temp/3/docker-apache-php/docker'
53  if [ "normal" = "dev" -a ! -d www ]; then mkdir www && chmod 755 www; fi;
54  docker run -d -p 8080:80  --name=docker-apache-php_latest docker-apache-php:latest
55  1c6c73cc9f024361a7be3f46f67e02172029e5953ed4a511f9b23c29abd69e1e
56  make[1]: Leaving directory `/home/arbiter/temp/3/docker-apache-php/docker'
57  $ docker ps
58  CONTAINER ID  IMAGE                     COMMAND                 CREATED         STATUS         PORTS                 NAMES
59  1c6c73cc9f02  docker-apache-php:latest  "/bin/sh -c '/usr/..."  19 seconds ago  Up 19 seconds  0.0.0.0:8080->80/tcp  docker-apache-php_latest
60  $ curl -v http://localhost:8080/
61  > GET / HTTP/1.1
62  > User-Agent: curl/7.35.0
63  > Host: localhost:8080
64  > Accept: */*
65  > 
66  < HTTP/1.1 200 OK
67  < Date: Wed, 14 Jun 2017 16:32:45 GMT
68  < Server: Apache/2.4.18 (Ubuntu)
69  < Content-Length: 8
70  < Content-Type: text/html; charset=UTF-8
71  < 
72  PHP test
73  $
  • 01 - The git clone command fetches a copy of our GitHub project and writes it to a "docker-apache-php" directory. Then, we change to that directory and run make to execute the Docker build.
  • 21 - During its Dockerfile processing, docker reports that it's "Using cache" to execute this step. In many instances, docker can save time by referencing saved results from previous commands. In the case of Step 3/7 at line 23, this feature does indeed save quite a bit of time by preventing another round of downloads by apt-get; instead, we can simply load the image that resulted from running this command last time.
  • 50 - The make run command executes docker run to launch the container image with listening port 8080/tcp, just as in our previous demonstrations. We can then use curl to get our content.
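For reference, the top-level make steps seen in the session (pack the webroot, hand the tarball to the docker subdirectory build, clean up) could be sketched as a Makefile along these lines. This is a reconstruction from the output above, and the real project's Makefile may differ in detail:

```
# Sketch of the top-level Makefile; recipe lines must be tab-indented.
BUILDTYPE ?= normal
BUILDVERSION ?= latest

all:
	tar czf www.tar.gz www
	cp -v www.tar.gz docker
	BASEPATH=$(CURDIR) BUILDTYPE=$(BUILDTYPE) BUILDVERSION=$(BUILDVERSION) $(MAKE) -C docker all
	rm -f www.tar.gz
```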
We see that our curl request against the container returned the string "PHP test". This value comes from the web content in our git repo, specifically the www/index.php file.
Listing: PHP
1  <?php echo ("PHP test"); ?>
From here, we could make any number of changes to files in the docker-apache-php www directory, and each build would generate a container image with all of those changes baked right in. An example of this concept in action is the membranesoftware.com site, which runs as just this type of container: the same docker build process wrapped around custom web content, in this case a bunch of blog entries and Star Commander game files.
Up and running again in minutes
[Image: working] Back to work! Programming work doesn't photograph quite so well as fiery sparking work does, though.
All of this brings us back to our happy afternoon of server crashes. That afternoon was indeed happier than it might have been, thanks to our thoughtful prep work. After the data center technicians installed a fresh new Linux host, I simply installed Docker Community Edition and copied over a set of container images. A few docker run commands after that, and the system was back up with all sites running as expected in a brand new environment. It's great when system emergencies like this are over with quickly, so that we get time to pursue what we really want to work on: not looking after maintenance tasks, but developing new technologies.
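For anyone curious about the mechanics of "copied over a set of container images": no registry is required. The docker save and docker load commands move images around as plain tar files, so a session along these lines does the job (hostnames and file names here are hypothetical):

```
$ docker save -o sites.tar docker-apache-php:latest
$ scp sites.tar admin@newhost:
$ ssh admin@newhost
newhost$ docker load -i sites.tar
newhost$ docker run -d -p 80:80 --name=docker-apache-php_latest docker-apache-php:latest
```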
 
What did you think of this article? Leave a comment or send us a note. Your feedback is appreciated!