From Docker in Practice, Second Edition by Ian Miell and Aidan Hobson Sayers

This article, adapted from chapter 1 of Docker in Practice, Second Edition, jumps in feet first and gets you started on making a simple application image with Docker. You will also explore some key Docker features like Dockerfiles, tagging an image for easy reference, and more.




Building a Docker application

We’re going to get our hands dirty now by building a simple “to-do” application (todoapp) image with Docker. In the process, you’ll see some key Docker features like Dockerfiles, image re-use, port exposure, and build automation. Here’s what you’ll learn in the next 10 minutes:

  • How to create a Docker image using a Dockerfile

  • How to tag a Docker image for easy reference

  • How to run your new Docker image

A to-do app is one that helps you keep track of things you want to get done. The app we’ll build will store and display short strings of information that can be marked as done, presented in a simple web interface. Figure 1 shows what we’ll achieve by doing this.


Figure 1. Building a Docker application


The details of the application are unimportant. We’re going to demonstrate that, from the single short Dockerfile we’re about to give you, you can reliably build, run, stop, and start an application in the same way on both your host and ours, without needing to worry about application installations or dependencies. This is a key part of what Docker gives us: reliably reproduced, easily managed, and easily shared development environments. This means no more complex or ambiguous installation instructions to follow and potentially get lost in.

Ways to create a new Docker image

There are four standard ways to create Docker images. Table 1 itemizes these methods.

Table 1 Options for creating Docker images

Method: Docker commands / “by hand”
Description: Fire up a container with docker run and input the commands to create your image on the command line. Create a new image with docker commit.
See technique: Chapter 3, “The ‘save game’ approach to development – a cheap source control”

Method: Dockerfile
Description: Build from a known base image, and specify the build with a limited set of simple commands.
See technique: Discussed shortly

Method: Dockerfile and configuration management (CM) tool
Description: Same as Dockerfile, but hand over control of the build to a more sophisticated CM tool.
See technique: Chapter 5, “Building images with Chef Solo”

Method: Scratch image and import a set of files
Description: From an empty image, import a TAR file with the required files.
See technique: Chapter 3, “Converting your VM to a container”

The first “by hand” option is fine if you’re doing proofs of concept to see whether your installation process works. At the same time, you should be keeping notes about the steps you’re taking so that you can return to the same point if you need to.
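
To make the “by hand” option concrete, here’s a minimal sketch of the flow, assuming a Debian-based ubuntu base image; the container ID, the package installed, and the image name my-handmade-image are placeholders rather than examples from the book:

$ docker run -i -t ubuntu /bin/bash            # start an interactive container from a base image
root@abc123def456:/# apt-get update && apt-get install -y nodejs   # make your changes by hand
root@abc123def456:/# exit
$ docker commit abc123def456 my-handmade-image # snapshot the container's filesystem as a new image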

At some point, you’re going to want to define the steps for creating your image. This is the second option (and the one we’ll use here). For more complex builds, you may want to go for the third option, particularly when the Dockerfile features aren’t sophisticated enough for your image’s needs.

The final option builds from a null image by overlaying the set of files required to run the image. This is useful if you want to import a set of self-contained files created elsewhere, but it’s rarely seen in mainstream use. Let’s have a look at the Dockerfile method for now.

Writing a Dockerfile

A Dockerfile is a text file with a series of commands in it. Here’s the Dockerfile we’re going to use for this example.

 
FROM node                                                        ❶
MAINTAINER ian.miell@gmail.com                                   ❷
RUN git clone -q https://github.com/docker-in-practice/todo.git  ❸
WORKDIR todo                                                     ❹
RUN npm install > /dev/null                                      ❺
EXPOSE 8000                                                      ❻
CMD ["npm","start"]                                              ❼
 

❶ Define the base image.

❷ Declare the maintainer.

❸ Clone the todoapp code.

❹ Move to the new cloned directory.

❺ Run the node package manager’s install command (npm).

❻ Specify that containers from the built image should listen on this port.

❼ Specify which command will be run on startup.

You begin the Dockerfile by defining the base image with the FROM command ❶. This example uses a Node.js image so you have access to the Node.js binaries. The official Node.js image is called node.
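
Note that the bare node tag pulls whatever the latest official Node.js image happens to be when you build. If you want more repeatable builds, you can pin a specific tag; this is only an illustrative sketch, and the exact tag shown isn’t the one used in this article:

# Pin a published Node.js tag rather than the floating default; 7 is just an example
FROM node:7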

Next, you declare the maintainer with the MAINTAINER command ❷. In this case, we’re using one of our email addresses, but you can replace this with your own reference, because it’s your Dockerfile now. This line isn’t required to make a working Docker image, but it’s good practice to include one. At this point, the build has inherited the state of the node container, and you’re ready to work on top of it.
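
One caveat: newer Docker releases deprecate MAINTAINER in favor of a LABEL, which records the same information as ordinary image metadata. If your version of Docker warns about MAINTAINER, this equivalent line (with your own address substituted) should work:

# LABEL replaces the deprecated MAINTAINER instruction in newer Docker versions
LABEL maintainer="ian.miell@gmail.com"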

Next, you clone the todoapp code with a RUN command ❸. This uses the specified command to retrieve the code for the application, running git within the container. Git is installed inside the base node image in this case, but you can’t take this kind of thing for granted.
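
If you ever switch to a slimmer base image that doesn’t ship git, this clone step will fail. As a hedged sketch (not part of this article’s Dockerfile), on a Debian- or Ubuntu-based image you could install git first:

# Only needed if the chosen base image doesn't already include git (Debian/Ubuntu-based example)
RUN apt-get update && apt-get install -y git
RUN git clone -q https://github.com/docker-in-practice/todo.git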

Now you move to the new cloned directory with a WORKDIR command ❹. Not only does this change the working directory for the rest of the build, but the last WORKDIR command also determines which directory you’re in by default when you start up your container from your built image.

Next, you run the node package manager’s install command (npm) ❺. This will set up the dependencies for your application. You aren’t interested in the output here, so you redirect it to /dev/null.

Because port 8000 is used by the application, you use the EXPOSE command to tell Docker that containers from the built image should listen on this port ❻.

Finally, you use the CMD command to tell Docker which command will be run on startup of the container ❼.

This simple example illustrates several key features of Docker and Dockerfiles. A Dockerfile is a simple sequence of a limited set of commands, run in strict order. These commands affect the files and metadata of the resulting image. Here the RUN command affects the filesystem by checking out and installing the application, and the EXPOSE, CMD, and WORKDIR commands affect the metadata of the image.
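
For comparison, the same limited command set can build the image without any network access at build time by copying a local checkout of the code into the image rather than cloning it. This variant is only a sketch, assuming the todo source sits in the same directory as the Dockerfile; it isn’t the listing used in this article:

FROM node
LABEL maintainer="you@example.com"    # placeholder address
COPY . /todo                          # copy the local source tree instead of cloning it
WORKDIR /todo
RUN npm install > /dev/null
EXPOSE 8000
CMD ["npm","start"]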

Building a Docker image

You’ve defined your Dockerfile’s build steps. Now you’re going to build the Docker image from it by typing the command shown in figure 2.


Figure 2. Docker build command
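
The figure isn’t reproduced here, but given the build-context upload shown in the output below, it amounts to the standard build invocation run from the directory containing the Dockerfile, with the trailing dot supplying that directory as the build context:

$ docker build .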


The output you’ll see will be similar to this:

 
Sending build context to Docker daemon  2.048kB     ❶
Step 1/7 : FROM node                                ❷
 ---> 2ca756a6578b                                  ❸
Step 2/7 : MAINTAINER ian.miell@gmail.com
 ---> Running in bf73f87c88d6
 ---> 5383857304fc
Removing intermediate container bf73f87c88d6        ❹
Step 3/7 : RUN git clone -q https://github.com/docker-in-practice/todo.git
 ---> Running in 761baf524cc1
 ---> 4350cb1c977c
Removing intermediate container 761baf524cc1
Step 4/7 : WORKDIR todo
 ---> a1b24710f458
Removing intermediate container 0f8cd22fbe83
Step 5/7 : RUN npm install > /dev/null
 ---> Running in 92a8f9ba530a
npm info it worked if it ends with ok               ❺
[...]
npm info ok
 ---> 6ee4d7bba544
Removing intermediate container 92a8f9ba530a
Step 6/7 : EXPOSE 8000
 ---> Running in 8e33c1ded161
 ---> 3ea44544f13c
Removing intermediate container 8e33c1ded161
Step 7/7 : CMD npm start
 ---> Running in ccc076ee38fe
 ---> 66c76cea05bb
Removing intermediate container ccc076ee38fe
Successfully built 66c76cea05bb                     ❻
 

❶ Docker uploads the files and directories under the path supplied to the docker build command.

❷ Each build step is numbered sequentially from 1 and output with the command.

❸ Each command results in a new image being created, and the image ID is output.

❹ To save space, each intermediate container is removed before continuing.

❺ Debug output of the build appears here (and is edited out of this listing).

❻ Final image ID for this build, ready to tag.

You now have a Docker image with an image ID (“66c76cea05bb” in the preceding example, but your ID will be different). It can be cumbersome to keep referring to this ID, so you can tag it for easier reference:


Figure 3. Adding tag to an image
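
The figure isn’t reproduced here either; the command it shows amounts to docker tag, taking the image ID from your build and the name you want to refer to it by:

$ docker tag 66c76cea05bb todoapp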


Type the preceding command, replacing 66c76cea05bb with whatever image ID was generated for you.

You can now build your own copy of a Docker image from a Dockerfile, reproducing an environment defined by someone else!

Running a Docker container

You’ve built and tagged your Docker image. Now you can run it as a container:

 
$ docker run -i -t -p 8000:8000 --name example1 todoapp  ❶
npm install
npm info it worked if it ends with ok
npm info using npm@2.14.4
npm info using node@v4.1.1
npm info prestart todomvc-swarm@0.0.1
 
> todomvc-swarm@0.0.1 prestart /todo                     ❷
> make all
 
npm install
npm info it worked if it ends with ok
npm info using npm@2.14.4
npm info using node@v4.1.1
npm WARN package.json todomvc-swarm@0.0.1 No repository field.
npm WARN package.json todomvc-swarm@0.0.1 license should be a valid SPDX license expression
npm info preinstall todomvc-swarm@0.0.1
npm info package.json statics@0.1.0 license should be a valid SPDX license expression
npm info package.json react-tools@0.11.2 No license field.
npm info package.json react@0.11.2 No license field.
npm info package.json node-jsx@0.11.0 license should be a valid SPDX license expression
npm info package.json ws@0.4.32 No license field.
npm info build /todo
npm info linkStuff todomvc-swarm@0.0.1
npm info install todomvc-swarm@0.0.1
npm info postinstall todomvc-swarm@0.0.1
npm info prepublish todomvc-swarm@0.0.1
npm info ok
if [ ! -e dist/ ]; then mkdir dist; fi
cp node_modules/react/dist/react.min.js dist/react.min.js
 
LocalTodoApp.js:9:    // TODO: default english version
LocalTodoApp.js:84:            fwdList = this.host.get('/TodoList#'+listId); // TODO fn+id sig
TodoApp.js:117:        // TODO scroll into view
TodoApp.js:176:        if (i>=list.length()) { i=list.length()-1; } // TODO .length
local.html:30:    
model/TodoList.js:29:        // TODO one op - repeated spec? long spec?
view/Footer.jsx:61:        // TODO: show the entry's metadata
view/Footer.jsx:80:            todoList.addObject(new TodoItem()); // TODO create default
view/Header.jsx:25:        // TODO list some meaningful header (apart from the id)
 
npm info start todomvc-swarm@0.0.1
 
> todomvc-swarm@0.0.1 start /todo
> node TodoAppServer.js
 
Swarm server started port 8000
^Cshutting down http-server...                      ❸
closing swarm host...
swarm host closed
npm info lifecycle todomvc-swarm@0.0.1~poststart: todomvc-swarm@0.0.1
npm info ok
$ docker ps -a                                      ❹
CONTAINER ID  IMAGE    COMMAND      CREATED        STATUS                    PORTS  NAMES
b9db5ada0461  todoapp  "npm start"  2 minutes ago  Exited (0) 2 minutes ago         example1
$ docker start example1                             ❺
example1
$ docker ps
CONTAINER ID  IMAGE    COMMAND      CREATED        STATUS         PORTS                   NAMES
b9db5ada0461  todoapp  "npm start"  8 minutes ago  Up 10 seconds  0.0.0.0:8000->8000/tcp  example1 ❻
$ docker diff example1                              ❼
C /root
C /root/.npm
C /root/.npm/_locks
C /root/.npm/anonymous-cli-metrics.json
C /todo                                             ❽
A /todo/.swarm                                      ❾
A /todo/.swarm/_log
A /todo/dist
A /todo/dist/LocalTodoApp.app.js
A /todo/dist/TodoApp.app.js
A /todo/dist/react.min.js
C /todo/node_modules
 

❶ The docker run subcommand starts the container, -p maps the container’s port 8000 to port 8000 on the host machine, --name gives the container a unique name, and the last argument is the image.

❷ The output of the container’s starting process is sent to the terminal.

❸ Hit CTRL-C here to terminate the process and the container.

❹ Run this command to see containers that have been started and not yet removed, along with an ID and status (like a process).

❺ Restart the container, this time in the background.

❻ Run the ps command again to see the changed status.

❼ The docker diff subcommand shows you what files have been affected since the image was instantiated as a container.

❽ The /todo directory has been changed (C).

❾ The /todo/.swarm directory has been added (A).

The docker run subcommand starts up the container ❶. The -p flag maps the container’s port 8000 to port 8000 on the host machine, so you should now be able to navigate with your browser to http://localhost:8000 to view the application. The --name flag gives the container a unique name you can refer to later for convenience. The last argument is the image name.
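
The -i and -t flags are what attach your terminal to the container, which is why the output streams to your screen and CTRL-C stops it. As an alternative sketch (not part of the transcript above), you could run the same image detached from the start with -d and follow its output with docker logs; the name example1-bg is a placeholder:

$ docker run -d -p 8000:8000 --name example1-bg todoapp  # start detached, in the background
$ docker logs -f example1-bg                             # follow its output (CTRL-C stops following, not the container)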

Once the container was started, we hit CTRL-C to terminate the process and the container ❸. You can run the ps command to see the containers that have been started but not removed ❹. Note that each container has its own container ID and status, analogous to a process. Its status is Exited, but you can restart it ❺. After you do, notice how the status has changed to Up and the port mapping from container to host machine is now displayed ❻.

The docker diff subcommand shows you which files have been affected since the image was instantiated as a container ❼. In this case, the todo directory has been changed (C) ❽, and the other listed files have been added (A) ❾. No files have been deleted (D), which is the other possibility.
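
When you’ve finished experimenting, you can tidy up. These commands aren’t part of the walkthrough above, but they’re the standard cleanup steps for the container and, optionally, the image:

$ docker stop example1    # stop the running container
$ docker rm example1      # remove the stopped container
$ docker rmi todoapp      # optionally, remove the tagged image as well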

As you can see, the fact that Docker “contains” your environment means that you can treat it as an entity on which actions can be predictably performed. This gives Docker its breadth of power: you can affect the software lifecycle from development to production and maintenance. Next you’re going to learn about layering, another key concept in Docker.

Docker layering

Docker layering helps you manage a big problem that arises when you use containers at scale. Imagine what would happen if you started up hundreds, or even thousands, of to-do apps, and each of those required a copy of the files to be stored somewhere. As you can imagine, disk space would run out pretty quickly! By default, Docker internally uses a copy-on-write mechanism to reduce the amount of disk space required (see figure 4). Whenever a running container needs to write to a file, it records the change by copying the item to a new area of disk. When a docker commit is performed, this new area of disk is frozen and recorded as a layer with its own identifier.
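
You can see these layers on the image you just built with docker history, which lists each layer alongside the Dockerfile step that created it (the IDs and sizes on your machine will differ):

$ docker history todoapp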


Figure 4. Docker layering diagram


This partly explains how Docker containers can start up so quickly: they have nothing to copy, because all the data has already been stored as the image.

Copy-on-write

Copy-on-write is a standard optimization strategy used in computing. When you create a new object (of any type) from a template, rather than copying the entire set of data required, you only copy data over when it’s changed. Depending on the use case, this can save considerable resources.

Figure 5 illustrates that the to-do app you’ve built has three layers you’re interested in.


Figure 5. The to-do app’s filesystem layering in Docker


Because the layers are static, if you need anything to change in a higher layer you only need to build on top of the image you wish to take as a reference. In the to-do app, you built from the publicly available node image and layered changes on top.

All three layers can be shared across multiple running containers, much as a shared library can be shared in memory across multiple running processes. This is a vital feature for operations, allowing the running of numerous containers based on different images on host machines without running out of disk space.
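
You can demonstrate this sharing directly: a second container started from the same image reuses the image’s layers on disk and adds only a thin writable layer of its own. This is a sketch; the name example2 and host port 8001 are arbitrary choices, not from the article:

$ docker run -d -p 8001:8000 --name example2 todoapp   # second container sharing the same image layers
$ docker diff example2                                 # shows only this container's own writes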

Imagine that you’re running the to-do app as a live service for paying customers. You can scale up your offering to a large number of users. If you’re developing, you can spin up many different environments on your local machine at once. If you’re moving through test, you can run many more tests simultaneously, and far more quickly than before. All these things are made possible by layering.

By building and running an application with Docker, you’ve begun to see the power that Docker can bring to your workflow. Reproducing and sharing specific environments, and being able to land these in various places, gives you both flexibility and control over development.

That’s all for this article.

