Docker basics

Before we dive into writing Docker Compose files, let’s spend some time picking up the basics of running Docker images.

For most of this section, you will only need a text editor and a Bash shell (e.g. Git Bash on Windows).

Make sure Docker is running before you start this section.

We recommend that you try running the shell commands that we show in this section for yourself. These will be the lines starting with $ or / # (don’t type the $ or / #: these are just the shell prompting for a command).

If using a lab machine, we recommend that you run these commands from the C: drive. From Git Bash, you would switch to the relevant folder first:

$ cd /c/Users/yourUsername
$ mkdir -p docker-lab
$ cd docker-lab
$ export MSYS_NO_PATHCONV=1

Note

The export MSYS_NO_PATHCONV=1 line prevents the MSYS layer underlying Git Bash from converting Unix-style paths into Windows-style paths (e.g. rewriting /apps into something like C:/Program Files/Git/apps before Docker sees it), which breaks the bind mounts used in some of the commands in this section.

You will need this line for your own machine as well, if you are doing this section from Git Bash on Windows.

Running the Hello World image

The most basic test you can perform on a Docker installation is to run the hello-world image. Run this shell command:

$ docker run hello-world

You should see a greeting from Docker, along with some indications of what went on:

  • Your Docker client connected to the Docker daemon process (started by Docker Desktop on the Windows lab machines).
  • The daemon downloaded (“pulled”) the hello-world image matching your CPU architecture from Docker Hub.
  • The daemon created a container that uses the image.
  • The daemon streamed the output from the container to your terminal, until it completed its execution.
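
The steps above can be sketched with Docker's lower-level subcommands, which docker run combines into one step (the container ID printed by docker create will differ on your machine):

```shell
$ docker pull hello-world        # download the image from Docker Hub
$ docker create hello-world      # create a container from it; prints its ID
$ docker start -a <container-id> # start the container and attach to its output
```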

If you go to Docker Desktop, you will see that the container still exists, but it’s not running, as it completed its execution. You can also check this through your shell - note that your container will most likely not be named awesome_davinci, as autogenerated container names are random:

$ docker container ls -a | grep hello
d75ffbb8d8b9   hello-world  "/hello"  5 minutes ago   Exited (0) 3 minutes ago              awesome_davinci

If we wanted to start it without streaming its output to the terminal, we would use the start subcommand and pass the name of the container:

$ docker start awesome_davinci

It will immediately complete again. We could check its logs on Docker Desktop, or do that from the terminal:

$ docker logs awesome_davinci

You should see the greeting message repeated twice (once for each time the container executed). We can now delete the container, as we will not be needing it anymore:

$ docker container rm awesome_davinci

Note that deleting the container does not delete the image from the system, so you can create another container at any time without having to pull the image again. If you want Docker to automatically delete the container after it has finished running, you can use the --rm flag with docker run:

$ docker run --rm hello-world

Running the Alpine image

Alpine Linux is a minimal Linux distribution that is ideal for building compact Docker images. Its image on Docker Hub is a popular base for many other Docker images.

Let’s try to run it:

$ docker run --rm alpine

It immediately exited without printing anything. Let’s inspect the image a bit further:

$ docker inspect alpine

You will see a JSON document with various types of information. In this case, we’re looking for the Cmd and Entrypoint keys under Config; the alpine image does not define an Entrypoint, so only Cmd appears:

"Config": {
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
        "/bin/sh"
    ],
    "WorkingDir": "/"
},

We can interpret these entries as follows:

  • The image sets the PATH environment variable to a certain list of folders.
  • The image will run by executing the /bin/sh command (a Bourne shell).
  • The command will be executed while having the root of the image’s filesystem as its current working directory.

Since the image just runs a shell, and Docker does not attach an interactive terminal or standard input to containers by default, the shell reads end-of-file and exits immediately without doing anything.
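
You can reproduce this behaviour with a plain shell on your own machine - a sketch, using /dev/null to stand in for the container's closed standard input:

```shell
# With standard input closed, the shell reads end-of-file and exits at once,
# just like /bin/sh inside the non-interactive container:
sh </dev/null
echo "shell exited with status $?"
```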

If we want an interactive terminal, we need to ask for it by passing the -i (“interactive”) and -t (“allocate TTY”) flags (or -it together):

$ docker run --rm -it alpine

The above command will start a shell inside the Docker container. You can run most common Linux commands there, although some may not be available, as the image is very minimal. Once you are done, press Control+d or run the exit command to leave the shell, which also stops the container.

Working with layers

Docker images are made up of layers, with each layer adding or changing files on top of the previous one. You can use docker image history IMAGE to inspect the layers of a given image. However, the alpine image is minimal, so its history is not particularly interesting:

$ docker image history alpine
IMAGE          CREATED       CREATED BY                                      SIZE      COMMENT
25109184c71b   5 weeks ago   CMD ["/bin/sh"]                                 0B        buildkit.dockerfile.v0
<missing>      5 weeks ago   ADD alpine-minirootfs-3.23.3-aarch64.tar.gz …   9.36MB    buildkit.dockerfile.v0

In this case, the image is made up of two layers:

  • The bottom layer was made by unpacking the contents of a compressed root filesystem (rootfs).
  • The top layer was made by indicating the command to be run by default should be /bin/sh.

Image layers are read-only: when Docker creates a container from an image, it adds an extra mutable layer on top. This time we will run an alpine container and keep it, creating a file inside it and then exiting:

$ docker run --name alpine-layers -it alpine
/ # echo hello > world.txt
/ # exit

We can start it again while attaching an interactive console to it, and the file will still be there:

$ docker start -ia alpine-layers
/ # cat world.txt 
hello
/ # exit

We can ask Docker what changes are stored in the mutable top layer of the container:

$ docker container diff alpine-layers
A /world.txt
C /root
A /root/.ash_history

In this case, we see that the top layer has our world.txt file, as well as some changes to the shell’s history file.

It is important to remember that this mutable layer is only usable by this container. If we delete the container at any point, its contents will be lost. The only option we’d have is to create a new image from this container, with the current contents of the mutable layer as its new top layer:

$ docker container commit -m "Save world.txt" alpine-layers my-alpine
sha256:b1b8a7e2e8ce4e743149bb138b24c566d40347bf747f5bdbec2e1eddb971f98f

The SHA-256 digest identifies the newly created image. Layers are content-addressed in the same way, which is useful for saving storage space (e.g. by having multiple images share the same base layer for the operating system).
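
Content addressing can be illustrated with the sha256sum tool (a sketch only - Docker actually hashes a layer's packed contents, not a single file):

```shell
# Identical content always produces the same digest, so identical layers
# only need to be stored once; different content gets a different digest.
echo "layer contents" | sha256sum
echo "layer contents" | sha256sum   # same digest as above
echo "other contents" | sha256sum   # a different digest
```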

You can now check the layers of our new my-alpine image:

$ docker image history my-alpine
IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
b1b8a7e2e8ce   5 minutes ago   /bin/sh                                         16.4kB    Save world.txt
25109184c71b   5 weeks ago     CMD ["/bin/sh"]                                 0B        buildkit.dockerfile.v0
<missing>      5 weeks ago     ADD alpine-minirootfs-3.23.3-aarch64.tar.gz …   9.36MB    buildkit.dockerfile.v0

You will notice that there is one more small layer at the top (just a few kilobytes), with our comment. We can try using it as an image now, and it will come with our created file:

$ docker run --rm -it my-alpine
/ # cat world.txt 
hello
/ # exit

Dockerfiles are a way to automate the creation of images based on this idea of piling layer upon layer. We will not ask you to write custom Dockerfiles in this practical, but it is useful to understand the underlying concepts behind Docker images.
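
For illustration only, a Dockerfile producing an image similar to our my-alpine commit might look like this (a sketch - you will not need to write one in this practical):

```dockerfile
# Start from the alpine base image: its layers become the bottom of ours
FROM alpine
# Each instruction below adds a new layer on top of the previous ones
RUN echo hello > /world.txt
# Keep the same default command as the base image
CMD ["/bin/sh"]
```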

Working with volumes

Suppose that we are running a database server from Docker. We would not want to keep the database itself in the mutable layer, as it would be lost if we deleted and re-created the container (e.g. because a new version of the database server’s image came out). Instead, we should keep it in a volume: a storage location that is kept separate from the layers of a container and mounted at a particular location (mountpoint) inside the container.

There are two types of volume mounts:

  • Bind mounts attach a directory (or file) from our host filesystem to a location inside the container, allowing the container to change its contents.
  • Named volume mounts attach a named volume (maintained by Docker in a dedicated location) to a location inside the container.

Let’s try bind mounts first, by creating a sqlite folder and letting a sqlite container modify its contents. You will need to run a few shell commands, and also paste the SQL command below:

$ mkdir sqlite
$ docker run --rm -it -v ./sqlite:/apps -w /apps alpine/sqlite test.db
sqlite> CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
sqlite> .quit

If you look at your sqlite folder, you should see that the test.db file was created:

$ ls sqlite
test.db

You will also have noticed a few more elements in that command than usual:

  • -v ./sqlite:/apps means “perform a bind mount of the local directory sqlite into /apps in the container”. The initial ./ is needed to tell Docker that this is a bind mount and not a named volume mount (as the part before the : is then a filesystem path and not just a name).
  • -w /apps means “change the working directory inside the container to /apps”.
  • We added test.db after the image name, which will be treated as the “command” to be run.

test.db is just the name of the SQLite file to be managed, rather than the name of an executable: how did Docker know to run the SQLite shell on it? The answer is that the SQLite image also defines an entrypoint: if defined, Docker will pass the command to be run to it.
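
The way the entrypoint and the command combine can be sketched in plain shell (an illustration only, not Docker internals):

```shell
# Docker forms the container's command line as: Entrypoint + Cmd, where Cmd
# is the image's default unless you pass something after the image name.
entrypoint="sqlite3"
cmd="--version"               # the image's default command
echo "default:  $entrypoint $cmd"
cmd="test.db"                 # what we passed after the image name
echo "override: $entrypoint $cmd"
```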

You can check the entrypoint of the image using docker inspect as usual:

$ docker inspect alpine/sqlite
"Config": {
    ...
    "Entrypoint": [
        "sqlite3"
    ],
    "Cmd": [
        "--version"
    ],
    ...
},

Since the entrypoint is sqlite3, our previous docker run command translated to running sqlite3 test.db inside the container, with /apps as the working directory. You will also note that the default command is --version, so if you run the same command without test.db, the container will print the version of SQLite being used and immediately exit:

$ docker run --rm -it -v ./sqlite:/apps -w /apps alpine/sqlite        
3.51.2 2026-01-09 17:27:48 b270f8339eb13b504d0b2ba154ebca966b7dde08e40c3ed7d559749818cb2075 (64-bit)

In production environments, we will usually prefer volumes to live on their own rather than being attached to a regular folder. These are named volumes, maintained by Docker and managed through the docker volume command - let’s create one and use it with the SQLite image:

$ docker volume create sqlite-db
$ docker run --rm -it -v sqlite-db:/apps -w /apps alpine/sqlite test.db
sqlite> CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
sqlite> INSERT INTO users (name) VALUES ('Bob');
sqlite> .quit
$ docker run --rm -it -v sqlite-db:/apps -w /apps alpine/sqlite test.db
sqlite> SELECT * FROM users;
1|Bob
sqlite> .quit

You’ll notice that despite the use of --rm (which meant the container was deleted upon completion), we didn’t lose data. The database was stored in the named volume sqlite-db, and attached to the container created in the second command.

Working with ports

The last aspect that we want to cover with plain Docker is networking. Docker containers typically live in their own virtual sub-network that is private to the host machine.

Let’s try to run an nginx Web server:

$ docker run --rm --name web-server nginx

However, this Web server isn’t accessible from our host system’s Web browser in many setups. We can find out the IP address it has in Docker’s default virtual subnet.

Leave the command running, and open a separate terminal. In this new terminal, run this command, and note the IP address (it may differ from the example):

$ docker inspect --format '{{ .NetworkSettings.Networks.bridge.IPAddress }}' web-server
172.17.0.2

Try to enter the IP address in your browser: if you’re using Docker Desktop, it will not work. This is because Docker Desktop runs containers inside a virtual machine, which is a different network host than the rest of your system. It will only work if you’re running Docker natively from Linux (using the Docker Engine, instead of Docker Desktop).

Clearly, this is a problem - we want to be able to try the web server from our browser. What we can do is tell Docker to forward a port in our host system to a port in the container.

Switch back to the nginx terminal and use Control+c to shut down the server. This time, run it as follows:

$ docker run --rm --name web-server -p 8080:80 nginx

You should now be able to access your nginx server from here:

http://localhost:8080/
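
Alternatively, assuming the curl tool is installed, you can fetch the page from a second terminal while the server is running:

```shell
$ curl -s http://localhost:8080/
```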

You can now shut down the server with Control+c.

Note

The network where these containers run is known as the “default bridge network”. On the default bridge network, containers can only refer to each other by their internal IP addresses, which is inconvenient.

Thankfully, the Docker Compose tool that we cover in the next section will automatically set up named bridge networks that use Docker’s embedded DNS server, allowing containers to refer to each other by name.

Congratulations - you’ve gone through the most important aspects of using Docker: running images, understanding the distinction between layers and volumes, and the basics of publishing container ports on your host.

In the next section, we will cover setting up multiple containers and connecting them to each other via Docker Compose.