Docker
This design describes Docker installations in N-SCHOOL, and includes the following topics:
- Server design
- Directory organisation
- Containers
- Compose files
- Build procedures
- Applications
- NGINX Server with SSL
Server design
We will run Docker on a standard N-SCHOOL virtual machine running CentOS Stream 9. A single machine can host any number of applications, and each of those applications will have its own directory in /local/docker. For example, the www-nerdhole application will store its files in /local/docker/apps/www-nerdhole. We call that directory the application home directory. We do not want to tie ourselves down too much regarding its internal structure. We will automate the installation of Docker and any applications running on it through Ansible (see below).
The following image illustrates the server topology:

The Docker server is a physical or virtual machine running the Docker daemon (dockerd). It has a connection to the outside network, through which we can access the applications running on the machine.
A Docker server runs one or more applications. Each application has one or more containers, each of which perform a specific function within the application. One may be the web front end for the users, one can be the database, one can be a management tool accessed over the net.
Networking
Each application has one or more networks to which its containers are connected. Some containers are also reachable from the outside network through a port specification, which consists of an outside port and an inside port. In the example above, an external client connecting to port 80 on the Docker server is forwarded to port 8080 on the app-container in the first-app application. Containers communicate with each other over the application network, where each container's name resolves to the IP address Docker assigned to it. If the app container wants to access the database in the db-container, it connects to the host name db-container on port 27017.
Data volumes
Within Docker, a container's filesystem is recreated from its image whenever the container is recreated, so anything written during a previous run of the container is lost. To store data that persists over the lifecycle of a container, we need volumes. There are essentially two kinds of volume: named volumes internal to Docker, and bind mounts to a specific directory on the Docker host. Named volumes physically live in /var/lib/docker/volumes and are mounted into the container when it starts. Bind mounts live in the file tree of the Docker server.
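As an illustrative sketch (the image and volume names are hypothetical, and the commands need a running Docker daemon), the two kinds of volume look like this on the docker run command line:

```shell
# Named volume: Docker manages the storage under /var/lib/docker/volumes
docker volume create my-app-data
docker run -d -v my-app-data:/data my-app:1.0

# Bind mount: the data lives in a directory on the Docker host
docker run -d -v /local/docker/apps/my-app/vol:/data my-app:1.0
```

Both forms mount the storage at /data inside the container; only where the data lives on the host differs.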
Directory structure
The Docker home directory is /local/docker. This will contain all the information relevant to the docker installation. The structure is:
/local/docker/ - The Docker home directory, as set in the DOCKERHOME environment variable and the docker.homedir Ansible variable. Will reside in a data volume group and a separate file system.
- etc/ - Templates, configuration files, and other files that need to go to /etc/docker.
- apps/ - Docker-hosted application home directories.
  - www-nerdhole/ - Nerdhole website
    - compose.yml - Docker Compose file of all container definitions
  - my-app/ - Example application directory
    - compose.yml - Docker Compose file of all container definitions
- vol/ - Directories where applications store their data
  - www-nerdhole/ - Nerdhole website
- build/ - Home-grown Docker image data.
  - my-app/ - Example app Dockerfile, compose YAMLs etc. Usually under Git control.
    - app/ - The actual application files to be copied into the container
    - Dockerfile - Definition file for the contents of the application image
    - my-app-compose.yaml - Docker Compose file of all container definitions
    - README.md - Documentation file for the app.
/var/lib/docker/ - The standard directory where Docker stores its data. We will put this in a separate file system in a data volume group so as not to overflow the / partition. It has an SELinux context of container_var_lib_t, which is important when creating the volume.
- containers/ - Configuration and data of all containers
- network/ - Data on the internal docker networks
- plugins/ - Docker plugin directory
- rootfs/ - The / filesystems of the containers. Do not touch!
- volumes/ - Data of the named persistent volumes.
Ansible roles and variables
We have one role named docker, and any number of application roles, for instance www-nerdhole or nanas-example-app. The docker role will add the Internet Docker repo, install Docker, create the Docker filesystems, and enable and start the Docker daemon. Then it will access a host variable named docker_apps which is a list of applications that should run on the docker host, and execute their installation roles.
The docker application roles such as www-nerdhole can only run on a Docker host, and can only be used after the docker role completes. Their variables are stored in their own vars directories, and through the group variables they have access to the docker variables.
We are using the following Ansible variables:
- docker - Meant for the docker host, it defines the variables for any docker installation.
- docker_apps - A per-host list of the docker apps installed on that host.
- app_name - (For example: www-nerdhole) A per-application dictionary of app parameters.
docker
The docker variable is used by both the docker role and any applications it contains, so it has to be defined on all Docker hosts. It will be put in group_vars/docker_hosts/00-docker.yml. Non-Docker hosts have no need for it.
docker_apps
The docker_apps variable is specific to a host, and will be defined in that host's variables. By default this will be an empty list. The docker install role will go through the list and install all the applications it finds.
We can install the same application on multiple servers. Which applications are run on which servers is configured through the host variables. For instance, if we want to run www-nerdhole on labo106.nerdhole.me.uk, we add the following stanza to the machine's host-vars:
description: "Experimental Docker host"
docker_apps:
  - name: www-nerdhole
At that point, when we rebuild labo106, the docker role will install the specified applications into the Docker daemon.
Application Ansible variables
The app_name variable is named after the application. Any hyphens in the name are replaced with underscores for naming compatibility. It is local to the application and will be defined in that application role's vars section, in roles/www-nerdhole/vars/main.yml. The hierarchy is:
www_nerdhole:
  name: www-nerdhole
  version: "1.0"
  description: "Web server for www.nerdhole.me.uk"
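The hyphen-to-underscore mapping is mechanical; as a sketch, a shell one-liner shows the transformation (the variable names are illustrative):

```shell
# Derive the Ansible variable name from the application name:
# every hyphen becomes an underscore.
app_name="www-nerdhole"
var_name=$(printf '%s' "$app_name" | tr '-' '_')
echo "$var_name"   # prints www_nerdhole
```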
Building the docker server
The starting point is a freshly installed CentOS Stream 9 server with all the usual amenities such as shared file systems and users.
Software installation
First, we need to enable the Docker repositories using the following command:
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Then, from that repository, we install the following packages:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
Then, we start and enable the docker.service in systemd using the systemctl command. That completes the installation of the software.
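On the Docker host itself, the package installation and service activation amount to the following commands, run as root (shown as a sketch; they assume the repository added above):

```shell
# Install Docker CE, the CLI, containerd, and the buildx/compose plugins
dnf -y install docker-ce docker-ce-cli containerd.io \
    docker-buildx-plugin docker-compose-plugin

# Start the Docker daemon now and enable it at every boot
systemctl enable --now docker.service
```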
Users and groups
The installation creates a standard group docker that contains the Docker administrators. These can start, stop, and create containers.
Creating the directory structure
We create a /local/docker file system in a data volume group (datavg), 10 GB to start with. Then we create the following directories:
| Directory | Owner:Group | Permissions | Comment |
|---|---|---|---|
| /local/docker | root:docker | rwxrws--- | Docker home directory (File system) |
| /local/docker/etc | root:docker | rwxrws--- | Configuration files for N-SCHOOL docker utilities |
| /local/docker/apps | root:docker | rwxrws--- | Application home directories |
| /local/docker/build | root:docker | rwxrws--- | Application build directories |
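A minimal sketch of the directory creation as root, assuming the docker group exists (it is normally created by the docker-ce package, so we create it here only if it is missing):

```shell
# Create the /local/docker tree with root:docker ownership and
# rwxrws--- (2770) permissions; the setgid bit keeps new files
# group-owned by docker.
getent group docker >/dev/null || groupadd docker
DOCKERHOME=/local/docker
install -d -o root -g docker -m 2770 \
    "$DOCKERHOME" "$DOCKERHOME/etc" "$DOCKERHOME/apps" "$DOCKERHOME/build"
```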
The system is now prepared for our applications.
Docker applications
The application lifecycle consists of creating custom images and building them, then combining them with standard images into full applications. Applications these days are never finished, so we need a way to keep updating our custom images. In the near future we will be using Gitlab to do this. Any non-Gitlab containers will be stored in the N-SCHOOL ansible role directory.
The application itself
This Docker installation is only for simple applications, not for development pipelines or Docker swarms. There are two important files for any application:
- Dockerfile - This is a file that is fed to the Docker Build command to customise an existing docker image and add your own custom resources to it.
- compose.yml - This file contains instructions for starting and stopping one or more containers, including which directories persist from one container run to the next, which ports to listen on, environment variables, and so on.
In the basic Docker application, we will support one custom image (meaning one Dockerfile) and one compose file.
Building an application image
This example is taken from Nana's example application. It builds a JavaScript application image from a standard Node image. This is the Dockerfile that specifies the build:
FROM node:13-alpine
ENV MONGO_DB_USERNAME=admin \
    MONGO_DB_PWD=password
RUN mkdir -p /home/app
COPY ./app /home/app
# set default dir so that the next commands execute in /home/app dir
WORKDIR /home/app
# will execute npm install in /home/app because of WORKDIR
RUN npm install
# no need for /home/app/server.js because of WORKDIR
CMD ["node", "server.js"]
The parts are:
- FROM - This is the image (from Docker Hub by default) that your application is based on. The image is called "node" for JavaScript Node apps. The version tag is 13-alpine because it is version 13, based on Alpine Linux.
- ENV - Environment variables. These get set in the container's environment and you can access them as any normal environment variable.
- RUN - These are commands that are executed inside the container as it is being built. Usually this will be a shell command.
- COPY - This is used to copy files from your development environment into your container. Relative to the place where your Dockerfile is.
- WORKDIR - A change-directory/cd command that applies to the inside of your container.
- CMD - This is the entry point for your application inside the container; it is executed on docker run. Only one of these should be present, but you can call a shell script from it if your startup sequence is more than a single command.
Once you have the Dockerfile, you can build the container with the following command:
docker build -t my-app:1.0 .
The -t flag sets the image name and tag. The dot at the end is the build context: the directory where your application lives. When this completes, you will have a new image on your local machine (under /var/lib/docker) that looks like it was downloaded from docker.io. With this done, we can start the container using the command:
docker run my-app:1.0
Starting and stopping your application
The most basic way to start an application is to use the docker run command. For example:
docker run -d \
-p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=password \
--name mongodb \
--net mongo-network \
mongo
This will start a MongoDB (mongo) image, export port 27017/tcp to the host, set two environment variables, name the resulting container mongodb, and attach it to a network named mongo-network. You will end up with a running container.
Some applications are more complicated than this, and require several containers communicating with each other, with disk storage configured and a host of other things. In those cases, we use the Docker Compose plugin, which reads a YAML file containing the configuration of your containers and starts or stops them all using one command. This is an example of a compose file:
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_URL=mongodb://mongodb:27017
This example will manage two containers named mongodb and mongo-express (the latter is a browser-based database management tool). The usual warnings against putting usernames and passwords in a plaintext file apply - this is just an example. We start this application using the command docker compose -f compose.yml up -d.
The mongodb container will expose its TCP port 27017 to the outside network. The mongo-express container will expose 8081, so that a web browser can connect to the Docker host on port 8081 and manage the database. As you can see in the ME_CONFIG_MONGODB_URL line, the mongo-express container can connect to the mongodb container using the name mongodb, which is defined inside the application network.
With this defined, we can start and stop the entire application in one docker command.
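On the Docker host, that one command and its counterparts look like this (a sketch; they need the Docker daemon running and the compose file in the current directory):

```shell
# Start all containers of the application in the background
docker compose -f compose.yml up -d

# Show the state of the application's containers
docker compose -f compose.yml ps

# Stop and remove the containers again (named volumes survive)
docker compose -f compose.yml down
```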
Example application www-nerdhole
The first Docker application in the environment will be the web server you are reading this document on: www.nerdhole.me.uk. This will be an NGINX web server serving HTTP on port 10080/tcp and HTTPS on port 10443/tcp, with external volumes for the configuration, SSL certificates, and documents. It should be noted that this container only serves the documentation pages to the outside. Hosts inside the Nerdhole will still use the Apache server.
Docker host: Sypha
The www-nerdhole app will run on the main server Sypha, because that is where the web documents are and also Sypha is the only machine that is always on whenever the Nerdhole is up and running. So we have defined her as a docker host. The /local/docker filesystem will be in Sypha's datavg. As usual, the firewall will be enabled, and SELinux will be in enforcing mode.
Base image
The base image for the web server is nginx:latest. We do not intend to use any features that could be broken by updates; the server simply serves a static HTML directory over HTTP and HTTPS.
Compose
The www-nerdhole application will be started and stopped using Docker Compose, or restarted automatically by dockerd through the container's restart policy. This is the compose file:
# Docker Compose file for www-nerdhole
#----------------------------------------------------------------------
# Description: Docker container for www.nerdhole.me.uk
# UserID and GID will be set to: 387
name: www-nerdhole
services:
  nginx:
    image: nginx:latest
    ports:
      - '10080:80'
      - '10443:443'
    volumes:
      - '/local/docker/apps/www-nerdhole/vol/etc/nginx/conf.d:/etc/nginx/conf.d'
      - '/local/docker/apps/www-nerdhole/vol/etc/nginx/keys:/etc/nginx/keys'
      - '/local/docker/apps/www-nerdhole/vol/etc/nginx/certs:/etc/nginx/certs'
      - '/local/www:/usr/share/nginx/html'
    environment:
      UID: 387
      GID: 387
    entrypoint:
      - /bin/sh
      - -c
      - |
        usermod -u $${UID} nginx
        groupmod -g $${GID} nginx
        /docker-entrypoint.sh nginx -g 'daemon off;'
    restart: unless-stopped
/local/docker/apps/www-nerdhole/compose.yml
The name www-nerdhole determines the name of such things as default networks. We have only one service: nginx, which mostly runs on the NGINX defaults. It maps host port 10080/tcp to container port 80 (HTTP), and host port 10443/tcp to container port 443 (HTTPS). I have configured my home router to redirect ports 80 and 443 to Sypha accordingly. The volumes are:
| Host volume | Container volume | Contents |
|---|---|---|
| /local/docker/apps/www-nerdhole/vol/etc/nginx/conf.d | /etc/nginx/conf.d | Configuration files |
| /local/docker/apps/www-nerdhole/vol/etc/nginx/keys | /etc/nginx/keys | Private keys for HTTPS |
| /local/docker/apps/www-nerdhole/vol/etc/nginx/certs | /etc/nginx/certs | Certificates for HTTPS |
| /local/www | /usr/share/nginx/html | Location of web pages |
The Ansible role creates the nginx group, then inserts the GID into the compose file. The entrypoint starts a shell that first changes the nginx user and group inside the container to the same IDs as on the host, so that NGINX can access the same files as the host user, and then runs the standard NGINX entrypoint.
Finally, the container is set to restart whenever it stops unless the admin stops it deliberately.
SSL Certificates
For now, www-nerdhole will use a self-signed SSL certificate like the previous set of pages. We will investigate using a certificate from Let's Encrypt so that the Internet can verify that the Nerdhole is what we say it is.
Ansible roles
Three roles affect the www-nerdhole application: Firstly, docker, which is a prerequisite to running any Docker application. Second, the certificate_signed role, which will provide the web server with a self-signed OpenSSL X.509 certificate for encryption. Finally, the www-nerdhole role that installs the web server itself. The www-nerdhole role does the following:
- Creates the local nginx group and remembers the GID it got.
- Opens the ports 10080 and 10443 in the firewall.
- Creates the directory structure for the application.
- Obtains an OpenSSL certificate using the certificate_signed role.
- Generates the compose.yml file for the application.
- Generates the nginx.conf file.
It then falls to the admin to run the command docker compose -f compose.yml up -d from the application home directory. Docker will automatically restart the web server whenever the Docker host is rebooted.
Configuration files
The basic configuration for the www-nerdhole role is in the role's variables: roles/www-nerdhole/vars/main.yml:
---
www_nerdhole:
  name: "www-nerdhole"
  version: "1.0"
  description: "Docker container for www.nerdhole.me.uk"
  home: "{{ docker.home }}/apps/www-nerdhole"
  serverfqdn: "www.nerdhole.me.uk"
  certificate_authority: local
Using the certificate_authority variable, we can specify whether we want to use a self-signed certificate from the local certificate authority, or a global one from Let's Encrypt.
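For the local case, the role's end product is equivalent to a self-signed key pair generated with OpenSSL; a minimal sketch (file names and lifetime are illustrative, the real work is done by the certificate_signed role):

```shell
# Generate a private key and a self-signed X.509 certificate
# for the web server in one step.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=www.nerdhole.me.uk" \
    -keyout www-nerdhole.key -out www-nerdhole.crt
```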
We will maintain two separate configuration files for NGINX: one for a self-signed certificate (nginx.conf.j2) and one for a Let's Encrypt externally-trusted certificate (nginx_letsencrypt.conf.j2).