Docker Braindump

This is a braindump stemming from Nana's Docker course.

Intro

A container is, loosely speaking, a lightweight VM running a single application for simple souls. A VM gives you full control and the ultimate power to determine anything you want, whereas a container gives you a one-size-fits-all solution that you don't need to mess with. There are lots of standard Docker images on Docker Hub that you can download and run as root on your own machine... Oh hello Security and Audit!

Popular standard container images are:

  • NGINX - for web servers
  • MongoDB - for databases
  • Redis - for in-memory data stores (often used for caching and for messaging between containers)
  • And many many more.

Installing docker

On CentOS, just installing "docker" sets up a kind of emulated environment that has the commands but not the underlying software (the distribution's own "docker" package is apparently a compatibility wrapper around Podman rather than Docker proper). Strange. Docker by itself does not seem to need all that many resources, so a Docker server will not be "heavy."

I have now moved the Docker installation procedure into a separate Ansible role named... docker. The Gitlab server will apply the Docker role first and then the Gitlab stuff. This is what the Docker role does:

Removing any old versions

It looks like Docker was recently re-packaged and renamed, so out with the old, in with the new. These are the old packages that get removed:

  • docker
  • docker-client
  • docker-client-latest
  • docker-common
  • docker-latest
  • docker-latest-logrotate
  • docker-logrotate
  • docker-engine
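
As a sketch, the removal task in the role looks something like this (the task name is just illustrative):

- name: Remove old Docker packages
  ansible.builtin.dnf:
    name:
      - docker
      - docker-client
      - docker-client-latest
      - docker-common
      - docker-latest
      - docker-latest-logrotate
      - docker-logrotate
      - docker-engine
    state: absent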

Setting up Docker RPM repositories

We want to install Docker from its official repos. These are not on CentOS by default.

dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

The Ansible yum_repository route does not seem to work for this, but the dnf config-manager command drops a recognisable file into /etc/yum.repos.d/, which is handy for creates: purposes (i.e. keeping the task idempotent).
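
In the role, that becomes a command task guarded by creates:, roughly like this:

- name: Add the official Docker CE repository
  ansible.builtin.command:
    cmd: dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    creates: /etc/yum.repos.d/docker-ce.repo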

Installing the software

So we use the Ansible DNF module to install the following:

  • docker-ce
  • docker-ce-cli
  • containerd.io
  • docker-buildx-plugin
  • docker-compose-plugin
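
The install task is roughly this (again, a sketch):

- name: Install Docker CE and friends
  ansible.builtin.dnf:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    state: present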

Enable and start docker

The standard enable-and-start thing on the docker service.
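
Which is to say, something like:

- name: Enable and start the Docker service
  ansible.builtin.systemd:
    name: docker
    state: started
    enabled: true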

Testing the install

Docker is very much alive on the Internet, so a default image registry (Docker Hub) is already configured. You just go:

  • docker run -it ubuntu bash

And you actually have an Ubuntu container running Bash! Fastest install ever!

Containers

A container is a process running on your Docker server. It is based on an image, which is analogous to an executable file.

Containers are like ogres. They make you cry... sorry, they are layered (or rather, their images are). A container starts from a very minimal OS image (Alpine Linux, which I did not know before, seems to be a favourite). It then falls to you to extend this image with applications to arrive, hopefully, at a coherent service that you can stick in a container and run.

To the operating system, Docker is just an application.

Every container gets a 64-digit hexadecimal ID that it keeps even after it has completed. You can use this ID to perform operations on it such as observing the logs, stopping it, restarting it, and so on.
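
For example (use the container name or any unique prefix of the ID instead of the full thing if you like):

  • docker logs <container-id> - show what the container has printed so far
  • docker stop <container-id> - stop it
  • docker start <container-id> - start it again
  • docker rm <container-id> - get rid of it entirely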

Commands

These are a few useful commands:

docker run: Downloads an image from your registry if you don't have it locally already, and then starts a container from it. By default, that registry is Docker Hub (linked above). You can ask for the latest version of the image, or you can request a specific version:
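
For example (the exact tag here is just an illustration):

  • docker run nginx (runs whatever the latest tag currently points to)
  • docker run nginx:1.25 (runs that specific tagged version)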

docker pull: Downloads an image from your registry, but doesn't run it yet.

docker ps: Shows you a list of currently running containers. You can also get a list of no-longer-running containers with docker ps -a, which gives you the IDs you need to restart them.

docker start/stop: Starts or stops the container.

docker exec: runs a command inside a Docker container. This is especially useful for running a shell inside a running container so you can observe the system in operation. Keep in mind: doing this is not the way to fix problems, just diagnose them. Normally, you will find out what is wrong and then do the actual business in your container build environment.
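
A typical use, assuming the container actually contains a shell (very minimal images sometimes only have sh):

  • docker exec -it mongodb bash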

Building your own containers and images

To get a Docker-based app to run, you will usually need a few containers. In these examples, it's one for your application, one for a Mongodb instance, and one for Mongo Express, a Mongodb management tool. We will start by doing it all using commands and a locally run application. Then we will properly containerise everything using a configuration file.

Setting up Mongodb and Mongo Express

The first thing to do is to start up a MongoDB instance and the mongo-express database management app. So...

  • docker pull mongo
  • docker pull mongo-express

This went OK. Then we set up a Docker network for MongoDB and its friends:

  • docker network create mongo-network

Then we set up the mongodb database server:

docker run -d \
-p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=password \
--name mongodb \
--net mongo-network \
mongo

Then we need to add our admin tool:

docker run -d \
   -p 8081:8081 \
   -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
   -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
   -e ME_CONFIG_MONGODB_SERVER=mongodb \
   -e ME_CONFIG_MONGODB_URL=mongodb://mongodb:27017 \
   --net mongo-network \
   --name mongo-express \
   mongo-express

Take note that the ME_CONFIG_MONGODB_URL variable is new. I tried leaving out the ME_CONFIG_MONGODB_SERVER variable, but it stopped working. So it stays.

Obtaining and running the test application natively

Nana has a simple test app which you can obtain thusly:

  • git clone https://gitlab.com/nanuchi/techworld-js-docker-demo-app.git

It is a JavaScript app, and you can actually run JavaScript outside of a browser! Needless to say, you need to install Node.js first. I am following the instructions provided to me by Crowncloud, whoever they may be. It is fairly simple: group-install the 'Development Tools', then install a package named nodejs.

I now have node v16.20.2. Hope that is enough. I then go to Nana's app and am thrown straight into dependency hell: Error: Cannot find module 'express'.

Typed that into Goog-El, and got the command "npm install", which immediately starts complaining about my old version of npm. Doing that, I get a message about vulnerabilities, and would I like to fix them? Sure. Now it says "15 packages are looking for funding." Lovely. Then I run node server.js, get a few warnings that look like they come from deprecated APIs, but now the app is apparently listening on port 3000!

We then need to go into the user-account database and create a collection named users. That is easy enough. So I start the application and get... nothing! A blank page. Nana? Is there something you haven't told me?

Okay. Firefox tells me this:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:3000/get-profile. (Reason: CORS request did not succeed). Status code: (null).

Wonderful! The app assumes that you are running the web browser on the same host as the application. I am in fact running it on my VM named labo106, so I updated the index.html file as follows:

-        const response = await fetch('http://localhost:3000/get-profile');
+        const response = await fetch('http://labo106:3000/get-profile');

-        const response = await fetch('http://localhost:3000/update-profile', {
+        const response = await fetch('http://labo106:3000/update-profile', {

And now I have a cute dog picture. Another surprise is that the data does not actually go into the "user-account" database, but instead into the "my-db" database. So now we have Anna Jones.

So the final list of actions is:

  • dnf groupinstall 'Development Tools'
  • dnf install nodejs
  • node --version
  • cd junk/docker/techworld-js-docker-demo-app/app
  • npm install
  • npm audit fix (Because why not?)
  • In the Mongo-Express GUI, create the database my-db
    • If it asks for a password, it's admin and pass.
  • In the user-account database, create a collection users
  • node server.js

And now we have the app running on labo106.

Docker compose

Docker compose is a tool to specify a bunch of containers that you want to run at the same time. It uses a config file that contains essentially the same information as the docker run and docker create commands. This is the compose file named mongo-docker-compose.yaml:

version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password

  mongo-express:
    image: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_URL=mongodb://mongodb:27017

To start the mongo containers, you use the following command:

  • docker compose -f mongo-docker-compose.yaml up

This will give you a lot of output and then leave you watching the log files. Pressing the D key will detach the containers and return you to the shell prompt.

From the shell prompt, you can stop the containers using:

  • docker compose -f mongo-docker-compose.yaml down

Docker compose will automatically create a Docker network named docker_default that contains all the containers specified. It is named docker_default because I am running the compose command from a directory named docker. Magic! The network includes a form of hostname resolution so you can refer to other containers by their name (mongodb or mongo-express here). If you don't want the name of the network to be docker_default, the Docker documentation has this to say:

Your app's network is given a name based on the "project name", which is based on the name of the directory it lives in. You can override the project name with either the --project-name flag or the COMPOSE_PROJECT_NAME environment variable.
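
So, hypothetically, either of these would make the network come out as nerdhole_default instead (the project name is just an example):

  • docker compose --project-name nerdhole -f mongo-docker-compose.yaml up
  • COMPOSE_PROJECT_NAME=nerdhole docker compose -f mongo-docker-compose.yaml up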

Docker build

Now things get exciting as we start producing our own Docker images. We do this with a file named Dockerfile and nothing else. It lives at the top of the application directory (the one containing the app folder it copies from), and looks like this:

FROM node:13-alpine

ENV MONGO_DB_USERNAME=admin \
    MONGO_DB_PWD=password

RUN mkdir -p /home/app

COPY ./app /home/app

# set default dir so that next commands executes in /home/app dir
WORKDIR /home/app

# will execute npm install in /home/app because of WORKDIR
RUN npm install

# no need for /home/app/server.js because of WORKDIR
CMD ["node", "server.js"]

The parts are:

  • FROM - This is the base image (from Docker Hub by default) that your application is built on. The image is called "node" for JavaScript Node.js apps. The version tag is 13-alpine because it is Node version 13 built on Alpine Linux.
  • ENV - Environment variables. These get set in the container's environment and you can access them as any normal environment variable.
  • RUN - These are commands that are executed inside the container as it is being built. Usually this will be a shell command.
  • COPY - This is used to copy files from your development environment into the image, relative to the place where your Dockerfile is.
  • WORKDIR - A change-directory/cd command that applies to the inside of your container.
  • CMD - This is the startup command for your application inside the container that gets called from docker run. Only one of these should be present, but you can call a shellscript from this if your startup sequence is more than a single command.

Once you have the Dockerfile, you can build the container with the following command:

  • docker build -t my-app:1.0 .

The -t flag sets the image name and tag. The dot at the end is the build context: the directory where your Dockerfile and application live. When this completes, you will have a new image on your local machine that looks just like one downloaded from docker.io.
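
A quick sanity check that the image is really there:

  • docker images my-app (should list my-app with the 1.0 tag)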

Adding my-app to the compose file

Now we want to add my-app to the compose file (mongo-docker-compose.yaml) so that we can start all the needed services in one fell swoop of docker compose. This is how we do it:

version: '3'
services:
  my-app:
    image: my-app:1.0
    ports:
      - 3000:3000
... Rest of the file as above ...

Then we go docker compose -f mongo-docker-compose.yaml up and all the containers are started in the new Docker network named docker_default.

One more bug to fix, though is it really a bug? In server.js I had to make the following change in two places:

+  // Connect to the db (was: mongoUrlLocal)
+  MongoClient.connect(mongoUrlDocker, mongoClientOptions, function (err, client) {

When you are running the app locally, it connects to localhost, which assumes that you are running everything on home sweet 127.0.0.1. Change it to mongoUrlDocker and it will connect to the correct database server in its little container. Nana... If you have to search-and-replace variable names everywhere when you switch to a different environment, why not... use the same variable and set it to the correct URL??! That's what variables are there for!
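
For the record, a sketch of what I mean (the MONGO_URL variable name is my own invention, not something from Nana's code; mongoUrlLocal and mongoClientOptions are the ones already in server.js):

// pick the connection URL from the environment, fall back to the local one
const url = process.env.MONGO_URL || mongoUrlLocal;

MongoClient.connect(url, mongoClientOptions, function (err, client) {
  // ... same callback as before ...
});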

Sharing is caring

Now that we have our application, we want to publish it somewhere. As it happens, Gitlab already includes a container registry. We just need to enable it for access from outside my Gitlab server. I'll describe that in my Gitlab CI/CD braindump.

I have found a use for this

At the moment, the Apache server on Sypha serves both IPA and the outside page, which is slightly iffy security-wise. So. If I install Docker on Sypha, and then run NGINX on ports 10080 for HTTP and 10443 for HTTPS, I can point my modem at those ports for the purpose of serving nerdhole.me.uk's webpages, which should be a bit more secure. I would run NGINX on its normal ports inside the container, mount /var/www/html read-only onto /usr/share/nginx/html, and publish the ports. That way the Great Outdoors only sees these witterings and I can still use the internal Apache server.
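
A sketch of what that could look like (the container name is just something I made up; the SSL side still needs certificates and an NGINX config mounted in, which is what the next bit is about):

docker run -d \
  --name nerdhole-web \
  -p 10080:80 \
  -p 10443:443 \
  -v /var/www/html:/usr/share/nginx/html:ro \
  nginx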

This is a StackOverflow discussion on how to set up SSL in an NGINX image. Since Sypha is the main server, I will put it there. It doesn't seem too difficult. Maybe I'll even get an official certificate for it. That will most likely be done through Let's Encrypt.

Oh by the way, Firefox does not like certain ports and will not let you connect to them. So you just go to about:config, confirm you're not stupid, then find network.security.ports.banned.override and enter 10080,10443. You will need that for testing.

The setup

My modem, like most others, can redirect incoming connections from the Great Outdoors to specific ports on specific inside hosts. We need port 80/tcp for unencrypted HTTP, and 443/tcp for SSL-encrypted HTTPS. I will redirect these ports to a Docker container running NGINX with access to the files you are reading now. This is a picture of the configuration:

[Diagram: Nerdhole website setup]

This way, the frontnet clients will be served the pages from the normal Apache server (including things like installation repos) and the Great Outdoors (like you) will be looking at the Docker container. As an extra exercise, I will actually get a certificate for my domain from Let's Encrypt so you won't have to click through all manner of encryption warnings.

I think I won't have to create my own NGINX image - if I simply mount some of the important directories using Docker's volume feature, I can probably use /local directories to store WWW and SSL information.

So time for some organisation. A docker server will need dedicated places to put container data. The dedicated place will be /local/docker.