Boot Install Server

The purpose of a boot/install server is to install operating systems on target machines - either virtual or physical. We will concentrate on Linux operating systems but keep our options open to install other types of OS. There are several options to do this:

  • VMs only: Assign the boot CD ISO image to the machine and boot from it.
  • VMs only: Prepare a virtual machine disk image and copy that to the VM host.
  • Put the boot CD ISO image on a USB stick and use that to boot and install the machine.
  • Put the needed resources on the BIS and PXE-boot the machines.

The Boot/Install server will be able to support several of these options.

We want to support multiple boot/install servers in an environment: for instance, a BIS on every VM host, so that offices outside each other's broadcast range can still get super-fast installs over their local virtual network. In other words, there is not one single BIS; there is one per VM host. That means the main server functionality, including its variables, has to be kept separate from the BIS functionality. There will only ever be one main server, but a whole group of BISs. Of course, the main server will always also be a BIS.

Debian and its derivatives do not use Kickstart; they use preseeding of installer variables instead. Ubuntu and Mint have a degree of Kickstart compatibility, but we will not be using that. Debian and Mint auto-installs will be implemented after CentOS and Rocky Linux.

Operating system data

An operating system needs one or more of the following:

  • An installation CD.
  • A PXE boot image (kernel and initial RAM FS).
  • Kickstart files containing instructions for the installer.
  • Optionally, a mirror of the OS software repositories to speed up installations.
  • GPG crypto keys to verify the integrity of the software.

We will organise this in an Ansible group variable as follows:

##########################################
### BOOT/INSTALL MAIN SERVER VARIABLES ###
##########################################
# These are the variables used by the main boot
# install server.
bis:
  # File systems for a boot/install server.
  fs:
    - name: localbis
      desc: "Boot/install server data"
      lvname: localbislv
      vg: datavg
      mountpoint: /local/bis
      size: 512G
      owner: root
      group: root
      mode: "0755"

  # Operating systems supported (Snipped for brevity)
  os:
    "centos-stream-9":
      label: centos-stream-9
      name: CentOS Stream 9
      iso: CentOS-Stream-9-latest-x86_64-dvd1.iso
      initrd_file: images/pxeboot/initrd.img
      vmlinuz_file: images/pxeboot/vmlinuz
      linuxefi: "efi/centos-stream-9/vmlinuz inst.stage2=http://bis.nerdhole.me.uk/bis/iso/centos-stream-9 quiet"
      initrdefi: "efi/centos-stream-9/initrd.img"
      linuxefi_automatic: "efi/centos-stream-9/vmlinuz quiet"
      initrdefi_automatic: "efi/centos-stream-9/initrd.img"
      method: kickstart
    "linuxmint-22-cinnamon":
      label: linuxmint-22-cinnamon
      name: Linux Mint 22 Cinnamon
      iso: linuxmint-22-cinnamon-64bit.iso
      initrd_file: casper/initrd.lz
      vmlinuz_file: casper/vmlinuz
      linuxefi: "efi/linuxmint-22-cinnamon/vmlinuz ip=dhcp boot=casper netboot=nfs nfsroot=10.12.0.2:/local/bis/linuxmint-22-cinnamon/iso quiet"
      initrdefi: "efi/linuxmint-22-cinnamon/initrd.lz"
      method: debian

Whenever we download a new distribution, we need to add it to this data structure so that the bis Ansible playbook can set it up. The variables are:

  • label - A name for the distribution that scripts use to refer to it
  • name - A name more suitable for human consumption
  • iso - The name for the installation CD downloaded from the makers of the distro
  • initrd_file - Where on the CD the initrd file is
  • vmlinuz_file - Where on the CD the kernel file is
  • linuxefi - The grub.cfg line that sets off the manual installation
  • initrdefi - The Grub specification for the initial RAM disk image
  • method - How the installation works. Currently two values:
    • kickstart - The kickstart method used by RedHat, CentOS and derivatives
    • debian - The Debian installer pre-fill used by Ubuntu, Mint, and... Debian.
    • Not present - If the "method" variable is not defined, there is no automated install available for this distro.
    • To do: check whether the method variable is actually used.
  • linuxefi_automatic - The same as linuxefi, but with parameters that start the automatic install. When generating the grub.cfg, we need to add the inst.ks parameter to the end of this line.
  • initrdefi_automatic - The initrd line used by automatic installs.
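
As an illustration, a grub.cfg menu entry generated from this data might look like the sketch below. It simply pastes together the linuxefi and initrdefi values for centos-stream-9 from the structure above; the menu title is made up.

```
menuentry 'Install CentOS Stream 9 (manual)' {
    linuxefi  efi/centos-stream-9/vmlinuz inst.stage2=http://bis.nerdhole.me.uk/bis/iso/centos-stream-9 quiet
    initrdefi efi/centos-stream-9/initrd.img
}
```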

Kickstart

Kickstart is the RedHat/Fedora/CentOS/Rocky way of auto-installing Linux. We will produce a kickstart configuration that brings any of these machines up to the point where we can access them with Ansible to continue the installation towards their intended functionality. The post-kickstart OS image will have the following features:

  • A clean installation of the operating system.
  • Network configured with DHCP.
  • One internal disk for the rootvg volume group.
  • Other disks (datavg) untouched.
  • A secure shell server with the builder user, which can use sudo to switch to root.

With this done, the machine can be added as an IPA client, after which we should disable the builder user account and from then on use one of the System Administrators' accounts for Ansible purposes.

BIS directory structure

We will create a suitably sized datavg storage volume for the BIS, and mount that on /local/bis. So far I find that one specific Linux distribution needs about 50-60GB of storage all told. We will size the volume accordingly. These are the directories in the BIS volume:

Directory                  Contents
/local/bis/                Boot/install server home (512GB file system)
./iso/                     ISO images of operating system install media
./qcow2/                   Pre-built QCOW2 images for virtual machines
./www/                     Root directory of the BIS http server
./centos-stream-9/         Home directory of CentOS Stream 9
./centos-stream-9/etc/     Configuration files for this platform
./centos-stream-9/repos/   Local repository mirrors (reposync)
./centos-stream-9/iso/     Mount point for ISO image
./centos-stream-9/images/  Copy of kernel and initial RAM fs
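
The directory tree above can be sketched with a few mkdir calls. The sketch below builds it under a temporary root rather than /local/bis, and uses centos-stream-9 as the example distribution:

```shell
#!/bin/sh
# Build the BIS directory skeleton under a temporary root
# (on the real server this would be /local/bis).
BISROOT=$(mktemp -d)
distro=centos-stream-9

# Top-level directories shared by all distributions
mkdir -p "$BISROOT/iso" "$BISROOT/qcow2" "$BISROOT/www"

# Per-distribution home directory and its subdirectories
mkdir -p "$BISROOT/$distro/etc" \
         "$BISROOT/$distro/repos" \
         "$BISROOT/$distro/iso" \
         "$BISROOT/$distro/images"
```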

Installation CDs

This is our current collection:

  • CentOS-8-x86_64-1905-dvd1.iso
  • CentOS-Stream-8-x86_64-20230209-boot.iso
  • CentOS-Stream-9-latest-x86_64-dvd1.iso
  • fedora-coreos-40.20240416.3.1-live.x86_64.iso
  • linuxmint-21.3-cinnamon-64bit.iso
  • linuxmint-22-cinnamon-64bit.iso
  • Rocky-8.6-x86_64-dvd1.iso
  • Rocky-9.1-x86_64-dvd.iso
  • Rocky-9.3-x86_64-minimal.iso
  • ubuntu-22.04.2-live-server-amd64.iso
  • ubuntustudio-23.04-dvd-amd64.iso
  • ubuntustudio-23.10-dvd-amd64.iso

These are made available for download periodically by the software vendors. We simply copy them into /local/bis/iso/. Once they are there, we create an entry in /etc/fstab that loop-mounts the image read-only on /local/bis/rocky-9.3/iso/:

/local/bis/iso/Rocky-9.3-x86_64-minimal.iso /local/bis/rocky-9.3/iso iso9660 loop,ro 0 0

We then add that directory to the BIS web server with a symbolic link at <web root>/iso/rocky-9.3 pointing to /local/bis/rocky-9.3/iso/. We set the SELinux context on the directory to httpd_sys_rw_content_t so Apache can access it.

Some installers, like Linux Mint's, run off NFS volumes. We will NFS-export /local/bis/linuxmint-22-cinnamon/iso and the other distro directories read-only to all clients. Since this is information freely available on the Internet, that is not a security risk. The entry in /etc/exports is:

/local/bis/linuxmint-22-cinnamon/iso *(ro)

This is used by the parameter nfsroot=10.12.0.2:/local/bis/linuxmint-22-cinnamon/iso in the grub.cfg file. The IP address is required since DNS is not available yet when the parameter is used.

PXE boot images

PXE is a very simple way to bootstrap a machine from the network. We have decided to use UEFI boot loaders only, because those will support Secure Boot when we later choose to enable it. We need two programs to start up a machine from the network: shimx64.efi and grubx64.efi. They are provided by the packages shim-x64 and grub2-efi-x64 respectively. These are installed as part of the "Server With GUI" package group that we base our main server on. We use the boot loaders from the boot/install server for every Linux version, so we should not let our boot servers get too far behind the most recent version available. The absolute paths after install are:

  • /boot/efi/EFI/centos/shimx64.efi
  • /boot/efi/EFI/centos/grubx64.efi

These both need to be copied into /var/lib/tftpboot/efi. We reference shimx64.efi in the DHCP configuration file so that it gets downloaded and run. Shim in its turn downloads and executes grub, and then grub downloads and runs the Linux kernel.
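
For reference, a minimal sketch of the DHCP side, assuming ISC dhcpd; the server address is the one used elsewhere in this document, so adjust it to your network:

```
# /etc/dhcp/dhcpd.conf (fragment)
next-server 10.12.0.2;        # the TFTP server, i.e. the BIS
filename "efi/shimx64.efi";   # path relative to /var/lib/tftpboot
```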

Note: shimx64.efi is a signed executable, and we need a signed executable when we want to run the system with Secure Boot enabled. We will save enabling Secure Boot for another time.

Kernels and initramfses are normally somewhere on the installation CD, named vmlinuz and initrd or variations thereon. One is the Linux Kernel, the other is the root file system that the installation client uses to install the OS. PXE will download them using TFTP from /var/lib/tftpboot/efi/images/rocky-9.3/. That done, the Linux kernel is started, and the kernel will start the installation.

The Linux installer

The Linux installer is called Anaconda. It can work in two ways: interactive or automatic. In interactive mode, you type in the configuration: What you want the hostname to be, how you want your disks partitioned, what keyboard you have, timezones, what software you want installed and so on. In automatic mode, you work all of this out beforehand and put it in a file called a kickstart file. You put that kickstart file on the BIS' web server, and specify in the PXE boot configuration where to find it. If all goes well, Anaconda will get all the information it needs and install your system.

It is possible to install the entire system from start to finish, using only a kickstart file. You can specify scripts to run after the initial install, and those can be used to configure every aspect of the system. We choose not to do that. We will use kickstart to put a minimal image on the system and from there, we let Ansible handle the rest. Ansible has access to more advanced configuration options and information.

Local mirrors of repositories

In the Nerdhole, we will fairly often re-install lab machines and it would help if we didn't have to pull the entire installation over the Internet each time. We may not sync every repo, but at least CentOS Stream 8 and 9 will be locally stored.

Apache boot/install resources

FreeIPA requires Apache and will not work with NGINX, so we will use Apache for our web server needs. The base URL for the boot/install server will be http://bis.nerdhole.me.uk/bis/, where "bis" is an alias for the main server. The main web server will also serve the NSCHOOL website for these very documents.

For every distribution, there will be an ISO image file in /local/bis/iso, which will be mounted on /local/bis/*distro*/iso. So that the web server can get at it, a link to /local/bis/*distro*/iso will be created in /local/bis/www/iso/*distro*. For the most often used distributions, we will create a local mirror of their repositories. This will be in /local/bis/*distro*/repo, and a link will be made to /local/bis/www/repo/*distro*. Lesser-used distributions will install directly from the DVD and then continue on to the Internet for their remaining resources. The /local/bis/*distro*/repo directory will have its SELinux file context set to httpd_sys_rw_content_t, so that Apache can access it.

The URLs for the boot/install server are:

URL              Source              Contents
bis/ks           bis/www/ks          Kickstart files, one per host
bis/iso/distro   bis/distro/iso      Mounted ISO images for Stage 2
bis/repo/local   bis/www/repo        Downloaded RPMs for third-party software
bis/repo/distro  bis/distro/repos    Local mirrors of repositories (1)
bis/etc/distro   bis/distro/etc      Configuration files for repos (todo)
bis/config/      bis/www/config      Generated configuration documents

(1) We will keep these local mirrors up to date using a Cron job on the main server.
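
That cron job can be a simple nightly reposync per repository; a sketch, with illustrative repo IDs and paths:

```
# /etc/cron.d/bis-reposync (sketch)
# Refresh the local CentOS Stream 9 mirrors every night at 02:00.
0 2 * * * root reposync --repoid=baseos --repoid=appstream \
    --download-path=/local/bis/centos-stream-9/repos --download-metadata
```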

Distributions tend to have multiple repositories: baseos, appstream, epel, and so on. The packages are signed with GPG, and we need the public keys to these repositories to install them. The keys live in /etc/pki/rpm-gpg and need to be registered using rpm --import <Keyfile>. They are distributed in an RPM package that is installed by Kickstart.
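
As a sketch, a Local- repo definition for one of these mirrors might look like this; the baseurl and key path are assumptions based on the URL table above:

```
# Local-centos-stream-9-baseos.repo (sketch)
[Local-centos-stream-9-baseos]
name=Local mirror of CentOS Stream 9 BaseOS
baseurl=http://bis.nerdhole.me.uk/bis/repo/centos-stream-9/baseos
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
```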

Boot/Install PXE clients

We create a separate role for Linux clients using the boot/install server: pxeinstall. That role will have to do the following:

  1. Generate a kickstart file (hostname.centos-stream-9.ks) for the installation client depending on the distribution.
  2. Generate a grub.cfg-XXXXXXXX file that sends the machine to the boot/install server using that kickstart file.
  3. Either reboot the machine if it is a VM guest or tell the user to boot the thing off the network. (Use a conditional import of the "reboot" task).
  4. Wait for the machine to start its installation client. (It opens a specific port).
  5. Remove the grub.cfg-XXXXXXXX file to avoid install loops.
  6. Remove the machine's old SSH host keys from the main server's known_hosts file, so we don't get "SOMEONE IS DOING SOMETHING NASTY" warnings.
  7. Wait ca. 20 minutes for the machine to show up on SSH (Port 22). This should be more than enough for a local install.
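
Steps 4, 5, and 7 above might be sketched as Ansible tasks like the following. The installer port 5901 and the bis_server variable are assumptions for illustration; check which port your installation client actually opens.

```
# Sketch of steps 4, 5 and 7 (port 5901 and bis_server are assumptions)
- name: Wait for the machine to start its installation client
  ansible.builtin.wait_for:
    host: "{{ inventory_hostname }}"
    port: 5901
    timeout: 600
  delegate_to: localhost

- name: Remove the machine-specific grub.cfg to avoid install loops
  ansible.builtin.file:
    path: "/var/lib/tftpboot/efi/grub.cfg-{{ local_pxefile }}"
    state: absent
  delegate_to: "{{ bis_server }}"

- name: Wait ca. 20 minutes for the machine to show up on SSH
  ansible.builtin.wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    timeout: 1200
  delegate_to: localhost
```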

Working parameters

To install Linux on a machine, we need the following information:

Variable       Location            Contents
host_os        OS group            The label of the operating system to install
install_drive  Host variable       The disk to use as installation disk
local_pxefile  Inventory 00-all    Suffix for the grub.cfg file to use
macaddress     Inventory 00-all    MAC address to configure into DHCP
bis            group_vars/bis.yml  Boot/install server parameter dictionary
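
As a sketch, the per-host side of this could look as follows; the hostname, MAC address and values are made up:

```
# host_vars/testvm1.yml (hypothetical values)
install_drive: /dev/vda

# inventory/00-all (hypothetical values)
testvm1 macaddress=52:54:00:aa:bb:cc local_pxefile=F0F0F0F0
```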

OS Group variables in the inventory

OS groups have relatively few variables. If we had to add variables to the group_vars, we would need a file for each of these groups, which is inconvenient. We will create a file inventory/02-group-vars to define small lists of group variables. The OS groups (as in os_centos_stream_9) contain only the label of the OS installed on the system, like this:

[os_centos_stream_9:vars]
host_os=centos-stream-9

To avoid problems with "Illegal characters" in Ansible group names, all dashes will be converted to underscores. A BIS client can only belong to one OS group.
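
The dash-to-underscore conversion is mechanical; a sketch in shell:

```shell
#!/bin/sh
# Turn an OS label into a legal Ansible group name:
# prefix with os_ and replace dashes with underscores.
label="centos-stream-9"
group="os_$(printf '%s' "$label" | tr '-' '_')"
echo "$group"   # os_centos_stream_9
```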

Generating the grub.cfg-F0F0F0F0 file

While the default grub.cfg file has menu options for the manual graphical install of all supported distros, the machine-specific file will have only a single menu item to re-install the machine using PXE and Kickstart.
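
Such a file might contain a single entry along these lines, built from linuxefi_automatic with inst.ks appended as described earlier; the hostname in the kickstart URL is made up:

```
# grub.cfg-F0F0F0F0 (sketch)
set timeout=5
menuentry 'Reinstall CentOS Stream 9 (kickstart)' {
    linuxefi  efi/centos-stream-9/vmlinuz quiet inst.ks=http://bis.nerdhole.me.uk/bis/ks/testvm1.centos-stream-9.ks
    initrdefi efi/centos-stream-9/initrd.img
}
```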

Generating the kickstart file

The pxeinstall role includes templates for all distros that use kickstart. For CentOS Stream 9, it is: centos-stream-9-minimal.ks. The minimal kickstart file is the default for any distribution. Any kickstart installation requires a local mirror of needed repositories.

The minimal kickstart file does the following:

  1. Selects the text-mode installer.
  2. Sets all locale and keyboard parameters to the UK.
  3. Agrees to the EULA.
  4. Sets the network configuration to use DHCP.
  5. Sets the install sources:
    • The base URL is bis/centos-stream-9/iso @@@ check this
    • Other repos include Local-centos-stream-9-baseos, Local-centos-stream-9-appstream, and so on. The Local prefix distinguishes them from the Internet resident repositories. Those will be disabled later.
  6. Specifies which packages to install and which not to install.
    • Include the Minimal Environment and Standard package groups, as well as epel-release for the EPEL GPG keys.
    • Exclude Gnome's initial setup, and Cockpit.
  7. Configure the disks
    • Ignore all disks except the designated OS disk, which is a parameter in the host_vars/hostname.yml file, with a default of /dev/sda.
    • Install the boot loader in the OS disk's Master Boot Record
    • Wipe all the partitions on the OS disk, but leave the data disks alone.
    • Create a 600MB /boot/efi partition.
    • Create a 1024MB /boot partition.
    • Assign the rest of the OS disk to a single partition named pv.1 and turn that into an LVM Physical Volume.
    • Create a "rootvg" volume group using the pv.1 physical volume.
    • Create an 8GB swap logical volume in rootvg named swap.
    • Assign the rest of rootvg to a logical volume named root, format it with an xfs file system, then assign it to the / file system.
  8. Set the root password to a well-known value. Allow root to log in using SSH. This is for troubleshooting during installs and will be disabled later.
  9. Create the temporary System Builder user
    • User name is builder
    • UID is 1000
    • GID 100 (users)
    • Add wheel as an extra group for sudo privileges
    • Home directory of /builder to keep it out of the /home directory which will be remote
    • Shell is Bash
  10. In post-install:
    • Import all the installed GPG keys with rpm --import.
    • Disable all non-local repositories by editing their .repo files with sed.
    • Clean the repositories using dnf clean.
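
The disk and user steps above can be sketched as a kickstart fragment; sda is the /dev/sda default mentioned in step 7, and the password hash is a placeholder:

```
# centos-stream-9-minimal.ks (fragment, sketch)
text
eula --agreed
network --bootproto=dhcp

# Step 7: touch only the designated OS disk
ignoredisk --only-use=sda
zerombr
clearpart --all --initlabel --drives=sda
part /boot/efi --fstype=efi --size=600 --ondisk=sda
part /boot --fstype=xfs --size=1024 --ondisk=sda
part pv.1 --size=1 --grow --ondisk=sda
volgroup rootvg pv.1
logvol swap --vgname=rootvg --name=swap --size=8192 --fstype=swap
logvol / --vgname=rootvg --name=root --size=1 --grow --fstype=xfs

# Step 9: the temporary System Builder user
user --name=builder --uid=1000 --gid=100 --groups=wheel \
    --homedir=/builder --shell=/bin/bash --iscrypted --password=<hash>
```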