Ansible installation

This is a description of the N-SCHOOL Ansible installation. We keep as many options open as possible while still configuring features in a predictable way. We have the following groups of playbooks:

  • Environment Bootstrap - Sets up the main server as a boot/install server, generates the IP management files, installs authentication and file shares so that we have an office environment to work in.
  • Host Maintenance - Run updates, reinstall and reconfigure software.
  • Laboratory - Testing ground for new developments that will generate ideas to be included into Host Maintenance.

Software

The Ansible software uses a pluriform distribution model consisting of RPM packages and Ansible-specific collections.

RPM Packages

The Ansible system needs the following RPMs to be installed:

  • ansible-core
  • ansible-collection-redhat-rhel_mgmt
  • ansible-freeipa (Used to set up the initial userbase).

These are in the appstream repository and can be installed by Kickstart.

Collections

As installed, Ansible lacks many features essential for system administration. We need to download the ansible.posix, community.libvirt, and community.general collections from Ansible Galaxy. This is done with the following command:

ansible-galaxy collection install \
    ansible.posix \
    community.general \
    community.libvirt \
    -p /usr/share/ansible/collections/ansible_collections

Environment

Every finished machine is to be fully defined in Ansible variable files. For instance, if we make a web server or a database server, all the configuration (URLs, databases, and so on) must be in configuration files on the main server. We will define most of the configuration in Ansible files, with perhaps some extra configuration in application-specific files.

Supported applications

We support the following applications through Ansible automation:

  • NSCHOOL Main Server - The first machine in the network. Master name server, Ansible, LDAP/Kerberos authentication, first file server for home directories, web server for main domain.
  • Boot/install server - Server supporting network booting and installing through PXE, HTTP, NFS, and TFTP. Contains the installation media for all the distributions we support.
  • Kernel Virtual Machines (KVM) - We can build the KVM host automatically, define virtual machines on them, and install them using PXE, Kickstart, and Ansible.

Directory structure

These are the directories we use for Ansible:

  • /local/nschool/inventory/ - Ansible inventory for this environment with group definitions
  • /local/nschool/ansible/ - Contains the NSCHOOL playbooks and roles.
  • /local/nschool/ansible/group_vars/all/ - Ansible group variables.
  • /local/nschool/ansible/host_vars/ - Ansible host variables.
  • /local/nschool/ansible/roles/ - Sets of plays to configure aspects of our machines.

Inventory

At the core of our environment is the host inventory. From this inventory we can generate instructions for building the rest of the network. The main inventory is a file called 00-all in the default Ansible directory. We support only this specific INI-style format, because various scripts and tools rely on it.

[all]
sypha.nerdhole.me.uk main_ip=10.12.0.2 local_pxefile=0A0C0002 macaddress=d8:9e:f3:91:8a:d8

The fields in this table are:

  • Fully qualified hostname. No short names are allowed as we use this to generate DNS zones.
  • Main IP - The IP address on the host's installation interface.
  • Local PXE file - This is used as part of the filename to direct network installs to the proper files, and is the main IP address in hexadecimal. The value is "default" if the machine will never use PXE.
  • MAC Address - The hardware address to be configured into DHCP to give the machine its IP address and boot information. Either taken from the Ethernet hardware or assigned by us for virtual machines.
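The Local PXE file value can be derived mechanically from the main IP. As a sketch (the task and play context are illustrative, not part of the environment), a playbook could compute it like this:

```yaml
# Illustrative only: derive the hexadecimal PXE filename from main_ip.
# For main_ip 10.12.0.2 this yields 0A0C0002, matching the inventory above.
- name: Derive local_pxefile from the host's main IP
  ansible.builtin.set_fact:
    local_pxefile: >-
      {{ '%02X%02X%02X%02X' | format(main_ip.split('.')[0] | int,
                                     main_ip.split('.')[1] | int,
                                     main_ip.split('.')[2] | int,
                                     main_ip.split('.')[3] | int) }}
```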

The second file in the inventory is called 01-groups. It determines the character of our machines, and has the following groups defined:

  • all - Every single machine in the network. This is where we put global variables in group_vars.
  • main - The main server. Only ever contains one entry.
  • os_linux - All machines running some version of Linux. Child groups exist for the different Linux distributions.
  • os_centos9s - Machines running CentOS Stream 9. An example; there is one group for each distribution.
  • os_ubuntu - Machines running some version of Ubuntu. No current automation exists for Ubuntu.
  • boot_install_servers - Machines capable of serving OS installs. We can support multiple BIS servers.
  • workstations - All the desktop machines that users are working on. Laptops are not workstations.
  • laptops - Mobile computers running a Linux distribution. Able to operate without a main server.
  • printers - Hostnames of the printers. Here to exclude them from playbook runs.
  • kvmhosts - KVM virtual machine hosts. This is a function that can be added to a workstation.
  • kvmguests - KVM virtual machines. Child groups have VMs on specific hosts.
  • kvmguests_algernon - Virtual machines on the Algernon workstation. Others exist for other KVM hosts.
  • okd_hosts - VM hosts hosting OpenShift. Currently only Algernon.
  • okd_nodes - OpenShift nodes running Fedora CoreOS. A variable indicates the type of OpenShift node.
  • okd_loadbalancers - Standard Linux OpenShift load balancer. Multi-homed into Shiftnet.

To these groups, we attach variables using Ansible's group_vars mechanism. To turn any capable machine into an application server, we need only add it to the appropriate group; the installation and maintenance playbooks will then know to operate on it. Host-specific information can be added either in host variables or, for the simplest of things, directly as a variable in the group.
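To illustrate (the group assignments shown are hypothetical), 01-groups is an ordinary INI-style inventory file that only assigns hosts to groups:

```ini
# Hypothetical excerpt of /local/nschool/inventory/01-groups
[os_centos9s]
sypha.nerdhole.me.uk
algernon.nerdhole.me.uk

[workstations]
algernon.nerdhole.me.uk

[kvmhosts]
algernon.nerdhole.me.uk

[boot_install_servers]
sypha.nerdhole.me.uk
```

Adding algernon to boot_install_servers, for example, would be all it takes for the BIS playbooks to operate on it as well.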

Variables

Ansible lets you define variables in no less than twenty places. Each of these places has a priority, but I am not at all sure these priorities won't shift with new Ansible versions, so I will limit myself in where I put these variables.

I will class the variables as follows:

  • Top variables - These variables are usually simple values assigned to a host. Examples include main IP address, MAC address, KVM host for virtual machines. They can be assigned either in the inventory or in host_vars files for the host itself and accessed anywhere.
  • Environment information - These are dictionaries containing environmental information such as domain names, IP addresses, names of certain servers, and the like.

I will have three places where I put my variables:

  1. The inventory - The inventory resides in /local/nschool/inventory, in files named 00-all, 01-groups, and 02-group-vars. The file 01-groups determines group membership for all machines. The file 02-group-vars gathers several simple top-level variables in one place.
  2. Group variables - In nschool/ansible/group_vars/all/ I will put stanza files for the more complex variables. See below for details.
  3. Host variables - In nschool/ansible/host_vars/hostname.example.com.yml, I will put variables specific to a single host.

We will have the following variable files, relative to /local/nschool/ansible/group_vars:

  • all/00-nschool.yml - Public NSCHOOL variables.
  • all/00-nschool-vault.yml - Private NSCHOOL variables such as application passwords.
  • all/01-bis.yml - Information about the boot/install servers.
  • all/02-kvmhosts.yml - Information for KVM hosts.
  • all/02-kvmguests.yml - Information for KVM guests.

The top level variables will be dictionaries, defined like this:

nschool:
  environment:
    name: "Nerdhole Enterprises"
    main_domain: nerdhole.me.uk
    main_server_fqdn: sypha.nerdhole.me.uk
    main_server_ip: 10.12.0.2
    main_server_os: centos-stream-9

When bootstrapping, the installation playbook can access the name of the main server as {{nschool.environment.main_server_fqdn}}. The top-level variables all occupy the same namespace. We can define them in global variables, in group vars, or in host vars.
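As a sketch, any play can then reference the nested values directly:

```yaml
# Illustrative only: read a value from the nschool dictionary.
- name: Show which machine is the main server
  ansible.builtin.debug:
    msg: "The main server is {{ nschool.environment.main_server_fqdn }}"
```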

Encrypted variables

In several cases, we will have to supply passwords to our software, most notably FreeIPA. We do not want those to be out in the open. We will encrypt the sensitive variables with Ansible Vault. Encrypted variables go into a separate dictionary. As a naming convention, if the normal variable is called nschool, the encrypted variable will be called nschool_vault.

nschool_vault:
  ipa:
    directory_manager_password: "*****"
    admin_password: "*****"
    kerberos_password: "*****"

We will encrypt this file using a vault key named after the role, such as nschool. The top-level dictionary is named rolename_vault.
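A play consumes the vaulted values like any other variable. A sketch of the FreeIPA installation step (the exact ipa-server-install options used by the role are an assumption here):

```yaml
# Illustrative only: pass vaulted passwords to ipa-server-install.
- name: Install the FreeIPA server (sketch)
  ansible.builtin.command: >
    ipa-server-install -U
    --ds-password {{ nschool_vault.ipa.directory_manager_password }}
    --admin-password {{ nschool_vault.ipa.admin_password }}
  no_log: true  # keep the passwords out of the Ansible logs
```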

Application variables

Whenever I create a new application for installation, I will define a new top level variable for it, which will be a dictionary.

# Apache web server
apache:
  packages:
  - httpd
  certificates:
    - name: webroot
  fs:
    - name: wwwdb
      desc: "Database for web server"
      lvname: wwwdblv
      vg: datavg
      mountpoint: /local/wwwdb
      size: 1024G
      owner: apache
      group: apache
      mode: "0755"

The variables underneath the top level depend on application requirements, but we have a few common variable layouts, such as the fs stanza that defines one or more file systems needed by the application.

Host variables

The host variables are mainly used at system installation time, to configure certain aspects of the system such as where to install the operating system and which of its local disks, if any, are data disks. They are stored in /local/nschool/ansible/host_vars/ in a file named after the fully qualified hostname, as in /local/nschool/ansible/host_vars/sypha.nerdhole.me.uk.yml:

is_uefi: true
install_drive: /dev/disk/by-path/pci-0000:00:17.0-ata-5

local_storage:
  vgs:
    datavg:
      label: datavg
      description: "Dual 2TB disks used in stripes"
      size_gb: 0  # A size of 0 means as large as the disks
      disks:
      - /dev/disk/by-path/pci-0000:01:00.1-ata-1
      - /dev/disk/by-path/pci-0000:01:00.1-ata-2
    backupvg:
      label: backupvg
      description: "Large 11TB disk used for backups"
      size_gb: 0  # A size of 0 means as large as the disks
      disks:
      - /dev/disk/by-path/pci-0000:01:00.1-ata-6
  fs:
    - name: custom
      desc: "Custom file system for this machine only."
      lvname: customlv
      vg: datavg
      mountpoint: /local/custom
      size: 10G
      owner: root
      group: users
      mode: "0755"
      selinux_fcontext: httpd_sys_rw_content_t # Future expansion

The is_uefi variable is always set to true, as we will only be supporting UEFI-capable machines. The install_drive is the disk where we install Linux and from which the machine will eventually boot.

Next is a list of the volume groups present on the system and the disks they contain. In the past, we used device names such as /dev/sda and /dev/vda, but these can change when disks are added or removed, whereas the /dev/disk/by-path names do not. We use these names only to add the disks to a volume group; after that, we can address our data volumes through /dev/mapper/vgname-lvname style names. This is the proper place to define a machine's available storage.

For virtual machines, the volumes will be created in QCOW2 format on the virtual machine host, and all the disks will have the same size. Name them /dev/vdb, /dev/vdc, and so on, and make sure they are unique. For physical machines, this stanza should reflect the disks actually installed in the system. Either way, the disks will be turned into Physical Volumes using pvcreate and added to the volume group specified.

The next stanza specifies the file systems on the machine that are not defined in some application or other. These tend to be user data, special database space and so on.

We can add more host specific information here, but that should be uncommon as most information will be defined in group variables. Examples include a list of web applications hosted on this machine or the name of an exported file system.

NSCHOOL variables

These variables determine the configuration of the /local file system, some identifying information for the main server, and the name of the environment at large. They are stored in ansible/group_vars/all/00-nschool.yml and in ansible/group_vars/all/00-nschool-vault.yml. Because any NSCHOOL machine may need access to this information, it is stored in the all section of the group_vars.

nschool:
  fs:
  - name: local
    desc: "Large storage tank for local file systems"
    lvname: locallv
    vg: datavg
    mountpoint: /local
    size: 1024G
    owner: root
    group: root
    mode: "0755"

  environment:
    name: "Nerdhole Enterprises"
    locality: "Medway"
    state: "Kent"
    organizationalunit: "N-School"
    commonname: "Menno Willemse"
    emailaddress: "mw@nerdhole.me.uk"
    main_domain: nerdhole.me.uk
    main_server_fqdn: sypha.nerdhole.me.uk
    main_server_ip: 10.12.0.2
    main_server_os: centos-stream-9
    nschool_home: /local/nschool
    first_user_id: 1725200000

  aliases:
    "sypha.nerdhole.me.uk":
    - main
    - ns
    - kerberos
    - ldap
    - bis
    - fs
    - git
    - www
    "paya.nerdhole.me.uk":
    - second
    "algernon.nerdhole.me.uk":
    - backup

User and group variables

We will store the users and groups in a variable structure. We will have a set of groups and users for every domain. We store these variables in the 00-nschool-vault.yml file in the group variables for "all".

The variables are organised as follows:

nschool_vault:
  org:
    "nerdhole.me.uk":
       name: "Nerdhole Enterprises"
       groups:
         users:
           gname: users
           gid: 100
           comments: "Everybody's primary group"
         sysadm:
           gname: sysadm
           gid: 1001
           comments: "Administrative users with system privileges"
           users:
           - js
         nerdhole:
           gname: nerdhole
           gid: 10001
           comments: "Members of the NerdHole family"
           users:
           - js
           - ljs

       users:
         js:
           surname: "Smith"
           givenname: "John"
           uname: js
           uid: 1725200001
           group: users
         ljs:
           surname: "Smith"
           givenname: "Little John"
           uname: ljs
           uid: 1725200004
           group: users

The community.general.ipa_user module requires a surname and a given name, which will be combined into a single GECOS field. Since community.general.ipa_user cannot add a user to multiple groups, we have organised the data the other way around: the users are listed under their groups.
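With the users listed under their groups, a single loop can create each group and attach its members. A sketch, assuming the admin password lives in nschool_vault as described above:

```yaml
# Illustrative only: create the groups of one organisation and their members.
- name: Create each group and add its listed users
  community.general.ipa_group:
    name: "{{ item.value.gname }}"
    gidnumber: "{{ item.value.gid }}"
    user: "{{ item.value.users | default([]) }}"
    ipa_host: "{{ nschool.environment.main_server_fqdn }}"
    ipa_user: admin
    ipa_pass: "{{ nschool_vault.ipa.admin_password }}"
  loop: "{{ nschool_vault.org['nerdhole.me.uk'].groups | dict2items }}"
```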

BIS variables

These are the variables for boot/install servers. Only the CentOS Stream 9 OS is shown for brevity.

##########################################
### BOOT/INSTALL MAIN SERVER VARIABLES ###
##########################################
# These are the variables used by the boot
# install server.
bis:

  # File systems for a main server.
  fs:
    - name: localbis
      desc: "Boot/install server data"
      lvname: localbislv
      vg: datavg
      mountpoint: /local/bis
      size: 512G
      owner: root
      group: root
      mode: "0755"

  # Operating systems supported
  os:
    "centos-stream-9":
      label: centos-stream-9
      name: CentOS Stream 9
      iso: CentOS-Stream-9-latest-x86_64-dvd1.iso
      initrd_file: images/pxeboot/initrd.img
      vmlinuz_file: images/pxeboot/vmlinuz
      linuxefi: "efi/centos-stream-9/vmlinuz inst.stage2=http://bis.nerdhole.me.uk/bis/iso/centos-stream-9 quiet"
      initrdefi: "efi/centos-stream-9/initrd.img"
      linuxefi_automatic: "efi/centos-stream-9/vmlinuz quiet"
      initrdefi_automatic: "efi/centos-stream-9/initrd.img"
      method: kickstart
    ...

We have one file system /local/bis that contains all the installation media. The bis.os dictionary contains the information for all supported Linux distributions. I will include the version (CentOS 8, CentOS 9, Rocky 9.2) in the name of the distribution.
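Because bis.os is a dictionary keyed on the distribution label, the role can simply loop over it. A sketch (the exact subdirectory layout is an assumption):

```yaml
# Illustrative only: create an ISO mountpoint per supported distribution.
- name: Create the ISO mountpoint for each OS in bis.os
  ansible.builtin.file:
    path: "/local/bis/{{ item.key }}/iso"
    state: directory
    mode: "0755"
  loop: "{{ bis.os | dict2items }}"
```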

Ansible roles

In our Ansible environment, we have the following roles:

  • nschool - The environment bootstrap role for the NSCHOOL environment. This runs on the main server and configures the designated main server as a DNS, DHCP, NFS, Web, Authentication, Home, and Data server.
  • bis - Boot/install server. Configures the resources needed to boot and install several different versions of Linux: TFTP, Web, NFS. It will draw on a collection of CD ISO images and make them available for PXE boot clients.
  • kvmhost - Installs the (physical) machine to host virtual machines using Kernel Virtual Machines (KVM).
  • kvmguest - Sets up a freshly installed Kernel Virtual Machine guest.
  • pxeinstall - Configures a boot/install server so that a PC client or virtual machine will boot off it and reinstall its operating system from scratch, leaving only the existing data on the machine.
  • ipaclient - Configures the machine as a FreeIPA client of the main server.
  • workstation - Configures a machine as a workstation.
  • gdm_background - Sets a background image for the GDM login screen appropriate for the machine (A picture of Sypha Belnades for Sypha, for example).
  • storage - Configures for each machine which disks belong to which volume group.
  • filesystems - Configures logical volumes and file systems
  • ipman - Regenerates the DNS and DHCP configuration based on the inventory

The kvmguest role will provision a virtual machine and keep it turned off. Provisioning a physical machine is a matter of entering the relevant data into the inventory and configuring the BIOS to allow booting in non-secure UEFI mode. The pxeinstall role will either use libvirt to boot the machine, or prompt the user to start the physical machine.

xxxToDo: Find a way to gather up the fileystems from all applications and add up their sizes in order to arrive at a desired storage space.

Planned roles include:

  • update_local_mirrors - Generates a script to update the local OS mirrors from the Internet

The nschool role

The nschool role performs the following tasks:

  1. Installs the required software
    • Ansible core, bind, DHCP server, Git, Apache, NFS utilities, Perl 5, Python 3, IPA server, IPA client, Yum utilities
  2. Uses Pip to install the mkdocs package that generates the Nerdhole website.
  3. Opens up needed firewall ports
    • HTTP, HTTPS, NFS, DNS, DHCP, FreeIPA 4 ports
  4. Creates the volume groups and file systems
    • Adds designated data disks to the datavg.
    • Creates the Main Server's logical volumes, formats them with XFS, and mounts them.
  5. Sets up the domain's main web site - the one you are reading this information on
  6. Configures the name server and the DHCP server
    • Generates the local hostfile
    • Creates a directory for the main DNS databases under /var/named/
    • Uses a Perl script to generate the DNS and DHCP configuration
    • Makes Named listen on all interfaces
    • Allows all comers to access the name server
  7. Installs and configures the FreeIPA server
    • Runs ipa-server-install with the needed parameters
    • Creates the local directories and exports them using NFS V4
    • Enables and starts NFS
    • Configures /home and /data.
  8. Creates the users and groups in IPA, and their home directories
    • Adds the initial users to IPA with primary group users, and GID 100
    • Adds the users to their designated groups (including sysadm)
    • Lets sysadmins gain root access on all hosts with sudo

Once this is done, the main server is available with the users configured as they should be. We can now start using the actual system administrators' usernames and no longer have to use the "Builder" user.

xxxToDo: Harvest the various roles from this large playbook, into its constituent parts: Network server, BIS, and the like.

The bis role

This will install a boot/install server. This role has been designed to support multiple boot/install servers in the same network. The role will do the following:

  1. Installs the required software
  2. Opens up the needed ports in the firewall
  3. Configures the storage from the bis.fs variable by including the filesystems role.
  4. Creates the /local/bis subdirectories for storing configuration files, installation DVDs, and repositories.
  5. For each supported distribution, creates the supporting directory structure
  6. Mounts the installation DVDs onto the ISO mountpoints.
  7. Exports the ISO images using NFS for Debian/Ubuntu style installers
  8. Enables and starts the Apache web server
  9. Configures SELinux so that Apache can access the mirrored repositories and the other BIS files:
    • /local/bis/www - ISO images and kickstart files
    • /local/bis/*/repos - Mirrored repositories.
  10. Configures Apache to serve /local/bis/www as http://bis/bis/
  11. Creates symbolic links from /local/bis/distro/iso to /local/bis/www/iso/
  12. Creates symbolic links from /local/bis/distro/repos to /local/bis/www/repo/distro
  13. Creates /var/lib/tftpboot/efi for the Linux kernels and ramdisk file systems
  14. Copies the files initramfs and vmlinuz, or variations thereon, to the TFTP server
  15. Generates the default grub.cfg for PXE boot, putting every distro's manual installer into the menu.
  16. Copies shimx64 and grubx64 to the TFTP server
  17. Enables and starts the TFTP server.

At this point the BIS server is able to start the installers of all the supported Linux distributions. To automatically install a machine using Kickstart, we use another role named pxeinstall, see below.

The kvmhost role

The kvmhost role installs Kernel Virtual Machine (KVM) on a physical machine. We can then define and install virtual machines running a variety of Linux versions. This is done as follows:

  1. Installs the Virtualization Host group, virt-manager, virt-install, and the libguestfs-tools.
  2. Creates the needed file systems (/local/kvm, 500GB) using the filesystems role.
  3. Creates an extra swap space on the datavg to extend memory for the virtual machines, formats it, and activates it.
  4. Enables and starts the libvirtd subsystem.
  5. Uses a script named "configure_bridge" to move the host's IP address from the Ethernet interface to the bridge interface that is used by VMs to access the network.
  6. Creates the /local/kvm/storage directory, configures it as a storage pool for VM storage, and starts it using the virsh pool-start command.

When this is done, the KVM host is ready to have virtual machines defined and run on it.
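Step 6 can be sketched with virsh commands wrapped in Ansible (the pool name is an assumption, and these commands are not idempotent; shown for clarity only):

```yaml
# Illustrative only: define and start a directory-backed storage pool.
- name: Define, start, and autostart the VM storage pool
  ansible.builtin.command: "{{ item }}"
  loop:
    - virsh pool-define-as vmpool dir --target /local/kvm/storage
    - virsh pool-start vmpool
    - virsh pool-autostart vmpool
```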

The kvmguest role

The kvmguest role is destructive. It removes a KVM guest and then recreates it using an XML template, including its operating system disk. However, it will not remove the machine's data disks unless the purge option is used. With that done, the pxeinstall role can take over to put the operating system on the virtual machine. It does so in the following steps:

  1. Uninstall function
    • Forcibly shuts down the virtual machine using the destroyed option to the virt module.
    • Removes the virtual machine's rootvg-00 file from the KVM host.
    • Undefines (deletes) the virtual machine from the KVM host (using the --nvram option because our machines have non-volatile RAM files).
  2. Installation function
    • Creates the /local/kvm/xml directory if it doesn't exist.
    • Generates an XML definition file for the host depending on its class (basic by default).
    • Creates the host's rootvg-00 file as a qcow2 disk image file and sets the permissions correctly.
    • Defines the virtual machine using the virsh define command.

With that done, we can now start the machine and because it has no operating system on its OS disk, it will boot from its network interface, download the PXE installation files and install itself.
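The installation half can be sketched as follows (the image path, size, and the kvm_host variable are assumptions based on the description above):

```yaml
# Illustrative only: create the OS disk image and define the guest.
- name: Create the guest's rootvg-00 file as a qcow2 image
  ansible.builtin.command:
    cmd: qemu-img create -f qcow2 /local/kvm/storage/{{ inventory_hostname }}-rootvg-00.qcow2 50G
    creates: /local/kvm/storage/{{ inventory_hostname }}-rootvg-00.qcow2
  delegate_to: "{{ kvm_host }}"  # kvm_host: hypothetical top variable naming the VM host

- name: Define the virtual machine from its generated XML
  ansible.builtin.command:
    cmd: virsh define /local/kvm/xml/{{ inventory_hostname }}.xml
  delegate_to: "{{ kvm_host }}"
```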

The pxeinstall role

The pxeinstall role is designed to take any machine, virtual or otherwise, from bare metal to a minimal Linux image that we can then further install using Ansible. This role needs to be run as builder because a bare metal machine does not know the IPA users yet. It does the following:

  1. Creates a grub.cfg-XXXXXXXX file that instructs the machine to start the installation. The suffix is the machine's IP address in hexadecimal.
  2. Generates a kickstart file for the host on the boot/install server.
  3. Reboots the machine from the network, in one of two ways:
    • For a physical machine, it instructs the user to boot the machine off the network.
    • For a virtual machine, it shuts down the machine forcibly and restarts it. The "kvmguest" role should already have wiped the machine's rootvg disk.
    • Note - Each of these variants will wait for ten minutes for the installation client to come up (Port 111/tcp is open) and fail if that does not happen.
  4. Removes the grub.cfg-XXXXXXXX file so that, if something goes wrong, the machine will not keep trying to reinstall.
  5. Deletes the host's IPA record from the main server.
  6. Removes the host's SSH keys using ssh-keygen -R.
  7. Waits for the host to complete the installation of the operating system and open its SSH port (22). The maximum time for this is an hour.
  8. Uses ssh-keyscan to read the host's new SSH key.
  9. Installs some needed tools such as exfat support.

When this completes, the machine is ready to be further configured using subsequent Ansible playbooks.
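The two waits described above map directly onto ansible.builtin.wait_for; a sketch using the ports and timeouts from the text:

```yaml
# Illustrative only: wait for the installer, then for the installed system.
- name: Wait up to ten minutes for the installation client (port 111)
  ansible.builtin.wait_for:
    host: "{{ inventory_hostname }}"
    port: 111
    timeout: 600
  delegate_to: localhost

- name: Wait up to an hour for the installed system's SSH port
  ansible.builtin.wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    timeout: 3600
  delegate_to: localhost
```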

The ipaclient role

The ipaclient role subscribes the host to IPA on the main server, and also creates the /home and /data filesystems to be mounted from the main server. It does the following:

  1. Installs the ipa-client package.
  2. Uses the ipa-client-install command to configure the IPA client.
  3. Configures SELinux to allow NFS-mounted home directories.
  4. Creates the /home and /data mount points and permanently mounts the NFS file systems thereon.

Once this is done all the IPA users will be able to log in on the machine and they will have access to their home directories.
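Step 4 can be sketched like this (the export paths on the main server are assumptions):

```yaml
# Illustrative only: mount /home and /data from the main server over NFSv4.
- name: Mount the shared file systems from the main server
  ansible.posix.mount:
    src: "{{ nschool.environment.main_server_fqdn }}:/{{ item }}"
    path: "/{{ item }}"
    fstype: nfs4
    state: mounted
  loop:
    - home
    - data
```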

The workstation role

The workstation role takes a machine (usually physical) and turns it into an NSCHOOL workstation. The largest part of this is to install the software expected on a workstation such as office productivity applications, video and audio editing software, graphics packages, and so on. The role does the following:

  1. Installs the "Workstation" group, Internet applications, and the office suite.
  2. Uninstalls a few unwanted packages that are installed by default, such as Cockpit and the Gnome initial setup.
  3. Configures a variable in SELinux to keep colord from generating alerts.
  4. Sets the default target to "graphical.target" so users get a graphical login.
  5. Adds the RPM Fusion repository to the Yum/DNF configuration.
  6. Installs a list of video codecs and libraries so Firefox can understand certain video formats.
  7. Installs certain desirable applications including Audacity, Blender, Inkscape, and Thunderbird.

With this done, the machine is ready to carry out the users' day-to-day tasks.

The gdm_background role

The gdm_background role copies a suitable background image to the machine and sets it as the background to the graphical login. It does so in the following steps:

  1. Install the glib2 library from the repos.
  2. Copy the host's background image to the host.
  3. Set the GDM background using a script written by the user DimaZirix on Github.

Now, users can see clearly what they are logging into. This only works for machines using GDM.

The storage role

The storage role bases itself on the machine's host variables (local_storage.vgs) and assigns the specified disks to their proper volume groups. This is an example of a physical machine's configuration:

is_uefi: true
install_drive: /dev/disk/by-path/pci-0000:00:1f.2-ata-1

local_storage:
  vgs:
    datavg:
      label: datavg
      description: "Local disk used for data"
      size_gb: 0  # A size of 0 means as large as the disks
      disks:
      - /dev/disk/by-path/pci-0000:00:1f.5-ata-1

Note that for a physical machine, the disk names /dev/sda, /dev/sdb and so on can change when disks are added or removed: one day the machine is booting from /dev/sda, the next from /dev/sdb. Booting still works because OS volumes are identified by a UUID. However, we need to address the disks before they get a UUID, so we use the physical address of the controller and the location of the disk on it. The entries under /dev/disk/by-path/ will stay the same as long as the disk is left in the system.
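The core of the role can be sketched as a single loop over local_storage.vgs:

```yaml
# Illustrative only: build each volume group from its by-path disks.
- name: Create each volume group from the disks listed in the host variables
  community.general.lvg:
    vg: "{{ item.key }}"
    pvs: "{{ item.value.disks }}"
  loop: "{{ local_storage.vgs | dict2items }}"
```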

For a virtual machine, we do have full control over the disk names. The /dev/vda volume will always be the OS disk. When adding additional disks, we specify their names in the virsh command. For a virtual machine (such as labo100), the optional host file looks like this:

is_uefi: true
install_drive: /dev/vda

local_storage:
  vgs:
    datavg:
      label: datavg
      description: "Local disks used for data"
      size_gb: 10
      disks:
      - /dev/vdb
      - /dev/vdc
    backupvg:
      label: backupvg
      description: "Volume group for backups"
      size_gb: 25
      disks:
      - /dev/vdd

This will instruct the installer to create two 10GB disks named /dev/vdb and /dev/vdc to put in datavg and one 25GB disk to put in backupvg.

The filesystems role

The filesystems role creates the host's file systems based on the "fs" dictionary passed to it as a parameter. The file systems parameter looks like this:

local_storage:
  fs:
    - name: wwwdb
      desc: "Database for web server"
      lvname: wwwdblv
      vg: datavg
      mountpoint: /local/wwwdb
      size: 10G
      owner: apache
      group: apache
      mode: "0755"

The filesystems role supports the install, uninstall, and purge actions. Install will build the file systems. Uninstall will unmount them so that they are not remounted at reboot, but leaves the logical volumes intact so that they will reappear when the file system is reinstalled. The purge option removes all the data. The steps are as follows:

  1. Uninstallation
    • Uses the ansible.posix.mount module to unmount the file system and remove it from fstab.
    • Note this block is also used by the "purge" option.
  2. Purge
    • Deletes the logical volumes using the community.general.lvol module.
  3. Installation
    • Creates the logical volumes in the existing VG using community.general.lvol
    • Formats the file system with XFS (community.general.filesystem)
    • Mounts the file system in its place (ansible.posix.mount)
    • Sets ownership, group ownership, and permissions.
    • Note: File system operations will sort the file systems before operating on them so as not to disturb nested file systems. Please do not nest too many file systems, it is Asking For Trouble.

With this done, the file systems will be available to the system.
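The installation steps can be sketched as three tasks per file system (parameter names follow the fs stanza above; the ownership and permission handling is omitted for brevity):

```yaml
# Illustrative only: create, format, and mount each file system in the list.
- name: Create the logical volume
  community.general.lvol:
    vg: "{{ item.vg }}"
    lv: "{{ item.lvname }}"
    size: "{{ item.size }}"
  loop: "{{ local_storage.fs }}"

- name: Format the logical volume with XFS
  community.general.filesystem:
    fstype: xfs
    dev: "/dev/mapper/{{ item.vg }}-{{ item.lvname }}"
  loop: "{{ local_storage.fs }}"

- name: Mount the file system and record it in fstab
  ansible.posix.mount:
    src: "/dev/mapper/{{ item.vg }}-{{ item.lvname }}"
    path: "{{ item.mountpoint }}"
    fstype: xfs
    state: mounted
  loop: "{{ local_storage.fs }}"
```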

The ipman role

The ipman role is meant to generate a DNS and a DHCP configuration on the main server. Technical details are in the IP address management page. It also generates an /etc/hosts file containing all the machines and their aliases. This functionality is also part of the nschool role, but once the main server is up and running, we don't really want to run that role again. The ipman role takes the following steps:

  1. Upgrades Bind, DHCP, perl and python to the latest version.
  2. Opens up the DNS and DHCP firewall ports if not already done.
  3. Uses the global hostfile.j2 template to generate an /etc/hosts file.
  4. Use the NSCHOOL mkdnsserver script to generate the DNS databases and the DHCP configuration.
  5. Configures named's /etc files.
  6. Restarts named and dhcpd as needed.

This done, the Nerdhole name services will include any changes made to the inventory, such as new aliases, new machines, new MAC addresses and so on.

Top level playbooks

We have the following top level playbooks that are an official part of the NSCHOOL environment. Others may be created for tests and specific tasks.

  • Rebuild.yml (Built) - This will build an entire machine from bare metal, and install the operating system and any applications it needs to run.
  • Reconfigure.yml (Planned) - This will uninstall specific pieces of software and then re-install them using the latest configuration specified in the NSCHOOL group variables.
  • Maintenance.yml (Planned) - Will run updates on the system, download the latest packages from the OS vendor, run reports, update the NSCHOOL website and so on.

We will commonly execute these playbooks from a wrapper script named /local/nschool/bin/nschool. This script will take arguments that select the playbook to be run and pass information to it as required. When we build new applications, we will add them to the Rebuild and Reconfigure playbooks so they can be part of our standard build and maintenance cycle.

Rebuild playbook

The Rebuild.yml playbook is meant to completely rebuild one of the NSCHOOL machines. Because the normal IPA users may not be available on the target machines, it uses the builder account to log in on the machines using the well-known password, and then uses that same password again to sudo to the root account. The Rebuild playbook contains the following plays:

  • Provision a virtual machine on KVM (tags: kvmguest; targets: kvmguests) - Destructive. Removes the machine and rebuilds it from scratch.
  • Reinstall the operating system (tags: osinstall; targets: os_centos_stream_9) - Excludes the main server to avoid unpleasantness.
  • Install the basic facilities for clients (tags: basic; targets: os_centos_stream_9) - IPA client, local storage, NFS volumes.
  • Install the workstation facilities (tags: workstation; targets: workstations) - Linux productivity apps.
  • Install the boot/install server facilities (tags: bis; targets: bis) - Installs secondary boot/install servers. (1)

(1) We will allow this to reconfigure the main server's BIS later.

To run this playbook, we use the following ansible command:

  • ansible-playbook -Kk --limit=<hostnames> Rebuild.yml - from the Main server. The password requested is the builder user's password.

Reconfigure playbook

The reconfigure playbook is designed to reconfigure specific aspects of the running system without completely rebuilding it. It will usually base itself on the Ansible inventory, from which it will extract storage parameters, users, and the like. It will use all the applicable roles from the nschool collection. It is meant to be used by members of the sysadm group, who are able to gain root access using sudo. Use your own password when asked for one. The reconfigure playbook contains the following plays:

  • Install the workstation facilities (tags: workstation; targets: workstations) - Updates, printer, software.
  • Install the boot/install server facilities (tags: bis; targets: bis) - Both the main server and any additional BISs.
  • Update the DNS configuration (tags: ipman, dns, dhcp; targets: main) - Any future secondary DNS or DHCP servers to be done later.

To run this playbook, we use the following ansible command:

  • ansible-playbook -Kk --limit=<hostnames> --tags <tags> Reconfigure.yml - from the Main server. The password asked for is your own password.

Maintenance playbook

xxxToDo: Complete, then describe.