Node Images and Boot Configurations

Compute Node Fields

To view various compute node fields, e.g., for node n0:

# View the full list of fields, first using long-form arguments:
scyld-nodectl -i n0 list --long-long
#  or the equivalent using shorthand arguments:
scyld-nodectl -i n0 ls -L

# View the abbreviated list of fields, instead using long-form args:
scyld-nodectl -i n0 list --long
#  or the equivalent using shorthand args:
scyld-nodectl -i n0 ls -l

The type field is currently set to "compute", although future updates to Scyld ClusterWare may add additional values.

The groups and attributes fields are described in more detail in Interacting with Compute Nodes and in the Reference Guide commands scyld-nodectl and scyld-attribctl.

Prior to a node booting, the system will inform the DHCP server of MAC-to-IP address mappings for nodes known to the system. Changes to node indices, IP, or MAC addresses may affect these mappings and will cause updates to be sent to the DHCP server within a few seconds. When a node makes a DHCP request, the DHCP server maps that node's MAC address to the correct IP and provides additional options to the booting node, including where to find the correct boot files. These boot files are linked in boot configurations stored in the database.
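
For illustration, the resulting ISC DHCP host entry has roughly the following form (hypothetical MAC, IP, and boot-file values; the actual entries are generated and maintained by ClusterWare):

host n0 {
  hardware ethernet 52:54:00:12:34:56;   # node n0's registered MAC address
  fixed-address 10.54.0.100;             # the IP mapped to that MAC
  next-server 10.54.0.10;                # head node supplying the boot files
  filename "ipxe.efi";                   # boot loader named in the boot options
}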

Compute Node IPMI Access

ipmitool is a hardware management utility that supports the Intelligent Platform Management Interface (IPMI) specification v1.5 and v2.0.

IPMI is an open standard that defines the structures and interfaces used for remote monitoring and management of a computer motherboard (baseboard). IPMI defines a micro-controller, called the "baseboard management controller" (BMC), which is accessed locally through the managed computer's bus or through an out-of-band network interface connection (NIC).

The root user can use ipmitool for a variety of tasks (example invocations appear after this list), such as:

  • Inventory a node's baseboards to determine what sensors are present

  • Monitor sensors (fan status, temperature, power supply voltages, etc.)

  • Read and display values from the Sensor Data Repository (SDR)

  • Read and set the BMC's LAN configuration

  • Remotely control chassis power

  • Display the contents of the System Event Log (SEL), which records events detected by the BMC as well as events explicitly logged by the operating system

  • Print Field Replaceable Unit (FRU) information, such as vendor ID, manufacturer, etc.

  • Configure and emulate a serial port to the baseboard using the out-of-band network connection known as serial over LAN (SOL)
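
For illustration, typical ipmitool invocations for several of these tasks follow. Run them as root on the node itself, or add "-I lanplus -H <BMC_IP> -U <USER> -P <PASS>" to target a remote BMC over the LAN:

ipmitool sdr list                # display Sensor Data Repository values
ipmitool sensor                  # monitor fans, temperatures, voltages
ipmitool lan print 1             # show the BMC's LAN configuration (channel 1)
ipmitool chassis power status    # query chassis power state
ipmitool sel list                # display the System Event Log
ipmitool fru print               # print Field Replaceable Unit information
ipmitool sol activate            # open a serial-over-LAN console (remote lanplus only)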

Several dozen companies support IPMI, including many leading manufacturers of computer hardware. You can learn more about OpenIPMI from the OpenIPMI project page at http://openipmi.sourceforge.net, which includes links to documentation and downloads.

The node's power_uri field in the database is optional and informs the head node(s) how to control the power to a given node. A plugin interface allows for different forms of power control, currently supporting IPMI for bare metal nodes, and libvirt or VirtualBox (vbox) for different types of virtual nodes. For example, a power_uri for a VirtualBox virtual node might be:

vbox://192.168.56.1/CW_Compute0

Production system compute nodes are generally bare-metal nodes that can be controlled via the ipmitool command that communicates with the node's Baseboard Management Controller (BMC) interface. For such nodes the administrator should set a power_uri with the appropriate BMC IP address and username/password access credentials, e.g.,

ipmi:///admin:password@172.45.88.1

With such a power_uri, the head node communicates with that compute node's BMC located at 172.45.88.1 using the username "admin" and password "password" to perform a scyld-nodectl power on, power off, power cycle, shutdown --hard, or reboot --hard.
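
For example, assuming scyld-nodectl accepts the same update syntax that scyld-bootctl demonstrates later in this chapter (an assumption here), the field could be set and then exercised:

# Assumed update syntax, by analogy with scyld-bootctl:
scyld-nodectl -i n0 update power_uri="ipmi:///admin:password@172.45.88.1"
# Then control the node's power through its BMC:
scyld-nodectl -i n0 power cycle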

If for any reason only a specific remote machine can execute ipmitool to control a node, then add that server name, and an optional user name and password, to the power_uri, and the local head node will ssh to that remote server and execute the ipmitool command from there. For example, the power_uri:

ipmi://remote_server/admin:password@172.45.88.1

sends the ipmitool command details to server "remote_server" for execution.

The scyld-nodectl "soft" commands, shutdown --soft and reboot --soft, do not use the power_uri; instead they ssh to the compute node to execute the local /usr/sbin/shutdown or /usr/sbin/reboot command with appropriate arguments. A simple scyld-nodectl -i <NODE> reboot (or shutdown) first attempts a "soft" action if the node is "up" and the head node can communicate with it. If the "soft" action is not possible or does not complete within a reasonable time, then scyld-nodectl resorts to a "hard" action using the power_uri connection.
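
For example:

# Attempt a graceful ssh-based reboot, falling back to a "hard" power action:
scyld-nodectl -i n0 reboot
# Skip the "soft" attempt and power-cycle through the power_uri immediately:
scyld-nodectl -i n0 reboot --hard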

Boot Configurations

The scyld-install script creates a basic boot configuration named DefaultBoot that references the initial DefaultImage and is initially associated with all compute nodes. After installation, the cluster administrator can customize that configuration and/or create additional boot configurations and compute node images.

Administrators can modify configuration fields using the scyld-bootctl tool. For example, the administrator can change the name and description of the newly created boot configuration on a freshly installed system using the update argument:

scyld-bootctl -i DefaultBoot update name="NewName" description="New description"

The kernel and initramfs can also be set using the same command, although their paths must be prefixed with @ (which signifies that what follows is a local file path), e.g.:

scyld-bootctl -i DefaultBoot update kernel=@/boot/vmlinuz-3.10.0-862.el7.x86_64

Other database objects (Nodes, Images, etc.) are modified using similarly named tools, e.g. scyld-nodectl and scyld-imgctl. Each node associates with a specific boot configuration through its _boot_config attribute. Like other attributes, this field may be inherited from an attribute group (including the global default attribute group) or set directly on the node. Details of manipulating node attributes are discussed in Interacting with Compute Nodes.

Boot configurations also contain two more fields, release and boot_style. The release field is not editable by the administrator and is populated by the system whenever the kernel file is uploaded, based on the Linux file command output. The boot_style dictates how the nodes will receive the root file system, although that can be overridden by the _boot_style attribute (see Reserved Attributes) set at the node level or in any attribute groups used by the node.

The possible values for boot_style are rwram, roram, iscsi, disked, live, next, and sanboot. The default rwram instructs the system to download the compressed image into compute node RAM, where the mount_rootfs script unpacks it during the boot process. Alternatively, when the roram option is provided, the script downloads a squashfs image into compute node RAM, combines this with a writable tmpfs via overlayfs, and boots using that combined file system. The iscsi option instructs the node to mount a read-only image via iSCSI and similarly apply a writable overlay.

The disked option allows a node with local storage to both employ a node-local persistent cache to retain downloaded images and unpack images onto a node-local partition. Using a cache avoids the need to download images at boot time, and booting from a local partition frees the RAM that would otherwise hold the compute node image. See Appendix: Booting From Local Storage Cache for details.

The live and next options are most useful when kickstarting locally installed nodes. The live option can be applied to a boot configuration that points to a repo based on an uploaded CentOS or RHEL ISO. Nodes booted live from such a configuration will use the kernel and initramfs from the ISO with an inst.repo kernel option to boot into the ISO's Anaconda-based installer. Given access to the node console, a cluster administrator can manually install to the local disk, thereby generating a kickstart file that can be used to reinstall this or similar nodes at a later time. The BIOS of such kickstarted nodes should be configured to boot from the network and then from local disk. In this configuration the next boot style should cause the compute node(s) to initially attempt to PXE boot, but then fail and try to boot their local disk. Additional details of kickstarting locally installed nodes can be found in Using Kickstart.

When booting a compute node into either a kickstart or live configuration, certain anaconda options can be provided on the command line through the cmdline field in the boot config or node. For example, if the inst.sshd option is included on the cmdline when a node uses a boot configuration made from an ISO-based repo, then the cluster administrator can log into the node during a "live" boot or during the node kickstart process. Be aware that by default there is no root password required, but it can be set in a kickstart file.

Similarly the inst.vnc anaconda argument will tell the booting node to start a VNC server that an admin can connect to in order to monitor the kickstart process or click through a manual install.
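
For example, using the update syntax shown later for boot configurations, both options might be added to a kickstart-oriented configuration's cmdline (the configuration name here is hypothetical, and note that an update replaces the entire existing cmdline):

scyld-bootctl -i KickstartBoot update cmdline="inst.sshd inst.vnc"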

See https://anaconda-installer.readthedocs.io/en/latest/boot-options.html for documentation and additional options.

Depending on BIOS details, some locally installed systems will not properly handle the next boot style and will halt instead of failing over to another boot device. In that case, the sanboot option can be used to trigger booting of the first partition of the first disk. The sanboot option behavior can be customized using the _ipxe_sanboot attribute described in Reserved Attributes.

The boot_style setting can be overridden for an individual node or a group of nodes by assigning a _boot_style attribute. Similarly, to avoid overlayfs and instead use the rwtab approach to providing write capabilities on a read-only root file system, an administrator can set a node's (or attribute group's) _boot_rw_layer attribute to rwtab.
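
For example, using the attribute syntax shown later in this chapter:

# Boot nodes n0-n3 from a read-only squashfs image with a tmpfs overlay:
scyld-nodectl -i n[0-3] set _boot_style=roram
# Use rwtab rather than overlayfs for the writable layer:
scyld-nodectl -i n[0-3] set _boot_rw_layer=rwtab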

Deleting boot configurations

Boot configurations contain only a kernel and initramfs and consume only a few tens of megabytes. Permanently delete an unwanted boot configuration xyzBoot with:

scyld-bootctl -i xyzBoot delete

PXEboot Images

An important concept is the distinction between a local image and a remote image.

The ClusterWare database retains the official copy of PXEboot images, which are termed remote images. When a compute node boots, it downloads its remote image (as specified in the boot config assigned to that node) from its parent node.

When a tool such as scyld-modimg creates or manipulates image contents, the tool manipulates a cached local version of the remote image. Each administrator's cache resides in ~/.scyldcw/workspace/. The tool first downloads a remote image into the cache if it does not already exist there. Typically a new or modified cached local image is uploaded to the database when the creation or modification is complete.

See Deleting unused images for details about how to delete local or remote images.

Creating PXEboot Images

Important

Various commands that manipulate images execute as user root, so the commands internally use sudo, and user root must have access to the administrator's workspace, which contains the administrator's images. Typically the per-user workspace is ~/.scyldcw/workspace/. If that directory is not accessible to the command executing as root, then another accessible directory can be employed, and the administrator can identify that alternative path by adding a modimg.workspace setting to ~/.scyldcw/settings.ini.
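
A minimal sketch of such a setting, assuming the dotted name maps to an ini section and key (the exact settings.ini layout and the path shown are assumptions here):

# ~/.scyldcw/settings.ini (assumed layout; path is illustrative)
[modimg]
workspace = /scratch/cw-workspace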

Note

RHEL9-clone images cannot be built by a RHEL7-clone head node.

The scyld-install script creates an initial basic image with the default name DefaultImage based on the publicly available CentOS repositories. If these repositories are not accessible, the scyld-add-boot-config tool can be run later with locally accessible repositories as described in Appendix: Creating Local Repositories without Internet. Once the DefaultImage is created, the cluster administrator can use scyld-modimg to modify it directly, though a safer approach is to use scyld-imgctl to clone the DefaultImage to a new name, and then use scyld-modimg to modify that cloned image, leaving the DefaultImage untouched. See Modifying PXEboot Images for details. The administrator can also re-create the DefaultImage. See Recreating the Default Image for details.

The administrator can also create a new image from an ISO or a network-accessible package repository. When doing that, consider the source of the components (aka packages) for that new image. A distro ties together a list of repos, i.e., package repositories, and an optional release. The package_manager is determined during image creation but can be overridden in the distro. The initial default distro is CentOS version 7 or 8 (matching the original head node's version), uses package_manager yum, and downloads packages from a one-item repos list containing "CentOS_base":

[admin@virthead]$ scyld-clusterctl distros ls -l
Distros
  CentOS
    name: CentOS
    package_manager: yum
    release: 7
    repos
      CentOS_base

[admin@virthead]$ scyld-clusterctl repos ls -l
Repos
  CentOS_base
    keys: []
    name: CentOS_base
    urls
      http://mirror.centos.org/centos/$releasever/os/$basearch/

Create a new image named "NewImg" using the default distro:

scyld-modimg --create --set-name NewImg

which downloads packages from the latest CentOS 7 yum repo.

Create a CentOS 6 distro that downloads packages from the latest CentOS 6 yum repo:

scyld-clusterctl distros create name=CentOS6 release=6 repos=CentOS_base

Note that this new "CentOS6" distro for release 6 uses the same repos value of "CentOS_base" that is used by the "CentOS" distro for release 7. This works because a distro's release value is substituted for "$releasever" in the repo's urls, and the CentOS repositories for 6 and 7 have the same subdirectory hierarchies.
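
For example, on an x86_64 system the CentOS6 distro expands the CentOS_base URL to:

http://mirror.centos.org/centos/6/os/x86_64/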

View both distros, and also see that the default remains the CentOS 7 distro:

[admin@virthead]$ scyld-clusterctl distros ls -l
Distros
  CentOS
    name: CentOS
    package_manager: yum
    release: 7
    repos
      CentOS_base

  CentOS6
    name: CentOS6
    package_manager: yum
    release: 6
    repos
      CentOS_base

[admin@virthead]$ scyld-clusterctl --get-distro
Default distro: CentOS

Create an image using this CentOS6 distro, overriding the default CentOS:

scyld-modimg --create CentOS6 --set-name CentOS6_image

or first switch the default distro to CentOS6 and do another simple create:

scyld-clusterctl --set-distro CentOS6
scyld-modimg --create --set-name CentOS6_image

Keep in mind that every subsequent simple --create command will now default to the CentOS6 distro.

To create a CentOS image that contains something other than the latest CentOS 7 or 6 release, see Appendix: Creating Arbitrary CentOS Images. To create a RHEL image, see Appendix: Creating Arbitrary RHEL Images.

Recreating the Default Image

If you wish to recreate the DefaultImage that was built by the scyld-install tool, then you must first delete the existing default attribute group, boot configuration, and image:

scyld-attribctl -i DefaultAttribs rm
scyld-bootctl   -i DefaultBoot    rm
scyld-imgctl    -i DefaultImage   rm

Then create a new default. If there are no attribute groups defined on this cluster (see Node Attributes), then:

scyld-add-boot-config --make-defaults

Otherwise the administrator should first clear the attributes.

Modifying PXEboot Images

Once you have an existing image, you can install additional RPMs into that image. As a best practice, modify DefaultImage and DefaultBoot rarely and only with great care; instead, use them as stable baselines from which to clone new images and boot configurations.

The scyld-modimg tool supports a rich collection of options. See scyld-modimg for details.

For example:

scyld-imgctl -i DefaultImage clone name=mpiImage
scyld-add-boot-config --image mpiImage --boot-config mpiBoot
scyld-modimg -i mpiImage --install openmpi3.1

Suppose you want to create a new boot config mpiAltBoot that references the same mpiImage but is otherwise different from mpiBoot. For instance, suppose you want mpiAltBoot to have a different cmdline:

scyld-bootctl -i mpiBoot clone name=mpiAltBoot

# Note that an updated cmdline replaces the entire existing cmdline,
# so examine the current cmdline:
scyld-bootctl -i mpiAltBoot ls -l | grep cmdline
# and perhaps the current cmdline is "enforcing=0", which you add to a new cmdline:
scyld-bootctl -i mpiAltBoot update cmdline="enforcing=0 console=ttyS1,115200"

You can also manually customize an image, including installing or removing RPMs and modifying configuration files, by operating on the image inside a chroot:

scyld-modimg -i mpiImage --chroot

Or combine commands, ending inside a chroot:

scyld-modimg --create --set-name mpiImage --install openmpi3.1 --chroot

If scyld-modimg --chroot detects a problem accessing or manipulating the local image, then delete the local image (see Deleting unused images) and retry the operation, which will download a fresh copy of the remote image into the cache. Alternatively, execute scyld-modimg with the added --freshen argument, which ignores the current cached local image and downloads a fresh copy.

Inside the chroot you execute as user root and can manually add, update, or remove rpms with yum (or other appropriate package manager), modify configuration files, etc. When you exit the chroot, you are asked if you want to discard or keep the changes. If you keep the changes, then you are asked whether or not you want to replace the local image, to upload the local image, and to replace the remote image.

Note

Keep in mind that several directories in the image do not get repacked and saved into the image file after an exit: among them are /tmp/, /var/tmp/, and /var/cache/yum.

If your intention is to answer yes to all the questions following your exit, then you can skip those questions by adding more arguments to the original command line:

scyld-modimg --create --set-name mpiImage --install openmpi3.1 --chroot \
                      --no-discard --overwrite --upload

You can examine the RPM contents of an image without going into a chroot by doing a simple query:

# Display the version of 'clusterware-node' in the image
scyld-modimg -i mpiImage --query clusterware-node

# Display the version of all RPMs in the image
scyld-modimg -i mpiImage --query

Finally, you must set the _boot_config attribute for specific nodes, or for all nodes, as desired to use this new boot config. For example, to have nodes n0-n15 use the mpiBoot boot config:

scyld-nodectl -i n[0-15] set _boot_config=mpiBoot

The scyld-modimg command prompts the user about whether to overwrite an existing image or create a new one, and whether to upload the resulting file to the head node, optionally overwriting the image stored on the ClusterWare head node. This tool operates on a local cache of the image and cannot be used to delete an image from the head nodes or to directly modify the name or description of an image on the head node. To modify these sorts of fields, use the scyld-imgctl tool.

Images are stored in the head node's /opt/scyld/clusterware/storage/ in cwsquash format, which consists of a squashfs image offset inside a pseudo-disk image. This format is suitable for exporting via iSCSI.

Small homogeneous clusters may use a single node image across all compute nodes, although larger clusters that include compute nodes with differing hardware will require additional customization that may not be applicable to all nodes. Although cluster administrators may find that node attributes (discussed in more detail in Interacting with Compute Nodes) and customized boot-time scripting provide adequate image customization, it may be useful (or necessary) to create additional boot configurations and root file systems that meet specific hardware and/or software needs.

Customization can involve more than adding software drivers to support node-specific hardware and adding applications and their associated software stacks. It can also involve customizing configuration files in an image to deal with a non-standard networking environment. For example, if the compute node needs to use a networking route that is not the gateway defined in the head node's /opt/scyld/clusterware-iscdhcp/dhcpd.conf.template, then the cluster administrator needs to edit that file to modify the default option routers <GATEWAY>; line, or edit the compute node image's appropriate /etc/sysconfig/network-scripts/ifcfg-* script to insert the desired GATEWAY IP address. For more details see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Editing_Network_Configuration_Files.html or documentation for your base distribution.
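
For example, a minimal ifcfg snippet inside the image might look like the following (interface name and addresses are illustrative):

# /etc/sysconfig/network-scripts/ifcfg-eth0 inside the image (illustrative values)
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
GATEWAY=10.54.0.254   # the desired non-default route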

Caching in scyld-modimg

To provide the best performance, the scyld-modimg command keeps a local cache of images within the ~/.scyldcw/workspace directory. This directory contains a manifest.json file that lists the cached images, and an image is added whenever the user downloads it. When the scyld-modimg command executes, it checks the workspace directory for images that are older than an hour and identical (matched by UID and name) to images stored by the head node. Images matching these criteria will be evicted from the cache.

When a user runs the scyld-modimg command, the tool will check the local cache for a copy of the image based on the identifier, usually the image name, provided by the user. If a match is found, that image will be unpacked for modification (to avoid a fresh download from the head node). Then, a user can modify the image in stages without losing changes in between, even if they do not upload the image to the cluster at each step.

Note that this behavior can cause unexpected conflicts in very specific circumstances. If a user modifies an image and then deletes that image from the cluster using the scyld-imgctl or scyld-bootctl commands, the local cache will still contain a copy. The next time the scyld-modimg command executes, it will see the locally cached image but will not evict it, regardless of age, because an identical image is no longer available from the head node.

If the user then creates a new image by the same name, perhaps through cloning an existing image using the scyld-imgctl command, and attempts to modify that image using scyld-modimg, they will actually be modifying the local cache due to the name match. When they attempt to upload this modified image the upload will fail because of a UID mismatch like:

ERROR: No image found for ID=d67501a26509486ebaad00827d7fac23

The simplest way to resolve this problem is to delete the locally cached image:

scyld-modimg -i <IMAGENAME> --delete

Then re-run the scyld-modimg command to modify the image. Since the local cache no longer contains an image with a matching name, the tool will download a fresh copy from the head node and record the correct UID in manifest.json. This is almost always the desired solution since the user has unknowingly modified the wrong image.

If the user actually does want to keep the changes, the simplest approach is to identify the new image UID:

scyld-imgctl --show-uids ls <IMAGENAME>

Then replace the image UID in the ~/.scyldcw/workspace/manifest.json file, rename the image file itself to the new UID, and reattempt the upload:

scyld-modimg -i <IMAGENAME> --upload

Once this command completes, the locally modified image content will overwrite the newly created remote image content, meaning that the previously made changes are preserved.

Updating the kernel in an image

Compute nodes that boot over the network download their kernel and initramfs at boot time from their parent head node, i.e., the first head node to respond to their DHCP request. The required kernel, initramfs, command line, and a reference to an image are combined within a ClusterWare boot configuration that can be assigned to the nodes.

To update kernels or other packages within an image, named Prod202404 in this example, a cluster administrator can run either:

scyld-modimg -i Prod202404 --chroot --overwrite --upload

or:

scyld-modimg -i Prod202404 --update --overwrite --upload

The first is interactive, and the administrator would need to run the dnf or yum update commands inside the chroot, whereas the second will attempt the updates but may suppress some of the errors. If the kernel inside the image is updated, then the boot configuration will also need to be updated. Assuming the newly installed kernel's version is 5.14.0-362.24.1.el9_3.x86_64 and the boot configuration is called ProdBoot202404, we can use the scyld-mkramfs command to update the boot configuration:

scyld-mkramfs --kver 5.14.0-362.24.1.el9_3.x86_64 --update ProdBoot202404

Note

Note that this new ClusterWare initramfs file is not the same as a similarly named "initramfs" file in the head node's /boot/ directory, which is associated with a kernel in that same directory. This ClusterWare initramfs file is associated with a specific image and boot config, and it contains custom ClusterWare scripts that execute at boot time.

Without the --kver <KVER> option, the tool will attempt to select the most recent kernel based on version directories found in /lib/modules within the image, so explicitly selecting the version is not required. This command will examine the boot configuration, extract the kernel from the image, use dracut within the image to build a new initramfs, extract that as well, and then upload both into the boot configuration.

Alternatively, the administrator may choose to make a new boot configuration, perhaps because they want to be able to boot different kernels within the image, or because they chose to upgrade a cloned copy of the original image. In this case the scyld-add-boot-config command can be used:

scyld-add-boot-config --image Prod202404Cloned --boot-config ProdBoot202404New

This command will use scyld-mkramfs internally to extract the kernel and generate the initramfs, then use those files to construct the new boot configuration with a default command line and a reference to the named image, Prod202404Cloned in this example.

Once the boot configuration is updated or constructed, the cluster administrator can check the release field of the boot configuration to see the kernel version. Any nodes assigned to use this boot configuration will boot using that kernel. If a node is then booted and does not use the correct kernel, confirm that the node's _boot_config attribute references the correct boot configuration.
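
For example, to check the boot configuration's kernel version and a node's assigned boot configuration (assuming these fields appear in the long listings):

scyld-bootctl -i ProdBoot202404 ls -l | grep release
scyld-nodectl -i n0 ls -l | grep _boot_config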

Updating Drivers Inside Images

ClusterWare uses images to provision compute nodes. Because of this, any drivers, applications, or libraries required to run the compute node hardware or jobs need to be available to the running compute node, not the head node(s). To assure this availability, software needs to be installed into the image or onto some form of cluster shared storage. Generally, drivers are more commonly installed into the image while applications and libraries are installed to shared storage and accessed through the module command.

When installing software into an image there are two approaches available. The most common is installing into the image via the scyld-modimg command, commonly via the --chroot option. In rare cases, some software can only be installed on a running node; in these cases, the image can be captured using the scyld-modimg --capture command.

For example, to install software via an installer script called prod-install.sh within an image named Prod202404 using the chroot method, the cluster administrator would run:

scyld-modimg -i Prod202404 --copyin prod-install.sh /root --chroot --upload --overwrite

The tool will unpack the image into a local workspace directory within the administrator's home and chroot into it after bind-mounting the necessary system paths. Once inside the chroot, the administrator will find the prod-install.sh file copied into /root and can then perform the necessary steps to install the software.

Some types of software will try to build kernel modules for the currently running kernel. Within a scyld-modimg --chroot, that may be incorrect because the current kernel is actually the host kernel, and may not match the kernel running on the booted compute node. Most installers provide some command line option to allow the administrator to specify the target kernel, but for installers that do not, the kernel version can be specified immediately after the --chroot argument:

scyld-modimg -i Prod202404 --copyin prod-install.sh /root --chroot <KVER> --upload --overwrite
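
Inside the resulting chroot, the effect can be seen directly (illustrative output):

uname -r      # reports <KVER> rather than the head node's running kernel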

Specifying the kernel version causes ClusterWare to replace the uname command inside the chroot with a wrapper that will output the specified kernel version, in place of the one detected by the actual uname command. This is usually adequate to trick even stubborn installers into using the correct kernel. In the rare case of an installer that still fails, a cluster administrator can ssh into a running node, install the software there, and then capture the file system to a new image via:

scyld-modimg --capture <NODE> --set-name <IMAGE> --chroot --upload

This command will use ssh to connect to the running node and run scripts on the node. These will copy the contents of the local file systems, unpack them into a local directory, and then chroot into that directory. Within that chroot, the administrator can make further changes before the captured image is uploaded. Note that capturing a running node does run the risk of capturing node-specific details, so installing software within the chroot is preferable.

Capturing and Importing PXEboot Images

Cluster administrators can also modify the files on a booted compute node and use the scyld-modimg --capture command to capture those changes into the image. You can capture the node into an existing image or into a new image. For example, to capture node n0:

scyld-modimg --capture n0 --set-name NewImage

This process may take several minutes. During that time the scyld-pack-node tool is executed on the compute node via the scyld-nodectl exec mechanism, and the result is streamed back to the scyld-modimg command, which then uploads it to the head node, potentially replacing existing NewImage contents. The scyld-pack-node tool captures all files on the node's / mount but does not walk other mounted file systems, ensuring that shared storage is not accidentally captured.

You also need to create a boot config for this captured image. For example:

scyld-add-boot-config --image NewImage --boot-config NewBoot

Please note that manual work will likely be required to generalize the captured image, as the process may capture details specific to the compute node. Due to this hazard, future ClusterWare releases may expand what files are excluded during image capture. Additionally, cluster administrators should confirm that the node being captured is idle to reduce the chance of capturing an image in some intermediate state.

Note that RHEL 7 clones use a version of RPM too old to properly interpret RHEL 9 packages, so a cluster administrator trying to create a RHEL 9 image on a RHEL 7 head node may choose to kickstart a diskful node and then use scyld-modimg --capture to create the image. This has been tested and works fine, although the administrator must comment out or delete the node-specific lines in /etc/fstab created during the kickstarted installation.
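
For example, inside a scyld-modimg --chroot of the captured image, the node-specific mounts could be commented out with something like the following (a sketch; adjust the patterns to the actual fstab contents):

# Comment out UUID- and device-specific entries left by the kickstart install:
sed -i -e 's/^UUID=/#UUID=/' -e 's;^/dev/;#/dev/;' /etc/fstab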

Deleting unused images

Compute node images consume significant storage space. Remote images are replicated among cooperating head nodes and are the files downloaded by PXEbooting compute nodes. A local image is a cached copy of a remote image that was downloaded when the cluster administrator viewed or modified the image. Deleting a local image does not affect its remote version and merely causes it to be re-downloaded from the head node if and when an administrator subsequently views or modifies it.

To view the list of local and remote images:

scyld-modimg ls

Delete a local cached image xyzImage with:

scyld-modimg -i xyzImage --delete

or delete all cached images:

scyld-modimg --all --delete

neither of which will delete or otherwise affect the remote images. Permanently delete an unwanted remote image with:

scyld-imgctl -i xyzImage delete

scyld-* Wrapper Scripts

Since a cluster administrator creating a new boot image commonly wants to create a corresponding boot configuration and assign that configuration to a set of nodes, the scyld-add-boot-config tool wraps scyld-modimg, scyld-mkramfs, and the appropriate scyld-*ctl tools to perform the necessary steps. The tool will also optionally display the required steps so that administrators can learn about the usage of the underlying tools.

When executed with no arguments, the scyld-add-boot-config script asks a series of questions to define the various fields of the boot configuration, image, and attribute group that are being created. Default values are provided where possible.

Important

The default kernel command line sets SELinux on the compute nodes to permissive mode.

Repos and Distros

One of the steps in the scyld-install script is to run the scyld-clusterctl tool to define a distro prior to creating the first image. The scyld-modimg tool can only create images based on defined distros. A distro associates one or more repos together with their package manager and an optional release string. If no release string is provided, then any supplied URL should not include the string "$releasever", as that variable will not be defined during image creation. On a CentOS or RHEL system the default repo and distro are created by:

scyld-clusterctl repos create name=CentOS_base \
             urls=http://mirror.centos.org/centos/\$releasever/os/\$basearch/
scyld-clusterctl distros create name=CentOS repos=CentOS_base release=7

Together with the local /etc/yum.repos.d/clusterware.repo file, this information will be used at image creation time to generate a /etc/yum.repos.d/clusterware-node.repo file for the image, containing sections that refer both to the head node's ClusterWare repository and to the distro's repos.
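
The generated file is a standard yum repo file; an illustrative (not verbatim) excerpt for the distro's repo might look like:

# /etc/yum.repos.d/clusterware-node.repo inside the image (illustrative excerpt)
[CentOS_base]
name=CentOS_base
baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=0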

A system administrator is welcome to create additional repos and distros to make node images based on different upstream sources. An administrator can provide multiple comma-separated URLs to the scyld-clusterctl repos create command, or multiple repos to the scyld-clusterctl distros create command. Distros can also be imported from existing yum repo files, e.g.:

scyld-clusterctl distros import --name CentOS7 /etc/yum.repos.d/CentOS-Base.repo

The import action will create repos based on the contents of the provided yum repo file(s) and then associate all of them with a newly created CentOS7 distro. Any string passed to --release will be saved into the distro release field and will be used by yum to replace any occurrences of "$releasever" in the repo file.

See the Reference Guide for additional details of the scyld-clusterctl repos and distros actions and of the scyld-modimg command that is used to actually create and modify images.

Using Archived Releases

Many distributions will archive individual releases after they have been superseded by a newer release, but for this discussion we will examine CentOS. The CentOS project provides packages and updates on their various mirror sites for the most recent release, i.e. 7.9.2009 as of this writing, but deprecates all previous point releases. This means that at the URL where a mirror would nominally keep the previous release, a readme file is provided explaining that the release has been deprecated and pointing users to the CentOS vault for packages. The packages located in the vault are unchanged from when they were "current". The CentOS project also deprecates the release that is two major releases back, meaning that as of the release of version 7, version 5 was deprecated. In this way there are always two currently supported versions of CentOS, the latest and the most recent of the previous major release, i.e. 7.9.2009 and 6.10 as of the time of this writing.

What this means for ClusterWare administrators is twofold. First, in order to create an image of an archived version of CentOS, an administrator must create the correct repo and distro objects in the ClusterWare database. Second, after creating an image from the vault, the administrator must manually modify the yum repo files present in the image. We will now explore these steps in more detail.

To create an image based on an archived version of CentOS, 7.3 in this example, the steps are:

scyld-clusterctl repos create name=CentOS-vault \
                 urls=http://vault.centos.org/\$releasever/os/\$basearch/
scyld-clusterctl distros create name=CentOS_7.3 repos=CentOS-vault release=7.3.1611
scyld-modimg --create CentOS_7.3 --set-name CentOS_7.3_img

The first command creates a repo called CentOS-vault pointing at the generic vault URL. The second command creates a distro that references the CentOS-vault repo and defines the release string. Once the distro exists, it can be referenced by name in the third command to actually create a new image.

Unfortunately, because the CentOS vault packages are identical to when they were the current release, the yum repo files located in the /etc/yum.repos.d/ directory will contain references to mirror.centos.org instead of vault.centos.org. The cluster administrator must manually modify these files after image creation and before running yum commands, whether directly or through the scyld-modimg --install, --uninstall, --update, or --query options. The above scyld-modimg --create command will also display an error referring back to this documentation:

[admin@virthead]$ scyld-modimg --create CentOS_7.3 --set-name CentOS_7.3_img

Executing step: Create
 Preparing the chroot...
    ...done.
 Initializing the chroot...
  elapsed: 0:01:11.4
    ...initialized.
 Installing core packages...
  elapsed: 0:00:01.0
ERROR: One or more repositories in the newly created image are invalid. This
can happen when installing older versions of Linux distributions such as CentOS.
Please consult the Administrator's Guide for more information.
WARNING: The command will be retried with unknown repositories disabled.
  elapsed: 0:02:39.9
   fixing SELinux file labels...
  ...done.
step completed in 0:04:13.6

In order to manually modify the yum repo files, an administrator can use the scyld-modimg --chroot command on an already created image as follows:

[admin@virthead]$ scyld-modimg -i CentOS_7.3_img --chroot
Checksumming image 6a8947156e08402ba2ad6e23a7642f4f
  elapsed: 0:00:01.0
Unpacking image 6a8947156e08402ba2ad6e23a7642f4f
  100.0% complete, elapsed: 0:00:29.6 (62.2% compression)
Checksumming...
  elapsed: 0:00:01.0
Executing step: Chroot
Dropping into a /bin/bash shell.  Exit when done.
[root@virthead /]# exit
exit
   fixing SELinux file labels...
(K)eep changes or (d)iscard? [kd]

When you exit the shell, the tool will confirm that you want to keep the changes made and offer to upload the modified image to head node storage.
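
For example, inside that chroot the repo files can be redirected from the mirrors to the vault (a sketch; verify which repo files actually exist in the image):

# Disable the mirrorlist lines and enable baseurl lines pointed at the vault:
sed -i 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/CentOS-*.repo
sed -i 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' /etc/yum.repos.d/CentOS-*.repo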

Using ISO Releases

Many distributions are distributed in ISO form. Use the scyld-clusterctl tool to define a repo and distro based on an ISO, then create the image with scyld-modimg. For example, for an ISO named CentOS-7-x86_64-DVD-2009.iso, first create a repo:

scyld-clusterctl repos create name=centos_7.9_iso \
                              iso=@/path/to/CentOS-7-x86_64-DVD-2009.iso

then create a distro that references the new repo:

scyld-clusterctl distros create name=centos_7.9_distro repos=centos_7.9_iso

then you can create an image using that repo and distro:

scyld-modimg --create centos_7.9_distro --set-name centos_7.9_image

When this image is booted, the ISO-based repo may not be accessible, and the /etc/yum.repos.d/clusterware-node.repo file will need to be modified to use a more permanent repo location.

Note

If the CentOS 7.9 ISO was downloaded from https://www.centos.org/centos-linux/, then that ISO contains CentOS 7.9 base distribution packages for the first release of 7.9, not packages for the latest CentOS 7.9.

Installing Software With Subscriptions

For distributions requiring subscriptions for access to updated packages, note that subscription information embedded in an image will be used by every node that boots the image, so remove it before uploading. A typical sequence, run inside the image (e.g., via scyld-modimg --chroot), is:

hostname nodeTemplate                       # temporary hostname for registration
subscription-manager register --username=$RHUSER --password=$RHPASS
subscription-manager attach --pool=$POOL_ID
yum upgrade -y
yum install $REQUIRED_PACKAGE
subscription-manager remove --all           # detach all subscriptions before upload
subscription-manager unregister             # remove registration data from the image
subscription-manager clean                  # purge cached subscription information