Required and Recommended Components
Scyld ClusterWare head nodes are expected to use x86_64 processors running a Red Hat RHEL, CentOS, or similar distribution. See Supported Distributions and Features for specifics.
Important
ClusterWare head nodes currently require a Red Hat RHEL or CentOS 7.6 (or later) base distribution environment due to dependencies on newer libvirt and selinux packages. This requirement only applies to head nodes, not compute nodes.
Important
By design, ClusterWare compute nodes handle DHCP responses on the private cluster network (bootnet) by employing the base distribution's facilities, including NetworkManager. If your cluster installs a network file system or other software that disables this base distribution functionality, then dhclient or custom static IP addresses, and potentially additional workarounds, must be configured; one possible approach is sketched below.
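For illustration only, a minimal sketch of those two workarounds, assuming the compute node's private cluster interface is named eno1 and that 10.54.0.100/24 is an unused address on bootnet (both are assumptions, not ClusterWare defaults); in practice such commands would typically be applied inside the compute node image or by a boot-time script:

    # Request a DHCP lease manually with dhclient on the private interface:
    dhclient eno1

    # ...or assign a custom static IP address instead:
    ip link set eno1 up
    ip addr add 10.54.0.100/24 dev eno1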
ClusterWare head nodes should ideally be "lightweight" for simplicity and contain only software that is needed for the local cluster configuration. Non-root users typically do not have direct access to head nodes and do not execute applications on head nodes.
Head node components for a production cluster:
x86_64 processor(s) are required, with a minimum of four cores recommended.
8GB RAM (minimum) is recommended.
100GB storage (minimum) is recommended.
The largest storage consumption contains packed images, uploaded ISOs, et al. Its location is set in the file /opt/scyld/clusterware/conf/base.ini and defaults to /opt/scyld/clusterware/storage/.
The directory /opt/scyld/clusterware/git/cache/ consumes storage roughly the size of the git repos hosted by the system.
Other than the above storage/ and cache/, the directory /opt/scyld/ consumes roughly 300MB.
Each administrator's ~/.scyldcw/workspace/ directory contains unpacked images that have been downloaded by an administrator for modification or viewing. (A sketch for checking usage of these locations appears after this list.)
One Ethernet controller (required) that connects to the private cluster network which interconnects the head node(s) with all compute nodes.
A second Ethernet controller (recommended) that connects a head node to the Internet.
Multiple Ethernet or other high-performance network controllers (e.g., InfiniBand, Omni-Path) are common on the compute nodes, but do not need to be accessible by the head node(s).
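As noted in the storage items above, usage of the default locations can be checked with standard tools. This is a sketch only; the paths are the defaults mentioned above, and ~/.scyldcw/workspace/ exists per administrator account:

    # Summarize storage consumed by the default ClusterWare locations:
    du -sh /opt/scyld/clusterware/storage/ \
           /opt/scyld/clusterware/git/cache/ \
           /opt/scyld/ \
           ~/.scyldcw/workspace/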
We recommend employing virtual machines, hosted by "bare metal" hypervisors, for head nodes, login nodes, job scheduler servers, etc., for ease of management. Virtual machines are easy to resize and easy to migrate between hypervisors. See https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/ for basic Red Hat documentation.
Note
A bare metal hypervisor host must provide at least the aggregate of the resources required by the virtual servers it hosts, and ideally the aggregate of the recommended resources, plus several additional CPUs/cores and additional RAM devoted to the hypervisor functionality itself.
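For example (the sizing figures here are illustrative assumptions, not requirements): a hypervisor hosting two head node virtual machines, each sized at the recommended 4 cores and 8GB RAM, should provide at least 2 x 4 = 8 cores and 2 x 8GB = 16GB RAM for the guests, plus several more cores and several more GB of RAM reserved for the hypervisor itself, e.g., 10 or more cores and 20GB or more of RAM in total.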
Note
The nmcli connection add tool can be used to create network bridges and to slave physical interfaces to those newly created bridges. Once appropriate bridges exist, the virt-install command can attach the virtual interfaces to the bridges, so that the created virtual machines exist on the same networks as the physical interfaces on the hypervisor.
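A minimal sketch of that workflow, assuming the hypervisor's physical interface on the private cluster network is eno1 and the new bridge is named br0; the VM name, sizing, and install media below are placeholder values to adjust for your site:

    # Create a bridge and slave the physical private-network interface to it:
    nmcli connection add type bridge ifname br0 con-name br0
    nmcli connection add type bridge-slave ifname eno1 master br0
    nmcli connection up br0

    # Create a head node virtual machine whose virtual NIC attaches to that bridge:
    virt-install --name head1 --vcpus 4 --memory 8192 \
        --disk size=100 --network bridge=br0 \
        --cdrom /var/lib/libvirt/images/rhel-server-7.6-x86_64-dvd.iso \
        --os-variant rhel7.6

Repeat the bridging for the public-facing interface if the virtual machine should also reach the Internet.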
A High Availability ("HA") cluster requires a minimum of three "production" head nodes, each a virtual machine hosted on a different bare metal hypervisor. Even if an HA cluster is not required, we recommend a minimum of two head nodes: one functioning as the production head node, and the other as a development head node that can be used to test software updates and configuration changes before applying the validated updates to the production head node.
Compute nodes are generally bare metal servers for optimal performance. See Supported Distributions and Features for a list of supported distributions.
See Cluster Architecture Overview for more details.