Recommended Components¶
Hardware selection for a ClusterWare system is driven primarily by price/performance considerations. We recommend the components listed below:
Processors. 64-bit Intel® or AMD™ x86_64 architecture required; single-core or multi-core
Architecture. One or more sockets per motherboard
Physical Memory. 4096 MBytes (4 GBytes) or more preferred, minimum 2048 MBytes (2 GBytes)
Operating System. Red Hat Enterprise Linux 7 (RHEL7) or CentOS7 required
The Scyld ClusterWare Release Notes state the specific version and update of Red Hat or CentOS required to support the ClusterWare release you are installing.
Network Interface Controllers (NIC). Gigabit Ethernet (Fast Ethernet at a minimum) PCI-X or PCI-Express adapters (with existing Linux driver support) in each node for the internal private IP network.
The master node typically employs an additional NIC for connecting the cluster to the external network. Select this NIC to match the external network infrastructure (e.g., Fast Ethernet if the external network to which you are connecting the cluster is Fast Ethernet).
Network Switch. The master node private network NIC and all compute nodes should be connected to a non-blocking Gigabit Ethernet switch for the internal private network. At a minimum, the network switch should match the speed of the network cards.
The switch is a critical component for correct operation and performance of the cluster. In particular, the switch must be able to handle all network traffic over the private interconnect, including cluster management traffic, process migration, library transfer, and storage traffic. It must also properly handle DHCP and PXE.
Tip
It is sometimes confusing to identify which NIC is connected to the private network. Take care to connect the master node to the private switch through a NIC whose speed is equal to or greater than that of the NICs in the compute nodes.
Disk Drives. For the master node, we recommend using either Serial ATA (SATA) or SCSI disks in a RAID 1 (mirrored) configuration. The operating system on the master node requires approximately 3 GB of disk space. We recommend configuring the compute nodes without local disks (disk-less).
If local disks are required on the compute nodes, we recommend using them for storing data that can be easily re-created, such as scratch storage or local copies of globally-available data.
In the default configuration, /home on the master node is exported to the compute nodes; other file systems may be exported as well. After installing Scyld ClusterWare, see the file /etc/beowulf/fstab for the full list of default mounts for compute nodes. If you expect heavy file system traffic, we recommend providing a second pair of disks in a RAID 1 (mirrored) configuration for these exported file systems. Otherwise, accesses to the exported file systems can interfere with the master node’s access to its own system files, impairing the master node’s ability to launch new processes and manage the cluster.
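For orientation only, an NFS mount of the exported /home could be described by a standard fstab-style entry such as the hypothetical example below; the actual device names, mount points, and options used on your cluster are those listed in the installed /etc/beowulf/fstab.

    # Hypothetical example of a standard fstab-style NFS entry for /home.
    # "master" is a placeholder for however your installation identifies the
    # master node; see the installed /etc/beowulf/fstab for the real defaults.
    master:/home    /home    nfs    defaults    0 0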
Optional Hardware Components. Gigabit Ethernet with a non-blocking switch serves most users. However, some applications benefit from a lower-latency interconnect.
InfiniBand is an industry-standard interconnect providing low-latency messaging, IP, and storage support. InfiniBand can be configured as a single universal fabric serving all of the cluster’s interconnect needs.
More information about InfiniBand may be found at the InfiniBand Trade Association web site, http://www.infinibandta.org. ClusterWare supports InfiniBand as a supplemental messaging interconnect, in addition to the Ethernet network used for cluster control communications.
Assembling the Cluster¶
The full Scyld ClusterWare Cluster Virtualization Software and the underlying Linux operating system are installed only on the master node.
Most recent hardware supports network boot (PXE boot), which ClusterWare requires for booting the compute nodes.
Software Components¶
The following are integral components of Scyld ClusterWare:
beostatus: A graphical utility for monitoring the status of a ClusterWare cluster.
Scyld ClusterWare: Allows processes to be started on compute nodes in the cluster and tracked in the process table on the master node. Scyld ClusterWare also provides process migration mechanisms to help in creating remote processes, and removes the need for most binaries on the remote nodes.
MPICH2, MVAPICH2, and OpenMPI: Message Passing Interface (MPI) implementations, customized to work with Scyld ClusterWare (a minimal example program is shown below).
For more detailed information on these software components, see the Administrator’s Guide and the User’s Guide.
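As a simple illustration of the MPI implementations listed above, the following is a minimal MPI program written in C. It is generic MPI code shown here for orientation only and is not specific to Scyld ClusterWare; consult the User’s Guide for the build and launch procedures appropriate to your installation.

    /* hello_mpi.c - minimal MPI example (generic MPI, shown for illustration) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down MPI cleanly */
        return 0;
    }

With any of the supplied MPI implementations, such a program is typically compiled with the mpicc wrapper and launched with mpirun, although the exact commands and options depend on which MPI implementation you have selected.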