Troubleshooting ClusterWare¶
Failing PXE Network Boot¶
If a compute node fails to join the cluster when booted via PXE network boot, there are several places to look, as discussed below.
Rule out physical problems. Check for disconnected Ethernet cables, malfunctioning network equipment, etc.
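As a quick physical-layer check from the master node, you can verify that the private cluster interface reports link. The interface name eth1 is only an example here (it matches the tcpdump output later in this section) and may differ on your system:

[root@cluster ~]# ethtool eth1 | grep "Link detected"
Link detected: yes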
Check the system logs. There are several log files:
The master node’s /var/log/messages file combines rsyslog output from the master node and each compute node. The master node’s Scyld ClusterWare beoserv daemon serves as the cluster’s DHCP server, and it logs the basic PXE boot interactions with each compute node. If a compute node shows no PXE boot logging, then the beoserv daemon is not seeing the initial PXE boot or DHCP request. Verify that the master node’s private cluster network firewall is not blocking incoming requests.

If the syslog shows a compute node making repeated PXE boot requests without ever reaching the boot, error, or up state, then the Scyld ClusterWare beoclient daemon on the compute node is unable to start up the node. Commonly, beoclient is failing to load the appropriate kernel binary module for the Ethernet interface. Ensure that /etc/beowulf/config specifies a bootmodule for the Ethernet controller hardware used by that specific compute node server, and that any modarg module options are valid for that particular kernel driver. Scyld ClusterWare distributes bootmodule entries for all Penguin Computing servers. If your compute node is not a Penguin Computing server, then verify that the necessary kernel driver is named as a bootmodule.

Definitive diagnosis may require viewing the compute node’s console output, either by attaching a graphical monitor to the console port, attaching a serial cable from the compute node’s serial console output to another server and using /usr/bin/minicom to capture the output, or capturing the compute node’s serial console output using the IPMI serial console functionality.

If a compute node reaches the boot state, then examine the node’s individual /var/log/beowulf/node.N log file, where N is the node number.
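For example, to watch the beoserv DHCP/PXE activity and follow the boot log of a particular node (node 0 is assumed here; the log locations are the defaults described above):

[root@cluster ~]# grep beoserv /var/log/messages | tail
[root@cluster ~]# tail -f /var/log/beowulf/node.0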
Check for the correct DHCP server. If a node fails to appear initially (on power-up), or appears then subsequently disappears, then the node may be unable to find the master node’s DHCP server. Another DHCP server may be answering and supplying IP addresses.
To check whether the master is seeing the compute node’s DHCP requests,
or whether another server is answering, use the Linux tcpdump
utility. The following example shows a correct dialog between compute
node 0 (10.10.100.100) and the master node.
[root@cluster ~]# tcpdump -i eth1 -c 10
Listening on eth1, link-type EN10MB (Ethernet),
capture size 96 bytes
18:22:07.901571 IP master.bootpc > 255.255.255.255.bootps:
BOOTP/DHCP, Request from .0, length: 548
18:22:07.902579 IP .-1.bootps > 255.255.255.255.bootpc:
BOOTP/DHCP, Reply, length: 430
18:22:09.974536 IP master.bootpc > 255.255.255.255.bootps:
BOOTP/DHCP, Request from .0, length: 548
18:22:09.974882 IP .-1.bootps > 255.255.255.255.bootpc:
BOOTP/DHCP, Reply, length: 430
18:22:09.977268 arp who-has .-1 tell 10.10.100.100
18:22:09.977285 arp reply .-1 is-at 00:0c:29:3b:4e:50
18:22:09.977565 IP 10.10.100.100.2070 > .-1.tftp: 32 RRQ
"bootimg::loader" octet tsize 0
18:22:09.978299 IP .-1.32772 > 10.10.100.100.2070:
UDP, length 14
10 packets captured
32 packets received by filter
0 packets dropped by kernel
Check the network interface. Verify that the master node’s network interface is properly set up. Then check the network interface settings using beonetconf. Reconfigure as needed, and restart cluster services again.
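A minimal check of the interface itself, assuming eth1 is the private cluster interface as in the tcpdump example above:

[root@cluster ~]# ip link show eth1
[root@cluster ~]# ip addr show eth1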
Verify that ClusterWare services are running. Check the status of ClusterWare services by entering the following command in a terminal window:
[root@cluster ~]# systemctl status clusterware
Restart ClusterWare services from the command line using:
[root@cluster ~]# systemctl restart clusterware
Check the switch configuration. If the compute nodes fail to boot immediately on power-up but successfully boot later, the problem may lie with the configuration of a managed switch.
Some Ethernet switches delay forwarding packets for approximately one minute after link is established, attempting to verify that no network loop has been created (“spanning tree”). This delay is longer than the PXE boot timeout on some servers.
Disable the spanning tree check on the switch; the parameter is typically named “fast link enable”. See the Administrator’s Guide for more details.
Mixed Uni-Processor and SMP Cluster Nodes¶
The Scyld ClusterWare system architecture eliminates the problem of unintentionally running different versions of a program over the cluster’s compute nodes.
The cluster nodes are required to run the same kernel version, typically with the same features and optimizations enabled. Uni-processor machines can run the SMP kernel, so the best choice for a mixed cluster is to run the SMP kernel on all nodes. Beginning with CW4.1.1, support for uniprocessor kernels was dropped.
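One quick way to confirm that the master node and the up compute nodes are running the same kernel is to compare their uname output; this sketch assumes the Scyld bpsh utility is available and at least one node is in the up state:

[root@cluster ~]# uname -r
[root@cluster ~]# bpsh -a uname -r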
IP Forwarding¶
If IP forwarding is enabled in /etc/beowulf/config but is still not working, then check /etc/sysctl.conf to see if it is disabled. Check for the line “net.ipv4.ip_forward = 1”. If the value is set to 0 (zero) instead of 1, then IP forwarding will be disabled, even if it is enabled in /etc/beowulf/config.
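You can inspect and temporarily change the running value with sysctl; the persistent fix is still to correct /etc/sysctl.conf (and the /etc/beowulf/config setting) as described above:

[root@cluster ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
[root@cluster ~]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1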
SSH Traffic¶
The Red Hat installer configures the firewall with most services disabled. If SSH traffic isn’t passing through the firewall, then check your firewall settings to make sure SSH is selected as a trusted service.
To do this, log in as a root user and choose the Red Hat Applications -> System Settings -> Security Level menu option to open the Security Level Configuration window. Then make sure that SSH is checked in the list of trusted services.
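On systems that manage the firewall with firewalld rather than the graphical Security Level tool, an equivalent command-line check looks like the following (this assumes firewalld is the active firewall service):

[root@cluster ~]# firewall-cmd --list-services
[root@cluster ~]# firewall-cmd --permanent --add-service=ssh
[root@cluster ~]# firewall-cmd --reload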
Device Driver Updates¶
Scyld ClusterWare releases are tested on many different machine configurations, but it is impossible to provide device drivers for hardware unknown at release time.
Most problems with unsupported hardware or device-specific problems are resolved by updating to a newer device driver. Some devices may not yet be supported under Linux. Check with your hardware vendor.
The Scyld ClusterWare architecture makes most driver updates simple. Drivers are installed and updated on the master node exactly as with a single machine installation. The new drivers are immediately available to compute nodes, although already-loaded drivers are not replaced.
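For example, to compare the driver version installed on the master node with the version currently loaded on a compute node (the igb driver and node 0 below are placeholders only):

[root@cluster ~]# modinfo -F version igb
[root@cluster ~]# bpsh 0 cat /sys/module/igb/version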
There are two irregular device driver types that require special actions: disk drivers and network drivers, both of which apply to the compute nodes. In both cases, the driver must already be available in order to load additional drivers and programs, and such drivers are therefore packaged in the initial RAM disk images.
Another irregular instance is where drivers must execute scripts when they load; one example is Infiniband. Contact the hardware vendor or Scyld support if you have difficulty with the script that loads the driver.
Finding Further Information¶
If you encounter a problem installing your Scyld cluster and find that this Installation Guide cannot help you, the following are sources for more information:
See the Release Notes for special installation or upgrade procedures that must be taken for your particular version of ClusterWare. It is available on the master node or on the documentation CD included in the Scyld installation kit.
See the Administrator’s Guide, which includes descriptions of more advanced administration and setup options. It is available on the master node or on the documentation CD included in the Scyld installation kit.
See the Reference Guide, a complete technical reference to Scyld ClusterWare. It is available on the master node or on the documentation CD included in the Scyld installation kit.
For the most up-to-date product documentation and other helpful information about Scyld ClusterWare, visit the Scyld Customer Support website at https://www.penguincomputing.com/support and the online documentation at https://www.penguincomputing.com/support/documentation.