Securing the Cluster

This Installation & Administrator Guide section discusses cluster security issues that are exclusive to Scyld ClusterWare. We assume that the cluster administrator is familiar with security issues that are not solely related to ClusterWare, such as securing the cluster from outside access, optionally enabling various Red Hat RHEL/CentOS functionalities for logging and auditing access to nodes and storage and for managing SELinux.

Authentication

The cluster administrator authentication method is controlled in the /opt/scyld/clusterware/conf/base.ini file by the plugins.auth variable, which is initially set to "dummy". This plugin accepts any password for a known administrator and therefore provides very little security. The initial list of known administrators is stored in the same file in the auth.tmpadmins variable. The scyld-install installer will (unless passed the --no-tools argument) add the current user to that comma-separated list of user names.
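For illustration only, the relevant entries in base.ini might look like the following sketch; the exact file layout and key spellings are assumptions to be checked against your installed file, and admin1 and admin2 are placeholder user names:

plugins.auth = dummy
auth.tmpadmins = admin1, admin2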

Any administrator can add additional administrators through the scyld-adminctl command, whose arguments match those of the other scyld-*ctl commands described in the Reference Guide. We suggest that administrators add accounts for themselves through this tool and thereafter clear the auth.tmpadmins variable. That variable is only intended for use during early installation, for small experimental clusters, or when recovering from some sort of failure.

When deploying ClusterWare for production use, the plugins.auth variable should be set to "appauth". This plugin executes the command defined in the appauth.app_path variable as the root user. The default implementation of that command is provided by /opt/scyld/clusterware/bin/pam_authenticator, which interfaces with the PAM authentication system using the /etc/pam.d/cw_check_user configuration file. The contents of this file initially use local system authentication, although this can be modified to authenticate against any mechanism available through the PAM system. Please see the PAM documentation provided by your distro and the main PAM project, as well as the Red Hat documentation at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system-level_authentication_guide/pluggable_authentication_modules.
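As a sketch, switching to the appauth plugin amounts to base.ini entries along these lines (again, the exact file layout is an assumption; the app_path value shown is the default implementation mentioned above):

plugins.auth = appauth
appauth.app_path = /opt/scyld/clusterware/bin/pam_authenticator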

Administrators can provide authentication methods beyond PAM by implementing a script or application and providing it via the appauth.app_path variable. Such an application must run with no arguments, read a username and password separated by a newline from stdin, and reply with either yes or no followed by a newline on stdout. For example, a test run of pam_authenticator looks like:

[example@head ~] sudo /opt/scyld/clusterware/bin/pam_authenticator
tester
not_the_password
no
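As a sketch of that protocol, the following hypothetical shell script implements a deliberately trivial authenticator that accepts a single hard-coded account; the user name, password, and logic are illustrative only, and a real implementation should consult an actual authentication backend:

#!/bin/bash
# Hypothetical authenticator sketch: read a username and a password, each on
# its own line, from stdin and print "yes" or "no" followed by a newline.
read -r username
read -r password
if [ "$username" = "tester" ] && [ "$password" = "s3cret" ]; then
    echo yes
else
    echo no
fi

Such a script would be made executable, referenced from the appauth.app_path variable, and, like pam_authenticator, executed as root.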

Changing the Database Password

The scyld-install installation configures the ClusterWare database with a randomly generated password. This password is used when joining a new head node to the cluster and must be provided either on the command line or when prompted during the installation of the new head node. The password is stored in the database.admin_pass variable in the /opt/scyld/clusterware/conf/base.ini file. The details of changing this password depend on the specific database the cluster is using.

Important

Once this password is changed within the database, change the database.admin_pass variable in base.ini and restart the clusterware service on each head node.
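For example, after the password has been changed in the database itself, a sketch of the follow-up steps on each head node is:

sudo vi /opt/scyld/clusterware/conf/base.ini    # update the database.admin_pass entry (use any editor)
sudo systemctl restart clusterware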

Couchbase (deprecated)

Use the Couchbase console available on every head node on port 8091 to change the Couchbase password. Details of how to change this password can be found in the Couchbase documentation. See the "MANAGING CLUSTERS" section on the https://docs.couchbase.com/server/5.1/introduction/intro.html web page.

etcd

Use the etcdctl tool (provided in the clusterware-etcd package) to change the etcd password:

/opt/scyld/clusterware-etcd/bin/etcdctl user passwd root

Compute Node Remote Access

By default, remote access to compute nodes is provided through SSH using key-based authentication, although administrators may also enable password-based SSH in the compute node image by configuring a password for the root user. Every head node generates a public/private key pair and places these files in the directory /opt/scyld/clusterware/.ssh/ using the names id_rsa.clusterware and id_rsa.clusterware.pub. These keys are used by the head nodes to execute commands on the compute nodes. All head node public keys are downloaded by compute nodes at boot time by the update_keys.sh script and appended to /root/.ssh/authorized_keys, allowing any head node to execute a command on any compute node. The /opt/scyld/clusterware/.ssh/id_rsa.clusterware key can be used by system administrators as an "automation" key for tasks such as cron jobs. It is also useful in recovery situations where an administrator may need to use this private key to directly access compute nodes.
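For example, an administrator on a head node could use this key for a scripted or interactive login (the node name n0 is illustrative):

sudo ssh -i /opt/scyld/clusterware/.ssh/id_rsa.clusterware root@n0 uptime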

The same script that downloads the head node public keys also downloads the public keys attached to every cluster administrator account. These accounts are created using the scyld-adminctl tool as follows:

scyld-adminctl create name=admin keys=@~/.ssh/id_rsa.pub

This would allow anyone with the corresponding private key (id_rsa) to SSH into the root account on any compute node booted after the key was added. The key can also be provided as a literal string or updated for an existing administrator. For example:

scyld-adminctl -i admin update keys='ssh-rsa AAAAB3NzaC1yc2EAAAADA....'
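Assuming scyld-adminctl follows the same listing conventions as the other scyld-*ctl tools (see the Reference Guide for the authoritative syntax), the configured administrator accounts can be reviewed with something like:

scyld-adminctl ls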

Cluster administrators are also welcome to add SSH keys directly to compute node images in small private clusters, although adding administrator accounts with public keys simplifies management of larger clusters with multiple node images or multiple cluster administrators. Note that administrator accounts stored in the database and those listed in base.ini both use the same authentication mechanisms described in the previous section.

Important

We urge cluster administrators to create their own administrator accounts and remove their usernames from the base.ini file after cluster installation.

Compute Node Host Keys

As on most systems, the sshd daemon on a compute node uses unique host keys to identify itself to clients; however, host keys are not created during image creation. This means that each compute node will generate its own host keys during boot, and since compute node changes are discarded on reboot, a new set of keys is generated with each boot.

In an appropriately protected cluster, some administrators prefer for all compute nodes to share host keys. This can be achieved by storing host keys in the compute node image. For example, to generate host keys and repack the DefaultImage, an administrator can run:

scyld-modimg -i DefaultImage --exec sshd-keygen --overwrite --upload

All nodes that boot using this image after the change will use identical host keys, so ideally reboot the nodes so that each one boots the updated image. To remove the host keys from an image, an administrator needs to delete the /etc/ssh/ssh_host_* files from the compute node image.
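For example, assuming the --exec option accepts an arbitrary quoted command just as it accepts sshd-keygen above, the keys could be removed and the image repacked with:

scyld-modimg -i DefaultImage --exec 'rm -f /etc/ssh/ssh_host_*' --overwrite --upload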

Encrypting Communications

By default the administrative tools communicate with the head node via HTTP, although they can also use HTTPS if appropriate certificates are configured on the head node's Apache web server. Please refer to the documentation provided by your distro for how to properly enable HTTPS on the Apache server. Apache configuration files are located in /opt/scyld/clusterware/conf/httpd/. The Apache VirtualHost definition can be found in vhost.conf, and the proxy definition in that file will need to be included in the HTTPS VirtualHost.

Once HTTPS is enabled, the ~/.scyldcw/settings.ini file of any existing ClusterWare tool installation should be updated so that the protocol in the client.base_url variable uses https. It should be safe to leave HTTP enabled for localhost-only access, in which case local tool installations can continue to use the original localhost-based URL.
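As a sketch, assuming the variable appears in settings.ini as a "base_url = http://..." assignment, only the scheme needs to change, for example:

sed -i 's|= http://|= https://|' ~/.scyldcw/settings.ini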

Security-Enhanced Linux (SELinux)

Security-Enhanced Linux (SELinux) is a set of patches to the Linux kernel and various utilities that provide mandatory access control to major subsystems of a node. See https://en.wikipedia.org/wiki/Security-Enhanced_Linux for general discussion of SELinux.

ClusterWare supports SELinux on the head nodes and compute nodes.

SELinux On Compute Nodes

For Red Hat RHEL and CentOS compute nodes, the root file systems created by the scyld-modimg tool include SELinux support as part of the installation of the @core yum group. During the boot process the mount_rootfs script will, like the standard dracut-based initramfs, load the SELinux policy before switching root. Note that the default cmdline in the boot configurations created through scyld-add-boot-config (including the DefaultBoot configuration) will contain enforcing=0, thereby placing all compute nodes in SELinux "permissive" mode. Only remove this option once you have completed testing to confirm that your applications run as expected with SELinux in "enforcing" mode.

SELinux on compute nodes may be disabled in the standard ways through command line arguments or by changing the contents of the node's /etc/selinux/config configuration file. For details please refer to appropriate distro-provided documentation.
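The mode on a running compute node can be checked, and changed until the next reboot, with the standard SELinux utilities, for example:

getenforce          # reports Enforcing, Permissive, or Disabled
sudo setenforce 1   # switch to enforcing mode until the node reboots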

In addition to the default "targeted" SELinux policy provided by RHEL and CentOS, ClusterWare also supports the Multi-Level Security (MLS) policy for compute nodes. Enabling the MLS policy inside an image is done the same way as it would be done on a locally installed system. After entering the image chroot using scyld-modimg, first install the selinux-policy-mls package, and then modify the /etc/selinux/config file to reference the newly installed policy. Because the clusterware-node SELinux policy module is installed at image creation time, it may need to be re-installed after switching to the MLS policy:

semodule --install /opt/scyld/clusterware-node/clusterware-node.pp.bz2

The semodule command can also be used to check if the policy is loaded:

semodule --list | grep clusterware

When exiting the chroot, ClusterWare automatically relabels the file system based on the policy referenced in /etc/selinux/config.

Important

Fully configuring a cluster for MLS requires significant effort, including labeling objects on shared storage and defining additional policy around user workflows and tools. Please refer to your operating system documentation, as such details are beyond the scope of this document. Note that Scyld ClusterWare-provided schedulers, MPI implementations, and third-party applications may need additional custom permissions, not covered here, in order to configure a functional MLS cluster.

When creating a boot configuration for an MLS-enabled image, please be aware that the MLS policy by default does not allow the root user to log into the compute node via ssh. Because ssh is used by the ClusterWare soft power commands, please either enable root login or use the _remote_user node attribute to configure login as a user with sudo shutdown permission. Root login can be enabled through the setsebool command; the boolean is named ssh_sysadm_login.
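For example, the boolean can be enabled persistently on a node running the MLS policy with:

sudo setsebool -P ssh_sysadm_login on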

SELinux On Head Nodes

On head nodes, ClusterWare detects whether SELinux is in "enforcing" mode both at installation time and when the service runs. To switch SELinux from "enforcing" to "permissive" mode, please see the documentation for your operating system. If this switch is made while the ClusterWare service is running, restart that service:

sudo systemctl restart clusterware
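As a sketch of the standard switch itself (consult your operating system documentation for the authoritative procedure):

sudo setenforce 0                                                             # permissive immediately
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # persist across reboots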

MLS Policy On Head Nodes

For head nodes enforcing the MLS policy, the SELinux user sysadm_u should be used to install ClusterWare and run administrative tools.

To map a Linux user to the sysadm_u SELinux user, you can run:

sudo semanage login --add --seuser sysadm_u linux_user

By default, the sysadm_u user should run with the sysadm_t domain.
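The mapping and the resulting login context can be verified with standard SELinux utilities, for example:

sudo semanage login -l | grep linux_user    # confirm the Linux-to-SELinux user mapping
id -Z                                       # run after logging in as that user; the context should begin with sysadm_u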

Security Technical Implementation Guides (STIG)

STIG security hardening implements compliance with the Defense Information Systems Agency (DISA) guidelines described in the Security Technical Implementation Guides (STIGs) (https://csrc.nist.gov/glossary/term/security_technical_implementation_guide). Certain high-security clusters may require STIG compliance.

ClusterWare provides basic STIG support for kickstarted nodes. To enable it, add the following snippet to your kickstart *.ks file:

%addon org_fedora_oscap
    content-type = scap-security-guide
    profile = xccdf_org.ssgproject.content_profile_stig
%end

To configure a STIG head node, add the snippet to your kickstart config file and reboot the node using that *.ks file to enable STIG. Then install ClusterWare (see Installation and Upgrade of Scyld ClusterWare) on the STIG-enabled node in the usual way.

ClusterWare provides an example file /opt/scyld/clusterware/kickstarts/basic-stig.ks with that snippet appended for administrators who would like to kickstart infrastructure nodes or additional head nodes with that STIG applied at install time.