Kubernetes

ClusterWare administrators wanting to use Kubernetes as a container orchestration layer across their cluster can either install Kubernetes manually following directions found online, or use the scripts provided by the clusterware-kubeadm package. To use these scripts, first install the clusterware-kubeadm package on a server that is a Scyld ClusterWare head node, a locally installed ClusterWare compute node, or a separate non-ClusterWare server. Installing the control plane on a RAM-booted or otherwise ephemeral compute node is discouraged.

The provided scripts are based on the kubeadm tool and inherit both the benefits and limitations of that tool. If you prefer to use a different tool to install Kubernetes, please follow the appropriate directions available online from your chosen Kubernetes provider. The clusterware-kubeadm package is mandatory, and the clusterware-tools package is recommended:

sudo yum --enablerepo=scyld* install clusterware-kubeadm clusterware-tools

Important

For a server to function as a Kubernetes control plane, SELinux must be disabled (verify with getenforce) and swap must be turned off (verify with swapon -s, disable with swapoff -a -v).
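For example, the checks and the swap change might look like the following. Note that swapoff only lasts until the next reboot; to make both settings persistent, also set SELinux to disabled in /etc/selinux/config and remove or comment out any swap entries in /etc/fstab:

getenforce
swapon -s
sudo swapoff -a -v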

After installing the software, execute the scyld-kube tool as a cluster administrator to initialize the Kubernetes control plane. To initialize on a local server:

scyld-kube --init

Or to initialize on an existing booted ClusterWare compute node (e.g., node n0):

scyld-kube --init -i n0

Note that a ClusterWare cluster can have multiple control planes, which can be combined in a Kubernetes High Availability (HA) configuration. See Appendix: Using Kubernetes for detailed examples.

You can validate this initialization by executing:

kubectl get nodes

which should show the newly initialized control plane server.
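The output should look roughly like the following (the name, age, and version will differ, and the node may briefly report NotReady while the pod network starts):

NAME   STATUS   ROLES           AGE   VERSION
n0     Ready    control-plane   3m    v1.28.2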

Next, join one or more booted ClusterWare nodes (e.g., nodes n[1-3]) as worker nodes of this Kubernetes cluster. The full command syntax explicitly identifies the control plane node by its IP address:

scyld-kube -i n[1-3] --join --cluster <CONTROL_PLANE_IP_ADDR>

However, if the control plane is a ClusterWare compute node, then the scyld-kube --init process has already defined Kubernetes-specific attributes on that node, and a simpler syntax suffices:

scyld-kube -i n[1-3] --join

The simpler join command can find the control plane node without needing to be told its IP address, as long as exactly one compute node is functioning as a Kubernetes control plane.

Note that scyld-kube --join also accepts admin-defined group names, e.g., for a collection of nodes belonging to the kube_workers group:

scyld-kube -i %kube_workers --join --cluster <CONTROL_PLANE_IP_ADDR>

See Attribute Groups and Dynamic Groups for details.

For persistence across compute node reboots, modify the node image used by Kubernetes worker nodes (e.g., kubeimg) so that those nodes auto-join when booted. If multiple control planes are present, optionally specify the desired control plane by its IP address:

scyld-kube --image kubeimg --join
    or
scyld-kube --image kubeimg --join --cluster <CONTROL_PLANE_IP_ADDR>
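The modified image takes effect the next time a node boots it. For example, assuming the workers boot the kubeimg image and that your ClusterWare installation provides the scyld-nodectl tool, rebooting them might look like:

scyld-nodectl -i n[1-3] reboot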

After rebooting these worker nodes, you can check Kubernetes status again on the control plane node and should now see the joined worker nodes:

kubectl get nodes
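With the workers joined, the output should resemble the following (names, ages, and versions are illustrative):

NAME   STATUS   ROLES           AGE   VERSION
n0     Ready    control-plane   20m   v1.28.2
n1     Ready    <none>          2m    v1.28.2
n2     Ready    <none>          2m    v1.28.2
n3     Ready    <none>          2m    v1.28.2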

You can test Kubernetes by executing a simple job that calculates pi:

kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml

(ref: https://kubernetes.io/docs/concepts/workloads/controllers/job/)
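To confirm the job ran, you can watch it complete and then read the computed digits; this assumes the upstream example manifest still names the job pi:

kubectl get jobs
kubectl logs job/pi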

See Appendix: Using Kubernetes for detailed examples.