Appendix: Using Kubernetes
See Kubernetes under Additional Software for a brief explanation of how to install and initialize a Kubernetes cluster. This appendix provides more detailed examples.
EXAMPLE 1: Create a minimum viable Kubernetes cluster with one control plane ClusterWare node and some worker nodes.
The control plane node in a production environment should be a full-install node to provide persistence across reboots.
Step 1: Initialize the control plane node on a ClusterWare node, specifically using Kubernetes version 1.22.0 instead of defaulting to the latest version.
For this example use n0 as the control plane node:
[admin@head]$ scyld-kube -i n0 --init --version 1.22.0

Upon successful initialization, you can check Kubernetes node and pod status on the control plane:
[admin@head]$ scyld-nodectl -i n0 exec kubectl get node
NAME               STATUS   ROLES                  AGE     VERSION
n0.cluster.local   Ready    control-plane,master   2m15s   v1.22.0

[admin@head]$ scyld-nodectl -i n0 exec kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-78fcd69978-hqhw6                   1/1     Running   0          4m9s
coredns-78fcd69978-j7jcn                   1/1     Running   0          4m9s
etcd-n0.cluster.local                      1/1     Running   0          4m24s
kube-apiserver-n0.cluster.local            1/1     Running   0          4m17s
kube-controller-manager-n0.cluster.local   1/1     Running   0          4m17s
kube-flannel-ds-zpkmg                      1/1     Running   0          4m9s
kube-proxy-8wmwh                           1/1     Running   0          4m9s
kube-scheduler-n0.cluster.local            1/1     Running   0          4m17s
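If you want an additional quick sanity check beyond the node and pod listings above, you can ask the API server to report its endpoints and list pods across all namespaces. This is a minimal sketch using standard kubectl commands run through scyld-nodectl exec; the exact output will vary with your cluster's addressing:

[admin@head]$ scyld-nodectl -i n0 exec kubectl cluster-info
[admin@head]$ scyld-nodectl -i n0 exec kubectl get pods -A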
Step 2: Join worker node(s).
Join nodes n[3-4] to control plane node n0:
[admin@head]$ scyld-kube -i n[3-4] --join --version 1.22.0

If there are multiple Kubernetes clusters defined in the ClusterWare cluster, then you need to specify the control plane node:
[admin@head]$ scyld-kube -i n[3-4] --join --version 1.22.0 --cluster n0

Check Kubernetes node status to see that nodes n[3-4] are joined as workers:
[admin@head]$ scyld-nodectl -i n0 exec kubectl get node
NAME               STATUS   ROLES                  AGE   VERSION
n0.cluster.local   Ready    control-plane,master   20m   v1.22.0
n3.cluster.local   Ready    <none>                 48s   v1.22.0
n4.cluster.local   Ready    <none>                 46s   v1.22.0

Suppose nodes n[3-4] are full-install nodes. They maintain their role as Kubernetes workers after a reboot. If the nodes are instead non-persistent, i.e., they PXEboot, then their Kubernetes worker role disappears after a reboot.
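With workers attached, you may also want a quick end-to-end check that pods actually schedule onto them. The following sketch uses standard kubectl commands run through scyld-nodectl exec; the deployment name test-nginx and the nginx image are arbitrary choices for illustration, and the -o wide listing should show the pods placed on the worker nodes rather than on the control plane:

[admin@head]$ scyld-nodectl -i n0 exec kubectl create deployment test-nginx --image=nginx
[admin@head]$ scyld-nodectl -i n0 exec kubectl scale deployment test-nginx --replicas=2
[admin@head]$ scyld-nodectl -i n0 exec kubectl get pods -o wide
[admin@head]$ scyld-nodectl -i n0 exec kubectl delete deployment test-nginx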
Suppose nodes n[5-7] PXEboot with an image kubeimg. Make the worker state persistent across reboots of every node that uses kubeimg by executing the --join with an additional --image argument specifying the image:

[admin@head]$ scyld-kube --image kubeimg --join --version 1.22.0

Now you can reboot nodes n[5-7] and observe that nodes n[3-7] are all joined as Kubernetes workers:
[admin@head]$ scyld-nodectl -i n0 exec kubectl get node
NAME               STATUS   ROLES                  AGE   VERSION
n0.cluster.local   Ready    control-plane,master   40m   v1.22.0
n3.cluster.local   Ready    <none>                 20m   v1.22.0
n4.cluster.local   Ready    <none>                 20m   v1.22.0
n5.cluster.local   Ready    <none>                 76s   v1.22.0
n6.cluster.local   Ready    <none>                 85s   v1.22.0
n7.cluster.local   Ready    <none>                 89s   v1.22.0

As with the earlier --join example, if there are multiple Kubernetes clusters defined in the ClusterWare cluster, then execute the join with an additional argument identifying the specific control plane node:

[admin@head]$ scyld-kube --image kubeimg --join --version 1.22.0 --cluster n0
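If you want to confirm that the worker role really persists for the kubeimg-based nodes, reboot one of them by whatever method you normally use and then watch it drop out and rejoin from the control plane. A minimal sketch using standard kubectl commands (n5.cluster.local is just one of the PXEbooting workers from above):

[admin@head]$ scyld-nodectl -i n0 exec kubectl get node
[admin@head]$ scyld-nodectl -i n0 exec kubectl describe node n5.cluster.local

The node should briefly report NotReady during the reboot and return to Ready without any further join command.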
EXAMPLE 2: Create a High-Availability (HA) Kubernetes cluster with 3 control planes and some worker nodes.
The control plane nodes in a production High-Availability environment should all be full-install nodes to provide persistence across reboots.
Step 1: Prepare the load balancer files.
For this example use the latest Kubernetes version (e.g., 1.22.4), use node n0 as the first control plane node and master load balancer, and use nodes n1 and n2 as the other control planes and backup load balancers:
[admin@head]$ scyld-kube --prepare-lb 10.54.2.1[:4200:51:42] \
              n0:10.54.150.100,n1:10.54.150.101,n2:10.54.150.102

In the above configuration, 10.54.2.1 is an unused virtual IP address negotiated between n0, n1, and n2 within the network subnet. The optional [:4200:51:42] suffix supplies the [:APISERVER_PORT:ROUTER_ID:AUTH_PASS] values and is needed only if you want to override the defaults.
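Because the suffix is optional, the same preparation using the default APISERVER_PORT, ROUTER_ID, and AUTH_PASS values would simply omit it; a sketch:

[admin@head]$ scyld-kube --prepare-lb 10.54.2.1 \
              n0:10.54.150.100,n1:10.54.150.101,n2:10.54.150.102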
Step 2: Initialize the Kubernetes cluster on the first control plane.
Initialize node n0 with an HA variation of the --init argument:

[admin@head]$ scyld-kube -i n0 --init-ha

Upon successful initialization the stdout should contain a proposed kubeadm join command, such as:

kubeadm join 10.54.2.1:6443 --token g9h7gm.qmg18h8evp7g0701 \
    --discovery-token-ca-cert-hash \
    sha256:8a536515c02bb7f099f38be604c94a90b54d1ccec8422e8219c2680e379c9e14 \
    --control-plane --certificate-key \
    b97c9c1ca0635ffef5a531b5fff41eaa55e0b379242ac85ef8028c0a184c190

and near the end of the stdout you will see:
A non-expiring Scyld ClusterWare token: sv1tb2.qyfuu8ehrbxk3tzu is generated.

The first "--token TOKEN" in the proposed kubeadm join output expires after 24 hours, so your upcoming scyld-kube --join-ha should instead use the non-expiring ClusterWare token.
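To confirm which tokens currently exist on the control plane, including the non-expiring ClusterWare token, the standard kubeadm tooling can list them; a sketch (non-expiring tokens show <forever> in the TTL column):

[admin@head]$ scyld-nodectl -i n0 exec kubeadm token list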
The KEY expires in two hours. If needed, you can generate a new key:

[admin@head]$ scyld-nodectl -i n0 exec kubeadm init phase upload-certs \
              --upload-certs

Check the Kubernetes node and pod status on n0:
[admin@head]$ scyld-nodectl -i n0 exec kubectl get node
NAME               STATUS   ROLES                  AGE     VERSION
n0.cluster.local   Ready    control-plane,master   2m10s   v1.22.4

[admin@head]$ scyld-nodectl -i n0 exec kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-78fcd69978-kkc8x                   1/1     Running   0          2m19s
coredns-78fcd69978-rlk4d                   1/1     Running   0          2m19s
etcd-n0.cluster.local                      1/1     Running   0          2m13s
haproxy-n0.cluster.local                   1/1     Running   0          2m13s
keepalived-n0.cluster.local                1/1     Running   0          2m13s
kube-apiserver-n0.cluster.local            1/1     Running   0          2m13s
kube-controller-manager-n0.cluster.local   1/1     Running   0          2m13s
kube-flannel-ds-f97k5                      1/1     Running   0          2m18s
kube-proxy-4mzrl                           1/1     Running   0          2m18s
kube-scheduler-n0.cluster.local            1/1     Running   0          2m13s

Notice that the haproxy and keepalived pods are running. They are not running in Example 1.
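If you want to see that the keepalived-managed virtual IP is actually assigned, you can inspect the control plane's interface addresses. A minimal sketch, assuming 10.54.2.1 is the virtual IP chosen in Step 1 (at this point only n0 is a control plane, so it should hold the address):

[admin@head]$ scyld-nodectl -i n0 exec ip -4 addr show | grep 10.54.2.1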
Step 3: Join the other control plane nodes to the first control plane.
Join control plane nodes n[1-2] to the first control plane n0 using the "certificate-key" KEY discussed above:
[admin@head]$ scyld-kube -i n[1-2] --join-ha --certificate-key KEY

As before, if there are multiple Kubernetes clusters in the ClusterWare cluster, then execute the --join-ha command with the additional --cluster n0 argument to identify the first control plane.

Now check the status:
[admin@head]$ scyld-nodectl -i n0 exec kubectl get node
NAME               STATUS   ROLES                  AGE   VERSION
n0.cluster.local   Ready    control-plane,master   23m   v1.22.4
n1.cluster.local   Ready    control-plane,master   14m   v1.22.4
n2.cluster.local   Ready    control-plane,master   14m   v1.22.4

[admin@head]$ scyld-nodectl -i n0 exec kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
coredns-78fcd69978-kkc8x                   1/1     Running   0             22m
coredns-78fcd69978-rlk4d                   1/1     Running   0             22m
etcd-n0.cluster.local                      1/1     Running   0             22m
etcd-n1.cluster.local                      1/1     Running   0             13m
etcd-n2.cluster.local                      1/1     Running   0             13m
haproxy-n0.cluster.local                   1/1     Running   0             22m
haproxy-n1.cluster.local                   1/1     Running   0             13m
haproxy-n2.cluster.local                   1/1     Running   0             13m
keepalived-n0.cluster.local                1/1     Running   0             22m
keepalived-n1.cluster.local                1/1     Running   0             13m
keepalived-n2.cluster.local                1/1     Running   0             13m
kube-apiserver-n0.cluster.local            1/1     Running   0             22m
kube-apiserver-n1.cluster.local            1/1     Running   0             13m
kube-apiserver-n2.cluster.local            1/1     Running   0             13m
kube-controller-manager-n0.cluster.local   1/1     Running   1 (13m ago)   22m
kube-controller-manager-n1.cluster.local   1/1     Running   0             13m
kube-controller-manager-n2.cluster.local   1/1     Running   0             13m
kube-flannel-ds-262pd                      1/1     Running   0             13m
kube-flannel-ds-b5scg                      1/1     Running   0             13m
kube-flannel-ds-f97k5                      1/1     Running   0             22m
kube-proxy-2swbv                           1/1     Running   0             13m
kube-proxy-4mzrl                           1/1     Running   0             22m
kube-proxy-ktlc9                           1/1     Running   0             13m
kube-scheduler-n0.cluster.local            1/1     Running   1 (13m ago)   22m
kube-scheduler-n1.cluster.local            1/1     Running   0             13m
kube-scheduler-n2.cluster.local            1/1     Running   0             13m
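Rather than listing everything in kube-system, you can narrow the view to the components that provide the high availability. A sketch using the pod labels that kubeadm applies to its static pods (component=etcd, component=kube-apiserver); each listing should show one pod per control plane node:

[admin@head]$ scyld-nodectl -i n0 exec kubectl get pod -n kube-system \
              -l component=etcd -o wide
[admin@head]$ scyld-nodectl -i n0 exec kubectl get pod -n kube-system \
              -l component=kube-apiserver -o wide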
Step 4: Join worker node(s).

Join the worker node(s) to the first control plane node, or modify the workers' node image, using the same commands as in Step 2 of Example 1.
EXAMPLE 3: Create a High-Availability (HA) Kubernetes cluster with 3 non-ClusterWare control plane servers and some ClusterWare worker nodes.
For this example use a server named kube-1 as the first control plane node and master load balancer, and servers kube-2 and kube-3 as the other control planes and backup load balancers. These servers must be running a full RHEL/CentOS distribution, must be connected to the ClusterWare private cluster network, and must know the names of the other servers and of the prospective ClusterWare nodes that will be used as Kubernetes workers.
Step 1: Initialize the Kubernetes HA cluster on the first control plane.
Install the clusterware-kubeadm RPM on the first control plane, either by executing yum install clusterware-kubeadm from the ClusterWare repo, or by downloading the RPM from the repo and installing it manually with rpm -i. Then prepare the load balancer files:

[root@kube-1]$ scyld-kube --prepare-lb 10.54.2.1[:4200:51:42] \
               kube-1:10.54.150.200,kube-2:10.54.150.201,kube-3:10.54.150.202

In the above configuration, 10.54.2.1 is an unused virtual IP address negotiated between kube-1, kube-2, and kube-3 within the network subnet. The optional [:4200:51:42] suffix supplies the [:APISERVER_PORT:ROUTER_ID:AUTH_PASS] values and is needed only if you want to override the defaults.

Now initialize the Kubernetes cluster on the first control plane:
[root@kube-1]$ scyld-kube --init-ha

Upon successful initialization the stdout should contain a proposed kubeadm join command, such as:

kubeadm join 10.54.2.1:6443 --token g9h7gm.qmg18h8evp7g0701 \
    --discovery-token-ca-cert-hash \
    sha256:8a536515c02bb7f099f38be604c94a90b54d1ccec8422e8219c2680e379c9e14 \
    --control-plane --certificate-key \
    b97c9c1ca0635ffef5a531b5fff41eaa55e0b379242ac85ef8028c0a184c190

and near the end of the stdout you will see:

A non-expiring Scyld ClusterWare token: sv1tb2.qyfuu8ehrbxk3tzu is generated.

The first "--token TOKEN" in the proposed kubeadm join output expires after 24 hours, so your upcoming scyld-kube --join-ha should instead use the non-expiring ClusterWare token. The KEY expires in two hours. If needed, you can generate a new key with:
[root@kube-1]$ kubeadm init phase upload-certs --upload-certs

You will need these values on the other control planes kube-2 and kube-3 and the worker node(s) in order to perform their joins to kube-1.
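If you later lose track of the TOKEN and CAHASH values, the standard kubeadm tooling on kube-1 can reprint them. A sketch; note that kubeadm token create issues a new (expiring) token along with the sha256 discovery hash, while the non-expiring ClusterWare token shown above can still be used instead:

[root@kube-1]$ kubeadm token list
[root@kube-1]$ kubeadm token create --print-join-command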
Step 2: Initialize and join the other control plane nodes.
For each of the other control plane nodes, install the clusterware-kubeadm package on the server, prepare the load balancer files in the same manner as on the first control plane, then join the server to the first. For example, for kube-2:
[root@kube-2]$ scyld-kube --prepare-lb 10.54.2.1[:4200:51:42] \
               kube-1:10.54.150.200,kube-2:10.54.150.201,kube-3:10.54.150.202
[root@kube-2]$ scyld-kube --join-ha --cluster 10.54.150.200 --token TOKEN \
               --cahash CAHASH --certificate-key KEY
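Once kube-2 and kube-3 have joined, you can verify from kube-1 that all three servers appear as control planes. This sketch assumes kubectl on kube-1 reads the standard kubeadm admin kubeconfig at /etc/kubernetes/admin.conf:

[root@kube-1]$ kubectl --kubeconfig /etc/kubernetes/admin.conf get node
[root@kube-1]$ kubectl --kubeconfig /etc/kubernetes/admin.conf get pod -n kube-system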
Step 3: Join the worker nodes to the first control plane node.
On the ClusterWare head node, install the clusterware-kubeadm package and join the desired ClusterWare nodes as workers to the first control plane node. For example, for full-install nodes n[3-4]:
[admin@head]$ scyld-kube -i n[3-4] --join --cluster 10.54.150.200 \
              --token TOKEN --cahash CAHASH

or for PXEbooting nodes n[5-7] that use image kubeimg:
[admin@head]$ scyld-kube --image kubeimg --join --cluster 10.54.150.200 \
              --token TOKEN --cahash CAHASH
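As a final check, the ClusterWare workers should appear from any of the control plane servers once they have joined; a brief sketch from kube-1, again assuming the kubeadm admin kubeconfig:

[root@kube-1]$ kubectl --kubeconfig /etc/kubernetes/admin.conf get node -o wide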