Appendix: Using Ansible

A compute node can be configured to execute an Ansible playbook at boot time or after the node is up. In the following example, the cluster administrator creates a git repository hosted by the ClusterWare head nodes, adds an extremely simple Ansible playbook to that git repository, and assigns a compute node to execute that playbook.

Install the clusterware-ansible package into the image (or images) that you want to support execution of an Ansible playbook:

scyld-modimg -i DefaultImage --install clusterware-ansible --upload --overwrite

The administrator should amend their PATH variable to include the git binaries that are provided as part of the clusterware package in /opt/scyld/clusterware/git/. This is not strictly necessary, though the git in that subdirectory is version 2.39.1 and is significantly more recent than the version normally provided by an el7 base distribution:

export PATH=/opt/scyld/clusterware/git/bin:${PATH}

The administrator should add their own personal public key to their ClusterWare admin account. This key will be populated into user root's (or _remote_user's) authorized_keys file for a newly booted compute node. See Compute Node Remote Access for details. In addition, this provides simple SSH access to the git repository:

scyld-adminctl up keys=@/full/path/.ssh/id_rsa.pub

Adding the localhost's host keys to a personal known_hosts file is not strictly necessary, though it will avoid an SSH warning that can interrupt scripting:

ssh-keyscan localhost >> ~/.ssh/known_hosts

Now create a ClusterWare git repository called "ansible". This repository will default to public, meaning it is accessible read-only via unauthenticated HTTP access to the head nodes, and therefore should not contain sensitive passwords or keys in unprotected form:

scyld-clusterctl gitrepos create name=ansible

Note that being unauthenticated means the HTTP access mechanism does not allow git push or other write operations. Alternatively, the repository can be marked private (public=False), although it then cannot be used for a client's ansible-pull.
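
For example, assuming the public setting can be supplied using the same key=value syntax shown above at creation time, a private repository (here named "secrets" purely as a hypothetical example) could be created with:

scyld-clusterctl gitrepos create name=secrets public=False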

Initially the repository contains a placeholder text file that can safely be deleted or ignored.

Now clone the git repo over an SSH connection to localhost:

git clone cwgit@localhost:ansible

The administrator could also create that clone on any machine that has the appropriate private key and can reach the SSH port of a head node.
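
For example, from a workstation holding the matching private key, where head1 is a hypothetical head node hostname:

git clone cwgit@head1:ansible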

Finally, create a simple Ansible playbook to demonstrate the functionality:

cat >ansible/HelloWorld.yaml <<EOF
---
- name: This is a hello-world example
  hosts: n*.cluster.local
  tasks:
    - name: Create a file called '/tmp/testfile.txt' containing 'hello world'
      copy:
        content: hello world
        dest: /tmp/testfile.txt
EOF

and add that playbook to the "ansible" git repo:

bash -c "\
  cd ansible; \
  git add HelloWorld.yaml; \
  git -c user.name=Test -c user.email='<test@test.test>' \
         commit --message 'Adding a test playbook' HelloWorld.yaml; \
  git push; \
"

Multiple playbooks can co-exist in the git repo.

In a cluster with multiple head nodes, an updated git repository is replicated to the other head nodes, so an ansible-pull from any client against any head node will see the same playbook and the same commit history. This replication can take several seconds to complete.

With the playbook now available in the git repo, configure the compute node to execute ansible-pull to download it at boot time:

scyld-nodectl -i n1 set _ansible_pull=git:ansible/HelloWorld.yaml

Alternatively, to download the playbook from an external git repository on the server named gitserver:

scyld-nodectl -i n1 set _ansible_pull=http://gitserver//HelloWorld.yaml

Either format can optionally end with "@<gitrev>", where <gitrev> is a specific commit, tag, or branch in the target git repo.
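
For example, to pin node n1 to a hypothetical tag named v1 in the ClusterWare-hosted repo:

scyld-nodectl -i n1 set _ansible_pull=git:ansible/HelloWorld.yaml@v1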

Use the _ansible_pull_args attribute to specify any additional arguments to pass when executing the _ansible_pull playbook.
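
For example, assuming the attribute value is passed through verbatim to the underlying ansible-pull invocation, extra variables could be supplied with:

scyld-nodectl -i n1 set _ansible_pull_args='--extra-vars "greeting=hello"'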

You may now reboot the node and wait for it to boot to an up status after the playbook has executed:

scyld-nodectl -i n1 reboot
scyld-nodectl -i n1 waitfor up

You can verify that the HelloWorld.yaml playbook executed:

scyld-nodectl -i n1 exec cat /tmp/testfile.txt ; echo

Note that during playbook execution the node remains in the booting status, changing to an up status after the playbook completes, assuming the playbook is not fatal to the node. For a lengthy playbook, that status may time out to down (with no ill effect) before switching to up once the playbook finishes. Administrators are advised to log Ansible progress to a known location on the booting node, such as /var/log/ansible.log.
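
One way to do this, assuming the node's ansible-pull run honors a standard ansible.cfg at the root of the checked-out repository, is to commit such a file alongside the playbooks:

cat >ansible/ansible.cfg <<EOF
[defaults]
log_path = /var/log/ansible.log
EOF

Add, commit, and push this file in the same way as the playbook.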

The clusterware-ansible package supports another attribute, _ansible_pull_now, which uses the same syntax as _ansible_pull. Prior to first use, the administrator must enable the cw-ansible-pull-now service inside the chroot image:

systemctl enable cw-ansible-pull-now

and then on a running compute node, start the service:

systemctl start cw-ansible-pull-now

When the attribute is present and the service has been enabled and started, the node will download and execute the playbook during its next status update event; these events occur every 10 seconds by default. Once the node completes execution of the playbook, it directs the head node to prepend "done" to the _ansible_pull_now attribute so that the playbook does not run again.
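
For example, to trigger the HelloWorld.yaml playbook on the already-running node n1, using the same syntax as _ansible_pull:

scyld-nodectl -i n1 set _ansible_pull_now=git:ansible/HelloWorld.yaml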