Configuring Kubernetes Multi-Node Cluster over AWS using Ansible

Divya Kurothe

In this article, we are going to integrate these three technologies (Ansible, AWS and Kubernetes) and see the power of automation. But before starting, if you want to know more about these technologies, you can go through my previously written blogs:

Now let’s start configuring the Kubernetes multi-node cluster over the AWS cloud. For this we need to:

  • Create an Ansible role to launch 3 AWS EC2 instances.
  • Create an Ansible role to configure Docker on those instances.
  • Create a role to configure the K8s master and worker nodes on the above-created EC2 instances using kubeadm.

In this article, I’ll first show you how to execute my pre-created roles if you want to use them, and later I’ll explain the code in case you wish to create your own roles.

You can clone the repository from GitHub using the following command:

git clone https://github.com/emxkd/k8s_cluster_role.git

Or you can install my collection from Ansible Galaxy:

ansible-galaxy collection install emxkd.k8s_cluster_collection

Make the required changes in ansible.cfg according to your file paths (i.e., inventory, roles_path and private_key_file).

In vars.yml, fill in all the AWS instance related details that are required to launch the instances.
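
As a reference, a minimal vars.yml could look like the sketch below; the variable names and example values here are assumptions and must match whatever your launch playbook expects:

# vars.yml (sketch)
region: ap-south-1
image_id: ami-0xxxxxxxxxxxxxxxx   # an Amazon Linux 2 AMI ID from your region
instance_type: t2.micro
key_name: mykeypair               # name of an existing AWS key pair
sg_name: k8s-cluster-sg           # security group the playbook will create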

At last, in the same directory, we have to create an Ansible Vault file to store the AWS IAM user access_key and secret_key (you need to have an AWS IAM user with an access key and secret key). Create the file with the name credentials.yml only, because that name is hard-coded in the playbooks. Create it using the command:

ansible-vault create credentials.yml

Inside this file, input the AWS access_key and secret_key in the following format:

access_key: <access_key>
secret_key: <secret_key>

Now the setup is ready and we only need to run the playbooks and relax…

The first playbook will create instances in AWS and we can run the playbook using the command:

ansible-playbook --ask-vault-pass ec2_create.yml

This playbook will create 3 instances (one master and two slaves) and will also tag them. Once the instances are created, it will dynamically retrieve their public IPs and put them in the ip.txt file, which ansible.cfg reads as the inventory for the further configuration of k8s.
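
For reference, the generated inventory looks roughly like the sketch below; the group names are assumptions and depend on how the playbook writes ip.txt:

# ip.txt (sketch) - rewritten automatically by ec2_create.yml
[k8s_master]
<master-public-ip>

[k8s_slave]
<slave1-public-ip>
<slave2-public-ip>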

At last, you only need to run the playbook final.yml using the command:

ansible-playbook final.yml

It will execute the two roles created to configure the k8s master node and the k8s slave nodes in AWS.

NOTE: Be patient while the playbooks are running, because it takes time to create the instances and configure k8s on them.

Here we come to the explanation part of the setup and code…

First and foremost, I’ve created a directory named “k8s_cluster_role”, inside which I’ve created a local ansible.cfg file as follows:
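
If you are building it yourself, a minimal sketch of such a configuration could look like the one below; the key-pair path is a placeholder, and the privilege-escalation section is an assumption that is commonly needed when installing packages as root over SSH:

# ansible.cfg (sketch)
[defaults]
inventory = ip.txt
roles_path = ./
remote_user = ec2-user
private_key_file = /path/to/keypair.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root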

As you can see, “private_key_file” points to the AWS key pair that Ansible needs to log in to the EC2 instances via SSH for configuring k8s. The remote user in this case will be “ec2-user”, and we also need to provide the paths for the inventory file (which is ip.txt in our case) and the roles path (the roles will again be created in the same directory).

Then I’ve written the following playbook for launching EC2 instances in AWS:
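
A condensed sketch of such a playbook is given below. It uses the classic boto-based ec2 and ec2_group modules; the variable names (region, image_id, instance_type, key_name, sg_name) and the tag values are assumptions that have to line up with your own vars.yml, and the tasks that rewrite ip.txt are only indicated by a comment:

# ec2_create.yml (sketch)
- hosts: localhost
  vars_files:
    - credentials.yml
    - vars.yml
  tasks:
    - name: Install the boto library required by the EC2 modules
      pip:
        name: boto

    - name: Create a security group for the cluster
      ec2_group:
        name: "{{ sg_name }}"
        description: Security group for the k8s cluster
        region: "{{ region }}"
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        rules:
          - proto: all
            cidr_ip: 0.0.0.0/0

    - name: Launch the master node
      ec2:
        image: "{{ image_id }}"
        instance_type: "{{ instance_type }}"
        key_name: "{{ key_name }}"
        group: "{{ sg_name }}"
        region: "{{ region }}"
        count: 1
        wait: yes
        instance_tags:
          Name: k8s-master
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
      register: master

    - name: Launch the two slave nodes
      ec2:
        image: "{{ image_id }}"
        instance_type: "{{ instance_type }}"
        key_name: "{{ key_name }}"
        group: "{{ sg_name }}"
        region: "{{ region }}"
        count: 2
        wait: yes
        instance_tags:
          Name: k8s-slave
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
      register: slaves

    # tasks that rewrite ip.txt with the new public IPs (for example using
    # lineinfile/blockinfile with master.instances and slaves.instances) go here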

This playbook uses the credentials.yml and vars.yml files:

  • I have created credentials.yml to store the AWS access_key and secret_key; I’ve already discussed how to create it in the earlier part of this article.
  • vars.yml stores all the details required for launching the instances (such as the image ID, key pair, etc.). (PS: we could also have asked the user to enter these at runtime using vars_prompt, but it would be cumbersome to do that every time we run the playbook.)

This playbook will install the boto library on the base OS and create a security group for our cluster. It will then launch the master node and two worker nodes and update their IPs in the ip.txt file. (The next time we run the playbook, if the public IPs of the instances have changed, it will also clear the old IPs and write the new ones, without creating new instances.)

Then I have created two roles in this directory, one for configuring the k8s-master node and the other for the k8s-slave nodes, using the commands:

ansible-galaxy init k8s-master
ansible-galaxy init k8s-slave

If you go inside these two created roles, you can see the following tree structure:

k8s-master/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml

(the k8s-slave role has the same structure)

To know more about roles you can refer to:

In k8s_cluster_role/k8s-master/tasks/main.yml we have written the following code:
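
Below is a condensed sketch of the steps such a tasks file performs, matching the bullet points that follow; the repository URLs, pod network CIDR, flannel manifest URL and token file name are assumptions rather than the exact values from my repo:

# k8s-master/tasks/main.yml (sketch)
- name: Configure the yum repository for Kubernetes
  yum_repository:
    name: kubernetes
    description: Kubernetes repository
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Install docker, iproute-tc and the Kubernetes tools
  package:
    name: [docker, iproute-tc, kubelet, kubeadm, kubectl]
    state: present

- name: Start and enable the docker and kubelet services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: Pull the control-plane images
  shell: kubeadm config images pull

- name: Switch the docker cgroup driver to systemd
  copy:
    dest: /etc/docker/daemon.json
    content: '{"exec-opts": ["native.cgroupdriver=systemd"]}'
  register: daemon_json

- name: Restart docker only when daemon.json changed
  service:
    name: docker
    state: restarted
  when: daemon_json.changed

- name: Initialize the master (flags are assumptions for small instances)
  shell: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCores --ignore-preflight-errors=Mem

- name: Set up kubectl for the current user
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config

- name: Install flannel to create the overlay network
  shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the worker nodes
  shell: kubeadm token create --print-join-command
  register: join_command

- name: Store the join command on the local system for the slave role
  copy:
    content: "{{ join_command.stdout }}"
    dest: token.sh
  delegate_to: localhost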

  • In the above code, I’ve used the Ansible “package” module to install docker, iproute-tc, kubectl, kubeadm and kubelet, and the “service” module to start their services.
  • Next, I’ve configured yum for Kubernetes by providing the baseurl, gpgkey and other required parameters via the yum_repository module.
  • In Ansible, when we don’t have a module for a specific requirement, we can use the command/shell module to run the commands directly. So here I have used shell to pull the required images.
  • To change the docker cgroup driver to systemd, we can use the “copy” module to write the content of the “/etc/docker/daemon.json” file, and then I have again used the “service” module to restart docker. (Note that the register keyword is used here to capture the change and restart the service only when the file has actually changed.)
  • Now the required k8s configuration is done, such as updating the k8s config file and initializing the master. Then I’ve created the .kube directory and copied the admin file into it, and later installed flannel on the Kubernetes cluster so that it creates the overlay network.
  • At last, to generate the join token on the master node I’ve used the “shell” module and then stored the token on the local system so that the slaves can join the cluster with it.

In k8s_cluster_role/k8s-slave/tasks/main.yml we have written the following code:
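
Below is a condensed sketch of such a tasks file; the common installation tasks from the master role are only indicated by a comment, and the sysctl values and the token.sh file name are assumptions:

# k8s-slave/tasks/main.yml (sketch)
# ... the yum repository, package installation, docker cgroup driver and
# service tasks are identical to the master role and are omitted here ...

- name: Set the kernel parameters required by kubeadm
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1

- name: Reload the sysctl settings
  shell: sysctl --system

- name: Copy the join command fetched from the master
  copy:
    src: token.sh
    dest: /root/token.sh

- name: Join the node to the cluster
  shell: bash /root/token.sh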

  • As you can see, the initial configuration of the slave node is the same as the master, but here we also need to change the kernel parameters.
  • Next, I’ve created a file to hold the join token that I fetched from the master to the local system earlier, so that the slave can join the k8s cluster. Lastly, we only need to run this file using the “shell” module.

At this point the roles for the master and slaves are ready, and now only a small playbook needs to be created to execute both roles.
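
A minimal sketch of that playbook is shown below; the host group names k8s_master and k8s_slave are assumptions and must match the groups written into ip.txt by the launch playbook:

# final.yml (sketch)
- hosts: k8s_master
  roles:
    - k8s-master

- hosts: k8s_slave
  roles:
    - k8s-slave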

Running this playbook will configure the k8s master on the master node in AWS and the k8s slaves on the two slave nodes.

Here is my GitHub repo for this entire code:

Here is the output of my created ansible galaxy collection:

For launching EC2 instances:

And here is my final playbook which will execute both roles:

I would like to thank Prithviraj Singh for his help and guidance throughout this task ^_^

That’s all folks. Thank you for reading :)
