Setting up your on-premise K8s cluster through CloudFormation

Aug 11, 2022

In this post we're going to build a 2 node (educational purpose) cluster using kubeadm.
The destination of this journey is launching kubeadm init/join and installing the network plugin; from there on I'll redirect you to the official documentation.

The CloudFormation template we're building could also come in handy for automation, so if you want just that, here it is.


Shopping list:

  • 2 AWS EC2 instances as nodes.
  • CRI-O as the container runtime.
    From k8s 1.24 I wouldn't recommend using Docker, because you'd also need to install a shim to make it compatible with the CRI; also, CRI-O's default cgroup driver is systemd, which matches kubelet's default cgroup driver, so we don't need to change any configuration in this regard.
  • Calico for the networking.
  • The latest Kubernetes and kubeadm (at the time of writing we're on 1.24.3).

Nothing fancy; the added value of this post is that we're going to automate as much as possible using CloudFormation for IaC.


The VMs

The VM type is going to be a t3.medium running Ubuntu 22.04.
I prefer going with Ubuntu because it gives wide compatibility with k8s and it comes with a lot of other useful goodies already installed; that said, for a production scenario I'd probably start from a slimmer distro.
In AWS the AMI ID is unique per region; the default one you'll find in the template is for the eu-north-1 region.
NB: if you're going to launch the stack somewhere else you'll need to look this up.
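
If you want to look it up from the CLI, something like the following should work; this is just a sketch, and the region is the part you'd adapt (Canonical's AWS account ID, 099720109477, is the documented owner of the official Ubuntu images):

  # Find the latest official Ubuntu 22.04 (jammy) AMI for a given region
  aws ec2 describe-images --region eu-north-1 \
      --owners 099720109477 \
      --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" \
      --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
      --output text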

For the SecurityGroup we're going to configure an all-open group to host the VMs; if you want to tweak this part you can check the ports needed in a k8s cluster.
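
For reference, a hedged sketch of what tightening the ingress could look like; the group ID and source CIDR are hypothetical, and the ports are the ones the k8s docs list for a control plane node, plus Calico's BGP port:

  # Hypothetical tightening of the group instead of leaving it all open
  SG=sg-0123456789abcdef0   # your security group id
  # 6443 kube-apiserver, 2379-2380 etcd, 10250 kubelet,
  # 10257 controller-manager, 10259 scheduler, 179 Calico BGP
  for P in 6443 2379-2380 10250 10257 10259 179; do
      aws ec2 authorize-security-group-ingress --group-id "$SG" \
          --protocol tcp --port "$P" --cidr 10.0.0.0/16
  done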

To access the machines we've added an SSH key; in case you don't have a public key loaded in AWS, refer to this guide.
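
If you already have a key pair locally, importing the public half is a one-liner (the key name and path here are just examples):

  aws ec2 import-key-pair --key-name k8sKey \
      --public-key-material fileb://~/.ssh/id_rsa.pub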

K8s tools install

During EC2 startup we can run the bash commands we want, as soon as the machine is available, by dropping them in the UserData property of the EC2 instance.
The field inside the template is named UserData; here is an overview of what we're doing (a condensed sketch of the script follows the list):

  • General update.
  • Install the keyring certificates for Kubernetes and CRI-O.
  • Install kubeadm, CRI-O, kubelet, kubectl and friends.
  • Add the control plane's network IP to the hosts file (so we can reference it by name instead of IP, useful in a multi control plane scenario); we also do this on the worker node.
  • Tweak the CRI-O configuration (use the recommended pause container).
  • Enable some kernel modules and parameters needed by CRI-O (adding configuration to make it persistent across reboots).
  • Start and enable the CRI-O daemon.
  • Launch kubeadm with the essential configuration:
    kubeadm init --control-plane-endpoint k8scp:6443 --cri-socket unix:///var/run/crio/crio.sock --pod-network-cidr 192.168.0.0/16.
  • And finally add Calico.
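
To make the list concrete, here is a condensed sketch of the control plane UserData. It mirrors what the template does but is trimmed for readability; the repository URLs, the pause image tag and the Calico manifest URL are the ones I believe were current at the time of writing, so treat them as assumptions to verify against the template:

  #!/bin/bash
  set -e
  OS=xUbuntu_22.04
  CRIO_VERSION=1.24

  # General update
  apt-get update && apt-get upgrade -y

  # Keyrings and apt repositories for Kubernetes and CRI-O
  curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" \
      > /etc/apt/sources.list.d/kubernetes.list
  curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key \
      | gpg --dearmor -o /usr/share/keyrings/libcontainers.gpg
  curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/Release.key \
      | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio.gpg
  echo "deb [signed-by=/usr/share/keyrings/libcontainers.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" \
      > /etc/apt/sources.list.d/libcontainers.list
  echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" \
      > /etc/apt/sources.list.d/crio.list

  apt-get update
  apt-get install -y cri-o cri-o-runc kubelet kubeadm kubectl
  apt-mark hold kubelet kubeadm kubectl

  # Reference the control plane by name; on the worker the template
  # substitutes the CP's private IP here instead
  echo "$(hostname -I | awk '{print $1}') k8scp" >> /etc/hosts

  # Recommended pause container via a CRI-O drop-in (image tag is an assumption)
  printf '[crio.image]\npause_image = "k8s.gcr.io/pause:3.7"\n' > /etc/crio/crio.conf.d/02-pause.conf

  # Kernel modules and sysctl parameters, persisted across reboots
  printf 'overlay\nbr_netfilter\n' > /etc/modules-load.d/k8s.conf
  modprobe overlay && modprobe br_netfilter
  printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.ipv4.ip_forward = 1\n' \
      > /etc/sysctl.d/k8s.conf
  sysctl --system

  # Start and enable the CRI-O daemon
  systemctl enable --now crio

  kubeadm init --control-plane-endpoint k8scp:6443 \
      --cri-socket unix:///var/run/crio/crio.sock \
      --pod-network-cidr 192.168.0.0/16 | tee /root/kubeadmInit.out

  # And finally Calico (manifest URL as of mid-2022)
  kubectl apply --kubeconfig /etc/kubernetes/admin.conf \
      -f https://projectcalico.docs.tigera.io/manifests/calico.yaml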

Create the stack

After having defined the template we need to launch the cloudformation create-stack command in order to create an instance of our stack.

First thing to do is download the template:

wget https://gist.githubusercontent.com/fracalo/a72cc8f42c1cb15110690ebfd2ac22e8/raw/8421413261730c5a506dfe892bda5036028a9e51/simpleKubeadmConf.yaml

The template parameters are KeyName (the SSH key you're going to use to access the VMs), Ami for the AMI ID (the Ubuntu 22.04 AMI ID for the region you have chosen) and Instance for the instance type.

aws cloudformation create-stack --region eu-north-1 --stack-name k8s-stack --template-body file://simpleKubeadmConf.yaml --parameters ParameterKey=KeyName,ParameterValue=k8sKey ParameterKey=Instance,ParameterValue=t3.medium 

The command above is an example of how I'm using create-stack;
in this case we've omitted the Ami parameter because there is a default value in the template.

You can check the status of the stack with aws cloudformation describe-stacks --region eu-north-1 --stack-name k8s-stack; in the stack outputs you can also view the IP addresses of the machines, useful for SSH access.
The commands we're running on the ControlPlane might take some minutes; you can check whether some processes are still running with ps, or by checking the kubeadm init output (/root/kubeadmInit.out) on the CP.
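
For example, to pull out just the outputs (the output key names depend on the template):

  aws cloudformation describe-stacks --region eu-north-1 --stack-name k8s-stack \
      --query 'Stacks[0].Outputs'
  # then ssh in, e.g.: ssh ubuntu@<controlPlaneIp>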

Connecting the worker node to the cluster

To join the worker node you can find the instructions in the output of kubeadm init that we’ve saved on the CP node (/root/kubeadmInit.out).
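
The join command printed there has roughly this shape (token and hash are placeholders; if the token has expired, kubeadm token create --print-join-command on the CP will mint a fresh one):

  kubeadm join k8scp:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>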
And this is it! From here on you can follow your K8s Administration journey on the official kubeadm guide.


Once finished with your tests you can tear down the stack with aws cloudformation delete-stack --stack-name … .
In conclusion, I found that automating the cluster creation through CloudFormation and kubeadm is quite easy.
Clearly this is just for testing purposes; in a production scenario 2 nodes wouldn't be enough, and there are lots of things that could be configured more accurately.
But still, I find a lot of value in having a single template doing all the heavy lifting; hope you do too!