This project is a personal testing ground for some DevOps technologies. The repo bootstraps an ephemeral, highly available Kubernetes cluster from scratch on EC2 instances roughly following the Kubernetes The Hard Way tutorial, but using Ansible and Terraform to streamline the whole process.
This project assumes a 64-bit Linux OS with the following dependencies already installed:
- Python 3.8+ (versions 2.7 and 3.4+ are untested but should work too)
- Terraform
- Ansible
- GNU make
- AWS CLI, with AWS credentials and appropriate permissions
Parameters such as the number of controller and worker nodes can be edited in the `tf/variables.tf` file. Nodes are allocated to subnets in a round-robin fashion.
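The round-robin allocation mentioned above can be sketched as follows. This is an illustrative sketch only, not the repo's actual code; the function name and subnet identifiers are made up:

```python
# Hypothetical sketch of round-robin node-to-subnet allocation:
# node i goes to subnet i modulo the number of subnets.
def assign_subnets(node_count, subnets):
    """Return the subnet assigned to each node index, cycling through the list."""
    return [subnets[i % len(subnets)] for i in range(node_count)]

# With 5 nodes and 3 subnets, the cycle wraps after the third node.
print(assign_subnets(5, ["subnet-a", "subnet-b", "subnet-c"]))
```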
- Make sure the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are set and that the corresponding user has appropriate permissions. The policy in `tf/iam-policy/` can be used to this end.
- From the root folder, run `make setup && export PATH=$(pwd)/bin:$PATH && make ca`. This will download precompiled binaries for `kubectl`, `cfssl` and `cfssljson` to the `bin` folder and add it to `PATH`. It will also bootstrap a certificate authority.
- Run `cd tf/ && terraform init` to initialise Terraform.
- From the root folder, run `make cluster`. This will take a while.
Nodes of the same type can communicate directly.
Terraform provisions the infrastructure according to the files in the `tf` folder; Python scripts in `py` use Terraform's formatted output to create configuration files at runtime from the instances' IP addresses. Specifically, they create:
- the Ansible inventory
- certificate signing requests
- service unit files
- networking configuration
- entries in the routing table
Ansible is then used to place the configuration files on each node and start the services.
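The first item above, generating an Ansible inventory from Terraform's output, can be sketched roughly as below. This is a hedged illustration, not the repo's actual script: the output variable names (`controller_ips`, `worker_ips`) and group names are assumptions, and in practice the JSON would come from running `terraform output -json`:

```python
import json

def build_inventory(tf_output_json):
    """Turn Terraform's `terraform output -json` text into an INI-style
    Ansible inventory with one group per node type."""
    out = json.loads(tf_output_json)
    lines = ["[controllers]"]
    lines += out["controller_ips"]["value"]  # assumed output variable name
    lines += ["", "[workers]"]
    lines += out["worker_ips"]["value"]      # assumed output variable name
    return "\n".join(lines)

# Example input mimicking Terraform's JSON output shape
sample = json.dumps({
    "controller_ips": {"value": ["10.0.1.10", "10.0.2.10"]},
    "worker_ips": {"value": ["10.0.1.20", "10.0.2.20"]},
})
print(build_inventory(sample))
```

Each generated file follows the same pattern: read the IP addresses from Terraform's output, substitute them into a template, and write the result where Ansible expects it.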