Kubernetes Cluster Setup On Ubuntu 20.04: A Step-by-Step Guide
Setting up a Kubernetes (K8s) cluster on Ubuntu 20.04 might seem daunting at first, but fear not! This guide breaks down the entire process into manageable steps, making it easier for you to deploy and manage containerized applications. Whether you're a seasoned DevOps engineer or just starting your journey with Kubernetes, this tutorial will provide you with a solid foundation.
Prerequisites
Before diving into the setup, ensure you have the following prerequisites in place:
- Ubuntu 20.04 Servers: You'll need at least two Ubuntu 20.04 servers: one to act as the master (control-plane) node, with the rest as worker nodes. Two servers are sufficient for a basic setup, but for production environments consider running at least three control-plane nodes for high availability.
- User with Sudo Privileges: Each server should have a user account with sudo privileges to execute administrative commands.
- Network Connectivity: Ensure all servers can communicate with each other over the network. This is crucial for the Kubernetes components to interact correctly.
- Basic Understanding of Linux Commands: Familiarity with basic Linux commands will help you navigate the setup process more efficiently.
- Containerization Concepts: A basic understanding of containerization, particularly Docker, is beneficial, as Kubernetes manages containerized applications.
- Swap Disabled: By default, the kubelet refuses to run with swap enabled. Disable it with `sudo swapoff -a` and comment out any swap entries in `/etc/fstab` so the change survives a reboot.
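Before moving on, the prerequisites can be sanity-checked with a short script. This is just a sketch: it reports problems rather than failing hard, and the list of commands it checks is an assumption about what the later steps need. Note that kubeadm also requires swap to be off, which the script reads from `/proc/swaps`.

```shell
# Preflight sketch: report (rather than enforce) the basics the later steps rely on.
preflight() {
  # kubeadm refuses to run with swap enabled by default; /proc/swaps has
  # only its header line when swap is off.
  if [ "$(tail -n +2 /proc/swaps | wc -l)" -eq 0 ]; then
    echo "swap: off"
  else
    echo "swap: ON (disable with: sudo swapoff -a)"
  fi
  # Commands the installation steps below assume are available.
  for c in curl sudo systemctl; do
    if command -v "$c" >/dev/null 2>&1; then
      echo "$c: found"
    else
      echo "$c: MISSING"
    fi
  done
}

preflight
```

Run it on every server; anything flagged `MISSING` or `ON` should be fixed before continuing.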
Step 1: Update Package Repositories and Install Docker
First things first, let's update the package repositories and install Docker on all your Ubuntu servers. Docker is a containerization platform that Kubernetes uses to run applications, so it's a fundamental requirement. Here's how:
- Update Package Lists: Open your terminal and run the following command to update the package lists:

  ```bash
  sudo apt update
  ```

  This synchronizes your package lists with the repositories, ensuring you have the latest information about available packages.
- Install Required Packages: Next, install the packages that allow `apt` to use a repository over HTTPS:

  ```bash
  sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
  ```

  These packages are essential for securely adding and using external repositories.
- Add Docker's GPG Key: Add Docker's official GPG key to your system:

  ```bash
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  ```

  This key verifies the authenticity of the Docker packages you'll be installing.
- Set Up the Stable Docker Repository: Add the Docker repository to your `apt` sources:

  ```bash
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  ```

  This adds the Docker repository to your system, allowing you to install Docker packages.
- Update Package Lists Again: Update the package lists once more to include the new Docker repository:

  ```bash
  sudo apt update
  ```

- Install Docker Engine: Finally, install Docker Engine, the Docker CLI, containerd, and the Docker Compose plugin:

  ```bash
  sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
  ```

  After installation, Docker should start automatically. You can verify this with:

  ```bash
  sudo systemctl status docker
  ```

  If Docker is not running, start it, and enable it to start on boot:

  ```bash
  sudo systemctl start docker
  sudo systemctl enable docker
  ```
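The verification at the end of Step 1 can be wrapped in a small snippet. This is a sketch rather than part of the official procedure: it prints a warning instead of exiting when Docker is absent, so it is safe to run on any machine.

```shell
# Check the Docker installation; warn instead of failing if Docker is absent.
check_docker() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "WARN: docker binary not found; re-run the install steps above"
    return 0
  fi
  docker --version
  # is-active prints "active" when the service is running.
  systemctl is-active docker 2>/dev/null || echo "WARN: docker service is not active"
}

check_docker
```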
Step 2: Install and Configure Kubernetes Components
Now that Docker is up and running, let's install the Kubernetes components: `kubeadm`, `kubelet`, and `kubectl`. `kubeadm` is a tool for bootstrapping Kubernetes clusters, `kubelet` is the agent that runs on each node, and `kubectl` is the command-line tool for interacting with the cluster. One caveat: since Kubernetes 1.24 the kubelet can no longer talk to Docker directly (the dockershim was removed), so if you install a recent Kubernetes version you will also need cri-dockerd, or you can point the kubelet at the containerd runtime that was installed alongside Docker in Step 1.
- Add the Kubernetes Apt Repository: The legacy `apt.kubernetes.io` repository has been retired, so add the current community-owned repository from `pkgs.k8s.io` instead (replace `v1.29` with the minor version you want to track):

  ```bash
  sudo apt-get update
  sudo apt-get install -y apt-transport-https ca-certificates curl gpg
  sudo mkdir -p /etc/apt/keyrings
  curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  ```

  These commands add the Kubernetes repository to your system, much as you added the Docker repository.
- Install Kubeadm, Kubelet, and Kubectl: Install `kubeadm`, `kubelet`, and `kubectl` using the following commands:

  ```bash
  sudo apt-get update
  sudo apt-get install -y kubelet kubeadm kubectl
  sudo apt-mark hold kubelet kubeadm kubectl
  ```

  The `apt-mark hold` command prevents these packages from being accidentally upgraded, which could cause compatibility issues.
- Configure the Systemd Cgroup Driver: Kubernetes requires the systemd cgroup driver. To configure it for Docker, edit the Docker configuration file:

  ```bash
  sudo nano /etc/docker/daemon.json
  ```

  Add the following content (create the file if it does not exist):

  ```json
  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" },
    "storage-driver": "overlay2"
  }
  ```

  Save and close the file, then restart Docker for the changes to take effect:

  ```bash
  sudo systemctl restart docker
  ```
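Instead of editing `/etc/docker/daemon.json` by hand, the same configuration can be staged and validated first. Below is a minimal sketch: the final `sudo install` and restart are left commented out so the snippet can be tried without touching a live system, and it assumes `python3` is available for the JSON check.

```shell
# Stage the daemon.json shown above, validate it, then (optionally) install it.
staging=$(mktemp)
cat > "$staging" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF

# Fail early on malformed JSON instead of breaking the Docker daemon.
python3 -m json.tool "$staging" > /dev/null && echo "daemon.json is valid JSON"

# Uncomment to install the file and apply it:
# sudo install -m 644 "$staging" /etc/docker/daemon.json
# sudo systemctl restart docker
```

Validating before installing matters here: Docker refuses to start if `daemon.json` is malformed, which would take the node offline.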
Step 3: Initialize the Kubernetes Cluster on the Master Node
With all the necessary components installed, it's time to initialize the Kubernetes cluster on the master node. This involves setting up the control plane and generating the necessary configuration files.
- Initialize the Kubernetes Cluster: Run the following command on the master node:

  ```bash
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  ```

  The `--pod-network-cidr` flag specifies the IP address range for the pod network. This range should not overlap with any existing network in your environment. The process takes a few minutes.

  Important: Note the `kubeadm join` command printed at the end of the initialization. You'll need it to join the worker nodes to the cluster. It will look something like this:

  ```bash
  kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  ```
- Configure Kubectl: After initialization, configure `kubectl` to interact with the cluster:

  ```bash
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ```

  These commands copy the Kubernetes configuration file to your user's home directory and set the correct permissions.
- Install a Pod Network Add-on: Kubernetes requires a pod network add-on to enable communication between pods. This example uses Calico:

  ```bash
  kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  ```

  This applies the Calico manifest, deploying Calico to your cluster. This step is crucial for proper network functioning. If your pod CIDR differs from Calico's default, uncomment and set `CALICO_IPV4POOL_CIDR` in the manifest to match the range you passed to `kubeadm init`. Other options include Flannel, Weave Net, and Cilium.
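It is worth saving the join command printed by `kubeadm init` somewhere safe; a fresh one can always be regenerated on the master with `kubeadm token create --print-join-command`. As a hedged illustration, the endpoint and token can be pulled out of a captured join line with plain shell (the values below are placeholders from documentation ranges, not real credentials):

```shell
# Extract the API endpoint and bootstrap token from a saved join command.
# Sample values are placeholders (192.0.2.0/24 is a documentation range).
join_cmd='kubeadm join 192.0.2.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef'

# Field 3 of "kubeadm join <endpoint> ..." is the control-plane endpoint.
endpoint=$(echo "$join_cmd" | awk '{print $3}')
# Capture the value that follows the --token flag.
token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')

echo "endpoint=$endpoint"
echo "token=$token"
```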
Step 4: Join Worker Nodes to the Kubernetes Cluster
Now that the master node is set up, it's time to join the worker nodes to the cluster. This involves running the kubeadm join command on each worker node.
- Run the `kubeadm join` Command: On each worker node, run the `kubeadm join` command that you noted down during master node initialization. It should look similar to this:

  ```bash
  sudo kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  ```

  Replace `<master-ip>:<port>`, `<token>`, and `<hash>` with the actual values from the output of the `kubeadm init` command on the master node. This registers the worker node with the control plane, allowing it to run pods. If you lost the original output, regenerate the command on the master node with `kubeadm token create --print-join-command`.
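With several workers, the join can be scripted. The sketch below is a dry run with placeholder hostnames and placeholder token values: it only prints the ssh commands it would execute. Remove the `echo` to run it for real, which assumes passwordless ssh and sudo are set up (not covered by this guide).

```shell
# Dry run: print the ssh command that would join each worker (placeholders).
WORKERS="worker1 worker2"
JOIN_CMD='sudo kubeadm join 192.0.2.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>'

for host in $WORKERS; do
  echo ssh "$host" "$JOIN_CMD"
done
```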
Step 5: Verify the Kubernetes Cluster
After joining the worker nodes, verify that the cluster is set up correctly. This involves checking the status of the nodes and pods.
- Check Node Status: On the master node, run the following command to check the status of the nodes:

  ```bash
  kubectl get nodes
  ```

  You should see all the nodes listed, with their status as `Ready`. If the nodes are not `Ready` yet, wait a few minutes and try again. If they remain `NotReady`, troubleshoot network connectivity, the pod network add-on, and the kubelet configuration (`journalctl -u kubelet` on the affected node is a good place to start).
- Check Pod Status: Check the status of the pods in the `kube-system` namespace:

  ```bash
  kubectl get pods -n kube-system
  ```

  This shows the status of the Kubernetes system pods. Ensure that all pods are `Running` and ready. If any are stuck in `Pending` or an error state, use `kubectl describe pod` and `kubectl logs` to identify the issue.
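Rather than rerunning `kubectl get nodes` by hand, the wait can be scripted. Below is a sketch of a simple poll loop; it warns and exits cleanly when `kubectl` is not on the PATH, so it is safe to try anywhere, and the 30 × 10 s timeout is an arbitrary choice.

```shell
# Poll until every node reports Ready, or give up after ~5 minutes.
wait_for_nodes() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "WARN: kubectl not found; run this on the master node"
    return 0
  fi
  for _ in $(seq 1 30); do
    total=$(kubectl get nodes --no-headers 2>/dev/null | wc -l)
    # Count nodes whose STATUS column is anything other than Ready.
    not_ready=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2 != "Ready"' | wc -l)
    if [ "$total" -gt 0 ] && [ "$not_ready" -eq 0 ]; then
      echo "all nodes Ready"
      return 0
    fi
    sleep 10
  done
  echo "timed out waiting for nodes"
  return 1
}

wait_for_nodes
```

The `total` check guards against declaring success when the API server is unreachable and `kubectl` returns an empty list.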
Step 6: Deploy a Sample Application
To ensure everything is working correctly, deploy a sample application to your Kubernetes cluster. We'll deploy a simple Nginx deployment.
- Create a Deployment: Create a file named `nginx-deployment.yaml` with the following content:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    labels:
      app: nginx
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          ports:
          - containerPort: 80
  ```

  This YAML file defines a deployment that creates two replicas of the Nginx container.
- Apply the Deployment: Apply the deployment using the following command:

  ```bash
  kubectl apply -f nginx-deployment.yaml
  ```

  This creates the deployment in your Kubernetes cluster.
- Create a Service: Create a file named `nginx-service.yaml` with the following content:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-service
  spec:
    selector:
      app: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80
    type: LoadBalancer
  ```

  This YAML file defines a service that exposes the Nginx deployment.
- Apply the Service: Apply the service using the following command:

  ```bash
  kubectl apply -f nginx-service.yaml
  ```

  This creates the service in your Kubernetes cluster. Note that the `LoadBalancer` type relies on a cloud provider (or an add-on such as MetalLB) to allocate an external IP; on a bare-metal kubeadm cluster the external IP will stay `<pending>`. In that case, change the type to `NodePort` and reach Nginx on the allocated node port instead.
- Verify the Deployment and Service: Check the status of the deployment and service:

  ```bash
  kubectl get deployments
  kubectl get services
  ```

  Ensure that the deployment shows the expected number of ready replicas and that the service has an external IP (or node port) assigned. You can then access the Nginx application at that address in your web browser.
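The external IP assignment can also be checked from a script. Here is a hedged sketch using `kubectl`'s jsonpath output; it assumes the `nginx-service` name from above and warns instead of failing when `kubectl` is unavailable.

```shell
# Read the LoadBalancer IP of nginx-service; empty output means still pending.
get_external_ip() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "WARN: kubectl not found"
    return 0
  fi
  kubectl get service nginx-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null
}

ip=$(get_external_ip)
echo "external ip: ${ip:-<pending>}"
```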
Conclusion
Congratulations! You've successfully set up a Kubernetes cluster on Ubuntu 20.04. This guide covered the installation of Docker, Kubernetes components, cluster initialization, joining worker nodes, and deploying a sample application. With this foundation, you can now explore more advanced Kubernetes features and deploy your own containerized applications.
Remember, Kubernetes is a complex system, and continuous learning is key. Keep exploring, experimenting, and referring to the official Kubernetes documentation to deepen your understanding and master the art of container orchestration.