Kubernetes Cluster Setup on Ubuntu 22.04: A Step-by-Step Guide


Setting up a Kubernetes cluster on Ubuntu 22.04 can seem daunting, but fear not! This guide breaks down each step, making the process manageable and understandable. We'll walk you through everything from preparing your Ubuntu servers to deploying your first application. By the end of this article, you'll have a fully functional Kubernetes cluster ready to handle your containerized workloads. Let's dive in!

Prerequisites

Before we get started, there are a few things you'll need:

  • Ubuntu 22.04 Servers: You'll need at least three Ubuntu 22.04 servers. One will act as the master (control-plane) node, and the others will be worker nodes. Ensure each server has a unique hostname and a static IP address. Plan on at least 2GB of RAM per node and two CPUs on the master node (kubeadm's preflight checks expect both), but more is always better!
  • Internet Access: All servers should have internet access to download the necessary packages.
  • SSH Access: Ensure you can SSH into each server for easy management.
  • Basic Linux Knowledge: Familiarity with Linux commands will be helpful.

Step 1: Preparing the Nodes

First, we need to prepare each of our Ubuntu 22.04 servers. This involves updating the package lists, installing necessary packages, and disabling swap. Let's go through these steps on each node (master and workers).

Update Package Lists and Install Dependencies

Start by SSHing into each of your servers. Then, update the package lists and install the necessary dependencies:

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

These packages are essential for adding and using Kubernetes repositories. apt-transport-https allows APT to access repositories over HTTPS. ca-certificates ensures that your system trusts the SSL certificates of the repositories. curl is a command-line tool for transferring data with URLs, and software-properties-common provides scripts for managing software repositories.

Disable Swap

Kubernetes requires swap to be disabled for proper operation; by default, the kubelet refuses to start while swap is enabled. Here's how to disable it:

sudo swapoff -a

To make this change permanent, you need to edit the /etc/fstab file. Open the file with your favorite text editor (e.g., nano or vim):

sudo nano /etc/fstab

Comment out the line that contains the swap entry by adding a # at the beginning of the line. Save the file and exit. This prevents the swap from being enabled on reboot. Ensuring swap is disabled is crucial because Kubernetes relies on predictable memory allocation, and swap can interfere with this process, leading to performance issues and instability. By disabling swap, you ensure that Kubernetes has direct control over memory resources, resulting in a more reliable and efficient cluster.
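
If you'd rather not edit the file by hand, here is a sketch of a non-interactive alternative. It assumes a conventional fstab where the swap entry has a "swap" field surrounded by whitespace, and it backs the file up first:

```shell
# Back up /etc/fstab, then comment out any uncommented line
# that contains a "swap" field.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i -E '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab

# Verify: after swapoff -a, the "Swap:" row should report 0B.
free -h
```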

Configure the hostname

Each machine needs a unique hostname. For example, set the master node's hostname to k8s-master and the worker nodes to k8s-worker-1 and k8s-worker-2:

sudo hostnamectl set-hostname k8s-master

Change k8s-master to the desired hostname for each node. Add these hostnames and their corresponding IP addresses to the /etc/hosts file on each node. This step is critical for name resolution within the cluster:

sudo nano /etc/hosts

Add lines like these, replacing the IPs and hostnames with your actual values:

192.168.1.10 k8s-master
192.168.1.11 k8s-worker-1
192.168.1.12 k8s-worker-2
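
One more piece of node preparation that kubeadm's preflight checks will look for: the overlay and br_netfilter kernel modules must be loaded, bridged traffic must be visible to iptables, and IP forwarding must be enabled. A minimal sketch, run on every node, following the settings the official kubeadm documentation calls for:

```shell
# Load the kernel modules Kubernetes networking needs,
# and persist them across reboots.
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Make bridged traffic visible to iptables and enable IP forwarding.
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```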

Step 2: Installing Container Runtime (Docker)

Kubernetes needs a container runtime to run your containers. We'll install Docker Engine, which also installs containerd; since Kubernetes 1.24 removed dockershim, kubeadm talks to containerd through the Container Runtime Interface (CRI) rather than to Docker Engine itself. Let's install it on all nodes.

Add Docker Repository

First, add the Docker repository to your system:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

This adds the official Docker GPG key and the Docker repository to your system's APT sources. Using the official Docker repository ensures that you are getting the latest and most secure version of Docker. It also simplifies the update process, as Docker updates will be available through the standard apt update and apt upgrade commands.

Install Docker Engine

Now, update the package lists and install Docker Engine, containerd, and Docker Compose:

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

Configure Docker Daemon

Configure Docker to use systemd as the cgroup driver. Create or edit the /etc/docker/daemon.json file:

sudo mkdir -p /etc/docker
sudo nano /etc/docker/daemon.json

Add the following content to the file:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

Save the file and restart the Docker service:

sudo systemctl restart docker
sudo systemctl enable docker

Using systemd as the cgroup driver is essential for compatibility with Kubernetes. Kubernetes uses cgroups to manage container resources, and having Docker and the kubelet agree on systemd as the cgroup driver prevents potential conflicts and ensures that resource limits and isolation are properly enforced.
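
A related note: because recent kubeadm releases use containerd (installed above via containerd.io) rather than Docker Engine itself, containerd needs its own configuration too. The stock containerd.io package ships with the CRI plugin disabled, and containerd should also use the systemd cgroup driver. A sketch of the usual fix, run on every node:

```shell
# Replace the packaged config (which disables the CRI plugin
# that the kubelet depends on) with containerd's full defaults,
# then switch the runtime to the systemd cgroup driver.
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```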

Step 3: Installing Kubernetes Components

With the container runtime in place, it's time to install the Kubernetes components: kubelet, kubeadm, and kubectl.

Add Kubernetes Repository

Add the Kubernetes repository to your system:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Note: The legacy apt.kubernetes.io and packages.cloud.google.com repositories (and the apt-key tool) are deprecated; the community-owned pkgs.k8s.io repository replaces them. The v1.28 segment in the URLs pins a Kubernetes minor release, so change it to whichever minor version you intend to install.

Install Kubelet, Kubeadm, and Kubectl

Update the package lists and install the Kubernetes components:

sudo apt update
sudo apt install -y kubelet kubeadm kubectl

Hold Package Versions

To prevent accidental upgrades, hold the package versions:

sudo apt-mark hold kubelet kubeadm kubectl

Holding package versions ensures that your Kubernetes components remain at the tested and stable versions you initially installed. This prevents unexpected issues that can arise from automatic updates introducing breaking changes or incompatibilities. By holding the versions, you maintain control over your cluster's stability and can plan updates carefully.
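
Before moving on, it's worth confirming that the tools installed and that the holds took effect:

```shell
# Check the installed versions of the client tools.
kubeadm version -o short
kubectl version --client

# apt-mark should list kubelet, kubeadm, and kubectl as held.
apt-mark showhold
```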

Step 4: Initializing the Kubernetes Cluster (Master Node)

Now, let's initialize the Kubernetes cluster on the master node.

Initialize the Cluster

Run the following command to initialize the cluster. Replace 192.168.1.10 with the IP address of your master node:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.10
  • --pod-network-cidr: Specifies the IP address range for pods. 10.244.0.0/16 is a common choice.
  • --apiserver-advertise-address: Specifies the IP address that the API server will advertise. This should be the IP address of your master node.

This process takes a few minutes. The --pod-network-cidr flag is crucial because it defines the IP address range that will be used for assigning IP addresses to pods within the cluster. Choosing an appropriate CIDR range is important to avoid conflicts with existing networks. The --apiserver-advertise-address ensures that other nodes can reach the API server on the master node.

Configure Kubectl

After the initialization is complete, you'll see instructions on how to configure kubectl. Run the following commands as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands copy the Kubernetes configuration file to your user's home directory and set the correct ownership, allowing you to use kubectl without sudo. Configuring kubectl properly is essential for interacting with the Kubernetes cluster. Without the correct configuration, kubectl will not be able to authenticate with the API server and you will not be able to manage your cluster.
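
A quick sanity check that kubectl can now talk to the API server:

```shell
# Both commands should succeed without authentication errors.
kubectl cluster-info
# The master node will report NotReady until a network plugin is installed.
kubectl get nodes
```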

Apply a Network Plugin

Kubernetes requires a network plugin to enable communication between pods. We'll use Calico in this example:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

This command applies the Calico manifest, which sets up the Calico network plugin. Applying a network plugin like Calico is fundamental for enabling networking between pods in the cluster. Without a network plugin, pods will not be able to communicate with each other, rendering the cluster non-functional. Calico provides network policies, ensuring secure and isolated communication between different applications running in the cluster.

Step 5: Joining Worker Nodes to the Cluster

Now, let's join the worker nodes to the cluster. After initializing the master node, the kubeadm init command provides a kubeadm join command. It will look similar to this:

kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Run this command on each worker node. Joining worker nodes is essential to expand the cluster's capacity and distribute workloads. Worker nodes contribute their computing resources (CPU, memory) to the cluster, allowing it to run more applications and handle higher traffic. Without worker nodes, the master node would be responsible for running all the workloads, which is not scalable or resilient.
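
The bootstrap token embedded in that command expires (after 24 hours by default). If you add a worker node later and no longer have the original output, you can print a fresh join command on the master node:

```shell
# Generates a new token and prints a complete "kubeadm join ..."
# command, including the current CA certificate hash.
sudo kubeadm token create --print-join-command
```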

Step 6: Verify the Cluster

After joining the worker nodes, verify that the cluster is working correctly. On the master node, run:

kubectl get nodes

You should see all the nodes listed with a STATUS of Ready. (It can take a minute or two for nodes to become Ready while the network plugin starts up.)

kubectl get pods --all-namespaces

This command lists all the pods running in the cluster. Make sure all the essential pods, like the Calico pods, are running.

Step 7: Deploying a Sample Application

Finally, let's deploy a simple application to test our cluster. We'll deploy a basic Nginx deployment.

Create a Deployment

Create a file named nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Apply the Deployment

Apply the deployment using kubectl:

kubectl apply -f nginx-deployment.yaml
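
You can watch the rollout complete and confirm that all three replicas are up:

```shell
# Blocks until the deployment's pods are available.
kubectl rollout status deployment/nginx-deployment

# Should list three Running pods labeled app=nginx.
kubectl get pods -l app=nginx
```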

Create a Service

Create a service to expose the Nginx deployment. Create a file named nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Apply the Service

Apply the service using kubectl:

kubectl apply -f nginx-service.yaml

Access the Application

Get the external IP address of the service:

kubectl get service nginx-service

Access the application in your browser using the external IP address. If everything is set up correctly, you should see the Nginx welcome page.
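
One caveat: a Service of type LoadBalancer only receives an external IP when a load-balancer provider exists (a cloud provider's integration, or something like MetalLB on bare metal). On a plain bare-metal cluster like this one, the EXTERNAL-IP column will stay <pending>. Two quick alternatives for testing:

```shell
# Option 1: forward a local port to the service, then browse
# to http://localhost:8080.
kubectl port-forward service/nginx-service 8080:80

# Option 2: a LoadBalancer service also allocates a NodePort;
# find it in the PORT(S) column (the number after "80:") and
# browse to http://<any-node-ip>:<node-port>.
kubectl get service nginx-service
```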

Conclusion

Congratulations! You've successfully set up a Kubernetes cluster on Ubuntu 22.04. You've learned how to prepare your nodes, install the container runtime and Kubernetes components, initialize the cluster, join worker nodes, and deploy a sample application. This is just the beginning: explore more advanced features, such as Ingress, ConfigMaps and Secrets, persistent storage, and autoscaling, to unlock the full potential of your cluster. With your own cluster, you can now enjoy the scalability, resilience, and efficient resource utilization that Kubernetes offers.

Enjoy your Kubernetes journey, and happy deploying!