Kubernetes On Ubuntu: A Step-by-Step Tutorial


Hey guys! Ready to dive into the awesome world of Kubernetes (k8s) on Ubuntu? You've come to the right place! This comprehensive tutorial will guide you through setting up a Kubernetes cluster on Ubuntu, step by step. We'll cover everything from the basic requirements to deploying your first application. Let's get started!

What is Kubernetes?

Kubernetes, often abbreviated as k8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Think of Kubernetes as the conductor of an orchestra, ensuring that all the different parts of your application work together harmoniously. It handles tasks like deploying applications, scaling them based on demand, managing resources, and ensuring high availability.

Key benefits of using Kubernetes include:

  • Scalability: Easily scale your applications up or down based on traffic and resource usage.
  • High Availability: Kubernetes ensures that your applications are always available by automatically restarting failed containers and rescheduling them on healthy nodes.
  • Resource Optimization: Efficiently utilize your resources by packing containers tightly onto nodes.
  • Automation: Automate deployment, scaling, and management tasks, reducing manual intervention.
  • Portability: Deploy your applications consistently across different environments, from on-premises to public clouds.

Kubernetes achieves this by abstracting away the underlying infrastructure, allowing developers to focus on writing code and deploying applications without worrying about the complexities of managing servers and networks. It introduces concepts like Pods (the smallest deployable unit), Services (an abstraction layer that exposes applications), and Deployments (which manage the desired state of your applications). Understanding these concepts is crucial for effectively using Kubernetes.
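To make those concepts concrete, here is a minimal Pod manifest — a sketch for illustration only (the names and image tag are arbitrary); we'll write a full Deployment and Service later in the tutorial:

```yaml
# A Pod is the smallest deployable unit: one or more containers
# scheduled together on a single node.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # arbitrary example name
  labels:
    app: nginx           # labels are how Services and Deployments find Pods
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # example image tag
    ports:
    - containerPort: 80  # the port the container listens on
```

A Deployment wraps a template like this and keeps the desired number of replicas running; a Service then gives those Pods a stable network identity.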

Prerequisites

Before we begin, let's make sure you have everything you need:

  • Ubuntu Servers: You'll need at least two Ubuntu servers (1 master node and 1 worker node). I recommend using Ubuntu 20.04 or later. Make sure each server has at least 2 GB of RAM and 2 CPUs. For a production environment, you'll likely want more worker nodes for redundancy and higher capacity. You can use virtual machines (VMs) or physical servers, depending on your needs.
  • SSH Access: Make sure you can SSH into all your servers. This will allow you to remotely manage and configure them.
  • Root or Sudo Privileges: You'll need root or sudo privileges to install and configure the necessary software.
  • Internet Connection: All servers should have an internet connection to download packages and container images.
  • Basic Linux Knowledge: A basic understanding of Linux commands and concepts will be helpful.

It’s also good practice to ensure your servers are up to date. Run the following commands on each server to update the package lists and upgrade installed packages:

sudo apt update
sudo apt upgrade -y

This will ensure you have the latest security patches and bug fixes, which is crucial for a stable and secure Kubernetes cluster.
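One more preparation step worth doing now, because the cluster setup depends on it later: by default the kubelet refuses to start while swap is enabled, so disable swap on every server. A minimal sketch (the sed pattern assumes a standard /etc/fstab layout):

```shell
# Turn swap off for the current boot (kubelet requires this by default)
sudo swapoff -a
# Comment out any swap entry in /etc/fstab so swap stays off after reboots
sudo sed -i '/ swap / s/^\(.*\)$/#\1/' /etc/fstab
```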

Step 1: Install Container Runtime (Docker)

A container runtime is the software responsible for actually running containers on each node. Kubernetes supports several runtimes, including containerd, CRI-O, and Docker. In this tutorial, we'll install Docker Engine, which is one of the most popular options and pulls in containerd alongside it. One caveat: Kubernetes 1.24 and later removed built-in support for Docker Engine (the dockershim), so on those versions the kubelet talks to the bundled containerd, or to Docker through the cri-dockerd adapter, rather than to Docker directly. To install Docker, follow these steps on all your servers (master and worker nodes):

  1. Update the package index:

    sudo apt update
    
  2. Install required packages to allow apt to use a repository over HTTPS:

    sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
    
  3. Add Docker’s official GPG key:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    
  4. Set up the stable repository:

    echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  5. Install Docker Engine:

    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io -y
    

  6. Verify Docker installation:

    sudo docker run hello-world
    

    This command downloads a test image and runs it in a container. If everything is set up correctly, you should see a message confirming that Docker is working.

  7. Add your user to the docker group (optional but recommended):

    sudo usermod -aG docker $USER
    newgrp docker
    

    This allows you to run Docker commands without using sudo. You'll need to log out and log back in for this change to take effect.

Step 2: Install kubeadm, kubelet, and kubectl

kubeadm, kubelet, and kubectl are essential components for setting up and managing a Kubernetes cluster. kubeadm is a command-line tool used to bootstrap the cluster. kubelet is an agent that runs on each node and manages the containers. kubectl is a command-line tool used to interact with the Kubernetes API server.

Install these components on all your servers (master and worker nodes) by following these steps:

  1. Update the package index:

    sudo apt update
    
  2. Install required packages:

    sudo apt install apt-transport-https ca-certificates curl -y
    
  3. Download the Kubernetes package repository signing key. (The legacy apt.kubernetes.io repository backed by packages.cloud.google.com has been frozen and shut down, so we use the community-owned pkgs.k8s.io repository instead.)

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    
  4. Add the Kubernetes apt repository (v1.30 pins the minor version; swap in the release you want to install):

    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    
    
  5. Update the package index again:

    sudo apt update
    
  6. Install kubeadm, kubelet, and kubectl:

    sudo apt install kubelet kubeadm kubectl -y
    sudo apt-mark hold kubelet kubeadm kubectl
    

The apt-mark hold command prevents these packages from being upgraded automatically, since an unplanned version skew between kubelet, kubeadm, kubectl, and the control plane can break your cluster.

Step 3: Initialize the Kubernetes Cluster (Master Node)

Now, let's initialize the Kubernetes cluster on the master node. This process involves setting up the control plane components, such as the API server, scheduler, and controller manager. Run the following command on your master node only:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr flag specifies the IP address range for the pod network. This range must not overlap with any network already in use in your environment. The 10.244.0.0/16 range is commonly used, but you can choose a different range if needed. Note that Calico, which we'll deploy in the next step, has historically defaulted to 192.168.0.0/16; if your pods fail to get addresses, set CALICO_IPV4POOL_CIDR in the Calico manifest to match the range you pass here.

After the command completes successfully, you'll see some output that includes instructions for setting up kubectl and joining worker nodes to the cluster. Make sure to copy these instructions, as you'll need them later.

Set up kubectl:

Run the following commands on the master node to configure kubectl to connect to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands copy the Kubernetes configuration file to your home directory and set the correct ownership, allowing you to use kubectl without sudo.

Step 4: Deploy a Pod Network (Master Node)

A pod network provides connectivity between pods running on different nodes in the cluster. We need to deploy a pod network add-on to enable this communication. We'll use Calico, which is a popular and flexible networking solution. Run the following command on the master node:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

This command downloads the Calico manifest file and applies it to the cluster, deploying the necessary components. (The manifest URL changes between Calico releases; if it no longer resolves, grab the current link from the Calico documentation.) It may take a few minutes for the Calico pods to become ready.

You can check the status of the Calico pods using the following command:

kubectl get pods -n kube-system

Wait until all Calico pods are in the Running state before proceeding.
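Instead of re-running kubectl get pods and eyeballing the output, you can block until Calico reports ready. This assumes the pods carry the k8s-app=calico-node label that the stock manifest applies:

```shell
# Wait (up to 5 minutes) for every calico-node pod to reach the Ready condition
kubectl wait --namespace kube-system \
  --for=condition=Ready pod \
  -l k8s-app=calico-node \
  --timeout=300s
```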

Step 5: Join Worker Nodes to the Cluster

Now, let's join the worker nodes to the cluster. On each worker node, run the kubeadm join command that was printed at the end of the kubeadm init output. It should look something like this:

sudo kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Replace <master-ip>, <master-port>, <token>, and <hash> with the actual values from the output of kubeadm init. This command configures the worker node to connect to the master node and join the cluster.
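If you've lost the join command (the bootstrap token also expires after 24 hours by default), you don't need to re-run kubeadm init; generate a fresh one on the master node:

```shell
# Prints a complete "kubeadm join ..." command with a new token and CA hash
sudo kubeadm token create --print-join-command
```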

After running the command on each worker node, you can check the status of the nodes on the master node using the following command:

kubectl get nodes

You should see all your worker nodes listed, with a status of Ready. It might take a few minutes for the nodes to become ready.

Step 6: Deploy Your First Application

Congratulations! You now have a working Kubernetes cluster. Let's deploy a simple application to test it out. We'll deploy a basic Nginx web server. Create a file named nginx-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

This YAML file defines a Deployment that creates two replicas of the Nginx web server. (For anything beyond a quick test, pin a specific image tag such as nginx:1.25 instead of nginx:latest, so your pods don't silently change versions.) To deploy the application, run the following command on the master node:

kubectl apply -f nginx-deployment.yaml

This command creates the deployment and starts the Nginx pods. You can check the status of the deployment using the following command:

kubectl get deployments

You can also check the status of the pods using the following command:

kubectl get pods

Wait until all the pods are in the Running state.

Step 7: Expose the Application

To access the application from outside the cluster, you need to expose it using a service. We'll create a service of type NodePort, which exposes the application on a specific port on each node. Run the following command on the master node:

kubectl expose deployment nginx-deployment --type=NodePort --port=80 --target-port=80

This command creates a service that exposes the Nginx deployment on port 80. You can get the NodePort using the following command:

kubectl get service nginx-deployment

The output will show the NodePort assigned to the service. It will be a port number between 30000 and 32767. To access the application, open a web browser and navigate to the IP address of any of your nodes, followed by the NodePort. For example, if your node's IP address is 192.168.1.100 and the NodePort is 30000, you would navigate to http://192.168.1.100:30000.
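The kubectl expose command is the quick imperative route; the equivalent declarative Service manifest, which you can keep in version control, would look roughly like this (the nodePort value is only an example — omit it to let Kubernetes pick a free port in the 30000-32767 range):

```yaml
# nginx-service.yaml — NodePort Service equivalent to the kubectl expose command
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx          # matches the labels on the Pods from Step 6
  ports:
  - port: 80            # cluster-internal Service port
    targetPort: 80      # containerPort on the Pods
    nodePort: 30000     # example value; omit to auto-assign
```

Apply it with kubectl apply -f nginx-service.yaml, just like the Deployment.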

You should see the default Nginx welcome page. Congratulations! You have successfully deployed your first application on Kubernetes.
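As a small taste of what comes next, scaling the Deployment you just created is a single command (run on the master node; this assumes the nginx-deployment from Step 6):

```shell
# Grow from 2 replicas to 4; Kubernetes schedules the extra Pods automatically
kubectl scale deployment nginx-deployment --replicas=4
# Confirm: four nginx pods should appear
kubectl get pods
```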

Conclusion

And there you have it! You've successfully set up a Kubernetes cluster on Ubuntu and deployed your first application. This is just the beginning of your Kubernetes journey. There's a lot more to learn, such as managing deployments, scaling applications, and configuring advanced networking. But with this foundation, you're well on your way to becoming a Kubernetes master. Keep experimenting and exploring, and you'll be amazed at what you can achieve!