Kubeadm: Create Kubernetes Cluster On Ubuntu

Hey guys! Today, we're diving into the awesome world of Kubernetes and I'm going to walk you through setting up a Kubernetes cluster on Ubuntu using Kubeadm. Trust me, it's not as scary as it sounds! Kubernetes has become the go-to platform for orchestrating containerized applications, and Kubeadm makes the process of creating a cluster surprisingly straightforward. So, buckle up, and let's get started!

Prerequisites

Before we jump into the nitty-gritty, let's make sure you have everything you need. Think of this as gathering your ingredients before baking a cake. You wouldn't want to start only to realize you're missing something crucial, right?

  • Ubuntu Servers: You'll need at least two Ubuntu servers. One will be your master node, and the other will be a worker node. For a production environment, you'd typically want more worker nodes for redundancy and scalability. But for our little experiment, two is perfectly fine.
  • Operating System: Make sure you're running Ubuntu 16.04 or later. I'll be using Ubuntu 20.04 in this guide, but the steps should be similar for other versions (you can confirm this, and the rest of these prerequisites, with the quick checks after this list).
  • User Privileges: You'll need sudo privileges on both servers. This allows you to run commands with administrative rights, which is essential for installing and configuring Kubernetes.
  • Internet Access: Both servers should have internet access to download the necessary packages.
  • Container Runtime: Kubernetes needs a container runtime on every node. Docker is the most familiar way to get one, and the containerd engine it installs alongside it is what recent Kubernetes versions actually talk to. Make sure Docker is installed and running on both servers. If you don't have it already, don't worry; I'll show you how to install it in the next section.
  • Basic Linux Knowledge: A basic understanding of Linux commands and concepts will be helpful. You don't need to be a Linux guru, but knowing your way around the terminal will make things much smoother.
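
If you want a quick sanity check of these prerequisites before going on, the following commands (run on each server) confirm the basics; the Docker check will only pass once Docker is installed:

lsb_release -a                                        # confirms your Ubuntu release
sudo -v                                               # confirms you have sudo privileges
curl -sI https://download.docker.com >/dev/null && echo "internet access OK"
docker --version                                      # will only work after Docker is installed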

Installing Docker

First off, you will need to install Docker on each of your Ubuntu servers. This involves updating the package index, installing the packages that allow apt to use a repository over HTTPS, adding Docker's official GPG key, setting up the stable repository, and finally installing Docker itself. After installation, it's crucial to start and enable Docker so it comes back up on boot. Docker is the engine that will run our containers (together with the containerd runtime it installs alongside), so it's essential to have it up and running before we proceed. It also simplifies deployment by letting you bundle an application with all of its dependencies into a standardized, portable unit.

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
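
One caveat worth checking before moving on: recent Kubernetes releases (1.24 and later) no longer talk to Docker directly but to the containerd engine installed alongside it, and the containerd.io package ships with its CRI plugin disabled by default. Here's a minimal sketch of the usual fix, which regenerates containerd's config with CRI enabled and switches it to the systemd cgroup driver; double-check the official container runtime docs for your versions:

sudo docker run --rm hello-world                      # quick check that Docker itself works
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd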

Installing Kubeadm, Kubelet, and Kubectl

Next, you need to install Kubeadm, Kubelet, and Kubectl on both servers. Kubeadm is the tool we'll use to bootstrap our Kubernetes cluster. Kubelet is the agent that runs on each node and communicates with the master node. Kubectl is the command-line tool we'll use to interact with the cluster.

To install these, we first need to add the Kubernetes apt repository (the community-hosted one at pkgs.k8s.io; the old apt.kubernetes.io repository has been shut down). Then, we can install the packages using apt. It's important to hold the package versions to prevent accidental upgrades, which could lead to compatibility issues.

By installing Kubeadm, Kubelet, and Kubectl, we equip our servers with the necessary tools to create and manage a Kubernetes cluster. Kubeadm simplifies the process of initializing a cluster, while Kubelet ensures that our nodes can execute the instructions given by the master node. Kubectl, on the other hand, empowers us to interact with the cluster, deploy applications, and monitor its health. These tools are the building blocks of our Kubernetes infrastructure, enabling us to orchestrate and manage containerized workloads efficiently.

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gnupg
# The legacy apt.kubernetes.io / packages.cloud.google.com repository has been shut down,
# so use the community-hosted pkgs.k8s.io repository instead (pinned here to the v1.30 minor release).
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
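
You can confirm that the tools landed and are pinned with a quick version check; the exact versions you see will depend on when you install:

kubeadm version
kubelet --version
kubectl version --client
apt-mark showhold                                     # should list kubeadm, kubelet and kubectl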

Initializing the Kubernetes Master Node

Now comes the exciting part where we initialize the Kubernetes master node. This is where we use Kubeadm to create the control plane for our cluster. The control plane is the brain of the cluster, responsible for managing and coordinating all the worker nodes.

Before initializing, it's a good idea to disable swap. Kubernetes doesn't play nicely with swap enabled. You can disable it temporarily with sudo swapoff -a. To make the change permanent, you'll need to comment out the swap line in /etc/fstab.

To initialize the master node, you'll use the kubeadm init command. You'll also need to specify the pod network CIDR, which is the range of IP addresses that will be used for the pods in your cluster. Calico, the popular networking solution we'll install later, defaults to 192.168.0.0/16 in its manifest, so that's the CIDR we'll pass.

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

After running kubeadm init, you'll see some important information in the output. Make sure to save this information, as you'll need it later to configure Kubectl and join worker nodes to the cluster.

Specifically, you'll need the kubeadm join command, which includes a token and the master node's IP address and port.
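
If you lose that output, there's no need to re-initialize; you can print a fresh join command (with a new token) on the master node at any time:

sudo kubeadm token create --print-join-command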

Configuring Kubectl

After initializing the master node, you need to configure Kubectl to interact with the cluster. Kubectl is the command-line tool that allows you to manage your Kubernetes cluster. To configure it, copy the admin kubeconfig from /etc/kubernetes/admin.conf to .kube/config in your home directory.

Then, you need to change the ownership of the file to your user. This will allow you to run Kubectl commands without using sudo.

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now, you can verify that Kubectl is configured correctly by running kubectl get nodes. You should see the master node listed, but it will likely be in the NotReady state. This is because we haven't installed a pod network yet.
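
For example, on the master node:

kubectl get nodes
kubectl cluster-info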

By configuring Kubectl, we gain the ability to manage our Kubernetes cluster from the command line. This is essential for deploying applications, monitoring cluster health, and performing other administrative tasks. Kubectl provides a powerful and flexible interface for interacting with the Kubernetes API, allowing us to control every aspect of our cluster.

Installing a Pod Network

Now, let's install a pod network. A pod network allows pods to communicate with each other. There are several pod network options available, such as Calico, Flannel, and Weave Net. In this guide, we'll use Calico, as it's a popular and powerful choice. To install Calico, you can apply the Calico manifest using kubectl apply.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

After applying the manifest, wait a few minutes for the pods to start. You can check the status of the pods by running kubectl get pods -n kube-system. Once all the Calico pods are running, the master node should be in the Ready state.
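
For example, to watch the Calico pods come up and then confirm the node has flipped to Ready:

kubectl get pods -n kube-system --watch
kubectl get nodes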

Installing a pod network is crucial for enabling communication between pods within our Kubernetes cluster. Without a pod network, pods would be isolated from each other, making it impossible to deploy multi-tiered applications or services that rely on inter-pod communication. Calico provides a robust and scalable networking solution for Kubernetes, ensuring that our pods can communicate efficiently and securely.

Joining Worker Nodes

Now that the master node is up and running, we can join the worker nodes to the cluster. To do this, you'll need the kubeadm join command that was outputted when you initialized the master node.

Run this command on each worker node. This will configure the worker node to connect to the master node and join the cluster.

sudo kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<sha256>

After running the kubeadm join command, the worker node will be registered with the master node. You can verify that the worker node has joined the cluster by running kubectl get nodes on the master node. You should see the worker node listed, and its status should be Ready.
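
A quick check from the master node; -o wide also shows each node's internal IP and container runtime. The label command is optional (worker nodes otherwise show a role of <none>), and <worker-node-name> is a placeholder for whatever name kubectl get nodes reports:

kubectl get nodes -o wide
kubectl label node <worker-node-name> node-role.kubernetes.io/worker=worker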

Joining worker nodes to the cluster is essential for scaling our Kubernetes deployment. Worker nodes provide the compute resources necessary to run our containerized applications. By adding more worker nodes, we can increase the capacity of our cluster and handle more traffic. This ensures that our applications remain responsive and available, even under heavy load.

Testing the Cluster

Finally, let's test the cluster to make sure everything is working as expected. We'll deploy a simple Nginx pod and expose it as a service.

First, create a deployment:

kubectl create deployment nginx --image=nginx

Then, expose the deployment as a service:

kubectl expose deployment nginx --port=80 --type=NodePort

Get the service's NodePort:

kubectl get service nginx

You'll see an entry like 80:3XXXX/TCP under the PORT(S) column; the number after the colon (in the 30000-32767 range) is the NodePort. You can access the Nginx service by navigating to the IP address of any of your nodes (master or worker) followed by the NodePort in your web browser.
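
If you'd rather test from the terminal, curl works just as well; <node-ip> and <node-port> are placeholders for your own values:

curl http://<node-ip>:<node-port>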

If you see the Nginx welcome page, congratulations! You've successfully created a Kubernetes cluster using Kubeadm.

Testing the cluster is the final step in our Kubernetes setup. By deploying a simple application like Nginx, we can verify that our cluster is functioning correctly. This ensures that our pods are running, our services are accessible, and our networking is properly configured. A successful test confirms that our Kubernetes cluster is ready to deploy and manage real-world applications.
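
When you're done experimenting, you can clean up the test resources:

kubectl delete service nginx
kubectl delete deployment nginx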

Conclusion

And there you have it! You've successfully created a Kubernetes cluster on Ubuntu using Kubeadm. This is just the beginning, of course. There's a whole world of Kubernetes concepts to explore, such as deployments, services, namespaces, and more.

But with a working cluster, you're now ready to start deploying your own applications and exploring the power of container orchestration. Happy Kubernetes-ing!

As always, refer to the official Kubernetes documentation for the most up-to-date information and best practices. Kubernetes is constantly evolving, and staying informed is key to mastering this powerful platform.