Kubernetes Cluster On Ubuntu 20.04: A Step-by-Step Guide
So, you want to dive into the world of Kubernetes, huh? Great choice! Kubernetes, often abbreviated as K8s, is the leading container orchestration platform, and getting a cluster up and running on Ubuntu 20.04 is a fantastic way to get your hands dirty. This guide will walk you through each step, making the process as smooth as possible. Whether you're a seasoned developer or just starting out, you'll find this comprehensive tutorial super helpful.
Prerequisites
Before we get started, let’s make sure you have everything you need. You'll save yourself a lot of headaches later on!
- Ubuntu 20.04 Servers: You’ll need at least two Ubuntu 20.04 servers: one will serve as the master (control-plane) node, and the other as a worker node. kubeadm expects at least 2 CPUs and 2 GB of RAM on the master node, and similar resources are sensible for workers. For a production environment, consider multiple worker nodes for redundancy and scalability. Ensure every server has a static IP address and that the servers can reach each other over the network.
- Sudo Privileges: Make sure you have a user account with sudo privileges on each server. This allows you to run commands as an administrator.
- Internet Connection: Each server needs a stable internet connection to download the necessary packages.
- Basic Linux Knowledge: Familiarity with basic Linux commands will be beneficial.
- Containerization Concepts: A basic understanding of containerization, particularly Docker, will help you grasp the concepts more easily.
Having these prerequisites in place will ensure a smoother installation process. Now, let's move on to the exciting part – installing the Kubernetes cluster!
Step 1: Installing Container Runtime (Docker)
Kubernetes needs a container runtime to manage and run containers, so installing one is the first step. Docker is a popular choice, and we'll use it in this guide; note that the Docker packages also install containerd, which is the runtime newer Kubernetes releases (1.24 and later, where the dockershim was removed) talk to directly. Follow these steps to install Docker on each of your Ubuntu servers.
- Update the Package Index:
  First, update the package index to ensure you have the latest package information:

  ```bash
  sudo apt update
  ```

- Install Required Packages:
  Install the packages that allow apt to use a repository over HTTPS:

  ```bash
  sudo apt install apt-transport-https ca-certificates curl software-properties-common
  ```

- Add Docker’s Official GPG Key:
  Add Docker’s official GPG key to verify the downloaded packages:

  ```bash
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  ```

- Set Up the Stable Docker Repository:
  Add the stable Docker repository to your system:

  ```bash
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  ```

- Install Docker Engine:
  Update the package index again and install Docker Engine, containerd, and the Docker Compose plugin:

  ```bash
  sudo apt update
  sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
  ```

- Verify Docker Installation:
  Check that Docker is installed correctly by running the Docker version command:

  ```bash
  docker --version
  ```

  You should see the Docker version information printed in the terminal. Also, run the hello-world image to ensure Docker is working correctly:

  ```bash
  sudo docker run hello-world
  ```

  This command downloads a test image and runs it in a container. If everything is set up correctly, you’ll see a “Hello from Docker!” message.

- Configure Docker to Start on Boot:
  To ensure Docker starts automatically on boot, enable and start the Docker service (see the cgroup driver note after this list):

  ```bash
  sudo systemctl enable docker
  sudo systemctl start docker
  ```
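One tweak worth making before moving on: recent kubelet versions default to the systemd cgroup driver, while Docker defaults to cgroupfs, and a mismatch can keep the kubelet from starting. The snippet below is a minimal sketch of the commonly recommended fix, assuming you have no existing /etc/docker/daemon.json (merge the keys by hand if you do):

```bash
# Switch Docker to the systemd cgroup driver so it matches the kubelet default.
# Assumes no existing /etc/docker/daemon.json on the node.
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```

Afterwards, running docker info | grep -i cgroup should report the systemd driver.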
By completing these steps, you’ve successfully installed Docker on your Ubuntu 20.04 servers. Now that the container runtime is ready, let’s move on to installing Kubernetes components.
Step 2: Installing Kubernetes Components (kubeadm, kubelet, kubectl)
Now, let's install the Kubernetes components: kubeadm, kubelet, and kubectl. These tools are essential for setting up and managing your Kubernetes cluster. Here’s how to install them on each server.
- Add Kubernetes Repository:
  First, add the Kubernetes repository to your system. This involves adding the Kubernetes signing key and the repository source (if this legacy repository is unavailable, see the note after this list):

  ```bash
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  ```

- Install kubeadm, kubelet, and kubectl:
  Update the package index and install kubeadm, kubelet, and kubectl:

  ```bash
  sudo apt update
  sudo apt install -y kubelet kubeadm kubectl
  sudo apt-mark hold kubelet kubeadm kubectl
  ```

  kubeadm is a tool for bootstrapping a Kubernetes cluster. kubelet is the agent that runs on each node in the cluster and ensures that containers are running in a Pod. kubectl is the command-line tool for interacting with the Kubernetes API server. The apt-mark hold command prevents these packages from being updated automatically, which can cause compatibility issues.

- Verify Installation:
  Check if the Kubernetes components are installed correctly by running the following commands:

  ```bash
  kubeadm version
  kubelet --version
  kubectl version --client
  ```

  These commands should display the version information for each component.
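A note on the repository used above: the Google-hosted apt.kubernetes.io / packages.cloud.google.com repositories have been deprecated and frozen in favor of the community-owned pkgs.k8s.io repositories, so the legacy commands may fail on a fresh install. Here is a rough sketch of the newer approach; the v1.30 version segment and keyring path below are assumptions you should adjust to the release you actually want:

```bash
# Add the community-owned Kubernetes apt repository (pkgs.k8s.io).
# v1.30 below is an example minor version; substitute the one you need.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```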
With these steps completed, you've successfully installed the necessary Kubernetes components on your Ubuntu 20.04 servers. Next, we'll initialize the Kubernetes cluster using kubeadm.
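Before initializing the cluster, a couple of node-level settings are typically required on every server (master and workers alike): the kubelet refuses to start while swap is enabled, and bridged Pod traffic must be visible to iptables. A minimal sketch, assuming a standard /etc/fstab swap entry:

```bash
# Disable swap now and keep it disabled across reboots (required by the kubelet by default).
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # adjust if your fstab formats the swap line differently

# Load br_netfilter and enable IP forwarding plus bridged-traffic filtering.
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```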
Step 3: Initializing the Kubernetes Cluster
Initializing the Kubernetes cluster is a critical step. You'll use kubeadm on the master node to bootstrap the cluster. Follow these steps to get your cluster up and running.
- Initialize the Kubernetes Master Node:
  On your master node, run the following command to initialize the Kubernetes cluster:

  ```bash
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  ```

  --pod-network-cidr specifies the range of IP addresses for Pods. 10.244.0.0/16 is a commonly used Pod CIDR (it is the default for Flannel); Calico, the network policy and networking solution we install below, defaults to 192.168.0.0/16, so if you keep 10.244.0.0/16 you may need to set CALICO_IPV4POOL_CIDR in the Calico manifest to match. Whatever you choose, make sure the range does not overlap your servers’ network.

  This command will generate a kubeadm join command that you’ll need to run on the worker nodes to join them to the cluster. Make sure to copy this command and keep it in a safe place.

- Configure kubectl:
  After the initialization is complete, you need to configure kubectl to connect to the cluster. Run the following commands:

  ```bash
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ```

  These commands copy the Kubernetes configuration file to your user’s .kube directory and set the correct permissions.

- Install a Pod Network Addon:
  Kubernetes requires a Pod network addon to enable communication between Pods. We'll use Calico in this guide. Install it by running:

  ```bash
  kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  ```

  This command applies the Calico manifest, which sets up the necessary networking components for your cluster.

- Verify the Master Node Status:
  Check the status of the nodes to ensure the master node is ready (see the note after this list for watching the system Pods start):

  ```bash
  kubectl get nodes
  ```

  The master node should be in the Ready state. It will report NotReady until the Pod network addon is up and running.
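If the master node stays NotReady for a few minutes, it usually just means the Calico and CoreDNS Pods are still starting. You can watch the system Pods come up with:

```bash
# Watch the control-plane and networking Pods until they all reach Running.
kubectl get pods -n kube-system -w
```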
By following these steps, you’ve successfully initialized the Kubernetes cluster on your master node. Now, let’s move on to joining the worker nodes to the cluster.
Step 4: Joining Worker Nodes to the Cluster
Joining worker nodes is how you expand your cluster's capacity. On each worker node, run the kubeadm join command that was generated during the kubeadm init step on the master node. This command connects the worker nodes to the master node, allowing them to run containerized applications.
- Run the kubeadm join Command:
  On each worker node, run the kubeadm join command (see the note after this list if you no longer have it):

  ```bash
  sudo kubeadm join <master-node-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  ```

  Replace <master-node-ip>, <port>, <token>, and <hash> with the values provided in the output of the kubeadm init command on the master node.

- Verify Worker Node Status:
  Back on the master node, check the status of the nodes to ensure the worker nodes have joined the cluster:

  ```bash
  kubectl get nodes
  ```

  The worker nodes should be listed and in the Ready state.
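If you didn’t save the join command, or the bootstrap token has expired (tokens are valid for 24 hours by default), you can generate a fresh one on the master node:

```bash
# Prints a complete "kubeadm join ..." command with a new token and the CA certificate hash.
kubeadm token create --print-join-command
```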
With these steps, you’ve successfully joined the worker nodes to the Kubernetes cluster. Your cluster is now ready to deploy and manage containerized applications.
Step 5: Deploying a Sample Application
Now that your Kubernetes cluster is up and running, let’s deploy a sample application to see everything in action. We'll deploy a simple Nginx deployment and service.
- Create a Deployment:
  Create a YAML file named nginx-deployment.yaml with the following content:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    labels:
      app: nginx
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          ports:
          - containerPort: 80
  ```

  This deployment creates two replicas of the Nginx container.

- Apply the Deployment:
  Apply the deployment to your Kubernetes cluster:

  ```bash
  kubectl apply -f nginx-deployment.yaml
  ```

- Create a Service:
  Create a YAML file named nginx-service.yaml with the following content:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-service
  spec:
    selector:
      app: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80
    type: LoadBalancer
  ```

  This service exposes the Nginx deployment using a LoadBalancer.

- Apply the Service:
  Apply the service to your Kubernetes cluster:

  ```bash
  kubectl apply -f nginx-service.yaml
  ```

- Verify the Deployment and Service:
  Check the status of the deployment and service:

  ```bash
  kubectl get deployments
  kubectl get services
  ```

  You should see the Nginx deployment and service listed. The service may take a few minutes to get an external IP address.

- Access the Application:
  Once the service has an external IP address, you can access the Nginx application in your web browser using that IP address (see the note after this list if the external IP stays in the Pending state).
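One caveat: on a bare-metal cluster like this one there is no cloud provider to hand out load-balancer IPs, so the service’s EXTERNAL-IP will often stay pending unless you install something like MetalLB. Two quick workarounds, sketched below (the local port 8080 is just an arbitrary choice):

```bash
# Option 1: forward a local port to the service and browse http://localhost:8080
kubectl port-forward service/nginx-service 8080:80

# Option 2: switch the service to NodePort and browse http://<any-node-ip>:<assigned-port>
kubectl patch service nginx-service -p '{"spec": {"type": "NodePort"}}'
kubectl get service nginx-service   # the PORT(S) column shows the assigned NodePort
```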
By following these steps, you’ve successfully deployed a sample application to your Kubernetes cluster. Congratulations!
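As a quick next experiment, you can scale the deployment and watch Kubernetes reconcile the number of Pods; the replica count of 4 below is just an example:

```bash
# Scale the sample deployment to 4 replicas, then list its Pods.
kubectl scale deployment nginx-deployment --replicas=4
kubectl get pods -l app=nginx
```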
Conclusion
Alright, you made it! Installing a Kubernetes cluster on Ubuntu 20.04 might seem daunting at first, but breaking it down into manageable steps makes it totally achievable. You've successfully installed Docker, set up the Kubernetes components, initialized the cluster, joined worker nodes, and even deployed a sample application. Give yourself a pat on the back!
This guide provides a solid foundation for working with Kubernetes. From here, you can explore more advanced topics such as mastering Deployments and rolling updates, scaling applications, managing Services and Ingress, and working with persistent storage. So keep exploring, keep learning, and most importantly, keep experimenting with your new Kubernetes cluster! Who knows? Maybe you’ll be the next Kubernetes guru! You got this!