Why do you need Kind?
Say you have all your manifests ready and want to deploy them quickly before moving them to production, and you don't want to use a cloud provider to get your cluster up and running. What are the options? Well, you can create a cluster locally and deploy your application in under a minute. The same holds true for beginners who are hesitant to use cloud providers after hearing scary stories about expensive invoices. In short, kind lets you simulate a production-like cluster on your own machine.
What is Kind?
Kind is a tool built by the Kubernetes community to deploy a local cluster that is speedy and lightweight. It uses Docker containers as its nodes, based on the kindest/node image. Since you don't have to spin up a VM, cluster creation is significantly faster.
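If you want to try it out, installing kind and creating a cluster takes only a couple of commands. A minimal sketch (the version below is just an example; check the releases page for the latest):

# Install kind (example version; any recent release works)
go install sigs.k8s.io/kind@v0.20.0   # or: brew install kind

# Create a default single-node cluster and confirm the node is just a Docker container
kind create cluster
docker ps --filter "name=kind-control-plane"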
What about Minikube ? Is it still relevant?
Absolutely. Although Minikube traditionally runs on top of a hypervisor (newer versions support the Docker driver too) and is slower than kind because it runs on VMs, you get VM isolation, which is arguably more secure per se, not to mention it gives you a choice of hypervisors such as VirtualBox, HyperKit, Parallels, etc.
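For instance, assuming Minikube is already installed, you can pick the driver at start time:

minikube start --driver=virtualbox   # VM-backed: stronger isolation, slower startup
minikube start --driver=docker       # container-backed: closer to how kind works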
Notable Features of Kind
- Since the nodes are Docker containers, we can deploy more than one control-plane node and multiple worker nodes.
- Clusters come up at lightning speed, since the node image is cached after the first time you create a cluster.
- Ability to load your local images directly into the cluster. This saves you the extra step of pushing your image to a container registry. You can do that by running (see the example after this list):
kind load docker-image my-app:latest
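For example, assuming a locally built image tagged my-app:latest and a cluster named hello-world (both placeholder names), the workflow looks like this:

# Build the image locally, then copy it onto the kind node(s)
docker build -t my-app:latest .
kind load docker-image my-app:latest --name hello-world

In your pod spec, set imagePullPolicy: IfNotPresent (or use a tag other than :latest) so Kubernetes uses the preloaded image instead of trying to pull it from a registry.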
No Load Balancer
Because kind runs its nodes as Docker containers, it doesn't come with a built-in load balancer the way Minikube does. There is a workaround, though: you can use MetalLB to provide load balancer functionality.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
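MetalLB v0.13+ also needs an address pool to hand out IPs from. A minimal sketch, where the address range is an assumption: pick free addresses from your kind Docker network (you can find its subnet with docker network inspect kind):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2advertisement
  namespace: metallb-system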
Hello World App
We will now deploy a simple hello world application on a kind cluster. I will be using an ingress controller instead of a load balancer. You can learn more about the difference between an Ingress and a load balancer here.
Step 1: Create a Kind Cluster
For this to work, we need to create a custom cluster with:
- extraPortMappings: allows requests from localhost to reach the cluster on ports 80 and 443
- node-labels: sets the label ingress-ready=true on the node so the ingress controller is scheduled onto it
Now create a config file which will be passed to kind when creating the cluster.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
Then pass this config file to kind create cluster to create a cluster with the above-defined config:
kind create cluster --name=hello-world --config=kind.yml
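Once it's up, kind adds a kubectl context named kind-&lt;cluster-name&gt;, so you can verify the cluster right away:

kubectl cluster-info --context kind-hello-world
kubectl get nodes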
Step 2: Use the Nginx Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
This manifest contains kind-specific patches to forward the hostPorts to the ingress controller.
Wait until the NGINX ingress controller pod is running:
kubectl get pods --namespace ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-l2rbj 0/1 Completed 0 27m
ingress-nginx-admission-patch-rdlrl 0/1 Completed 0 27m
ingress-nginx-controller-6bccc5966-grkxp 1/1 Running 0 27m
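Instead of polling manually, you can also block until the controller reports ready (the label selector assumes the stock ingress-nginx labels):

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s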
Step 3: Hello world app
For this demo, we will use hashicorp/http-echo, a simple web server that echoes back the text it is given as an argument.
Pod
kind: Pod
apiVersion: v1
metadata:
  name: hello-world-app
  labels:
    app: hello-world
spec:
  containers:
  - name: hello-world-app
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=Hello World! Kubernetes with kind App"
Service
kind: Service
apiVersion: v1
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
  - port: 5678
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/home"
        backend:
          service:
            name: hello-world-service
            port:
              number: 5678
Step 4: Deploying
You can wire all of the components above into a single file, separating each manifest with ---, and save it as hello.yml.
First, we define a pod with a single container and the image it should run.
Then we define a service that acts as an intermediary between the pod and the Ingress. The service exposes the pod's default port, 5678.
Finally, we define the Ingress, an entry point that sits in front of one or more services in the cluster, to route requests from localhost to the pod.
kubectl apply -f hello.yml
Check if the pods are up and running. The resources are created in the default namespace.
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/hello-world-app 1/1 Running 0 67m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-world-service ClusterIP 10.96.36.194 <none> 5678/TCP 67m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 70m
Looking good. Last but not least, the output:
curl localhost/home
Hello World! Kubernetes with kind App
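If the curl doesn't respond as expected, a quick sanity check is to confirm the Ingress was created and routes /home to the service:

kubectl get ingress example-ingress
kubectl describe ingress example-ingress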
Thanks for the Kindness. Now I want to get rid of my cluster.
You have been waiting for this, haven't you? Jokes aside, to delete your cluster:
kind delete cluster --name=hello-world