Meet K3s — A Lightweight Kubernetes Distribution for You

Apr 5, 2019 · 10 min read

To implement a microservices architecture and a multi-cloud strategy, Kubernetes has become a key enabling technology. The bedrock of Kubernetes remains the orchestration and management of Linux containers, creating a powerful distributed system for deploying applications across hybrid cloud environments. That said, Kubernetes has become the de facto standard container orchestration framework for cloud-native deployments. Development teams have turned to Kubernetes to support their migration to microservices architectures and a DevOps culture of continuous integration and continuous deployment.

Why Docker & Kubernetes on IoT devices?

Today many organizations are going through a digital transformation process. Digital transformation is the integration of digital technology into almost all areas of a business, fundamentally changing how you operate and deliver value to customers. It is, at its core, a cultural change. The common goal for all these organizations is to change how they connect with their customers, suppliers and partners. These organizations are taking advantage of innovations offered by technologies such as IoT platforms, big data analytics and machine learning to modernize their enterprise IT and OT systems. They realize that the complexity of developing and deploying new digital products requires new development processes. Consequently, they turn to agile development and infrastructure tools such as Kubernetes.

At the same time, there has been a major increase in demand for Kubernetes outside the data center. Kubernetes is pushing out of the data center into stores and factories. DevOps teams find Kubernetes interesting because it provides predictable operations and a cloud-like provisioning experience on just about any infrastructure.

Docker containers and Kubernetes are an excellent choice for deploying complex software to the edge.

Introducing K3s — A Stripped Down version of Kubernetes

K3s is a new distribution of Kubernetes designed for teams that need to deploy applications quickly and reliably in resource-constrained environments. It is a Certified Kubernetes distribution built for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

K3s is a lightweight, certified Kubernetes distribution built for production operations. It ships as a single binary of around 40 MB and runs in as little as 512 MB of memory. It runs as a single process with the Kubernetes master, kubelet and containerd integrated. It uses SQLite as the default storage backend, with etcd still available as an option. It is released simultaneously for x86_64, ARM64 and ARMv7. K3s is an open-source project, not (yet) a Rancher product. It is wrapped in a simple package that reduces the dependencies and steps needed to run a production Kubernetes cluster. Packaged as a single binary, k3s makes installation and upgrade as simple as copying a file. TLS certificates are generated automatically to ensure that all communication is secure by default.

k3s bundles the Kubernetes components (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy) into combined processes that are presented as a simple server and agent model. Running k3s server starts the Kubernetes server and automatically registers the local host as an agent, creating a single-node Kubernetes cluster. To add more nodes to the cluster, just run k3s agent --server ${URL} --token ${TOKEN} on another host and it will join the cluster. It's really that simple to set up a Kubernetes cluster with k3s.
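The server/agent flow described above can be sketched as follows. This is a minimal sketch, not a production setup: the server IP 192.168.1.5 and the TOKEN variable are placeholders you would substitute with your own values.

```shell
# On the first node: start the K3s server (control plane and local
# agent in one process). In practice you would run this under a
# service manager rather than in the foreground.
sudo k3s server &

# Once the server is up, the join token is written to disk:
sudo cat /var/lib/rancher/k3s/server/node-token

# On any other node: join the cluster using the server URL and token.
# 192.168.1.5 is a placeholder for your server's IP address.
TOKEN="<paste the node-token value here>"
sudo k3s agent --server https://192.168.1.5:6443 --token "${TOKEN}"
```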

Minimum System Requirements

  • Linux 3.10+
  • 512 MB of RAM per server
  • 75 MB of RAM per node
  • 200 MB of disk space
  • x86_64, ARMv7, ARM64
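A quick pre-flight check against the minimums listed above can be done with standard tools (this assumes a Linux host with procps installed, which Raspbian has by default):

```shell
# Kernel version: should be 3.10 or newer.
uname -r

# Total memory in MB: at least 512 for a server, 75 for a node.
free -m | awk '/Mem:/ {print $2 " MB RAM"}'

# Free disk space on / in MB: at least 200.
df -m / | awk 'NR==2 {print $4 " MB free"}'

# Architecture: x86_64, armv7l or aarch64 are supported.
uname -m
```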

In this blog post, I will showcase how to get started with K3s on a 2-node Raspberry Pi cluster.

Hardware:

  1. Raspberry Pi 3 (you can order it from Amazon for 2,590 INR if you are in India)
  2. Micro-SD card reader (I got it from here)
  3. A Windows, Linux or macOS laptop
  4. HDMI cable (I used the HDMI cable of my plasma TV)
  5. Internet connectivity (WiFi/broadband/tethering via mobile) — to download the Docker 18.09.0 package
  6. Keyboard and mouse connected to the Pi's USB ports

Software:

  1. SD-Formatter — to format the microSD card (in case of a Windows laptop)
  2. Win32DiskImager (in case you have Windows running on your laptop) — to burn the Raspbian image directly onto the microSD card (no need to extract the XZ archive with any tool). You can use the Etcher tool if you are on a MacBook.

Steps to Flash Raspbian OS on Pi Boxes:

1. Download Raspbian OS from here and use Win32DiskImager (in case you are on Windows) to burn it onto the microSD card.

2. Insert the microSD card into your Pi box. Now connect the HDMI cable from the Pi's HDMI slot to your TV or display unit, and plug in a mobile charger (5.1V@1.5A recommended).

3. Let the Raspbian OS boot up on your Pi box. It hardly takes 2 minutes for the OS to come up.

4. Configure WiFi via the GUI. All you need is to input the right password for your WiFi.

5. The default username is "pi" and the password is "raspberry". You can use these credentials to log into the Pi system.

6. You can use the "FindPI" Android application to discover the Pi's IP address if you don't want to hook up a keyboard and mouse just to find it.

Enable SSH to perform remote login

To log in from your laptop, you need the SSH service running on the Pi. You can find the Pi's IP address via the ifconfig command.

[Captains-Bay]? > ssh pi@192.168.1.5
pi@192.168.1.5's password:
Linux raspberrypi 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Feb 26 12:30:00 2019 from 192.168.1.4
pi@raspberrypi:~ $ sudo su
root@raspberrypi:/home/pi# cd

Verifying Raspbian OS Version

root@raspberrypi:~# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
root@raspberrypi:~#

Enable container features in Kernel

Edit /boot/cmdline.txt on both the Raspberry Pi nodes and add the following to the end of the line:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

Reboot the devices.
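That edit can be scripted as below. This is a sketch: /boot/cmdline.txt must remain a single line, and the loop only appends a flag if it is not already present, so the script is safe to re-run on each node.

```shell
# Append the cgroup flags to the end of the kernel command line,
# skipping any flag that is already there (idempotent).
CMDLINE=/boot/cmdline.txt
for flag in cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory; do
    grep -q "$flag" "$CMDLINE" || sudo sed -i "s/\$/ $flag/" "$CMDLINE"
done
sudo reboot
```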

Installing K3s

root@raspberrypi:~# curl -sfL https://get.k3s.io | sh -
[INFO] Finding latest release
[INFO] Using v0.2.0 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.2.0/sha256sum-arm.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.2.0/k3s-armhf
^C
root@raspberrypi:~# wget https://github.com/rancher/k3s/releases/download/v0.2.0/k3s-armhf && \
> chmod +x k3s-armhf && \
> sudo mv k3s-armhf /usr/local/bin/k3s
--2019-03-28 22:47:22--  https://github.com/rancher/k3s/releases/download/v0.2.0/k3s-armhf
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/... [following]
--2019-03-28 22:47:25--  https://github-production-release-asset-2e65be.s3.amazonaws.com/...
Resolving github-production-release-asset-2e65be.s3.amazonaws.com... 52.216.0.56
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com|52.216.0.56|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34684224 (33M) [application/octet-stream]
Saving to: 'k3s-armhf'

k3s-armhf 100%[========================>]  33.08M  93.1KB/s    in 8m 1s

2019-03-28 22:55:28 (70.4 KB/s) - 'k3s-armhf' saved [34684224/34684224]

Bootstrapping Your K3s Server

root@raspberrypi:~# sudo k3s server
INFO[2019-03-29T10:52:06.995054811+05:30] Starting k3s v0.2.0 (2771ae1)
INFO[2019-03-29T10:52:07.082595332+05:30] Running kube-apiserver --watch-cache=false --cert-dir /var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
INFO[2019-03-29T10:52:08.094785384+05:30] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 10251 --address 127.0.0.1 --secure-port 0 --leader-elect=false
INFO[2019-03-29T10:52:08.105366477+05:30] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 10252 --address 127.0.0.1 --secure-port 0 --leader-elect=false
Flag --address has been deprecated, see --bind-address instead.
INFO[2019-03-29T10:52:10.410557414+05:30] Listening on :6443
INFO[2019-03-29T10:52:10.519075956+05:30] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-03-29T10:52:10.519226216+05:30] To join node to cluster: k3s agent -s https://192.168.43.134:6443 -t ${NODE_TOKEN}
INFO[2019-03-29T10:52:10.543022102+05:30] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-03-29T10:52:10.548766216+05:30] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml

I encountered the error message below while running the K3s server for the first time.

INFO[2019-04-04T15:52:44.710450122+05:30] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused" containerd: exit status 1

You can fix it by editing /etc/hosts and adding the right entry for your Pi boxes:

127.0.0.1 raspberrypi-node3
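For example, the entry can be added idempotently like this. A sketch under one assumption: the hostname in the error (raspberrypi-node3 above) matches what hostname reports on that node.

```shell
# Map the device's own hostname to the loopback address so the k3s
# components can resolve it. Skips the append if already present,
# so the command is safe to re-run.
ENTRY="127.0.0.1 $(hostname)"
grep -qxF "$ENTRY" /etc/hosts || echo "$ENTRY" | sudo tee -a /etc/hosts
```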

By now, you should be able to see the k3s node listed.

root@raspberrypi:~# sudo k3s kubectl get node -o wide
NAME          STATUS   ROLES    AGE     VERSION         INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
raspberrypi   Ready    <none>   2m13s   v1.13.4-k3s.1   192.168.43.134   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.98-v7+      containerd://1.2.4+unknown
root@raspberrypi:~# k3s kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
raspberrypi   Ready    <none>   2m26s   v1.13.4-k3s.1

Listing K3s Pods

root@raspberrypi:~# k3s kubectl get po
No resources found.
root@raspberrypi:~# k3s kubectl get po,svc,deploy
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   11h
root@raspberrypi:~#

containerd and Docker

k3s uses containerd by default. If you want to use Docker instead, all you need to do is run the agent with the --docker flag:

k3s agent -s ${SERVER_URL} -t ${NODE_TOKEN} --docker &

Running Nginx Pods

To launch a pod using the nginx container image and exposing an HTTP API on port 80, execute:

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

Listing the Nginx Pods

You can now see that the pods are running:

root@raspberrypi:~# k3s kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-84b8d48d44-ggpcp   1/1     Running   0          119s
mynginx-84b8d48d44-hkdg8   1/1     Running   0          119s
mynginx-84b8d48d44-n4r6q   1/1     Running   0          119s

Exposing the Deployment

Create a Service object that exposes the deployment:

root@raspberrypi:~# k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed
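Once the Service exists, you can also reach the Deployment through the Service's cluster IP instead of an individual pod IP. A sketch of that check, assuming the Service is named mynginx as created above:

```shell
# Look up the ClusterIP that Kubernetes assigned to the mynginx
# Service, then curl it; requests are load-balanced across the pods.
SVC_IP=$(k3s kubectl get svc mynginx -o jsonpath='{.spec.clusterIP}')
curl "http://${SVC_IP}:80"
```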

Verifying the endpoints controller for Pods

The command below verifies that the endpoints controller has found the correct Pods for your Service:

root@raspberrypi:~# k3s kubectl get endpoints mynginx
NAME      ENDPOINTS                                    AGE
mynginx   10.42.0.10:80,10.42.0.11:80,10.42.0.12:80   17s

Testing if the Nginx application is up and running:

root@raspberrypi:~# curl 10.42.0.10:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Adding a new Node to K3s Cluster

As mentioned earlier, adding a node is just a matter of running k3s agent --server ${URL} --token ${TOKEN} on another host.

To test drive K3s on a multi-node cluster, you first need to copy the token, which is stored at the location below:

root@raspberrypi:~# cat /var/lib/rancher/k3s/server/node-token
K108b8e370b380bea959e8017abea3e540d1113f55df2c3f303ae771dc73fc67aa3::node:42e3dfc68ee27cf7cbdae5e4c8ac91b2
root@raspberrypi:~#

Create a variable NODETOKEN holding the token and pass it to the k3s agent command as shown below:

root@pi-node1:~# NODETOKEN=K108b8e370b380bea959e8017abea3e540d1113f55df2c3f303ae771dc73fc67aa3::node:42e3dfc68ee27cf7cbdae5e4c8ac91b2
root@pi-node1:~# k3s agent --server https://192.168.1.5:6443 --token ${NODETOKEN}
INFO[2019-04-04T23:09:16.804457435+05:30] Starting k3s agent v0.3.0 (9a1a1ec)
INFO[2019-04-04T23:09:19.563259194+05:30] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-04-04T23:09:19.563629400+05:30] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-04-04T23:09:19.613809334+05:30] Connecting to wss://192.168.1.5:6443/v1-k3s/connect
INFO[2019-04-04T23:09:19.614108395+05:30] Connecting to proxy url="wss://192.168.1.5:6443/v1-k3s/connect"
FATA[2019-04-04T23:09:19.907450499+05:30] Failed to start tls listener: listen tcp 127.0.0.1:6445: bind: address already in use
root@pi-node1:~# pkill -9 k3s
root@pi-node1:~# k3s agent --server https://192.168.1.5:6443 --token ${NODETOKEN}
INFO[2019-04-04T23:09:45.843235117+05:30] Starting k3s agent v0.3.0 (9a1a1ec)
INFO[2019-04-04T23:09:48.272160155+05:30] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-04-04T23:09:48.272542392+05:30] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-04-04T23:09:49.321863688+05:30] Waiting for containerd startup: rpc error: code = Unknown desc = server is not initialized yet
INFO[2019-04-04T23:09:50.347628159+05:30] Connecting to wss://192.168.1.5:6443/v1-k3s/connect

Listing the k3s Nodes

root@raspberrypi:~# k3s kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
pi-node1   Ready    <none>   118s   v1.13.5-k3s.1
pi-node2   Ready    <none>   108m   v1.13.5-k3s.1

Setting up Nginx

As shown earlier, we will go ahead and run the Nginx application on top of the K3s cluster nodes:

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

Exposing the Deployment

root@raspberrypi:~# k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed

Test driving Kubernetes Dashboard

root@node1:/home/pi# k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
root@node1:/home/pi#

We need kubectl proxy to make the dashboard accessible from a web browser. This command creates a proxy server, or application-level gateway, between localhost and the Kubernetes API server. It also allows serving static content over a specified HTTP path. All incoming data enters through one port and gets forwarded to the remote Kubernetes API server port, except for requests matching the static content path.

root@node1:/home/pi# k3s kubectl proxy
Starting to serve on 127.0.0.1:8001

By now, you should be able to access the Dashboard via port 8001 in a browser on your Raspberry Pi system.
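The proxied URL typically follows the API server's service-proxy path scheme; the exact namespace and service name below are assumptions based on what the dashboard manifest deploys, so adjust them if your deployment differs:

```shell
# The Kubernetes API server proxies Services at:
#   /api/v1/namespaces/<namespace>/services/<scheme>:<service>:<port>/proxy/
# For the dashboard deployed above, that would be something like:
curl -s http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```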

Cleaning up

kubectl delete --all pods
pod "mynginx-84b8d48d44-9ghrl" deleted
pod "mynginx-84b8d48d44-bczsv" deleted
pod "mynginx-84b8d48d44-qqk9p" deleted

I hope you find this blog useful. In a future blog post, I will talk about K3s internals in detail.

Originally published at collabnix.com on April 5, 2019.
