Docker Desktop is the easiest way to run Kubernetes on your local machine - it gives you a fully certified Kubernetes cluster and manages all the components for you.
In this tutorial you’ll learn how to set up Kubernetes on Docker Desktop and run a simple demo app. You’ll gain hands-on experience working with Kubernetes and see how its app definition syntax compares to Docker Compose.
1. Install Docker Desktop
2. Enable Kubernetes
Kubernetes itself runs in containers. When you deploy a Kubernetes cluster you first install Docker (or another container runtime like containerd) and then use a tool like kubeadm, which starts all the Kubernetes components in containers. Docker Desktop does all that for you.
Make sure Docker Desktop is running - you’ll see Docker’s whale logo in the taskbar on Windows or the menu bar on the Mac. Click the whale and select Settings:
3. The Docker Desktop menu
A new screen opens with all of Docker Desktop’s configuration options. Click on Kubernetes and check the Enable Kubernetes checkbox:
4. Enabling Kubernetes in Docker Desktop
That’s it! Docker Desktop will download all the Kubernetes images in the background and get everything started up. When it’s ready you’ll see two green lights at the bottom of the settings screen saying Docker running and Kubernetes running.
The star in the screenshot shows the Reset Kubernetes Cluster button, which is one of the reasons why Docker Desktop is the best of the local Kubernetes options. Click that and it will reset your cluster back to a fresh install of Kubernetes.
5. Verify your Kubernetes cluster
If you’ve worked with Docker before, you’re used to managing containers with the docker and docker-compose command-line tools. Kubernetes uses a different tool called kubectl to manage apps - Docker Desktop installs kubectl for you too.
6. Using Kubernetes via AI
- Docker Desktop
- Install Kubectl-ai
```
brew tap sozercan/kubectl-ai https://github.com/sozercan/kubectl-ai
brew install kubectl-ai
```
- Get OpenAI Keys via platform.openai.com/account/api-keys
kubectl-ai requires an OpenAI API key or an Azure OpenAI Service API key and endpoint, and a valid Kubernetes configuration.
```
export OPENAI_API_KEY=<your OpenAI key>
```
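A quick way to confirm the key is actually exported before running kubectl-ai is to check the environment variable. The sketch below uses a placeholder value, not a real key:

```shell
# Placeholder value for illustration only - substitute your real key.
export OPENAI_API_KEY="sk-example-not-a-real-key"

# Confirm the variable is set and will be visible to child processes
# such as kubectl-ai.
if [ -n "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing" >&2
fi
```

If the variable is missing, kubectl-ai will fail to authenticate, so this check saves a round trip.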
Setting up Kubeview
Assuming you have already installed Git and Helm on your laptop, follow the steps below:
```
git clone https://github.com/benc-uk/kubeview
cd kubeview/charts/
helm install kubeview kubeview
```
Testing it locally
```
kubectl port-forward svc/kubeview -n default 80:80
```
Deploying a Pod in a namespace
```
kubectl ai "Create a namespace called ns1 and deploy a Nginx Pod"
✨ Attempting to apply the following manifest:
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
    Reprompt
  ▸ Apply
    Don't Apply
```
The manifest above creates the ns1 namespace and a single-replica Nginx Deployment inside it, exposing port 80. Choosing Apply is equivalent to saving the manifest to a file and running kubectl apply -f against it yourself.
Difference between "Create" and "Deploy" [Be Careful]
```
kubectl ai "Create a namespace called ns1 and create a Nginx Pod"
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: ns1
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
✔ Apply
```
Accessing the Nginx Pod via Web Browser
```
kubectl port-forward nginx 8000:80 -n ns1
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
Handling connection for 8000
Handling connection for 8000
```
If I can access a Pod directly via a web browser, why do I need a Deployment?
While it is possible to access a pod directly via a web browser, using a deployment provides several benefits and is generally recommended in a production environment. Deployments help manage the lifecycle of your application and provide features such as scalability, rolling updates, and fault tolerance. Here are a few reasons why using a deployment is beneficial:
Replication and Scalability: Deployments allow you to specify the desired number of replicas for your application. This means that multiple identical pods will be created, providing scalability and load balancing. If one pod fails or becomes unavailable, the deployment will automatically create a new replica to ensure that the desired number of pods is maintained.
Rolling Updates: Deployments support rolling updates, which allow you to update your application without incurring downtime. You can update the pod template in the deployment specification, and the deployment controller will manage the update process by gradually replacing old pods with new ones. This ensures a smooth transition and minimizes any impact on your application's availability.
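As a sketch, the rolling-update behaviour can be tuned in the Deployment spec. The strategy values below are illustrative assumptions, not taken from the article's examples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # bumping this tag triggers a rolling update
        ports:
        - containerPort: 80
```

With maxUnavailable set to 0, the controller only removes an old pod once its replacement is ready, so capacity never dips during the rollout.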
Versioning and Rollbacks: Deployments enable you to manage different versions of your application. If an update introduces issues or unexpected behavior, you can easily roll back to a previous version by specifying the desired revision. This helps maintain application stability and allows for quick recovery in case of problems.
Health Checks and Self-Healing: Kubernetes lets you define liveness and readiness probes on the containers in your pod template. If a container fails its liveness probe, the kubelet restarts it, and if a pod is lost entirely, the Deployment's ReplicaSet creates a replacement. This self-healing mechanism ensures the availability and reliability of your application.
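To make that concrete, probes are declared on the container in the pod template. This snippet is a hedged example - the paths and timings are assumptions that suit a stock Nginx image:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
    livenessProbe:          # restart the container if this check keeps failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:         # keep the pod out of Service endpoints until ready
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```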
Declarative Configuration: Deployments use a declarative approach, where you define the desired state of your application, and the deployment controller ensures that the current state matches the desired state. This simplifies management and allows for easier collaboration and reproducibility.
Integration with Other Kubernetes Resources: Deployments integrate well with other Kubernetes resources, such as services, ingress, and horizontal pod autoscalers. This enables you to expose your application externally, manage traffic routing, and automatically scale your application based on resource utilization or custom metrics.
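For example, a HorizontalPodAutoscaler can target the Deployment from the examples above. This manifest is a sketch: the CPU threshold and replica bounds are arbitrary choices, and it assumes the cluster has a metrics server running:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
  namespace: ns1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```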
While it's possible to directly access a pod via a web browser, using a deployment provides a more robust and scalable solution for managing your application in a production environment. It offers benefits such as replication, scalability, rolling updates, versioning, self-healing, and integration with other Kubernetes resources. By leveraging deployments, you can ensure the stability, availability, and efficient management of your application throughout its lifecycle.
Here’s an example of deploying 3 replicas in a specific namespace:
```
kubectl ai "create an nginx deployment with 3 replicas under namespace ns1"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: webpage
        image: ajeetraina/webpage
        ports:
        - containerPort: 80
✔ Apply
```
Reprompting: "Scale to 3 replicas"
```
Reprompt: Scale to 3 replicas
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
```
kubectl ai "create an nginx deployment with 3 replicas under namespace ns1 and this time create service type as NodePort"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: webpage
        image: ajeetraina/webpage
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: ns1
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
    Reprompt
  ▸ Apply
    Don't Apply
```
Listing the Kubernetes Resources
```
kubectl get po,deploy,svc -n ns1
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-58945458f5-5pk6b   1/1     Running   0          28s
pod/nginx-deployment-58945458f5-7htd7   1/1     Running   0          28s
pod/nginx-deployment-58945458f5-s6cxm   1/1     Running   0          28s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           28s

NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/nginx-service   NodePort   10.100.230.251   <none>        80:30080/TCP   28s
```
If you're new to Kubernetes and looking for a hassle-free setup, Docker Desktop is your gateway to a single-node Kubernetes cluster. With Docker Desktop, you can quickly get started and effortlessly install essential tools like KubeView and kubectl-ai, further simplifying your Kubernetes experience.