5 Minutes to Multi-Node Redis Cluster running on Google Cloud Kubernetes Engine using Docker…


If you are looking for the easiest way to create a Redis Cluster on a remote cloud platform like Google Cloud Platform right from your laptop, Docker Desktop is the right solution. Docker Desktop for Windows is an application for your Windows laptop for building and sharing containerized applications and microservices.

By using the “docker context” CLI, which ships with Docker Engine 19.03+, you can easily access a GKE cluster and deploy containerized workloads seamlessly. You can still use your favourite PowerShell interface to bring up Pods, ConfigMaps, Services and Deployments.

In this blog post, we will see how easily one can set up a GKE cluster on Google Cloud Platform using Docker Desktop for Windows, and then bring up a Redis Cluster running on Kubernetes. By the end of this blog, we will simulate a cluster failure and watch a slave node become a master once the master quorum gets disturbed.


First, initialize the gcloud SDK and log in to your Google account:

```
C:\Users\Ajeet_Raina\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.

Your current configuration has been set to: [default]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

You must log in to continue. Would you like to log in (Y/n)? Y

Your browser has been opened to visit: ...

You are logged in as: [dockercaptain1981@gmail.com].

Pick cloud project to use:
 [1] lofty-tea-249310
```
Next, create a 3-node GKE cluster:

```
C:\Users\Ajeet_Raina\AppData\Local\Google\Cloud SDK>gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster k8s-lab1 in asia-east1-a...
Cluster is being health-checked (master is healthy)...done.
Created [https://container.googleapis.com/v1/projects/lofty-tea-249310/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=lofty-tea-249310
kubeconfig entry generated for k8s-lab1.
```
```
NAME      LOCATION      MASTER_VERSION  MASTER_IP  MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.13.11-gke.23             n1-standard-2  1.13.11-gke.23  3          RUNNING

C:\Users\Ajeet_Raina\AppData\Local\Google\Cloud SDK>kubectl get nodes
NAME                                      STATUS   ROLES    AGE    VERSION
gke-k8s-lab1-default-pool-f1fae040-9vd9   Ready    <none>   108s   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-ghf5   Ready    <none>   108s   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-z0rf   Ready    <none>   108s   v1.13.11-gke.23
```

Next, install Chocolatey and Git on your Windows laptop:

```
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install git
```
Verify that kubectl is pointing at the newly created GKE cluster:

```
PS C:\Users\Ajeet_Raina> kubectl config get-contexts
CURRENT   NAME                                         CLUSTER                                      AUTHINFO                                     NAMESPACE
*         gke_lofty-tea-249310_asia-east1-a_k8s-lab1   gke_lofty-tea-249310_asia-east1-a_k8s-lab1   gke_lofty-tea-249310_asia-east1-a_k8s-lab1
PS C:\Users\Ajeet_Raina> kubectl get nodes
NAME                                      STATUS   ROLES    AGE   VERSION
gke-k8s-lab1-default-pool-f1fae040-9vd9   Ready    <none>   64m   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-ghf5   Ready    <none>   64m   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-z0rf   Ready    <none>   64m   v1.13.11-gke.23
```

Cloning this Repo

```
$ git clone https://github.com/collabnix/redisplanet
$ cd redis/kubernetes/gke/
```
```
$ kubectl apply -f redis-statefulset.yaml
configmap/redis-cluster created
statefulset.apps/redis-cluster created
```
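For reference, the StatefulSet applied above looks roughly like the sketch below. The exact manifest lives in the repo; this is a minimal reconstruction based on the `kubectl describe pods` output shown later in this post (image, command, ports, env and mounts), so any field not visible there is an assumption:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.1-alpine
        ports:
        - containerPort: 6379    # client port
        - containerPort: 16379   # cluster gossip port
        # The wrapper script patches the Pod's current IP into the node
        # config before starting redis-server, so the cluster member
        # survives Pod rescheduling and the IP change that comes with it.
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
        - name: data
          mountPath: /data
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

The `volumeClaimTemplates` section is what produces the `data-redis-cluster-0` … `data-redis-cluster-5` PVCs you will see in a moment.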

```
$ kubectl apply -f redis-svc.yaml
service/redis-cluster created
```
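The Service manifest is kept in the repo as well; a minimal sketch consistent with the apply above and with the `kubectl get svc` output later in this post (a ClusterIP Service exposing the client and gossip ports, selecting the `app=redis-cluster` Pods) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP
  selector:
    app: redis-cluster
  ports:
  - name: client
    port: 6379
    targetPort: 6379
  - name: gossip
    port: 16379
    targetPort: 16379
```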
```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get po
NAME              READY   STATUS              RESTARTS   AGE
redis-cluster-0   1/1     Running             0          92s
redis-cluster-1   1/1     Running             0          64s
redis-cluster-2   1/1     Running             0          44s
redis-cluster-3   1/1     Running             0          25s
redis-cluster-4   0/1     ContainerCreating   0          12s
```

```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-redis-cluster-0   Bound    pvc-34bdf05b-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       2m15s
data-redis-cluster-1   Bound    pvc-4564abb9-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       107s
data-redis-cluster-2   Bound    pvc-51566907-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       87s
data-redis-cluster-3   Bound    pvc-5c8391a0-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       68s
data-redis-cluster-4   Bound    pvc-64a340d3-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       55s
data-redis-cluster-5   Bound    pvc-71024053-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       34s
```
```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl describe pods redis-cluster-0
Name:               redis-cluster-0
Namespace:          default
Priority:           0
Node:               gke-k8s-lab1-default-pool-f1fae040-9vd9/
Start Time:         Sun, 09 Feb 2020 09:41:14 +0530
Labels:             app=redis-cluster
                    controller-revision-hash=redis-cluster-fd959c7f4
                    statefulset.kubernetes.io/pod-name=redis-cluster-0
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container redis
Status:             Running
IP:
Controlled By:      StatefulSet/redis-cluster
Containers:
  redis:
    Container ID:  docker://6c8c32c785afabff22323cf77103ae3df29a29580863cdfe8c46db12883d87eb
    Image:         redis:5.0.1-alpine
    Image ID:      docker-pullable://redis@sha256:6f1cbe37b4b486fb28e2b787de03a944a47004b7b379d0f8985760350640380b
    Ports:         6379/TCP, 16379/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /conf/update-node.sh
      redis-server
      /conf/redis.conf
    State:          Running
      Started:      Sun, 09 Feb 2020 09:41:38 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /conf from conf (rw)
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m9xql (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-redis-cluster-0
    ReadOnly:   false
  conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-cluster
    Optional:  false
  default-token-m9xql:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-m9xql
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                    From                     Message
  ----     ------                  ----                   ----                     -------
  Warning  FailedScheduling        4m13s (x3 over 4m16s)  default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               4m13s                  default-scheduler        Successfully assigned default/redis-cluster-0 to gke-k8s-lab1-default-pool-f1fae040-9vd9
  Normal   SuccessfulAttachVolume  4m8s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-34bdf05b-4af2-11ea-9222-42010a8c00e8"
  Normal   Pulling                 3m55s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  pulling image "redis:5.0.1-alpine"
  Normal   Pulled                  3m49s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  Successfully pulled image "redis:5.0.1-alpine"
  Normal   Created                 3m49s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  Created container
  Normal   Started                 3m49s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  Started container
```
```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
kubernetes      ClusterIP                <none>        443/TCP              28m
redis-cluster   ClusterIP                <none>        6379/TCP,16379/TCP   5s
```
Next, list the Pod IPs of the cluster members; these get passed to `redis-cli --cluster create` in the next step:

```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}'
```
Now bootstrap the Redis Cluster, passing the `<ip>:6379` list gathered above to `redis-cli --cluster create` (the IPs are elided in the transcript below):

```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica to
Adding replica to
Adding replica to
M: 8a78d53307bdde11f6e53a9c1e90b1a9949463f1
   slots:[0-5460] (5461 slots) master
M: bf11440a398e88ad7bfc167dd3219a4f594ffa39
   slots:[5461-10922] (5462 slots) master
M: c82e231121118c731194d31ddc20d848953174e7
   slots:[10923-16383] (5461 slots) master
S: 707bb247a2ecc3fd36feb3c90cc58ff9194b5166
   replicates 8a78d53307bdde11f6e53a9c1e90b1a9949463f1
S: 63abc45d61a9d9113db0c57f7fe0596da4c83a6e
   replicates bf11440a398e88ad7bfc167dd3219a4f594ffa39
S: 10c2bc0cc626725b5a1afdc5e68142610e498fd7
   replicates c82e231121118c731194d31ddc20d848953174e7
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node
M: 8a78d53307bdde11f6e53a9c1e90b1a9949463f1
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 63abc45d61a9d9113db0c57f7fe0596da4c83a6e
   slots: (0 slots) slave
   replicates bf11440a398e88ad7bfc167dd3219a4f594ffa39
M: c82e231121118c731194d31ddc20d848953174e7
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 10c2bc0cc626725b5a1afdc5e68142610e498fd7
   slots: (0 slots) slave
   replicates c82e231121118c731194d31ddc20d848953174e7
S: 707bb247a2ecc3fd36feb3c90cc58ff9194b5166
   slots: (0 slots) slave
   replicates 8a78d53307bdde11f6e53a9c1e90b1a9949463f1
M: bf11440a398e88ad7bfc167dd3219a4f594ffa39
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
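The slot ranges printed above come from Redis Cluster's fixed keyspace of 16384 hash slots: every key is assigned to slot `CRC16(key) mod 16384` (honouring `{...}` hash tags), and our three masters split those slots roughly evenly. A small Python sketch of that key-to-slot mapping:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant), the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster hash slots, honouring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:   # non-empty tag: hash only its contents
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# With the allocation shown above, slots 0-5460 live on Master[0],
# 5461-10922 on Master[1], and 10923-16383 on Master[2].
print(key_slot("foo"))  # 12182 -> Master[2]
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True: same hash tag
```

Keys sharing a hash tag land on the same slot, which is what makes multi-key operations possible in a cluster.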
```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:126
cluster_stats_messages_pong_sent:130
cluster_stats_messages_sent:256
cluster_stats_messages_ping_received:125
cluster_stats_messages_pong_received:126
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:256
```
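If you want to script health checks against the cluster, the line-oriented `CLUSTER INFO` reply is easy to parse. A sketch (the sample reply is abridged from the transcript above):

```python
def parse_cluster_info(reply: str) -> dict:
    """Parse the 'key:value' lines of a Redis CLUSTER INFO reply into a dict,
    converting integer fields where possible."""
    info = {}
    for line in reply.strip().splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        info[key.strip()] = int(value) if value.isdigit() else value
    return info

# Abridged reply from the transcript above
reply = """cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_known_nodes:6
cluster_size:3"""

info = parse_cluster_info(reply)
# A healthy cluster reports state 'ok' with all 16384 slots served
assert info["cluster_state"] == "ok"
assert info["cluster_slots_ok"] == 16384
```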
Finally, deploy the sample hit-counter application and expose it via a LoadBalancer:

```
kubectl apply -f app-depolyment.yaml
```

```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get svc
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)              AGE
hit-counter-lb   LoadBalancer                              80:31309/TCP         103s
kubernetes       ClusterIP                   <none>        443/TCP              46m
redis-cluster    ClusterIP                   <none>        6379/TCP,16379/TCP   18m
```
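A hit counter like the one behind `hit-counter-lb` typically just issues an atomic `INCR` against Redis on every request (we only see the Service name here, so this is an assumption about the app's internals). The core logic, written against any client object exposing `incr`, might look like the sketch below; `InMemoryRedis` is a hypothetical stand-in so the handler can be exercised without a live cluster:

```python
class InMemoryRedis:
    """Tiny stand-in for a Redis client, exposing just incr() so the
    handler below can be exercised without a live cluster."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        # Mirrors Redis INCR: create the key at 0, add 1, return the new value
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

def handle_hit(client, key="hits"):
    """One request = one atomic INCR; returns the new visit count."""
    return client.incr(key)

client = InMemoryRedis()
for _ in range(3):
    count = handle_hit(client)
print(count)  # 3
```

In production the same handler would be given a cluster-aware Redis client, which routes the key to the right master using the hash-slot mapping shown earlier.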

Simulating a Node Failure

Let us simulate the failure of a cluster member by deleting a Pod. The moment we delete redis-cluster-0, which is currently a master, Kubernetes recreates the Pod, the cluster promotes its replica redis-cluster-3 to master, and redis-cluster-0 rejoins as a slave. First, check the current role of each Pod:

```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli role
1) "master"
2) (integer) 854
3) 1) 1) ""
      2) "6379"
      3) "854"
```
```
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-1 -- redis-cli role
1) "master"
2) (integer) 994
3) 1) 1) ""
      2) "6379"
      3) "994"

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-2 -- redis-cli role
1) "master"
2) (integer) 1008
3) 1) 1) ""
      2) "6379"
      3) "1008"

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-3 -- redis-cli role
1) "slave"
2) ""
3) (integer) 6379
4) "connected"
5) (integer) 1008

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-4 -- redis-cli role
1) "slave"
2) ""
3) (integer) 6379
4) "connected"
5) (integer) 1022

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-5 -- redis-cli role
1) "slave"
2) ""
3) (integer) 6379
4) "connected"
5) (integer) 1022
```

Now bring down the redis-cluster-0 Pod (`kubectl delete pod redis-cluster-0`) and re-run the `role` checks: you will see that redis-cluster-3 gets promoted from “slave” to “master”, while the recreated redis-cluster-0 rejoins the cluster as a slave.


Originally published at collabnix.com on February 9, 2020.
