Collabnix
kOps: Setting up a Kubernetes cluster on AWS


Kubernetes clusters made simple

Rohith Raju · Nov 4, 2022 · 6 min read


How can you set up a Kubernetes cluster on AWS?

Well, there are three well-known ways you can set up a Kubernetes cluster on AWS.

  1. Manually bringing up EC2 instances.
  2. Using an automated provisioning tool like kOps.
  3. Using Amazon EKS (Elastic Kubernetes Service).

Since managing a Kubernetes cluster without any tooling is complicated (and not recommended), we can rule out the first option.

Using Amazon EKS

This would be the first thought for most of us, as EKS is described as a “highly available, scalable, and secure Kubernetes service”. But you can't bring up a cluster magically with the click of a button. EKS manages just the Kubernetes control plane: scaling and upgrading of the master nodes are taken care of by AWS.

The easiest way to get started with EKS is to use the eksctl CLI. You can create a cluster by simply running:

eksctl create cluster --region=ap-south-1 --name=sample

Once it's done creating, eksctl prints a summary of the provisioned node groups and nodes.

Now you can deploy any application into your cluster.
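As a quick smoke test, you could run something like the following (the deployment name `hello` and the nginx image are just illustrative, and this assumes the kubeconfig that eksctl wrote is active):

```shell
# Deploy a sample nginx workload and expose it through a load balancer
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=LoadBalancer

# Watch the pods and the service come up
kubectl get pods,svc
```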

Then why kOps? Everything looks fine.

By default, EKS provisions only worker nodes that we can access; the master node and control plane are hidden from us. This limits an operator's ability to turn Kubernetes API features on or off. For example, if your version of Kubernetes supports an alpha feature or configuration flag, it cannot be enabled on a managed service. Therefore we might look for a tool that provisions the whole cluster automatically, i.e. kOps.

kOps = more control

Just like eksctl is capable of creating an EKS cluster, kOps can create a cluster automatically, but one whose control plane and master nodes you control. EKS is relatively new; it was introduced back in 2017. But engineers were able to deploy and manage K8s applications before EKS even came out! How? By manually provisioning EC2 instances and network resources like subnets and DNS. kOps conveniently does all that manual work with a single command, and it lets you manage your clusters even after installation.

Getting started with kOps

Prerequisites:

  1. Install AWS CLI
  2. Install Kubectl
  3. You will need a domain hosted by AWS (explained later)
  • Installing kOps

We will be using Ubuntu for this tutorial; check the official docs for macOS and Windows. Installing kOps is easy, and you can do it with a single curl command.

curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops
sudo mv kops /usr/local/bin/kops

Deploying clusters to AWS

  • Buying a domain

If you're just learning to use kOps, you could try it on Google Cloud Platform instead. But if you have a domain lying around, stick around. If you want to buy a cheap domain, you can head over to Amazon Route 53; ".click" domains are typically inexpensive.


  • Using Domain purchased via AWS

But what if you have a subdomain, or purchased your domain from another registrar rather than AWS? You can check out the official guides for those scenarios. Of course, I'll be using a domain that was purchased via AWS.
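For the other-registrar case, the rough idea (sketched here with a placeholder domain, example.com) is to create a Route 53 hosted zone and point your registrar's NS records at it:

```shell
# Create a hosted zone for the domain (replace example.com with yours)
aws route53 create-hosted-zone --name example.com --caller-reference "$(date +%s)"

# Print the NS records you need to copy into your registrar's settings
aws route53 list-hosted-zones-by-name --dns-name example.com
```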

  • Setting up AWS CLI

    After you install the CLI, you'll have to generate credentials and configure them so the CLI is aware of your AWS account.

     1. Go to your AWS account.
     2. Open the IAM Management Console.
     3. Click on "Manage access keys" and create a new access key.
     4. An access key ID and secret access key will be provided. You can download those keys.

Run aws configure and fill in the credentials:

aws configure
AWS Access Key ID [****************I7MR]: 
AWS Secret Access Key [****************ZyLX]:
  • Setting up kOps user

    For the tool to create and modify resources it needs access, so we create a separate group and user for kops and attach the permissions it needs to provision resources. Luckily, the CLI lets you do all of that.
aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonSQSFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEventBridgeFullAccess --group-name kops

aws iam create-user --user-name kops

aws iam add-user-to-group --user-name kops --group-name kops

Double-check that everything is OK.
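You can verify from the CLI as well, assuming the user and group names used above:

```shell
# The kops user should appear in the kops group
aws iam list-groups-for-user --user-name kops

# All seven policies should be listed as attached
aws iam list-attached-group-policies --group-name kops
```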


After this, you can reconfigure your AWS CLI to use kops (the user we just created) as the default user. You can retrieve the kops user's credentials by running:

aws iam create-access-key --user-name kops

Run aws configure and use the kops credentials. You can refer to "Setting up AWS CLI" above.

kOps itself also needs access to those credentials, and you have to export them manually, since "aws configure" doesn't export them as environment variables.

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

Now the fun part: creating the cluster

  • Create an S3 bucket for kOps

Since kOps lets us manage clusters even after installation, it must keep track of the clusters you have created, along with their configuration, the keys they are using, and so on. This information is stored in an S3 bucket, and S3 permissions are used to control access to it. Go ahead and create a new S3 bucket. When you're creating it, remember to:

  1. Choose ACLs enabled.
  2. Uncheck "Block all public access" and choose the appropriate option.
  • Prepare local environment for kOps

    1. Create a variable using your domain (which becomes a subdomain).

    2. Create a variable for the S3 bucket.

      export NAME=cluster.rohith.click   # my domain is rohith.click; replace it with yours
      export KOPS_STATE_STORE=s3://rohithkops-state-store-new   # replace with the name of your S3 bucket

  • Finally! creating the cluster.

    kops create cluster \
      --name=${NAME} \
      --cloud=aws \
      --zones=us-west-2a \
      --discovery-store=s3://rohithkops-state-store-new/${NAME}/discovery
    

    This will print a huge list of the resources that will be used to create the cluster, ending with something like this:

      Cluster configuration has been created.
    
      Suggestions:
       * list clusters with: kops get cluster
       * edit this cluster with: kops edit cluster cluster.rohith.click
       * edit your node instance group: kops edit ig --name=cluster.rohith.click nodes-us-west-2a
       * edit your master instance group: kops edit ig --name=cluster.rohith.click master-us-west-2a
    
      Finally configure your cluster with: kops update cluster --name cluster.rohith.click --yes --admin
    
  • Now, Deploy it, for real

      kops update cluster --name cluster.rohith.click --yes --admin
    

    "--yes --admin is responsible to deploy on the cloud". After completion, your output should look something like this.

      I1102 15:16:03.879482    9982 update_cluster.go:326] Exporting kubeconfig for cluster
      kOps has set your kubectl context to cluster.rohith.click
    
      Cluster is starting.  It should be ready in a few minutes.
    
      Suggestions:
       * validate cluster: kops validate cluster --wait 10m
       * list nodes: kubectl get nodes --show-labels
       * ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.cluster.rohith.click
       * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
       * read about installing addons at: https://kops.sigs.k8s.io/addons.
    
    • What just happened?

      After the cluster was created, kOps automatically added the cluster context to your local kubeconfig file, which means you can access the cluster using kubectl. By default, we are assigned one control-plane node and one worker node to work with. You can now deploy any Kubernetes application just like we deploy locally.
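A quick way to confirm this, assuming the cluster has finished starting:

```shell
# The current context should now be the new cluster
kubectl config current-context

# One control-plane node and one worker node should be listed
kubectl get nodes -o wide
```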

Deleting your cluster

If you're like me, just experimenting with kOps, and don't want to be surprised with a hefty bill, you'll want to delete the cluster. So:

kops delete cluster --name cluster.rohith.click --yes

Make sure your instances are terminated.
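One way to check from the CLI (this should print nothing once deletion finishes):

```shell
# List any instances still running in the region
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,State.Name]" \
  --output table
```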


