When it comes to infrastructure provisioning, including an AWS EKS Cluster, Terraform is the first tool that comes to mind. Learning Terraform is much easier than setting up the infrastructure manually. That said, would you rather use the traditional approach to set up the infrastructure, or would you prefer to use Terraform? More specifically, would you rather create an EKS Cluster using Terraform and have Terraform Kubernetes Deployment in place, or use the manual method, leaving room for human error?
As you may already know, Terraform is an open-source Infrastructure as Code (IaC) tool that lets you manage hundreds of cloud services through a uniform CLI and codifies cloud APIs in declarative configuration files. In this article, we won’t go into all the details of Terraform. Instead, we will be focusing on Terraform Kubernetes Deployment.
In summary, we will be looking at the steps to provision an EKS Cluster using Terraform. We will also go through how Terraform Kubernetes Deployment helps save time and reduce the human errors that can occur with a traditional or manual approach to application deployment.
Before we proceed to provision an EKS Cluster using Terraform, there are a few tools you need to have on hand. First off, you must have an AWS Account, and Terraform must be installed on your host machine, seeing as we are going to create the EKS Cluster on the AWS cloud using the Terraform CLI.
Now, let’s take a look at the prerequisites for this setup and help you install them.
1. AWS Account: If you don’t have an AWS account, you can register for a Free Tier Account and use it for test purposes. Click here to learn more about the Free Tier AWS Account.
2. IAM Admin User: You must have an IAM user with AmazonEKSClusterPolicy and AdministratorAccess permissions as well as its secret and access keys. We will be using the IAM user credentials to provision EKS Cluster using Terraform. Click here to learn more about the AWS IAM Service. The keys that you create for this user will be used to connect to the AWS account from the CLI (Command Line Interface).
When working on production clusters, only provide the required access and avoid providing admin privileges.
3. EC2 Instance: We will be using an Ubuntu 18.04 EC2 Instance as the host machine to execute our Terraform code. You may use another machine; however, you will need to verify which commands are compatible with your host machine in order to install the required packages. Click here to learn more about the AWS EC2 service. The first step is installing the required packages on your machine. This step is optional: you can also use your personal computer to install the required tools.
4. Access to the Host Machine: Connect to the EC2 Instance and Install the Unzip package.
a. ssh -i "<key-name.pem>" ubuntu@<public-ip-of-the-ec2-instance>
If you are using your personal computer, you do not need to connect to an EC2 instance; note, however, that the installation commands may differ on your platform.
b. sudo apt-get update -y
c. sudo apt-get install unzip -y
5. Terraform: To create EKS Cluster using Terraform, you need to have Terraform on your Host machine. Use the following commands to install Terraform on an Ubuntu 18.04 EC2 machine. Click here to view the installation instructions for other platforms.
a. sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
b. curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
c. sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
d. sudo apt-get update && sudo apt-get install terraform
e. terraform --version
6. AWS CLI: There is not much to do with the AWS CLI; however, we need it to check the details of the IAM user whose credentials will be used from the terminal. To install it, use the commands below. Click here to view the installation instructions if you are using another platform.
a. curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
b. unzip awscliv2.zip
c. sudo ./aws/install
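As a quick sanity check, you can confirm the installation by printing the CLI version (the exact version string you see will differ):
d. aws --version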
7. Kubectl: We will be using the kubectl command against the Kubernetes Cluster to view the resources in the EKS Cluster that we want to create. Install kubectl on Ubuntu 18.04 EC2 machine using the commands below. Click here to view different installation methods for installing kubectl on different platforms.
a. curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
b. curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
c. echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
d. sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
e. kubectl version --client
8. DOT: This step is completely optional. The terraform graph command is used to generate a visual representation of a configuration or execution plan. Its output is in DOT format, which can easily be converted into an image using the dot command provided by Graphviz.
To install the DOT command, execute the command below.
a. sudo apt install graphviz
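With Graphviz installed, you can render the graph produced by Terraform. A minimal example, assuming it is run from a directory that already contains Terraform configuration (graph.svg is an arbitrary output file name):
b. terraform graph | dot -Tsvg > graph.svg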
9. Export your AWS Access and Secret Keys for the current session. If the session expires, you will need to export the keys again on the terminal. There are other ways to use your keys that allow aws-cli to interact with AWS. Click here to learn more.
a. export AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
b. export AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
c. export AWS_DEFAULT_REGION=<YOUR_AWS_DEFAULT_REGION>
Here, replace <YOUR_AWS_ACCESS_KEY_ID> with your access key, <YOUR_AWS_SECRET_ACCESS_KEY> with your secret key and <YOUR_AWS_DEFAULT_REGION> with the default region for your aws-cli.
10. Check the details of the IAM user whose credentials are being used. Basically, this will display the details of the user whose keys you used to configure the CLI in the above step.
a. aws sts get-caller-identity
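If your keys are configured correctly, this returns a JSON document similar to the following (the IDs and ARN below are placeholders, not real values):

{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/<your-iam-user>"
}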
The architecture should appear as follows.
A VPC will be created with three Public Subnets and three Private Subnets, spread across three different Availability Zones. Traffic from the Private Subnets will route through the NAT Gateway, and traffic from the Public Subnets will route through the Internet Gateway.
Kubernetes Cluster Nodes will be created as part of Auto Scaling groups and will reside in the Private Subnets. The Public Subnets can be used to create Bastion Servers for connecting to the Private Nodes.
You can change the VPC CIDR in the Terraform configuration files if you wish. If you are just getting started, we recommend following the blog without making any unfamiliar changes to the configuration in order to avoid human errors.
This blog will help you provision an EKS Cluster using Terraform and deploy a sample NodeJs application. When creating an EKS Cluster, other AWS resources such as the VPC, Subnets, NAT Gateway, Internet Gateway, and Security Groups will also be created on your AWS account. This blog is divided into two parts.
First off, we will create an EKS Cluster, after which we will deploy a sample NodeJs application on it using Terraform. In this blog, we use Terraform Modules to create the VPC and its components, along with the EKS Cluster.
Here is a list of some of the Terraform elements we’ll use: providers, modules (for the VPC and the EKS Cluster), resources, data sources, variables, locals, and outputs.
Before we go ahead and create EKS Cluster using Terraform, let’s take a look at why Terraform is a good choice.
It’s normal to wonder “why provision an EKS Cluster using Terraform” or “why create an EKS Cluster using Terraform” when we can simply achieve the same with the AWS Console, the AWS CLI, or other tools. Here are a few of the reasons why: Terraform eliminates the human errors that creep into manual setups, keeps environments such as Dev, QA, Staging, and Prod consistent with one another, lets you use the same configuration language to provision the cluster and deploy applications to it, and records everything it manages in a State File that the whole team can share.
In this part of the blog, we will provision an EKS Cluster using Terraform. While doing this, other dependent resources like the VPC, Subnets, NAT Gateway, Internet Gateway, and Security Groups will also be created, and we will also deploy an Nginx application with Terraform.
Note: You can find all of the relevant code in my Github Repository. Before you create an EKS Cluster with Terraform using the following steps, you need to set up and make note of a few things.
Now, let’s proceed with the creation of an EKS Cluster using Terraform.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "17.24.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.20"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo nothing"
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
      asg_desired_capacity          = 2
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo nothing"
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
      asg_desired_capacity          = 1
    },
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}

resource "kubernetes_namespace" "test" {
  metadata {
    name = "nginx"
  }
}

resource "kubernetes_deployment" "test" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.test.metadata.0.name
  }
  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "MyTestApp"
      }
    }
    template {
      metadata {
        labels = {
          app = "MyTestApp"
        }
      }
      spec {
        container {
          image = "nginx"
          name  = "nginx-container"
          port {
            container_port = 80
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "test" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.test.metadata.0.name
  }
  spec {
    selector = {
      app = kubernetes_deployment.test.spec.0.template.0.metadata.0.labels.app
    }
    type = "LoadBalancer"
    port {
      port        = 80
      target_port = 80
    }
  }
}
output "cluster_id" {
  description = "EKS cluster ID."
  value       = module.eks.cluster_id
}

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "kubectl_config" {
  description = "kubectl config as generated by the module."
  value       = module.eks.kubeconfig
}

output "config_map_aws_auth" {
  description = "A kubernetes configuration to authenticate to this EKS cluster."
  value       = module.eks.config_map_aws_auth
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = local.cluster_name
}
resource "aws_security_group" "worker_group_mgmt_one" {
  name_prefix = "worker_group_mgmt_one"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "10.0.0.0/8",
    ]
  }
}

resource "aws_security_group" "worker_group_mgmt_two" {
  name_prefix = "worker_group_mgmt_two"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "192.168.0.0/16",
    ]
  }
}

resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "10.0.0.0/8",
      "172.16.0.0/12",
      "192.168.0.0/16",
    ]
  }
}
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.20.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }
  required_version = ">= 0.14"
}
variable "region" {
  default     = "us-east-1"
  description = "AWS region"
}

provider "aws" {
  region = var.region
}

data "aws_availability_zones" "available" {}

locals {
  cluster_name = "test-eks-cluster-${random_string.suffix.result}"
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.2.0"

  name                 = "test-vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
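With all of the configuration files above saved in a single directory, the standard Terraform workflow applies. A minimal sketch of the commands you would run from that directory:

terraform init    # downloads the AWS, Kubernetes, and other providers, plus the VPC and EKS modules
terraform plan    # shows the VPC, Subnets, Gateways, Security Groups, and EKS resources to be created
terraform apply   # provisions everything; confirm with 'yes' when prompted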
16. After the "terraform apply" command completes successfully, you should see the output as depicted below.
17. You can now go to the AWS Console and verify the resources created as part of the EKS Cluster.
17.1. EKS Cluster
You can check for other resources in the same way.
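If you prefer the terminal to the console, the cluster can also be verified with the AWS CLI. A quick check, using the cluster name and region from the Terraform outputs (both are placeholders here):

aws eks describe-cluster --name <cluster-name> --region <your-region> --query "cluster.status"
# Expected value once provisioning has finished: "ACTIVE"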
18. Now, if you try to use the kubectl command to connect to the EKS Cluster, you will get an error, since kubectl depends on a kubeconfig file for authentication and one has not yet been configured for the new cluster.
18.1. kubectl get nodes
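To fix this, generate a kubeconfig entry for the new cluster and rerun the commands. A minimal sketch using the standard AWS CLI command, with the region and cluster name from the Terraform outputs as placeholders:

aws eks update-kubeconfig --region <your-region> --name <cluster-name>
kubectl get nodes          # should now list the worker nodes
kubectl get all -n nginx   # lists the Pods, Deployment, and Service created above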
In the output above, you can see the Namespace, Pods, Deployment, and Service that we created with Terraform.
We have now created an EKS Cluster using Terraform and deployed Nginx on it with Terraform. Next, let’s deploy a sample NodeJs application in the same EKS Cluster. This time, we will keep the Kubernetes object files in a separate folder so that the NodeJs application can be managed independently, allowing us to deploy and/or destroy our NodeJs application without affecting the EKS Cluster.
In this part of the article, we will deploy a sample NodeJs application and its dependent resources, including Namespace, Deployment, and Service. We have used the publicly available Docker Images for the sample NodeJs application and MongoDB database.
Now, let’s go ahead with the deployment.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "sample-nodejs" {
  metadata {
    name = "sample-nodejs"
  }
}

resource "kubernetes_deployment" "sample-nodejs" {
  metadata {
    name      = "sample-nodejs"
    namespace = kubernetes_namespace.sample-nodejs.metadata.0.name
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "sample-nodejs"
      }
    }
    template {
      metadata {
        labels = {
          app = "sample-nodejs"
        }
      }
      spec {
        container {
          image = "learnk8s/knote-js:1.0.0"
          name  = "sample-nodejs-container"
          port {
            container_port = 80
          }
          env {
            name  = "MONGO_URL"
            value = "mongodb://mongo:27017/dev"
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "sample-nodejs" {
  metadata {
    name      = "sample-nodejs"
    namespace = kubernetes_namespace.sample-nodejs.metadata.0.name
  }
  spec {
    selector = {
      app = kubernetes_deployment.sample-nodejs.spec.0.template.0.metadata.0.labels.app
    }
    type = "LoadBalancer"
    port {
      port        = 80
      target_port = 3000
    }
  }
}

resource "kubernetes_deployment" "mongo" {
  metadata {
    name      = "mongo"
    namespace = kubernetes_namespace.sample-nodejs.metadata.0.name
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "mongo"
      }
    }
    template {
      metadata {
        labels = {
          app = "mongo"
        }
      }
      spec {
        container {
          image = "mongo:3.6.17-xenial"
          name  = "mongo-container"
          port {
            container_port = 27017
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "mongo" {
  metadata {
    name      = "mongo"
    namespace = kubernetes_namespace.sample-nodejs.metadata.0.name
  }
  spec {
    selector = {
      app = kubernetes_deployment.mongo.spec.0.template.0.metadata.0.labels.app
    }
    type = "ClusterIP"
    port {
      port        = 27017
      target_port = 27017
    }
  }
}
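Because these files live in their own folder, they form an independent Terraform configuration with its own state. A minimal sketch of the workflow, assuming the files above are saved in a separate directory (the directory name here is arbitrary):

cd sample-nodejs-app   # hypothetical folder holding the files above
terraform init
terraform apply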
10. Once the "terraform apply" command completes successfully, you should see the following output.
11. You can now verify the objects that have been created using the commands below.
11.1. kubectl get pods -A
11.2. kubectl get deployments -A
11.3. kubectl get services -A
In the above screenshot, you can see the Namespace, Pods, Deployment, and Service that were created for the sample NodeJs application.
We just used Terraform to deploy a sample NodeJs application that is publicly accessible over the LoadBalancer DNS.
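To find the address, read the LoadBalancer hostname off the Service we created and open it in a browser or with curl. A quick check using the Service and Namespace defined above (the hostname is a placeholder):

kubectl get service sample-nodejs -n sample-nodejs
curl http://<external-lb-hostname-from-the-output>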
This completes the creation of the EKS Cluster and the deployment of the sample NodeJs application using Terraform.
It’s always better to delete resources once you’re done with your tests, as this saves costs. To clean up and delete the sample NodeJs application and the EKS Cluster, follow the steps below.
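The teardown mirrors the setup: destroy the application configuration first, then the cluster. A minimal sketch, assuming the hypothetical folder layout described above:

cd sample-nodejs-app    # application folder
terraform destroy       # removes the Namespace, Deployments, and Services

cd ../eks-cluster       # cluster folder
terraform destroy       # removes the EKS Cluster, VPC, and related resources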
In the above screenshot, you can see that all of the sample NodeJs application resources have been deleted.
5. You will see the following output once the above command is successful.
6. You can now go to the AWS console to verify whether or not the resources have been deleted.
There you have it! We have just successfully deleted the EKS Cluster, as well as the sample NodeJs application.
Elastic Kubernetes Service (EKS) is a managed Kubernetes service provided by AWS that takes the complexity and overhead out of provisioning and optimizing a Kubernetes Cluster for development teams. An EKS Cluster can be created using a variety of methods; nevertheless, choosing the best possible approach is critical to improving the infrastructure management lifecycle.
Terraform is an Infrastructure as Code (IaC) tool that allows you to create, modify, and version-control cloud and on-premises resources in a secure and efficient manner. With Terraform Kubernetes Deployment, you can automate the creation of an EKS Cluster while gaining additional control over the entire infrastructure management process through code. Both the creation of the EKS Cluster and the deployment of Kubernetes objects can be managed with the Terraform Kubernetes Provider.
You can definitely create an EKS Cluster from the AWS Console, but what if you want to create the cluster for different environments such as Dev, QA, Staging, or Prod? To avoid human errors and maintain consistency across the different environments, it’s important to have an automation tool. In such cases, it’s better to create an EKS Cluster using Terraform.
Terraform has the advantage of being able to use the same configuration language for both provisioning the Kubernetes Cluster and deploying apps to it. Moreover, Terraform allows you to build, update, and delete pods and resources with only one command, thus eliminating the need to check APIs to identify resources.
Terraform keeps track of everything it creates or manages in a State File. This State File can be stored on your local machine or in remote storage such as an S3 Bucket. State Files should be saved to remote storage so that everyone on the team works with the same state and operations act on the same remote objects.
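For example, a remote backend can be declared in the terraform block of the configuration. A minimal sketch of an S3 backend (the bucket name and key here are placeholders you would replace with your own):

terraform {
  backend "s3" {
    bucket = "<your-terraform-state-bucket>"
    key    = "eks-cluster/terraform.tfstate"
    region = "us-east-1"
  }
}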