Cloud applications have undergone fundamental changes in how they are developed and deployed. Perhaps the most useful tool to emerge in this space is Kubernetes. There are certain best practices you should consider for a Kubernetes multi-tenant SaaS application on Amazon EKS.
Kubernetes has been deployed on AWS practically since its inception. On AWS, the managed offering is popularly known as EKS. This Amazon-managed Kubernetes service provides a flexible platform for running your containers without forcing you to operate the management infrastructure yourself.
Unprecedented events like the COVID-19 pandemic have pushed organizations to accelerate their digital transformation. The leading flag bearer of this transformation is the next-generation cloud offering: Software-as-a-Service (SaaS). SaaS products have kept businesses running while half of the world was at a halt. AWS provides a strong platform for SaaS product delivery in this space, complementing its rich and diverse IaaS and PaaS offerings. This article will show how to achieve Kubernetes multi-tenancy in SaaS applications using EKS.
Before starting, I recommend this must-read article about Multi-tenant Architecture SaaS Application on AWS. It will help you understand the multi-tenant environment better, and you’ll learn some meaningful strategies to build your SaaS Application.
When it comes to configuring Kubernetes Multi-tenancy with Amazon EKS, there are numerous advantages. Some of these benefits are highlighted below:
Read our blog about Amazon ECS vs EKS and discover the best container orchestration platform!
Before we move ahead to discuss the challenges and best practices for Kubernetes multi-tenant SaaS applications on EKS, let’s first understand what multi-tenancy is. An essential criterion for the success of any application is usability. A successful application should serve multiple users at the same time. Consequently, the processing capacity of the application should grow linearly with the growth in the number of users.
For an application to scale up rapidly, it is important to maintain performance, stability, durability, and data isolation. The Kubernetes multi-tenant architecture enables concurrent processing. It isolates the application and data from one user to another.
Now, let’s understand the challenges while creating a Kubernetes multi-tenant environment through a metaphor. In an apartment or a condominium building, you need to provide full isolation to the people staying inside. You cannot create an architectural design where one person walks through another person’s apartment to get to the bathroom. All the apartments have to be isolated. Similarly, all the tenants have to be sufficiently isolated.
Likewise, when it comes to resource sharing, you must ensure that each resident has ample access to the building's resources. Your building will not function well if the water shuts off in one apartment whenever people in another apartment take a shower. This implies that every tenant should have a fair, guaranteed share of the resources.
However, multi-tenancy poses multiple challenges, as described in the metaphor above. Each workload must be isolated. If there is a vulnerability or a security breach, it should not propagate into another. Each workload should have a fair share of resources like computing, networking, and other resources provided by Kubernetes.
If you want to understand multi-tenancy fully, this blog reviews the differences between Single-tenant vs Multi-tenant so you can better comprehend both architectures.
To achieve multi-tenancy, we create EKS clusters on AWS that host multiple tenants' workloads. In the architecture diagram below, multiple tenants are hosted in separate, fully isolated namespaces. Namespaces are nothing but a logical way to divide cluster resources between multiple users.
On both EKS clusters, we have independent components of the applications. These components can be compute resources, storage resources, etc. Isolation between namespaces can be achieved through various techniques; for instance, network policies.
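Tenant namespaces like these can be created declaratively. A minimal sketch (the tenant name and the `nsname` label are illustrative) creates a labeled namespace that network policies can later select:

```yaml
# Hypothetical tenant namespace; the nsname label gives
# NetworkPolicy namespaceSelectors something to match on.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    nsname: tenant-a
```

Labeling namespaces at creation time keeps later policy selectors simple and consistent across tenants.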
Multi-tenant workload architecture
There are several layers in EKS that provide a specific layer of security and isolation for a Kubernetes multi-tenancy SaaS application. Below are some layers of isolation which you can implement in your design:
Container: A container provides a fundamental layer of isolation but does not isolate the identity or the network. It provides a certain level of isolation from noisy neighbors.
Pod: A pod is a group of one or more containers. The pod can isolate the network for its group of containers, and Kubernetes network policies help with this kind of micro-segmentation.
Node: A node is a machine, either physical or virtual. A machine includes a collection of pods. A node leverages a hypervisor or dedicated hardware for the isolation of resources.
Cluster: A cluster is a collection of nodes and a control plane. This is the management layer for your containers. The cluster can provide strong network isolation.
The diagram below shows multiple isolation layers for a Kubernetes multi-tenancy:
Some primary constructs that help design EKS multi-tenancy are Compute, Networking, and Storage. Let’s go through them one by one:
Namespaces are the fundamental element of multi-tenancy. Most Kubernetes objects belong to a particular namespace, which virtually isolates them from one another. Namespaces alone may not provide workload or user isolation, but they provide the scope for RBAC (Role-Based Access Control), which defines who can do what on the Kubernetes API.
Amazon EKS provides RBAC using different IAM policies. These policies are mapped to roles and groups. RBAC acts as a central component that offers a layer of isolation between the multiple tenants.
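On EKS, IAM identities are mapped into the cluster through the `aws-auth` ConfigMap in the `kube-system` namespace. A minimal sketch, with a hypothetical account ID, role ARN, and group name:

```yaml
# Maps a hypothetical IAM role to a Kubernetes user and group;
# RBAC RoleBindings can then grant that group namespaced permissions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/tenant-a-role
      username: tenant-a-user
      groups:
        - tenant-a-group
```

The IAM side controls who can authenticate to the cluster; the Kubernetes RBAC side controls what that identity can do once inside.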
Kubernetes also allows users to define CPU and memory requests and limits for pods. To optimize resource allocation, ResourceQuotas can be used. Resource quotas enable users to cap the aggregate resources consumed within one namespace, including CPU and memory utilization.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu
  namespace: customNamespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "2"
    limits.memory: 4Gi
```
By default, pods can communicate over the network with pods across different namespaces in the same cluster. Network policies enable the user to exercise fine-grained control over pod-to-pod communication. Let's look at the network policy below, which restricts communication to within a single namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-ns1
  namespace: namespace1
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              nsname: namespace1
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              nsname: namespace1
```
There are more advanced networking approaches that can be used to isolate tenants in a multi-cluster environment: a service mesh, such as AWS App Mesh.
A service mesh provides additional security over the network, which spans outside the single EKS network. It can provide better traffic management, observability, and security. A service mesh can also define better Authorization and Authentication policies for users to access different network layers.
Lastly, AWS App Mesh is a managed service mesh that provides consistent network traffic visibility. Its control plane lets you see and manage all the different elements in the network.
Storage isolation is a necessity for tenants using a shared cluster. “Volume” is a major tool that Kubernetes offers, which provides a way to connect a form of persistent storage to a pod.
A PersistentVolume is usually declared at the cluster level, along with a StorageClass that a cluster administrator configures and operates. Amazon EKS provides several out-of-the-box storage integrations, including Amazon EBS, Amazon EFS, and FSx for Lustre.
A PersistentVolumeClaim (PVC) allows a user to request volume storage for a pod. PVCs are namespaced resources and hence scope storage access per tenant. AWS admins can use ResourceQuotas to restrict which storage classes a namespace may use. The code below shows how to disable the use of storage class storage2 from namespace1:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-ns1
  namespace: namespace1
spec:
  hard:
    storage2.storageclass.storage.k8s.io/requests.storage: "0"
```
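Within its namespace, a tenant then requests storage through a PVC. A minimal sketch (the claim name, size, and `storage1` class are illustrative):

```yaml
# Hypothetical namespaced claim against an allowed storage class;
# the quota above would reject a similar claim using storage2.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  namespace: namespace1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: storage1
  resources:
    requests:
      storage: 5Gi
```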
Another alternative for implementing isolation is to run multiple single-tenant EKS clusters. With this strategy, each tenant has dedicated resources.
In this kind of implementation, Terraform can be helpful for provisioning multiple homogeneous clusters. It can maintain similar policies across the EKS clusters and help automate their provisioning and policy mapping.
This is a scalable isolation technique provided that an excellent provisioning and monitoring solution is implemented in the infrastructure where Amazon EKS clusters are running.
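Terraform is one option; as a lighter-weight illustration of the same declarative, repeatable provisioning idea, an eksctl config file can describe a homogeneous cluster (the cluster name, region, and node sizes below are hypothetical):

```yaml
# Hypothetical eksctl cluster definition; applying the same template
# per tenant keeps single-tenant clusters homogeneous.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: tenant-a-cluster
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
```

Whichever tool is used, the key is that every tenant cluster comes from the same template, so policies and sizing stay consistent.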
Also read: Apache and Nginx Multi Tenant to Support SaaS Applications
There are multiple best practices in the context of the implementation of EKS multi-tenancy. We are going to discuss some best practices for this implementation:
Namespaces should be categorized based on usage. Some of the common categories can be:
Enabling Role-Based Access Control allows better control of the Kubernetes APIs for different groups of users. Using this technique, admins can create different roles for different users, e.g., one role for an admin and another for a tenant.
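As a sketch of this pattern, a namespaced Role and RoleBinding can grant a hypothetical tenant group full control over common workload resources in its own namespace only (all names here are illustrative):

```yaml
# Role scoped to tenant-a: permissions apply inside that namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-admin
  namespace: tenant-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Binds the role to a hypothetical group mapped in via aws-auth.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-admin-binding
  namespace: tenant-a
subjects:
  - kind: Group
    name: tenant-a-group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-admin
  apiGroup: rbac.authorization.k8s.io
```

Because a Role (unlike a ClusterRole) is namespaced, this binding also supports the later best practice of keeping tenants away from cluster-scoped resources.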
Admins can have better governance of the networking between pods using Network Policies. Tenant namespaces can be easily isolated using this technique.
Implementation of Resource Quotas can ensure proportionate resource usage across tenants. Resource Quotas can better control system resources like CPU, memory, and storage.
It is good to ensure that tenants do not have access to non-namespaced resources. Non-namespaced resources do not belong to any particular namespace; they belong to the cluster as a whole. Admins should ensure that tenants do not have privileges to create, update, or delete cluster-scoped resources.
The Horizontal Pod Autoscaler (HPA) is an auto-scaling feature for pods. HPA provides a cost-optimized way to scale applications for higher uptime and availability, and it helps manage unpredictable workloads in production environments. Automatic sizing detects the application's usage patterns and applies the corresponding scaling factor. For example, a scheduled scaling policy can scale pods down if your application traffic is low during nighttime, while more pods are added to the cluster if there is an unexpected spike in traffic.
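As a sketch, an HPA that scales a hypothetical Deployment between 2 and 10 replicas based on average CPU utilization might look like this (the names and threshold are illustrative):

```yaml
# Scales the hypothetical "web" Deployment in tenant-a when
# average CPU utilization across its pods exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: tenant-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Because the HPA is namespaced, each tenant's workloads scale independently within whatever ceiling the namespace's ResourceQuota allows.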
| EKS Multi-tenancy Best Practices | Summary |
|---|---|
| 1. Categorize the namespaces | Namespaces should be categorized based on usage. |
| 2. Enable RBAC | Role-Based Access Control allows better control of Kubernetes APIs. |
| 3. Namespace isolation using Network Policy | Admins can have better governance of the networking between pods using Network Policies. |
| 4. Limit use of shared resources | Resource Quotas can ensure proportionate resource usage across tenants. |
| 5. Limit access to non-namespaced resources | Remove privileges to create, update, or delete cluster-scoped resources. |
| 6. Horizontal Pod Autoscaler | Cost-optimized solution for scaling applications. |
This article covered some of the best considerations for Kubernetes multi-tenancy implementation using Amazon EKS. We covered different perspectives on computing, networking, and storage. It is imperative to mention that you should weigh these strategies against the cost and complexity of any design. Depending upon the SaaS service you are implementing, using any of the above implementation models or even a hybrid approach can suit your design needs.
A multi-tenant architecture is an ecosystem or model in which a single environment serves multiple tenants. It utilizes a scalable, available, and resilient architecture that gives businesses an easier startup experience and a lower hardware requirement. The multi-tenant architecture has become the standard within enterprise SaaS environments.
SaaS applications are the new normal nowadays, and software providers are looking to transform their web applications into Software-as-a-Service offerings. Building a multi-tenant architecture SaaS application is the way to achieve this.
Two layers are needed to enable your application to act as a real SaaS platform, and it is paramount to decide which multi-tenant architecture you will incorporate at each: the application layer and the database layer. These two types of multi-tenant architecture are application-layer multi-tenancy and database-layer multi-tenancy.
A schema-per-tenant single database, also known as the bridge model, is a multi-tenant database approach that is still very cost-effective and more secure than a pure shared-schema model. One important caveat is that more than 100 schemas or tenants within a single database can provoke a lag in database performance; hence, it is advisable to split the database in two (adding the second database as a replica). PostgreSQL is well suited to this approach, as it supports multiple schemas without much complexity.
To build a multi-tenant architecture, you must choose the correct web stack on AWS, including the OS, language, libraries, and services. This is just the first step towards creating a next-generation multi-tenant architecture.