I have had the opportunity to work with Kubernetes (K8s) since 2018. Over the years, I have architected Enterprise Kubernetes implementations, running containerised, production-ready applications. I also continuously work on personal projects involving Docker, Kubernetes and Helm.
In this article we will evaluate Kubernetes deployments and the multi-cloud Kubernetes experience across the following clouds:
- Azure Kubernetes Service (AKS) on Azure Cloud
- Google Kubernetes Engine (GKE) on Google Cloud Platform
- Elastic Kubernetes Service (EKS) on AWS Cloud
We will be working with my own Artificial Intelligence (AI) solution, TitanicAI. Automation, scripts and declarative language will be used to provision the Kubernetes clusters as well as the relevant networking resources in each cloud.
This will facilitate access to the application from the Internet, with traffic routed all the way to the private K8s clusters hosted in Azure, GCP and AWS. In addition, we will set up DNS routing for public access and, to top it off, the application will be secured with an SSL certificate.
The solution consists of the TitanicAI Webapp and API components packaged as Docker Images, as well as the following automation scripts and YAML declarations: AKS Scripts and YAML, GKE Scripts and YAML, and EKS Scripts and YAML.
There are three sections in the Analysis chapter, each worth 5 points. These sections represent three aspects of working with Kubernetes: Scripting and CLI, YAML Declarations and The Cloud Experience. The maximum score per cloud is 15 points.
Scripting and CLI
Each cloud deployment script, whether targeting Azure AKS, GCP GKE or AWS EKS, consists of four stages: Variables Setup, Provisioning Cloud Resources, Deploying the Application to Kubernetes and Clean Up.
GCP CLI (gcloud), in my view, offers the least complicated CLI experience when it comes to provisioning Kubernetes. The scripting merely requires two gcloud calls: one to provision the cluster and a second to provision the public IP address.
Google Cloud takes care of the rest of the resource provisioning beyond that point. This includes IP address registration on our private network, SSL certificate provisioning and external routing to the application. You will read more about this in the YAML Declarations section.
To get you started with the TitanicAI GKE Scripts, all you need is a Google Cloud account, a Project created in GCP and a User or Service Account with the relevant roles and privileges.
gcloud CLI documentation can be found here.
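The two gcloud calls described above can be sketched roughly as follows. The cluster name, region, IP name and node sizing here are hypothetical placeholders, not the values used by the actual TitanicAI scripts:

```shell
# Hypothetical names and sizing -- substitute your own values.
CLUSTER_NAME="titanicai-cluster"
REGION="europe-west2"
IP_NAME="titanicai-ip"

# Call 1: provision the GKE cluster
gcloud container clusters create "$CLUSTER_NAME" \
  --region "$REGION" \
  --num-nodes 1 \
  --machine-type e2-medium

# Call 2: provision the global static public IP later referenced by the Ingress
gcloud compute addresses create "$IP_NAME" --global
```

Everything else, from IP registration to SSL issuance and routing, is handled by GCP once the YAML declarations are applied.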
Azure CLI (az) probably offers the next-best experience. I like the fact that Azure creates Resource Groups, allowing you to bundle all project-related resources inside them. And of course, this can be done with the CLI. All it takes is one az call to create the resource group and one more to create the cluster.
In addition to that, you also need to provision an Ingress Controller and a Certificate Manager using Helm Charts. These are required to route external traffic to the cluster and to secure it with SSL, respectively. In contrast, all of the above is fully automated in GCP.
To get you started with the TitanicAI AKS Scripts, all you need is an Azure Cloud account and a User or Service Account with the relevant roles and privileges.
az CLI documentation can be found here.
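The AKS flow above can be sketched as follows. The group, cluster and location names are hypothetical placeholders; the Helm releases use the standard ingress-nginx and cert-manager charts, which is an assumption about the setup rather than a detail from the TitanicAI scripts:

```shell
# Hypothetical names -- substitute your own values.
RESOURCE_GROUP="titanicai-rg"
CLUSTER_NAME="titanicai-aks"
LOCATION="uksouth"

# Call 1: create the resource group that bundles all project resources
az group create --name "$RESOURCE_GROUP" --location "$LOCATION"

# Call 2: create the AKS cluster inside it
az aks create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" \
  --node-count 1 \
  --generate-ssh-keys

# Extra steps vs GCP: Ingress Controller and Certificate Manager via Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```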
AWS CLI (aws & eksctl) are two separate CLIs, with the latter offering an easy-to-use wrapper around core AWS CLI Kubernetes functionality. You will need to make two CLI calls: one aws call to provision the SSL certificate and a second eksctl call to provision the cluster.
The disadvantage of the AWS automation experience is that you need to request the CNAME Name & Value for your AWS SSL certificate and manually create DNS entries at your Domain Registrar to validate domain ownership. This is something that was not required for the GCP and Azure deployments.
In addition, you need to manually update the YAML Service declaration with the AWS SSL Certificate ARN to secure the traffic; again, something that was not required for the GCP and Azure deployments.
To get you started with the TitanicAI EKS Scripts, all you need is an AWS Cloud account and a User or Service Account with the relevant roles and privileges.
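The two AWS calls can be sketched as follows. The domain, cluster name and region are hypothetical placeholders; after the aws call you still need to create the validation CNAME at your registrar by hand, as described above:

```shell
# Hypothetical names -- substitute your own values.
DOMAIN="titanicai.example.com"
CLUSTER_NAME="titanicai-eks"
REGION="eu-west-2"

# Call 1 (aws): request the SSL certificate from ACM; note the returned ARN,
# then manually create the CNAME record at your registrar to validate ownership
aws acm request-certificate \
  --domain-name "$DOMAIN" \
  --validation-method DNS \
  --region "$REGION"

# Call 2 (eksctl): provision the EKS cluster
eksctl create cluster \
  --name "$CLUSTER_NAME" \
  --region "$REGION" \
  --nodes 1
```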
YAML Declarations
For our TitanicAI example we will be working with the following YAML Declarations: Namespace, ManagedCertificate (GKE only), ClusterIssuer (AKS only), Deployment, Service, Ingress (GKE & AKS only) and ConfigMap.
GKE supports SSL certificate creation with YAML Declarations using ManagedCertificate objects. This great GCP feature allows you to tie the GCP SSL Issuer very neatly to the GKE cluster. All that is required to use the SSL certificate in our app is an SSL annotation on the Ingress, and voilà, your app is secured.
Of course, consider cloud lock-in at this stage and how cloud-agnostic you want your solution to be. If cloud agnosticism is your goal, keep reading, as AKS did it differently, which might fit that requirement.
GKE Ingress also takes another annotation for the IP address. This value is static and is defined once per app deployment, so there is no massive overhead here in terms of script and infrastructure maintenance.
Therefore, with a single, static TitanicAI GKE YAML Declaration you can have the application up and running with ease and in no time at all.
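A sketch of this pattern, with hypothetical names and domain (titanicai-cert, titanicai-ip, titanicai-webapp and titanicai.example.com are placeholders, not the actual TitanicAI values):

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: titanicai-cert
  namespace: titanicai
spec:
  domains:
    - titanicai.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: titanicai-ingress
  namespace: titanicai
  annotations:
    # SSL annotation linking the ManagedCertificate to this Ingress
    networking.gke.io/managed-certificates: titanicai-cert
    # Static IP annotation referencing the address provisioned via gcloud
    kubernetes.io/ingress.global-static-ip-name: titanicai-ip
spec:
  defaultBackend:
    service:
      name: titanicai-webapp
      port:
        number: 80
```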
AKS offers a similar SSL cert experience to GKE. However, under the hood it uses Let's Encrypt for SSL issuance rather than a cloud-native solution. Please do not get me wrong, Let's Encrypt is great and, in a sense, it offers a more cloud-agnostic SSL implementation. Therefore, the choice is yours as to whether you would prefer your cloud provider issuing SSL certs or a more cloud-agnostic solution, like Let's Encrypt.
Aside from that, the setup is similar to GKE. You will need to provision the certificate using a ClusterIssuer object, which you then link on the Ingress using annotations.
The AKS Ingress will also require a DNS host rule to allow Internet traffic to be routed to the app.
Therefore, with a single, static TitanicAI AKS YAML Declaration you can have the application up and running with ease and in no time at all, provided you have deployed the Ingress Controller and Certificate Manager with Helm charts beforehand, as explained in the Scripting and CLI section.
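A sketch of the AKS pattern, assuming cert-manager with an nginx Ingress Controller; all names, the email address and the domain are hypothetical placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # assumed contact address
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: titanicai-ingress
  namespace: titanicai
  annotations:
    # Links the ClusterIssuer so cert-manager issues the TLS certificate
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - titanicai.example.com
      secretName: titanicai-tls
  rules:
    # The DNS host rule routing Internet traffic to the app
    - host: titanicai.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: titanicai-webapp
                port:
                  number: 80
```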
The EKS implementation is notably different from GKE and AKS. There is no concept of a YAML declaration for the SSL cert here. As you might remember from the Scripting and CLI section, AWS relies on the aws CLI for SSL cert provisioning. Once your AWS SSL cert is provisioned, all you need to do is add the SSL Cert ARN annotation to the YAML Service declaration to secure traffic to the app.
The disadvantage of this setup is the fact that the AWS SSL Cert ARN is dynamic. This means that if you re-deploy the infrastructure, you will then have to change the SSL ARN annotation in the YAML because its value will be different each time. Of course, token:value substitution is the recipe here…
Aside from the SSL ARN annotation setup, the TitanicAI EKS YAML Declaration is actually quite neat. All you need to create are a Namespace, Deployments and Services. And as a bonus, there is no need for an Ingress either.
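A sketch of the Service declaration carrying the certificate annotation; the Service name, selector and ARN here are placeholders (the ARN is exactly the value that changes on every re-deployment, hence the token:value substitution mentioned above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: titanicai-webapp
  namespace: titanicai
  annotations:
    # Placeholder ARN -- substituted with the freshly issued cert's ARN on each deploy
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:123456789012:certificate/abc-123
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer          # no Ingress needed; the LB terminates SSL itself
  selector:
    app: titanicai-webapp
  ports:
    - port: 443
      targetPort: 80
```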
The Cloud Experience
We have reached the third section of our analysis: The Cloud Experience. Here we will look at a few cloud UX and CLI aspects, all in the context of the TitanicAI deployment. I will present this section in tabular format.
| Category | Google Cloud | Azure Cloud | AWS Cloud |
| --- | --- | --- | --- |
| Cluster Provisioning Time | | | |
| Daily Cloud Cost (1× IP, 1× LB, 1× Node: 2 vCPU, 4 GB RAM, 32 GB disk) | | | |
| Cloud UI and Navigation (finding deployed resources) | | | |
| Setting up the TitanicAI Solution (ease of putting the solution together) | | | |
| Overall Satisfaction with the Cloud (would I carry on using the service) | | | |
I have to admit that every single implementation, whether Azure AKS, GCP GKE or AWS EKS, presented me with a degree of challenge. This could be automation-related problems, gaps around provisioning resources via the CLI, or simply YAML configuration aspects unique to a particular cloud provider.
It is worth noting that these challenges were simply gaps in my own knowledge rather than limitations on the cloud provider side. In the end, I was able to automate the provisioning of cloud resources, apply configuration and then deploy a single application, TitanicAI, into three different clouds: Azure, Google and AWS.
| Section | Google Cloud | Azure Cloud | AWS Cloud |
| --- | --- | --- | --- |
| Scripting and CLI | 5 | 4.5 | 4 |
| The Cloud Experience | 4.5 | 5 | 4 |
Different people will have different opinions and levels of experience with cloud platforms. The above is simply a qualitative and quantitative summary of my own journey with Kubernetes in the Cloud: AKS vs GKE vs EKS, and I hope it will help you on your own Kubernetes journey…
- plainkube.dev – My own experiences of Kubernetes implementation in an Enterprise environment
- platformops.dev – Automation scripts to run modern infrastructure and Kubernetes in the cloud: AWS, GCP, Azure and DevOps
- titanicai.dev – An example of using AI purely on Docker container technology and Kubernetes
- Docker Building Blocks – Learn Docker from the ground up in this step-by-step guide from QbitUniverse
- Kubernetes Building Blocks – Learn Kubernetes from the ground up in this step-by-step guide from QbitUniverse
- Helm Building Blocks – Learn Helm from the ground up in this step-by-step guide from QbitUniverse