Introduction

Recently, I delved into EKS setup, aiming to build a solid understanding of IAM management, focusing on the aws-auth ConfigMap and access policies. I also explored how to expose applications via the AWS Load Balancer Controller and made sure the correct tags were in place for automatic discovery of private and public subnets. Additionally, I wanted to install ArgoCD, expose its UI through an ingress, and deploy an application with ArgoCD. Since the 2048 game is widely used in Kubernetes deployment demos, I chose it for this demonstration.

In this blog, you’ll discover tips and tricks for:

  • Configuring the ArgoCD server to resolve the HTTP 307 redirect loop (too many redirects)
  • Managing EKS access entries and creating access policies using Terraform
  • Creating Helm releases instead of applying raw Kubernetes manifests during cluster provisioning

By the end of this blog, you’ll be able to access the 2048 game in your browser and deploy it using ArgoCD.

Requirements

The code for this demo is available at eks-argocd-deployment. To run it, you will need:

  1. An AWS account and local credentials configured. This demo uses some services that might not be part of the Free Tier allowance; however, do not expect to spend more than $1-2 if you keep the infrastructure running for 2 hours.
  2. Terraform v1.9.4 installed.
  3. As part of the demo you will also expose the application to the internet. Both the ArgoCD and the game ALBs are public and listen on both HTTP and HTTPS. You can reach them over HTTP by pasting the public ALB DNS names into your browser. If you want to access the ALBs over HTTPS, the Terraform code accepts a domain name and an R53 public hosted zone ID as inputs. You will need a domain registered with a domain registrar and an R53 public hosted zone to host it. Feel free to go to Namecheap and register a domain for under $2.

Architecture

[Architecture diagram]

The Terraform code will create a VPC with private and public subnets spread across separate AZs (Availability Zones), according to the count you specify. If that count is bigger than the number of AZs in your region, a lifecycle precondition will raise an error.
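For illustration, a minimal sketch of such a precondition, assuming a hypothetical var.az_count input; the resource and variable names below are not necessarily what the repo uses:

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr

  lifecycle {
    # Fail early at plan time if the requested AZ count cannot be satisfied.
    precondition {
      condition     = var.az_count <= length(data.aws_availability_zones.available.names)
      error_message = "az_count is larger than the number of AZs available in this region."
    }
  }
}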

Assuming that you are deploying the code from your laptop and have no VPN connection into AWS, you will need to enable public access on the EKS cluster API. This is enabled by default in the Terraform code and is required because we are creating Helm releases for ArgoCD and the AWS Load Balancer Controller. The EKS cluster will create endpoints in all the private and public subnets, and a separate security group will be created to allow access to these endpoints from a jumpbox EC2 instance. This way you can disable public access on the cluster API, although you will need to re-enable it if you want to delete your infrastructure via Terraform from your local machine.

Additional access to the EKS API is granted via the access entry/access policy combination. One thing to keep in mind is that the access entry must exist before the access policy can be associated with it.
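A minimal sketch of that combination in Terraform; the jumpbox role and resource names are hypothetical:

resource "aws_eks_access_entry" "jumpbox" {
  cluster_name  = aws_eks_cluster.this.name
  principal_arn = aws_iam_role.jumpbox.arn
}

resource "aws_eks_access_policy_association" "jumpbox_admin" {
  cluster_name  = aws_eks_cluster.this.name
  principal_arn = aws_iam_role.jumpbox.arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }

  # The access entry must exist before a policy can be associated with it.
  depends_on = [aws_eks_access_entry.jumpbox]
}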

SSM endpoints are deployed to allow access to the EC2 jumpbox via Session Manager. The instance role is configured as the principal of an EKS access entry, so the instance assuming that role can reach the Kubernetes API. The instance comes with awscli and kubectl preinstalled; all you need to do is update the kubeconfig file with the command below:

aws eks --region YOUR_REGION update-kubeconfig --name YOUR_PROJECT_NAME-cluster

A wildcard certificate will be created for the domain you provide and validated via DNS. This certificate will be used by both the ArgoCD and the game ALBs. When I first deployed the ArgoCD Helm chart and tried to expose the ArgoCD server via an ingress, I ran into an HTTP 307 redirect loop (the browser reports too many redirects): the ALB terminates TLS and forwards plain HTTP to the ArgoCD server, which by default redirects HTTP back to HTTPS. The solution was to set server.insecure to true. This way the ArgoCD server accepts HTTP traffic, which is absolutely fine since we are terminating TLS on the Application Load Balancer.
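For reference, a minimal sketch of the wildcard certificate with DNS validation; the variable names var.domain_name and var.hosted_zone_id are illustrative:

resource "aws_acm_certificate" "wildcard" {
  domain_name       = "*.${var.domain_name}"
  validation_method = "DNS"
}

resource "aws_route53_record" "cert_validation" {
  # One validation record per domain validation option returned by ACM.
  for_each = {
    for dvo in aws_acm_certificate.wildcard.domain_validation_options :
    dvo.domain_name => dvo
  }

  zone_id = var.hosted_zone_id
  name    = each.value.resource_record_name
  type    = each.value.resource_record_type
  ttl     = 60
  records = [each.value.resource_record_value]
}

And a sketch of the server.insecure fix applied through a Terraform Helm release, assuming the community argo-cd chart; the release name and namespace are illustrative:

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argo"
  create_namespace = true

  # Serve the ArgoCD API/UI over plain HTTP; TLS is terminated at the ALB.
  set {
    name  = "configs.params.server\\.insecure"
    value = "true"
  }
}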

The AWS Load Balancer Controller is the ingress controller that provisions the ALBs. All subnets carry either the kubernetes.io/role/elb or the kubernetes.io/role/internal-elb tag, which lets the controller auto-discover the right subnets to place the load balancers in.
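In Terraform, the tagging looks roughly like this; the CIDR blocks and AZ below are illustrative:

resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.0.0/24"
  availability_zone = "eu-west-1a"

  tags = {
    # Internet-facing load balancers are placed in subnets with this tag.
    "kubernetes.io/role/elb" = "1"
  }
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-west-1a"

  tags = {
    # Internal load balancers are placed in subnets with this tag.
    "kubernetes.io/role/internal-elb" = "1"
  }
}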

Demo

Apply the Terraform code from eks-argocd-deployment. After the infrastructure is deployed, go to your R53 public hosted zone and add a CNAME record pointing a subdomain of your choice (e.g. argocd.yourdomain.com) to the DNS name of the ArgoCD ALB. Set a TTL of 300s and wait for the changes to propagate. If you then open that subdomain in your browser, you should see the following:

[ArgoCD login page]
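If you prefer to manage that CNAME record with Terraform instead of the console, a minimal sketch; the subdomain and ALB DNS name are placeholders to replace with your own values:

resource "aws_route53_record" "argocd" {
  zone_id = var.hosted_zone_id
  name    = "argocd.yourdomain.com"
  type    = "CNAME"
  ttl     = 300

  # Replace with the DNS name of the ArgoCD ALB created by the ingress.
  records = ["your-argocd-alb-dns-name.eu-west-1.elb.amazonaws.com"]
}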

The username is admin and the password is contained in the argocd-initial-admin-secret secret in the argo namespace. The value is base64-encoded, so you can retrieve and decode it in one go:

kubectl get secret argocd-initial-admin-secret --namespace argo -o jsonpath="{.data.password}" | base64 -d

Now that ArgoCD is deployed, you can deploy the 2048 game. The kustomization file, together with the necessary manifests, is located in the 2048-game folder at the root of the repo. You will use the ArgoCD application defined in the argocd folder. Before deploying the manifests, make sure to replace the certificate ARN in the 2048-game/3-ingress.yaml file.

Go to the root of the repo and run the below command to create the ArgoCD application:

kubectl apply -f argocd/argocd-application.yaml

Go to the ArgoCD UI and you should see something similar to this:

[ArgoCD application view for the 2048 game]

Go to your public R53 hosted zone and add another CNAME record, pointing a subdomain of your choice (e.g. 2048-game.yourdomain.com) to the DNS name of the ALB created by the Argo application. Wait for the changes to propagate, then open your game subdomain in the browser; you should see something like this:

[The 2048 game running in the browser]

And with that, our demo comes to a close! I hope you found this walkthrough both informative and engaging, and that it gave you the basics of ArgoCD deployments. By following along, you should now have a clearer understanding of how to leverage ArgoCD for application deployments on Kubernetes and how to use the AWS Load Balancer Controller to expose your applications. Until next time, happy learning!