Journey Of A Microservice Application In The Kubernetes World
Running the application locally on k3s

TL;DR
In the previous article (the first of the series), we introduced the webhooks application and saw how to run it locally with Docker Compose. We will now deploy the same application on a local Kubernetes cluster.
Articles in this series
- Presentation of the webhooks.app
- Running the application on Kubernetes using Helm (the current article)
- Running the application on a Civo Kubernetes cluster
- Continuous Deployment using GitOps with ArgoCD
- Observability using the Loki stack
- Defining the application using Acorn
- Security considerations: security-related tools
- Security considerations: fixing misconfigurations
- Security considerations: policy enforcement
- Security considerations: vulnerability scanning (coming soon)
In this article
Now that we have a basic understanding of the application and know how to run it locally with Docker Compose, it’s time to go one step further and run it on Kubernetes. In this article, we will go through the following tasks (a rough sketch of the corresponding commands follows the list):
- create a local Kubernetes cluster based on k3s
- explain how the application can be packaged as a Helm chart and deploy it on the cluster
- add an Ingress Controller to expose the application so we can access it from our browser
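To give a rough idea of where we are heading, the commands below sketch these three steps at a high level. The release name (webhooks) and chart path (./helm/webhooks) are illustrative assumptions; the actual names and values are detailed in the rest of the article.

```bash
# 1. Create a local VM with Multipass and install k3s in it (detailed in the next section)
multipass launch --name kube
multipass exec kube -- bash -c "curl -sfL https://get.k3s.io | sh -"

# 2. Deploy the application with Helm (release name and chart path are assumptions)
helm install webhooks ./helm/webhooks

# 3. Check the Ingress resource exposing the application
kubectl get ingress
```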
Creation of a local k3s cluster
We will start by creating an Ubuntu VM using Multipass and installing k3s (a lightweight Kubernetes distribution from Rancher) inside it.
Note: there are many solutions for running a local Kubernetes cluster (Minikube, MicroK8s, k0s, …); Multipass + k3s is one of my favorites.
- Provisioning a local VM
Once Multipass is installed on our local machine, we create a VM named kube. It should only take a few dozen seconds for the VM to be up and running:
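A minimal sketch of the provisioning step; the resource values (2 CPUs, 4 GB of RAM, 20 GB of disk) are assumptions you can adjust to your machine:

```bash
# Create an Ubuntu VM named "kube" (resource values are illustrative)
# Note: older Multipass releases use --mem instead of --memory
multipass launch --name kube --cpus 2 --memory 4G --disk 20G

# Verify the VM is running and note its IP address
multipass info kube
```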