An intuitive graphical tool to define complex network policies

Dragonfly
Photo by Andreas Weilguny on Unsplash.

Cilium, the eBPF-based networking solution, just released a web editor to facilitate the creation of Kubernetes network policies. In this article, we will demo the tool and use it to define a sample network policy.
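To give a feel for what the editor produces, here is a minimal CiliumNetworkPolicy of the kind it can generate. The labels (`app: frontend`, `app: backend`), the policy name, and the port are placeholder values for this sketch, not output from the tool itself:

```shell
# Apply a sample policy allowing only frontend pods to reach backend pods on TCP/8080.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
EOF
```

Because the policy has an ingress section, all other inbound traffic to the selected pods is denied by default.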

A Quick Presentation of Cilium and eBPF

“Cilium is an open source project that has been designed on top of eBPF to address the networking, security, and visibility requirements of container workloads. It provides a high-level abstraction on top of eBPF.” — Cilium blog

Cilium has a wide application domain and is commonly used as a CNI plugin in Kubernetes, as illustrated in the following diagram:


Using another container runtime for better isolation and security

People standing on an ice floe.
Photo by Roxanne Desgagnés on Unsplash

As not all pods can be trusted, this article will show different options to enhance process isolation by using container runtimes other than the default one (runc). We will use the k0s Kubernetes distribution to illustrate all of this. If you do not know k0s, you can find a quick introduction in this article.
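As a sketch of the mechanism involved, an alternative runtime is registered through a RuntimeClass and then selected from a Pod spec. The handler name must match the one configured on the node's container runtime; `gvisor`/`runsc` below are example values, not something k0s ships by default:

```shell
# Register a RuntimeClass pointing at gVisor's runsc handler,
# then run a pod sandboxed by that runtime.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx
spec:
  runtimeClassName: gvisor
  containers:
    - name: nginx
      image: nginx:1.21
EOF
```

Pods that do not set `runtimeClassName` keep using the default runtime (runc).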

Create a K0s Cluster

In the introduction article, we detailed the steps needed to easily set up a k0s cluster. …


Get familiar with some key concepts of cluster management

Kitten pawing at flower
Photo by Dimitri Houtteman on Unsplash.

This article offers a back-to-basics approach to help you understand several actions that can be performed on a cluster’s nodes.

Our Test Cluster

Let’s consider a newly created kubeadm cluster containing one master and two worker nodes:

$ kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
k8s-1   Ready    control-plane,master   18m   v1.20.0
k8s-2   Ready    <none>                 18m   v1.20.0
k8s-3   Ready    <none>                 18m   v1.20.0
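Among the node actions covered later, the most common ones can be sketched as follows, using one of the worker nodes above (flag names are the ones valid around v1.20; check `kubectl drain --help` on your version):

```shell
# Mark the node unschedulable (no new pods land on it):
kubectl cordon k8s-2

# Evict the pods it hosts, skipping DaemonSet-managed ones:
kubectl drain k8s-2 --ignore-daemonsets --delete-emptydir-data

# Make it schedulable again once maintenance is done:
kubectl uncordon k8s-2
```

Cordoning only affects scheduling; draining actually evicts the running pods so they get recreated elsewhere.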

First, we will install Kubernetes Operational View (aka kube-ops-view). This application is very handy for seeing all the pods running in a cluster at a glance. There are currently 14 pods running:

  • Two of them are in the default namespace and are…


An example showing how kube-proxy plays with iptables

Examples of iptables rules. Photo by the author.

The Kubernetes network proxy (aka kube-proxy) is a daemon running on each node. It watches the Services defined in the cluster and manages the rules (iptables by default) that load-balance requests to a Service’s backend pods.
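In iptables mode, those rules can be inspected directly on any node. A quick way to see them, assuming the default chain names kube-proxy creates:

```shell
# Entry-point chain holding one rule per Service:
sudo iptables -t nat -L KUBE-SERVICES -n | head

# Per-service chains (KUBE-SVC-*) fan out to per-endpoint chains (KUBE-SEP-*):
sudo iptables -t nat -S | grep -E 'KUBE-(SVC|SEP)' | head
```

The `KUBE-SVC-*` chains use the iptables `statistic` module to pick a backend pseudo-randomly, which is how the load balancing is implemented.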


Demystifying the usage of CNI plugins


When setting up a Kubernetes cluster, the installation of a network plugin is mandatory for the cluster to be operational. To keep things simple, the role of a network plugin is to set up the network connectivity so Pods running on different nodes in the cluster can communicate with each other. Depending on the plugin, different network solutions can be provided: overlay (vxlan, IP-in-IP) or non-overlay.

To simplify the usage of a network plugin, Kubernetes exposes the Container Network Interface (aka CNI) so any network plugin that implements this interface can be used.
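Concretely, a CNI plugin is configured through a small JSON file that the kubelet picks up from the CNI configuration directory. The sketch below uses the reference `bridge` plugin; the file name, bridge name, and subnet are example values:

```shell
# Drop a minimal CNI config where the kubelet looks for it (default path):
sudo tee /etc/cni/net.d/10-bridge.conf <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
EOF
```

The `type` field names the plugin binary (looked up in `/opt/cni/bin` by default), which is how third-party plugins such as Cilium or Calico hook themselves in.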

Kubernetes also allows the use of kubenet


A first look into the Vanilla Stack, a new open-source-only, cloud-native stack based on Kubernetes

flowers of the vanilla plant with pods attached

A couple of weeks ago, I stumbled upon the Vanilla Stack, a technology stack based on Kubernetes and embedding many great open source components. In this article, which is mainly a presentation of the stack, we will quickly go through the installation process showing the different options available.

Vanilla Stack — A Gentle Introduction

The Vanilla Stack can be defined as a Kubernetes cluster shipped with many open source components.

Among the different solutions provided out of the box are:

  • Rook to manage distributed storage (filesystem, block, object)
  • OpenStack offering infrastructure as a service (IaaS)
  • Cloud Foundry offering a platform as a service (PaaS)

The following…


Building a k0s Kubernetes cluster across standard Ubuntu (amd64) and Raspberry Pi (arm64)

A cluster of mushrooms
Photo by Nareeta Martin on Unsplash

In a previous article, we presented the basics of k0s, a new lightweight Kubernetes distribution packaged in a single Go binary. We also set up a demo cluster using local VMs created with Multipass.

In this new article, we’ll continue our exploration of k0s and set up a simple cluster with one master and three workers. The workers will run on different architectures (amd64/arm64):

  • master: Ubuntu 20.04 running on DigitalOcean
  • worker1: Ubuntu 20.04
  • worker2: RPI 4 running RPI OS 64 bits
  • worker3: RPI 4 running Alpine Linux
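Because k0s ships single-binary builds for both amd64 and arm64, joining the workers follows the same steps on every machine. A sketch of the join flow (command names as in early k0s releases; file names are example values):

```shell
# On the master: create a join token for a worker node.
k0s token create --role=worker > worker-token

# On each worker (Ubuntu or Raspberry Pi alike), with the token copied over:
sudo k0s worker "$(cat worker-token)"
```

The architecture difference is handled entirely by downloading the matching k0s binary on each host.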

The master is running on DigitalOcean, but the workers are…


A first look into this new Kubernetes distribution

An egg with a smiley face drawn on placed in a cup.
Photo by Annie Spratt on Unsplash

A couple of days ago, a friend told me about Mirantis’s new Kubernetes distribution named k0s. We all know and love K8s, right? We also succumbed to K3s, the lightweight Kubernetes made by Rancher Labs and donated to the CNCF some time ago. It’s now time to discover a new distribution: k0s.

After a short introduction to k0s, we’ll set up a three-node cluster following the steps below:

  • Provisioning three virtual machines (Multipass in action)
  • Installing k0s on each of them
  • Setting up a simple k0s cluster configuration file
  • Initializing the cluster
  • Accessing the cluster
  • Adding worker nodes
  • Adding a…
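The provisioning step from the list above can be sketched with Multipass as follows. VM names and sizes are example values, and the memory flag is spelled `--mem` in the Multipass releases of that period (newer ones use `--memory`):

```shell
# Create three Ubuntu VMs to host the k0s nodes:
for node in k0s-1 k0s-2 k0s-3; do
  multipass launch --name "$node" --cpus 2 --mem 2G --disk 10G
done

# Check their state and IP addresses:
multipass list
```

Each VM then only needs the k0s binary installed before the cluster can be initialized.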


An example using Rancher’s RKE clusters

Photo by Markus Winkler on Unsplash

In a previous article, I explained the role etcd plays in a Kubernetes cluster.

We saw examples of the information etcd contains, the different ways it can be installed (inside or outside of the cluster), and how the etcd nodes exchange information through the Raft distributed consensus algorithm. All of that makes etcd a vital component of a Kubernetes cluster.

In today’s article, we will use Rancher’s RKE clusters and see how we can back up etcd from one cluster and restore it into the other one. …
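RKE exposes this workflow through built-in snapshot subcommands. A sketch, assuming each cluster is described by its own `cluster.yml` file and a shared snapshot name (both example values):

```shell
# On the source cluster: take a named etcd snapshot.
rke etcd snapshot-save --config cluster.yml --name before-migration

# On the target cluster: restore that snapshot.
rke etcd snapshot-restore --config cluster-2.yml --name before-migration
```

The snapshot files land on the etcd nodes themselves, so they must be copied to the target cluster's nodes before the restore step.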

Luc Juggery

Docker & Kubernetes trainer (CKA / CKAD), student of Chinese, Learning & Sharing
