----- Kubernetes -----
Run Kubernetes On Your Machine
Create Pods With Imperative Commands
K3s Cluster Made Easy With Multipass
k0s: Kubernetes in a Single Binary
Using a ServiceAccount
Give Access To Your Cluster With A Client Certificate
Etcd: The Brain Of A Kubernetes Cluster
Backup and Restore Etcd
----- Docker -----
About /var/run/docker.sock
Running a Container With a Non Root User
About <none> images
Clean Up Your Local Machine
Scroll down to see the latest ones…
Cilium, the eBPF-based networking solution, just released a web editor to facilitate the creation of Kubernetes network policies. In this article, we will demo the tool and use it to define a sample network policy.
“Cilium is an open source project that has been designed on top of eBPF to address the networking, security, and visibility requirements of container workloads. It provides a high-level abstraction on top of eBPF.” — Cilium blog
Cilium has a wide application domain and is commonly used as a CNI plugin in Kubernetes, as illustrated in the following diagram:
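As a preview of the kind of manifest the editor produces, a simple policy could look like the following. This is a minimal sketch of a standard Kubernetes NetworkPolicy; the app=frontend and app=backend labels are assumptions made for the example, not something mandated by the tool:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  # the policy applies to the backend pods (assumed label)
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    # only pods labeled app=frontend (assumed) may reach port 80
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80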
As not all pods can be trusted, this article will show different options to enhance process isolation through the use of container runtimes other than the default one (runc). We will use the k0s Kubernetes distribution to illustrate all of this. If you do not know k0s, you can find a quick introduction in this article.
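To give an idea of what using an alternative runtime looks like, here is a minimal sketch based on the standard RuntimeClass resource (GA since Kubernetes 1.20). It assumes a node whose containerd is already configured with a gVisor handler named runsc; the names gvisor and sandboxed are arbitrary:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
# maps to a runtime handler configured in containerd (assumed to exist)
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  # ask the kubelet to run this pod's containers with the gvisor RuntimeClass
  runtimeClassName: gvisor
  containers:
    - name: web
      image: nginx:1.19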
In the introduction article, we detailed the steps needed to easily set up a k0s cluster. …
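For reference, the quickest path boils down to a handful of commands. This is a sketch using the k0s CLI; the exact flags depend on the k0s version:

$ # download the k0s binary (the script detects the CPU architecture)
$ curl -sSLf https://get.k0s.sh | sudo sh
$ # install and start a single-node cluster (controller and worker in one)
$ sudo k0s install controller --single
$ sudo k0s start
$ # check the node with the embedded kubectl
$ sudo k0s kubectl get nodes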
This article offers a back-to-basics approach to help you understand several actions that can be done on a cluster’s nodes.
Let’s consider a newly created kubeadm cluster containing one master and two worker nodes:
$ kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
k8s-1   Ready    control-plane,master   18m   v1.20.0
k8s-2   Ready    <none>                 18m   v1.20.0
k8s-3   Ready    <none>                 18m   v1.20.0
First, we will install Kubernetes Operational View (aka kube-ops-view). This application is very handy for seeing all the pods running in a cluster at a glance. There are currently 14 pods running:
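One possible way to install it is sketched below, assuming the kustomize manifests shipped in the project's repository at github.com/hjacobs/kube-ops-view (the repository location, service name, and port come from those manifests and may have changed since):

$ git clone https://github.com/hjacobs/kube-ops-view
$ # the deploy folder contains a kustomization with all the resources
$ kubectl apply -k kube-ops-view/deploy
$ # expose the web UI locally
$ kubectl port-forward service/kube-ops-view 8080:80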
The Kubernetes network proxy (aka kube-proxy) is a daemon running on each node. It reflects the services defined in the cluster and manages the rules that load-balance requests to a service’s backend pods.
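In its default iptables mode, this is easy to observe directly on a node: kube-proxy maintains a KUBE-SERVICES chain in the nat table that dispatches traffic to per-endpoint KUBE-SEP-* chains. A quick sketch (the chain names are generated, so the exact output varies from cluster to cluster):

$ # dispatch rules: one entry per service clusterIP/port
$ sudo iptables -t nat -L KUBE-SERVICES -n | head
$ # endpoint chains: requests are statistically load-balanced across them
$ sudo iptables -t nat -L -n | grep KUBE-SEP- | head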
When setting up a Kubernetes cluster, installing a network plugin is mandatory for the cluster to be operational. Put simply, the role of a network plugin is to set up the network connectivity so that pods running on different nodes of the cluster can communicate with each other. Depending on the plugin, different network solutions can be provided: overlay (VXLAN, IP-in-IP) or non-overlay.
To simplify the usage of a network plugin, Kubernetes exposes the Container Network Interface (aka CNI) so any network plugin that implements this interface can be used.
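Concretely, installing a plugin usually means applying its manifest, which in turn drops a CNI configuration file on each node for the kubelet to pick up. A sketch with Flannel, assuming the manifest URL that was current at the time of writing:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ # each node now has the plugin's configuration in the standard CNI folder
$ ls /etc/cni/net.d
10-flannel.conflist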
Kubernetes also allows the use of kubenet…
A couple of weeks ago, I stumbled upon the Vanilla Stack, a technology stack based on Kubernetes and bundling many great open source components. In this article, which is mainly a presentation of the stack, we will quickly go through the installation process and show the different options available.
The Vanilla Stack can be defined as a Kubernetes cluster shipped with many open source components.
Among the different solutions provided out of the box are:
The following…
In a previous article, we presented the basics of k0s, a new lightweight Kubernetes distribution packaged in a single Go binary. We also set up a demo cluster using local VMs created with Multipass.
In this new article, we’ll continue our exploration of k0s and set up a simple cluster with one master and three workers. The workers will run on different architectures (amd64/arm64):
master: Ubuntu 20.04 running on DigitalOcean
worker1: Ubuntu 20.04
worker2: RPI 4 running RPI OS 64 bits
worker3: RPI 4 running Alpine Linux
The master is running on DigitalOcean, but the workers are…
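Once all the nodes have joined, the mixed architectures can be verified through the node labels, since each kubelet registers a kubernetes.io/arch label (a quick sketch):

$ # display the CPU architecture advertised by each node
$ kubectl get nodes -L kubernetes.io/arch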
A couple of days ago, a friend told me about Mirantis’s new Kubernetes distribution named k0s. We all know and love K8s, right? We also succumbed to K3s, the lightweight Kubernetes made by Rancher Labs and donated to the CNCF some time ago. It’s now time to discover a new distribution: k0s.
After a short introduction to k0s, we’ll set up a three-node cluster following the steps below:
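At a high level, the join flow looks like this. This is a sketch of the k0s CLI; the exact commands depend on the k0s version, and the token file path is just an example:

$ # on the controller: generate a join token for the workers
$ sudo k0s token create --role=worker > worker-token
$ # on each worker: register against the controller and start the service
$ sudo k0s install worker --token-file /path/to/worker-token
$ sudo k0s start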
In a previous article, I explained the role etcd plays in a Kubernetes cluster.
We saw examples of the information etcd contains, the different ways it can be installed (inside or outside of the cluster), and how the etcd nodes exchange information through the Raft distributed consensus algorithm. All of that makes etcd a vital component of a Kubernetes cluster.
In today’s article, we will use Rancher’s RKE clusters and see how we can back up etcd from one cluster and restore it into the other one. …
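As a preview, RKE ships first-class commands for this, so the whole round trip can be sketched as follows (assuming each cluster is described by its own cluster.yml file; the snapshot name is arbitrary):

$ # on the source cluster: take an etcd snapshot
$ rke etcd snapshot-save --config cluster-1.yml --name migration-snapshot
$ # on the target cluster: restore the cluster state from that snapshot
$ rke etcd snapshot-restore --config cluster-2.yml --name migration-snapshot

In practice, the snapshot file also has to be made available to the target cluster's nodes, or stored in S3, which both commands support through additional flags.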
Docker & Kubernetes trainer (CKA / CKAD), Chinese language student, Learning & Sharing