Updated: Sep 8
Working with Kubernetes on a local machine, whether you are a Dev or an Ops, is not as easy as one might think. So, how do you easily create a local Kubernetes cluster that meets these needs? At SoKube we heavily use k3d and k3s for these purposes.
What is k3d/k3s
What’s new with k3d v3
Create a simple kubernetes cluster on your local machine
Create a multi-server (masters) and multi-agent (workers) kubernetes cluster on your local machine
Create a cluster with a specific Kubernetes version
How to replace the default CNI plugin of k3s
How to replace the default ingress controller of k3s
How to use a dedicated registry to download images with k3s
What are the other Alternatives
k3s is a very efficient and lightweight, fully compliant Kubernetes distribution. k3d is a utility designed to easily run k3s in Docker; it provides a simple CLI to create, run, and delete a fully compliant Kubernetes cluster with 1 to n nodes.
Flannel: a very simple L2 overlay network that satisfies the Kubernetes requirements. It is a CNI (Container Network Interface) plugin, like Calico, Romana, or Weave-net. Flannel doesn’t support Kubernetes Network Policies, but it can be replaced by Calico (see the next sections).
CoreDNS: a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS
Traefik: a modern HTTP reverse proxy and load balancer. In a later section, I will also show how to replace it with either Traefik v2 or Nginx.
Klipper Load Balancer: a service load balancer that uses available host ports.
SQLite3: the default storage backend (MySQL, Postgres, and etcd3 are also supported)
Containerd: a container runtime, like Docker but without the image build part
These components were chosen to make the distribution as lightweight as possible. But as we will see later in this blog, k3s is a modular distribution whose components can easily be replaced.
Recently k3s joined the Cloud Native Computing Foundation (CNCF) at the sandbox level as the first Kubernetes distribution (raising a lot of debate about whether k3s should instead be a Kubernetes sub-project).
Installation is very easy and available through many installers: wget, curl, Homebrew, AUR, … and supports all well-known OSes (linux, darwin, windows) and processor architectures (386, amd64)!
Note that you only need to install the k3d client, which will create a k3s cluster using the right Docker image.
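As a hedged sketch, the official install script can be used to fetch a pinned k3d release (the URL follows the k3d project documentation; the `TAG` value is just an example, pick the v3 release you need):

```shell
# Install a specific k3d release via the official install script
# (TAG pins the version; omit it to get the latest release)
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.4.0 bash

# verify the installed client version
k3d version
```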
Once installed, configure the completion with your preferred shell (bash, zsh, powershell), for instance with zsh:
k3d completion zsh > ~/.zsh/completions/_k3d
source ~/.zshrc
What’s new with k3d v3
In one year, the k3d team did a great job and completely rewrote k3d for v3. It is therefore not a simple major version bump: they have implemented new concepts and structures to make it an evolving tool with very practical and interesting features.
New terminology of k3d and k3s: to be as inclusive to the community as possible, the words "Server" and "Agent" are now used to designate "master" and "worker" nodes.
Every cluster you create will now spawn at least 2 containers: 1 load balancer and 1 “server” node. The load balancer will be the access point to the Kubernetes API, so even for multi-server clusters, you only need to expose a single api port. The load balancer will then take care of proxying your requests to the correct server node. (can be disabled with the --no-lb flag)
Adoption of the “NOUN VERB” syntax: This breaking change makes it easier to add new nouns (i.e. k3d managed objects) and is similar to many other cloud-native CLIs (e.g. gcloud, awscli, azure cli, ...) and also provides a cleaner CLI hierarchy.
Support of multi-server clusters (dqlite), with hot-reloaded configuration when a new server node is added to the cluster
Handling nodes independently from clusters: k3d node create/start/stop/delete mynode
Shell completion via k3d completion [zsh | bash | psh | fish]
Basic plugin system support (k3d my-plugin)
My first k3d cluster
Let’s create a simple cluster named “dev” with 1 load balancer and 1 node (acting as both server and agent):
k3d cluster create dev --port 8080:80@loadbalancer --port 8443:443@loadbalancer
docker ps will show the underlying containers created by this command:
--port 8080:80@loadbalancer: adds a mapping of local host port 8080 to load balancer port 80, which will proxy requests to port 80 on all agent nodes
--api-port 6443: by default, no API port is exposed (no host port mapping). This flag makes the k3s API server listen on port 6443, with that port mapped to the host system. The load balancer is the access point to the Kubernetes API, so even for multi-server clusters you only need to expose a single API port; the load balancer then proxies your requests to the appropriate server node
-p "32000-32767:32000-32767@loadbalancer": you may also expose a NodePort range (if you want to avoid the ingress controller).
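As a hedged illustration of the NodePort mapping (the whoami image and port 32000 are arbitrary choices for this sketch, not from the original article):

```shell
# deploy a demo app and expose it on NodePort 32000,
# inside the range mapped to the load balancer above
kubectl create deployment whoami --image=containous/whoami
kubectl create service nodeport whoami --tcp=80:80 --node-port=32000

# the NodePort is then reachable directly on the host
curl -s http://localhost:32000
```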
By default, k3d directly switches the default kubeconfig's current-context to the new cluster's context, so your “~/.kube/config” is automatically updated (check with “kubectl config current-context”).
You can disable this behaviour with the "--update-default-kubeconfig=false" flag; in that case you will need to create a kubeconfig file and export the KUBECONFIG variable:
export KUBECONFIG=$(k3d kubeconfig write dev)
Removing the cluster will also delete the entry in the kubeconfig file.
k3d provides some commands to easily manipulate the kubeconfig:
# get kubeconfig from cluster dev
k3d kubeconfig get dev

# create a kubeconfig file in $HOME/.k3d/kubeconfig-dev.yaml
k3d kubeconfig write dev

# get kubeconfig from cluster(s) and
# merge it/them into a file in $HOME/.k3d or another file
k3d kubeconfig merge ...
Stopping a cluster is very easy:
k3d cluster stop dev
Restarting the cluster restores its state as it was before stopping:
k3d cluster start dev
Deleting a cluster is as simple as:
k3d cluster delete dev
Test with a simple nginx container application
Once the cluster is running, execute the following commands to test it with a simple nginx container:
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF
To test, open http://localhost:8080/
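The same check can be done from the command line (a sketch; it assumes the 8080:80@loadbalancer port mapping used when the cluster was created):

```shell
# the request goes through the k3d load balancer (host port 8080),
# then through Traefik, which routes it to the nginx service
curl -s http://localhost:8080/ | grep "Welcome to nginx"
```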
A multi-server and multi-agent kubernetes cluster on your local machine
For testing purposes, and to be as close as possible to a production Kubernetes cluster, you can create a multi-server and/or multi-agent cluster:
k3d cluster create test --port 8080:80@loadbalancer --port 8443:443@loadbalancer --api-port 6443 --servers 3 --agents 3
Get the list of nodes:
> kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
k3d-test-agent-1    Ready    <none>   20m   v1.18.2+k3s1
k3d-test-agent-0    Ready    <none>   21m   v1.18.2+k3s1
k3d-test-agent-2    Ready    <none>   20m   v1.18.2+k3s1
k3d-test-server-2   Ready    master   21m   v1.18.2+k3s1
k3d-test-server-0   Ready    master   21m   v1.18.2+k3s1
k3d-test-server-1   Ready    master   21m   v1.18.2+k3s1
Once all nodes are running, you can deploy the same nginx application for testing. Scale the application to 3 replicas:
kubectl scale deployment nginx --replicas 3
Pods should be spread over the agent nodes:
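A quick way to check which node each replica landed on (a sketch, assuming the nginx deployment created above):

```shell
# the NODE column shows how the scheduler spread the replicas
kubectl get pods -l app=nginx -o wide
```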
This cluster consists of 7 Docker containers (1 load balancer, 3 containers for the servers, and 3 for the agents). Note that server nodes also run workloads:
After the cluster has been created it is also possible to add nodes:
k3d node create newserver --cluster test --role agent
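You can then verify that the node joined the cluster (a sketch):

```shell
# list all k3d-managed nodes and the cluster they belong to
k3d node list

# the new node should also appear from Kubernetes' point of view
kubectl get nodes
```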
A cluster with a specific Kubernetes version
It can be very convenient to create a Kubernetes cluster with a specific version, either an older one:
k3d cluster create test --port 8080:80@loadbalancer --port 8443:443@loadbalancer --image rancher/k3s:v1.17.13-k3s2
or a newer one:
k3d cluster create test --port 8080:80@loadbalancer --port 8443:443@loadbalancer --image rancher/k3s:v1.19.3-k3s3
The list of available versions can be found in the k3s Docker repository. Currently, the oldest version in this repo is v1.16.x.
Use Calico instead of Flannel as the CNI plugin
Flannel is a very good and lightweight CNI plugin but doesn’t support the Kubernetes NetworkPolicy resources (note that a NetworkPolicy will be applied without any error, but also without any effect)! The modularity of k3s allows the default CNI to be replaced with Calico. In order to deploy Calico, 2 features of k3s are used:
--k3s-server-arg '--flannel-backend=none': removes Flannel from the initial k3s installation.
‘Auto-Deploying Manifests’, a practical feature of k3s: any file found in /var/lib/rancher/k3s/server/manifests will automatically be deployed to Kubernetes, in a manner similar to kubectl apply
So you will need to save the calico.yaml configuration file locally and then create the cluster with the following args:
k3d cluster create calico --k3s-server-arg '--flannel-backend=none' --volume "$(pwd)/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml"
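Before testing network policies, it is worth checking that Calico came up correctly. A hedged sketch (the label follows the standard calico.yaml manifest, which deploys into kube-system):

```shell
# wait until the calico-node DaemonSet pods are ready
kubectl -n kube-system wait --for=condition=Ready pod \
  -l k8s-app=calico-node --timeout=120s

# nodes only become Ready once the CNI is functional
kubectl get nodes
```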
# create 2 pods 'web' and 'test'
kubectl run web --image nginx --labels app=web --expose --port 80
kubectl run test --image alpine -- sleep 3600

# check pod "test" can access pod "web"
kubectl exec -it test -- wget -qO- --timeout=2 http://web
Everything should be OK. Let’s add a NetworkPolicy that denies all incoming traffic:
cat <<EOF | kubectl apply -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []
EOF

# check pod "test" cannot access pod "web"
kubectl exec -it test -- wget -qO- --timeout=2 http://web
Now the “web” pod cannot be accessed anymore!
Change the default Ingress Controller
By default, k3s uses Traefik v1 as its ingress controller, but this is an old version: Traefik v2 was released more than a year ago with lots of nice features, such as TCP support with SNI routing & multi-protocol ports, canary deployments, mirroring with service load balancers, and a new dashboard & WebUI. There are plans to make Traefik v2 the default, but once again the modularity of k3s makes it possible to replace the default ingress controller today, using:
--k3s-server-arg '--no-deploy=traefik': to remove Traefik v1 from the k3s installation
‘Auto-Deploying Manifests’ as mentioned previously
Helm chart operator: k3s includes a Helm controller that manages Helm charts using a HelmChart Custom Resource Definition (CRD)
Replacing it with the Nginx ingress controller:
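A hedged sketch combining the three features above (the chart name and repo URL refer to the upstream ingress-nginx Helm chart and are my assumption, not taken from the original article): write a HelmChart custom resource to a local file, then mount it into the auto-deploy manifests directory while disabling Traefik.

```shell
# HelmChart CR understood by k3s's built-in Helm controller
cat > helm-ingress-nginx.yaml <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  repo: https://kubernetes.github.io/ingress-nginx
  chart: ingress-nginx
  targetNamespace: kube-system
EOF

# create the cluster without Traefik and auto-deploy the chart
k3d cluster create nginx \
  --port 8080:80@loadbalancer --port 8443:443@loadbalancer \
  --k3s-server-arg '--no-deploy=traefik' \
  --volume "$(pwd)/helm-ingress-nginx.yaml:/var/lib/rancher/k3s/server/manifests/helm-ingress-nginx.yaml"
```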