Introduction
This tutorial will guide you through the setup of a Kubernetes cluster on Hetzner Cloud servers. The resulting cluster will support the full range of Kubernetes objects, including the LoadBalancer service type and Persistent Volumes, and the servers will communicate over a private network. It will not cover high availability of the Kubernetes control plane.
Prerequisites
- A Hetzner Cloud account
- Familiarity with the concepts of Kubernetes
- Familiarity with Linux and working on the shell
- The ssh, hcloud, kubectl and helm command line tools installed
This tutorial was tested on Ubuntu 22.04 Hetzner Cloud servers with Kubernetes version v1.27.1.
Terminology and Notation
Commands
local$ <command> # This command must be executed on your local machine
all$ <command> # This command must be executed on all servers as root
master$ <command> # This command must be executed on the master server as root
worker$ <command> # This command must be executed on all worker servers as root
Multiline commands end with a backslash. E.g.
local$ command --short-option \
--with-quite-long-parameter=9001
Files
A file to be configured will be referenced by its path, e.g. /path/to/file.txt
The full content of the file follows in the box after that sentence
IP Addresses
<10.0.0.X> - internal IP address of server X
<116.203.0.X> - public IP address of server X
<159.69.0.1> - public Floating IP address
Explanations
This is a deeper explanation of the previous content. It might contain useful information, but if you already understood everything, it can safely be skipped.
Step 1 - Create new Hetzner Cloud resources
For this tutorial the following resources will be used
- 1 Hetzner cloud network
- 1 Hetzner cloud CX11 server
- 2 Hetzner cloud CX21 servers
- 1 IPv4 Floating IP address
- 1 IPv6 Floating IP address (optional)
The CX11 server will be the master node and the CX21 servers the worker nodes. The Floating IP address(es) will be used for LoadBalancer services.
While this guide creates the resources manually, a production setup should be set up with Terraform, Ansible or comparable tools. Have a look at the Terraform Hcloud How-to for an example and a guideline on how to do so.
Create the required resources in the web interface or with the hcloud CLI tool
local$ hcloud network create --name kubernetes --ip-range 10.98.0.0/16
local$ hcloud network add-subnet kubernetes --network-zone eu-central --type server --ip-range 10.98.0.0/16
local$ hcloud server create --type cx11 --name master-1 --image ubuntu-22.04 --ssh-key <ssh_key_id> --network <network_id>
local$ hcloud server create --type cx21 --name worker-1 --image ubuntu-22.04 --ssh-key <ssh_key_id> --network <network_id>
local$ hcloud server create --type cx21 --name worker-2 --image ubuntu-22.04 --ssh-key <ssh_key_id> --network <network_id>
local$ hcloud floating-ip create --type ipv4 --home-location nbg1
local$ hcloud floating-ip create --type ipv6 --home-location nbg1 # (Optional)
The existing ssh keys can be listed with hcloud ssh-key list. The network ID is printed when creating the network and can be looked up with hcloud network list.
The names of the servers do not affect the cluster creation, but should not be changed later on. Feel free to change the image and server type according to your needs.
Step 1.1 - Update the servers (Optional)
It is recommended to update the servers after creation. Log onto each server and run
all$ apt-get update
all$ apt-get dist-upgrade
all$ reboot
Step 2 - Configure the network
Each server will have a private and a public IP address configured. These are noted as <10.0.0.X> for the private and <116.203.0.X> for the public IP address.
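If you need to look up these addresses later, the hcloud CLI can show them, for example (the exact output columns may differ slightly between CLI versions):
local$ hcloud server list
local$ hcloud server describe master-1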
Step 2.1 - Configure Floating IPs
For the LoadBalancer service type a Floating IP was registered in the first step. This IP address will be noted as <159.69.0.1>. For the setup to actually work, the IP address needs to be configured on all worker nodes. Create a file /etc/network/interfaces.d/60-floating-ip.cfg
auto eth0:1
iface eth0:1 inet static
address <159.69.0.1>
netmask 32
If you also registered an IPv6 Floating IP, extend the config with
iface eth0:1 inet6 static
address <2001:db8:1234::1>
netmask 64
Then restart your networking
worker$ systemctl restart networking.service
The Floating IP addresses are not yet assigned to a server, so don't worry that they can't be reached for now.
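To check that the Floating IP has been configured on a worker, inspect the interface after the networking restart (a quick sanity check, not a step from the original sequence):
worker$ ip addr show dev eth0 | grep <159.69.0.1>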
Step 2.2 - Configure IPv6 to IPv4 translation (Optional)
If you have registered an IPv6 Floating IP, the worker nodes need to be configured to translate IPv6 to IPv4, as Kubernetes currently does not support an IPv4/6 dual stack. This tutorial will (ab)use gobetween for the translation.
Gobetween is a lightweight Layer 4 load balancer that supports both TCP and UDP. Any other service which translates between IPv4 and IPv6 can be used instead.
To install gobetween, download and extract the latest version from their GitHub repository and place it in /usr/local/bin/. As of writing this guide, the current version is 0.7.0
worker$ cd /usr/local/bin
worker$ wget https://github.com/yyyar/gobetween/releases/download/0.7.0/gobetween_0.7.0_linux_amd64.tar.gz
worker$ tar -xvf gobetween_0.7.0_linux_amd64.tar.gz gobetween
worker$ rm gobetween_0.7.0_linux_amd64.tar.gz
Then create a small systemd unit file /etc/systemd/system/gobetween.service
[Unit]
Description=Gobetween - modern LB for cloud era
Documentation=https://github.com/yyyar/gobetween/wiki
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
PIDFile=/run/gobetween.pid
ExecStart=/usr/local/bin/gobetween from-file /etc/gobetween.json -f json
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
And a configuration file /etc/gobetween.json
{
"api": {
"enabled": false
},
"logging": {
"level": "info",
"output": "stdout"
},
"servers": {
"IPv6-tcp-http": {
"bind": "[<2001:db8:1234::1>]:80",
"protocol": "tcp",
"discovery": {
"kind": "static",
"static_list": [
"<159.69.0.1>:80"
]
}
},
"IPv6-tcp-https": {
"bind": "[<2001:db8:1234::1>]:443",
"protocol": "tcp",
"discovery": {
"kind": "static",
"static_list": [
"<159.69.0.1>:443"
]
}
}
}
}
This config file should be adapted to match your needs. Check out the gobetween documentation for further information.
The basic idea is to listen on each used port and protocol on the IPv6 Floating IP address and forward it to the IPv4 Floating IP address.
When the configuration suits your needs you can reload the systemd unit files and start the gobetween service
worker$ systemctl daemon-reload
worker$ systemctl start gobetween.service
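You should also enable the service so it survives reboots, and verify that it is running and listening (an additional check, not part of the original sequence):
worker$ systemctl enable gobetween.service
worker$ systemctl status gobetween.service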
Step 3 - Install Kubernetes
To install the Kubernetes cluster on the servers we will utilise kubeadm. The interface between Kubernetes and the Hetzner Cloud will be the Hetzner Cloud Controller Manager and the Hetzner Cloud Container Storage Interface. Both tools are provided by the Hetzner Cloud team.
Step 3.1 - Prepare the Cloud Controller Manager
The Hetzner Cloud Controller Manager requires the Kubernetes cluster to be set up to use an external cloud provider. Therefore, create a file /etc/systemd/system/kubelet.service.d/20-hetzner-cloud.conf on each server
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
This will make sure that kubelet is started with the --cloud-provider=external flag. Any further configuration will be handled by kubeadm later on.
Step 3.2 - Install containerd and Kubernetes Packages
As containerd and Kubernetes are installed on a distribution with systemd as init system, containerd should be set up to use the systemd cgroup driver. To do so, download the containerd.service unit file on each server
all$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
all$ mv containerd.service /usr/lib/systemd/system/
Then, reload the systemd unit files on all servers
all$ systemctl daemon-reload
Then, install the required packages by executing the following commands on all servers
To double-check the architecture of your server, use:
all$ dpkg --print-architecture
- Install containerd
Make sure to download the archive that matches your own architecture (containerd-<VERSION>-<OS>-<ARCH>.tar.gz)
all$ wget https://github.com/containerd/containerd/releases/download/v1.6.2/containerd-1.6.2-linux-amd64.tar.gz
all$ tar Czxvf /usr/local containerd-1.6.2-linux-amd64.tar.gz
all$ systemctl enable --now containerd
all$ systemctl status containerd
- Install runc
Make sure to download the binary that matches your own architecture (runc.<ARCH>)
all$ wget https://github.com/opencontainers/runc/releases/download/v1.1.6/runc.amd64
all$ install -m 755 runc.amd64 /usr/local/sbin/runc
- Install CNI plugins
Make sure to download the archive that matches your own architecture (cni-plugins-<OS>-<ARCH>-<VERSION>.tgz)
all$ wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
all$ mkdir -p /opt/cni/bin
all$ tar Czxvf /opt/cni/bin cni-plugins-linux-amd64-v1.2.0.tgz
- Set up config.toml
all$ mkdir -p /etc/containerd/
all$ containerd config default | sudo tee /etc/containerd/config.toml
To use the systemd cgroup driver with runc, edit the /etc/containerd/config.toml file and set SystemdCgroup to true:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
all$ systemctl restart containerd
- Install Kubernetes
all$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
all$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://packages.cloud.google.com/apt/ kubernetes-xenial main
EOF
all$ apt-get update
all$ apt-get install kubeadm kubectl kubelet
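Optionally, pin the package versions so that a routine apt-get upgrade does not unexpectedly bump the cluster components; this is common kubeadm practice, not a step from the original tutorial:
all$ apt-mark hold kubeadm kubectl kubelet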
- Set the sysctl settings
You need to make sure that the system can actually forward traffic between the nodes and pods. Set the following sysctl settings on each server
all$ cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
all$ modprobe overlay
all$ modprobe br_netfilter
all$ cat <<EOF | tee /etc/sysctl.d/k8s.conf
# Allow IP forwarding for kubernetes
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
EOF
all$ sysctl --system
These settings allow forwarding of IPv4 and IPv6 packets between multiple network interfaces. This is required because each container has its own virtual network interface.
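To verify that the kernel modules are loaded and the settings took effect, you can query them directly (a quick sanity check):
all$ lsmod | grep br_netfilter
all$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.ipv6.conf.default.forwarding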
Step 3.3 - Setup control plane
The servers are now prepared to finally install the Kubernetes cluster. Log on to the master node and initialize the cluster
master$ kubeadm config images pull
master$ kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.27.1 \
--ignore-preflight-errors=NumCPU \
--upload-certs \
--apiserver-cert-extra-sans <10.0.0.1>
If the first command does not work, execute containerd config default > /etc/containerd/config.toml to reset the containerd configuration and try again.
The kubeadm init process will print a kubeadm join command in between. You should copy that command for later use (not required, as you can always create a new token when needed). The --apiserver-cert-extra-sans flag ensures your internal IP is recognized as a valid IP for the apiserver.
If the server has a flavor bigger than CX11, the --ignore-preflight-errors flag does not need to be passed. You can add additional flags according to your needs. Check the documentation for further information.
When the initialisation is complete, begin setting up the required master components in the cluster. For ease of use, configure the kubeconfig of the root user to use the admin config of the Kubernetes cluster
master$ mkdir -p /root/.kube
master$ cp -i /etc/kubernetes/admin.conf /root/.kube/config
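At this point you can already talk to the API server from the master node; the node will show up as NotReady until the network plugin is deployed further below (a quick sanity check):
master$ kubectl get nodes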
The cloud controller manager and the container storage interface require two secrets in the kube-system namespace containing access tokens for the Hetzner Cloud API
master$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: "<hetzner_api_token>"
  network: "<hetzner_network_id>"
---
apiVersion: v1
kind: Secret
metadata:
  name: hcloud-csi
  namespace: kube-system
stringData:
  token: "<hetzner_api_token>"
EOF
Both services can use the same token, but if you want to be able to revoke them independently of each other, you need to create two tokens.
To create a Hetzner Cloud API token, log in to the web interface, navigate to your project -> Security -> API tokens and create a new token. You will not be able to fetch the secret key again later on, so don't close the popup before you have copied the token.
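You can confirm that both secrets were created before deploying the controllers (a quick check):
master$ kubectl -n kube-system get secrets hcloud hcloud-csi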
Now deploy the Hetzner Cloud controller manager into the cluster
master$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/ccm-networks.yaml
And set up the cluster networking
master$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This tutorial uses flannel, as this CNI has very low maintenance requirements. For other options and comparisons check the official documentation.
As Kubernetes with the external cloud provider flag activated will add a taint to uninitialized nodes, the cluster-critical pods need to be patched to tolerate this taint
master$ kubectl -n kube-flannel patch ds kube-flannel-ds --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
master$ kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
Taints are flags on a node that ensure only specific sets of pods (defined by their tolerations) are allowed to be scheduled on that node. The controller-manager will add a node.cloudprovider.kubernetes.io/uninitialized taint to each node which is not yet initialized by the controller-manager. But for the initialization to succeed, the networking on the node and the Cluster-DNS must be functional. Further information on taints and tolerations can be found in the documentation.
If you run a single node cluster, take care to remove the control plane taint from the master node so that pods can be scheduled on it:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Last but not least deploy the Hetzner Cloud Container Storage Interface to the cluster
master$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/main/deploy/kubernetes/hcloud-csi.yml
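To verify that dynamic volume provisioning works, you can create a small test PersistentVolumeClaim. This is a minimal sketch that assumes the CSI driver's default hcloud-volumes StorageClass and uses the 10Gi minimum volume size:
master$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes
EOF
The claim may stay Pending until a pod actually uses it, as the StorageClass typically binds volumes on first consumption. Delete it again with kubectl delete pvc csi-test-pvc when you are done.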
Your control plane is now ready to use. Fetch the kubeconfig from the master server to be able to use kubectl locally
local$ scp root@<116.203.0.1>:/etc/kubernetes/admin.conf ${HOME}/.kube/config
Or merge your existing kubeconfig with the admin.conf accordingly.
Step 3.4 - Secure nodes
Using the Hetzner Firewall you can secure your nodes. Create a firewall and allow traffic between the node servers. Replace <116.203.0.x> with the public IPs of your node servers.
local$ hcloud firewall create --name k8s-nodes
local$ hcloud firewall add-rule k8s-nodes --protocol=tcp --direction=in --source-ips <116.203.0.1>/32 --source-ips <116.203.0.2>/32 --source-ips <116.203.0.3>/32 --port any
local$ hcloud firewall add-rule k8s-nodes --protocol=udp --direction=in --source-ips <116.203.0.1>/32 --source-ips <116.203.0.2>/32 --source-ips <116.203.0.3>/32 --port any
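The firewall only takes effect once it is attached to the servers. Assuming the server names from Step 1, this can be done as follows (check hcloud firewall apply-to-resource --help for the exact flags of your CLI version):
local$ hcloud firewall apply-to-resource k8s-nodes --type server --server master-1
local$ hcloud firewall apply-to-resource k8s-nodes --type server --server worker-1
local$ hcloud firewall apply-to-resource k8s-nodes --type server --server worker-2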
Step 3.5 - Join worker nodes
In the kubeadm init process a join command for the worker nodes was printed. If you don't have that command noted anymore, a new one can be generated by running the following command on the master node
master$ kubeadm token create --print-join-command
Then log on to each worker node and execute the join command
worker$ kubeadm join <116.203.0.1>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
When the join was successful, list all nodes
local$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master-1   Ready    control-plane   11m   v1.27.1
worker-1   Ready    <none>          5m    v1.27.1
worker-2   Ready    <none>          5m    v1.27.1
Step 3.6 - Setup LoadBalancing (Optional)
At the time of writing, Hetzner Cloud did not support LoadBalancer as a Service yet. This tutorial explains how to install MetalLB to make the LoadBalancer service type available in the cluster. For more information about the option to use Hetzner Cloud Load Balancers with Kubernetes Services, see Kubernetes Cloud Controller Manager for Hetzner Cloud.
A Kubernetes LoadBalancer is typically managed by the cloud controller, but it is not implemented in the hcloud cloud controller (because it was not supported by Hetzner Cloud at the time). MetalLB is a project which provides the LoadBalancer type for bare metal Kubernetes clusters. It announces changes of the IP address endpoint to neighboring routers, but we will just make use of the LoadBalancer provisioning in the cluster.
These MetalLB instructions use Helm v2. If you have not initialized helm and its Tiller component yet, run the following commands
local$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
local$ helm init --service-account tiller --history-max 200
Now you can install MetalLB with helm
local$ kubectl create namespace metallb
local$ helm install --name metallb --namespace metallb stable/metallb
Then create the config for MetalLB
local$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb
  name: metallb-config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <159.69.0.1>/32
EOF
This will configure MetalLB to use the IPv4 Floating IP as LoadBalancer IP. MetalLB can reuse IPs for multiple LoadBalancer services if some conditions are met. This can be enabled by adding the annotation metallb.universe.tf/allow-shared-ip to the service.
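To illustrate how the Floating IP is consumed, a Service of type LoadBalancer could request it like this. The name, selector and ports are placeholders for your own application, not part of the original setup:
local$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    metallb.universe.tf/allow-shared-ip: "default"
spec:
  type: LoadBalancer
  loadBalancerIP: <159.69.0.1>
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
EOF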
Step 3.7 - Setup Floating IP failover (Optional)
As the Floating IP is bound to one server only, I wrote a little controller which will run in the cluster and reassign the Floating IP to another server if the currently assigned node becomes NotReady.
If you do not ensure that the Floating IP is always assigned to a node in status Ready, your cluster will not be highly available, as traffic can be routed to a (potentially) broken node.
To deploy the Hetzner Cloud Floating IP controller create the following resources
local$ kubectl create namespace fip-controller
local$ kubectl apply -f https://raw.githubusercontent.com/cbeneke/hcloud-fip-controller/master/deploy/rbac.yaml
local$ kubectl apply -f https://raw.githubusercontent.com/cbeneke/hcloud-fip-controller/master/deploy/deployment.yaml
local$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: fip-controller-config
  namespace: fip-controller
data:
  config.json: |
    {
      "hcloudFloatingIPs": [ "<159.69.0.1>" ],
      "nodeAddressType": "external"
    }
---
apiVersion: v1
kind: Secret
metadata:
  name: fip-controller-secrets
  namespace: fip-controller
stringData:
  HCLOUD_API_TOKEN: <hetzner_api_token>
EOF
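After applying the resources you can check that the controller pods came up and picked up the configuration (a quick check):
local$ kubectl -n fip-controller get pods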
If you did not set up the hcloud cloud controller, the external IP of the nodes might be announced as the internal IP of the nodes in the Kubernetes cluster. In that event you must change nodeAddressType in the config to internal for the Floating IP controller to work correctly.
Please be aware that the project is still in development and the config might change drastically in the future. Refer to the GitHub repository for config options etc.
Conclusion
Congratulations, you now have a fully featured Kubernetes cluster running on Hetzner Cloud servers. Next you should start installing your first applications on the cluster or look into high availability for your control plane.