Introduction
This tutorial will guide you through the setup of a Kubernetes cluster on Hetzner Cloud servers. The resulting cluster will support the full range of Kubernetes objects including LoadBalancer service types, Persistent Volumes and a private network between the servers. It will not cover high availability of the Kubernetes control plane.
Prerequisites
- A Hetzner Cloud account
- Familiarity with the concepts of Kubernetes
- Familiarity with Linux and working on the shell
- A local machine that has the following installed:
  - The hcloud CLI
  - kubectl
  - An SSH client

This tutorial was tested on Ubuntu 24.04 Hetzner Cloud servers and Kubernetes version v1.35.2.
Terminology and Notation
Commands
local$ <command> # This command must be executed on your local machine
all$ <command> # This command must be executed on all servers as root
master$ <command> # This command must be executed on the master server as root
worker$ <command> # This command must be executed on all worker servers as root

Multiline commands end with a backslash, e.g.
local$ command --short-option \
--with-quite-long-parameter=9001

Files
A file to be configured will be referenced by its path, e.g. /path/to/file.txt
The full content of the file follows in the box below the sentence.

IP Addresses
<10.0.0.X> internal IP address of server X
<116.203.0.X> public IP address of server X
Explanations
This is a deeper explanation of the previous content. It might contain useful information, but if you understood everything so far, it can safely be skipped.
Step 1 - Create new Hetzner Cloud resources
For this tutorial, the following resources will be used:
- 1 Hetzner Cloud network
- 1 Hetzner Cloud CX23 server
- 2 Hetzner Cloud CX33 servers
- 1 Hetzner Cloud Load Balancer
The CX23 server will be the master node and the CX33 servers the worker nodes.
While this guide creates the resources manually, a production setup should be provisioned with Terraform, Ansible, or comparable tools. Have a look at the tutorial "Setup your own scalable Kubernetes cluster with the Terraform provider for Hetzner Cloud" for an example and guidance on how to do so.
Create the required resources in the web interface or with the hcloud CLI tool
local$ hcloud network create --name kubernetes --ip-range 10.98.0.0/16
local$ hcloud network add-subnet kubernetes --network-zone eu-central --type server --ip-range 10.98.0.0/16
local$ hcloud server create --type cx23 --name master-1 --image ubuntu-24.04 --ssh-key <ssh_key_id> --network <network_id>
local$ hcloud server create --type cx33 --name worker-1 --image ubuntu-24.04 --ssh-key <ssh_key_id> --network <network_id>
local$ hcloud server create --type cx33 --name worker-2 --image ubuntu-24.04 --ssh-key <ssh_key_id> --network <network_id>

The existing SSH keys can be listed with hcloud ssh-key list. The network ID is printed when creating the network and can be looked up with hcloud network list.
The names of the servers do not affect the cluster creation, but should not be changed later on. Feel free to change the image and server type according to your needs.
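The two worker commands above differ only in the server name, so they can be generated in a loop. A minimal sketch; the IDs are the same placeholders used above and must be replaced before actually running anything:

```shell
# Sketch: print the creation commands for both worker servers in a loop.
# The values below are placeholders, not real IDs; replace them with your
# own SSH key and network IDs, then pipe the output to sh to execute it.
SSH_KEY_ID="<ssh_key_id>"
NETWORK_ID="<network_id>"

worker_create_cmd() {
  # Prints the hcloud command for worker number $1 instead of running it.
  echo "hcloud server create --type cx33 --name worker-$1 --image ubuntu-24.04 --ssh-key $SSH_KEY_ID --network $NETWORK_ID"
}

for i in 1 2; do
  worker_create_cmd "$i"
done
```

Printing the commands first (instead of executing them directly) lets you review them before piping the output to sh.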
Step 1.1 - Update the servers (Optional)
It is recommended to update the servers after creation. Log onto each server and run
all$ apt-get update
all$ apt-get dist-upgrade
all$ reboot

Step 2 - Configure the network
Each server will have a private and a public IP address configured. These are noted as <10.0.0.X> for the private and <116.203.0.X> for the public IP address.
Step 3 - Install Kubernetes
To install the Kubernetes cluster on the servers we will utilise kubeadm. The interface between Kubernetes and the Hetzner Cloud will be the Hetzner Cloud Controller Manager and the Hetzner Cloud Container Storage Interface. Both tools are provided by the Hetzner Cloud team.
Step 3.1 - Prepare the Cloud Controller Manager
The Hetzner Cloud Controller Manager requires the Kubernetes cluster to be set up to use an external cloud provider. Therefore, create a file /etc/systemd/system/kubelet.service.d/20-hetzner-cloud.conf on each server
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"

This will make sure that kubelet is started with the --cloud-provider=external flag. Any further configuration will be handled by kubeadm later on.
Step 3.2 - Install containerd and Kubernetes Packages
As containerd and Kubernetes are installed on a distribution with systemd as its init system (run ps -p 1 -o comm= to verify), containerd should be set up to use the systemd cgroup driver. To do so, download the containerd.service unit file on each server
all$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
all$ mv containerd.service /usr/lib/systemd/system/

Then, reload the systemd unit files on all servers
all$ systemctl daemon-reload

Then, install the required packages by executing the following commands on all servers
To double-check the architecture of your server, use:
all$ dpkg --print-architecture
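If your servers are not amd64 (for example ARM-based types), the download URLs in the steps below change. A small sketch that derives the URLs from the architecture; the version numbers match the ones used in this tutorial:

```shell
# Sketch: build the download URLs used in the following steps for the
# server's architecture instead of hard-coding amd64. Falls back to amd64
# when dpkg is unavailable.
ARCH="$(dpkg --print-architecture 2>/dev/null || echo amd64)"   # amd64 or arm64
CONTAINERD_VERSION="2.2.2"
RUNC_VERSION="1.4.1"
CNI_VERSION="1.9.0"

containerd_url() {
  echo "https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-${ARCH}.tar.gz"
}
runc_url() {
  echo "https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.${ARCH}"
}
cni_url() {
  echo "https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-${ARCH}-v${CNI_VERSION}.tgz"
}
```

You can then run, for example, `wget "$(containerd_url)"` instead of copying the amd64 URL.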
- Install containerd

Make sure to download the archive that matches your own architecture (containerd-<VERSION>-<OS>-<ARCH>.tar.gz).

all$ wget https://github.com/containerd/containerd/releases/download/v2.2.2/containerd-2.2.2-linux-amd64.tar.gz
all$ tar Czxvf /usr/local containerd-2.2.2-linux-amd64.tar.gz
all$ systemctl enable --now containerd
all$ systemctl status containerd
- Install runc

Make sure to download the binary that matches your own architecture (runc.<ARCH>).

all$ wget https://github.com/opencontainers/runc/releases/download/v1.4.1/runc.amd64
all$ install -m 755 runc.amd64 /usr/local/sbin/runc
- Install CNI plugins

Make sure to download the archive that matches your own architecture (cni-plugins-<OS>-<ARCH>-<VERSION>.tgz).

all$ wget https://github.com/containernetworking/plugins/releases/download/v1.9.0/cni-plugins-linux-amd64-v1.9.0.tgz
all$ mkdir -p /opt/cni/bin
all$ tar Czxvf /opt/cni/bin cni-plugins-linux-amd64-v1.9.0.tgz

Set up config.toml

all$ mkdir -p /etc/containerd/
all$ containerd config default | tee /etc/containerd/config.toml

To use the systemd cgroup driver with runc, edit the /etc/containerd/config.toml file and set SystemdCgroup to true:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

all$ systemctl restart containerd
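Editing config.toml by hand works; a non-interactive alternative is a sketch like the following, which flips the default `SystemdCgroup = false` with sed:

```shell
# Sketch: set SystemdCgroup to true without opening an editor. CONFIG
# defaults to the real containerd config path; point it at a copy first
# if you want to try this out safely.
CONFIG="${CONFIG:-/etc/containerd/config.toml}"

set_systemd_cgroup() {
  # Replaces the default "SystemdCgroup = false" line in place.
  sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"
}
# On each server: set_systemd_cgroup && systemctl restart containerd
```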
- Install Kubernetes

all$ mkdir -p /etc/apt/keyrings
all$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
all$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /
EOF
all$ apt-get update
all$ apt-get install kubeadm kubectl kubelet
- Configure kubelet

all$ nano /etc/default/kubelet

Add (replace <private_ip> with the actual private IP of each server):

KUBELET_EXTRA_ARGS=--node-ip=<private_ip>

Save the changes and run:

all$ systemctl daemon-reexec && systemctl restart kubelet
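Instead of looking up each server's private IP by hand, it can be derived from the interface configuration. A sketch, assuming the 10.98.0.0/16 range created in Step 1:

```shell
# Sketch: extract the first 10.98.x.x address from "ip -o -4 addr show"
# style output on stdin. Assumes the private network uses 10.98.0.0/16.
private_ip() {
  grep -o '10\.98\.[0-9]*\.[0-9]*' | head -n 1
}
# On each server, this writes the kubelet default file in one go:
#   echo "KUBELET_EXTRA_ARGS=--node-ip=$(ip -o -4 addr show | private_ip)" > /etc/default/kubelet
```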
- Set the sysctl settings

You need to make sure that the system can actually forward traffic between the nodes and pods. Set the following sysctl settings on each server

all$ cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
all$ modprobe overlay
all$ modprobe br_netfilter
all$ cat <<EOF | tee /etc/sysctl.d/k8s.conf
# Allow IP forwarding for kubernetes
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
EOF
all$ sysctl --system

These settings allow forwarding of IPv4 and IPv6 packets between multiple network interfaces. This is required because each container has its own virtual network interface.
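After running sysctl --system, it is worth verifying that the values actually took effect. A small sketch that fails if any of the settings is not 1:

```shell
# Sketch: read "key = value" lines (as printed by sysctl) from stdin and
# succeed only if every value is 1; offending lines are printed.
check_forwarding() {
  ! grep -v '= 1$'
}
# On each server:
#   sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.ipv6.conf.default.forwarding | check_forwarding && echo "forwarding OK"
```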
Step 3.3 - Setup control plane
The servers are now prepared to finally install the Kubernetes cluster. Log on to the master node and initialize the cluster
master$ kubeadm config images pull
master$ kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.35.2 \
--upload-certs \
--apiserver-cert-extra-sans 10.0.0.1

If the first command does not work, execute containerd config default > /etc/containerd/config.toml to reset the containerd configuration (re-apply the SystemdCgroup change from Step 3.2 afterwards) and try again.
The kubeadm init process will print a kubeadm join command along the way. You should copy that command for later use (not strictly required, as you can always create a new token when needed). The --apiserver-cert-extra-sans flag ensures your internal IP is recognized as a valid IP for the apiserver.
If the master server has only a single vCPU (e.g. the legacy CX11 type), the --ignore-preflight-errors=NumCPU flag needs to be added. You can add additional flags according to your needs. Check the documentation for further information.
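Optionally, the init flags can be kept in a kubeadm configuration file instead, which is easier to put under version control. A sketch equivalent to the command above; the file name kubeadm-config.yaml is our choice:

```yaml
# Sketch of a kubeadm config matching the init flags used above.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.35.2
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
    - "10.0.0.1"
```

You would then run master$ kubeadm init --config kubeadm-config.yaml --upload-certs instead of passing the individual flags.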
When the initialisation is complete, begin setting up the required master components in the cluster. For ease of use, configure the kubeconfig of the root user to use the admin config of the Kubernetes cluster
master$ mkdir -p /root/.kube
master$ cp -i /etc/kubernetes/admin.conf /root/.kube/config

The cloud controller manager and the container storage interface require a secret in the kube-system namespace containing the access token for the Hetzner Cloud API
master$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: "<hetzner_api_token>"
  network: "<hetzner_network_id>"
EOF

To create a Hetzner Cloud API token, see the official getting started guide. You will not be able to fetch the token again later on, so don't close the popup before you have copied it.
Now deploy the Hetzner Cloud controller manager into the cluster
master$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/ccm-networks.yaml

And set up the cluster networking
master$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This tutorial uses flannel, as this CNI has very low maintenance requirements. For other options and comparisons, check the official documentation.
As Kubernetes with the external cloud provider flag activated will add a taint to uninitialized nodes, the cluster-critical pods need to be patched to tolerate it
master$ kubectl -n kube-flannel patch ds kube-flannel-ds --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
master$ kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'

Taints are flags on a node that allow only specific sets of pods to be scheduled on it (as defined by the pods' tolerations). The controller-manager will add a node.cloudprovider.kubernetes.io/uninitialized taint to each node that is not yet initialized by the controller-manager. But for the initialization to succeed, the networking on the node and the cluster DNS must be functional. Further information on taints and tolerations can be found in the documentation.
If you run a single-node cluster, remove the control-plane taint from the master node so that pods can be scheduled on it:

master$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Last but not least deploy the Hetzner Cloud Container Storage Interface to the cluster
master$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/main/deploy/kubernetes/hcloud-csi.yml

Your control plane is now ready to use. Fetch the kubeconfig from the master server to be able to use kubectl locally

local$ scp root@<116.203.0.1>:/etc/kubernetes/admin.conf ${HOME}/.kube/config

Or merge your existing kubeconfig with the admin.conf accordingly.
Step 3.4 - Secure nodes
Using the Hetzner Cloud Firewall, you can secure your nodes. First create a firewall and apply it to the three servers

local$ hcloud firewall create --name k8s-nodes
local$ hcloud firewall apply-to-resource k8s-nodes --type server --server master-1
local$ hcloud firewall apply-to-resource k8s-nodes --type server --server worker-1
local$ hcloud firewall apply-to-resource k8s-nodes --type server --server worker-2

Then add rules that only allow incoming traffic between the nodes. Replace <116.203.0.x> with the public IPs of your node servers. Note that an applied firewall blocks all other incoming traffic, so add rules for SSH and the Kubernetes API (port 6443) from your own IP as needed.
local$ hcloud firewall add-rule k8s-nodes --protocol=tcp --direction=in --source-ips <116.203.0.1>/32 --source-ips <116.203.0.2>/32 --source-ips <116.203.0.3>/32 --port any
local$ hcloud firewall add-rule k8s-nodes --protocol=udp --direction=in --source-ips <116.203.0.1>/32 --source-ips <116.203.0.2>/32 --source-ips <116.203.0.3>/32 --port any

Step 3.5 - Join worker nodes
In the kubeadm init process a join command for the worker nodes was printed. If you don't have that command noted anymore, a new one can be generated by running the following command on the master node
master$ kubeadm token create --print-join-command

Then log on to each worker node and execute the join command

worker$ kubeadm join <116.203.0.1>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

When the join was successful, list all nodes
local$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master-1   Ready    control-plane   11m   v1.35.2
worker-1   Ready    <none>          5m    v1.35.2
worker-2   Ready    <none>          5m    v1.35.2

Step 3.6 - Setup LoadBalancing (Optional)
Hetzner Cloud supports LoadBalancer as a Service. For more information about the option to use Hetzner Cloud Load Balancers with Kubernetes Services, see Kubernetes Cloud Controller Manager for Hetzner Cloud.
First, check if your nodes have the correct provider ID:
local$ kubectl describe node master-1 | grep "ProviderID"
local$ kubectl describe node worker-1 | grep "ProviderID"
local$ kubectl describe node worker-2 | grep "ProviderID"

The output should be hcloud://<server_id>.
If the provider ID is missing, you can add it manually as a quick fix.
local$ hcloud server list

You should get output like this:
<server_id> <name> <status> <ipv4> <ipv6> <private_net> <location> <age>

In the commands below, replace <server_id> with the actual server ID.

local$ kubectl patch node master-1 -p '{"spec":{"providerID":"hcloud://<server_id>"}}'
local$ kubectl patch node worker-1 -p '{"spec":{"providerID":"hcloud://<server_id>"}}'
local$ kubectl patch node worker-2 -p '{"spec":{"providerID":"hcloud://<server_id>"}}'
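With three nodes, patching by hand is quick; for larger clusters the commands can be generated from the server list. A sketch, assuming the Kubernetes node names match the Hetzner server names and that your hcloud version supports the -o columns/noheader output options:

```shell
# Sketch: turn "id name" lines from stdin into kubectl patch commands.
# Assumes node names equal server names (master-1, worker-1, worker-2).
patch_commands() {
  while read -r id name _; do
    echo "kubectl patch node $name -p '{\"spec\":{\"providerID\":\"hcloud://$id\"}}'"
  done
}
# local$ hcloud server list -o noheader -o columns=id,name | patch_commands | sh
```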
If they have the provider ID, you can test the Load Balancer with this example deployment:
local$ nano example-service.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
        - name: simple-web
          image: httpd:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    load-balancer.hetzner.cloud/name: "kubernetes-load-balancer"
    load-balancer.hetzner.cloud/location: "fsn1"
    load-balancer.hetzner.cloud/use-private-ip: "true"
    # If you have a public domain and an SSL certificate, upload the
    # SSL certificate in Hetzner Console and uncomment the lines below.
    #load-balancer.hetzner.cloud/protocol: "https"
    #load-balancer.hetzner.cloud/http-redirect-http: "true"
    #load-balancer.hetzner.cloud/http-certificates: "certificate-name"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - protocol: TCP
      # If you have a public domain and an SSL certificate,
      # use port 443 instead of port 80.
      port: 80
      targetPort: 80

Next, apply the file. Note that this will create a new Load Balancer with the name kubernetes-load-balancer.
local$ kubectl apply -f example-service.yaml
local$ kubectl get pods
local$ kubectl get deployments
local$ kubectl get svc

If everything worked as expected, you can go to http://<load_balancer_ip> and it should say "It works!".
When you're done, you can remove the deployment and Load Balancer:
local$ kubectl delete -f example-service.yaml

Conclusion
Congratulations, you now have a fully featured Kubernetes cluster running on Hetzner Cloud servers. Next, install your first applications on the cluster or look into high availability for your control plane.