How to set up K3S, GlusterFS and Hetzner's Cloud Load Balancer

Author: Heiner Beck
Published: 2021-10-22
Reading time: 14 minutes

Introduction

This tutorial will guide you through setting up a Kubernetes cluster using K3S. K3S is a lightweight Kubernetes distribution that is perfectly suited for small Hetzner VMs like the CX11. Additionally, you will set up Hetzner's Cloud Load Balancer, which performs SSL offloading and forwards traffic to your Kubernetes system. Optionally, you will learn how to set up a distributed, replicated file system using GlusterFS. This allows you to move pods between the nodes while still having access to their persistent data.

Prerequisites

This tutorial assumes you have set up Hetzner's CLI utility (hcloud) which has access to a Hetzner Cloud project in your account. If you don't have hcloud, you can create your resources in the Cloud Console.

The following terminology is used in this tutorial:

  • Domain: <example.com>
  • SSH Key: <your_ssh_key>
  • Random secret token: <your_secret_value>
  • Hetzner API token: <hetzner_api_token>
  • IP addresses (IPv4):
    • K3S Master: 10.0.0.2
    • K3S Node 1: 10.0.0.3
    • K3S Node 2: 10.0.0.4
    • Hetzner Cloud Load Balancer: 10.0.0.254

Step 1 - Create private network

First, we'll create a private network which is used by our Kubernetes nodes for communicating with each other. We'll use 10.0.0.0/16 as network and subnet.

hcloud network create --name network-kubernetes --ip-range 10.0.0.0/16
hcloud network add-subnet network-kubernetes --network-zone eu-central --type server --ip-range 10.0.0.0/16
Network details:

Product   IP range      Name                 Network Zone
Network   10.0.0.0/16   network-kubernetes   eu-central
Subnet    10.0.0.0/16                        eu-central
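
You can verify that the network and subnet were created as expected (purely a sanity check, not required for the next steps):

hcloud network describe network-kubernetes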

Step 2 - Create Placement Group and servers

Next, we'll create a "spread" Placement Group for our servers and then the servers themselves.

Step 2.1 - Create the spread Placement Group (Optional)

The Placement Group ensures your VMs run on different physical hosts, so that a failure of one host does not affect the VMs on the other hosts.

hcloud placement-group create --name group-spread --type spread

Step 2.2 - Create the virtual machines

hcloud server create --datacenter nbg1-dc3 --type cx11 --name master-1 --image debian-12 --ssh-key <your_ssh_key> --network network-kubernetes --placement-group group-spread
hcloud server create --datacenter nbg1-dc3 --type cx11 --name node-1 --image debian-12 --ssh-key <your_ssh_key> --network network-kubernetes --placement-group group-spread
hcloud server create --datacenter nbg1-dc3 --type cx11 --name node-2 --image debian-12 --ssh-key <your_ssh_key> --network network-kubernetes --placement-group group-spread
Server details:

Product   OS Image    Type   Network              Placement Group   Name
Server    Debian 12   CX11   network-kubernetes   group-spread      master-1
Server    Debian 12   CX11   network-kubernetes   group-spread      node-1
Server    Debian 12   CX11   network-kubernetes   group-spread      node-2
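
If you want to confirm that all three servers are up and received an IP in the private network, list and inspect them with hcloud:

hcloud server list
hcloud server describe master-1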

Step 3 - Create and apply Firewall

Now that our servers are up and running, let's create a Firewall and restrict incoming and outgoing traffic. You may need to customize the rules to match your requirements.

  • Create the Firewall:
    hcloud firewall create --name firewall-kubernetes
  • Allow incoming SSH and ICMP:
    hcloud firewall add-rule firewall-kubernetes --description "Allow SSH In" --direction in --port 22 --protocol tcp --source-ips 0.0.0.0/0 --source-ips ::/0
    hcloud firewall add-rule firewall-kubernetes --description "Allow ICMP In" --direction in --protocol icmp --source-ips 0.0.0.0/0 --source-ips ::/0
  • Allow outgoing ICMP, DNS, HTTP, HTTPS and NTP:
    hcloud firewall add-rule firewall-kubernetes --description "Allow ICMP Out" --direction out --protocol icmp --destination-ips 0.0.0.0/0 --destination-ips ::/0
    hcloud firewall add-rule firewall-kubernetes --description "Allow DNS TCP Out" --direction out --port 53 --protocol tcp --destination-ips 0.0.0.0/0 --destination-ips ::/0
    hcloud firewall add-rule firewall-kubernetes --description "Allow DNS UDP Out" --direction out --port 53 --protocol udp --destination-ips 0.0.0.0/0 --destination-ips ::/0
    hcloud firewall add-rule firewall-kubernetes --description "Allow HTTP Out" --direction out --port 80 --protocol tcp --destination-ips 0.0.0.0/0 --destination-ips ::/0
    hcloud firewall add-rule firewall-kubernetes --description "Allow HTTPS Out" --direction out --port 443 --protocol tcp --destination-ips 0.0.0.0/0 --destination-ips ::/0
    hcloud firewall add-rule firewall-kubernetes --description "Allow NTP UDP Out" --direction out --port 123 --protocol udp --destination-ips 0.0.0.0/0 --destination-ips ::/0
  • Apply the Firewall rules to all three servers:
    hcloud firewall apply-to-resource firewall-kubernetes --type server --server master-1
    hcloud firewall apply-to-resource firewall-kubernetes --type server --server node-1
    hcloud firewall apply-to-resource firewall-kubernetes --type server --server node-2
Firewall details:

Product    Name
Firewall   firewall-kubernetes

Rules:

Type       Description         Source / Destination IPs   Protocol   Port
Incoming   Allow SSH In        0.0.0.0/0, ::/0            TCP        22
Incoming   Allow ICMP In       0.0.0.0/0, ::/0            ICMP
Outgoing   Allow ICMP Out      0.0.0.0/0, ::/0            ICMP
Outgoing   Allow DNS TCP Out   0.0.0.0/0, ::/0            TCP        53
Outgoing   Allow DNS UDP Out   0.0.0.0/0, ::/0            UDP        53
Outgoing   Allow HTTP Out      0.0.0.0/0, ::/0            TCP        80
Outgoing   Allow HTTPS Out     0.0.0.0/0, ::/0            TCP        443
Outgoing   Allow NTP UDP Out   0.0.0.0/0, ::/0            UDP        123

Applied to: master-1, node-1, node-2
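
To double-check the rules and the servers the Firewall is applied to:

hcloud firewall describe firewall-kubernetes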

Step 4 - Install K3S

It's showtime for K3S. Before we prepare our master node and agent nodes, upgrade the system and install AppArmor. SSH into your newly created VMs and run these commands on all of them:

apt update
apt upgrade -y
apt install apparmor apparmor-utils -y

Step 4.1 - Install K3S on master node

SSH into your master node and run the following command to install and start the K3S server:

Replace <your_secret_value> with a random secret value of your choice.

curl -sfL https://get.k3s.io | sh -s - server \
    --disable-cloud-controller \
    --disable metrics-server \
    --write-kubeconfig-mode=644 \
    --disable local-storage \
    --node-name="$(hostname -f)" \
    --cluster-cidr="10.244.0.0/16" \
    --kube-controller-manager-arg="bind-address=0.0.0.0" \
    --kube-proxy-arg="metrics-bind-address=0.0.0.0" \
    --kube-scheduler-arg="bind-address=0.0.0.0" \
    --kubelet-arg="cloud-provider=external" \
    --token="<your_secret_value>" \
    --tls-san="$(hostname -I | awk '{print $2}')" \
    --flannel-iface=ens10

You can read more about the applied options in the K3S documentation. In short:

  • We disable the integrated cloud controller because we'll install Hetzner's Cloud Controller Manager in the next step.
  • We disable the metrics server to save some memory.
  • We disable the local storage because we'll use GlusterFS.
  • We set the Cluster CIDR to 10.244.0.0/16.
  • We make Kube Controller, Kube Proxy and Kube Scheduler listen on any address (which is not an issue as we've applied firewall rules and the nodes communicate with each other using the private network).
  • We set the shared secret token to <your_secret_value>.
  • We add the server's private IPv4 (should be 10.0.0.2) as an additional subject name to the TLS cert.
  • We make Flannel use ens10, which should be the interface of our private network.
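
Before moving on, you can check that the K3S server came up. The service name k3s is what the install script registers; note that the node may still carry an "uninitialized" taint until the Cloud Controller Manager is installed in the next step:

systemctl status k3s --no-pager
kubectl get nodes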

Step 4.2 - Install Hetzner Cloud Controller Manager

Still on your master node, install the Hetzner Cloud Controller Manager:

Replace <hetzner_api_token> with your own API token.
Replace network-kubernetes with the name of your own Network if you gave it a different one.

kubectl -n kube-system create secret generic hcloud --from-literal=token=<hetzner_api_token> --from-literal=network=network-kubernetes
kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml
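
To verify that the Cloud Controller Manager is running and has initialized the node, a quick check (the grep filters are just a convenience):

kubectl -n kube-system get pods | grep hcloud
kubectl describe node master-1 | grep -i taint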

Step 4.3 - Install System Upgrade Controller (Optional)

The System Upgrade Controller performs automatic updates of K3S. If you want to use this feature, install the controller using this command:

kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
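
Installing the controller alone does not upgrade anything; it only acts on Plan resources that you create. The following is a minimal sketch of a server plan based on the upstream K3S documentation — the plan name is arbitrary, and the channel URL, namespace and node selector are assumptions you should verify before applying:

cat <<EOF | kubectl apply -f -
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan            # hypothetical name
  namespace: system-upgrade    # namespace created by the controller manifest
spec:
  concurrency: 1
  cordon: true
  # only match K3S server (control-plane) nodes
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  # follow the stable K3S release channel
  channel: https://update.k3s.io/v1-release/channels/stable
EOF

A similar plan with an inverted node selector can be created for the agent nodes.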

Step 4.4 - Install K3S on agent nodes

Now that our K3S server is up and running, SSH into your two agent nodes and run the following command to install the K3S agent and connect it to the server:

Replace 10.0.0.2 with the private IP of your master node if it is different.
Replace <your_secret_value> with the random value you chose in "Step 4.1".

curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.2:6443 K3S_TOKEN=<your_secret_value> sh -s - agent \
    --node-name="$(hostname -f)" \
    --kubelet-arg="cloud-provider=external" \
    --flannel-iface=ens10

You can read more about the applied options in the K3S documentation. In short:

  • We set the kubelet's cloud provider to external because we've already installed Hetzner's Cloud Controller Manager.
  • We make Flannel use ens10, which should be the interface of our private network.

To check if the two agent nodes are available, run this on your master node:

root@master-1:~# kubectl get nodes
NAME       STATUS   ROLES
master-1   Ready    control-plane,master
node-1     Ready    <none>
node-2     Ready    <none>
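
The Cloud Controller Manager should also have assigned the private network IPs (10.0.0.2 to 10.0.0.4) to the nodes; a wider listing makes that visible:

kubectl get nodes -o wide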

Step 5 - Install GlusterFS (Optional)

GlusterFS is a free and open source scalable network filesystem. You can use it to replicate files to all your VMs so that your pods can access their persistent storage no matter which node they are running on. Alternatively, you can take a look at Longhorn or OpenEBS.

Step 5.1 - Prepare all nodes

SSH into all three nodes and run the following commands to install, enable and start the Gluster server on all machines:

wget -O - https://download.gluster.org/pub/gluster/glusterfs/9/rsa.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/gluster.gpg
DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"') && DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+') && DEBARCH=$(dpkg --print-architecture)
echo "deb [signed-by=/etc/apt/trusted.gpg.d/gluster.gpg] https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main" > /etc/apt/sources.list.d/gluster.list
apt update && apt install glusterfs-server -y
systemctl enable glusterd && systemctl start glusterd

Gluster works with so-called "bricks". A brick is a directory that Gluster controls to manage the replicated file system; the resulting volume is then mounted via the GlusterFS client. Create the necessary directories:

mkdir -p /data/glusterfs/k8s/brick1
mkdir -p /mnt/gluster-k8s

Step 5.2 - Set up the cluster

On your master node only, add the other two nodes as peers:

gluster peer probe 10.0.0.3
gluster peer probe 10.0.0.4

Verify the peer status on the master and agent nodes:

gluster peer status
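
As an additional cross-check, you can list the whole pool, which should show all three nodes (including the local one) as connected:

gluster pool list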

Step 5.3 - Create the volume

On your master node, run the following commands to create and start a replicated volume:

gluster volume create k8s replica 3 \
    10.0.0.2:/data/glusterfs/k8s/brick1/brick \
    10.0.0.3:/data/glusterfs/k8s/brick1/brick \
    10.0.0.4:/data/glusterfs/k8s/brick1/brick \
    force
gluster volume start k8s
gluster volume info

This will create and start a replicated volume named "k8s" with three replicas (our three VMs).

Step 5.4 - Mount the GlusterFS volume

On all three nodes, mount the newly created GlusterFS volume:

echo "127.0.0.1:/k8s /mnt/gluster-k8s glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount /mnt/gluster-k8s
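
To make sure replication actually works, write a test file on one node and check that it appears on the others (the file name is arbitrary):

# on master-1
echo "replication test" > /mnt/gluster-k8s/test.txt
# on node-1 and node-2
cat /mnt/gluster-k8s/test.txt
# clean up on any node
rm /mnt/gluster-k8s/test.txt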

Step 6 - Set up load balancing

We'll use Hetzner's Load Balancer for SSL offloading and for routing HTTP requests to your K3S setup.

Step 6.1 - Enable proxy protocol in Traefik

To use the proxy protocol, enable it in your K3S Traefik configuration by setting the Cloud Load Balancer's private IP as a trusted IP address. On your master node, create the following manifest:

cat <<EOF > /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.web.proxyProtocol.trustedIPs=10.0.0.254"
      - "--entryPoints.web.forwardedHeaders.trustedIPs=10.0.0.254"
EOF

With the configuration above you do not get the real IP. If you need the real IP, you can use this configuration instead:

cat <<EOF > /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.web.proxyProtocol.trustedIPs=0.0.0.0/0"
EOF

When you create a Load Balancer as explained in the next step, make sure the service has "Proxy protocol" enabled.
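
K3S applies manifests placed in /var/lib/rancher/k3s/server/manifests automatically. To confirm the override took effect, you can inspect the HelmChartConfig resource and check that a Traefik pod is running (the resource names assume the default K3S Traefik packaging):

kubectl -n kube-system get helmchartconfig traefik -o yaml
kubectl -n kube-system get pods | grep traefik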

Step 6.2 - Create the Load Balancer

Create the Load Balancer and attach it to the private network using the static private IP 10.0.0.254:

hcloud load-balancer create --type lb11 --location nbg1 --name lb-kubernetes
hcloud load-balancer attach-to-network --network network-kubernetes --ip 10.0.0.254 lb-kubernetes

Add your three VMs as targets and make sure traffic is routed using the private network:

hcloud load-balancer add-target lb-kubernetes --server master-1 --use-private-ip
hcloud load-balancer add-target lb-kubernetes --server node-1 --use-private-ip
hcloud load-balancer add-target lb-kubernetes --server node-2 --use-private-ip
Load Balancer details:

Name            Type   Network                           Targets (use private IP)
lb-kubernetes   LB11   network-kubernetes (10.0.0.254)   master-1, node-1, node-2

Next, add a service to the Load Balancer, either with or without an SSL certificate:

  • With SSL certificate

      • Let Hetzner create a managed Let's Encrypt certificate for <example.com> and get the certificate ID <certificate_id>:

        hcloud certificate create --domain <example.com> --type managed --name cert-t1
        hcloud certificate list
      • Add the HTTPS service for <example.com> with the proxy protocol enabled and configure the health check:

        hcloud load-balancer add-service lb-kubernetes --protocol https --http-redirect-http --proxy-protocol --http-certificates <certificate_id>
        hcloud load-balancer update-service lb-kubernetes --listen-port 443 --health-check-http-domain <example.com>
        Service details:

        Protocol   Source port   Destination port   Certificate   HTTP-Redirect   Proxy protocol
        https      443           80                 cert-t1       enabled         enabled
  • Without SSL certificate

      • Without an SSL certificate, the service needs to look like this:

        hcloud load-balancer add-service lb-kubernetes --protocol http --listen-port 80 --proxy-protocol
        Service details:

        Protocol   Source port   Destination port   Proxy protocol
        http       80            80                 enabled

This will route HTTP requests from Hetzner's Load Balancer (which performs SSL offloading) to your Kubernetes cluster's Traefik reverse proxy, which in turn routes the requests to the configured ingress routes. Alternatively, you can route incoming HTTP requests directly from Hetzner's Load Balancer to an exposed service in your Kubernetes cluster, skipping Traefik. I've chosen to use Traefik because it allows me, for example, to set additional HTTP response headers.
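
You can review the resulting Load Balancer configuration, including targets, services and health check status, from the CLI as well:

hcloud load-balancer describe lb-kubernetes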

Step 7 - Test your setup (Optional)

Your K3S setup is now complete. It's time to test it by deploying an nginx pod and publishing the HTTP service via K3S' integrated Traefik.

Step 7.1 - Write to GlusterFS volume

Create a static index.html file:

mkdir /mnt/gluster-k8s/webtest1
echo "Hello World!" > /mnt/gluster-k8s/webtest1/index.html

Step 7.2 - Deploy a webserver

Create an nginx deployment, mount the GlusterFS volume for the static content, expose HTTP port 80 using a service, and create a Traefik ingress route for your domain <example.com>:

Replace <example.com> with your domain or with the IP address of your Load Balancer.

    cat <<"EOF" | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webtest1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: webtest1
      template:
        metadata:
          labels:
            app: webtest1
        spec:
          volumes:
            - name: volume-webtest1
              hostPath:
                path: "/mnt/gluster-k8s/webtest1"
          containers:
          - image: nginx
            name: nginx
            ports:
            - name: port-nginx
              containerPort: 80
            volumeMounts:
              - mountPath: "/usr/share/nginx/html"
                name: volume-webtest1
                readOnly: false
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: webtest1
    spec:
      ports:
        - port: 80
          protocol: TCP
          targetPort: 80
      selector:
        app: webtest1
      type: ClusterIP
    ---
    apiVersion: traefik.io/v1alpha1
    kind: IngressRoute
    metadata:
      name: webtest1
    spec:
      entryPoints:
        - web
      routes:
      - match: Host(`<example.com>`)
        kind: Rule
        services:
        - name: webtest1
          port: 80
    EOF
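
After applying the manifests, check that the pod is running and the ingress route was created (the IngressRoute resource type is provided by the Traefik CRDs that ship with K3S):

kubectl get deployment,service webtest1
kubectl get pods -l app=webtest1 -o wide
kubectl get ingressroute webtest1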

Step 7.3 - Access your website

In Hetzner's Cloud Console, your Load Balancer should turn healthy (green) after a few minutes, and you should then be able to access your website: https://<example.com>
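
You can also test from the command line. With an SSL certificate, a plain HTTPS request should return the "Hello World!" page; without one, send the Host header to the Load Balancer's public IP (the <lb_ip> placeholder is only used here and can be taken from hcloud load-balancer describe lb-kubernetes):

curl https://<example.com>
curl -H "Host: <example.com>" http://<lb_ip>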

Conclusion

You have successfully set up a K3S Kubernetes cluster with one server node and two agent nodes. Hetzner's highly available Load Balancer delivers traffic to your system and performs HTTPS offloading. You're ready to put some workload on your K3S cluster.

License: MIT