Introduction
When running a backend application on Hetzner Cloud, a common pattern is to keep your app servers away from the public internet — no direct public IP, no exposure. Instead, a Load Balancer sits in front and handles all incoming traffic, while a NAT gateway handles outbound traffic from the private servers (package updates, API calls, etc.).
This tutorial walks through setting up that pattern fully automated using OpenTofu. By the end, you will have:
- A Hetzner Private Network with two subnets
- App servers with no public IP — only accessible through the Load Balancer or NAT gateway
- A NAT gateway that gives private servers outbound internet access
- A public Hetzner Load Balancer with health checks distributing traffic to the app servers
- Everything configured via cloud-init — swap the example web server for your own app
This setup works as a reusable template. The core networking, NAT, and Load Balancer config stays the same — you only need to change the cloud-init.yaml.tpl file to run your own application.
Prerequisites
- Hetzner Cloud API token
- An SSH key added to your Hetzner Console project
- OpenTofu installed locally
- Basic familiarity with the terminal
Example values used in this tutorial
| Resource | Value |
|---|---|
| Network CIDR | 10.42.0.0/16 |
| Public subnet | 10.42.10.0/24 |
| Private subnet | 10.42.20.0/24 |
| Network gateway | 10.42.0.1 |
| NAT gateway private IP | 10.42.20.2 |
| App server 1 | 10.42.20.11 |
| App server 2 | 10.42.20.12 |
| Load Balancer private IP | 10.42.20.10 |
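The addressing plan is internally consistent: both /24 subnets sit inside the /16 network, and every host IP sits inside its subnet. A quick shell sketch (pure string comparisons, valid only for the /16 and /24 boundaries used in this tutorial, not a general CIDR check) illustrates the layout:

```shell
# Sanity-check the example addressing plan with pure string comparisons.
# Only correct for the /16 and /24 boundaries used in this tutorial.
in_16() { [ "${1%.*.*}" = "10.42" ]; }     # first two octets match 10.42.0.0/16
in_24() { [ "${1%.*}" = "${2%.*}" ]; }     # first three octets match the /24 subnet

in_16 10.42.20.2                 && echo "NAT gateway is inside 10.42.0.0/16"
in_24 10.42.20.11 10.42.20.0/24  && echo "app1 is inside the private subnet"
in_24 10.42.20.10 10.42.20.0/24  && echo "LB private IP is inside the private subnet"
```

This is only an illustration of the table above, not part of the deployment.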
Architecture
Before writing any code, it helps to understand how the pieces connect:
Internet
│
├── HTTP/HTTPS → Load Balancer (public IP)
│                   │  Private Network
│                   ├── App Server 1 (10.42.20.11) ← no public IP
│                   └── App Server 2 (10.42.20.12) ← no public IP
│
└── SSH → NAT Gateway (public IP)
              │  SSH jump
              ├── App Server 1
              └── App Server 2

App servers → outbound internet → NAT Gateway → Internet

Key points:
- App servers have no public IP. The only way in is through the Load Balancer (HTTP) or NAT gateway (SSH jump).
- The NAT gateway has both a public IP (for internet egress) and a private IP in the same subnet as the app servers.
- Hetzner's SDN routes all 0.0.0.0/0 traffic from the private subnet through the NAT gateway using a network-level route.
Step 1 - Project structure
Create the project directory and all required files:
mkdir -p hetzner-private-lb-nat/script && cd hetzner-private-lb-nat
touch {main,variables,providers,network,nat,servers,firewall,load_balancer,outputs}.tf \
terraform.tfvars \
script/{cloud-init,nat-cloud-init}.yaml.tpl

The final structure should look like this:
hetzner-private-lb-nat/
├── main.tf
├── variables.tf
├── providers.tf
├── network.tf
├── nat.tf
├── servers.tf
├── firewall.tf
├── load_balancer.tf
├── outputs.tf
├── terraform.tfvars
└── script/
├── cloud-init.yaml.tpl (app server bootstrap)
└── nat-cloud-init.yaml.tpl (NAT gateway bootstrap)

Step 2 - Providers and variables
providers.tf
This uses version 1.60 of the Terraform provider for Hetzner Cloud. You can find the latest version in the Terraform Registry.
terraform {
required_version = ">= 1.0"
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = "~> 1.60"
}
}
}
provider "hcloud" {
token = var.hcloud_token
}

main.tf
locals {
common_labels = {
project = var.project_name
managedBy = "opentofu"
owner = var.owner
}
}

variables.tf — the key variables:
You don't need to edit any values in this file. We will add the values for these variables in terraform.tfvars below.
variable "hcloud_token" {
description = "Hetzner Cloud API token"
type = string
sensitive = true
}
variable "project_name" {
description = "Name prefix for all resources"
type = string
default = "hetzner-private-lb-nat"
}
variable "owner" {
description = "Owner of the resources"
type = string
default = "holu"
}
variable "location" {
type = string
default = "nbg1"
}
variable "network_zone" {
type = string
default = "eu-central"
}
variable "ssh_key_name" {
description = "Existing SSH key name in Hetzner Cloud"
type = string
}
variable "ssh_allowed_cidrs" {
description = "Your IP(s) allowed to SSH into the NAT gateway"
type = list(string)
default = []
}
variable "network_cidr" {
type = string
default = "10.42.0.0/16"
}
variable "network_gateway_ip" {
type = string
default = "10.42.0.1"
}
variable "public_subnet_cidr" {
type = string
default = "10.42.10.0/24"
}
variable "private_subnet_cidr" {
type = string
default = "10.42.20.0/24"
}
variable "nat_gateway_private_ip" {
type = string
default = "10.42.20.2"
}
variable "lb_private_ip" {
type = string
default = "10.42.20.10"
}
variable "server_type" {
type = string
default = "cx23"
}
variable "server_image" {
type = string
default = "ubuntu-24.04"
}
variable "nat_gateway_server_type" {
type = string
default = "cx23"
}
variable "nat_gateway_image" {
type = string
default = "ubuntu-24.04"
}
variable "app_servers" {
description = "Map of app servers with their private IPs"
type = map(object({
private_ip = string
location = optional(string)
}))
default = {
app1 = { private_ip = "10.42.20.11" }
app2 = { private_ip = "10.42.20.12" }
}
}

terraform.tfvars
Replace the example values below with:
- Your own API token
- Your own name, which will be added as a label to each resource in the format owner:my-name
- The name of the SSH key you added in Hetzner Console
- Your IP (curl -4 https://ip.hetzner.com)
hcloud_token = "your-api-token"
owner = "my-name"
ssh_key_name = "your-ssh-key-name"
ssh_allowed_cidrs = ["203.0.113.10/32"] # replace with your IP

The default value of app_servers includes app1 and app2, meaning that two app servers without public IPs will be created. If you want a different number of app servers, add them in terraform.tfvars like this:
app_servers = {
app1 = { private_ip = "10.42.20.11" }
app2 = { private_ip = "10.42.20.12" }
app3 = { private_ip = "10.42.20.13" }
}

Step 3 - Network and subnets
network.tf
resource "hcloud_network" "main" {
name = "${var.project_name}-network"
ip_range = var.network_cidr
labels = local.common_labels
}
resource "hcloud_network_subnet" "public" {
network_id = hcloud_network.main.id
type = "cloud"
network_zone = var.network_zone
ip_range = var.public_subnet_cidr
}
resource "hcloud_network_subnet" "private" {
network_id = hcloud_network.main.id
type = "cloud"
network_zone = var.network_zone
ip_range = var.private_subnet_cidr
}

Both subnets live inside the same network (10.42.0.0/16). Servers in both subnets can communicate with each other — the separation is about access control (firewall rules) and internet access (public IP or not).
Step 4 - NAT gateway
The NAT gateway is the most important piece. It is a regular Hetzner server that has:
- A public IP for internet egress
- A private IP in the same subnet as the app servers
When an app server sends traffic to the internet, it goes through the NAT gateway. The gateway replaces the private source IP with its public IP using iptables MASQUERADE — this is standard Linux NAT.
nat.tf
resource "hcloud_firewall" "nat" {
name = "${var.project_name}-nat-fw"
dynamic "rule" {
for_each = var.ssh_allowed_cidrs
content {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = [rule.value]
}
}
labels = merge(local.common_labels, { role = "nat" })
}
resource "hcloud_server" "nat" {
name = "${var.project_name}-nat"
server_type = var.nat_gateway_server_type
image = var.nat_gateway_image
location = var.location
ssh_keys = [data.hcloud_ssh_key.selected.id]
firewall_ids = [hcloud_firewall.nat.id]
network {
network_id = hcloud_network.main.id
ip = var.nat_gateway_private_ip
}
public_net {
ipv4_enabled = true # required for outbound internet egress
ipv6_enabled = false
}
user_data = templatefile("${path.module}/script/nat-cloud-init.yaml.tpl", {
private_subnet_cidr = var.private_subnet_cidr
})
labels = merge(local.common_labels, { role = "nat" })
depends_on = [hcloud_network_subnet.private]
}
# Tell Hetzner's SDN to route all internet-bound traffic through the NAT gateway
resource "hcloud_network_route" "private_default_egress" {
network_id = hcloud_network.main.id
destination = "0.0.0.0/0"
gateway = var.nat_gateway_private_ip
depends_on = [hcloud_server.nat]
}

The hcloud_network_route resource is what makes app servers reach the internet without their own public IP. Hetzner's SDN intercepts all outbound traffic and sends it to the NAT gateway's private IP.
script/nat-cloud-init.yaml.tpl
This cloud-init config runs on first boot to configure the NAT gateway:
#cloud-config
write_files:
- path: /usr/local/sbin/configure-nat.sh
permissions: "0755"
owner: root:root
content: |
#!/usr/bin/env bash
set -euo pipefail
# Detect the public interface dynamically
WAN_IF=$(ip route | awk '/^default/ {print $5; exit}')
# Enable IP forwarding so the server can route packets
sysctl -w net.ipv4.ip_forward=1
sed -i 's/^#\?net.ipv4.ip_forward=.*/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# Replace source IP of packets from the private subnet with the NAT gateway's public IP
iptables -t nat -C POSTROUTING -s ${private_subnet_cidr} -o "$WAN_IF" -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s ${private_subnet_cidr} -o "$WAN_IF" -j MASQUERADE
# Allow forwarding outbound packets from private subnet
iptables -C FORWARD -s ${private_subnet_cidr} -o "$WAN_IF" -j ACCEPT 2>/dev/null || \
iptables -A FORWARD -s ${private_subnet_cidr} -o "$WAN_IF" -j ACCEPT
# Allow return traffic for established connections
iptables -C FORWARD -d ${private_subnet_cidr} -m conntrack --ctstate ESTABLISHED,RELATED -i "$WAN_IF" -j ACCEPT 2>/dev/null || \
iptables -A FORWARD -d ${private_subnet_cidr} -m conntrack --ctstate ESTABLISHED,RELATED -i "$WAN_IF" -j ACCEPT
- path: /etc/systemd/system/nat-gateway.service
permissions: "0644"
owner: root:root
content: |
[Unit]
Description=Configure NAT for private subnet egress
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/configure-nat.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
runcmd:
- systemctl daemon-reload
- systemctl enable nat-gateway.service
- systemctl start nat-gateway.service

Three things worth noting here:
- ip_forward=1 — without this, the Linux kernel drops packets it receives that are not destined for itself. Enabling it turns the server into a router.
- MASQUERADE — rewrites the source IP of outgoing packets so the internet sees the NAT gateway's public IP, not the private server's IP.
- The systemd service re-applies the iptables rules on every reboot, so the NAT keeps working after a server restart.
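To see what the WAN_IF detection line in configure-nat.sh actually extracts, you can run the same awk filter against a sample ip route output. The interface name and gateway address below are illustrative placeholders, not values from your server:

```shell
# The same awk filter configure-nat.sh uses, applied to a sample routing
# table. Interface names and gateway addresses here are illustrative only.
sample='default via 172.31.1.1 dev eth0
10.42.0.0/16 via 10.42.0.1 dev enp7s0'
WAN_IF=$(printf '%s\n' "$sample" | awk '/^default/ {print $5; exit}')
echo "$WAN_IF"   # → eth0
```

The filter picks the fifth field (the device name) of the first line starting with "default", i.e. the interface that carries the public default route.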
Step 5 - App servers
firewall.tf
App servers should not be reachable from the internet directly. The firewall allows HTTP only from the Load Balancer's private IP, and SSH only from the NAT gateway.
resource "hcloud_firewall" "app" {
name = "${var.project_name}-app-fw"
# Allow HTTP from Load Balancer only
rule {
direction = "in"
protocol = "tcp"
port = "80"
source_ips = [var.lb_private_ip]
}
# Allow SSH from NAT gateway (jump host access)
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = [var.nat_gateway_private_ip]
}
labels = merge(local.common_labels, { role = "app" })
}

servers.tf
data "hcloud_ssh_key" "selected" {
name = var.ssh_key_name
}
resource "hcloud_server" "app" {
for_each = var.app_servers
name = "${var.project_name}-${each.key}"
server_type = var.server_type
image = var.server_image
location = coalesce(each.value.location, var.location)
ssh_keys = [data.hcloud_ssh_key.selected.id]
firewall_ids = [hcloud_firewall.app.id]
network {
network_id = hcloud_network.main.id
ip = each.value.private_ip
}
public_net {
ipv4_enabled = false # no public IP — intentional
ipv6_enabled = false
}
user_data = templatefile("${path.module}/script/cloud-init.yaml.tpl", {
server_name = each.key
nat_gateway_private_ip = var.nat_gateway_private_ip
network_gateway_ip = var.network_gateway_ip
})
labels = merge(local.common_labels, { role = "app", name = each.key })
depends_on = [hcloud_network_subnet.private]
}

script/cloud-init.yaml.tpl
This is the app server bootstrap. The example runs a simple Python HTTP server — replace this with your own application:
#cloud-config
write_files:
- path: /var/www/html/index.html
permissions: "0644"
owner: root:root
content: |
<html>
<body><h1>${server_name}</h1></body>
</html>
- path: /var/www/html/health
permissions: "0644"
owner: root:root
content: "ok"
- path: /etc/systemd/system/simple-web.service
permissions: "0644"
owner: root:root
content: |
[Unit]
Description=Simple web server
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
WorkingDirectory=/var/www/html
ExecStart=/usr/bin/python3 -m http.server 80 --bind 0.0.0.0
Restart=always
User=root
[Install]
WantedBy=multi-user.target
- path: /etc/systemd/resolved.conf
content: |
[Resolve]
DNS=185.12.64.2 185.12.64.1
FallbackDNS=8.8.8.8
append: true
runcmd:
# Set default route through the Hetzner Private Network gateway
- IFACE=$(ip route | awk '/10\.42\.0\.0\/16/ {print $5; exit}') && ip route add default via ${network_gateway_ip} dev "$IFACE" || true
- systemctl enable simple-web.service
- systemctl restart simple-web.service
- systemctl restart systemd-resolved

To deploy your own app, replace the write_files and runcmd sections. For example, to run a Node.js app:
runcmd:
- IFACE=$(ip route | awk '/10\.42\.0\.0\/16/ {print $5; exit}') && ip route add default via ${network_gateway_ip} dev "$IFACE" || true
- apt-get update -q && apt-get install -y nodejs npm
- cd /opt/myapp && npm install && npm start

The important line to keep in all cases is the ip route add command — it tells the server to use the Hetzner Private Network gateway as its default route, which then forwards internet-bound traffic to the NAT gateway. Note that a private-only server has no default route at boot, so the awk pattern matches the private network route (not /^default/) to find the right interface.
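The interface-detection half of that line is plain text matching on ip route output. Here it is in isolation, run against a sample routing table as it might look on a private-only server (the interface name enp7s0 is an assumption for illustration):

```shell
# The awk filter from cloud-init.yaml.tpl, applied to a sample routing
# table of a private-only server. The interface name is illustrative.
sample='10.42.0.0/16 via 10.42.0.1 dev enp7s0
10.42.20.0/24 dev enp7s0 proto kernel scope link src 10.42.20.11'
IFACE=$(printf '%s\n' "$sample" | awk '/10\.42\.0\.0\/16/ {print $5; exit}')
echo "$IFACE"   # → enp7s0
```

The fifth field of the 10.42.0.0/16 route line is the device name, which is then used for the ip route add default command.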
Step 6 - Load Balancer
load_balancer.tf
resource "hcloud_load_balancer" "public" {
name = "${var.project_name}-lb"
load_balancer_type = "lb11"
location = var.location
labels = merge(local.common_labels, { role = "edge" })
}
resource "hcloud_load_balancer_network" "private_subnet_attachment" {
load_balancer_id = hcloud_load_balancer.public.id
subnet_id = hcloud_network_subnet.private.id
ip = var.lb_private_ip
}
resource "hcloud_load_balancer_target" "app" {
for_each = hcloud_server.app
type = "server"
load_balancer_id = hcloud_load_balancer.public.id
server_id = each.value.id
use_private_ip = true # communicate with servers via Private Network
}
resource "hcloud_load_balancer_service" "http" {
load_balancer_id = hcloud_load_balancer.public.id
protocol = "http"
listen_port = 80
destination_port = 80
health_check {
protocol = "http"
port = 80
interval = 10
timeout = 5
retries = 3
http {
path = "/health"
status_codes = ["200"]
}
}
}

use_private_ip = true is required here because the app servers have no public IP — the Load Balancer must reach them through the Private Network.
The health check hits /health on each server. If a server returns anything other than 200, the Load Balancer stops sending it traffic until it recovers.
Step 7 - Outputs
outputs.tf
output "load_balancer_public_ipv4" {
description = "Public IPv4 of the Load Balancer — use this to reach your app"
value = hcloud_load_balancer.public.ipv4
}
output "nat_gateway_public_ipv4" {
description = "Public IPv4 of the NAT gateway — use this as SSH jump host"
value = hcloud_server.nat.ipv4_address
}
output "app_server_private_ips" {
description = "Private IPs of app servers"
value = {
for name, server in hcloud_server.app : name => one(server.network).ip
}
}

Step 8 - Deploy
Initialize and apply:
cd $HOME/hetzner-private-lb-nat
tofu init
tofu fmt -recursive
tofu validate
tofu plan
tofu apply

After tofu apply completes, you should see output similar to:
Outputs:
app_server_private_ips = {
"app1" = "10.42.20.11"
"app2" = "10.42.20.12"
}
load_balancer_public_ipv4 = "203.0.113.10"
nat_gateway_public_ipv4 = "203.0.113.20"

Step 9 - Test
Before you test the Load Balancer, wait a few minutes until everything is set up. In Hetzner Console, you can check whether the targets are healthy.
Test the Load Balancer:
LB_IP=$(tofu output -raw load_balancer_public_ipv4)
curl -i "http://$LB_IP/"

You should get an HTTP 200 response. Run it a few times — the Load Balancer distributes requests between the two app servers, so you will see responses from app1 and app2 alternating.
Test the health endpoint:
curl "http://$LB_IP/health"
# Expected: ok

Test outbound internet from a private server (via NAT):
NAT_IP=$(tofu output -raw nat_gateway_public_ipv4)
# SSH into a private server using the NAT gateway as a jump host
ssh -J root@$NAT_IP root@10.42.20.11
# From inside the private server, test outbound internet
curl -4 https://ip.hetzner.com
# This should return the NAT gateway's public IP, not the private server's IP

This confirms the NAT gateway is working: the private server has no public IP of its own, but its outbound traffic goes through the NAT gateway and appears to come from the NAT gateway's IP.
SSH config for convenience:
Host nat
HostName 203.0.113.20
User root
Host app1
HostName 10.42.20.11
User root
ProxyJump nat
Host app2
HostName 10.42.20.12
User root
ProxyJump nat

Step 10 - Use your own application
The cloud-init template in script/cloud-init.yaml.tpl is the only file you need to change to deploy your own application. The network, NAT, Load Balancer, and firewall configuration stays exactly the same.
The one requirement is that your app:
- Listens on port 80
- Responds with HTTP 200 on /health
For example, to replace the Python web server with Nginx:
runcmd:
- IFACE=$(ip route | awk '/10\.42\.0\.0\/16/ {print $5; exit}') && ip route add default via ${network_gateway_ip} dev "$IFACE" || true
- apt-get update -q && apt-get install -y nginx
- echo "ok" > /var/www/html/health
- systemctl enable nginx
- systemctl start nginx

Or to run a Docker container:
runcmd:
- IFACE=$(ip route | awk '/10\.42\.0\.0\/16/ {print $5; exit}') && ip route add default via ${network_gateway_ip} dev "$IFACE" || true
- apt-get update -q && apt-get install -y docker.io
- docker run -d -p 80:8080 your-image:latest

The ip route add line must always be there — it is what connects the private server to the internet through the NAT gateway.
Cleanup
cd $HOME/hetzner-private-lb-nat
tofu init
tofu destroy

Conclusion
You now have a production-style private network setup on Hetzner Cloud:
- App servers are not directly reachable from the internet
- Inbound traffic goes through the Load Balancer with health checks
- Outbound traffic goes through the NAT gateway
- SSH access is through the NAT gateway acting as a jump host
The whole setup is defined in code, reproducible, and takes about two minutes to deploy. To use it for a real application, edit cloud-init.yaml.tpl, run tofu apply, and your app is live behind the Load Balancer.
Full source code: hetzner-private-lb-nat-tofu