Introduction
In the previous tutorial, Getting Started with Pulumi and TypeScript on Hetzner Cloud, you provisioned a single server using Pulumi. That is a good starting point, but a single server is not enough for a real application. If it goes down, your app goes down. If traffic spikes, it has nowhere to go.
The standard solution is a Load Balancer in front of multiple app servers. The Load Balancer receives all incoming traffic and distributes it across your servers. If one server fails its health check, the Load Balancer stops sending traffic to it automatically.
This tutorial builds that setup using Pulumi and TypeScript. By the end, you will have:
- A private network connecting the Load Balancer and app servers
- Two app servers with public IPs but locked down by a firewall: HTTP is only accepted from the Load Balancer
- A public Hetzner Load Balancer routing traffic to both servers over the private network
- Health checks so the Load Balancer only sends traffic to servers that are ready
Prerequisites
- Completed Getting Started with Pulumi and TypeScript on Hetzner Cloud, or already familiar with Pulumi basics
- Hetzner Cloud API token
- Node.js 18 or later
- Pulumi CLI installed and logged in:
```shell
pulumi login --local
```
Example values used in this tutorial
| Resource | Value |
|---|---|
| Network CIDR | 10.44.0.0/16 |
| Private subnet | 10.44.10.0/24 |
| Load Balancer private IP | 10.44.10.10 |
| App server 1 private IP | 10.44.10.11 |
| App server 2 private IP | 10.44.10.12 |
| Location | nbg1 |
| Server type | cx23 |
| OS image | ubuntu-24.04 |
Architecture
```
Internet
  │
  └── HTTP → Load Balancer (public IP, e.g. 203.0.113.1)
        │  Private network (10.44.10.0/24)
        ├── App Server 1 (10.44.10.11) ← HTTP only from LB
        └── App Server 2 (10.44.10.12) ← HTTP only from LB
```

Key points:
- App servers have public IPs, but the firewall blocks HTTP from the open internet. HTTP is only accepted from the Load Balancer's private IP.
- SSH is only allowed from IPs you explicitly allow, such as your own machine.
- The Load Balancer uses the private network to reach the servers (`usePrivateIp: true`), so traffic between the LB and the servers never leaves Hetzner's internal network.
Step 1 - Set up the project
Create the project directory and install dependencies:
```shell
mkdir hetzner-lb-two-servers && cd hetzner-lb-two-servers
pulumi new typescript -y
npm install @pulumi/hcloud
```

Create a `.env` file to store your credentials:
```shell
HCLOUD_TOKEN=your_hetzner_api_token_here
PULUMI_CONFIG_PASSPHRASE=your_passphrase_here
```

Add `.env` to your `.gitignore` so it is never committed:
```shell
echo ".env" >> .gitignore
```

Load the environment variables:
```shell
set -a && source .env && set +a
```

Set your SSH key path and Hetzner token. Replace `~/.ssh/id_rsa.pub` with the path to your public key if it is different:
```shell
pulumi config set sshPublicKeyPath ~/.ssh/id_rsa.pub
pulumi config set --secret hcloudToken "$HCLOUD_TOKEN"
```

Allowlist your IP for SSH access. The command below fetches your current public IP automatically:
```shell
pulumi config set sshAllowedCidrs "[\"$(curl -4 https://ip.hetzner.com)/32\"]"
```

Check your values:
```shell
pulumi config get sshPublicKeyPath
pulumi config get hcloudToken
pulumi config get sshAllowedCidrs
```

Step 2 - Write the infrastructure code
Open `index.ts` and replace its contents with the following. Each section is explained below.
Step 2.1 - Configuration
```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as pulumi from "@pulumi/pulumi";
import * as hcloud from "@pulumi/hcloud";

const stack = pulumi.getStack();
const project = pulumi.getProject();
const config = new pulumi.Config();

const hcloudToken = config.requireSecret("hcloudToken");
const sshPublicKeyPath = config.require("sshPublicKeyPath");
const sshAllowedCidrs = config.getObject<string[]>("sshAllowedCidrs") ?? [];

const location = "nbg1";
const networkZone = "eu-central";
const networkCidr = "10.44.0.0/16";
const privateSubnetCidr = "10.44.10.0/24";
const lbPrivateIp = "10.44.10.10";
const appPrivateIps = ["10.44.10.11", "10.44.10.12"];

const namePrefix = `${project}-${stack}`;

// Expand a leading ~ to the home directory before reading the public key.
const sshPublicKey = fs
  .readFileSync(path.resolve(sshPublicKeyPath.replace(/^~/, process.env.HOME ?? "~")), "utf-8")
  .trim();

const provider = new hcloud.Provider("hcloud", { token: hcloudToken });
const opts: pulumi.CustomResourceOptions = { provider };
```

`config.requireSecret("hcloudToken")` reads the token from the encrypted stack config and keeps it marked as a secret throughout the program. The `namePrefix` ensures all resources are named consistently per project and stack, so you can run multiple stacks (dev, prod) side by side without naming conflicts.
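To make the naming scheme concrete, here is a small sketch using hypothetical project and stack names (in the real program they come from `pulumi.getProject()` and `pulumi.getStack()`):

```typescript
// Hypothetical values for illustration only; index.ts reads these from
// pulumi.getProject() and pulumi.getStack() at runtime.
const project = "hetzner-lb-two-servers";
const stack = "dev";

const namePrefix = `${project}-${stack}`;

// Every resource name hangs off the prefix, so a second stack ("prod")
// produces a disjoint set of names with no conflicts.
const networkName = `${namePrefix}-network`;
const serverName = (i: number): string => `${namePrefix}-app-${i}`;

console.log(networkName);   // hetzner-lb-two-servers-dev-network
console.log(serverName(1)); // hetzner-lb-two-servers-dev-app-1
```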
Step 2.2 - Private network
```typescript
const network = new hcloud.Network("network", {
  name: `${namePrefix}-network`,
  ipRange: networkCidr,
}, opts);

const subnet = new hcloud.NetworkSubnet("subnet", {
  networkId: network.id.apply(Number),
  type: "cloud",
  networkZone,
  ipRange: privateSubnetCidr,
}, opts);
```

The private network is what connects the Load Balancer to the app servers internally. When the Load Balancer sends traffic to an app server, it goes through this network, not through the public internet.
Step 2.3 - SSH key and firewall
```typescript
const sshKey = new hcloud.SshKey("ssh-key", {
  name: `${namePrefix}-key`,
  publicKey: sshPublicKey,
}, opts);

const firewall = new hcloud.Firewall("firewall", {
  name: `${namePrefix}-fw`,
  rules: [
    {
      description: "HTTP from Load Balancer only",
      direction: "in",
      protocol: "tcp",
      port: "80",
      sourceIps: [`${lbPrivateIp}/32`],
    },
    ...sshAllowedCidrs.map((cidr, i) => ({
      description: `SSH allowlist ${i + 1}`,
      direction: "in" as const,
      protocol: "tcp" as const,
      port: "22",
      sourceIps: [cidr],
    })),
  ],
}, opts);
```

The firewall has two kinds of rules:

- HTTP: only the Load Balancer's private IP (`10.44.10.10`) can reach port 80. Any HTTP request sent directly to a server's public IP is blocked.
- SSH: only the IPs you added to `sshAllowedCidrs` can connect. Everything else is blocked by default.
Step 2.4 - App servers
Each server gets a cloud-init script that installs a simple Python web server. Create the file `cloud-init/app-server.yaml`:

```shell
mkdir cloud-init
touch cloud-init/app-server.yaml
```

cloud-init/app-server.yaml:

```yaml
#cloud-config
write_files:
  - path: /var/www/html/index.html
    permissions: "0644"
    owner: root:root
    content: |
      <html>
        <body>
          <h1>Hello from ${SERVER_NAME}</h1>
          <p>Private IP: ${SERVER_PRIVATE_IP}</p>
        </body>
      </html>
  - path: /var/www/html/health
    permissions: "0644"
    owner: root:root
    content: "ok"
  - path: /etc/systemd/system/web.service
    permissions: "0644"
    owner: root:root
    content: |
      [Unit]
      Description=Simple web server
      After=network-online.target
      Wants=network-online.target

      [Service]
      Type=simple
      WorkingDirectory=/var/www/html
      ExecStart=/usr/bin/python3 -m http.server 80 --bind 0.0.0.0
      Restart=always
      RestartSec=2
      User=root

      [Install]
      WantedBy=multi-user.target

runcmd:
  - systemctl enable web.service
  - systemctl start web.service
```

`${SERVER_NAME}` and `${SERVER_PRIVATE_IP}` are placeholders that will be replaced per server in `index.ts`. The browser response shows these values, so you can tell which server handled your request.
The cloud-init file here runs a simple Python web server as a demo. In a real project, you can replace it with whatever your application needs: install your runtime, clone your repo, start your service. Two things must stay in place: your app must listen on port 80 (the Load Balancer routes to port 80), and the `/health` endpoint must return `ok` with a `200` status (the Load Balancer health check depends on it).
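The placeholder substitution itself is plain string replacement. A minimal sketch of what `index.ts` does with the template (example names only):

```typescript
// Minimal sketch of the substitution index.ts performs on the template.
// split/join replaces every occurrence of the placeholder; String.replace
// with a string pattern would only replace the first one.
const template =
  "<h1>Hello from ${SERVER_NAME}</h1><p>Private IP: ${SERVER_PRIVATE_IP}</p>";

function render(tpl: string, name: string, ip: string): string {
  return tpl
    .split("${SERVER_NAME}").join(name)
    .split("${SERVER_PRIVATE_IP}").join(ip);
}

console.log(render(template, "hetzner-lb-two-servers-dev-app-1", "10.44.10.11"));
// <h1>Hello from hetzner-lb-two-servers-dev-app-1</h1><p>Private IP: 10.44.10.11</p>
```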
Now add the server provisioning to `index.ts`:
```typescript
const cloudInitTemplate = fs.readFileSync(
  path.join(__dirname, "cloud-init", "app-server.yaml"),
  "utf-8"
);

const firewallId = firewall.id.apply(Number);

const servers = appPrivateIps.map((ip, i) => {
  const name = `${namePrefix}-app-${i + 1}`;
  const userData = cloudInitTemplate
    .split("${SERVER_NAME}").join(name)
    .split("${SERVER_PRIVATE_IP}").join(ip);

  return new hcloud.Server(`server-${i + 1}`, {
    name,
    serverType: "cx23",
    image: "ubuntu-24.04",
    location,
    sshKeys: [sshKey.id],
    firewallIds: [firewallId],
    publicNets: [{ ipv4Enabled: true, ipv6Enabled: false }],
    networks: [{ networkId: network.id.apply(Number), ip }],
    userData,
    labels: { managedBy: "pulumi", role: "app" },
  }, { ...opts, dependsOn: [subnet] });
});
```

`dependsOn: [subnet]` ensures the subnet exists before the servers try to attach to the network.
Step 2.5 - Load Balancer
```typescript
const lb = new hcloud.LoadBalancer("lb", {
  name: `${namePrefix}-lb`,
  loadBalancerType: "lb11",
  location,
}, opts);

const lbNetwork = new hcloud.LoadBalancerNetwork("lb-network", {
  loadBalancerId: lb.id.apply(Number),
  subnetId: subnet.id,
  ip: lbPrivateIp,
  enablePublicInterface: true,
}, { ...opts, dependsOn: [subnet, lb] });

const lbTargets = servers.map((server, i) =>
  new hcloud.LoadBalancerTarget(`lb-target-${i + 1}`, {
    type: "server",
    loadBalancerId: lb.id.apply(Number),
    serverId: server.id.apply(Number),
    usePrivateIp: true,
  }, { ...opts, dependsOn: [lbNetwork, server] })
);

const lbService = new hcloud.LoadBalancerService("lb-service", {
  loadBalancerId: lb.id,
  protocol: "http",
  listenPort: 80,
  destinationPort: 80,
  healthCheck: {
    protocol: "http",
    port: 80,
    interval: 10,
    timeout: 5,
    retries: 3,
    http: {
      path: "/health",
      statusCodes: ["200"],
      response: "ok",
    },
  },
}, { ...opts, dependsOn: lbTargets });
```

`usePrivateIp: true` is the key setting. It tells the Load Balancer to route traffic to each server over its private IP on the internal network, not over its public IP.
The health check polls `/health` every 10 seconds. If a server responds with `200 ok`, it stays in rotation. If it fails 3 times in a row, the Load Balancer stops sending traffic to it until it recovers.
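As a rough back-of-envelope on these settings (my own estimate of the timing, not a documented Hetzner guarantee), the window during which a dead server can still receive traffic follows from the interval, the retry count, and the probe timeout:

```typescript
// Health-check settings from the LoadBalancerService above.
const intervalSeconds = 10; // how often a probe fires
const retries = 3;          // consecutive failures before removal
const timeoutSeconds = 5;   // how long a single probe may hang

// Rough upper bound (assumption, not a documented guarantee): up to
// `retries` probes spaced `intervalSeconds` apart, plus one probe timeout.
const worstCaseSeconds = retries * intervalSeconds + timeoutSeconds;
console.log(worstCaseSeconds); // 35
```

So with these values, expect failover on the order of half a minute; shorten the interval if that is too slow for your application.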
Step 2.6 - Outputs
```typescript
export const loadBalancerIp = lb.ipv4;
export const appUrl = pulumi.interpolate`http://${lb.ipv4}/`;
export const healthUrl = pulumi.interpolate`http://${lb.ipv4}/health`;
```

Step 3 - Deploy
Preview the changes first to verify what will be created:
```shell
set -a && source .env && set +a
pulumi preview
```

You should see 11 resources to be created: the network, subnet, SSH key, firewall, two servers, the Load Balancer, its network attachment, two Load Balancer targets, and the Load Balancer service.
Deploy when ready:
```shell
pulumi up
```

Pulumi will prompt you to confirm. Select yes. The deployment takes about 1–2 minutes.
Step 4 - Verify
Get the Load Balancer URL:
```shell
pulumi stack output appUrl
```

Open it in your browser or run:

```shell
curl "$(pulumi stack output appUrl)"
```

Refresh a few times. You should see responses alternating between `app-1` and `app-2` as the Load Balancer distributes traffic across both servers.
Check the health endpoint:
```shell
curl "$(pulumi stack output healthUrl)"
# expected: ok
```

To confirm the firewall is working, try hitting an app server directly via its public IP. You should get no response:

```shell
# This should time out: HTTP is blocked on the server's public IP
curl --max-time 5 http://<server-public-ip>/
```

Step 5 - Clean up
When you are done testing:
```shell
set -a && source .env && set +a
pulumi destroy
pulumi stack rm dev
```

Conclusion
You now have a working Load Balancer setup with two app servers on Hetzner Cloud, deployed entirely through Pulumi TypeScript. The Load Balancer distributes traffic across both servers, health checks ensure only healthy servers receive traffic, and the firewall keeps the servers from being directly reachable over HTTP.
This is a solid foundation for a real deployment. Some directions to explore next:
- Remove the servers' public IPs and add a NAT gateway so they can still make outbound connections (package updates, API calls)
- Terminate TLS on the Load Balancer using a Hetzner managed certificate
- Scale beyond two servers by extending the `appPrivateIps` array
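On that last point, generating the private IPs instead of hard-coding them keeps the server count in one place. A sketch with a hypothetical helper (not part of the tutorial code), assuming you stay in the `10.44.10.0/24` subnet and start at `.11` to leave `.10` for the Load Balancer:

```typescript
// Hypothetical helper: derive n sequential private IPs in 10.44.10.0/24,
// starting at .11 so .10 stays reserved for the Load Balancer.
function appIps(n: number): string[] {
  if (n < 1 || n > 244) throw new Error("host count out of range for this /24");
  return Array.from({ length: n }, (_, i) => `10.44.10.${11 + i}`);
}

console.log(appIps(4));
// [ '10.44.10.11', '10.44.10.12', '10.44.10.13', '10.44.10.14' ]
```

With this in place, `const appPrivateIps = appIps(4);` is the only line you touch to scale out.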
Full source code: hetzner-private-lb-pulumi