Introduction
In this how-to you will learn how to use Hetzner Object Storage as a backend for Terraform. This setup allowed me to centralize my Terraform backends and made it easy to reuse remote states across different projects. To aid newcomers, I will be using the web UI to show all steps.
As an example, this tutorial will use Terraform to create a new Hetzner cloud server. Terraform will store the information about that cloud server (e.g. name, ID, IPv4 address) in a Hetzner Bucket. The final step shows how to retrieve that server data from the Bucket via a different Terraform project.
Prerequisites
- Basic knowledge about the Hetzner Cloud
- Terraform or OpenTofu installed. I will be using OpenTofu, but everything should be the same for Terraform.
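If you want to verify the installation before starting, both tools can print their version; this quick check is optional:

```bash
# Optional: confirm the CLI you installed is available on your PATH
tofu version        # if you use OpenTofu
terraform version   # if you use Terraform
```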
Step 1 - S3 Setup
First, let's create an S3 Bucket and generate credentials. I will create a new project and an S3 Bucket inside this new project.
Create a new S3 Bucket and S3 credentials as explained in the official getting started guides.
Save the Access Key and Secret Key somewhere safe.
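Optionally, you can already verify that the Bucket and the new credentials work, for example with the AWS CLI. This is just a sanity check and not required for the rest of the tutorial; adjust the endpoint if your Bucket is not in nbg1:

```bash
# Optional sanity check with the AWS CLI (values below are placeholders)
export AWS_ACCESS_KEY_ID="YOUR_HETZNER_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_HETZNER_SECRET_KEY"
export AWS_DEFAULT_REGION="eu-central"

# List the (still empty) Bucket through the Hetzner S3-compatible endpoint
aws s3 ls "s3://YOUR_BUCKET_NAME" --endpoint-url "https://nbg1.your-objectstorage.com"
```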
Step 2 - Terraform setup
Let's create an empty directory and add a main.tf file inside it:

```bash
mkdir ~/terraform-backend-tutorial
cd ~/terraform-backend-tutorial
touch main.tf
```

Replace the placeholders below with your actual values.
For details and an example configuration, see Terraform » S3. Check available provider versions at Terraform » hetznercloud/hcloud.
```bash
# Run these commands in the terminal to set the variables
export BUCKET_NAME="YOUR_BUCKET_NAME"
export ENDPOINT="https://nbg1.your-objectstorage.com"

# Run this command in the terminal to add the content to main.tf
cat <<EOF > ~/terraform-backend-tutorial/main.tf
terraform {
  backend "s3" {
    bucket = "$BUCKET_NAME"
    key    = "terraform-tutorial/terraform.tfstate"
    endpoints = {
      s3 = "$ENDPOINT"
    }
    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    use_path_style              = true
  }
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "1.59.0"
    }
  }
}
EOF
```

Export these variables in your terminal session, using the Access Key and Secret Key that you saved before. You can leave AWS_DEFAULT_REGION as is; since the backend skips region validation, any value works:
```bash
export AWS_ACCESS_KEY_ID="YOUR_HETZNER_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_HETZNER_SECRET_KEY"
export AWS_DEFAULT_REGION="eu-central"
```

After this, choose one of these options:
- If you are using Terraform, run:
  ```bash
  terraform init
  ```
  If the output mentions `Terraform has been successfully initialized!`, congrats! The setup was a success.
- If you are using OpenTofu like me, run:
  ```bash
  tofu init
  ```
  If the output mentions `OpenTofu has been successfully initialized!`, congrats! The setup was a success.
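If you prefer not to export the credentials as environment variables, both tools also accept them as partial backend configuration at init time. A minimal sketch using the S3 backend's `access_key` and `secret_key` arguments (shown with `tofu`; the same flags work with `terraform`):

```bash
# Alternative to the AWS_* environment variables above.
# Note: values passed on the command line may end up in your shell history.
tofu init \
  -backend-config="access_key=YOUR_HETZNER_ACCESS_KEY" \
  -backend-config="secret_key=YOUR_HETZNER_SECRET_KEY"
```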
Step 3 - Create a cloud server on Hetzner Console
Let's head over to the Hetzner Console and generate an API token as explained in the official getting started guide. Make sure you set "Read & Write" permissions. Terraform will use this API token to create an example cloud server, so save it somewhere, since we will need it soon.
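If you want to confirm the token is valid before handing it to Terraform, you can call the Hetzner Cloud API directly. This optional check simply lists the servers in the project (empty for a fresh project):

```bash
# Optional: verify the API token by listing the project's servers
curl -H "Authorization: Bearer your-hcloud-api-token" \
  "https://api.hetzner.cloud/v1/servers"
```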
Cloud server specs:
- Name: tutorial-vm
- Image: ubuntu-24.04
- Server Type: cx23
- Location: nbg1 (Nuremberg)
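If you have the optional `hcloud` CLI installed, you can double-check that this server type and location are currently available; this step can be skipped:

```bash
# Optional: requires the hcloud CLI and HCLOUD_TOKEN to be set
export HCLOUD_TOKEN="your-hcloud-api-token"
hcloud server-type list | grep cx23   # confirm the server type exists
hcloud location list                  # confirm nbg1 is listed
```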
Add this block to main.tf:
```bash
cat <<EOF >> ~/terraform-backend-tutorial/main.tf
# Hetzner Cloud related configs
variable "hcloud_token" {
  sensitive = true
}

provider "hcloud" {
  token = var.hcloud_token
}

# Provision a single cloud server on Hetzner
resource "hcloud_server" "vm" {
  name        = "tutorial-vm"
  image       = "ubuntu-24.04"
  server_type = "cx23"
  location    = "nbg1"

  public_net {
    ipv4_enabled = true
    ipv6_enabled = false
  }

  labels = {
    managed_by = "terraform"
  }
}
EOF
```

Export the token as a variable that will be picked up by Terraform/OpenTofu:
```bash
export TF_VAR_hcloud_token="your-hcloud-api-token"
```

Now we can plan and apply main.tf:
- If you're using Terraform:
  ```bash
  terraform plan
  terraform apply
  ```
- If you're using OpenTofu:
  ```bash
  tofu plan
  tofu apply
  ```
Example output:
```
Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  OpenTofu will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

[...]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```

Congrats, you have created a VM!
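At this point the state file should already exist in your Bucket. If the AWS credentials and the `BUCKET_NAME`/`ENDPOINT` variables from Step 2 are still exported in your shell, you can optionally confirm this; the object key matches the `key` set in the backend block:

```bash
# Optional: confirm the remote state object was written to the Bucket
aws s3 ls "s3://$BUCKET_NAME/terraform-tutorial/" --endpoint-url "$ENDPOINT"
# You should see a terraform.tfstate object in the listing
```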
Now we also need to define outputs, so the cloud server's data can be accessed remotely from other projects.
Append this to main.tf:
```bash
cat <<EOF >> ~/terraform-backend-tutorial/main.tf
output "vm_id" {
  description = "ID of the provisioned VM"
  value       = hcloud_server.vm.id
}

output "vm_ipv4" {
  description = "Public IPv4 address of the VM"
  value       = hcloud_server.vm.ipv4_address
}

output "vm_name" {
  description = "Name of the VM"
  value       = hcloud_server.vm.name
}
EOF
```

- If you are using Terraform, run:
  ```bash
  terraform apply
  ```
- If you are using OpenTofu, run:
  ```bash
  tofu apply
  ```
Example output:
```
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

vm_id = "115987908"
vm_ipv4 = "203.0.113.1"
vm_name = "tutorial-vm"
```

Step 4 - Access the cloud server state from a different project
Now let's create a new Terraform project and access the cloud server state.

```bash
mkdir ~/terraform-read-state
cd ~/terraform-read-state
touch main.tf
```

Contents of main.tf:
```bash
# Run these commands in the terminal to set the variables
export BUCKET_NAME="YOUR_BUCKET_NAME"
export ENDPOINT="https://nbg1.your-objectstorage.com"

# Run this command in the terminal to add the content to main.tf
cat <<EOF > ~/terraform-read-state/main.tf
terraform {
  backend "s3" {
    bucket = "$BUCKET_NAME"
    key    = "terraform-read-state/terraform.tfstate"
    endpoints = {
      s3 = "$ENDPOINT"
    }
    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    use_path_style              = true
  }
}

data "terraform_remote_state" "vm_state" {
  backend = "s3"
  config = {
    bucket = "$BUCKET_NAME"
    key    = "terraform-tutorial/terraform.tfstate"
    endpoints = {
      s3 = "$ENDPOINT"
    }
    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    use_path_style              = true
  }
}

output "vm_id" {
  description = "ID of the provisioned VM"
  value       = data.terraform_remote_state.vm_state.outputs.vm_id
}

output "vm_ipv4" {
  description = "Public IPv4 address of the VM"
  value       = data.terraform_remote_state.vm_state.outputs.vm_ipv4
}

output "vm_name" {
  description = "Name of the VM"
  value       = data.terraform_remote_state.vm_state.outputs.vm_name
}
EOF
```

Now let's run:
- If you are using Terraform:
  ```bash
  terraform init
  terraform apply
  ```
- If you are using OpenTofu:
  ```bash
  tofu init
  tofu apply
  ```
Example output:

```
❯ tofu init
OpenTofu has been successfully initialized!

❯ tofu apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

vm_id = "115987908"
vm_ipv4 = "203.0.113.1"
vm_name = "tutorial-vm"
```

If everything worked as expected, you should see the cloud server data as output. If not, trace back through the steps to check whether you missed something.
If you see the correct output, then hooray! Also, if you take a closer look, you can see that the hcloud provider is not used in the new main.tf at all, yet we still got the server data straight from the remote state.
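Since these are ordinary outputs, you can also consume them from scripts in the consuming project. A small sketch (shown with `tofu output`; use `terraform output` if you use Terraform):

```bash
# Read a single output value without quotes, suitable for scripting
VM_IP=$(tofu output -raw vm_ipv4)

# Example: check that the server is reachable
ping -c 3 "$VM_IP"
```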
Conclusion
You've successfully set up Hetzner Object Storage as a Terraform backend and learned how to share state between projects. This approach helps you manage infrastructure more efficiently and keeps your Terraform states centralized. You can now use this setup for your own projects and teams.
For more information, check out the Hetzner API documentation and Terraform S3 backend docs.