Introduction
This tutorial explains how to provision vServer machines at Hetzner Cloud with the latest version of Fedora CoreOS, using Terraform Infrastructure as Code (IaC).
Prerequisites
- Basic knowledge of Hetzner Cloud
- Hetzner Cloud API token, generated in the Hetzner Cloud Console
- `terraform` CLI installed
Example terminology
- IP addresses: `<203.0.113.1>` and `<2001:db8:5678::1>`
Step 1 - Install Terraform Hetzner Cloud Provider
Create a new folder as the working directory for the next steps, e.g. `terraform-coreos`, and create a `versions.tf` file with the following content:
```hcl
# versions.tf
terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
  required_version = ">= 0.13"
}
```
Now execute `terraform init` to install the Terraform Hetzner Cloud Provider.
Step 2 - Build coreos-installer 🔨
Currently, the `coreos-installer` is not available as a prebuilt binary, so you need to install Rust's package manager `cargo`. Once the installation is complete, remember to configure your current shell as suggested below the message `Rust is installed now. Great!` and make sure you can use `rustc` and `cargo` (`rustc --version && cargo --version`).
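If Rust is not installed yet, the usual way to get `cargo` is via `rustup` (the standard upstream installer, shown here only as a reminder):

```bash
# Install the Rust toolchain (including cargo) via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Configure the current shell, as suggested by the installer
source "$HOME/.cargo/env"
```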
Build the binary on a Linux machine:

```bash
cargo install --target-dir . coreos-installer
```
If you get an error, you might be missing some prerequisites (e.g. `build-essential`, `pkg-config`, `libssl-dev`, `libzstd-dev`). After it is built, look for the `coreos-installer` binary. If the `coreos-installer` command is already working, you can use `ls -l $(which coreos-installer)` to find the location of the binary.
Later, Terraform will copy the binary to the vServer in rescue mode, so you have to place it as `coreos-installer` in your working directory. Building the binary in rescue mode on the vServer will fail if there is not enough space in the initramfs (as on a CX11).
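Assuming `cargo install` placed the binary in its default location (`~/.cargo/bin`), copying it into the working directory could look like this:

```bash
# Copy the built binary into the Terraform working directory
# (adjust the source path if `which coreos-installer` points elsewhere)
cp "$(which coreos-installer)" ./coreos-installer
```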
Step 3 - Terraform configuration 🤓
Step 3.1 - Required configuration files
To instruct Terraform how to set up and configure your servers, create a `main.tf` file:
```hcl
# main.tf

####
# Variables
##

variable "hcloud_token" {
  description = "Hetzner Cloud API Token"
  type        = string
}

variable "ssh_public_key_file" {
  description = "Local path to your public key"
  type        = string
  default     = "~/.ssh/id_rsa.pub"
}

variable "ssh_private_key_file" {
  description = "Local path to your private key"
  type        = string
  default     = "~/.ssh/id_rsa"
}

variable "ssh_public_key_name" {
  description = "Name of your public key to identify at Hetzner Cloud portal"
  type        = string
  default     = "My-SSH-Key"
}

variable "hcloud_server_type" {
  description = "vServer type name, lookup via `hcloud server-type list`"
  type        = string
  default     = "cx21"
}

variable "hcloud_server_datacenter" {
  description = "Desired datacenter location name, lookup via `hcloud datacenter list`"
  type        = string
  default     = "fsn1-dc14"
}

variable "hcloud_server_name" {
  description = "Name of the server"
  type        = string
  default     = "www1"
}

# Update version to the latest release of Butane
variable "tools_butane_version" {
  description = "See https://github.com/coreos/butane/releases for available versions"
  type        = string
  default     = "0.19.0"
}

####
# Infrastructure config
##

provider "hcloud" {
  token = var.hcloud_token
}

resource "hcloud_ssh_key" "key" {
  name       = var.ssh_public_key_name
  public_key = file(var.ssh_public_key_file)
}

resource "hcloud_server" "master" {
  name        = var.hcloud_server_name
  labels      = { "os" = "coreos" }
  server_type = var.hcloud_server_type
  datacenter  = var.hcloud_server_datacenter

  # Image is ignored, as we boot into rescue mode, but it is a required field
  image    = "fedora-39"
  rescue   = "linux64"
  ssh_keys = [hcloud_ssh_key.key.id]

  connection {
    host        = self.ipv4_address
    timeout     = "5m"
    private_key = file(var.ssh_private_key_file)
    # Root is the only available user in rescue mode
    user = "root"
  }

  # Wait for the server to be available
  provisioner "local-exec" {
    command = "until nc -zv ${self.ipv4_address} 22; do sleep 5; done"
  }

  # Copy config.yaml and replace the $ssh_public_key variable
  provisioner "file" {
    content     = replace(file("config.yaml"), "$ssh_public_key", trimspace(file(var.ssh_public_key_file)))
    destination = "/root/config.yaml"
  }

  # Copy the coreos-installer binary, as the initramfs does not have sufficient space to compile it in rescue mode
  provisioner "file" {
    source      = "coreos-installer"
    destination = "/usr/local/bin/coreos-installer"
  }

  # Install Butane in rescue mode
  provisioner "remote-exec" {
    inline = [
      "set -x",
      # Convert the Ignition YAML into JSON using Butane
      "wget -O /usr/local/bin/butane 'https://github.com/coreos/butane/releases/download/v${var.tools_butane_version}/butane-x86_64-unknown-linux-gnu'",
      "chmod +x /usr/local/bin/butane",
      "butane --strict < config.yaml > config.ign",
      # The coreos-installer binary is copied. If you have sufficient RAM available, you can also uncomment the
      # following two lines and comment out the `chmod +x` line to build coreos-installer in rescue mode
      # "apt install cargo",
      # "cargo install coreos-installer",
      "chmod +x /usr/local/bin/coreos-installer",
      # Download and install Fedora CoreOS to /dev/sda
      "coreos-installer install /dev/sda -i /root/config.ign",
      # Exit rescue mode and boot into CoreOS
      "reboot"
    ]
  }

  # Wait for the server to be available
  provisioner "local-exec" {
    command = "until nc -zv ${self.ipv4_address} 22; do sleep 15; done"
  }

  # Configure CoreOS after installation
  provisioner "remote-exec" {
    connection {
      host        = self.ipv4_address
      timeout     = "1m"
      private_key = file(var.ssh_private_key_file)
      # This user is configured in config.yaml
      user = "core"
    }

    inline = [
      "sudo hostnamectl set-hostname ${self.name}"
      # Add additional commands if needed
    ]
  }
}
```
Additionally, we need a Fedora CoreOS configuration file named `config.yaml`:
```yaml
# For docs, see: https://coreos.github.io/butane/specs/
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      groups:
        - docker
        - wheel
        - sudo
      ssh_authorized_keys:
        # Will be replaced by the Terraform script
        - $ssh_public_key
      # If you need ssh password login, generate a hash via `openssl passwd -1` and insert it below
      # password_hash: $1$1234567890abcdef...

# If you need ssh password login, uncomment the following lines
#
# storage:
#   files:
#     - path: /etc/ssh/sshd_config.d/20-enable-passwords.conf
#       mode: 0644
#       contents:
#         inline: |
#           # Fedora CoreOS disables SSH password login by default.
#           # Enable it.
#           # This file must sort before 40-disable-passwords.conf.
#           PasswordAuthentication yes
```
Step 3.2 - Adjust the configuration files
In the `Variables` section of `main.tf`, adjust the default values for `ssh_public_key_file`, `ssh_public_key_name`, `hcloud_server_type`, `hcloud_server_datacenter` and `hcloud_server_name` to your needs, or add a `terraform.tfvars` file to set your desired settings, as shown below.
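A minimal `terraform.tfvars` could look like this (all values are examples; adjust them to your environment):

```hcl
# terraform.tfvars (example values)
ssh_public_key_file      = "~/.ssh/id_ed25519.pub"
ssh_private_key_file     = "~/.ssh/id_ed25519"
ssh_public_key_name      = "My-SSH-Key"
hcloud_server_type       = "cx21"
hcloud_server_datacenter = "fsn1-dc14"
hcloud_server_name       = "www1"
```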
Optionally, have a look at the `config.yaml`. This file will be used by the `coreos-installer` to configure the OS. For the first try, you should leave it as it is. But after a successful provisioning, you will probably want to add `systemd` service definitions to run containers on it, as sketched below.
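As a rough sketch of such an addition (the unit name and container image are only examples), a `systemd` section at the top level of `config.yaml` could look like this:

```yaml
# Example only: run an nginx container via Podman on boot
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Run an nginx container
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStartPre=-/usr/bin/podman rm -f nginx
        ExecStart=/usr/bin/podman run --name nginx -p 80:80 docker.io/library/nginx:alpine
        ExecStop=/usr/bin/podman stop nginx
        Restart=always

        [Install]
        WantedBy=multi-user.target
```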
Step 4 - Get it up and running! 🚀
Run `terraform apply`, enter your Hetzner Cloud API token if prompted, and let the magic happen!
After a successful installation, you will see the message `Apply complete! Resources: 2 added, 0 changed, 0 destroyed.`.
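If you prefer not to enter the token interactively, Terraform can also read it from an environment variable (this is standard Terraform behavior, not specific to this setup):

```bash
# Provide the API token via an environment variable instead of the prompt
export TF_VAR_hcloud_token="<your Hetzner Cloud API token>"
terraform apply
```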
`terraform show` now outputs all the information about your new machine:
```hcl
# hcloud_server.master:
resource "hcloud_server" "master" {
    backups      = false
    datacenter   = "fsn1-dc14"
    id           = "1234567"
    image        = "fedora-39"
    ipv4_address = "<203.0.113.1>"
    ipv6_address = "<2001:db8:5678::1>"
    ipv6_network = "<2001:db8:5678::/64>"
    keep_disk    = false
    labels       = {
        "os" = "coreos"
    }
    location     = "fsn1"
    name         = "www1"
    rescue       = "linux64"
    server_type  = "cx21"
    ssh_keys     = [
        "1234567",
    ]
    status       = "running"
}
```
As defined in the `config.yaml`, a new user `core` has been added, so you can connect via `ssh core@<203.0.113.1>` to your fresh Fedora CoreOS machine!
Step 5 - Do it again 🔂
You can now incrementally extend the two configuration files to set up your services/containers. Run `terraform destroy`, confirm with `yes`, then adjust the files and recreate the infrastructure via `terraform apply`. But keep an eye on the Hetzner Cloud Console to ensure that no "ghost machines" have been spawned 👻
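In shorthand, the iteration loop looks like this:

```bash
terraform destroy   # confirm with "yes"
# ... adjust config.yaml and/or main.tf ...
terraform apply     # confirm with "yes"
```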
Conclusion
You now have a fully working Fedora CoreOS machine at Hetzner Cloud, set up with Terraform. 🎉