Introduction
Technically, Hetzner Cloud Volumes can only be mounted on one server. Over the last year I ran a production cluster with a Hetzner Cloud Volume mounted on multiple Hetzner Cloud Servers via sshfs. This tutorial describes how to set this up on two fresh debian-12 cx11 instances using hcloud, the command-line interface for Hetzner Cloud, and how to configure sshfs manually or with an optional Ansible role.
Don't worry if you're not familiar with Ansible! All the steps in this tutorial can be executed manually; Ansible merely automates them (very useful in case you'd like to add more servers but dislike repetitive tasks).
Prerequisites
We assume that hcloud and ansible are installed. If this isn't the case, please install them. The hcloud/cli documentation and Ansible documentation have installation instructions for various operating systems.
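If you still need them, one possible route is Homebrew on macOS for hcloud and pip for Ansible; check the linked documentation for the recommended method on your platform.
# macOS: install the hcloud CLI via Homebrew
brew install hcloud
# Any platform with Python: install Ansible via pip
pip install ansible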
Limitations
- The aim of this tutorial is to provide a proof of concept and minimum working example to mount a Hetzner Cloud Volume on multiple Hetzner Cloud Servers. All other configuration that is not strictly needed for the sshfs setup has thus been left at the default values. We purposely did not harden the servers, so that this tutorial only shows the steps necessary for the sshfs mount. We refer to the Securing SSH tutorial for instructions on how one could harden the server by securing the SSH service, but caution that our sshfs implementation currently relies on PermitRootLogin yes, whereas the aforementioned tutorial recommends setting PermitRootLogin no. One could further secure the setup by mounting the Cloud Volume as a non-root user.
- This tutorial limits the scope to 2 servers only, but the setup can easily be extended to N servers.
- The order of (re)booting the servers is important! If <server1> is not reachable from <server2> on boot, then the Volume will of course not be mounted. The described solution is thus not entirely foolproof and caution is required when (re)booting your servers. In some cases a manual mount may be needed (see the sketch after this list).
- This setup does not work with hc-utils, but requires manual configuration of the network via DHCP.
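For reference, if the mount is missing after a reboot, it can be restored by hand on <server2> once <server1> is reachable again. A minimal sketch, assuming the fstab entry from Step 3.3 is in place:
# Execute on server2 once server1 is reachable again
mount /mnt/storage
# Or mount everything listed in /etc/fstab that isn't mounted yet
mount -a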
Step 1 - Set up hcloud
If you haven't already, first create a project <myproject> in your Hetzner Cloud Console and generate an API token via Security > API Tokens > Generate API Token. Next, open up a terminal and execute the following.
source <(hcloud completion bash) # if you want command completion - trust me, you do!
hcloud context create myproject
hcloud context list
hcloud context use myproject # only if it isn't active just yet
Note that hcloud context create stores the API token in ~/.config/hcloud/cli.toml.
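To verify that the context and token work, any read-only command will do, for example listing the (still empty) resources in the project:
# Both commands should succeed and return empty lists for a fresh project
hcloud server list
hcloud volume list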
Step 2 - Create services
With the project and API token in place it is time to fire up 2 servers and 1 Volume. Note that the Volume named <storage> will belong to one of your servers (<server1>), while the other server(s) mount it later on via sshfs, with traffic over a Hetzner Cloud Network. For convenience, we first add our SSH public key to our Hetzner Cloud project so that we can add the key to the servers on creation.
# If you don't have an SSH key, first create it
ssh-keygen -t rsa -C "MyComputer --> Hetzner Cloud"
# Add the SSH key to the Hetzner Cloud project
hcloud ssh-key create --public-key-from-file=~/.ssh/id_rsa.pub --name my-ssh-key
# Create a Hetzner Cloud Network
hcloud network create --name network --ip-range 10.0.0.0/24
hcloud network add-subnet network --type server --network-zone eu-central --ip-range 10.0.0.0/28
# Create 2 cx11 servers with debian-12 in Nuremberg that are accessible with your SSH key
hcloud server create --ssh-key my-ssh-key --location nbg1 --type cx11 --image debian-12 --name server1
hcloud server create --ssh-key my-ssh-key --location nbg1 --type cx11 --image debian-12 --name server2
# Add the servers to the network
hcloud server attach-to-network server1 --network network --ip 10.0.0.2
hcloud server attach-to-network server2 --network network --ip 10.0.0.3
# Create 1 Volume of 10GB that is ext4 formatted, belongs to server1, and added to /etc/fstab to automount
hcloud volume create --size 10 --name storage --server server1 --format ext4 --automount
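At this point the servers, network, and Volume should all exist. A quick way to double-check before moving on:
# Confirm the servers, network, and Volume were created
hcloud server list
hcloud network describe network
hcloud volume describe storage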
Step 3 - Configure sshfs
Step 3.1 - Set up the network manually
We follow the Hetzner Cloud Docs section on "Manual configuration via DHCP". It seems that Hetzner Cloud's auto-configuration package hc-utils sets up the network in such a way that the interface for the virtual private network is not available in time on boot for sshfs to mount via fstab. I noticed that the mount on boot does work when sshfs (Step 3.2) is set up to connect to the external IP address of the server. However, this comes at the expense of more bandwidth used (unlike traffic over the virtual private network, which does not count towards the server's 20 TB bandwidth) and a marginally longer route (more hops, although they still seem to stay within the Hetzner Cloud data center network). The implementation does, however, work with sshfs on the internal IP address of the server within the Hetzner Cloud Network when the latter is set up via manual configuration. Note that we thus do not use Hetzner Cloud's auto-configuration package hc-utils for our private network, but use manual configuration instead!
# Execute on server2 and on server1
# First remove hc-utils
apt-get remove hc-utils
# Then configure the manual setup of the private network
touch /etc/network/interfaces.d/61-my-private-network.cfg
# Add the private network interface
cat <<EOF >> /etc/network/interfaces.d/61-my-private-network.cfg
auto ens10:0
iface ens10:0 inet dhcp
EOF
service networking restart
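To verify that the private network interface came up correctly, check that it received its internal IP and that the peer answers over the private network. For example, on server2 (the interface name ens10 matches the config above, but it may differ on other server types):
# Execute on server2
ip addr show ens10 # should list 10.0.0.3 on the alias interface
ping -c 3 10.0.0.2 # server1's internal IP should reply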
Step 3.2 - Set up the (root) SSH key
We are now going to configure sshfs to mount the Volume from <server1> to <server2>.
# SSH to server2 using your SSH key pair
ssh -i ~/.ssh/id_rsa root@$(hcloud server ip server2)
# Execute these commands on server2
# Create the .ssh dir with mode 700 if it doesn't exist
if [ ! -d /root/.ssh ]; then mkdir /root/.ssh && chmod 700 /root/.ssh; fi
cd /root/.ssh
# Create .ssh/config with mode 600 if it doesn't exist
if [ ! -f /root/.ssh/config ]; then touch /root/.ssh/config && chmod 600 /root/.ssh/config; fi
# Add the config to SSH from server2 to server1 to /root/.ssh/config
cat <<EOF > /root/.ssh/config
Host server1
HostName 10.0.0.2
Port 22
User root
IdentityFile ~/.ssh/id_rsa
IdentitiesOnly yes
LogLevel QUIET
EOF
# Create server2's SSH id_rsa as promised in the config above
if [ ! -f /root/.ssh/id_rsa ]; then ssh-keygen -t rsa -C "server2 --> server1"; fi
cat /root/.ssh/id_rsa.pub # and copy the contents
# Now disconnect from server2
# SSH to server1 using your SSH key pair
ssh -i ~/.ssh/id_rsa root@$(hcloud server ip server1)
# Execute these commands on server1
if [ ! -d /root/.ssh ]; then mkdir /root/.ssh && chmod 700 /root/.ssh; fi
if [ ! -f /root/.ssh/authorized_keys ]; then touch /root/.ssh/authorized_keys && chmod 644 /root/.ssh/authorized_keys; fi
vi /root/.ssh/authorized_keys # and add the contents of server2's /root/.ssh/id_rsa.pub
# Disconnect from server1 again, and connect to server2
ssh -i ~/.ssh/id_rsa root@$(hcloud server ip server2)
# Execute on server2: connect from server2 to server1
ssh server1 # uses the rules set in the ssh config
# The first time you connect you'll accept server1's host key
# The authenticity of host '10.0.0.2 (10.0.0.2)' can't be established.
# ECDSA key fingerprint is SHA256:hash.
# Are you sure you want to continue connecting (yes/no)? yes
# Check the fstab on server1, and ensure that it will be mounted to /mnt/storage
cat /etc/fstab
# /dev/disk/by-id/scsi-0HC_Volume_somenumber /mnt/storage ext4 discard,nofail,defaults 0 0
# Disconnect from server1 and proceed to the next step. You can stay connected to server2
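Before wiring up fstab, it is worth confirming that server2 can now reach server1 non-interactively and that the Volume is indeed mounted there. A quick check from server2:
# Execute on server2: should print the Volume's filesystem without prompting for anything
ssh server1 df -h /mnt/storage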
Step 3.3 - Set up sshfs
We can add sshfs to /etc/fstab to auto-mount the Volume on boot. We assume you still have a terminal with an SSH connection to <server2> open.
# Execute on server2
# Install sshfs
apt update && apt install sshfs
# Create the mount point
mkdir /mnt/storage
# Add the Cloud Volume that sits on server1 to /etc/fstab by mounting it via sshfs
# Note that the location of the Cloud Volume on server1 is /mnt/storage, and
# for convenience we chose to mount it to /mnt/storage on server2
cat <<EOF >> /etc/fstab
sshfs#server1:/mnt/storage /mnt/storage fuse defaults,allow_other,reconnect,_netdev,users,ServerAliveInterval=15,ServerAliveCountMax=3 0 0
EOF
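The new fstab entry only takes effect on boot or when mounting manually. To test it right away without a reboot (the test file name below is just an example):
# Execute on server2
mount -a # mounts the new sshfs entry from /etc/fstab
df -h /mnt/storage # should report a fuse.sshfs filesystem served by server1
# Round-trip test: write on server2, read back on server1
echo "hello from server2" > /mnt/storage/sshfs-test.txt
ssh server1 cat /mnt/storage/sshfs-test.txt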
Step 4 - Provision the servers with Ansible (Optional)
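# Clone the repository containing the Ansible role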
git clone git@github.com:tlrh314/ansible-hetzner-cloud-volume-sshfs.git
cd ansible-hetzner-cloud-volume-sshfs
# Setup Ansible
virtualenv --python=python3 .venv
source .venv/bin/activate
pip install ansible hcloud
# Setup shell variables for the Ansible hcloud plugin to auto-detect the servers
# to provision through the hcloud cli
export HCLOUD_CONTEXT=myproject
export HCLOUD_TOKEN=mytoken
# Create the setup at Hetzner Cloud (replacement of step 2)
./create.sh -c
# Provision the servers with Ansible (replacement of steps 3.1-3.3)
ansible-playbook provision.yml
# Destroy the setup at Hetzner Cloud
./create.sh -d
Conclusion
Congratulations, you now have a Hetzner Cloud Volume that is available on all of your Hetzner Cloud Servers, overcoming the "limitation" that a Volume is only available on one server. Moreover, you can also mount the Cloud Volume on your dedicated root server. It's time for a celebratory beer.