Introduction
Since Proxmox VE 8.2 (April 2024), the official proxmox-auto-install-assistant tool allows you to embed an answer file directly into the installer ISO. The result is a fully unattended installation: boot the ISO, walk away, and come back to a running Proxmox node — no VNC session, no keyboard input, no interactive wizard.
This tutorial walks through the entire process on a Hetzner dedicated server (AX41-NVMe) using Hetzner's Rescue System. Everything runs on the rescue system itself — no local tooling required. Steps covered:
- Boot your server into Rescue Mode and detect its boot mode (Legacy or UEFI).
- Gather network and disk information using `proxmox-auto-install-assistant`.
- Write an `answer.toml` that configures networking, ZFS storage, and interface name pinning.
- Build a customized ISO directly on the rescue server using a Trixie chroot.
- Run the modified ISO inside QEMU — writing directly to the physical NVMe disks.
- Reboot the server into a fresh Proxmox VE node.
Note: This tutorial is the spiritual successor to Installing any OS on Hetzner via QEMU/VNC in Rescue Mode. The approach is cleaner and fully unattended.
Prerequisites
- A Hetzner dedicated server (this tutorial uses an AX41-NVMe with 2 × 512 GB NVMe drives)
- Access to Hetzner Robot to activate Rescue Mode
- An SSH key added to your Hetzner account
- A local machine to SSH into the server from
No additional software is required on your local machine — everything is done inside the Rescue System.
Step 1 - Activate Rescue Mode
- Log in to Hetzner Robot.
- Select your server → Rescue tab.
- Choose Linux and your SSH public key, then click Activate rescue system.
- Go to the Reset tab and trigger an automatic hardware reset.
After roughly 60 seconds, the server will boot into a minimal Debian-based Rescue System.
Connect via SSH:
```shell
ssh root@<YOUR_SERVER_IP>
```

Step 2 - Detect the Server Boot Mode
The server's firmware boot mode (Legacy BIOS or UEFI) determines which QEMU command is used in Step 8. Check it now to know which path to follow:
```shell
[ -d /sys/firmware/efi ] && echo "UEFI" || echo "Legacy"
```

If UEFI: proceed as written — Step 8 uses a UEFI QEMU command with OVMF firmware.
If Legacy: Step 8 uses a simplified QEMU command without OVMF. You can also optionally switch your server to UEFI mode first — see the note below.
Switching to UEFI (optional): Hetzner provides KVM access on request at no extra cost. With a KVM session you can enter the server's BIOS and change the boot mode to UEFI. When doing so, also disable CSM Support (Compatibility Support Module). The Hetzner Rescue System boots via PXE, and leaving CSM enabled alongside UEFI mode prevents PXE from completing — the server will fail to boot into rescue. With UEFI mode on and CSM off, PXE works correctly. After saving and rebooting into rescue, the UEFI QEMU path applies. See Hetzner's UEFI documentation for details. If you prefer not to use KVM, the Legacy path works equally well.
Step 3 - Install proxmox-auto-install-assistant
proxmox-auto-install-assistant is distributed via the Proxmox package repository. At the time of writing, the Hetzner Rescue System runs Debian Bookworm, which only provides version 8.x of the tool. Version 9.x is required to build the ISO with [network.interface-name-pinning] support — that is handled in Step 7 via a Trixie chroot. For now, install the available Bookworm version, which is sufficient for the device-info hardware inspection commands in Step 4.
```shell
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget -qO - https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  | gpg --dearmor > /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
apt update
apt install -y proxmox-auto-install-assistant
```

Step 4 - Gather Network and Disk Information
Use the device-info subcommand to inspect your hardware. This output shows the exact property values to use in answer.toml.
Step 4.1 - Network interface
```shell
proxmox-auto-install-assistant device-info -t network
```

Example output:
```json
{
  "nics": {
    "eth0": {
      "ID_NET_NAME_MAC": "enx00005e005300",
      "ID_NET_NAME_PATH": "enp9s0",
      ...
    }
  }
}
```

Note the `ID_NET_NAME_MAC` value. It is the MAC address without colons, prefixed with `enx`. Strip the `enx` prefix and prepend `*` to build the filter pattern:
```
ID_NET_NAME_MAC = "enx00005e005300"  →  filter value: "*00005e005300"
```

The `*` wildcard matches the `enx` prefix that the kernel adds. You can also set the filter to `"*"` to select the first available interface, but specifying the MAC is more reliable on servers with multiple NICs.
Besides ID_NET_NAME_MAC, other udev properties can be used as filters — for example ID_NET_NAME_PATH to match by PCI path. See the Proxmox filter documentation for the full list.
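The conversion from MAC address to filter value can also be scripted. A minimal sketch (the MAC below is an example from the RFC 7042 documentation range; substitute your own):

```shell
# Derive the answer.toml filter value from a MAC address.
# On the server you can obtain the MAC with: ip link show eth0
MAC="00:00:5e:00:53:00"               # example value -- replace with your MAC
FILTER="*$(echo "$MAC" | tr -d ':')"  # strip colons, prepend the wildcard
echo "$FILTER"                        # prints *00005e005300
```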
Also note your IP address, prefix length, and gateway:
```shell
ip -c a
ip -c r
```

MAC and IP address
You can find the MAC and IP address under `eth0`:

```
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether <your-mac-address> brd ff:ff:ff:ff:ff:ff
    altname enp8s0
    inet <your-ip-address>/<your-prefix-length> scope global eth0
       valid_lft forever preferred_lft forever
```
Gateway
You can find the gateway in the default route:

```
default via <your-gateway-ip> dev eth0
```
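Both values can also be captured non-interactively. A sketch, assuming the main interface is `eth0` as on the rescue system:

```shell
# Capture the CIDR address and default gateway for reuse in answer.toml.
IP_CIDR=$(ip -o -4 addr show eth0 | awk '{print $4}')   # e.g. 203.0.113.10/26
GATEWAY=$(ip -4 route show default | awk '{print $3}')  # e.g. 203.0.113.1
echo "cidr=${IP_CIDR} gateway=${GATEWAY}"
```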
Step 4.2 - Disks
```shell
proxmox-auto-install-assistant device-info -t disk
```

Example output for AX41-NVMe:
```json
{
  "disks": {
    "nvme0n1": {
      "ID_MODEL": "SAMSUNG MZVLB512HBJQ-00000",
      "ID_SERIAL": "SAMSUNG_MZVLB512HBJQ-00000_XXXXXXXXXXXX_1",
      ...
    },
    "nvme1n1": {
      "ID_MODEL": "SAMSUNG MZVLB512HBJQ-00000",
      "ID_SERIAL": "SAMSUNG_MZVLB512HBJQ-00000_YYYYYYYYYYYY_1",
      ...
    }
  }
}
```

Note the disk names (`nvme0n1`, `nvme1n1`) — these go into the `disk-list` of `answer.toml`. Because QEMU is run with NVMe device emulation in Step 8, the installer sees the same names as on bare metal, so no translation is needed.
Tip: Instead of `disk-list`, you can also select disks using filters on properties like `ID_MODEL` or `ID_SERIAL`. The same filter documentation applies to both disk and network selection.
Step 5 - Download the Proxmox VE ISO
Create a working directory and download the latest ISO from the Proxmox downloads page:
```shell
mkdir -p /tmp/work
cd /tmp/work
curl -O https://enterprise.proxmox.com/iso/proxmox-ve_9.1-1.iso
```

Step 6 - Write the Answer File
Create /tmp/work/answer.toml. Replace the placeholder values with those gathered in Step 4.
```shell
cat > /tmp/work/answer.toml << 'EOF'
[global]
keyboard = "en-us"
country = "de"
fqdn = "pve.example.com"
mailto = "admin@example.com"
timezone = "Europe/Berlin"
root-password-hashed = "$6$rounds=656000$..."
root-ssh-keys = [
    "ssh-ed25519 AAAA..."
]
reboot-mode = "power-off"

[network]
source = "from-answer"
cidr = "203.0.113.10/26"
gateway = "203.0.113.1"
dns = "1.1.1.1"
filter.ID_NET_NAME_MAC = "*00005e005300"

[network.interface-name-pinning]
enabled = true

[network.interface-name-pinning.mapping]
"00:00:5e:00:53:00" = "nic0"

[disk-setup]
filesystem = "zfs"
zfs.raid = "raid0"
disk-list = [
    "nvme0n1",
    "nvme1n1"
]
EOF
```

Replace the placeholders:
| Placeholder | What to put here |
|---|---|
| `pve.example.com` | Your server's hostname |
| `admin@example.com` | Your email for Proxmox alerts |
| `$6$rounds=656000$...` | Hashed root password (see below) |
| `ssh-ed25519 AAAA...` | Your SSH public key |
| `203.0.113.10/26` | Your server IP and prefix |
| `203.0.113.1` | Your default gateway |
| `1.1.1.1` | DNS server (e.g. `1.1.1.1`, or Hetzner's resolver `185.12.64.1`) |
| `*00005e005300` | `*` + your MAC without colons (from Step 4.1) |
| `"00:00:5e:00:53:00"` | Your MAC with colons, lowercase (from Step 4.1) |
Generating a hashed root password

`root-password-hashed` expects a crypt hash (the `$6$...` form is SHA-512 crypt). Generate one on the rescue system, for example with `mkpasswd -m sha-512` (from the `whois` package) or `openssl passwd -6`. Enter your desired password when prompted and paste the resulting hash into the answer file.
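For example, with `openssl passwd -6` (typically available on the rescue system; the password below is a placeholder):

```shell
# Generate a SHA-512 crypt hash for the root-password-hashed field.
# 'MySecretPassword' is a placeholder -- substitute your real password.
openssl passwd -6 'MySecretPassword'
# The output begins with $6$ followed by the salt and hash.
```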
Network interface name pinning

During installation inside QEMU, the virtual NIC receives a kernel-assigned name based on its PCI path inside the VM, which differs from the name the same naming rules produce on bare metal (for example `enp9s0` in Step 4.1). The `[network.interface-name-pinning]` section pins the interface to a stable custom name (`nic0`) based on its MAC address, so the network configuration written during installation remains valid after rebooting into the bare-metal system.
reboot-mode

The `reboot-mode = "power-off"` setting makes the installed system power off instead of rebooting once installation finishes. Inside QEMU this causes the VM to shut down and QEMU to exit cleanly, rather than booting the freshly installed Proxmox inside the VM.
ZFS RAID level

The `zfs.raid` option selects the pool layout. This tutorial uses `raid0` (striping) across both NVMe drives, which maximises capacity but offers no redundancy: if either disk fails, the pool is lost. For redundancy, use `raid1` (mirroring) instead, at the cost of half the usable capacity.
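For a mirrored setup, the disk section of `answer.toml` would read (a sketch; same disks as above):

```toml
[disk-setup]
filesystem = "zfs"
zfs.raid = "raid1"
disk-list = [
    "nvme0n1",
    "nvme1n1"
]
```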
First-boot hook (optional)

The answer file also supports running a custom script on the first boot of the installed system, which is useful for unattended post-install configuration (for example, the repository changes from Step 10). See the Proxmox automated installation documentation for the relevant `[first-boot]` options.
Step 7 - Build the Modified ISO
The [network.interface-name-pinning] feature requires proxmox-auto-install-assistant version 9.x, which is only available in the Proxmox Trixie repository. The Trixie .deb requires libc6 ≥ 2.39, but Bookworm ships 2.36, so it cannot be installed directly on the rescue system.
The solution is a Trixie chroot built with debootstrap — no Docker, no local machine, everything stays on the rescue server.
Step 7.1 - Create the Trixie chroot
```shell
proxmox-auto-install-assistant --version   # still 8.x on Bookworm -- hence the chroot
apt install -y debootstrap
debootstrap trixie /opt/trixie http://deb.debian.org/debian
```

This takes a few minutes. Once done, bind-mount the working directory so the ISO and answer file are visible inside the chroot:
```shell
mkdir -p /opt/trixie/tmp/work
mount --bind /tmp/work /opt/trixie/tmp/work
```

Step 7.2 - Install the tool and build the ISO
Enter the chroot:
```shell
chroot /opt/trixie bash
```

Inside the chroot, add the Proxmox Trixie repository:
```shell
apt update
apt install -y curl gnupg2
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve trixie pve-no-subscription" > /etc/apt/sources.list.d/pve.list
curl -fsSL https://enterprise.proxmox.com/debian/proxmox-release-trixie.gpg | gpg --dearmor > /etc/apt/trusted.gpg.d/proxmox-release-trixie.gpg
apt update
```

Install `proxmox-auto-install-assistant`:
```shell
apt install -y proxmox-auto-install-assistant
```

Validate the answer file:
```shell
proxmox-auto-install-assistant --version
proxmox-auto-install-assistant validate-answer /tmp/work/answer.toml
```

Fix any errors reported before proceeding. Then build the ISO:
```shell
proxmox-auto-install-assistant prepare-iso \
  --fetch-from iso \
  --answer-file /tmp/work/answer.toml \
  --output /tmp/work/proxmox-auto.iso \
  /tmp/work/proxmox-ve_9.1-1.iso
```

Exit the chroot and unmount the working directory:
```shell
exit
umount /opt/trixie/tmp/work
```

The modified ISO is now at `/tmp/work/proxmox-auto.iso`.
Step 8 - Run the Automated Installation via QEMU
Install QEMU on the rescue system:
```shell
apt install -y qemu-system-x86
```

Read the server's MAC address into a variable:
```shell
MAC=$(ip link show eth0 | awk '/ether/ {print $2}')
echo "Using MAC: $MAC"
```

Choose the QEMU command that matches your server's boot mode detected in Step 2.
Legacy boot
```shell
qemu-system-x86_64 \
  -enable-kvm \
  -machine q35 \
  -m 8096 \
  -drive file=/dev/nvme0n1,format=raw,if=none,id=drive0 \
  -device nvme,drive=drive0,serial=nvme0 \
  -drive file=/dev/nvme1n1,format=raw,if=none,id=drive1 \
  -device nvme,drive=drive1,serial=nvme1 \
  -cdrom /tmp/work/proxmox-auto.iso \
  -boot order=d \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0,mac=$MAC \
  -nographic \
  -serial mon:stdio
```

UEFI boot
Install the OVMF UEFI firmware package first:
```shell
apt install -y ovmf
```

Then start QEMU with the OVMF firmware attached:

```shell
qemu-system-x86_64 \
  -enable-kvm \
  -machine q35 \
  -m 8096 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/usr/share/OVMF/OVMF_VARS.fd \
  -drive file=/dev/nvme0n1,format=raw,if=none,id=drive0 \
  -device nvme,drive=drive0,serial=nvme0 \
  -drive file=/dev/nvme1n1,format=raw,if=none,id=drive1 \
  -device nvme,drive=drive1,serial=nvme1 \
  -cdrom /tmp/work/proxmox-auto.iso \
  -boot order=d \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0,mac=$MAC \
  -nographic \
  -serial mon:stdio
```

Key flags common to both variants:
| Flag | Purpose |
|---|---|
| `-machine q35` | Emulates the Intel ICH9 chipset with a PCIe bus instead of the default i440fx PCI bus, providing a more accurate modern hardware topology |
| `-drive ...,if=none` + `-device nvme,...` | Exposes the physical NVMe disks as NVMe devices inside the VM; the installer sees `nvme0n1`/`nvme1n1`, identical to bare metal |
| `-netdev user` + `mac=$MAC` | Virtual NIC with the real server's MAC; matches `filter.ID_NET_NAME_MAC` and the interface-name-pinning mapping in `answer.toml` |
| `-nographic -serial mon:stdio` | All output goes to your terminal; no VNC session needed |
| `-enable-kvm` | Hardware-accelerated virtualisation, which dramatically speeds up the install |
| `-m 8096` | Roughly 8 GB of RAM for the installer |
After QEMU starts, the installer boots and displays:
```
Loading Proxmox VE Automatic Installer ...
Loading initial ramdisk ...
```

After a brief timeout the unattended installation begins automatically — no input required. The process takes 1–2 minutes. When the installation completes, QEMU exits automatically; no final success message is printed.
If QEMU is still running after several minutes, something likely went wrong. Double-check your `answer.toml` and the QEMU command. To see what is happening inside the installer, stop QEMU and re-run it with VNC enabled for debugging (for example, replace `-nographic -serial mon:stdio` with `-vnc :0`).
Because reboot-mode = "power-off" is set, the VM powers off and QEMU exits automatically.
You can verify the installation was written to the physical disks:
```shell
lsblk -f
```

Expected output:
```
NAME        FSTYPE     FSVER LABEL UUID        FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1
├─nvme0n1p1
├─nvme0n1p2 vfat       FAT32       XXXX-XXXX
└─nvme0n1p3 zfs_member 5000  rpool <pool-uuid>
nvme1n1
├─nvme1n1p1
├─nvme1n1p2 vfat       FAT32       YYYY-YYYY
└─nvme1n1p3 zfs_member 5000  rpool <pool-uuid>
```

If your output appears different, wait about 30 minutes for the setup to complete.
Both disks should have the EFI partition (vfat) and the ZFS member partition (zfs_member) with the same pool name (rpool).
In the rescue system, `lsblk` may show `linux_raid_member` instead of `zfs_member` (typically leftover signatures from a previous software-RAID installation). However, if you can see `vfat`, you can confidently reboot as explained in the next step. After the reboot, you should see both `vfat` and `zfs_member`.
Click here to view an example `lsblk` output without `zfs_member`:

```
root@rescue ~ # lsblk -f
NAME        FSTYPE            FSVER LABEL    UUID     FSAVAIL FSUSE% MOUNTPOINTS
loop0       ext2              1.0            <uuid-1>
nvme0n1
├─nvme0n1p1 vfat              FAT32          <uuid-2>
│ └─md0     swap              1              <uuid-4>
├─nvme0n1p2
│ └─md1     ext3              1.0            <uuid-5>
└─nvme0n1p3 linux_raid_member 1.2   rescue:2 <uuid-6>
  └─md2     ext4              1.0            <uuid-7>
nvme1n1
├─nvme1n1p1 vfat              FAT32          <uuid-3>
│ └─md0     swap              1              <uuid-4>
├─nvme1n1p2
│ └─md1     ext3              1.0            <uuid-5>
└─nvme1n1p3 linux_raid_member 1.2   rescue:2 <uuid-6>
  └─md2     ext4              1.0            <uuid-7>
```
Step 9 - Reboot into Proxmox
With QEMU exited, the physical disks now contain a fully installed Proxmox VE system. Reboot the rescue system — do not re-activate Rescue Mode, just reboot:
```shell
reboot now
```

Wait about 60–90 seconds for the server to boot from the installed disk. Since Proxmox is a fresh installation, its SSH host key has changed; remove the old entry from your known hosts to avoid a connection error:
```shell
ssh-keygen -R <YOUR_SERVER_IP>
```

Then connect:
```shell
ssh root@<YOUR_SERVER_IP>
```

If the connection succeeds, Proxmox VE is running on bare metal.
Step 10 - Post-Install Configuration
Step 10.1 - Disable the Enterprise Repository
Proxmox enables its paid enterprise repository by default. Without a subscription this causes apt errors. Proxmox VE 9.x uses the DEB822 .sources format — remove the enterprise files and add the no-subscription repository:
Remove enterprise repos:
```shell
rm /etc/apt/sources.list.d/pve-enterprise.sources
rm /etc/apt/sources.list.d/ceph.sources
```

Add the no-subscription repo:
```shell
CODENAME=$(. /etc/os-release && echo "$VERSION_CODENAME")
cat > /etc/apt/sources.list.d/pve-no-sub.sources << EOF
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: ${CODENAME}
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
```

Update and upgrade:
```shell
apt update && apt dist-upgrade -y
```

Everything should now be running without errors:
```shell
systemctl status pve-cluster
systemctl status pvedaemon
systemctl status pveproxy
systemctl status pvestatd
```

Step 10.2 - Access the Web UI
Open a browser and navigate to:
```
https://<YOUR_SERVER_IP>:8006
```

Log in with username `root` and the password you set in `answer.toml`. Accept the self-signed certificate warning; you can replace the certificate later.
You now have a fully operational Proxmox VE node.
Conclusion
You have installed Proxmox VE on a Hetzner dedicated server without touching an interactive installer, without local tooling, and without KVM access. The key steps were:
- Using `proxmox-auto-install-assistant device-info` to discover the exact NIC and disk identifiers needed for the answer file.
- Pinning the management NIC by MAC address using `filter.ID_NET_NAME_MAC`, and assigning it a stable name via `[network.interface-name-pinning]` so the network configuration written during installation in QEMU remains valid on bare metal.
- Using a `debootstrap` Trixie chroot on the rescue server to run the newer version of `proxmox-auto-install-assistant` needed to build the ISO — no Docker, no local machine required.
- Running QEMU with NVMe device emulation so the installer sees the same disk names (`nvme0n1`, `nvme1n1`) as the bare-metal boot environment.
- Using `reboot-mode = "power-off"` so QEMU exits cleanly after installation.
- Detecting the server's boot mode upfront and using the matching QEMU command (Legacy or UEFI).
Tip: If you're automating multiple server deployments, add a `[post-installation-webhook]` section to `answer.toml`. Proxmox will send a POST request with the server's IP and SSH host keys immediately after a successful installation — useful for triggering downstream automation. See the Proxmox docs for details.
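A minimal sketch of such a section (the URL is a placeholder for your own endpoint):

```toml
[post-installation-webhook]
url = "https://example.com/hooks/proxmox-installed"
```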
Next Steps
The next tutorial in this series (coming soon) will build a virtual office network on this Proxmox node:
Building a Virtual Office Network on a Hetzner Dedicated Server with Proxmox, OPNsense and Open vSwitch (coming soon)
It will cover Open vSwitch, OPNsense as a virtual firewall/gateway, and an isolated private LAN — turning a single dedicated server into a full cloud studio.