Introduction
Proxmox Virtual Environment (Proxmox VE) is an open-source virtualization platform with support for Kernel-based Virtual Machines (KVM) and Linux Containers (LXC). Proxmox provides a web-based management interface, CLI tools, and a REST API, as well as extensive documentation, including the detailed Proxmox VE Administration Guide, manual pages, and an API viewer.
This tutorial shows how to install Proxmox VE 8 on Debian 12 and configure IP addresses on virtual machines.
Before the Installation
First, some suggestions and advice before starting to set up the new environment:
- Will you only use Linux machines? Then, under certain circumstances, LXC may be sufficient.
- Should you use LXC or KVM? Both have advantages as well as disadvantages.
A thoughtful decision and good research up front can save you work and trouble later.
- Although KVM is not as performant as LXC, it provides complete hardware virtualization and supports all common operating systems (including Windows).
Converting the virtual disks to formats such as VMDK is simple.
Step 1 - Installation
Step 1.1 - The Basic Installation on a Hetzner Server
- Boot the server into the Rescue System.
- Run installimage and select the Debian 12 (Bookworm) image to install.
- Configure the RAID level, hostname, and partitioning.
- Save the configuration and, after the installation completes, reboot the server.
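For orientation, the choices above end up in the installimage configuration file that opens in the editor. A minimal sketch could look like the following; the drive names, RAID level, partition sizes, and hostname are only examples and must be adjusted to your hardware:
# installimage configuration (excerpt, example values)
DRIVE1 /dev/sda
DRIVE2 /dev/sdb
SWRAID 1          # Enable software RAID
SWRAIDLEVEL 1     # RAID 1 (mirror)
HOSTNAME proxmox.example.com
PART swap  swap  8G
PART /boot ext3  1024M
PART /     ext4  all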
Step 1.2 - Adjust the APT Sources
- Add the GPG key and adjust the APT sources:
curl -o /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg http://download.proxmox.com/debian/proxmox-release-bookworm.gpg
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
- If you do not have a Proxmox VE Enterprise subscription, you can disable the Proxmox VE Enterprise repository to avoid authorization errors during apt update. The following command overwrites the repository file with a commented-out entry:
echo '# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise' > /etc/apt/sources.list.d/pve-enterprise.list
- Update the packages:
apt update       # Update package lists
apt full-upgrade # Update system
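To confirm that the pve-no-subscription repository is now in use, you can check which version of the proxmox-ve metapackage apt would install; the candidate should come from download.proxmox.com (the exact version will differ):
apt policy proxmox-ve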
Step 1.3 - Install Proxmox VE
- Install Proxmox VE
apt install proxmox-ve
- Restart the server
- After the restart, the Proxmox kernel should be loaded. Run the following command to see the kernel release:
uname -r
The output should contain pve, for example: 6.5.13-1-pve
You can access the web interface by navigating to https://<main address>:8006.
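If the web interface does not load, a quick sanity check is to verify that the API daemon and the web proxy are running:
systemctl status pvedaemon pveproxy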
Step 2 - Network Configuration
First of all, it is important to decide which virtualization solution (LXC and/or KVM) and which variant (bridged/routed) to use.
If you are not familiar with these technologies, you can find a simple comparison in the pros-and-cons lists below.
- Virtualization solution

LXC Option

Advantages:
- Lightweight, fast, lower RAM requirements.
- Fast startup and shutdown times, enhancing the manageability and scalability of containerized applications.
- Allows more containers to run simultaneously on the same hardware compared to virtual machines.

Disadvantages:
- Uses the kernel of the host system.
- You can only use Linux distributions.

KVM Option

Advantages:
- Each VM operates with its own kernel, enhancing security and stability.
- You can install almost any operating system.
- Can take advantage of hardware virtualization features (e.g., Intel VT-x and AMD-V) to improve performance.

Disadvantages:
- VMs require their own set of resources, including a full copy of the operating system, leading to increased consumption of CPU, RAM, and storage.
- Variant

Routed Network Option

Advantages:
- You can use multiple single IP addresses and subnets on one VM.

Disadvantages:
- Requires extra routes on the host.
- Requires a point-to-point setup.

Bridged Network Option

Advantages:
- The host is transparent and not part of the routing.
- VMs can directly communicate with the gateway of the assigned IP.
- VMs can get their single IPv4 address from Hetzner's DHCP server.

Disadvantages:
- VMs may only communicate via the MAC address assigned to the respective IP address.
- You have to request that MAC address in Hetzner Robot.
- You can use IP addresses from additional subnets only on the host system or on a single VM with a single IP (if the subnet is routed to it); this applies to both IPv4 and IPv6 subnets.
Enable IP Forwarding on the Host
With a routed setup, the bridges (vmbr0, for example) are not connected to the physical interface, so you need to activate IP forwarding on the host system. Please note that packet forwarding between network interfaces is disabled in the default Hetzner installation. To make IP forwarding persistent across reboots, use the following commands:
- For IPv4 and IPv6:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sed -i 's/#net.ipv6.conf.all.forwarding=1/net.ipv6.conf.all.forwarding=1/' /etc/sysctl.conf
- Apply the changes:
sysctl -p
- Check if the forwarding is active:
sysctl net.ipv4.ip_forward
sysctl net.ipv6.conf.all.forwarding
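If forwarding is active, both commands report a value of 1:
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1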
Add the network configuration
Choose one of the variants below, "Routed Setup" or "Bridged Setup". In addition, the following options are covered later in this step:
- vSwitch with a public subnet » Assign IP addresses to VMs/containers from a vSwitch with a public subnet.
- Hetzner Cloud Network » Establish a connection between the virtual machines/LXC containers in Proxmox and a Hetzner Cloud Network.
- Masquerading (NAT) » Enable VMs with private IPs to have internet access through the host's public IP.
Routed Setup
In a routed configuration, the host system's bridge IP address serves as the gateway by default, and a manual route to the virtual machine is required when the extra IP does not belong to the same subnet. That's why we will set the mask to /32: devices such as vmbr0 inherently route traffic only for IP addresses that fall within the same subnet. By using a /32 mask, each additional IP is treated as its own unique network entity, which ensures that traffic reaches its correct destination even if the additional IP is not from the same subnet.
- Host system Routed

We will use the following example addresses to represent a real-use-case scenario:
- Main IP: 198.51.100.10/24
- Gateway of the main IP: 198.51.100.1/24
- Additional subnet: 203.0.113.0/24
- Additional single IP: 192.0.2.20/24
- IPv6: 2001:DB8::/64
# /etc/network/interfaces
auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp0s31f6
iface enp0s31f6 inet static
address 198.51.100.10/32 # Main IP
gateway 198.51.100.1     # Gateway

# IPv6 for the main interface
iface enp0s31f6 inet6 static
address 2001:db8::2/128 # /128 on the ethernet interface, /64 on the bridge (to route all other addresses via the bridge)
gateway fe80::1

# Bridge for single IPs (foreign and same subnet)
auto vmbr0
iface vmbr0 inet static
address 198.51.100.10/32 # Main IP
bridge-ports none
bridge-stp off
bridge-fd 0
up ip route add 192.0.2.20/32 dev vmbr0    # Foreign subnet
up ip route add 198.51.100.30/32 dev vmbr0 # Additional IP from the same subnet

# IPv6 for the bridge
iface vmbr0 inet6 static
address 2001:db8::3/64 # Should not be the same address as the main interface

# Additional subnet 203.0.113.0/24
auto vmbr1
iface vmbr1 inet static
address 203.0.113.1/24 # Set one usable IP from the subnet range
bridge-ports none
bridge-stp off
bridge-fd 0
- Guest system Routed (Debian 12)

The IP of the bridge on the host system is always used as the gateway: for individual additional IPs this is the main IP (on vmbr0), and for IPs from the additional subnet it is the subnet IP configured on the host (on vmbr1).

Guest configuration:
- With an additional IP from the same subnet:

# /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address 198.51.100.30/32 # Additional IP
gateway 198.51.100.10    # Main IP

# IPv6
iface ens18 inet6 static
address 2001:DB8::4 # IPv6 address from the subnet
netmask 64          # /64
gateway 2001:DB8::3 # Bridge address
- With a foreign additional IP:

# /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address 192.0.2.20/32 # Additional IP from a foreign subnet
gateway 198.51.100.10 # Main IP
- With an IP assigned from the additional subnet:

# /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address 203.0.113.10/24 # Subnet IP
gateway 203.0.113.1     # Gateway is the IP of the bridge (vmbr1)
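After adjusting /etc/network/interfaces, the configuration still has to be applied and verified. A minimal sketch, assuming the routed example addresses above and that ifupdown2 is installed (the default on current Proxmox VE; otherwise reboot or restart networking):
# On the host: reload the network configuration and inspect the routes
ifreload -a
ip route show

# In the guest with the additional IP 198.51.100.30:
ip route show             # The default route should point to the host's main IP
ping -c 3 198.51.100.10   # Host/gateway reachable?
ping -c 3 9.9.9.9         # Internet reachable via the host?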
Bridged Setup
When setting up Proxmox in bridged mode, it is absolutely crucial to request virtual MAC addresses for each IP address through the Robot Panel. In this mode, the host acts as a transparent bridge and is not part of the routing path. This means that packets arriving at the router will have the source MAC address of the virtual machines. If the source MAC address is not recognized by the router, the traffic will be flagged as "Abuse" and might lead to the server being blocked.
- Host system Bridged

We configure only the main IP of the server here. The additional IPs will be configured in the guest systems.

# /etc/network/interfaces
auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet manual

auto vmbr0
iface vmbr0 inet static
address 198.51.100.10/32 # Main IP
gateway 198.51.100.1     # Gateway
bridge-ports enp0s31f6
bridge-stp off
bridge-fd 0
- Guest system Bridged (Debian 12)

Here we use the gateway of the additional IP, or, if the additional IP is within the same subnet as the main IP, the gateway of the main IP.

Static configuration:

# /etc/network/interfaces
auto ens18
iface ens18 inet static
address 192.0.2.20/32 # Additional IP
gateway 192.0.2.1     # Additional gateway IP
In bridged mode, DHCP can also be used to automatically configure the network settings. However, it is essential to configure the virtual machine to use the virtual MAC address obtained from the Robot Panel for the specific IP configuration.
You can also adjust this manually in the virtual machine itself, in /etc/network/interfaces:

# /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet dhcp
hwaddress ether aa:bb:cc:dd:ee:ff # The MAC address is just an example
You can do the same thing for LXC containers via the Proxmox GUI. Simply click on the container, navigate to "Network", and then click on the bridge. Select DHCP and add the correct MAC address from the Robot Panel (in our case the example would be aa:bb:cc:dd:ee:ff).
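Alternatively, the MAC address can be set from the host's command line. A small sketch, assuming VM ID 100 and container ID 101 (the IDs and the MAC address are placeholders):
# KVM guest: attach net0 to vmbr0 with the virtual MAC from the Robot Panel
qm set 100 --net0 virtio=aa:bb:cc:dd:ee:ff,bridge=vmbr0

# LXC container: same idea, with DHCP enabled on eth0
pct set 101 --net0 name=eth0,bridge=vmbr0,hwaddr=aa:bb:cc:dd:ee:ff,ip=dhcp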
vSwitch with a public subnet
Proxmox can also be set up to link directly with a Hetzner vSwitch that manages routing for a public subnet, allowing IP addresses from that subnet to be assigned directly to VMs and containers. This has to be a bridged setup, and a virtual (VLAN) subinterface has to be created so that the packets can reach the vSwitch. The bridge does not need to be VLAN-aware, and no VLAN configuration has to be made within the VMs or LXC containers; the tagging is done by the subinterface, in our example enp0s31f6.4009. Every packet that goes through this interface will be tagged with the appropriate VLAN ID. (Please note that this configuration is meant for the LXC containers/VMs. If you would like the host itself to be able to communicate with the vSwitch, you will have to create an additional routing table.) In this example we will use 203.0.113.0/24 as the subnet.
- Host system configuration:
# /etc/network/interfaces
auto enp0s31f6.4009
iface enp0s31f6.4009 inet manual

auto vmbr4009
iface vmbr4009 inet static
bridge-ports enp0s31f6.4009
bridge-stp off
bridge-fd 0
mtu 1400
#vSwitch subnet 203.0.113.0/24
- Guest system configuration:
# /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address 203.0.113.2/24 # Subnet IP from the vSwitch
gateway 203.0.113.1    # vSwitch gateway
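Traffic over a Hetzner vSwitch is limited to an MTU of 1400, which is why the bridge on the host sets mtu 1400. Depending on the guest operating system, you may also need to lower the MTU inside the VM/container, for example by adding one line to the ens18 stanza above:
mtu 1400 # Match the vSwitch MTU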
Hetzner Cloud Network
It is also possible to connect the virtual machines/LXC containers in Proxmox with a Hetzner Cloud Network. For the sake of this example, let's assume that you have already set up your Cloud Network and have added the vSwitch with the following configuration:
- 192.168.0.0/16 - Your cloud network (parent network)
- 192.168.0.0/24 - Cloud server subnet
- 192.168.1.0/24 - vSwitch (#12345)
Similarly to the example before, we will first create a virtual interface and define the VLAN ID, in this case enp0s31f6.4000. We will also have to add the route to the Cloud Network 192.168.0.0/16 via the vSwitch. Please note that adding a route to the Hetzner Cloud Network and assigning an IP address from the private subnet range of the vSwitch to the bridge is only necessary if the Proxmox host itself is to communicate with the Hetzner Cloud Network.
- Host system configuration:
# /etc/network/interfaces
auto enp0s31f6.4000
iface enp0s31f6.4000 inet manual

auto vmbr4000
iface vmbr4000 inet static
address 192.168.1.10/24
bridge-ports enp0s31f6.4000
bridge-stp off
bridge-fd 0
mtu 1400
up ip route add 192.168.0.0/16 via 192.168.1.1 dev vmbr4000
#vSwitch-to-cloud private subnet 192.168.1.0/24
- Guest system configuration:
# /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address 192.168.1.2/24
gateway 192.168.1.1
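To verify the connection from the guest, you can ping the vSwitch gateway and, assuming a cloud server exists at 192.168.0.2 (an example address), a server in the cloud subnet:
ping -c 3 192.168.1.1   # vSwitch gateway
ping -c 3 192.168.0.2   # Example cloud server in the parent network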
Masquerading (NAT)
It is also possible to expose the virtual machines/LXC containers to the internet without having any further public additional IP addresses. Hetzner enforces a strict IP/MAC binding, which means that improperly routed traffic will be flagged as abuse and can lead to the server being blocked. To avoid this problem, we can route the traffic from the LXC containers/VMs through the main interface of the host, which ensures that the MAC address is the same across all network packets. Masquerading enables virtual machines with private IP addresses to have internet access through the host's public IP address for outbound communications. iptables modifies each outgoing packet so that it appears to come from the host, and incoming replies are adjusted so they can be directed back to the original sender.
# /etc/network/interfaces
auto lo
iface lo inet loopback
iface lo inet6 loopback
auto enp0s31f6
iface enp0s31f6 inet static
address 198.51.100.10/24
gateway 198.51.100.1
#post-up iptables -t nat -A PREROUTING -i enp0s31f6 -p tcp -m multiport ! --dports 22,8006 -j DNAT --to 172.16.16.2
#post-down iptables -t nat -D PREROUTING -i enp0s31f6 -p tcp -m multiport ! --dports 22,8006 -j DNAT --to 172.16.16.2
auto vmbr4
iface vmbr4 inet static
address 172.16.16.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up iptables -t nat -A POSTROUTING -s '172.16.16.0/24' -o enp0s31f6 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '172.16.16.0/24' -o enp0s31f6 -j MASQUERADE
#NAT/Masq
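A guest attached to vmbr4 then uses a private address from 172.16.16.0/24 with the bridge IP as its gateway. A minimal sketch for such a guest (the resolver is only a suggestion; any reachable DNS server works):
# /etc/network/interfaces (guest behind NAT)
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address 172.16.16.2/24 # Private IP behind the host
gateway 172.16.16.1    # IP of vmbr4 on the host
# DNS: set a resolver in the guest's /etc/resolv.conf, e.g. "nameserver 9.9.9.9"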
Please note that the rules below are not necessary for the LXC containers/VMs to have internet access. They are optional and serve the purpose of external accessibility to a specific VM/container: they redirect all incoming TCP traffic, except on ports 22 and 8006 (22 is excluded so that you can still connect to the Proxmox host via SSH, and 8006 is the port for the web interface), to a designated virtual machine at 172.16.16.2 within the subnet. This is a common setup for router VMs like pfSense, where all incoming traffic is redirected to the router VM and then routed accordingly.
post-up iptables -t nat -A PREROUTING -i enp0s31f6 -p tcp -m multiport ! --dports 22,8006 -j DNAT --to 172.16.16.2
post-down iptables -t nat -D PREROUTING -i enp0s31f6 -p tcp -m multiport ! --dports 22,8006 -j DNAT --to 172.16.16.2
Step 3 - Security
The web interface is protected by two different authentication methods:
- Proxmox VE standard authentication (Proxmox proprietary authentication)
- Linux PAM standard authentication
Nevertheless, additional protection measures are recommended to protect against the exploitation of security vulnerabilities and other attacks.
Here are several possibilities: enable two-factor authentication for the web interface, restrict access to the management ports (SSH on 22 and the web interface on 8006) with a firewall, and use a tool such as Fail2ban to block repeated failed login attempts.
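As one example, a minimal sketch that limits access to the web interface to a single trusted address using iptables (203.0.113.50 is a placeholder for your own client IP; the built-in Proxmox VE firewall can achieve the same):
# Allow the web interface only from a trusted address and drop everything else on port 8006
iptables -A INPUT -p tcp --dport 8006 -s 203.0.113.50 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -j DROP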
Conclusion
By now, you should have installed and configured Proxmox VE as a virtualization platform on your server.
Proxmox VE also supports clustering. Please find details in the tutorial "Setting up your own public cloud with Proxmox on Hetzner bare metal".