Proxmox VE on Debian Jessie with ZFS - Hetzner

Has anyone got this working with 7.0? I just reinstalled it, and it also boots fine in the rescue QEMU, but it's not reachable after reboot; I can't even ping it.
6.4 with the same network settings worked fine.
 
Works fine on my side. Did you check whether the Ethernet device names changed after the upgrade to 7.0?
Mine did on a 10G Mellanox card (eth4 -> enp1s0). On the Hetzner system with an Intel I210, it did not.

E.g. have a look at "ip link" and /etc/network/interfaces. "ip link" must be executed from the PVE system, not from the rescue system; they run different kernels and therefore use different device naming.
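For example, something like this (run from the booted PVE, not the rescue kernel; the output is just illustrative):

Bash:
# names reported by the running kernel
ip -br link
# names the config actually refers to
grep -E 'iface|bridge-ports' /etc/network/interfaces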
 
@Drag_and_Drop same issue here, no renaming of the network device. It works fine with the 'eth0' config, but when switching to 'vmbr0' it isn't working. I tried setting the MAC address on the bridge to that of 'eth0', but haven't gotten it to work yet.
 
Got a remote IP KVM; the host booted fine, but it has no network.
The NIC is still named enp3s0.
Removed the bridge and gave enp3s0 my first IP directly, and that worked...
Guess it's related to the MAC address thing. Need to read a bit more documentation.
 
Code:
iface vmbr0 inet static
        address 144.76.XXX/32
        gateway 144.76.XXX
        bridge-ports enp3s0
        bridge_hw 00:00:00:00:00
        bridge-stp off
        bridge-fd 0
        pointopoint 144.76XX
        up ip route add 148.251.XX/32 dev vmbr0
        up ip route add 148.251.XX/32 dev vmbr0
        up ip route add 148.251.XX/32 dev vmbr0
        up ip route add 5.9.XX/32 dev vmbr0
 
Therefore use hwaddress ab:cd:ef:12:34:56 instead of bridge_hw - the documentation doesn't seem accurate in this case.
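For reference, this is roughly what the working stanza ends up looking like with hwaddress in place of bridge_hw (the addresses and the MAC here are placeholders, keep your own values):

Code:
auto vmbr0
iface vmbr0 inet static
        address 144.76.XXX.XXX/32
        gateway 144.76.XXX.XXX
        pointopoint 144.76.XXX.XXX
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
        hwaddress ab:cd:ef:12:34:56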
 
As a follow-up to my post #73 in this thread from 2019, where I shared steps for installing Proxmox 5.4 on Hetzner:

Last July (2023-07-08) I performed the following steps to create a Proxmox 8.x server on Hetzner, using a ZFS mirror as the rootfs / rpool.

FWIW, posting my notes here for future reference. It was on my TODO list to share this.

Bash:
### Deployment of blank Hetzner root/dedicated node with Proxmox 8.0 iso

# A freshly ordered node normally comes booted into the rescue system, note the generated password and login

# ELSE In Hetzner control panel - order rescue system with Linux 64 Bit
# note the generated root password
# if you had to order the rescue system, reboot the node, wait a little, then log in as root@nodeip with the generated root password

# when you login, copy the Hardware and Network data for reference.

# In this case the most important bits for the install
# The disks we will use for the ZFS rpool
#   Disk /dev/sda: 480 GB (=> 447 GiB) doesn't contain a valid partition table
#   Disk /dev/sdb: 480 GB (=> 447 GiB) doesn't contain a valid partition table

#         MAC:  4c:52:XX:XX:XX:XX
#         IP:   138.XXX.XXX.XXX

root@rescue ~ # ip a
# take note of the ip subnet, in my case /26

# take note of the default gateway
root@rescue ~ # ip r
default via 138.XXX.XXX.XXX dev eth0


# get pmox iso image, replace $URL with a valid pmox ISO installer link
curl --globoff -L -O $URL
# verify $download file name etc, place image in /proxmox.iso
mv -iv $download /proxmox.iso
# checksum the iso and verify with vendors sums
sha256sum /proxmox.iso

# try to get a list of predictable network interface names, note them for later
root@rescue ~ # udevadm test /sys/class/net/eth0 2>/dev/null |grep ID_NET_NAME_
ID_NET_NAME_MAC=enx4c52620e071e
ID_NET_NAME_PATH=enp0s31f6

# start a vm with the pmox installer and vnc
# man page reference https://manpages.debian.org/stretch/qemu-system-x86/qemu-system-x86_64.1.en.html
# -boot once=d = boot from the cdrom iso one time, next reboot will follow normal boot seq
# make sure to replace -smp -m and -drive options with ones matching your hardware

# !!!⚠⚠⚠ ACHTUNG ⚠⚠⚠!!! this will DESTROY the partition tables and DATA on the specified drives

qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
-drive file=/dev/sda,format=raw,cache=none,index=0,media=disk \
-drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk \
-vnc :0 -cdrom /proxmox.iso -boot once=d

# Connect VNC to your host address:5900
# https://www.tightvnc.com/download.php
# Download TightVNC Java Viewer (I use version 2.8.3 but later version probably also work fine)

# install pmox via VNC GUI wizard
# GUI installer showed ens3 for the NIC, which is due to running under qemu, ignore it

# reboot vm at the end of the install, it will boot grub, let it boot normally

# login to the new pve - edit network interfaces
# !!! ACHTUNG !!! check/update iface names and bridge ports
# as above my interface was predicted as enp0s31f6, this worked as hoped
# replace $EDITOR with your preferred editor, but nano might be the only one pre-installed right now
$EDITOR /etc/network/interfaces
# shutdown vm
shutdown -h now

# reboot out of the rescue image so pmox boots on the physical hardware
shutdown -r now
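
# optional sanity checks once the node is back up on bare metal (adjust as needed)
# rpool is the pool name the PVE installer creates by default for a ZFS install
zpool status rpool
ip -br link
ip -br address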

2. After the installation of Proxmox is finished and you hit the reboot button,
stop the QEMU and DO NOT LET IT BOOT from disk for the first time,
otherwise you will end up with QEMU all over the place in dmidecode and hwinfo, and with /dev/sdX devices instead of /dev/nvme* (in case you use NVMe).
Stop the QEMU, reboot out of the rescue session and let the server boot, even if it might not be reachable via IP.
Just let it reboot, sit there for 10 minutes to be safe, and then you can start the rescue session via QEMU again, now without the cdrom.
Then, if needed, change the network device name in /etc/network/interfaces. I looked up /var/log/messages from the earlier non-QEMU reboot and found the real network name that needs to be used (eno1 in my case).

Regarding this point from @CJnrLUaY9, I cannot say I've experienced this issue with "QEMU all over the place in dmidecode" OR "/dev/sdX devices instead of /dev/nvme*". I suspect that if this does happen it's only transient while booting Proxmox via qemu? The choice is yours in the end.
Personally, my workflow is to let qemu boot Proxmox and sort out any post-install topics as mentioned in my notes.
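If you do need to dig out the real NIC name after a bare-metal boot, something along these lines should work from the PVE booted via qemu again (assuming the journal persists across reboots; otherwise fall back to /var/log/messages if rsyslog is installed):

Bash:
# kernel log of the previous (bare-metal) boot, look for the rename message
journalctl --list-boots
journalctl -b -1 -k | grep -i "renamed from"
# classic syslog variant
grep -i "renamed from" /var/log/messages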

Bridge MAC address
It's important that the main bridge interface (usually vmbr0) clones the MAC address of the primary NIC. This should be handled gracefully/automatically by ifupdown2, which ships with Proxmox 8.x.

If this is not set up correctly, your host will either have no network AND/OR will be blocked for "abuse" by Hetzner.

Hetzner said:
If the source MAC address is not recognized by the router, the traffic will be flagged as "Abuse" and might lead to the server being blocked.

You can check this with ip link and ensure that the MACs are as expected. If not, you'll have to use the hwaddress directive in the vmbr0 config to specify that the bridge uses the same MAC as the primary NIC. See this related forum post, and Hetzner's Proxmox VE guide.
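A quick way to compare the two (the interface names are the ones from my node, adjust to yours):

Bash:
# the bridge should report the same MAC as the physical NIC it encloses
ip -br link show dev enp0s31f6
ip -br link show dev vmbr0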

Sample bridge + NAT'ed network config
This lives in the node's /etc/network/interfaces.

Note that Proxmox best practice is to let the GUI/Proxmox manage this file, and to utilise source or source-directory directives for manual/custom network config (a short sketch of that follows the warning text below).

Here is the warning text from my node:
Bash:
head -n11 /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
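
If you do want to keep custom stanzas out of the PVE-managed file, the sourced-file approach would look roughly like this (the directory and file name are just examples):

Code:
# appended at the end of /etc/network/interfaces
source /etc/network/interfaces.d/*
# custom stanzas then live in e.g. /etc/network/interfaces.d/nat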

I don't personally follow that practice today because I don't rely on the Proxmox GUI to manage the network config.
AFAIK the Proxmox GUI doesn't offer setting up or managing NAT/routing and port forwarding.

Code:
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual

auto vmbr0
iface vmbr0 inet static
        address 138.XXX.XXX.XXX/26
        gateway 138.XXX.XXX.XXX
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
        hwaddress 4c:52:xx:xx:xx:xx

auto vmbr1
iface vmbr1 inet static
        address 192.168.XXX.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        # forwarding and NAT
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '192.168.XXX.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.XXX.0/24' -o vmbr0 -j MASQUERADE

        # sample ip forwarding rules
        # CRITICAL / ACHTUNG! forwarded traffic will bypass the host/hypervisor firewall and require filtering on the destination nodes
        post-up   iptables -t nat -A PREROUTING -i vmbr0 -p tcp -d 138.201.31.114/32 --dport 80 -j DNAT --to 192.168.XXX.100:80
        post-up   iptables -t nat -A PREROUTING -i vmbr0 -p tcp -d 138.201.31.114/32 --dport 443 -j DNAT --to 192.168.XXX.100:443
        post-up   iptables -t nat -A PREROUTING -i vmbr0 -p tcp -d 138.201.31.114/32 --dport 8443 -j DNAT --to 192.168.XXX.55:443
        post-up   iptables -t nat -A PREROUTING -i vmbr0 -p tcp -d 138.201.31.114/32 --dport 8080 -j DNAT --to 192.168.XXX.55:80
        post-down echo 0 > /proc/sys/net/ipv4/ip_forward
        post-down iptables -t nat -F PREROUTING
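
After reloading the config (ifreload -a, or a reboot), a couple of checks I'd run to confirm that forwarding and the NAT rules actually landed:

Bash:
# forwarding should report 1 while vmbr1 is up
sysctl net.ipv4.ip_forward
# the MASQUERADE and DNAT rules should be listed with packet counters
iptables -t nat -L POSTROUTING -n -v
iptables -t nat -L PREROUTING -n -v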

Securing sshd and filtering traffic
You may wish to review my notes around secure defaults for Debian sshd_config and MFA.

This includes a section about using iptables ipsets / nftables sets to whitelist certain ISPs for your server's admin ports, or other sensitive ports.
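As a rough illustration of the ipset idea (the set name, CIDR and port are placeholders; persistence across reboots is covered in the linked notes):

Bash:
# allow sshd only from a named set of source networks
ipset create admin_allow hash:net
ipset add admin_allow 203.0.113.0/24        # example ISP range
iptables -A INPUT -p tcp --dport 22 -m set ! --match-set admin_allow src -j DROP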

Hetzner provides the ability to configure a set of stateless firewall rules per root server. I utilise this to drop IPv6 traffic (so I can set this up securely later) and any IPv4 traffic that does not match the IPv4 (destination) IP of the server. I noticed from watching tcpdump on the server that there was a lot of traffic flying around the network that was not actually destined for the IP of the server. Adding these rules made things A LOT quieter.
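For reference, this is roughly the tcpdump invocation I mean (replace the interface name and the masked IP with your own):

Bash:
# show traffic arriving on the uplink that is not addressed to this host
tcpdump -ni enp0s31f6 not host 138.XXX.XXX.XXX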

(Screenshot: the resulting Hetzner firewall rule set)
 