Hi.
This setup should work:
- boot your system to Rescue Console
- figure out the real name of the network adapter with `ip a` (look for the `altname` entries in the output)
- Download Proxmox ISO: wget https://enterprise.proxmox.com/iso/proxmox-ve_8.4-1.iso
- Start qemu session:
With at least two NVMe devices:
qemu-system-x86_64 -machine pc-q35-5.2 -enable-kvm -smp 4 -m 4096 -boot once=d -cdrom ./proxmox-ve_8.4-1.iso -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio -vnc :1
With at least two SSD or HDD devices:
qemu-system-x86_64 -machine pc-q35-5.2 -enable-kvm -smp 4 -m 4096 -boot once=d -cdrom ./proxmox-ve_8.4-1.iso -drive file=/dev/sda,format=raw,media=disk,if=virtio -drive file=/dev/sdb,format=raw,media=disk,if=virtio -vnc :1
- Attach to the VM with a VNC viewer via the main IP of your server on display :1 (x.x.x.x:1, which is TCP port 5901)
- Install Proxmox VE (for example with ZFS mirror)
- After rebooting, log in to the system (still inside the VM)
- edit /etc/network/interfaces
and change the name of the network device to the altname you noted earlier, and set the IP address and gateway (both shown in Robot)
- shutdown the VM
- reboot the server
- login to PVE web interface
- configure the DNS servers
- configure the firewall to prevent messages from BSI/Hetzner about open NFS ports
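A minimal sketch for that last point with plain iptables, assuming vmbr0 is your uplink bridge and NFS is only needed internally (the Proxmox firewall GUI or the Robot firewall can achieve the same):
Bash:
# Block external access to rpcbind (111) and NFS (2049)
iptables -A INPUT -i vmbr0 -p tcp --dport 111 -j DROP
iptables -A INPUT -i vmbr0 -p udp --dport 111 -j DROP
iptables -A INPUT -i vmbr0 -p tcp --dport 2049 -j DROP
iptables -A INPUT -i vmbr0 -p udp --dport 2049 -j DROP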
> configure the firewall to prevent messages from BSI/Hetzner about open NFS ports
I'm not getting warnings about open NFS ports, but I am getting a ton of abuse messages about invalid MAC addresses.
Can you filter those out?
Installing the system was fine.
Here is my complete story to help the next user. It's a mission, and there is a lot of contradictory information. With luck, it might just work the first time. There are scripts to do it, there are manuals to do it, and you can read all the pages in this forum and still be confused.
- I have an AX162-S server
- Accessing rescue is easy
- I have four NVMe drives because I wanted ZFS RAID10, so I simply attached all four of them
- I tried without EFI first, but that didn't work, so on the second attempt I added the switch `-bios /usr/share/ovmf/OVMF.fd`
- I didn't have to install anything on Debian rescue for the OVMF switch to work
Running `qemu-system-x86_64` the first time didn't let me edit my interfaces file, which was empty; I had to run it again without the CD mounted to edit the interfaces. Rebooting was interesting and pretty much involved CONTROL-C.
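For reference, that second run is just the install command from above minus the CD options, roughly like this (with `-bios` added for EFI, one `-drive` per disk, device names as in your setup):
Bash:
qemu-system-x86_64 -machine pc-q35-5.2 -enable-kvm -smp 4 -m 4096 -bios /usr/share/ovmf/OVMF.fd -drive file=/dev/nvme0n1,format=raw,media=disk,if=virtio -drive file=/dev/nvme1n1,format=raw,media=disk,if=virtio -vnc :1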
When I finally got it booted, after about 5 tries with 5 different configurations, I had approximately this standard bridged network config:
Bash:
root@hv:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

# Physical interface - no IP assigned
iface enp193s0f0np0 inet manual

# Main bridge for host IP only
auto vmbr0
iface vmbr0 inet static
        address a.b.c.d/26
        gateway a.b.c.e
        bridge-ports enp193s0f0np0
        bridge-stp off
        bridge-fd 0
Elated I was! But the job wasn't complete, as I have to migrate 6 VMs away from DigitalOcean. I ordered a /29, and that's when the fun started...
First of all, the /29 had a weird gateway: not a router, but the first machine I ordered! WTF? (As the FAQ below explains, additional subnets are routed onto one of your server's IPs, so the "gateway" is the server itself.) Anyway, I tried setting up the first VM, and sure as hell, it actually worked. Still confused, I carried on and configured the other 5.
Then these warnings:
Code:
[AbuseID:10xxxx:xx]: MAC-Errors: MAC-Report for #blablabla (a.b.c.d)
We have detected that your server is using different MAC addresses from those allowed by your Robot account.
Unallowed MACs:
- mac 1
- mac 2
Huh? What did I abuse? I just signed up! This is weird onboarding.
So you read the FAQ, and there's new studying to do:
---------------
Reasons you received an abuse email regarding MAC-Errors
- Bridged setup
You should use this configuration with additional single IPs which have their own virtual MAC address. The virtual NICs of your VM can be in the same virtual switch/bridge as your physical NIC. You can create virtual MACs for your additional single IP via Robot (Server -> IP).
The maximum number of single IPs is 6 per server.
- Routed setup
You should use this configuration with additional subnets. It is not possible to generate virtual MACs for your additional subnets, because subnets are routed onto one of your server's IPs. Therefore, you should also route these IPs within your server. Please make sure to place these VMs in a different virtual switch/bridge than your physical NIC.
---------------
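For the bridged case, once Robot has issued a virtual MAC for an additional single IP, it can be pinned to the VM's NIC with qm (the VM ID and MAC below are placeholders):
Code:
qm set 100 --net0 virtio=00:50:56:00:XX:XX,bridge=vmbr0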
After a lot of extra work across Debian, AlmaLinux and Ubuntu, I routed the new IPs to private NAT interfaces. That didn't work: the abuse messages kept coming. Some even hinted the issue was resolved, only to be followed by new abuse warnings. I had just onboarded, and being told your new server is going to be switched off, right at the start of a migration, is a very poor experience.
So then I routed the IPs as /32s. I'm not sure if that's going to work. Hetzner has sent me 10 automated messages today, to which I've responded telling them I don't know what to do. It's a complete mess.
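For anyone trying the same, here is a host-side sketch of the /32 approach. This is my understanding, not a verified recipe: vmbr1 has no physical ports, the VM IPs from the /29 are routed individually, and all names and IPs are placeholders:
Code:
auto vmbr1
iface vmbr1 inet static
        address a.b.c.d/32                          # reuse the host's main IP on the VM-facing bridge
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up ip route add w.x.y.z/32 dev vmbr1   # repeat for each VM IP from the /29
Inside each VM, the host's main IP then acts as an on-link gateway.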
It seems that to get it working you'll need a lot of "post-up" commands. And on Ubuntu you can't use `gateway4` in netplan anymore; you need something like this:
Code:
routes:
  - to: 0.0.0.0/0
    via: a.b.c.d
    on-link: true
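For context, a complete minimal netplan file around that snippet might look like this. The interface name, addresses, and the gateway (the host's main IP) are all placeholders, and the Hetzner resolvers are an assumption:
Code:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - w.x.y.z/32            # the VM's IP from the /29
      routes:
        - to: 0.0.0.0/0
          via: a.b.c.d          # the host's main IP
          on-link: true         # gateway is outside our /32, reach it directly
      nameservers:
        addresses: [185.12.64.1, 185.12.64.2]   # Hetzner recursive resolvers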
Conclusion:
- To those just wanting to bring up Proxmox at Hetzner: it's possible. You'll succeed.
- To those running more than a few VM IPs: good luck. It's a whole-day mission.