As I have had to fight that beast as well and finally won the battle, I thought I'd share my findings.
First, as laid out previously, the easiest way to get it up and running at Hetzner is to ask the staff to plug in a USB stick with Proxmox 5.1 on it and then access your server via a LARA console.
For me, this didn't work out well, because I happen to have a 4K monitor and the LARA console was so tiny that it was practically unreadable.
I also tried to provide the image through the LARA console, but it only supports images from SAMBA/CIFS shares, which I don't have available.
So I decided to take the "difficult" approach and managed to install Proxmox 5.1 as described below:
- reboot your server into rescue mode (Linux 64-bit)
- download the latest Proxmox ISO from downloads.proxmox.com and store it as /proxmox.iso (see the example below)
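For example, something along these lines from within the rescue system; the exact link and file name change with every release, so copy the current 5.1 ISO URL from the download page:
Code:
# example only: replace the URL with the current Proxmox VE 5.1 ISO link
# taken from the Proxmox download page
wget -O /proxmox.iso "https://<link-to-current-proxmox-ve-5.1-iso>"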
- start the installation using QEMU
Code:
qemu-system-x86_64 -enable-kvm -m 1024 -hda /dev/sda -hdb /dev/sdb -cdrom /proxmox.iso -boot d -vnc :0
- the server I'm installing on has old SATA drives, so they are available as /dev/sda and /dev/sdb; if you have different devices (like NVMe storage), you'll have to adjust the command accordingly (a quick check is shown below)
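If you are not sure which device names your disks have in the rescue system, listing the block devices first helps (lsblk should be available in Hetzner's rescue image):
Code:
# list block devices with size and model to confirm what to pass to -hda/-hdb
# (sda/sdb here; NVMe disks show up as nvme0n1, nvme1n1, ...)
lsblk -o NAME,SIZE,TYPE,MODEL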
- if, like me, you have a non-English keyboard, you can choose the appropriate keyboard layout by adding "-k de" or similar to the qemu command line
- connect to the newly started VM using any VNC viewer. Beware that there is no password protection, so at least in theory anybody else can connect, too, meaning that you should not leave it running unattended for long.
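With "-vnc :0" the VM's console is reachable on display :0, i.e. TCP port 5900 on your server's public address; the client and address below are just placeholders for whatever you use:
Code:
# display :0 corresponds to TCP port 5900; any VNC client will do
vncviewer your.server.ip.address:0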
- despite the fact that you have started qemu with the "-enable-kvm" switch, the Proxmox installer will complain that virtualization is not available. Just ignore that message; later on, Proxmox will of course run directly on the virtualization-capable bare metal.
- When the installer is done and tells you to reboot your server, press the reboot button, but don't let it boot again; instead terminate the qemu virtual machine you started earlier (one way to do that is shown below)
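How exactly you terminate it depends on how you started qemu; assuming it is still running in the foreground of your rescue shell (as with the command above), this is enough:
Code:
# if qemu is running in the foreground of the rescue shell, just press Ctrl+C
# if it is running in another shell (screen/tmux etc.), kill it by name:
pkill -f qemu-system-x86_64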
- you may be tempted to just reboot your physical server as well and see if it is reachable, but that won't work.
The reason is that the network interface(s) have a different name now. Previously, during the qemu installation, your NICs were virtualized and got a virtual name (in my case "ens3"). After the reboot, those virtual devices are not there anymore and thus the network doesn't come up.
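If you want to see what interface names the bare metal actually has, you can look at them from the rescue system (keep in mind the installed system may still name them differently, which is exactly the problem):
Code:
# brief overview of all network interfaces and their current names
ip -br link show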
- so instead start the virtual machine again, using the same switches as previously:
Code:
qemu-system-x86_64 -enable-kvm -m 1024 -hda /dev/sda -hdb /dev/sdb -cdrom /proxmox.iso -boot d -vnc :0
- but this time don't choose "Install"; choose "Rescue Mode" instead, but don't start it directly either
- If you do, rescue mode will not find a valid installation (for reasons I have not investigated further). But you can circumvent this obstacle by editing the GRUB entry. To do so, highlight "Rescue Mode" at the initial screen, but instead of pressing Enter to select it, press "e" to edit it.
- Change the entry to look like this:
Code:
setparams 'Rescue Boot'
insmod ext2
set tmproot=$root
insmod zfs
search --no-floppy --label rpool --set root
linux /ROOT/pve-1/@//boot/pve/vmlinuz ro ramdisk_size=16777216 root=ZFS=rpool/ROOT/pve-1 boot=zfs
initrd /ROOT/pve-1/@//boot/pve/initrd.img
boot
set root=$tmproot
- finally press Ctrl-x or F10 to boot
- log in as root, open /etc/default/grub and change the GRUB_CMDLINE_LINUX line to look like this:
Code:
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs net.ifnames=0 biosdevname=0"
This instructs systemd/udev not to rename network devices to "predictable" names upon boot, so they keep the classic eth0, eth1, ... names.
- apply the updated GRUB configuration
Code:
grub-mkconfig -o /boot/grub/grub.cfg
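Before rebooting you can check that the parameters actually made it into the generated configuration:
Code:
# every "linux" line in the generated config should now carry the new parameters
grep -n "net.ifnames=0" /boot/grub/grub.cfg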
- open /etc/network/interfaces
- the essential part is the "bridge_ports" line. You will probably only see one interface listed here; in my case it was the virtual interface named "ens3". In the glorious days of systemd, names for hardware devices are not "stable" anymore; instead they get enumerated depending on their physical location (i.e. the hardware slot they are plugged into). First I tried to just add the "usual" enp2s0 and enp3s0 interfaces to the bridge_ports list, but unfortunately that didn't work, at least not for my setup, so I ended up disabling the "predictable naming" in the two GRUB steps above and reverting to the "old style" eth0, eth1 and so on.
Code:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address xxx.xxx.xxx.xxx
        netmask xxx.xxx.xxx.xxx
        gateway xxx.xxx.xxx.xxx
        bridge_ports ens3 eth0 eth1 enp2s0 enp3s0
        bridge_stp off
        bridge_fd 0
After that, shut down your virtual machine, reboot your physical server, wait a minute and finally access your shiny new Proxmox 5.1 server.
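A minimal sketch of that last sequence (the address below is just a placeholder for your server's IP):
Code:
# inside the qemu VM (via VNC): cleanly shut down the freshly installed system
shutdown -h now

# back in the Hetzner rescue shell: reboot the physical machine
reboot

# from your workstation, once the server is back up:
ssh root@your.server.ip.address
# or open the web UI at https://your.server.ip.address:8006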
In terms of "blaming" and asking "who made this so fucking complicated" ... there is no one to really blame.
Hetzner even has a ready-to-use Proxmox installation available; unfortunately it is not using ZFS but only ext4+LVM. What's really "breaking" things is the fact that Debian Stretch, which Proxmox 5.x is based on, has finally fully embraced systemd, leaving us with "predictable interface names" that somehow are not so predictable in the end, at least in some setups ...
Update 2018-08-03: fixed typo in grub config, thanks @Michel V