Proxmox VE on Debian Jessie with zfs - Hetzner

Has anyone tried to install Proxmox 5.1 with this method? Does it work? I get no errors, but I can't ping my server. And when I boot into rescue with FreeBSD, I can't import rpool.
 
vmbr does not work with the new interface naming, only with eth.

That's not true at all. All newly installed 5.x systems use the new predictable names by default, and of course set up a default vmbr.
 
Has anyone tried to install Proxmox 5.1 with this method? Does it work? I get no errors, but I can't ping my server. And when I boot into rescue with FreeBSD, I can't import rpool.
1. You cannot import a ZFS 0.7 pool with ZFS 0.6.
2. I wrote that vmbr does not work with ens interfaces; I mention this because I have tested it. You need to change the GRUB line to GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs net.ifnames=0 biosdevname=0" and rename the network interfaces to eth, then reboot. After that you can ping it, but only on 5.0; 5.1 does not work.
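To see why the import fails, it helps to first check which ZFS version the rescue environment actually ships before touching the pool. A minimal sketch, assuming a Linux rescue system with ZFS on Linux available (a FreeBSD rescue image uses different tooling):
Code:
# version of the ZFS module available in the rescue system
modinfo zfs | grep -iw version
cat /sys/module/zfs/version        # only works once the module is loaded

# list pools this ZFS version can see, without importing anything
zpool import

# optional: cautious read-only import attempt that leaves the datasets unmounted
zpool import -o readonly=on -N rpool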
 
That's not true at all. All newly installed 5.x systems use the new predictable names by default, and of course set up a default vmbr.
On Hetzner servers only 4.x/5.0 works; I have tested this for a month. The ens interfaces do not work, and Proxmox 5.1 does not work either (PX61/AX60).
 
On Hetzner servers only 4.x/5.0 works; I have tested this for a month. The ens interfaces do not work, and Proxmox 5.1 does not work either (PX61/AX60).

Then that is an issue with Hetzner's setup, not PVE.
 
1. You cannot import a ZFS 0.7 pool with ZFS 0.6.
2. I wrote that vmbr does not work with ens interfaces; I mention this because I have tested it. You need to change the GRUB line to GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs net.ifnames=0 biosdevname=0" and rename the network interfaces to eth, then reboot. After that you can ping it, but only on 5.0; 5.1 does not work.

How do I get the ZFS (0.7) pool mounted in a rescue system? Which live CD has a ZFS 0.7 build (so that I can run qemu from the live CD)?
 
Ah, sometimes it's easier than it looks.

I tried the "Rescue Boot" option in the installer before, but it just throws an error.

Now, by simply choosing "Abort" in the installer, the console works.

Thx
 
If you ask nicely, the Hetzner DC personnel will unofficially provide a USB pen drive with the latest stable PVE on it and plug it into the server when you order LARA, so you can install PVE within a few minutes. Another solution is to mount an ISO file with LARA, but that does not always work. So better ask them nicely and they'll help you. Hope this helps.
 
As I have had to fight that beast as well and finally won the battle, I thought I'd share my findings.

First, as laid out previously, the easiest way to get it up and running at Hetzner is to ask the staff to put in a USB stick with Proxmox 5.1 on it and then access your server via a LARA console.

For me, this didn't work out well, because I happen to have a 4K monitor and the LARA console was so tiny that it was practically unreadable.

I've also tried to upload the image into the LARA console, but they only support uploads from SAMBA/CIFS shares, which I don't have available.

So I decided to take the "difficult" approach and managed to install Proxmox 5.1 as described below:
  1. reboot your server into rescue mode (linux 64bit)
  2. download the latest Proxmox ISO from download.proxmox.com and store it as /proxmox.iso (see the example after this list)
  3. start the installation using QEMU
    qemu-system-x86_64 -enable-kvm -m 1024 -hda /dev/sda -hdb /dev/sdb -cdrom /proxmox.iso -boot d -vnc :0
  4. the server I'm installing on has old SATA drives, so they are available as /dev/sda and /dev/sdb; if you have different devices (like NVMe storage), you will have to adjust the device names accordingly (see the example after this list)
  5. if, like me, you have a non-English keyboard, you can choose the appropriate keyboard layout by adding "-k de" or similar to the qemu command line
  6. connect to the newly started VM using any VNC viewer. Beware that there is no password protection, so at least in theory anybody else can connect too, meaning you should not leave it running unattended for long.
  7. despite the fact that you started qemu with the "-enable-kvm" switch, the Proxmox installer will complain that virtualization is not available. Just ignore that message; Proxmox will of course run on virtualization-capable bare metal later.
  8. When the installer is done and tells you to reboot your server, press the reboot button, but don't let it boot again; instead, terminate the virtual machine you started in step 3
  9. you may be tempted to just reboot your physical server as well and see if it is reachable, but that won't work :)
    The reason is that the network interface(s) have a different name now. During the qemu installation, your NICs were virtualized and got a virtual name (in my case "ens3"). After the reboot, those virtual devices are no longer there, and thus the network doesn't come up
  10. so instead start the virtual machine again, using the same switches as previously:
    qemu-system-x86_64 -enable-kvm -m 1024 -hda /dev/sda -hdb /dev/sdb -cdrom /proxmox.iso -boot d -vnc :0
  11. but this time select "Rescue Mode" instead of "Install"; however, don't actually start it either
  12. If you do, rescue mode will still not find a valid installation (for reasons I have not investigated further). But you can circumvent this obstacle by editing the GRUB entry. To do so, highlight "Rescue Mode" on the initial screen, but instead of pressing Enter to select it, press "e" to edit it.
  13. Change the entry to look like this:
    Code:
    setparams 'Rescue Boot'
    
    insmod ext2
    set tmproot=$root
    insmod zfs
    search --no-floppy --label rpool --set root
    linux /ROOT/pve-1/@//boot/pve/vmlinuz ro ramdisk_size=16777216 root=ZFS=rpool/ROOT/pve-1 boot=zfs
    initrd /ROOT/pve-1/@//boot/pve/initrd.img
    boot
    set root=$tmproot
  14. finally press Ctrl-x or F10 to boot
  15. log in as root, open /etc/default/grub and change the GRUB_CMDLINE_LINUX line to look like this:
    Code:
    GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs net.ifnames=0 biosdevname=0"
    This instructs systemd not to rename network devices upon boot.
  16. apply the updated GRUB configuration
    Code:
    grub-mkconfig -o /boot/grub/grub.cfg
  17. open /etc/network/interfaces
  18. the essential part is the "bridge_ports" line. You will probably only see one interface listed here; in my case it was the virtual interface named "ens3". In the glorious days of systemd, names for hardware devices are no longer "stable"; instead they are enumerated depending on their physical location (i.e. the hardware slot they are plugged into). First I tried to just add the "usual" enp2s0 and enp3s0 interfaces to the bridge_ports list, but unfortunately that didn't work, at least not for my setup, so I ended up disabling the "predictable naming" in steps 15 and 16 and reverting to the "old style" eth0, eth1 and so on.
    Code:
    auto lo
    iface lo inet loopback
    
    auto vmbr0
    iface vmbr0 inet static
            address xxx.xxx.xxx.xxx
            netmask xxx.xxx.xxx.xxx
            gateway xxx.xxx.xxx.xxx
            bridge_ports ens3 eth0 eth1 enp2s0 enp3s0
            bridge_stp off
            bridge_fd 0
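As referenced in steps 2 and 4, here is a minimal sketch of fetching the ISO and checking the block device names in the rescue system. The ISO file name is a placeholder (pick the current one from download.proxmox.com/iso), and the NVMe device names are just an example:
Code:
# check which block devices the rescue system actually sees
lsblk -o NAME,SIZE,MODEL

# download the installer ISO (replace <iso-name> with the current file name)
wget -O /proxmox.iso http://download.proxmox.com/iso/<iso-name>.iso

# example qemu invocation for NVMe drives instead of /dev/sda and /dev/sdb
qemu-system-x86_64 -enable-kvm -m 1024 -hda /dev/nvme0n1 -hdb /dev/nvme1n1 -cdrom /proxmox.iso -boot d -vnc :0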

After that, shut down your virtual machine, reboot your physical server, wait a minute, and finally access your shiny new Proxmox 5.1 server.
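For completeness, a minimal sketch of that last step, run first inside the VM (via VNC) and then back in the Hetzner rescue shell; the pgrep check is only a safeguard and assumes qemu was started from that same shell:
Code:
# inside the virtual machine: cleanly power off the freshly installed PVE
shutdown -h now

# back in the rescue shell: make sure qemu has exited, then reboot the bare metal
pgrep -a qemu-system-x86_64
reboot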

In terms of "blaming" and asking "who made this so fucking complicated" ... there is no one to really blame.

Hetzner even has a ready-to-use Proxmox installation available; unfortunately it does not use ZFS but only ext4+LVM. What really "breaks" things is the fact that Debian Stretch, which Proxmox 5.x is based on, has finally fully embraced systemd, leaving us with "predictable interface names" that somehow are not so predictable in the end, at least in some setups ...

Update 2018-08-03: fixed typo in grub config, thanks @Michel V
 
If you manage to run a Debian live CD image using any available or working method, you can just use the usual partitioning tools and debootstrap to install a base system, reboot into that and install PVE on top of it... I've done that several times with success. The qemu way looks excessively complicated.
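In case it helps anyone, here is a rough sketch of that route from a Debian live system; the partition layout, device names and mirror URL are assumptions and need to be adapted to your setup:
Code:
# partition the disks with parted/gdisk first, then create and mount the root filesystem
mkfs.ext4 /dev/sda3                      # assumed root partition
mount /dev/sda3 /mnt

# install a minimal Debian Stretch and chroot into it
debootstrap stretch /mnt http://deb.debian.org/debian
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash

# inside the chroot: kernel, bootloader, ssh and a root password
apt-get update
apt-get install linux-image-amd64 grub-pc openssh-server
passwd
grub-install /dev/sda && update-grub
# edit /etc/network/interfaces and /etc/fstab before rebooting into the new system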
 
Though this is a nice writeup, it is faster and easier to just have a system SSD installed (single or dual) and install Debian on it with mdadm RAID 1.
Install PVE the Debian way and create + add ZFS pools via the CLI.
This takes about 5-10 minutes until complete, depending on your server hardware, and it is even mostly scriptable.
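Roughly, a sketch of the "PVE the Debian way" part plus the ZFS pool, assuming Stretch/PVE 5; the pool name, disk IDs and storage ID are placeholders:
Code:
# add the Proxmox VE repository and key on top of a plain Debian Stretch install
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt-get update && apt-get install proxmox-ve

# create a mirrored ZFS pool on the data disks and register it as PVE storage
zpool create -o ashift=12 tank mirror /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2>
pvesm add zfspool tank-storage --pool tank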
 
Hopefully there are easier ways to achieve this; unfortunately none of them worked for me ...

The process I described actually looks more complicated than it is; I repeated it yesterday for a couple of servers, and if you know what to do, it doesn't take much more than 10 minutes per server.
 
Hopefully there are easier ways to achieve this

I installed it in a local PVE with two disks and just dd'ed the disk images over to the Hetzner server via a live boot. The actual "install" on the Hetzner machine takes only minutes (given a fast internet connection).
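For reference, a rough sketch of that dd transfer, assuming the two images were exported as raw files on the local machine and the Hetzner box is sitting in the rescue system (host and file names are placeholders):
Code:
# stream the prepared images onto the physical disks of the rescue-booted server
dd if=disk1.raw bs=1M status=progress | ssh root@<rescue-ip> 'dd of=/dev/sda bs=1M'
dd if=disk2.raw bs=1M status=progress | ssh root@<rescue-ip> 'dd of=/dev/sdb bs=1M'

# re-read the partition tables and boot into the copied installation
ssh root@<rescue-ip> 'partprobe /dev/sda /dev/sdb; reboot'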
 
