Proxmox VE on Debian Jessie with ZFS - Hetzner

The latest version of ZFS (0.7.9) contains a fix for SSD/NVMe disk detection:
issue #7304

Using whole disks in ZFS pools increases speed, because ZFS can enable the write cache of the disks.
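If you want to verify that on your box, a quick check from the rescue shell could look like this (a sketch; assumes SATA disks and that hdparm is installed):

Code:
# 1 = volatile write cache enabled, 0 = disabled; ZFS turns it on itself
# when it is handed the whole disk instead of a partition
hdparm -W /dev/sda
hdparm -W /dev/sdb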
 
New method, working in 2018 :) Fastest one.


My method:
Boot the server into Rescue mode = Linux (beta) 64 bit

Download the ISO image:
link to the Proxmox downloads page (I'm a new user, can't add a URL)
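In the rescue shell that step boils down to something like this (a sketch; $URL stands for the current ISO link from the Proxmox downloads page):

Code:
# fetch the installer ISO into the rescue ramdisk and verify it
wget -O /root/proxmox-ve_5.1-3.iso "$URL"
sha256sum /root/proxmox-ve_5.1-3.iso   # compare against the checksum published next to the download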

Run the virtual environment:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 -drive file=/dev/sda,format=raw,cache=none,index=0,media=disk -drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk -cdrom proxmox-ve_5.1-3.iso -boot d -vnc :0

Connect via VNC, install the system, reboot.
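For the VNC step any viewer will do; a sketch from your workstation (<server-ip> is a placeholder, -vnc :0 maps to TCP port 5900):

Code:
# connect directly (unencrypted)...
vncviewer <server-ip>:0
# ...or tunnel the VNC port through SSH first and point the viewer at localhost
ssh -L 5900:127.0.0.1:5900 root@<server-ip>
vncviewer 127.0.0.1:0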

Run the system again and set up the network card:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 -drive file=/dev/sda,format=raw,cache=none,index=0,media=disk -drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk -vnc :0

nano /etc/network/interfaces

Reboot the server. It only takes 5 minutes :)



#/etc/network/interfaces
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
address 68.19.11.155
netmask 255.255.255.224
gateway 68.19.11.129
bridge_ports enp3s0
bridge_stp off
bridge_fd 0
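Once the machine is back up on the real hardware, a quick sanity check along these lines (a sketch using the example addresses above) confirms the bridge came up:

Code:
# the bridge should hold the public address and the default route, and the gateway should answer
ip addr show vmbr0
ip route show default
ping -c 3 68.19.11.129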
 
I followed @PanWaclaw's how-to. It worked pretty well... until the reboot ;-)
The server did not respond to ping, not even after booting into Rescue. I requested a manual reset, and support was kind enough to directly attach a LARA. I was greeted by a PVE login screen, so that part was already fine - in my case the interface name was different (enp4s0). The reason it did not boot into rescue was the BIOS configuration (HDD first), which I changed right after correcting the interface names :)

However, when you're installing from the rescue system, you can identify the names that systemd will allocate to your interfaces beforehand (see this blog post): major.io/2015/08/21/understanding-systemds-predictable-network-device-names/

Code:
root@rescue ~ # udevadm info -e | grep -A 9 ^P.*eth0
P: /devices/pci0000:00/0000:00:1c.5/0000:04:00.0/net/eth0
E: COMMENT=PCI device 0x8086:0x10d3 (e1000e)
E: DEVPATH=/devices/pci0000:00/0000:00:1c.5/0000:04:00.0/net/eth0
E: ID_BUS=pci
E: ID_MODEL_FROM_DATABASE=82574L Gigabit Network Connection (Motherboard)
E: ID_MODEL_ID=0x10d3
E: ID_NET_DRIVER=e1000e
E: ID_NET_NAME_MAC=enx001122334455
E: ID_NET_NAME_PATH=enp4s0
E: ID_OUI_FROM_DATABASE=Asustek Computer Inc

According to the blogpost, the order of preference is the following:
  • ID_NET_NAME_FROM_DATABASE
  • ID_NET_NAME_ONBOARD
  • ID_NET_NAME_SLOT
  • ID_NET_NAME_PATH
  • ID_NET_NAME_MAC
So you should easily be able to find out which name will be used.
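A quick way to dump those candidates for every interface in one go could look like this (a sketch, run in the rescue shell):

Code:
# print all ID_NET_NAME_* candidates for each interface the rescue kernel named eth*
for dev in /sys/class/net/eth*; do
    echo "== ${dev##*/} =="
    udevadm test "$dev" 2>/dev/null | grep ID_NET_NAME_
done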

Also, for qemu, you could use the once flag to -boot (see manpage: qemu-system-x86_64(1)), so you wouldn't need to stop the VM and start it with a new command.
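For reference, the installer command from the first post with that flag would look roughly like this (same drives and ISO name as above):

Code:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
  -drive file=/dev/sda,format=raw,cache=none,index=0,media=disk \
  -drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk \
  -cdrom proxmox-ve_5.1-3.iso -boot once=d -vnc :0
# once=d boots the CD only for this first start; the reboot after the install
# then comes up from disk without restarting qemu with different options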
 
@udotirol
Code:
    setparams 'Rescue Boot'
    
    insmod ext2
    set tmproot=$root
    insmod zfs
    search --no-floppy --label rpool --set root
    linux /BOOT/pve-1/@//boot/pve/vmlinuz ro ramdisk_size=16777216 root=ZFS=rpool/ROOT/pve-1 boot=zfs
    initrd /ROOT/pve-1/@//boot/pve/initrd.img
    boot
    set root=$tmproot
Hey, thanks for this write-up. For this code part: the first path (in the linux line) should probably also be /ROOT.
For people trying this: the line is already in the config, just delete the other one.

And note that this will erase your ssh key, so do remember the password you set and reapply your authorized_keys.
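Re-applying the key afterwards is a one-liner from your workstation, or a short paste on the server; a minimal sketch (key and host are placeholders):

Code:
# from the workstation, after one password login
ssh-copy-id root@<server-ip>
# or directly on the server
mkdir -p /root/.ssh && chmod 700 /root/.ssh
echo "ssh-ed25519 AAAA... user@workstation" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys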
 
Even if this works, as long as the rescue image does not provide ZFS natively I would not recommend using it this way for any business-related projects. (Re)installing a host takes way too long this way and is not automatable.
Installing 2 additional SSDs (they don't even have to be DC ones) is cheap, and systems usually have enough drive slots to fit enough disks.
 
Can you please explain this? I'm looking to use Proxmox for my business and I'm currently evaluating it. What would be the best setup for production? You can just boot into the rescue image and install ZFS if you want to mess with ZFS filesystems. Before Proxmox, on another server, I used to boot into the rescue image, install ZFS quickly, and mount my pool (mirror array) to change config files whenever I messed up the network and couldn't access the dedi anymore.
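Roughly sketched, that rescue repair looks like this (dataset names assume a default PVE ZFS install; adjust to your pool):

Code:
zpool import                       # list the pools visible on the disks
zpool import -f -R /mnt rpool      # import the PVE pool under the /mnt altroot
zfs mount rpool/ROOT/pve-1         # mount the root dataset if it did not auto-mount
nano /mnt/etc/network/interfaces   # fix whatever broke
zpool export rpool                 # clean export, then reboot into the installed system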

What exactly is the difference between software RAID 1 and a ZFS mirror for 2 drives (Hetzner dedi, if it matters)? Which would be better for data preservation?
 

The latter (ZFS) is supported by the Proxmox staff, so if you have a support subscription and want to run this in production, this is the way to go.

You can just boot into the rescue image and install ZFS if you want to mess with ZFS filesystems.

Sure, you can do that, but that is exactly @DerDanilo's point: it is manual, and installing and compiling the ZFS modules takes time, which is nothing you have during a production system outage. Best would be to have such a system already prepared, e.g. build one with Debian Live with some scripts already at hand to deal with the restore of a PVE system, and test it multiple times.
 
Can you please point me in that direction? I'm still conflicted; installing ZFS would only take minutes in a live recovery situation...

Wouldn't the benefits outweigh the cons of installing everything? Wouldn't there be a much better chance of recovering from a failed drive with ZFS versus just software RAID?

Edit: I skipped over your first sentence somehow; I guess I will be going with ZFS since I do plan on buying the subscription.
 
Can you please point me in that direction? I'm still conflicted; installing ZFS would only take minutes in a live recovery situation...

Unfortunately no, it's not. Debian does not and will not ship a binary compiled ZFS module ("wrong" open source license), so you have to compile it on every install for every installed kernel. This can be solved by building your own live distribution with the great live-build (and related) packages in Debian.
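A very rough live-build sketch of what that could look like (untested here; release and package names are assumptions, see the live-build manual):

Code:
apt-get install live-build
mkdir zfs-rescue && cd zfs-rescue
lb config --distribution buster --archive-areas "main contrib"
# pull the ZFS userland, the DKMS module and matching kernel headers into the image
echo "zfsutils-linux zfs-dkms linux-headers-amd64" > config/package-lists/zfs.list.chroot
lb build   # produces a live image you can keep around for rescue jobs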
 
So you're recommending plain software RAID without ZFS over ZFS?
 
PanWaclaw's solution does work, but only with 2 disks max. This is because QEMU's default is IDE, and, well, 4 devices are the maximum there (good old times).

If you need more disks, use this:
Code:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
-drive file=/dev/sda,format=raw,cache=none,index=0,media=disk,if=virtio \
-drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk,if=virtio \
-drive file=/dev/sdc,format=raw,cache=none,index=2,media=disk,if=virtio \
-drive file=/dev/sdd,format=raw,cache=none,index=3,media=disk,if=virtio \
-drive file=/root/proxmox-ve_6.0-1.iso,format=raw,index=1,media=cdrom -boot d -vnc :1

If you add more disks, don't forget to count the index up.
The index is per bus type, so the cdrom is 1 again: the disks are now on the virtio bus, while the cdrom defaults to IDE.
 
Thanks all for sharing your solutions here. Very helpful in my quest for a Proxmox+ZFS+Hetzner server.

Here is the approach that worked well for me, adapted from PanWaclaw's and Mogli's solutions/knowledge.

You may also be interested in the answer I wrote about P2V'ing this node, i.e. physical-Linux-to-virtual conversion using KVM and Proxmox, with a running P node, without having to keep the existing partition sizes etc.: https://serverfault.com/a/988703/64325

HTH

Code:
### Redeployment of existing Hetzner root/dedicated node with Proxmox 5.4 iso
### Physical node was using md RAID1 with spinning disks, herein referred to as P

# In Hetzner control panel - order rescue system with Linux 64 Bit
# note the generated root password

# reboot P node, wait a little and then login with root@nodeip and use the generated root password

# get pmox iso image, replace $URL with a valid pmox ISO installer link
curl -L -O $URL
# verify $download file name etc, place image in /proxmox.iso
mv -iv $download /proxmox.iso
# checksum the iso and verify with vendors sums
sha256sum /proxmox.iso

# try to get a list of the predictable network interface names, note them for later
root@rescue ~ # udevadm test /sys/class/net/eth0 2>/dev/null |grep ID_NET_NAME_
ID_NET_NAME_MAC=enx14dae9ef7043
ID_NET_NAME_PATH=enp4s0

# start a vm with the pmox installer and vnc
# man page reference https://manpages.debian.org/stretch/qemu-system-x86/qemu-system-x86_64.1.en.html
# -boot once=d = boot from the cdrom iso one time, next reboot will follow normal boot seq
# make sure to replace -smp -m and -drive options with ones matching your hardware
# !!! ACHTUNG !!! this will DESTROY the partition tables and DATA on the specified drives
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
-drive file=/dev/sda,format=raw,cache=none,index=0,media=disk \
-drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk \
-vnc :0 -cdrom /proxmox.iso -boot once=d

# Connect VNC to your host address:5900
# https://www.tightvnc.com/download.php
# Download TightVNC Java Viewer (Version 2.8.3)

# install pmox via VNC GUI wizard
# GUI installer showed ens3 for the nic, which is due to the qemu emulation, ignore it

# reboot vm at the end of the install, it will boot grub, let it boot normally

# login to the new pve - edit network interfaces
# !!! ACHTUNG !!! check/update iface names and bridge ports
# as above my interface was predicted as enp4s0, this worked as hoped
# replace $EDITOR with your preferred editor, but nano might be the only pre-loaded right now
$EDITOR /etc/network/interfaces
# shutdown vm
shutdown -h now

# reboot out of the rescue image to boot pmox on the physical hardware
# shutdown -r now
 
I would add that you should do these small additional steps too.

1. Before you start the qemu session, enable nested KVM so the Proxmox installation does not nag about KVM not being supported:
# cat /sys/module/kvm_intel/parameters/nested
# rmmod kvm_intel
# modprobe kvm_intel nested=1
# cat /sys/module/kvm_intel/parameters/nested

2. After the installation of Proxmox is finished and you hit the reboot button:
stop qemu and DO NOT LET IT BOOT from disk for the first time,
otherwise you will end up with QEMU all over the place in dmidecode and hwinfo, and with /dev/sdX devices instead of /dev/nvme* (in case you use NVMe).
Stop qemu, reboot out of the rescue session and let the server boot on the real hardware, even though it might not be reachable via IP yet.
Just let it boot, sit there for 10 minutes to be safe, and then you can go back into the rescue session and start the installed system via qemu again, now without the cdrom.
Then, if needed, change the network device name in /etc/network/interfaces. I looked up /var/log/messages from the non-qemu boot and found out the real network name that needs to be used (eno1 in my case).
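A sketch for digging that name out of the logs of the bare-metal boot (assumes rsyslog still writes /var/log/messages, as mentioned above):

Code:
# the udev rename lines reveal the name the real NIC got, e.g. "eno1: renamed from eth0"
grep -i "renamed from" /var/log/messages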
 
Heya

Been reading up on this thread and I'm still not 100% sure I follow it all.
I would like to install Proxmox 6 on a Hetzner server (2 NVMe drives, 2 SATA drives), and I thought that asking for a bootable USB stick would allow me to boot the installer and Proxmox would take care of the ZFS setup (at least for the boot disks)?

Does this now work?

Thanks!
 
Why go for a USB stick? Without LARA this isn't really much fun either, and Hetzner has no IPMI, shame on them.

In essence, what we do is:
- start rescue mode (which is basically Linux in a ramdisk, booted from the network)
- download the install image (into that virtual ramdisk in which rescue lives; don't worry about it, it's /)
- start a virtual machine from the command line which boots from the install image and uses the physical hard drives as block devices
- now the tricky part: we need a bare VNC connection to that virtual machine so we can see the installer
- install it. We basically install into that VM we just started, which uses the real hard drives.
That's why we need to be careful to reconfigure the network card names, as they will differ because of the rescue environment.


Now, the trick is that downloading that image into rescue is basically like having it on USB.
The tricky part is to see the installer, which is why we boot up a virtual machine we can reach over VNC - the whole thing is condensed into a few commands below.
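Condensed, the whole dance is only a handful of commands (a sketch reusing the invocations from earlier in the thread; adjust disks, ISO name and NIC names to your box):

Code:
# 1. in the Hetzner rescue system: fetch the installer ($ISO_URL is a placeholder)
wget -O /root/proxmox.iso "$ISO_URL"
# 2. boot the installer in a VM that writes to the real disks, watch it over VNC display :0
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
  -drive file=/dev/sda,format=raw,cache=none,index=0,media=disk \
  -drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk \
  -cdrom /root/proxmox.iso -boot once=d -vnc :0
# 3. after the install: fix /etc/network/interfaces for the real NIC name,
#    shut the VM down and reboot the server out of rescue mode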
 
hi bofh (like that name btw, hope it has to do with the book)

I thought using the LARA and a USB installer would do the same thing, just a bit easier?
or does that not work?
 
Just install plain Debian with mdadm (Hetzner installer) and then Proxmox PVE on top. It takes about 5-7 minutes with DC NVMe until the PVE interface is up. I should mention that we use Ansible to assist.
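That path is the documented "Install Proxmox VE on Debian" route; roughly (a sketch for PVE 6 on Buster, double-check the repo line and key for your release):

Code:
# add the no-subscription PVE repository and its release key, then pull in PVE
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt full-upgrade -y
apt install -y proxmox-ve postfix open-iscsi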
 
hi bofh (like that name btw, hope it has to do with the book)

I thought using the LARA and a USB installer would do the same thing, just a bit easier?
or does that not work?


I last installed Proxmox on a Hetzner server with NVMe disks in 01/2020.

I used LARA with a USB image provided by Hetzner, based on the official ISO.

The install failed with the Hetzner-prepared image of the 6.1 ISO.

Then I directly mounted the original Proxmox ISOs via LARA.

The install failed on:
- 6.1
- 6.0

My suspicion was the 7-year-old Hetzner hardware in conjunction with UEFI.

Then I installed via LARA from a directly mounted Debian image and stuck Proxmox on top (no ZFS) - that install worked, no problem.
The system was unstable during high-bandwidth operation (it turned out it was NOT Proxmox, but the messed-up Intel drivers for that particular NIC, causing the host to be unresponsive on said NIC from the WWW).

So on a whim I (still using LARA) did an install from a directly mounted Proxmox 5.4 ISO and used the regular upgrade path to 6.1.
Worked like a charm: Proxmox with ZFS on NVMe.

AFAIR I had the same issue as the 6.0 ISO issue (from above) on an NVMe disk at the beginning of 2019; back then, AFAIR, a regular SSD Hetzner server with the same specs actually worked.
 
hi bofh (like that name btw, hope it has to do with the book)

I thought using the LARA and a USB installer would do the same thing, just a bit easier?
or does that not work?
You mean the Bastard Operator From Hell writing in an IT forum? Naa, can't be :)

Well, not really easier; it's much easier to do this via the console and a KVM machine, the whole process takes under 4 minutes.
And you'd better get used to it if you use ZFS as the root filesystem,

because it's the only way to rescue that system without LARA (and that can take a while to get). And sadly, there are currently many reasons why you might need rescue.
 
