Proxmox VE on Debian Jessie with zfs - Hetzner

sigo

Member
Aug 24, 2017
The latest version of ZFS (0.7.9) contains a fix for SSD/NVMe disk detection:
issue #7304

Using whole disks in ZFS pools increases speed because ZFS enables the write cache for those disks.
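
To check whether the write cache actually got enabled on a SATA disk, something like the following can be used (the device name is just an example, and hdparm may need to be installed first):

Code:
# query the volatile write cache state of a SATA disk (read-only, does not change it)
hdparm -W /dev/sda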
 

PanWaclaw

New Member
Jan 30, 2017
New method, working in 2018 :) Fastest


My method:
Boot the server into Rescue mode = Linux (beta) 64 bit

Download the ISO image:
link to the Proxmox downloads page (I'm a new user, can't add URLs)

Run Virtual ENV:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 -drive file=/dev/sda,format=raw,cache=none,index=0,media=disk -drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk -cdrom proxmox-ve_5.1-3.iso -boot d -vnc :0

Connect via VNC, install the system, reboot

Boot the system and set up the network card:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 -drive file=/dev/sda,format=raw,cache=none,index=0,media=disk -drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk -vnc :0

nano /etc/network/interfaces

Reboot the server. It only takes 5 min :)



#/etc/network/interfaces
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
address 68.19.11.155
netmask 255.255.255.224
gateway 68.19.11.129
bridge_ports enp3s0
bridge_stp off
bridge_fd 0
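
After the final reboot on the bare metal, a quick sanity check of the bridge could look like this (a sketch; vmbr0, enp3s0 and the gateway address are the values from the example config above):

Code:
ip addr show vmbr0        # the bridge should carry the public address
bridge link show          # enp3s0 should show up as a bridge port
ping -c 3 68.19.11.129    # the gateway from the config above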
 

Mogli

New Member
Apr 19, 2016
I followed @PanWaclaw's how-to. It worked pretty fine... until the reboot ;-)
The server did not respond to ping, not even after booting into Rescue. I requested a manual reset and support was kind enough to directly attach a LARA. I was greeted by a PVE logon screen, so that part was already fine - in my case the interface name was different (enp4s0). The reason it would not boot into rescue was the BIOS configuration (HDD first), which I changed right after correcting the interface names :)

However, when you're installing from the rescue system, you can identify up front the names that systemd will assign to your interfaces (according to this blog): major.io/2015/08/21/understanding-systemds-predictable-network-device-names/

Code:
root@rescue ~ # udevadm info -e | grep -A 9 ^P.*eth0
P: /devices/pci0000:00/0000:00:1c.5/0000:04:00.0/net/eth0
E: COMMENT=PCI device 0x8086:0x10d3 (e1000e)
E: DEVPATH=/devices/pci0000:00/0000:00:1c.5/0000:04:00.0/net/eth0
E: ID_BUS=pci
E: ID_MODEL_FROM_DATABASE=82574L Gigabit Network Connection (Motherboard)
E: ID_MODEL_ID=0x10d3
E: ID_NET_DRIVER=e1000e
E: ID_NET_NAME_MAC=enx001122334455
E: ID_NET_NAME_PATH=enp4s0
E: ID_OUI_FROM_DATABASE=Asustek Computer Inc
According to the blogpost, the order of preference is the following:
  • ID_NET_NAME_FROM_DATABASE
  • ID_NET_NAME_ONBOARD
  • ID_NET_NAME_SLOT
  • ID_NET_NAME_PATH
  • ID_NET_NAME_MAC
So you should easily be able to find out which name will be used.

Also, for qemu, you could use the once flag to -boot (see manpage: qemu-system-x86_64(1)), so you wouldn't need to stop the VM and start it with a new command.
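
For illustration, PanWaclaw's first command with that flag applied would look roughly like this (a sketch; adjust -m, -smp and the drive paths to your hardware):

Code:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
  -drive file=/dev/sda,format=raw,cache=none,index=0,media=disk \
  -drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk \
  -cdrom proxmox-ve_5.1-3.iso -boot once=d -vnc :0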
 

Michel V

Member
Jul 5, 2018
@udotirol
  1. Code:
    setparams 'Rescue Boot'
    
    insmod ext2
    set tmproot=$root
    insmod zfs
    search --no-floppy --label rpool --set root
    linux /BOOT/pve-1/@//boot/pve/vmlinuz ro ramdisk_size=16777216 root=ZFS=rpool/ROOT/pve-1 boot=zfs
    initrd /ROOT/pve-1/@//boot/pve/initrd.img
    boot
    set root=$tmproot
Hey, thanks for this write-up. For this code part: the first path in the linux line should probably also be /ROOT instead of /BOOT.
For people trying this: the line is already in the config, just delete the other one.
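
Following that suggestion, the corrected linux line would read (my reading of it, untested):

Code:
linux /ROOT/pve-1/@//boot/pve/vmlinuz ro ramdisk_size=16777216 root=ZFS=rpool/ROOT/pve-1 boot=zfs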

And note that this will erase your ssh key, so do remember the password you set and reapply your authorized_keys.
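
One simple way to put your key back afterwards, from your workstation, once the host is reachable again (the address is the example one from the interfaces snippet above):

Code:
ssh-copy-id root@68.19.11.155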
 
DerDanilo

Jan 21, 2017
Berlin
Even if this works, as long as the rescue image does not provide ZFS natively, I would not recommend using it this way for any business-related projects. Re/installing a host takes way too long this way and is not automatable.
Installing 2 additional SSDs (they don't even have to be DC ones) is cheap, and systems usually have enough HDD slots to fit enough disks.
 

catbodi

New Member
Oct 21, 2018
Even if this works, as long as the rescue image does not provide ZFS natively, I would not recommend using it this way for any business-related projects. Re/installing a host takes way too long this way and is not automatable.
Installing 2 additional SSDs (they don't even have to be DC ones) is cheap, and systems usually have enough HDD slots to fit enough disks.
Can you please explain this? I'm looking to use Proxmox for my business and I'm currently evaluating it. What would be the best setup for production? You can just boot into the rescue image and install ZFS if you want to mess with ZFS filesystems. Before Proxmox, on another server, I used to boot into the rescue image, install ZFS quickly, and mount my pool (a mirror) to change config files whenever I messed up the network and couldn't access the dedi anymore.
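
For context, that rescue-mode fix looked roughly like this (a sketch; it assumes a Debian-based rescue image that can install or build the ZFS module, and the default Proxmox pool/dataset names):

Code:
# inside the rescue system: get the ZFS userland and kernel module
# (may require the contrib archive and zfs-dkms, depending on the image)
apt update && apt install -y zfsutils-linux
modprobe zfs

# import the pool under /mnt without mounting anything yet, then mount the root dataset
zpool import -f -N -R /mnt rpool
zfs mount rpool/ROOT/pve-1

# fix whatever broke, e.g. the network config
nano /mnt/etc/network/interfaces

# clean up and reboot back into the installed system
zpool export rpool
reboot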

What exactly is the difference between software RAID 1 and a ZFS mirror for 2 drives (a Hetzner dedi, if it matters), and which would be better for data preservation?
 

LnxBil

Famous Member
Feb 21, 2015
Germany
What exactly is the difference between software RAID 1 and a ZFS mirror for 2 drives (a Hetzner dedi, if it matters), and which would be better for data preservation?
The latter (ZFS) is supported by the Proxmox staff, so if you have a support subscription and want to run this in production, this is the way to go.

You can just boot into the rescue image and install ZFS if you want to mess with ZFS filesystems.
Sure, you can do that, but that is exactly @DerDanilo's point: it is manual, and installing and compiling the ZFS modules takes time, which is nothing you have during a production system outage. Best would be to have such a system already prepared, e.g. build one with Debian Live, with some scripts already at hand to deal with restoring a PVE system, and test it multiple times.
 

catbodi

New Member
Oct 21, 2018
The latter (ZFS) is supported by the Proxmox staff, so if you have a support subscription and want to run this in production, this is the way to go.



Sure, you can do that, but that is exactly @DerDanilo's point: it is manual, and installing and compiling the ZFS modules takes time, which is nothing you have during a production system outage. Best would be to have such a system already prepared, e.g. build one with Debian Live, with some scripts already at hand to deal with restoring a PVE system, and test it multiple times.
Can you please point me in that direction? I'm still conflicted; installing ZFS would take only minutes in a live recovery situation...

Wouldn't the benefits outweigh the cons of installing everything? Wouldn't there be a much better chance of recovering from a failed drive with ZFS versus just software RAID?

Edit: I skipped over your first sentence somehow, I guess I will be going with ZFS since I do plan on buying the subscription.
 

LnxBil

Famous Member
Feb 21, 2015
Germany
Can you please point me in that direction? I'm still conflicted; installing ZFS would take only minutes in a live recovery situation...
Unfortunately no, it's not. Debian does not and will not ship a binary compiled ZFS module (the "wrong" open source license), so you have to compile it on every install, for every installed kernel. This can be dealt with by creating your own live distribution with the great live-build (and related) packages in Debian.
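
As a rough illustration of that approach (a sketch only; the distribution, archive areas and package list are assumptions, not a tested recipe):

Code:
# on a Debian build host with the live-build package installed
mkdir zfs-rescue && cd zfs-rescue

# configure a live image that can see contrib, where the ZFS packages live
lb config --distribution buster --archive-areas "main contrib"

# have live-build include the ZFS userland, the DKMS module and kernel headers
cat > config/package-lists/zfs.list.chroot <<'EOF'
linux-headers-amd64
zfs-dkms
zfsutils-linux
EOF

# build the ISO; the ZFS module gets compiled once, during the build, not during an outage
lb build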
 

catbodi

New Member
Oct 21, 2018
Unfortunately no, it's not. Debian does not and will not ship a binary compiled ZFS module (the "wrong" open source license), so you have to compile it on every install, for every installed kernel. This can be dealt with by creating your own live distribution with the great live-build (and related) packages in Debian.
So you're recommending using software RAID without ZFS over ZFS?
 

bofh

Member
Nov 7, 2017
PanWaclaw's solution does work, but only with 2 disks max. This is because qemu's default is IDE, and, well, 4 devices are the max there (good old times).

If you need more disks, use this:
Code:
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
-drive file=/dev/sda,format=raw,cache=none,index=0,media=disk,if=virtio \
-drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk,if=virtio \
-drive file=/dev/sdc,format=raw,cache=none,index=2,media=disk,if=virtio \
-drive file=/dev/sdd,format=raw,cache=none,index=3,media=disk,if=virtio \
-drive file=/root/proxmox-ve_6.0-1.iso,format=raw,index=1,media=cdrom -boot d -vnc :1
If you add more disks, don't forget to count the index up as well.
The index counts per bus type, so the cdrom can be index 1 again: the disks are now on the virtio bus, while the cdrom stays on IDE by default.
 

Kyle

New Member
Oct 18, 2017
Thanks all for sharing your solves here. Very helpful in my quest for a Proxmox+ZFS+Hetzner server.

Here is the approach that worked well for me, adapted from PanWaclaw's and Mogli's solves/knowledge.

You may also be interested in the answer I wrote about P2V'ing this node, i.e. physical Linux to virtual conversion using KVM and Proxmox, with a running physical node, without having to obey the existing partition sizes etc.: https://serverfault.com/a/988703/64325

HTH

Code:
### Redeployment of an existing Hetzner root/dedicated node with the Proxmox 5.4 ISO
### Physical node was using md RAID1 with spinning disks, herein referred to as P

# In Hetzner control panel - order rescue system with Linux 64 Bit
# note the generated root password

# reboot P node, wait a little and then login with root@nodeip and use the generated root password

# get pmox iso image, replace $URL with a valid pmox ISO installer link
curl -L -O $URL
# verify $download file name etc, place image in /proxmox.iso
mv -iv $download /proxmox.iso
# checksum the ISO and verify against the vendor's published sums
sha256sum /proxmox.iso

# try get a list of predictable network interface names, note them for later
root@rescue ~ # udevadm test /sys/class/net/eth0 2>/dev/null |grep ID_NET_NAME_
ID_NET_NAME_MAC=enx14dae9ef7043
ID_NET_NAME_PATH=enp4s0

# start a vm with the pmox installer and vnc
# man page reference https://manpages.debian.org/stretch/qemu-system-x86/qemu-system-x86_64.1.en.html
# -boot once=d = boot from the cdrom iso one time, next reboot will follow normal boot seq
# make sure to replace -smp -m and -drive options with ones matching your hardware
# !!! ACHTUNG !!! this will DESTROY the partition tables and DATA on the specified drives
qemu-system-x86_64 -enable-kvm -m 4096 -cpu host -smp 8 \
-drive file=/dev/sda,format=raw,cache=none,index=0,media=disk \
-drive file=/dev/sdb,format=raw,cache=none,index=1,media=disk \
-vnc :0 -cdrom /proxmox.iso -boot once=d

# Connect VNC to your host address:5900
# https://www.tightvnc.com/download.php
# Download TightVNC Java Viewer (Version 2.8.3)

# install pmox via VNC GUI wizard
# GUI installer showed ens3 for nic, which is due to the qemu, ignore it

# reboot vm at the end of the install, it will boot grub, let it boot normally

# login to the new pve - edit network interfaces
# !!! ACHTUNG !!! check/update iface names and bridge ports
# as above my interface was predicted as enp4s0, this worked as hoped
# replace $EDITOR with your preferred editor, but nano might be the only pre-loaded right now
$EDITOR /etc/network/interfaces
# shutdown vm
shutdown -h now

# reboot out of the rescue image to boot pmox on the physical hardware
# shutdown -r now
 
