NVMe ZFS problem

arcanatigris

Hey all,

I'm trying to install a new server with PVE 4.3 using a ZFS RAID 1 configuration, but it always fails with
"unable to create zfs root pool", while a standard ext4 install works just fine.

Hardware:
2x Samsung SM961 1TB

Error:
Code:
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Creating new GPT entries.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
cannot create 'rpool': no such pool or dataset
unable to create zfs root pool
umount: /rpool/ROOT/pve-1/var/lib/vz: mount point not found
umount: /rpool/ROOT/pve-1/tmp: mount point not found
umount: /rpool/ROOT/pve-1/proc: mount point not found
umount: /rpool/ROOT/pve-1/sys: mount point not found

I have tried adding "intel_iommu=on rootdelay=10" (with different values) to the GRUB boot entry of the install disk:
at the "Install Proxmox VE" entry, press 'e', append them to the end of the linux line, and then press F10.

Maybe I am doing something wrong here? o_O
 
Could you try the installer in debug mode and, after it has failed, click abort and run "sgdisk -p /dev/DISK" (for each DISK, e.g. nvme0n1)?
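For reference, a quick way to run that from the debug shell, assuming the two disks show up as nvme0n1 and nvme1n1 (adjust the device names to your system):
Code:
# Print the GPT partition table of each NVMe disk:
for d in /dev/nvme0n1 /dev/nvme1n1; do
    echo "=== $d ==="
    sgdisk -p "$d"
done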
 
For some reason debug mode hangs and stops responding to any input.
[screenshot: the installer hung in debug mode]


Here is the output from a Debian live disk:
[screenshot: sgdisk -p output from the Debian live system]
 
I can reproduce this here, but I am afraid there is a bigger issue that will prevent you from running this setup (unless you are capable of bootstrapping it using jessie-backports): our installer does not support installing ZFS with UEFI support (because there is currently no sane way to mirror the EFI boot partition, so we simply don't), and almost all systems don't allow you to boot NVMe devices in legacy/BIOS mode.

You should be able to install with ZFS on regular disks (SSD or HDD) and add the NVMe devices as log and/or cache, or as a second pool.
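As an illustration of that suggestion, a sketch of adding the NVMe devices to an existing pool afterwards (the pool name rpool and the partition layout are assumptions; a log vdev should be mirrored, while cache devices need no redundancy):
Code:
# Add a mirrored SLOG (ZFS intent log) on the two NVMe drives:
zpool add rpool log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
# Or add them as L2ARC cache devices (striped, no redundancy needed):
zpool add rpool cache /dev/nvme0n1p2 /dev/nvme1n1p2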
 
Has this been fixed yet?
 
Any update on this issue as of today?


Best Regards,

Talion

ZFS+UEFI is still not supported - if that is the issue you are referring to?
 
Hi, I'm running into this issue.

I've tried installing in legacy mode and I can boot just fine from the NVMe (using ext4).
I've then tried to install the system using ZFS in legacy mode, but no luck.

Based on your observations @fabian, shouldn't it be possible to install ZFS in legacy mode to an NVMe array?

Regards
 
Usually it's not possible to boot from an NVMe device in legacy/BIOS boot mode.
 
Hi @Stoiko Ivanov , thanks for the reply,

My Lenovo M700 boots just fine: I set the mode in the BIOS to 'legacy only' and installed Proxmox on one of the NVMe disks. Works just fine.

Any pointers on how to install Proxmox using ZFS in this situation?
I only have space for two disks and I bought two 1TB SSDs to set up as RAID 1, so adding a third disk as root is not possible.
 
Please check on the M700, after installing in 'legacy only' mode (with ext4), whether you have a '/sys/firmware/efi' directory.
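A quick way to check, since that directory only exists when the kernel was booted via UEFI:
Code:
# /sys/firmware/efi is only present on systems booted in UEFI mode:
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted in legacy/BIOS mode"
fi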
Otherwise, you can try setting up the system with Debian (the ZFS on Linux wiki has instructions) on the NVMe, and then install proxmox-ve on top of that.
 
One NVMe device (usually) works; it's multiple devices that are basically impossible with legacy/MBR booting. You can either roll your own EFI setup using Debian Stretch, or move /boot and GRUB to a third, small non-redundant device (e.g. a USB stick) after installation and boot from that. Neither is really ideal, of course - we are working on some kind of solution for PVE 6.x.
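A rough sketch of the second workaround (all device names are assumptions; /dev/sdc stands in for the USB stick, on an already installed and booted system):
Code:
# Create a BIOS boot partition for GRUB's core image, plus an ext4 /boot:
sgdisk -n 1:0:+1M -t 1:ef02 /dev/sdc
sgdisk -n 2:0:0 -t 2:8300 /dev/sdc
mkfs.ext4 /dev/sdc2
# Copy the current /boot contents onto the stick:
mount /dev/sdc2 /mnt
cp -a /boot/. /mnt/
umount /mnt
# Mount the stick at /boot from now on, then install GRUB to it:
echo "UUID=$(blkid -s UUID -o value /dev/sdc2) /boot ext4 defaults 0 2" >> /etc/fstab
mount /boot
grub-install /dev/sdc
update-grub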
 
Not that I think using NVMe for boot is a good use of the disks, but I believe you can accomplish it as described here:
https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/

Essentially, you'd need to install Debian on ZFS as described here: https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS
and create an EFI partition on an md RAID1 partition (~200-500MB in size), taking care to substitute the md device(s) in the sgdisk steps. Once you have a running system, you can install proxmox-ve, taking care not to mix up the Proxmox ZFS packages.
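A condensed sketch of the ESP-on-md step, based on the linked blog post (device and partition names are assumptions; the crucial detail is metadata format 1.0, which puts the RAID superblock at the end of the device so the firmware can read each member as a plain FAT filesystem):
Code:
# Mirror the two (hypothetical) ESP partitions; metadata 1.0 keeps the
# superblock at the end, so UEFI firmware sees ordinary FAT partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
    /dev/nvme0n1p2 /dev/nvme1n1p2
mkfs.vfat -F32 /dev/md0
# Mount it as the EFI system partition and install GRUB for UEFI:
mkdir -p /boot/efi
mount /dev/md0 /boot/efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi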
 
I had to order an extra SSD to install Proxmox on.

Thanks for the info, but that would somewhat defeat the purpose of an NVMe server :)

So I guess we have to wait. Or are there maybe other workarounds, like an easy way to install a non-RAID boot partition and still use most of the space for a RAID 1 ZFS pool afterwards? That would at least offer some kind of redundancy and enable ZFS features for the most part...
Unfortunately, I don't have any NVMe hardware at hand with which I could test things, and doing so with an actual server at Hetzner requires remote hands/LARA and is therefore quite unhandy and a little costly...
 
I have the same problem: a Supermicro with four NVMe drives. I tried to install Proxmox with ZFS RAIDZ (RAID5) with no luck. Any solution?
 
