ZFS Install fails for PVE 4.2 on Intel NUC6i7KYK + Samsung 950 Pro NVMe

jamver

Installing to ext4 works fine, but installing to ZFS as either RAID0 (single drive) or RAID1 (two NVMe drives) always fails with the errors below, even after trying the options fabian mentioned elsewhere, such as intel_iommu=on and rootdelay=20.

[...]
REALLY setting name!
The operation has completed successfully.
cannot create 'rpool': no such pool or dataset
unable to create zfs root pool
umount: /rpool/ROOT/pve-1/var/lib/vz: mountpoint not found
umount: /rpool/ROOT/pve-1/tmp: mountpoint not found
umount: /rpool/ROOT/pve-1/proc: mountpoint not found
umount: /rpool/ROOT/pve-1/sys: mountpoint not found
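
For reference, here is what I plan to try next from the installer's debug mode, which drops you into a shell between steps, to see the real error behind "cannot create 'rpool'" (the partition number below is a guess based on the layout the installer creates, so check it first):

# list the NVMe devices and the partitions the installer created
ls -l /dev/nvme*

# force the kernel to re-read the partition table in case it missed the update
partprobe /dev/nvme0n1

# try creating the pool by hand on an assumed data partition
zpool create -f rpool /dev/nvme0n1p3
zpool status rpool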
 
I had the same kind of problem when installing Proxmox on a ZFS root (zroot).

Unless you really need ZFS as your root volume, I would suggest installing Proxmox on a minimal ext4 partition (say 10 GB; that is plenty, even when dealing with updates) and leaving empty space that you can allocate to ZFS after the system installation is complete.
It's a lot simpler and will prevent many headaches, since the root filesystem is not really used after Proxmox is up and running, except for booting the system.
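
As a rough sketch of what that looks like once the ext4 install is done (the device path and the "tank" pool name here are just examples, adjust them to your actual layout):

# create a pool on the leftover partition (ashift=12 suits 4K-sector SSDs)
zpool create -o ashift=12 tank /dev/nvme0n1p4

# register it with Proxmox as storage for disk images and containers
pvesm add zfspool tank-vm --pool tank --content images,rootdir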
 
I really need ZFS root; your suggested setup both complicates operations/management and reduces overall integrity and resilience to failure.

I certainly do not intend to run both software RAID1 on ext4+LVM and ZFS on the same system - that would be insane, and ZFS loses efficiency when it does not have full access to the drives.
 
Personally, I keep using Proxmox, which I find is getting better and better ... To me it is the simplest and most efficient solution to date.
I don't say other solutions are bad, but they are either very complicated to set up and manage (OpenStack, CloudStack, ...) or very expensive (VMware ...), especially for simple setups such as non-clustered installations.
It's also a lot more feature-rich than minimalistic setups such as libvirt.

I would still recommend using a RAID1 partition with ext4 as the root filesystem and allocating the remaining disk space to a RAIDZ1 (ZFS) pool for VM/CT storage, instead of putting everything on ZFS, but that's my point of view and my experience.

Anyway, for VM/CT storage, go for ZFS for sure; this is by far the best solution for local storage (in the case of a clustered setup, you should go for Ceph).
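
For example, with three spare disks the VM/CT pool could be created along these lines (the device names and the "vmdata" pool name are placeholders, not a recipe for your exact hardware):

# RAIDZ1 across three whole disks, with lz4 compression,
# which is almost always a win for VM images
zpool create -o ashift=12 vmdata raidz /dev/sdb /dev/sdc /dev/sdd
zfs set compression=lz4 vmdata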
 

Thank you very much; your advice is valuable.

Do you use NVMe SSDs or normal M.2 (SATA) ones? I'm undecided whether to wait for the new Samsung 960 Pro (in Italy it should arrive in late January) or buy the Crucial MX300 drives, which are really cheap.
 
I don't use NUCs as Proxmox nodes, but I have installed Proxmox on many different configurations, from PCs to servers.
I currently run a few Proxmox clusters; my biggest one is a 20-node cluster with both ZFS local storage and Ceph.
My ZFS local storage runs on a RAIDZ1 pool of 6 Samsung 850 EVO SSDs and it works very well (no NVMe, nor M.2, just plain SATA3 SSDs, and not even the "Pro" version).
For your information, I put high read/write I/O on them, since I'm hosting high-performance VMs on this storage (especially databases), and after one year of intensive usage my media wearout level is at 50%, so they won't last a lot longer ... I'll have to replace them in a few months, but they are quite cheap for the capacity.
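
If you want to watch the same thing on your own drives, smartmontools exposes it; the attribute name varies by vendor (on these Samsung 850 EVOs it is Wear_Leveling_Count), so the grep below is only an example:

apt-get install smartmontools
smartctl -A /dev/sda | grep -i -e wear -e percent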

However, I ran into lots of trouble with budget SSDs as root drives for Proxmox; they died in a very short time, breaking my cluster quite often.
These drives were Corsair Force LS60 ... I highly discourage anyone from using them.
I replaced them with Intel DC S3510 series drives (SSDs for datacenters); they are twice the price but much better value for the money.
 