Installation failure (with ZFS on NVMe)

Dalibor Toman

New Member
Feb 1, 2018
Hi,

I have just tried to install PVE 5.1-3 (from a USB stick) on a Supermicro server with 2x 1 TB NVMe and 2x 10 TB HDDs. During setup I selected ZFS on both NVMe drives, after which the setup crashed. The setup console (ALT-2) shows that 'dd' was used, probably to wipe data on three new partitions. One run succeeded and two failed with:
dd: error writing '/dev/nvme0n1p1': No space left on device
dd: error writing '/dev/nvme0n1p9': No space left on device

dd on /dev/nvme0n1p2 succeeded.

fdisk /dev/nvme0n1 shows that /dev/nvme0n1p2 is the large partition, /dev/nvme0n1p1 is 2048 sectors at the start of the disk, and /dev/nvme0n1p9 is another small reserved area at the end of the disk.

It seems that the setup process tries to write more data than the smaller partitions can hold. Either it failed to determine the real size of the partitions, or it uses some predefined size that is too big...

Is there something I can do about it?

Thanks
 
Hi,

That explains the first 2048-sector partition (I had already read about it somewhere), but it doesn't explain why the installer should fail to initialize the partition.

Thanks
 
Please boot in debug mode and provide the installer log (/tmp/install.log) after the installation has failed. That dd writes too much can easily happen, but it is not a problem: the installer does not check the size of existing partitions, it just writes a fixed amount of data to clear any remainder of previous file systems, and ignores the error if the partition was actually smaller.
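The fixed-size wipe described above can be sketched like this. This is a reconstruction, not the installer's actual code, and the 200 MiB count is an assumed value; /dev/full (which always fails writes with ENOSPC) stands in for a too-small partition:

```shell
# Reconstruction (assumed, not the installer's real code) of a wipe step
# that writes a fixed amount of zeros and tolerates "No space left on
# device" on partitions smaller than the write.
wipe_partition() {
    # 200 MiB is an assumed wipe size; smaller partitions overflow harmlessly.
    dd if=/dev/zero of="$1" bs=1M count=200 2>/dev/null || true
}

# /dev/full always returns ENOSPC on write, simulating a too-small
# partition such as nvme0n1p1:
wipe_partition /dev/full
echo "exit code: $?"    # prints "exit code: 0" despite the ENOSPC
```

This matches the behaviour seen on the setup console: the error is printed by dd, but the installer carries on regardless.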
 
Hi,
I tried to install in debug mode. Here is what I found in install.log:
#zpool create -f -o cachefile=none -o ashift=12 rpool mirror /dev/nvme0n1p2 /dev/nvme1n1p2
cannot create 'rpool': no such pool or dataset

I have 2 identical (new) servers and the installation failed with the same error on both.

I have tried to run the zpool create manually on single disks, and with /dev/nvme0n1p2 or /dev/nvme1n1p2 I got the same error.
But when I tried to create the pool on a whole disk (i.e. on /dev/nvme1n1, not on a partition) it was created successfully.

I am new to ZFS, so I don't know why the pool creation failed. The partitions don't look like they are in use elsewhere (mount doesn't show them), and they are accessible (I tried reading from them with 'dd', and writing to them too).
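One possible cause (an assumption on my part, not confirmed in the thread): stale filesystem or RAID signatures left on the partitions can make `zpool create` refuse them. A diagnostic sketch, with a dry-run guard so nothing destructive happens until DRY_RUN is set to 0:

```shell
# Sketch: wipe leftover signatures from the partitions, then retry the
# exact zpool create line from install.log. DRY_RUN=1 only prints the
# commands; DRY_RUN=0 really runs them (destructive - erases the partitions).
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

for part in /dev/nvme0n1p2 /dev/nvme1n1p2; do
    run wipefs -a "$part"      # remove old filesystem/RAID signatures
done
run zpool create -f -o cachefile=none -o ashift=12 rpool \
    mirror /dev/nvme0n1p2 /dev/nvme1n1p2
```

If the retry still fails with "no such pool or dataset", the problem is likely elsewhere (e.g. in the kernel/ZFS module rather than on-disk state).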

I have tried to install a Proxmox ZFS mirror on /dev/sda and /dev/sdb and it worked (after I cleared up a problem with zpool import, which reported multiple 'rpool' pools because it also found one on the NVMe disks). So it looks like only the NVMe drives are affected.
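The duplicate-rpool cleanup mentioned above can be sketched as follows (same dry-run pattern as before; the commands only print until DRY_RUN is set to 0). The idea is that the NVMe partitions still carry old ZFS labels, and clearing them makes the duplicate pools disappear from `zpool import`:

```shell
# Sketch: list importable pools, then clear the stale rpool labels left on
# the NVMe partitions so only the real rpool on sda/sdb remains visible.
# DRY_RUN=1 only prints the commands.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run zpool import                     # lists importable pools with numeric ids
for part in /dev/nvme0n1p2 /dev/nvme1n1p2; do
    run zpool labelclear -f "$part"  # drop the stale rpool label
done
```

After that, `zpool import` should report a single rpool again.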

Thanks
 
Hi,

I just found that there is a problem with ZFS versus UEFI in Proxmox 5.1 (I had hoped it affected only older releases), and since I have to use UEFI to boot from NVMe, I have another problem.

I thought I would install Proxmox on the NVMe drives using a ZFS RAID1. Then I intended to use the NVMe drives for virtual machines too, and I planned to create another ZFS mirror on /dev/sda + /dev/sdb for storing data (for virtual machines that need larger data space).

I can probably install Proxmox on a ZFS mirror made of sda + sdb and boot from it without UEFI, but I think having Proxmox on the NVMe drives would be better?
 

PVE itself does not require much from the root storage (the most write-intensive things are syncing the DB behind /etc/pve every couple of seconds, and various log files; that's enough to trash cheap SSDs or USB sticks, but no problem otherwise). Putting it on regular spinning disks should not make much of a difference.
 
Hi,

Thanks for the info. The SSDs are Intel DC P4500s; they should cope fine with a higher write load, I hope.
 
