Proxmox VE 6 ZFS root install issues

gnordli

New Member
Oct 5, 2019
Hi.

I am trying to get ZFS set up on a mirrored root. I have tried a bunch of different ways, but nothing seems to work. After searching through the forum, it seems most of the errors I can find are due to hardware RAID controllers, but these are SATA devices.

This is a new Supermicro server with two 64GB SATA DOM modules that will be used for the root disks.

1) When I try to create a RAID1 configuration, it tells me the disks are different sizes, so it won't work -- but these are identical disks.
2) If I try to create a RAID0 configuration, the installation goes through, but when rebooting I get:

Code:
error: unknown device 1.
Entering rescue mode...
grub rescue>

When I do a listing of the ZFS dataset from the rescue prompt, it says the compression algorithm is not supported.

Selection_533.png


Any ideas on how to get a ZFS root installed?

thanks,

Geoff
 
I also have two NVMe Optane drives I am going to be using as a SLOG. I tried installing on those drives and it gave me the "mirrored disks must be the same size" error message. That is the same error message I got with the SATA DOMs when I tried to mirror those.
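(Once the root pool exists, adding the Optanes as a mirrored SLOG would presumably be something along these lines; the device names are placeholders for whatever the system assigns:)
Code:
# sketch only: add the two Optane NVMe drives as a mirrored log vdev
zpool add rpool log mirror /dev/nvme0n1 /dev/nvme1n1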
 
Are you using the current PVE 6.0 official ISO?
Would it be possible to use EFI with this server? Then GRUB wouldn't be used at all, which could circumvent an eventual issue in GRUB's ZFS code being triggered by your setup.

1) When I try to create a raid1 configuration, it tells me the disks are different sizes, so it won't work -- but these are identical disks.

Can you use the Debug mode, and at the second prompt check with
Code:
lsblk
whether the system really sees the disks as the same size?
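On truly identical disks the SIZE column should match exactly; a hypothetical listing for two 64GB SATA DOMs (model names made up) could look like:
Code:
# lsblk -o NAME,SIZE,MODEL
NAME   SIZE MODEL
sda   59.6G SuperMicro SSD
sdb   59.6G SuperMicro SSD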
 
In the debug mode you could also check whether the pool was created successfully and whether the real ZFS on Linux module can import it:
Code:
modprobe zfs
zpool import
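A bare zpool import only lists importable pools without actually importing anything; a correctly created single-disk rpool would show up roughly like this (the ID and device name here are hypothetical):
Code:
   pool: rpool
     id: 1234567890123456789
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        rpool       ONLINE
          sda3      ONLINE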
 
Are you using the current PVE 6.0 official ISO?
Would it be possible to use EFI with this server? Then GRUB wouldn't be used at all, which could circumvent an eventual issue in GRUB's ZFS code being triggered by your setup.



Can you use the Debug mode, and at the second prompt check with
Code:
lsblk
whether the system really sees the disks as the same size?


Yes, I am using the current PVE 6.0 ISO. I tried downloading it again and checked the md5sum of both downloads; they were the same.
47361461dfd9a28f5ab3acf7882beda9 proxmox-ve_6.0-1.iso

Sure, I will use whatever boot method works. I haven't used EFI before; are there docs on setting it up?

The disks are the same (sda, sdb). BTW, I tried installing ZFS RAID0 on both disks and tried booting from each disk to see if that would fix anything. My plan was to get it to a RAID0 first, then manually add the mirror after installation, roughly as sketched below.
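(For reference, the post-install mirroring step would presumably look something like the following; the device names and partition numbers are placeholders, and the boot partitions would also need replicating:)
Code:
# sketch, assuming sda carries the freshly installed rpool and sdb is the blank disk
sgdisk /dev/sda -R /dev/sdb              # replicate the partition table onto sdb
sgdisk -G /dev/sdb                       # randomize the GUIDs on the copy
zpool attach rpool /dev/sda3 /dev/sdb3   # attach the matching partition as a mirror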

Selection_534.png


thanks!!

Geoff
 
In the debug mode you could also check whether the pool was created successfully and whether the real ZFS on Linux module can import it:
Code:
modprobe zfs
zpool import

OK, the debug mode is really helpful. I was able to import the pool, and for some reason it thinks the pool is 7.32T. That would be the size of one of my other disks, as you can see above, but the SSD is only a 64GB disk.

Selection_535.png

Is it possible to just pre-create the mirrored ZFS pool and select that during the install? Or install via the command line, like a debootstrap?

BTW, during the install I was in the advanced options and it was showing 59GB as the hdsize.

thanks,
Geoff
 
Hi!

Sorry if this is not the right thread.
Please help me with my little problem.

I am using Proxmox 5.4 on a Dell T30. The system boots using the BIOS loader.
Recently, I installed Proxmox 6.0 on a new NVMe M.2 drive. Before the installation I changed the boot mode to UEFI.
The installation finished successfully, but after the reboot Proxmox failed to boot with an error mounting 'rpool' (cannot import 'rpool': more than one matching pool).
How can I fix this?

P.S. For now, I have changed the boot mode back to BIOS and Proxmox 5.4 boots successfully.

Thanks.
 
Sorry if this is not the right thread.
Please help me with my little problem.

Yes, it would be better to open a new thread, even if the issue touches similar topics.

The installation finished successfully, but after the reboot Proxmox failed to boot with an error mounting 'rpool' (cannot import 'rpool': more than one matching pool).
That means you had another rpool on some disks; ZFS detects both the new and the old one and doesn't know which to use.
Most of the time this happens if a pool was previously used as RAIDZ and then recreated as RAID10.
The Proxmox VE 6.0 installer was improved a bit in this regard: any ZFS labels on the selected partitions are cleared during installation, which avoids most such issues, so you could check out that release.

Otherwise, in the installer's debug mode, in the second shell (the first has no ZFS yet), load the ZFS module with modprobe zfs,
then clear all ZFS labels from all disks:
Code:
# dangerous, this kills all existing ZFS pools.
for dev in /dev/sd*; do zpool labelclear -f "$dev"; done
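If some pools on other disks should survive, a less destructive variant clears only the devices you plan to reuse (the device names here are just an example):
Code:
zpool labelclear -f /dev/sdb
zpool labelclear -f /dev/sdb1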

but as said, please open a new thread if there are still issues.
 
Is it possible to just pre-create the mirrored ZFS pool and select that during the install? Or install via the command line, like a debootstrap?

No, I'm afraid that's not possible with our installer.

OK, the debug mode is really helpful. I was able to import the pool, and for some reason it thinks the pool is 7.32T. That would be the size of one of my other disks, as you can see above, but the SSD is only a 64GB disk.

But you should be able to see the size of each disk in the installer, so just be sure to select the 64GB ones and de-select all others; then the installer should do the right thing (and allow both modes, as the sizes should then be the same).
 
Sure, I will use whatever boot method works. I haven't used EFI before, is there some docs on setting it up?

Normally, if the server and its firmware support it, one only needs to change a setting in the BIOS/UEFI for the boot mode. After that the ISO should detect it and do the rest automatically.

BTW, during the install I was in the advanced options and it was showing 59GB as the hdsize.
It should look something like this:
Screenshot_2019-10-05 nina - Proxmox Virtual Environment.png
Here I use two NVMe drives as a RAID1 rpool and do not touch the two remaining disks. The size is also visible here; you do not have to go into the advanced options just to check that one. :)
 
The Proxmox VE 6.0 installer was improved a bit in this regard: any ZFS labels on the selected partitions are cleared during installation, which avoids most such issues, so you could check out that release.

Otherwise, in the installer's debug mode, in the second shell (the first has no ZFS yet), load the ZFS module with modprobe zfs,
then clear all ZFS labels from all disks

Thank you very much for the quick answer.
The second rpool was on another disk, and the Proxmox installer had not cleared that other 'rpool'. My previous Proxmox installation on the other disk would be broken if the installer did anything to that disk, so it seems the installer cannot do anything in such a situation.

And I resolved it manually.
First, I imported rpool from the command line by its numeric ID and booted successfully into Proxmox.
After that I imported the old rpool by ID under the name 'rpool2'. Then I exported rpool2 and rebooted.
Now there is only one rpool and the boot succeeds.
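(Roughly the commands involved; the numeric pool ID is a placeholder, the real one is shown by a bare zpool import:)
Code:
zpool import                              # lists both pools with their numeric IDs
zpool import 1234567890123456789 rpool2   # import the old pool by its ID under a new name
zpool export rpool2                       # export it so only one 'rpool' remains active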
 
Normally, if the server and its firmware support it, one only needs to change a setting in the BIOS/UEFI for the boot mode. After that the ISO should detect it and do the rest automatically.


It should look something like this:
Screenshot_2019-10-05 nina - Proxmox Virtual Environment.png
Here I use two NVMe drives as a RAID1 rpool and do not touch the two remaining disks. The size is also visible here; you do not have to go into the advanced options just to check that one. :)


I tried using EFI with the same results. I can't get past the installer thinking they are different-sized disks. Here is the first page with all my disks showing.


Selection_536.png

Then I just select the two Supermicro SSDs.

Selection_537.png

Then I get the "mirrored disks must have the same size" error message.

Selection_538.png


Any thoughts on getting past that?

thanks,
Geoff
 
All of the extra drives were removed, leaving just the two Supermicro SATA DOMs in the system. The error went away and I was able to get past that step.

There appears to be a bug in the installer.

Is this something you want to try to diagnose?

thanks,
Geoff
 
Is this something you want to try to diagnose?
yes :)
How many disk drives were in the system?
(my wild guess is that the error happened because some disks detected past 'Harddisk 11' were still selected to participate in the mirror)
 
There are 12 disks in the system, so you are probably right; it may be selecting that 12th disk. My memory is kind of foggy here because I was trying a lot of different things, but I believe it tried to create the mirror as 8TB (the size of the 12th HDD) instead of 64GB (the size of the SATA DOM).
 
Hmm - I guess we could either select only the first two disks by default - or make the panel scrollable.
Could you please open an enhancement request over at https://bugzilla.proxmox.com so that we can track it there?
Thanks!
 
(my wild guess is that the error happened because there were still some disks detected past 'Harddisk 11' which were still selected for participating in the mirror)

Nope, all detected disks are shown in the combobox selector itself; one would see if there were additional disks not shown as "Harddisk" entries in the HD box. Further, the whole box is already scrollable.

pve-installer-many-disks.png

And the installer in test mode does not complain if I try to replicate a RAID1 with two single disks out of 13 available ones.

So it may be something else. Anyway, we lack error information here; when we complain about non-matching disk sizes, the disks and their sizes should be included in the error message, else one has no idea in such a case.

I'll try to fire up a VM with 2 NVMe and >11 SCSI drives with roughly the sizes you're using - if it does not hit there, I don't know.
 
Hi.

I got it to work.

If you look at my screenshot above, there is no scroll bar, so one drive is hidden. I needed to tab through all of the drives; then it showed the last drive and I could select it. After I tabbed to the last drive, the scroll bar appeared.

thanks,
Geoff
 
If you look at my screenshot above, there is no scroll bar, so one drive is hidden. I needed to tab through all of the drives; then it showed the last drive and I could select it. After I tabbed to the last drive, the scroll bar appeared.

Hmm, OK, strange - I need to see if I can replicate this, as I always had the scroll bar with that many disks :)

But maybe we can avoid this completely by adding a "Deselect All" button, which could be useful anyway with many disks in a host.
 
