PVE 4.4 ZFS rpool 8 drive limit not fixed

mram

Renowned Member
I just erased all my drives and tried to select 10 disks with the new (woohoo!) installer that supports more drives. Unfortunately, I can't seem to create a pool with more than 8 drives no matter what I try. When I select RAIDZ1, all 10 drives are listed, but after boot there are only 8 in the pool. lsblk shows the sdi and sdj drives without any partitions. I did clear the MBR/GPT partitions before install.

Any ideas?

Added - Dell R620 10-bay with an H310 (M1015) flashed to IT mode. Tried 3 times, same result. Going to wipe the drives (SSDs) and try RAIDZ2 next.
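For reference, a minimal wipe sketch (not specific commands from this thread; it assumes the ten SSDs show up as /dev/sda through /dev/sdj and that sgdisk and wipefs are available in the shell used before the installer runs):

# clear GPT/MBR structures and any leftover filesystem/ZFS labels on each
# SSD before re-running the installer (destructive: double-check the names!)
for d in /dev/sd{a..j}; do
    sgdisk --zap-all "$d"   # destroy GPT and protective-MBR data structures
    wipefs -a "$d"          # erase remaining filesystem/RAID/ZFS signatures
done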
 
Still no luck. Wiped all 10 SSDs (Intel S3710 400GB). Tried RAIDZ2 and RAID0. sdi and sdj will not go into the rpool during install, even though they do show up in the list and are pre-selected during install.

root@pve1:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             9.73G  2.81T    19K  /rpool
rpool/ROOT        1.23G  2.81T    19K  /rpool/ROOT
rpool/ROOT/pve-1  1.23G  2.81T  1.23G  /
rpool/data          19K  2.81T    19K  /rpool/data
rpool/swap        8.50G  2.81T    12K  -
root@pve1:~#

root@pve1:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          sda2      ONLINE       0     0     0
          sdb       ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0
          sdf       ONLINE       0     0     0
          sdg       ONLINE       0     0     0
          sdh       ONLINE       0     0     0

errors: No known data errors
root@pve1:~#

root@pve1:~# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 372.6G  0 disk
├─sda1     8:1    0  1007K  0 part
├─sda2     8:2    0 372.6G  0 part
└─sda9     8:9    0     8M  0 part
sdb        8:16   0 372.6G  0 disk
├─sdb1     8:17   0 372.6G  0 part
└─sdb9     8:25   0     8M  0 part
sdc        8:32   0 372.6G  0 disk
├─sdc1     8:33   0 372.6G  0 part
└─sdc9     8:41   0     8M  0 part
sdd        8:48   0 372.6G  0 disk
├─sdd1     8:49   0 372.6G  0 part
└─sdd9     8:57   0     8M  0 part
sde        8:64   0 372.6G  0 disk
├─sde1     8:65   0 372.6G  0 part
└─sde9     8:73   0     8M  0 part
sdf        8:80   0 372.6G  0 disk
├─sdf1     8:81   0 372.6G  0 part
└─sdf9     8:89   0     8M  0 part
sdg        8:96   0 372.6G  0 disk
├─sdg1     8:97   0 372.6G  0 part
└─sdg9     8:105  0     8M  0 part
sdh        8:112  0 372.6G  0 disk
├─sdh1     8:113  0 372.6G  0 part
└─sdh9     8:121  0     8M  0 part
sdi        8:128  0 372.6G  0 disk
sdj        8:144  0 372.6G  0 disk
sr0       11:0    1  1024M  0 rom
sr1       11:1    1   498M  0 rom
sr2       11:2    1 521.8M  0 rom
sr3       11:3    1 510.2M  0 rom
zd0      230:0    0     8G  0 disk [SWAP]
root@pve1:~#
 
Sorry, there is a bug in the ISO (I missed a place where the 8-disk limit was hardcoded). Will be fixed soon!
 
Thanks for the update. I will look for the fix in the next ISO release.
 
If you are using RAID-10 (striped mirrors), you can just add the two skipped drives as a mirrored vdev; otherwise you'll have to wait ;) The patch is on the list already, but the ISO needs to be rebuilt.
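A minimal sketch of that suggestion, assuming a striped-mirror (RAID-10) rpool and that sdi/sdj are the two skipped disks from the lsblk output above (it does not apply to the RAIDZ layouts tried earlier):

# add the two skipped disks to the root pool as an extra mirror vdev
# (-f may be needed if old labels are still detected on them)
zpool add rpool mirror /dev/sdi /dev/sdj

# confirm the new mirror vdev shows up
zpool status rpool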
 
Thanks for the fixed ISO. I am using the new build to install. I checked the partitions and it appears the installer does now create a pool with all 10 drives. At least that part is fixed.

I am having boot problems. I have not been able to get the server to boot into Proxmox with 10 drives in the rpool. The Dell R620 just seems to hang on BIOS boot to the first SSD. The only drives I have in the server at the moment are the 10 SSDs on an H310 flashed to IT mode. Reinstalling in UEFI mode does not seem to offer any boot option after install. BIOS and firmware are up to date.

I welcome any suggestions. Otherwise it is back to the PERC H710 and RAID5/6.

Added - This is only a homelab box for testing.
Added 2 - I am not sure the 10 drives are the root cause of the boot issues. After pulling 2 drives and reinstalling on 8 drives, it still hangs on boot. The testing continues...
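One hedged way to narrow this down (not a step taken in the thread): check which of the disks actually carry the small BIOS boot partition that GRUB needs, since the BIOS will only ever read whichever disk it picked as the boot device. Device names follow the lsblk output above; sgdisk (from the gdisk package) is assumed to be installed:

# print the partition table of every pool member; disks that GRUB can boot
# from carry an extra ~1 MB BIOS boot partition (GPT type EF02)
for d in /dev/sd{a..j}; do
    echo "== $d =="
    sgdisk -p "$d"
done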
 

A lot of BIOS implementations and RAID controller firmwares have pretty arbitrary limits on the number of boot devices. You might be able to trick such systems into booting (by selecting the "right" disks as boot disks so that Grub sees an incomplete, but bootable zpool), but obviously such a setup is not something that can be considered production ready.
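If someone wanted to try that trick, a rough sketch might look like the following. It assumes the chosen disks already carry the BIOS boot partition the installer creates, and that the pool has enough redundancy for Grub to read it even when it cannot see every member; grub-install and update-grub are the standard Debian tools on PVE:

# (hypothetical) put GRUB on the two disks that are listed as boot devices
# in the BIOS boot order, so either of them can start the loader
grub-install /dev/sda
grub-install /dev/sdb

# refresh the GRUB configuration afterwards
update-grub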
 
