Unable to create ZFS root pool

thans

Member
Oct 7, 2021
I have got a server with 24 SSD disks.
I want to install Proxmox 7.0-2 with RAIDZ-2.
I selected RAIDZ-2.
The installation failed with the error below.
What is the solution to this problem?
unable to create zfs root pool
 
Are you sure you want to do that?
That means PVE will install an ESP/bootloader/boot partition on every one of the 24 disks. That makes things much more complicated if a drive fails, because you would need to manually partition the new drive and copy over the bootloader/ESP before you can actually replace the ZFS partition and resilver the pool. And I'm not sure how well ZFS spare drives work here, because you are not using the complete disk for ZFS but only a partition.
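For reference, the replacement procedure would look roughly like the sketch below (based on the steps in the PVE admin guide for changing a failed bootable device; the pool name rpool and the by-id device names are placeholders for your actual disks):

# Copy the partition table from a still-healthy bootable disk to the
# replacement disk, then randomize the GUIDs so they don't clash:
sgdisk /dev/disk/by-id/HEALTHY-DISK -R /dev/disk/by-id/NEW-DISK
sgdisk -G /dev/disk/by-id/NEW-DISK

# Swap the failed ZFS partition (partition 3 on a default PVE install)
# and let the pool resilver:
zpool replace -f rpool /dev/disk/by-id/FAILED-DISK-part3 /dev/disk/by-id/NEW-DISK-part3

# Make the new disk bootable again (partition 2 is the ESP by default):
proxmox-boot-tool format /dev/disk/by-id/NEW-DISK-part2
proxmox-boot-tool init /dev/disk/by-id/NEW-DISK-part2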
It is also generally not recommended to use that many drives in a single raidz2, because of the resilvering time, performance and reliability. It would make more sense to use 2 or 3 dedicated drives in a mirror as your PVE system disks and the remaining disks as their own pool, just for VM storage. Maybe some smaller raidz2s of 6 or 10 disks each, striped together.
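A minimal sketch of the second part, creating the VM storage pool from the shell after installing PVE on a small mirror (pool name, ashift and disk names are assumptions, not your actual devices):

# Two striped 6-disk raidz2 vdevs as a dedicated VM storage pool:
zpool create -o ashift=12 tank \
  raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh \
  raidz2 /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn

# Register it as a PVE storage:
pvesm add zfspool tank --pool tank

In practice you would use stable /dev/disk/by-id/ paths instead of sdX names so the pool survives device reordering.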
 
Dear Dunuin,
Thank you for your comment.
I used fewer disks and was able to install Proxmox with ZFS. Everything is okay.
Best wishes...
 
Also remember to increase the volblocksize before creating your first virtual disk, or you will possibly waste most of that capacity on padding overhead. With the default volblocksize of 8K you would lose the capacity of 16 of your 24 drives to parity+padding overhead. What volblocksize to use depends on your ashift, pool layout and number of drives: with a 4x 6-disk raidz2 you would, for example, need a volblocksize of 64K or higher; with a 2x 10-disk raidz2, 128K or higher; with a single 18-disk raidz2, 64K; and so on. There is a useful table online showing the parity+padding losses.
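In PVE the volblocksize of newly created virtual disks is taken from the storage configuration, so a minimal sketch would be (assuming your ZFS storage is named local-zfs; existing zvols keep their old volblocksize, only disks created afterwards are affected):

# Use a 64K volblocksize for new zvols on this storage:
pvesm set local-zfs --blocksize 64k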
And whether that works for you really depends on the workload you want to run on that server. Write/read amplification, for example, would be really bad if you ran some MySQL DBs doing 16K sync writes on a pool that uses a volblocksize greater than 16K.
So if you need high IOPS or lower blocksizes, a striped mirror would be a better choice.
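For comparison, a striped mirror (RAID10-like) pool would be created roughly like this; pool and disk names are again placeholders:

# Each additional mirror vdev adds IOPS; one disk per vdev may fail
# without data loss:
zpool create -o ashift=12 fastpool \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh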
 
