Create ZFS fails on GUI with "unknown" - on commandline "raidz contains devices of different sizes"

zaphyre

Member
Oct 6, 2020
Hi! I tried creating a ZFS pool (raidz2) and a datastore via the GUI. It just fails with an unknown error. :oops:

All disks are initialized with GPT and should be "ready" for ZFS. After issuing the corresponding command via the shell, proxmox-backup-manager disk zpool create backupz2 --add-datastore --devices sdc,sdd,sde,sdf,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn --raidlevel raidz2, I get an error saying: "raidz contains devices of different sizes"

Code:
proxmox-backup-manager disk zpool create backupz2 --add-datastore --devices sdc,sdd,sde,sdf,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn --raidlevel raidz2
create RaidZ2 zpool 'backupz2' on devices 'sdc,sdd,sde,sdf,sdg,sdh,sdi,sdj,sdk,sdl,sdm,sdn'
# "zpool" "create" "-o" "ashift=12" "-m" "/mnt/datastore/backupz2" "backupz2" "raidz2" "sdc" "sdd" "sde" "sdf" "sdg" "sdh" "sdi" "sdj" "sdk" "sdl" "sdm" "sdn"
TASK ERROR: command "zpool" "create" "-o" "ashift=12" "-m" "/mnt/datastore/backupz2" "backupz2" "raidz2" "sdc" "sdd" "sde" "sdf" "sdg" "sdh" "sdi" "sdj" "sdk" "sdl" "sdm" "sdn" failed - status code: 1 - invalid vdev specification
use '-f' to override the following errors:
raidz contains devices of different sizes

So, the sizes do vary a bit. How do I deal with this? Is it safe to force the creation with "-f"?

Thanks for any help with this!


[Attachment: "005 sizes Screenshot 2021-11-30 163816.jpg" - screenshot showing the individual disk sizes]
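For reference, the reported sizes can also be compared on the shell, for example (adjust the device range to your disks):

Code:
lsblk -b -o NAME,SIZE,MODEL /dev/sd[c-n]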
 
By forcing it with "-f", does it just use the smallest of the disk sizes and otherwise work as expected? This is more of a safety switch for scenarios where one would accidentally mix completely different (wrong) disks, isn't it?
 
Thanks a lot!
The error comes from the underlying zfs commands, not from proxmox-backup-manager, doesn't it? And since proxmox-backup-manager does not understand "-f", should I use the zfs commands directly?
 
Again, I am missing an important piece here:
Since I cannot use proxmox-backup-manager with an -f option to work around the different disk sizes (see above), I have to create the ZFS pool by "hand" with zpool create -f -o ashift=12 backupz2 raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn, so I can't benefit from the --add-datastore option to automagically create the corresponding datastore with proxmox-backup-manager.

So, now that I have a ZFS pool called "backupz2", what is the "Backing Path"?
proxmox-backup-manager datastore create my-store-on-backupz2 /backup/disk1/store1 <- What is the backing path (with ZFS)?
 
You can check that with: zfs get mountpoint backupz2. But I would create another dataset on that pool first, instead of using the root path of the pool as the datastore. That way you keep the option to create multiple datastores on that pool in the future.
So I would first create a dataset like this: zfs create backupz2/datastore1

The path for your datastore should then be "/backupz2/datastore1".
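Put together, the manual steps could look roughly like this (a sketch; the dataset name "datastore1" and the datastore name are just examples):

Code:
# see where the pool got mounted
zfs get mountpoint backupz2
# create a dedicated dataset for the datastore
zfs create backupz2/datastore1
# register it as a PBS datastore
proxmox-backup-manager datastore create datastore1 /backupz2/datastore1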
 
For better performance I would also enable relatime and disable compression (PBS already compresses on the file level, so you don't need to do that again on the block level):

zfs set atime=off relatime=on compression=none backupz2
 
No, don't do it that way; for relatime to work, atime must be set to *on*.

see the quote from the zfsprops(7) manpage:

relatime=on|off
Controls the manner in which the access time is updated when atime=on is set. [...]
 
Sorry, you are right. I always mix that up with how relatime works for ext4.

So correct would be: zfs set atime=on relatime=on compression=none backupz2
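To double-check, the resulting values can be queried afterwards, for example:

Code:
zfs get atime,relatime,compression backupz2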
 
Okay, thanks for the updates on the ZFS options. I was a bit confused regarding compression: I thought PBS backups are already zstd-compressed, so ZFS compression on top might be redundant overhead, but the web-based GUI wizard for ZFS on PBS suggests compression=on as a default, which confused me. Then again, it also defaults to "Single Disk", so it's not really a suggestion for a recommended setup - my fault.
 

zfs set atime=on relatime=on compression=off backupz2 worked ("off" instead of "none").
 
Not sure about PBS, but if you use PVE's web UI for creating ZFS pools, it won't optimize anything. It will just use the OpenZFS default values, whether they make sense or not, no matter what your pool looks like. The volblocksize, for example, will always be 8K, but as soon as you use any kind of raidz1/2/3 you want a much bigger volblocksize, because the default 8K can waste up to half of your storage through padding overhead. So ZFS with PVE is not something that should be used plug & play. You always need to configure it manually so it fits your needs and pool layout before using it, especially because many of the ZFS options can't be changed later and can only be set at creation.
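On PVE, for example, the block size used for newly created zvols can be set on the storage itself; a sketch, assuming a zfspool storage entry named "local-zfs" (16k is only an example value, the right one depends on your raidz layout):

Code:
# only affects zvols created after the change; existing zvols keep their volblocksize
pvesm set local-zfs --blocksize 16k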
 

I'll have to read a lot then. Starting with volblocksize.
 
Volblocksize only affects zvols (so, for example, all virtual disks of VMs run by PVE). For datasets (which is what LXCs on PVE and your datastore on PBS should use), the "recordsize" is used instead.

For volblocksize and padding overhead I can recommend this blog post by one of the ZFS engineers, which explains why there is padding overhead and how to choose a good volblocksize for your setup to keep it as low as possible.
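To illustrate the difference, both are ordinary ZFS properties you can query; a sketch (the zvol name is just a placeholder for one of your VM disks):

Code:
# datasets (PBS datastores, PVE container volumes) use recordsize
zfs get recordsize backupz2/datastore1
# zvols (PVE VM disks) use volblocksize, which is fixed once the zvol is created
zfs get volblocksize rpool/data/vm-100-disk-0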
 
