Problem with volblocksize

MisterY
Hi,
I installed a new server and set up a new zfs pool (hdds) with ashift=12.
Now I get this error when I try to restore a backup of a VM to a zfs storage directly:
TASK ERROR: zfs error: cannot create 'zfstank/vm-100-disk-0': 'volblocksize' must be power of 2 from 512B to 1M

If I restore it to a folder on the zpool it works fine.
The same happens if I want to create a new VM.
But with a CT it works without a problem on the pool itself.

What is wrong?
 
Hi,
in the GUI, at the cluster/datacenter level, the storage configuration lets you set the block size of your ZFS storage. This size is chosen for new "disks", as in your example.
If the backup does not fit, it is often because the block size is too big.
Try 4k.
But be warned, that may lead to bad performance.
(iirc, it's been a long time since I worked with Proxmox & ZFS intensively)
 
Hi,
did you configure a block size in your storage configuration? I.e., check for blocksize in your /etc/pve/storage.cfg, or edit the storage via the GUI.
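For reference, a ZFS storage entry in /etc/pve/storage.cfg looks roughly like this (the storage id and option values below are only an example; the pool name is taken from the error message above):

zfspool: zfstank
        pool zfstank
        content images,rootdir
        blocksize 8k
        sparse 1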
 
Blocksize is 12k. It was created with the default options in the GUI.
 
ZFS requires it to be a power of 2, so you'll have to choose 8k or 16k instead. The default in the GUI should be 8k.
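To illustrate, a rough sketch from the shell (the storage id is a placeholder; iirc pvesm set also accepts the blocksize option):

# any non-power-of-2 value fails at zvol creation time, e.g.:
#   zfs create -V 32G -o volblocksize=12k zfstank/vm-100-disk-0
#   cannot create 'zfstank/vm-100-disk-0': 'volblocksize' must be power of 2 from 512B to 1M

# set a valid value on the storage; this only affects newly created disks
pvesm set <your-storage-id> --blocksize 8k

# existing zvols keep their volblocksize (it cannot be changed after creation)
zfs get volblocksize zfstank/vm-100-disk-0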
 
What is the value of the Block Size field when you go to Datacenter > Storage > Add > ZFS? And which version are you running, i.e. what does pveversion -v show?
 
@Fabian_E Is the type of ZFS raid relevant to the value that needs to be set for the volblocksize?

For instance, I have a raidz2 (4 drives), and another member advised me:
<<Also keep in mind that you need to increase your volblocksize before creating your first VM, or you will waste a lot of capacity. Look at this table here for raidz2. If you use the default volblocksize of 8K you will lose 67% of your raw capacity (and can only use 1/3 of your capacity) with an ashift of 12. You need a volblocksize of 24K or 256K if you only want to lose half of your raw capacity. You won't see this wasted space directly. It will still show you 8TB available, but everything you write will consume 150% of the space it should. Because everything is 50% bigger, due to bad padding when the volblocksize is too small, your 8TB will be full after writing 5.33TB. And with ZFS you should never fill up your pool more than 80 or 90%. Over 80% it will get slow, and over 90% it will switch into panic mode. So right now, with the default volblocksize, you can only use 4.266TB if you limit the quota of the pool to 80%.>>

There is a difference here about the best number to use for the volblocksize. You said 16k, while the message above advised 24K or 256K... Isn't there a value that lets you use almost all, or all, of the capacity you've set up with raidz?

Any rules of thumb for the ashift value, i.e. which factors does it depend on? Does raidz2 need something different than 12? (What is the effect of the ashift value on the pool?)

Thank you

PS: How do I set up a quota for the storage?
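Regarding the quota, my reading of the quoted advice is that you cap the pool's top-level dataset at 80% of the space it reports, e.g. (a sketch only, the 8TB and 150% numbers come from the quote above):

# the pool reports 8TB free, but with ~150% write amplification from padding
# that is only about 8TB / 1.5 ≈ 5.33TB of guest data before it is full;
# capping at 80% leaves roughly 5.33TB * 0.8 ≈ 4.27TB of usable guest data
zfs set quota=6.4T zfstank    # 80% of the reported 8TB
zfs get quota zfstank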
 
Best to read the original article where the table comes from. The volblocksize needs to be a power of 2, so 24k is not possible. The optimal volblocksize depends on hardware/workload, so best to do some tests and see what works for you.
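If it helps, one way to run such tests is to create a throw-away zvol per candidate volblocksize and benchmark it with fio (the names, sizes and fio parameters below are only placeholders for a sketch):

# create a test zvol with the candidate block size
zfs create -V 10G -o volblocksize=16k zfstank/bstest16k

# random-write test roughly matching a VM workload; repeat with other
# volblocksize/--bs combinations and compare IOPS and bandwidth
fio --name=bstest --filename=/dev/zvol/zfstank/bstest16k --rw=randwrite \
    --bs=16k --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 \
    --time_based --group_reporting

# clean up afterwards
zfs destroy zfstank/bstest16k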
 
Hi again. I've read the article, and since I posted in this thread yesterday I've kept reading a ton of stuff which only led to confusion. There are plenty of users who need good theoretical advice for their hardware and needs (which means providing that information first) and who would benefit from the advice of a more experienced user who has already had the time to run tests on different disks, raid levels, etc.
The bottom line is that not all IT people can quit their jobs and start mastering every Proxmox (irrelevant here) and ZFS rule, spending weeks or months just to arrive at a conclusion. That is what forums are for, not reinventing the wheel. My opinion, without attitude or anything.

I'm starting to see why most people are still afraid to use ZFS and prefer LVM, which is far better documented and has several years in production.
 
