Create a 100% sized disk for a VM

vince120

New Member
Jan 7, 2022
Hi,

I just installed Proxmox and had to destroy my mdraid (unsupported in Proxmox) left over from my previous Ubuntu install.

After wiping my 3 disks, I went to the ZFS section and created a ZFS pool with them:
proxmox0.png

When I try adding a disk to my VM, I select the pool and can see the RAID5 with three 3 TB disks; it displays a little less than 6 TB, which is normal: proxmox1.png

But when I try to add it, I get an "out of space" error: proxmox2.png
The disk is 5.85 TB, so I entered 5850; I also tried 5700, 5500, and even 5000, and every time it displays the same message.

How can I solve this? And more generally, how can I give a VM a disk using 100% of the size of the pool or LVM?

Thanks in advance.
 
With a raidz you always get padding overhead when using too small a volblocksize. With the default 8K volblocksize, ashift=12, and 3x 3 TB disks in a raidz1, you basically lose 33% of your raw storage to parity and an additional 17% of your raw capacity to padding overhead. You can't see this lost capacity directly, because it shows up as every virtual disk simply being 33% bigger. On top of that, a ZFS pool should always keep 10-20% of free space, because that is needed for Copy-on-Write (CoW). So you should only use about 80% of the pool if you don't want it to get slow and fragmented.

So actually you now only got this:
9 TB raw capacity (3x 3 TB)
-33% (-3 TB) parity data
-17% (-1.5 TB) padding overhead
----------------------------------------
4.5 TB
-20% (-0.9 TB) that should be kept free
----------------------------------------
3.6 TB (or 3.27 TiB) of usable capacity for zvols

And also keep in mind that PVE uses TiB, not TB, when creating virtual disks. So you should create a virtual disk with no more than 3.27 TiB.
If you increase the volblocksize to 16K you won't get that padding overhead and 4.36 TiB would be usable.
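The arithmetic above can be sketched as a quick calculation. The 1/3 parity and ~1/6 padding fractions are the approximate figures for this specific 3-disk raidz1 with ashift=12, not general constants:

```shell
#!/bin/sh
# Rough usable-capacity estimate for 3x 3 TB disks in raidz1 (ashift=12),
# following the breakdown above. The fractions are approximations.
RAW_TB=9   # 3 disks x 3 TB raw

# Default 8K volblocksize: 1/3 lost to parity, ~1/6 to padding overhead,
# then only fill the pool to 80% to leave room for Copy-on-Write.
awk -v r="$RAW_TB" 'BEGIN {
    usable = r * (1 - 1/3 - 1/6) * 0.8
    printf "8K volblocksize:  %.2f TB usable\n", usable
}'

# 16K volblocksize: the padding overhead disappears, only parity remains.
# Also convert to TiB, since PVE sizes virtual disks in TiB.
awk -v r="$RAW_TB" 'BEGIN {
    usable_tb  = r * (1 - 1/3) * 0.8
    usable_tib = usable_tb * 1e12 / (1024 ^ 4)
    printf "16K volblocksize: %.2f TB (%.2f TiB) usable\n", usable_tb, usable_tib
}'
```

This reproduces the ~3.6 TB and ~4.36 TiB figures from the posts above (rounding in the last decimal may differ slightly).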
 
Thanks for the quick reply.

Where can I specify the 16K value for volblocksize?

Which ashift value should I use?

PS: Even 4.36 TiB seems low to me. I had ~5 TB with mdraid with the same disks.
 
Thanks for the quick reply.

Where can I specify the 16K value for volblocksize?
You can set the pool's volblocksize using the WebUI: Datacenter -> Storage -> YourPool -> Edit -> Block Size
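If you prefer the CLI, the same setting can be applied with pvesm. The storage name "YourPool" and the zvol name below are placeholders for your own:

```shell
# Set the block size used for newly created zvols on that storage
# ("YourPool" is a placeholder for your storage name):
pvesm set YourPool --blocksize 16k

# After recreating a virtual disk, verify its volblocksize
# (the zvol name is a placeholder):
zfs get volblocksize YourPool/vm-100-disk-0
```

Note that this only affects newly created disks; existing zvols keep the volblocksize they were created with.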
Which ashift value should I use?
That depends on your disks. If they have a 4K physical sector size, you can't go lower than ashift=12. And the bigger your ashift, the bigger your volblocksize needs to be.
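If you're unsure about your disks' sector size, you can check it from the shell; the device and pool names below are placeholders:

```shell
# Show the logical and physical sector sizes of the pool's disks
# (device names are placeholders for your own):
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda /dev/sdb /dev/sdc
# 512 B physical sectors  -> ashift=9 is possible (12 is still common advice)
# 4096 B physical sectors -> ashift=12 minimum

# Check which ashift an existing pool was created with:
zpool get ashift YourPool
```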
PS: Even 4.36 TiB seems low to me. I had ~5 TB with mdraid with the same disks.
That's because with mdraid you don't need to keep the 20% free: it is way simpler and isn't Copy-on-Write. In the best possible case you get 6 TB; 80% of that is 4.8 TB, and 4.8 TB is just 4.36 TiB.

So if you want to use raidz1, increase the pool's blocksize to 16K and create that virtual disk with up to 4.36 TiB.

By the way... mdadm isn't officially supported by PVE, but that doesn't mean it won't work. It's working totally fine here; just keep in mind that mdadm isn't as secure or reliable as ZFS (no bit rot protection, no snapshots, no replication, and so on).
 
You can set the pool's volblocksize using the WebUI: Datacenter -> Storage -> YourPool -> Edit -> Block Size
Thanks

Thanks to both of you for the quick replies.

I managed to set a 16K volblocksize and mounted the volume in my VM. I'm now restoring the data that I temporarily moved before wiping my mdraid.
 
Thanks

Thanks to both of you for the quick replies.

I managed to set a 16K volblocksize and mounted the volume in my VM. I'm now restoring the data that I temporarily moved before wiping my mdraid.

I have looked at the GUI and the ZFS menu; can you tell me where you were able to set the blocksize to 16K?

Stuart
 
Ah, wonderful! It was grayed out so I never even clicked on it... but I was able to type in 16K.
Does it matter that I'm doing this after having created the pool, or should I delete the pool and recreate it with 16K?
 
You don't have to recreate the pool, but you will need to destroy all your VMs' virtual disks and recreate them (a backup + restore will work for that too).
 
