3x 6TB in RAIDZ1 and Proxmox 100% full after copying a 4.7TB file

myz-rix

Member
Jan 12, 2020
Hi

It's not the first time I've seen this problem, but the previous times it wasn't very important, so I let it go.

Today I have a machine with 3 disks of 6TB in RAIDZ1, so I should have about 11TB free on the ZFS pool.

I created several small VMs and one with a 10.2TB disk.
The sum of the VM disks does not exceed the ~11TB free.

I copied 4.7TB onto the 10.2TB VM disk and my ZFS pool is saturated; worse, the Proxmox host's disk space is also saturated, and as a result the server crashed.

I don't understand how copying 4.7TB can saturate a 10.2TB VM disk, and worse, how it can take down the Proxmox host by filling it up too.

Attachment: Capture d’écran_2022-01-04_21-13-32.png

myz-rix
my pool is exactly 11.6T,
10T < 11.6T
What's the problem? Why, when I copy 4.7T onto this disk, do I saturate it, and especially why is the Proxmox host also saturated, not just the vmdisk?

It's unthinkable on this type of system; it would mean that to take down a Proxmox server, it's enough to fill 50% of a single VM disk?
 

ales

Member
Jul 26, 2020
Try to perform an fstrim inside the VM with the 10TB disk.
But I suspect this is due to the ZFS block size and/or how data redundancy works on RAIDZ1, which is a bit complicated for me to explain (write amplification and so on).
 

Dunuin

Famous Member
Jun 30, 2020
Germany
Google for 'volblocksize' and padding overhead. If you have 3x 6TB as RAIDZ1 with volblocksize=8k and ashift=12, you only get 9TB of usable capacity for zvols, of which only 7.2TB should actually be used, because 10-20% of a ZFS pool should always be kept free or ZFS will get slow and finally stop operating. And PVE uses TiB, not TB: 7.2TB is just 6.54TiB. So unless you increase the volblocksize to at least 16k, your VMs only get 6.54 TiB of usable space.
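The arithmetic above can be sketched with a rough model of RAIDZ padding (a simplification of the allocation rule described in the Delphix post; the 4 KiB sector size follows from ashift=12, and the disk sizes are taken from this thread):

```python
SECTOR = 4096          # ashift=12 -> 4 KiB sectors
RAW_TB = 3 * 6         # three 6 TB disks, 18 TB raw

def raidz1_usable_tb(raw_tb, volblocksize, nparity=1):
    """Rough usable zvol capacity on RAIDZ1, accounting for parity and padding."""
    data = volblocksize // SECTOR              # data sectors per block
    stripe = data + nparity                    # plus one parity sector
    # ZFS pads each allocation up to a multiple of (nparity + 1) sectors
    padded = -(-stripe // (nparity + 1)) * (nparity + 1)
    return raw_tb * data / padded

print(raidz1_usable_tb(RAW_TB, 8 * 1024))    # volblocksize=8k  -> 9.0 TB usable
print(raidz1_usable_tb(RAW_TB, 16 * 1024))   # volblocksize=16k -> 12.0 TB usable
print(9.0 * 0.8 * 1e12 / 2**40)              # keep 20% free, in TiB -> ~6.55
```

With 8k blocks, every 2 data sectors drag along 1 parity sector plus 1 padding sector, so only half the raw 18 TB is usable; with 16k blocks the ratio improves to 4 data sectors out of 6.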

Here is a blog post explaining it in detail:
https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz

Edit:
If you increase the volblocksize to 16K and destroy and recreate all zvols, you should be able to use around 9.6TB (8.7 TiB).
 

LnxBil

Famous Member
Feb 21, 2015
Saarland, Germany
In addition to @Dunuin's good answer, I can only recommend NOT using RAIDZ* at all; just go with striped mirrors, which behave as you would expect with respect to predicted space usage. In all RAIDZ* setups, you need to adapt the volblocksize and ashift to a best-working common ground. Luckily, you have a comparatively good setup with 3 disks, volblocksize=8k and ashift=12; it could be much worse, and the space waste could be significantly higher with more vdevs and more devices. RAIDZ* is very good for backups, where you can use a huge volblocksize or recordsize, but not for VMs. It's also not fast for VM usage.
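For contrast, a quick sketch of why mirrors are predictable (the 4x 6 TB pool here is hypothetical, since striped mirrors need an even number of disks; it is not the poster's hardware):

```python
RAW_TB = 4 * 6   # hypothetical 4x 6 TB striped mirror (2 mirrored pairs)

def mirror_usable_tb(raw_tb, copies=2):
    # A 2-way mirror stores every block twice: usable space is raw/copies,
    # independent of volblocksize -- no parity stripes, no padding sectors.
    return raw_tb / copies

for vbs_kib in (8, 16, 64):   # volblocksize does not change the result
    print(f"volblocksize={vbs_kib}k -> {mirror_usable_tb(RAW_TB)} TB usable")
```

Every volblocksize prints the same 12.0 TB, which is exactly the "behaves like you would expect" property: capacity planning reduces to raw/2.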
 
