ZFS RAIDZ Pool tied with VM disks acts strange

MeeM_Kade

New Member
Aug 3, 2024
Hi, I recently got hold of a second server that I have put PVE on, and I also plan to run PBS on it. (PBS itself isn't really relevant here, but I'll refer to it since it's the VM in question; long story short, I didn't want to run PBS bare metal because the hardware would be wasted if it only ran PBS.)
The server in question is a Dell PowerEdge R720 with 2.5" drive bays, all filled.
The current disk configuration is 8x 240.06 GB SSDs and 8x 960.20 GB SSDs.
I originally planned to put them all in one big RAIDZ pool, but I realized that if I did so I would be wasting space, because the 960s would effectively act as 240s.
So I took all the 960s and made them into their own RAIDZ pool (separate from the 240s), did the same with the smaller SSDs, and ended up with two RAIDZ pools.
I then went to my PBS VM to allocate every last drop of storage from the pbs_storage pool (the 960 SSDs), and when I entered the pool's capacity in GB, it said the pool was out of storage.
I then entered 4000 GB and saw the usage of the pool was about 5.7 TB (IIRC), which seems abnormal to me, since I was expecting it to use about 4 TB, and I have no idea why it shows an incorrect usage.
I found that I could only make the VM disk 4607 GB (if I'm remembering correctly) out of the pool's actual size of 6720 GB.
Why is this happening? Is there a way to correct this? The discrepancy between VM disk size and capacity/usage doesn't seem to happen on RAID10, but I don't want to use RAID10 because I would only have 3.70 TB of space available out of the SSDs' total 7681 GB, whereas RAIDZ gives me 6720 GB. Is RAID10 the only way to fix this? I'm not sure what to do from here.
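For anyone who wants to check the numbers, something like this on the host should show the raw vs. usable vs. actually consumed space (pool name pbs_storage as above; the zvol name vm-100-disk-0 is just an example, not my real disk):

Code:
    # Raw pool size including parity (what "zpool list" reports)
    zpool list pbs_storage

    # Usable space after parity, as the dataset layer sees it
    zfs list -o name,used,avail pbs_storage

    # Space a single VM disk (zvol) actually consumes, including RAIDZ
    # padding and its refreservation (dataset name is a placeholder)
    zfs list -o name,volsize,used,refreservation pbs_storage/vm-100-disk-0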
 
> I then went to my PBS VM to allocate every last drop of storage from the pbs_storage pool (the 960 SSDs), and when I entered the pool's capacity in GB, it said the pool was out of storage.

You can't do that. If you want a VM to have complete control over ZFS storage, pass through the disk(s) or the controlling adapter.

Remember that ZFS needs free space for overhead/housekeeping, and you need to leave some free space for snapshots.

I have no idea what limits Proxmox puts on the ZFS space available to VMs (it might be a percentage), but best practice is usually to keep every pool's used space under ~85-90% so you don't hit reduced performance / fragmentation.
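If you want to enforce that headroom rather than just remember it, one way is a quota on the pool's root dataset (the 6T value is only an example, roughly 90% of the ~6720 GB you mentioned; adjust to whatever zfs list reports for your pool):

Code:
    # Cap total usage of the pool at ~90% of its usable space (example value)
    zfs set quota=6T pbs_storage

    # Verify
    zfs get quota pbs_storage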
 
I cannot pass through the controller or the disks, because if I do, I won't be able to create the second pool with the 8x 240 GB SSDs; they would also end up in PBS, and the controller can't be used by the VM and the main Proxmox hypervisor at the same time.
 
RAIDz probably does not have the space you think it has, and it is telling you so. Due to padding and metadata overhead, people on this forum are often disappointed by the usable space of a RAIDz1/2/3. This is a common ZFS thing.

(d)RAIDz1/2/3 is also often disappointing for running VMs on, as people expect hardware RAID5/6 performance, but due to the padding, checksums and additional features of ZFS it delivers far fewer IOPS. A stripe of mirrors works better. There is more than one thread about this on this forum.
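If you do stay on RAIDz for VM disks, part of the space loss comes from the small default volblocksize of the zvols. A quick way to check it (dataset and storage names below are just examples, assuming your PVE storage entry matches the pool name):

Code:
    # Block size an existing VM disk (zvol) was created with
    zfs get volblocksize pbs_storage/vm-100-disk-0

The default for newly created disks can be raised via the blocksize option of the zfspool storage in /etc/pve/storage.cfg, something like:

Code:
    zfspool: pbs_storage
            pool pbs_storage
            content images,rootdir
            blocksize 16k

Only disks created after the change pick up the new value, and a larger volblocksize has its own trade-offs.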
 
Please be aware that RAIDz is not ideal in terms of VM performance; bulk data is a different story, though.
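If you'd rather measure than take our word for it, a quick random-write fio run inside a test VM on each layout shows the IOPS gap. The parameters below are only an example, not a tuned benchmark, and note it writes a 4 GiB test file in the current directory:

Code:
    fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
        --size=4G --runtime=60 --time_based --direct=1 \
        --iodepth=32 --numjobs=1 --group_reporting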
 
> RAIDz probably does not have the space you think it has, and it is telling you so. Due to padding and metadata overhead, people on this forum are often disappointed by the usable space of a RAIDz1/2/3. This is a common ZFS thing.
>
> (d)RAIDz1/2/3 is also often disappointing for running VMs on, as people expect hardware RAID5/6 performance, but due to the padding, checksums and additional features of ZFS it delivers far fewer IOPS. A stripe of mirrors works better. There is more than one thread about this on this forum.
So does this mean I should just use RAID10 for VMs?
 
I think I'm going to try passing through the 960 disks via the /dev/disk/by-id method (I have used this to pass disks through to my TrueNAS VM on my main server), make a RAID10 pool out of the 240s as the VM storage, and build a RAIDZ out of the passed-through disks inside PBS (since PBS has the option for RAID datastores). Hopefully that should work better.
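For anyone who finds this later, this is roughly what I mean; the VM ID (100), disk IDs and pool name here are placeholders, not my real ones:

Code:
    # Hand one of the 960s to the PBS VM by its stable ID
    # (find yours with: ls -l /dev/disk/by-id/)
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_960_SERIAL

    # Striped mirrors ("RAID10") out of the 240s for VM storage
    zpool create -o ashift=12 vmstorage \
        mirror /dev/disk/by-id/ata-240_A /dev/disk/by-id/ata-240_B \
        mirror /dev/disk/by-id/ata-240_C /dev/disk/by-id/ata-240_D \
        mirror /dev/disk/by-id/ata-240_E /dev/disk/by-id/ata-240_F \
        mirror /dev/disk/by-id/ata-240_G /dev/disk/by-id/ata-240_H

After that, the new pool still has to be added as VM storage, either in the GUI under Datacenter > Storage or with something like "pvesm add zfspool vmstorage -pool vmstorage".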
 
Please note that for TrueNAS it's recommended to pass through a dedicated HBA for the disks.
I am aware (because of the SMART data), but it's a similar situation on my main server: I couldn't pass through the whole HBA there either, since that server has 8x disks, 4 managed by TrueNAS and the other 4 managed by Proxmox. If I passed through the whole controller, same issue: the other 4 disks wouldn't be usable by Proxmox.
 
> I think I'm going to try passing through the 960 disks via the /dev/disk/by-id method (I have used this to pass disks through to my TrueNAS VM on my main server), make a RAID10 pool out of the 240s as the VM storage, and build a RAIDZ out of the passed-through disks inside PBS (since PBS has the option for RAID datastores). Hopefully that should work better.
Just finished passing through all the /dev/disk/by-id/ paths, and it looks like PBS isn't complaining; the RAIDZ datastore on PBS worked fine, so I think everything's great now.
The only thing is that PBS won't have SMART data (however disk passthrough works with Proxmox/QEMU, it just doesn't pass that along), but honestly I couldn't care less, since I'll be checking the SMART data on the main hypervisor anyway and won't be logging into PBS unless I need to change something.
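In case anyone wants to do the same: checking the health of the passed-through disks from the hypervisor is simple enough (the by-id pattern is just an example for SATA disks):

Code:
    # Quick health verdict for every disk, skipping the partition links
    for d in /dev/disk/by-id/ata-*; do
        case "$d" in *-part*) continue ;; esac
        echo "== $d =="
        smartctl -H "$d"
    done

    # Full SMART attributes for a single disk (placeholder ID)
    smartctl -a /dev/disk/by-id/ata-EXAMPLE_SERIAL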
(3 screenshots attached)