So ProxMox appears to employ some sort of logic or methodology when creating ZFS pools or volumes that consumes SIGNIFICANTLY more space, or provides SIGNIFICANTLY less capacity, than it should.
I have 8 x 8TB drives in RAIDZ1. We'll round each drive down to 7TB to more than account for the fuzzy hard-drive math HDD makers have used for a hundred years.
7 drives * 7TB = 49TB (because RAIDZ1 takes one of the eight drives for parity)
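For context, here's roughly how I'm sanity-checking those numbers (the pool name "tank" is just a placeholder, mine differs):

# raw pool size as zpool reports it (parity space is included in this figure)
zpool list tank

# usable space as zfs reports it (parity already subtracted)
zfs list tank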
But this:
https://pastebin.com/p3aihnrs (Had to pastebin because post was "greater than 10000 characters")
So the master pool says 47.5T is "used", which is expected when we create a 47.5T volume under it.
The logicalused value says 23T, which makes sense because Windows shows this volume has 22.6TB of data on it.
However, when we look at vm-100-disk-0, the actual volume, it says 47.5T is "used", 8.79T is "available", and 38.7T is "referenced".
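For anyone who wants the exact properties without wading through the pastebin, this is roughly the query those numbers come from (the dataset path is a placeholder for my actual naming):

zfs get used,available,referenced,logicalused,volsize,refreservation tank/vm-100-disk-0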
I had to tweak the refreservation values because ProxMox refused to let me create a volume anywhere near the 47T I -should- be able to create on a RAIDZ1 pool.
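The tweak was along these lines (placeholder names again; setting refreservation=none effectively makes the zvol sparse/thin-provisioned):

# see how much the reservation itself is eating
zfs get refreservation,usedbyrefreservation tank/vm-100-disk-0

# drop the reservation entirely
zfs set refreservation=none tank/vm-100-disk-0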
This says "dataset" is using 38.7T, and that 8.7T is "available" .... HOW HOW HOW HOW HOW? In previous ZFS Systems I've run, I plain got the X*Y-1 capacity. Based on what I'm reading here, I can only write 8.7T more data to this volume, even though it SHOULD have 23T free because only 23T of the 46T is in use.
Can someone please help me understand this insanity, before what should be my 49T volume crashes like the other one I've been talking about in this thread?
Is there some setting I can adjust or tweak to stop ProxMox's implementation of ZFS pools and volumes from consuming significantly more space than the data that actually exists?!
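If someone can at least point me at which of these properties is responsible, that would be a start; I'm assuming these are the relevant ones (names are placeholders for my setup):

zfs get volblocksize,volsize,refreservation,compression tank/vm-100-disk-0
zpool get ashift tank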
No implementation of ZFS I've dealt with to date has suffered from this confusing, vague, and seemingly senseless consumption of excess space inside its pools/volumes.