The summary page (https://ss.ecansol.com/uploads/2023/03/01/chrome_1677690960.png) shows the same as the pool page (https://ss.ecansol.com/uploads/2023/03/01/chrome_1677690983.png).
I am aware that the actual used space is amplified by 3 (the replica count). But as illustrated above, I am only using 7TB, yet Ceph...
I feel like I described it pretty fully, but let me see if I can provide additional detail:
Only 7.3TB is in use on this volume: https://ss.ecansol.com/uploads/2023/03/01/ncplayer_1677660784.png
Ceph -> Pools says it's using 23.63 TB ...
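For what it's worth, a quick CLI way to see that logical-vs-raw split, assuming a reasonably recent Ceph release where the pool stats separate the two:

root@pmox:~# ceph df detail
# STORED = logical data written by clients (should be around the 7.3TB above)
# USED   = raw space consumed after replication (roughly 3x STORED with replica 3)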
I've got a problem that's going to turn into a big problem real fast.
I have a Ceph cluster set up: 3 systems, with 3 x 18TB mechanical drives each.
I have a 49TB volume in the pool for storage.
It says that 11TB is in use, but I formatted the drive and only 4TB is actually in use.
When I try to...
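If that 49TB volume is an RBD image (the usual case for a Proxmox VM disk on Ceph), one way to compare provisioned vs. actually-allocated space; the pool and image names here are placeholders:

root@pmox:~# rbd du <pool>/<image>
# PROVISIONED = the size the image was created with
# USED        = blocks actually allocated; blocks freed by deleted files
#               stay counted until the guest issues discard/TRIM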
OK, well, I changed the interface to SCSI on the VirtIO SCSI controller to get away from SATA on one of the Linux machines, but now of course it tries to PXE boot instead of booting from the virtio-scsi bus/disk. I can hit Esc and choose it, but I would prefer to fix it so it boots off the disk first like...
LOL, shit, of course it doesn't, so basically I have to move that data and recreate it with virtio....
On VM 101 I've got this:
root@pmox:~# qm config 101
agent: 1,type=virtio
balloon: 0
bios: seabios
bootdisk: sata0
cores: 4
memory: 32768
name: CRM
net0: virtio=F2:F1:62:5F:AD:A6,bridge=vmbr1...
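With bootdisk: sata0 still in the config, the firmware won't try the new SCSI disk first. A minimal sketch of the fix, assuming the disk is now attached as scsi0 (on newer Proxmox the single order= syntax replaces the old --boot c --bootdisk pair):

root@pmox:~# qm set 101 --boot 'order=scsi0;net0'
# try the scsi0 disk first, then fall back to PXE on net0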
Also, NVM about vm-100-disk-0; I just realized the "USED" is the total volume size, and the "REFER" on the right appears to be how much is actually in use by the FS itself.
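One way to confirm that reading, using the zvol path that appears later in the thread (volsize is the provisioned size; referenced is roughly what has actually been written):

root@pmox:~# zfs list -o name,volsize,used,referenced SIXTBSATA/vm-100-disk-0
# for a thick-provisioned zvol, USED is about volsize because of the refreservation;
# REFER tracks the blocks the guest has actually touched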
Well, I looked up the flag, but I was seeing that it's not the best way to do it; using fstrim.timer / fstrim.service is better because it doesn't put as much load on the file system, so I -believe- I've enabled those.
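For reference, on a systemd distro the timer route is just this, run inside the guest (the timer fires weekly by default):

# inside the guest VM:
systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer   # verify the next scheduled run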
However, I've set up discard, rebooted the system, and observed the cleaning up...
Good Gravy H4R0 are you serious? lol. Then why wouldn't "discard" be enabled for every volume created against a zvol anyway?
I hate to be needy, but can you please share how to flag discard in /etc/fstab?
And am I correct in assuming that fstrim in Linux is the equivalent of sdelete...
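For the fstab question: flagging discard is just an extra mount option. A sketch, where the device, mountpoint, and ext4 are all placeholders for the actual setup:

# hypothetical /etc/fstab entry
/dev/sda1  /  ext4  defaults,discard  0  1

With discard set, deletes are forwarded to the storage as they happen; with fstrim.timer you omit the option and reclaim in weekly batches instead. And yes, in effect fstrim plays the role sdelete does on Windows: fstrim issues TRIM/discard so thin storage can reclaim space, while sdelete gets a similar result by zeroing free space.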
Well, that seems silly; why in the world would "windows" need to "forward delete operations" to "underlying storage"....
If a file gets deleted, the hypervisor should see that and figure it out, why does it have to do some special extra thing?
Anyway, here's the config, help is appreciated...
Here's the thing: I never configured snapshots, or backups, or anything like that. So if you're saying, @fabian, that by default Proxmox employs snapshots and creates a misinterpretation of available space, perhaps that's a bad default behavior, and/or it should be more clearly explained and a...
Mira, I appreciate your responses, but they are very brief and lack any actual explanation of how to do things.
A -disk- is not the problem; the problem is that the subvolume vm-100-disk-0 is apparently full, but how do I mount it so I can make it un-full?
fstrim doesn't help me because I can't...
So Proxmox appears to employ some sort of logic or methodology when creating ZFS pools or volumes that consumes SIGNIFICANTLY more space, or provides SIGNIFICANTLY less capacity, than it should.
I have 8 x 8TB drives in RAIDZ1. We'll round each down to 7TB to more than account for the fuzzy...
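Spelling that arithmetic out: 8 x 7TB = 56TB raw, and RAIDZ1 spends roughly one drive's worth on parity, so about (8 - 1) x 7TB = 49TB usable before ZFS metadata and padding overhead.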
Ugh:
root@pmox:~# zfs set mountpoint=/mnt/tempmount SIXTBSATA/vm-100-disk-0
cannot set property for 'SIXTBSATA/vm-100-disk-0': 'mountpoint' does not apply to datasets of this type
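That error is expected: vm-100-disk-0 is a zvol (a block device), not a filesystem dataset, so mountpoint doesn't apply. A sketch of mounting it directly instead; the -part1 suffix assumes the guest filesystem sits on the first partition, and the VM should be stopped first:

root@pmox:~# ls /dev/zvol/SIXTBSATA/              # zvols show up as block devices here
root@pmox:~# mkdir -p /mnt/tempmount
root@pmox:~# mount /dev/zvol/SIXTBSATA/vm-100-disk-0-part1 /mnt/tempmount
root@pmox:~# fstrim -v /mnt/tempmount            # now fstrim can reach that filesystem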
Well, right, the pool is full because it's allocated to a storage volume, which is of a fixed size. It's worked fine for 3 months and now all of a sudden it's a problem.
Also, "Available 000000" would indicate full to me, but that says there is 1.51MB available.
Perhaps something is somehow over...