Search results

  1.

    How to Trim Win Server 2022 with RDB CephFS Backed Storage

    The summary page (https://ss.ecansol.com/uploads/2023/03/01/chrome_1677690960.png) shows the same as the pool page (https://ss.ecansol.com/uploads/2023/03/01/chrome_1677690983.png). I am aware that the actual used space is amplified by the 3x replication, but as illustrated above I am only using 7TB, yet Ceph...
  2.

    How to Trim Win Server 2022 with RDB CephFS Backed Storage

    I feel like I described it pretty fully, but let me see if I can provide additional detail: only 7.3TB is in use on this volume (https://ss.ecansol.com/uploads/2023/03/01/ncplayer_1677660784.png), yet Ceph -> Pools says it's using 23.63 TB ...
  3.

    How to Trim Win Server 2022 with RDB CephFS Backed Storage

    I don't mean to bump, but this thread has been pushed multiple pages down.
  4.

    How to Trim Win Server 2022 with RDB CephFS Backed Storage

    I've got a problem that's going to turn into a big problem real fast. I have a Ceph cluster set up: 3 systems, 3 x 18TB mechanical drives each. I have a 49TB volume in the pool for storage. It says that 11TB is in use, but I formatted the drive and only 4TB is actually in use. When I try to... [see the trim/discard sketch after these results]
  5.

    Force kill a VM?

    OK, well, I changed the interface to SCSI on the VirtIO SCSI controller to get away from SATA on one of the Linux machines, but now of course it tries to boot off PXE and not the virtio-scsi bus/disk. I can hit Esc and choose it, but would prefer to fix it so it boots off the disk first like... [see the boot-order sketch after these results]
  6.

    ProxMox Implementation of ZFS o_O

    LOL. Shit, of course it doesn't, so basically I have to move that data and recreate it with virtio... On VM 101 I've got this:
    root@pmox:~# qm config 101
    agent: 1,type=virtio
    balloon: 0
    bios: seabios
    bootdisk: sata0
    cores: 4
    memory: 32768
    name: CRM
    net0: virtio=F2:F1:62:5F:AD:A6,bridge=vmbr1...
  7.

    ProxMox Implementation of ZFS o_O

    Also, never mind about vm-100-disk-0. I just realized the "USED" is the total volume size; the "REFER" column on the right appears to be how much is actually in use by the FS itself.
  8.

    ProxMox Implementation of ZFS o_O

    Well, I looked up the flag but saw that's not the best way to do it, and that using fstrim.timer / fstrim.service was better because it doesn't put as much load on the file system, so I -believe- I've enabled those. However, I've set up discard, rebooted the system, and observed the cleaning up... [see the fstrim.timer sketch after these results]
  9.

    ProxMox Implementation of ZFS o_O

    Good gravy, H4R0, are you serious? lol. Then why wouldn't "discard" be enabled for every volume created against a zvol anyway? I hate to be needy, but can you please share how to flag discard in /etc/fstab? And am I correct in assuming that fstrim in Linux is the equivalent of the sdelete... [see the /etc/fstab sketch after these results]
  10.

    ProxMox Implementation of ZFS o_O

    Also, sorry, I admittedly missed how to mitigate the problem, @hr40; I just saw "post config" :-D
  11.

    VM Crashing with io-error

    Thanks for the more detailed information, mira. Trying this now.
  12.

    ProxMox Implementation of ZFS o_O

    Thanks, everyone, for the suggestions. Should I turn discard on for all my hosts then, just for safety? The other two are CentOS installations. Thanks, Matt
  13.

    ProxMox Implementation of ZFS o_O

    Well, that seems silly. Why in the world would "Windows" need to "forward delete operations" to the "underlying storage"...? If a file gets deleted, the hypervisor should see that and figure it out; why does it have to do some special extra thing? Anyway, here's the config; help is appreciated... [see the discard-check sketch after these results]
  14.

    ProxMox Implementation of ZFS o_O

    Here's the thing: I never configured snapshots, or backups, or anything like that. So if you're saying, @fabian, that by default Proxmox employs snapshots and creates a misinterpretation of available space, perhaps that's bad default behavior, and/or it should be more clearly explained and a...
  15.

    VM Crashing with io-error

    Mira, I appreciate your responses, but they are very brief and lack any actual explanation of how to do things. A -disk- is not the problem; the problem is that the subvolume vm-100-disk-0 is apparently full, but how do I mount it so I can make it un-full? fstrim doesn't help me because I can't... [see the zvol mount sketch after these results]
  16.

    ProxMox Implementation of ZFS o_O

    So Proxmox appears to employ some sort of logic or methodology when creating ZFS pools or volumes that consume SIGNIFICANTLY more space or provide SIGNIFICANTLY less capacity than they should. I have 8 x 8TB drives in RAIDZ1. We'll round them down to 7TB each to more than account for the fuzzy...
  17.

    VM Crashing with io-error

    Ugh:
    root@pmox:~# zfs set mountpoint=/mnt/tempmount SIXTBSATA/vm-100-disk-0
    cannot set property for 'SIXTBSATA/vm-100-disk-0': 'mountpoint' does not apply to datasets of this type
  18.

    VM Crashing with io-error

    Well, right, the pool is full because it's allocated to a storage volume, which is of a fixed size. It's worked fine for 3 months and now all of a sudden it's a problem. Also, an "Available" of 000000 would indicate full to me; this says there is 1.51 megs available. Perhaps something is somehow over...
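
For the RBD trim thread (results 1-4), a minimal sketch of the usual approach, hedged because the exact disk slot, VM ID and storage name are not shown in the excerpts (scsi0, VM 100 and "cephpool" below are placeholders): enable discard on the virtual disk so that TRIM issued inside Windows Server 2022 actually reaches the Ceph RBD image, then retrim from inside the guest (for example with Optimize-Volume -ReTrim in PowerShell) and re-check pool usage.

    # On the Proxmox host: re-attach the disk with discard enabled
    # (VM ID, controller slot and storage/volume names are placeholders)
    qm set 100 --scsi0 cephpool:vm-100-disk-0,discard=on
    # The option takes effect after the VM is fully stopped and started again.
    # After a retrim inside the guest (e.g. Optimize-Volume -DriveLetter C -ReTrim),
    # compare pool usage on the host:
    ceph df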
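
For the boot-order problem in result 5, a sketch assuming the VM is 101 and the disk is now scsi0 (both taken from the quoted posts, so treat them as assumptions): on recent Proxmox VE releases the boot order can be pinned so the disk is tried before PXE.

    # Put the SCSI disk ahead of the network device in the boot order
    # (VM ID and device names are assumptions; quote the value so the shell
    # does not treat the semicolon as a command separator)
    qm set 101 --boot 'order=scsi0;net0'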
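
For result 8, a sketch of enabling periodic trimming through the systemd timer rather than the discard mount option; these are stock util-linux/systemd commands run inside the Linux guest, nothing Proxmox-specific.

    # Enable the weekly trim timer and confirm it is scheduled
    systemctl enable --now fstrim.timer
    systemctl list-timers fstrim.timer
    # Run one manual pass over all mounted filesystems that support discard
    fstrim -av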
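
For result 9, a sketch of what the discard flag looks like in /etc/fstab (the UUID and mount point are placeholders; as result 8 notes, the fstrim.timer route is usually preferred over continuous discard). fstrim is roughly the Linux counterpart of sdelete in this context, in that both are used to hand unused guest blocks back to thin-provisioned storage, though fstrim issues TRIM/UNMAP rather than writing zeroes.

    # /etc/fstab line for an ext4 filesystem mounted with continuous discard
    # (UUID and mount point are placeholders)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,discard  0  1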
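
For result 13: the hypervisor only sees block reads and writes, not guest file deletions, so the guest has to announce freed blocks via TRIM/discard, and the virtual disk has to be configured to pass that through. A quick two-sided check, assuming the VM from the quoted config is 101: inside the Windows guest, "fsutil behavior query DisableDeleteNotify" should report 0 (delete notifications enabled); on the host, the disk line should carry discard=on.

    # On the Proxmox host: show the VM's disk lines and look for discard=on
    qm config 101 | grep -E '^(ide|sata|scsi|virtio)[0-9]+:'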
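
For results 15 and 17, the mountpoint error is expected: vm-100-disk-0 is a zvol, i.e. a block device rather than a filesystem dataset, so it cannot be mounted via ZFS properties. It is exposed under /dev/zvol/ and a partition on it can be mounted from there. A sketch, assuming the guest's data sits on the first partition and uses a filesystem the host can read; mount it read-only, and only while the VM is shut down, to avoid corrupting the guest.

    # zvols appear as block devices under /dev/zvol/<pool>/
    ls -l /dev/zvol/SIXTBSATA/
    # Mount the first partition read-only (the -part1 suffix is an assumption
    # about the guest's partition layout)
    mkdir -p /mnt/tempmount
    mount -o ro /dev/zvol/SIXTBSATA/vm-100-disk-0-part1 /mnt/tempmount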
