#diskspace

  1. A

    SSD ZFS pool keeps increasing in used space

    I have a 3-node cluster with an SSD ZFS pool that keeps increasing in size. Each server has 24 SSD drives in a ZFS raid. The 'autotrim' setting is set to on for each server node. The two attachments show the space being slowly used up on the server node. All disks on VMs on all server nodes have...
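    A first step in a situation like this is to confirm TRIM is actually running and to see where the space is going. A minimal sketch, assuming a pool named `tank` (substitute your actual pool name from `zpool list`):

    ```shell
    zpool get autotrim tank      # confirm autotrim really is on for this pool
    zpool trim tank              # kick off a one-off manual TRIM pass anyway
    zpool status -t tank         # show per-vdev TRIM progress and status
    zfs list -o space -r tank    # break USED down into snapshots, reservations, children
    ```

    `zfs list -o space` often reveals that snapshots or reservations, not live VM data, account for the steady growth.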
  2. A

    Proxmox ZFS pool full causing io-error - how do I mount and free up space?

    I'm not strong with Linux, but I think I have figured out that my ZFS pool ran out of space. I noticed the issue when the TrueNAS VM the volume is attached to just hangs at boot at the ix-zfs.service - Import ZFS pools step and the Proxmox status = io-error. I need help mounting it and freeing up...
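    When a pool hits 100% the usual way out is to find and destroy an expendable snapshot, since snapshots commonly pin the space. A rough sketch, assuming the pool imports as `rpool` and using a hypothetical snapshot name (adjust both to your setup):

    ```shell
    zpool import rpool                            # import the pool if not already imported
    zfs list -o name,used,avail -r rpool          # see which dataset is consuming the space
    zfs list -t snapshot -r rpool                 # snapshots are a frequent culprit
    zfs destroy rpool/data/vm-100-disk-0@oldsnap  # delete one expendable snapshot (example name)
    ```

    Destroying even a single snapshot usually frees enough space for the pool, and the VM on it, to come back to life.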
  3. M

    Hard Disk Size VM < ZFS USED

    Hello, I'm currently trying to find the reason for a size discrepancy between the Hard Disk of a VM and the ZFS used size on storage. A Windows VM was created with a 500 GB Hard Disk on a ZFS Storage. Then this disk was expanded within Windows Disk Management to 1500 GB. "zfs list" shows...
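    A discrepancy like this can usually be inspected directly on the zvol. A minimal sketch, using a hypothetical disk name `rpool/data/vm-100-disk-0` (adjust to the VM's actual disk):

    ```shell
    # volsize = what the VM sees; used/referenced = what ZFS has actually allocated
    zfs get volsize,used,referenced,refreservation rpool/data/vm-100-disk-0
    ```

    If `used` is well above what Windows reports as occupied, blocks written and later freed in the guest were likely never discarded; with Discard enabled on the virtual disk, running `Optimize-Volume -DriveLetter C -ReTrim` inside Windows can return that space to ZFS.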
  4. S

    [TUTORIAL] Understanding Proxmox ZFS HD and Disk Usage Display

    Hi, I tried to answer a question with a link explaining Proxmox disk usage displays in the Web GUI, but I did not find a posting explaining it (possibly because I searched wrongly), so I decided to write a brief overview. I try to explain simply, even if sometimes not 100% accurately...
  5. J

    Ceph RBD Storage Shrinking Over Time – From 10TB Down to 8.59TB

    I have a cluster with three Proxmox servers connected via Ceph. Since the beginning, the effective storage was 10TB, but over time, it has decreased to 8.59TB, and I don’t know why. The filesystem is RBD. Why is my Ceph RBD storage shrinking? How can I reclaim lost space?
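    Apparent "shrinking" like this is often thin provisioning at work: as guests write, RBD images allocate objects and the pool's available capacity drops. A sketch of how to check, assuming the default pool name `rbd` (substitute your pool):

    ```shell
    ceph df          # raw vs. available capacity, per pool
    rbd du -p rbd    # provisioned vs. actually allocated space, per image
    ```

    If allocated far exceeds what the guests report as used, enabling Discard on the VM disks in Proxmox and running `fstrim -av` inside each Linux guest can hand freed blocks back to the pool.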