Disk Usage Information Not Displayed for Specific VM in Proxmox GUI

excho0

Jun 1, 2024
I have encountered an issue where the disk usage information for a specific VM is not displayed in the Proxmox GUI. Here are the details:

  • Issue: The disk information, specifically disk usage, retrieved via qm agent 103 get-fsinfo does not display in the Proxmox GUI for a specific VM.
  • Observations:
    • The disk information retrieved by executing qm agent 103 get-fsinfo on the Proxmox host shows valid disk details for the VM.
    • However, this disk information is not reflected in the Proxmox GUI for this specific VM.
    • All other VMs in the Proxmox environment display disk information accurately within the GUI.
  • Steps Taken:
    • Verified VM configuration (/etc/pve/qemu-server/103.conf) and ensured it is correctly configured.
    • Restarted the VM and verified that the issue persists.
    • Checked for disk errors and reviewed system logs within the VM, but found no relevant errors or warnings.
    • Verified that the QEMU Guest Agent is running and properly configured within the VM.
    • Executed qm agent 103 get-fsinfo on the Proxmox host to retrieve filesystem information for the VM, which returned valid disk information.
  • Environment:
    • Proxmox version: pve-manager/8.2.2/9355359cd7afbae4 (running kernel: 6.8.4-3-pve)
    • VM configuration:

      Code:
      # /etc/pve/qemu-server/103.conf
      agent: 1
      boot: order=scsi0;net0
      cores: 2
      cpu: host
      machine: q35
      memory: 16384
      meta: creation-qemu=8.1.5,ctime=1716890176
      name: mail
      net0: virtio=BC:24:11:18:13:8A,bridge=vmbr0
      numa: 0
      onboot: 1
      ostype: l26
      scsi0: local-lvm:vm-103-disk-0,cache=writeback,discard=on,iothread=1,size=60G
      scsihw: virtio-scsi-single
      smbios1: uuid=3ab22f0a-ff26-485c-8a08-55ad14d132d5
      sockets: 1
      tags: mailcow;mariadb;nginx
      vmgenid: e998184a-ff1b-4359-a822-9e3f509ed858


      df -h output inside the VM:


      Code:
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda1        58G  9.5G   46G  18% /

      qm agent 103 get-fsinfo output from the host:


      Code:
         {
            "disk" : [
               {
                  "bus" : 0,
                  "bus-type" : "scsi",
                  "dev" : "/dev/sda1",
                  "pci-controller" : {
                     "bus" : 9,
                     "domain" : 0,
                     "function" : 0,
                     "slot" : 1
                  },
                  "serial" : "0QEMU_QEMU_HARDDISK_drive-scsi0",
                  "target" : 0,
                  "unit" : 0
               }
            ],
            "mountpoint" : "/",
            "name" : "sda1",
            "total-bytes" : 58893008896,
            "type" : "ext4",
            "used-bytes" : 10095894528
         }
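For reference, the usage percentage the GUI would display can be derived from that agent JSON by hand. A minimal sketch using jq on the host (the inlined JSON below is just the relevant fields from the output above; normally you would capture the real output with `fsinfo=$(qm agent 103 get-fsinfo)`):

```shell
# Hypothetical sketch: derive a df-style usage line from the agent JSON.
# Assumes jq is installed on the Proxmox host.
fsinfo='{"name":"sda1","mountpoint":"/","type":"ext4","total-bytes":58893008896,"used-bytes":10095894528}'
echo "$fsinfo" | jq -r '"\(.name) mounted on \(.mountpoint): \(.["used-bytes"] * 100 / .["total-bytes"] | floor)% used"'
```

This yields the same ~17% figure as the `df -h` output inside the VM, confirming the agent data itself is sound.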

[Attachment: image_2024-06-01_140152147.png]
I am having exactly the same issue. My setup:
  • Proxmox 7.4-19
  • VM running Debian 12 (qemu-guest-agent version 1:7.2 installed and running correctly).
  • SCSI controller: VirtIO SCSI single
  • Qemu Agent enabled in VM options, with Type as Default (VirtIO).
  • Running qm agent 242 get-fsinfo returns all the info, including disk used.
The summary of the VM in the Proxmox WebGUI does not show disk used.

VM configuration:
Code:
# cat /etc/pve/qemu-server/242.conf
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=7.2.10,ctime=1730107629
name: andronautic
net0: virtio=3A:97:AD:96:6A:02,bridge=vmbr4002,firewall=1,mtu=1400
numa: 0
ostype: l26
scsi0: zfspool:vm-242-disk-0,iothread=1,size=3G
scsihw: virtio-scsi-single
smbios1: uuid=2f0b6301-97e2-4ac8-827a-48c809232f65
sockets: 1
vmgenid: 25098c83-6b70-4fc6-a4d6-b2c87cc7d493

Did you manage to solve it, @excho0?
 
The summary of the VM in the Proxmox WebGUI does not show disk used.
Why is this information useful? It's the used space the guest thinks it uses, not the actual space it uses. It's the same with the memory usage some people want displayed for guests as the guest's own value, not the amount of memory actually used (don't get me started on PCIe passthrough use cases). This is totally useless and misleading in a virtualization environment. You want to know what the VM actually uses (which is always greater than or equal to the guest's hallucinated number).
 
Why is this information useful? It's the used space the guest thinks it uses, not the actual space it uses.
It is useful when taking a quick look at the status of that VM or node, albeit imprecise. Whether those 50 GB actually only take 30 GB on the ZFS pool because of compression does not render such information impractical, in my opinion.
 
In my PVE instance, only LXCs show used disk space (and only while running); none of my VMs show used disk space at all.
 
Same for me. I still have an instance running Proxmox 6 and it's been like this since then.
My PVE is on the latest version.
Maybe this is by design (or maybe it is disk-type/setup dependent?).
Can anyone confirm that they see disk-usage info for VMs (NOT LXCs) in the GUI?

All other VMs in the Proxmox environment display disk information accurately within the GUI.
OP, is this for VMs or LXCs?
 
Hi,

please note that this is not (yet) implemented in the UI for VMs, only for containers.
Retrieving file system information works quite differently for VMs than for LXCs -- with LXCs, the host has direct access to the filesystems, but VMs are completely (by design!) isolated. This information can be retrieved through the QEMU Guest Agent, though, as you noted. But nobody has implemented and fleshed out the details for that yet.
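Until that lands in the UI, the agent data is still reachable outside the GUI, e.g. through the PVE API. A hedged sketch (the node name `pve1` is a placeholder; run on a cluster node, adjust the VMID to yours):

```shell
# Query the guest agent's filesystem info via the PVE API shell.
# "pve1" is a placeholder node name; 103 is the VMID from this thread.
pvesh get /nodes/pve1/qemu/103/agent/get-fsinfo --output-format json
```

This returns the same JSON as `qm agent 103 get-fsinfo`, which makes it usable from external monitoring scripts in the meantime.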
 
It is useful when taking a quick look at the status of that VM or node, albeit imprecise. Whether those 50 GB actually only take 30 GB on the ZFS pool because of compression does not render such information impractical, in my opinion.
You don't get my point. As others already stated, disk usage in a VM works fundamentally differently than in a container. You can have 100% disk usage on the hypervisor, yet only 0% used in the VM, and exactly this is totally useless. That is what I meant by unreliable. The same applies to memory. It has nothing to do with compression, but with giving unused storage/memory back to the hypervisor. Deleting stuff will not free it for the hypervisor; you need to trim/discard the disk inside of the guest (not just check a box in PVE). This is true for any hypervisor and is not PVE-specific.
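To illustrate the trim/discard point, a minimal sketch of what that looks like inside the guest (run as root; assumes the virtual disk has `discard=on` and a discard-capable controller such as virtio-scsi, as in the configs above):

```shell
# Inside the guest: hand freed blocks back to the hypervisor.
# One-off trim of all mounted filesystems that support discard:
fstrim -av

# Or enable the periodic timer shipped with util-linux,
# so trimming happens automatically on a schedule:
systemctl enable --now fstrim.timer
```

Only after such a trim does the hypervisor-side usage shrink to match what the guest reports.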