API agent/get-fsinfo missing VM ID in response

soufiyan

New Member
Oct 12, 2025
Hello Proxmox community!

I'm working on a monitoring script that collects filesystem information from multiple VMs using the agent/get-fsinfo API endpoint. However, I've hit a major roadblock:

The Problem:
When I call GET /api2/json/nodes/{node}/qemu/{vmid}/agent/get-fsinfo, the response contains detailed filesystem data but doesn't include the VM ID in the response body. This makes it impossible to correlate the data back to the specific VM when processing results from multiple VMs.
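For illustration, a minimal sketch of the raw HTTP call (host, node, VM ID and the API token below are placeholders, assuming API-token authentication):

Code:
# Hypothetical call; adjust host, node, vmid and token for your setup.
# -k skips certificate verification; drop it if the node has a trusted certificate.
# The "data" array that comes back describes the filesystems, but contains
# no field identifying which VM it belongs to.
curl -s -k \
  -H "Authorization: PVEAPIToken=monitor@pve!readonly=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
  "https://pve.example.com:8006/api2/json/nodes/pve-2/qemu/100/agent/get-fsinfo"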

Even More Critical Issue:
This limitation makes it impossible to properly combine data from agent/get-fsinfo with other API endpoints like:

/api2/json/nodes/{node}/qemu/{vmid}/status/current

Example scenario:
  • I have 30 VMs running across multiple nodes
  • I loop through all of them calling get-fsinfo and status/current
  • The get-fsinfo responses contain no VM identifier
  • The status/current responses do contain VM ID
  • I cannot reliably merge this data because there's no common identifier in the get-fsinfo response

My questions to the community:
  1. Is this by design? Am I missing something obvious?
  2. How are others solving this problem in their scripts?
  3. Would it make sense to have the VM ID included in the response?
  4. Any clever workarounds you've implemented?
What I'd love to see:

Code:
{
  "vmid": 100,
  "data": [
    {
      "name": "sda1",
      "type": "ext4",
      "size": 528449536,
      "mountpoint": "/"
    }
  ]
}


Has anyone else faced this issue? How did you solve it? Would this be a useful feature request for the Proxmox team?
 
Hi @soufiyan , welcome to the forum.

I think the expectation is that you already know the $vmid. You can always do something like this:

Code:
pvesh get /nodes/pve-2/qemu/$vmid/agent/get-fsinfo --output-format json   | jq --argjson vmid "$vmid" ' {vmid: $vmid} + .'
{
  "vmid": 3000,
  "result": [
    {
      "disk": [
        {
          "bus": 0,
          "bus-type": "scsi",
          "dev": "/dev/sda15",
          "pci-controller": {
            "bus": 9,
            "domain": 0,
            "function": 0,
            "slot": 1
          },
          "serial": "0QEMU_QEMU_HARDDISK_drive-scsi0",
          "target": 0,
          "unit": 0
        }
      ],
      "mountpoint": "/boot/efi",
      "name": "sda15",
      "total-bytes": 109395456,
      "type": "vfat",
      "used-bytes": 6346240
    },
It's JSON; you can mangle it any way you like.

I am not saying it's a bad idea; however, changing the API format always risks breaking someone else's existing workflow. It's not broken now, and you can do post-processing on the client side.
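
For the multi-VM case from the original post, here is a minimal client-side sketch along those lines (assuming pvesh and jq are available, /cluster/resources is used to enumerate the guests, and the guest agent is running inside each VM; the .result key matches the pvesh output shown above, while the name/status fields picked from status/current are an assumption):

Code:
#!/usr/bin/env bash
# Hypothetical sketch: enumerate running QEMU guests, call both endpoints per VM,
# and tag the merged record with the vmid we already know from the loop variable.
pvesh get /cluster/resources --type vm --output-format json \
  | jq -c '.[] | select(.type == "qemu" and .status == "running")' \
  | while read -r vm; do
      vmid=$(jq -r '.vmid' <<<"$vm")
      node=$(jq -r '.node' <<<"$vm")

      status=$(pvesh get "/nodes/$node/qemu/$vmid/status/current" --output-format json) || continue
      fsinfo=$(pvesh get "/nodes/$node/qemu/$vmid/agent/get-fsinfo" --output-format json) || continue

      # One merged object per VM; the vmid comes from the loop, not from get-fsinfo.
      jq -n --argjson vmid "$vmid" --argjson status "$status" --argjson fsinfo "$fsinfo" \
          '{vmid: $vmid, name: $status.name, status: $status.status, filesystems: $fsinfo.result}'
    done

Each merged record then carries a vmid to join on, so downstream processing never has to guess which VM a filesystem list belongs to.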

Cheers.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
You're right, the jq trick works. But that's my whole point - it's a workaround, not a solution.

When I'm managing 30+ VMs, I shouldn't have to manually stitch together data the API already has. Every other endpoint gives me the VMID - why not this one?

It's like getting a package delivered without your address on it. Yeah, you know who it's for, but why make you figure it out?
 
I happen to agree with the OP that this behavior is inconsistent; if the status/current response contains the VMID, then agent/get-fsinfo should too. I fully understand that the latter is retrieved through the guest agent (which is not aware of the VMID), but the API endpoint should add this information itself.
 
Again, I don't disagree with the premise. Not having to run multiple API calls or stitch data together for basic things is good.
As for "Every other endpoint gives me the VMID": that does not seem to track with the actual output. A quick check shows there is no VMID in:
/nodes/pve-2/qemu/3000/snapshot
/nodes/pve-2/qemu/3000/config
/nodes/pve-2/qemu/3000/pending
/nodes/pve-2/qemu/3000/firewall

And probably many others.
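
A quick way to reproduce that check with the same pvesh/jq approach (sketch only; the type test is there because some of these endpoints return arrays rather than objects):

Code:
# Hypothetical check: does each endpoint's JSON output carry a "vmid" field?
for ep in status/current config snapshot pending; do
    printf '%-16s vmid present: ' "$ep"
    pvesh get "/nodes/pve-2/qemu/3000/$ep" --output-format json \
        | jq 'if type == "object" then has("vmid") else false end'
done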

It's a choice between building a solution now or submitting a feature request (I doubt it qualifies as a bug). To be consistent in the way you'd like, many things would have to change; you can guess whether that would be a priority.

There is always: https://www.proxmox.com/en/about/open-source/developers



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I went ahead and filed an official bug report with Proxmox and even built a fix that works.
Here's the bug ticket: https://bugzilla.proxmox.com/show_bug.cgi?id=6922

Where things stand:
  • I've submitted a working patch that adds VMID to agent responses
  • The fix was actually pretty simple once we found the right files
  • It's tested and working on my end
  • Now we wait for Proxmox team to review it

Thanks to everyone who weighed in earlier.
 