Use the API to get storage locations for VMs

lifeboy

Renowned Member
I need to extract which storage is assigned to each VM and LXC in our cluster. I can retrieve the total allocation for the boot disk, but can't see an obvious way to get the detail for each storage volume allocated.

Some of our VMs have a boot disk on a Ceph SSD pool and a logging disk on Ceph spinning disks. So instead of the following:
Code:
# pvesh get /cluster/resources --type vm --human-readable=1
┌──────────┬──────┬─────────────┬─────────┬─────────┬────────────┬─────────┬───────┬────────┬────────────┬────────────┬─────────────┬────────────────────┬───────────┬────────────┬──────────────┬─────────┬─────────┬────────────┬──────┐
│ id       │ type │ cgroup-mode │ content │     cpu │       disk │ hastate │ level │ maxcpu │    maxdisk │     maxmem │         mem │ name               │ node      │ plugintype │ pool         │ status  │ storage │     uptime │ vmid │
╞══════════╪══════╪═════════════╪═════════╪═════════╪════════════╪═════════╪═══════╪════════╪════════════╪════════════╪═════════════╪════════════════════╪═══════════╪════════════╪══════════════╪═════════╪═════════╪════════════╪══════╡
│ lxc/103  │ lxc  │             │         │   0.02% │  27.95 GiB │         │       │      4 │  48.91 GiB │   8.00 GiB │    1.71 GiB │ pre                │ FT1-NodeB │            │ IMB          │ running │         │  7w 2d 19h │  103 │
├──────────┼──────┼─────────────┼─────────┼─────────┼────────────┼─────────┼───────┼────────┼────────────┼────────────┼─────────────┼────────────────────┼───────────┼────────────┼──────────────┼─────────┼─────────┼────────────┼──────┤

I want something like this:
Code:
┌──────────┬──────┬─────────────┬─────────┬─────────┬────────────┬─────────┬───────┬────────┬────────────┬───────────┬────────────┬─────────────┬────────────────────┬───────────┬────────────┬──────────────┬─────────┬─────────┬────────────┬──────┐
│ id       │ type │ cgroup-mode │ content │     cpu │       disk │ hastate │ level │ maxcpu │    maxdisk | disk pool │     maxmem │         mem │ name               │ node      │ plugintype │ pool         │ status  │ storage │     uptime │ vmid │
╞══════════╪══════╪═════════════╪═════════╪═════════╪════════════╪═════════╪═══════╪════════╪════════════╪═══════════╪════════════╪═════════════╪════════════════════╪═══════════╪════════════╪══════════════╪═════════╪═════════╪════════════╪══════╡
│ lxc/103  │ lxc  │             │         │   0.02% │  27.95 GiB │         │       │      4 │  48.91 GiB |    speedy │   8.00 GiB │    1.71 GiB │ pre                │ FT1-NodeB │            │ IMB          │ running │         │  7w 2d 19h │  103 │
├──────────┼──────┼─────────────┼─────────┼─────────┼────────────┼─────────┼───────┼────────┼────────────┼───────────┼────────────┼─────────────┼────────────────────┼───────────┼────────────┼──────────────┼─────────┼─────────┼────────────┼──────┤


Notice the "disk pool" I added.

Is that in any way possible?
 
Yes, you have to get the list of VMs from the cluster as you already found, then iterate through each VM to get its configuration/details. Keep in mind that you will need to provide the node name in the path, which you can also extract from the resources output.


Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

Could you point me in the right direction regarding the config of each VM? There doesn't seem to be a way to query that via the API, is there? Or did you mean I should use the config file for that VM and read it with bash and grep or something like that?
 
A rough pseudo code would look something like this:

for VM in $(pvesh get cluster/resources --output json|jq 'extract VM/LXC and node|format it into appropriate string);do
pvesh get config node/vm --output json|jq 'extract scsi|ide'|process as needed
done

Keep in mind that VM can have multiple disks, may be not in your case. However, when it does - your proposal in opening post is not workable, as each disk can be on its own storage.
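Fleshed out, that loop could look something like this (a rough, untested sketch; it assumes jq is installed and simply treats whatever comes before the colon in each disk entry as the storage name):

Code:
# walk every VM/LXC in the cluster, then pull the disk entries out of each guest's config
pvesh get /cluster/resources --type vm --output-format json \
    | jq -r '.[] | "\(.node) \(.type) \(.vmid)"' \
    | while read -r node type vmid; do
        pvesh get /nodes/$node/$type/$vmid/config --output-format json \
            | jq -r --arg vmid "$vmid" 'to_entries[]
                | select(.key | test("^(virtio|scsi|ide|sata|efidisk|rootfs|mp)[0-9]*$"))
                | "\($vmid) \(.key) \(.value | split(":")[0]) \(.value)"'
    done

Each output line gives the vmid, the disk key, the storage it lives on and the full volume spec, so guests with several disks on different storages come out as several lines.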

Good luck


 
A rough pseudo code would look something like this:

for VM in $(pvesh get cluster/resources --output json|jq 'extract VM/LXC and node|format it into appropriate string);do
pvesh get config node/vm --output json|jq 'extract scsi|ide'|process as needed
done
Thanks, that gives me a good idea of how to do this. However, it seems that my pvesh is somehow deficient. When I do:

Code:
~# pvesh get config node/vm
No 'get' handler defined for 'config'

Also, I can't just enter

~# pvesh
ERROR: no command specified

even though that should allow me to browse the API options.

Code:
:~# pvesh get version
┌─────────┬──────────┐
│ key     │ value    │
╞═════════╪══════════╡
│ console │ html5    │
├─────────┼──────────┤
│ release │ 7.4      │
├─────────┼──────────┤
│ repoid  │ 0f39f621 │
├─────────┼──────────┤
│ version │ 7.4-16   │
└─────────┴──────────┘

Keep in mind that VM can have multiple disks, may be not in your case. However, when it does - your proposal in opening post is not workable, as each disk can be on its own storage.

That is indeed the case, and they are on different storages. I'm trying to analyse the usage in a spreadsheet.

Good luck

Thanks for your help!
 
So, I have created a basic script to list the storage locations for virtual machines on my Proxmox clusters. I will expand this to include LXC as well, but for now I'm just trying to learn how to achieve this with jq.

Code:
#!/bin/bash
#IFS=$'\n'
rm qemu.config
touch qemu.config
for i in $(pvesh get /nodes/ --output-format json | jq '[.[] | {node: .node}]' | /usr/local/bin/dasel -r json -w csv)
do
    #echo $i
    for j in $(/usr/bin/pvesh get /nodes/$i/qemu --output-format json | jq '.[] .vmid')
    do
        #echo $j
        pvesh get /nodes/$i/qemu/$j/config --output-format json | jq '{name: .name, cores: .cores, memory: .memory, virtio0: .virtio0, virtio1: .virtio1, scsi0: .scsi0, scsi1: .scsi1}' >>qemu.config
    done
done
/usr/local/bin/dasel -r json -w csv < qemu.config

The output however repeats the headers, like so:

Code:
name,cores,memory,virtio0,virtio1,scsi0,scsi1
IMB-Win01,2,8192,"standard:vm-125-disk-0,size=60G","standard:vm-125-disk-1,size=102399M",null,null
name,cores,memory,virtio0,virtio1,scsi0,scsi1
VO-simba-poller,8,16384,null,null,"local-lvm:vm-130-disk-1,discard=on,iothread=1,size=130G,ssd=1","cephfs:iso/FreeBSD-12.4-RELEASE-amd64-disc1.iso,media=cdrom,size=982436K"
name,cores,memory,virtio0,virtio1,scsi0,scsi1
New-Win10-1,2,16384,null,"standard:base-179-disk-1/vm-184-disk-1,discard=on,size=100G",null,null
name,cores,memory,virtio0,virtio1,scsi0,scsi1
Win-Axis,2,5120,null,"Remote-cephfs:113/vm-113-disk-0.qcow2,size=100G",null,null
<snip>

How can I change this to only have the headers for the first record and not repeated for every other one?
 
"man pvesh"
Code:
You can also completely suppress output using option --quiet.

       --human-readable <boolean> (default = 1)
           Call output rendering functions to produce human readable text.

       --noborder <boolean> (default = 0)
           Do not draw borders (for text format).

       --noheader <boolean> (default = 0)
           Do not show column headers (for text format).

       --output-format <json | json-pretty | text | yaml> (default = text)
           Output format.

       --quiet <boolean>
           Suppress printing results.

Print the header once before the loop by whatever means you like, then don't print it again inside the loop.
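For example, something along these lines does it, if you let jq's @csv build the rows (reusing the $i/$j variables from your loop; @csv prints values only, never a header):

Code:
# print the header exactly once, before any looping
echo 'name,cores,memory,virtio0,virtio1,scsi0,scsi1' > qemu.config

# then, inside the existing per-VM loop, append one header-less row per guest
pvesh get /nodes/$i/qemu/$j/config --output-format json \
    | jq -r '[.name, .cores, .memory, .virtio0, .virtio1, .scsi0, .scsi1] | @csv' >> qemu.config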


 
Here is what I ended up doing:

Bash:
#!/bin/bash
if [ -f vms.config ]; then
    rm vms.config
fi
# header goes in once; disk0/disk1 hold virtio0/virtio1 for VMs and rootfs/mp0 for containers
echo 'node,type,name,status,cores,memory,disk0,disk1,scsi0,scsi1' >vms.config
for i in $(pvesh get /nodes/ --noheader --output-format json | jq -r '.[] .node')
do
    # QEMU VMs: merge config and current status, then emit one CSV row per guest
    for j in $(/usr/bin/pvesh get /nodes/$i/qemu --noheader --output-format json | jq '.[] .vmid')
    do
        pvesh get /nodes/$i/qemu/$j/config --output-format json > qemu1.json
        pvesh get /nodes/$i/qemu/$j/status/current --output-format json > qemu2.json
        jq -s '.[0] + .[1]' qemu1.json qemu2.json > qemu-combined.json
        jq -r '. + {"node":"'$i'","type":"qemu"} | [.node,.type,.name,.status,.cores,.memory,.virtio0,.virtio1,.scsi0,.scsi1] | @csv' qemu-combined.json >>vms.config
    done
    # LXC containers: same approach, but the name field is "hostname" and the disks are rootfs/mp0
    for k in $(/usr/bin/pvesh get /nodes/$i/lxc --noheader --output-format json | jq -r '.[] .vmid')
    do
        pvesh get /nodes/$i/lxc/$k/config --output-format json > lxc1.json
        pvesh get /nodes/$i/lxc/$k/status/current --output-format json > lxc2.json
        jq -s '.[0] + .[1]' lxc1.json lxc2.json > lxc-combined.json
        jq -r '. + {"node":"'$i'", "type":"lxc"} | [.node,.type,.hostname,.status,.cores,.memory,.rootfs,.mp0,.scsi0,.scsi1] | @csv' lxc-combined.json >>vms.config
    done
done

It's not all there yet. I still need to find the pool that a machine belongs to, and the inability to list the API calls with pvesh is super frustrating. I'll open a separate ticket for that.
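Having said that, the pool is already visible in the /cluster/resources output from my first post, so I should be able to pull a vmid-to-pool mapping from there with something like this (untested):

Code:
# one line per guest: vmid plus the resource pool it belongs to (empty if none)
pvesh get /cluster/resources --type vm --output-format json \
    | jq -r '.[] | [.vmid, .pool] | @csv'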
 
