API to read QEMU VM creation time or uptime?

victorhooi

Member
Apr 3, 2018
167
11
18
33
I'm trying to script a way to automatically stop and destroy VMs after a fixed period.

I was thinking I could script this using the API or pvesh.

However I can't seem to find a field for VM creation time or even uptime?

Is anybody aware of such a thing? Or another easy way of getting this at the Proxmox level?
 

dietmar

Proxmox Staff Member
Staff member
Apr 28, 2005
16,597
338
103
Austria
www.proxmox.com
The /cluster/resources call and /nodes/{nodename}/qemu/{vmid}/status/current both contain the uptime.
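A minimal sketch of how that could feed the auto-stop script from the first post — this assumes the status/current response carries `status` and `uptime` keys, and that your `pvesh` version supports `--output-format json`:

```python
import json
import subprocess

# Example policy: stop VMs that have been running for more than 8 hours.
MAX_UPTIME = 8 * 3600

def should_stop(status: dict, max_uptime: int = MAX_UPTIME) -> bool:
    """Decide from a status/current response whether a VM exceeded its allowed
    uptime. `uptime` is reported in seconds and is 0 (or absent) when stopped."""
    return status.get("status") == "running" and status.get("uptime", 0) > max_uptime

def current_status(node: str, vmid: int) -> dict:
    """Fetch the status object via pvesh (must run on a PVE node)."""
    out = subprocess.check_output(
        ["pvesh", "get", f"/nodes/{node}/qemu/{vmid}/status/current",
         "--output-format", "json"])
    return json.loads(out)
```

A cron job could then issue `qm stop {vmid}` (or the status/stop API call) for every VM where `should_stop()` is true.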
 

victorhooi

Member
Awesome - thank you!

(I did check the docs at https://pve.proxmox.com/pve-docs/api-viewer/index.html, and the doc page for Path: /nodes/{node}/qemu/{vmid}/status/current doesn't mention returning uptime - it actually just says "Returns: object".)

To clarify - this is just current uptime, as in, how long the machine has currently been booted for?

If I want to get how long the VM has actually existed for - i.e. days since creation time - is there any way to get that?

Also - any way to get which user created a VM?
 


victorhooi

Member
Damn - random thought - is there perhaps a logfile where this stuff is stored (creation time of the VM, which user created it, etc.)?

Or some way of increasing logging verbosity so it is logged?

I'm thinking whether I could parse such a logfile, as a workaround.
 

victorhooi

Member
So I’m tailing /var/log/messages.

When I start a VM, I see:

Code:
Jul  3 23:06:17 syd1 pvedaemon[617005]: <root@pam> starting task UPID:syd1:000A4DBA:0112844F:5D1CA849:qmstart:108:root@pam:
When I shutdown a VM:
Code:
Jul  3 23:07:20 syd1 pvedaemon[617005]: <root@pam> starting task UPID:syd1:000A4F8E:01129CB7:5D1CA888:qmshutdown:108:root@pam:
When I clone a template to a new VM:
Code:
Jul  3 23:08:01 syd1 pvedaemon[675696]: <root@pam> starting task UPID:syd1:000A506E:0112ACD6:5D1CA8B1:qmclone:104:root@pam:
Jul  3 23:08:01 syd1 pvedaemon[675696]: <root@pam> end task UPID:syd1:000A506E:0112ACD6:5D1CA8B1:qmclone:104:root@pam: OK
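The interesting fields can be pulled out of those syslog lines with a regex — a sketch only, assuming the exact layout shown in the samples above:

```python
import re

# The UPID embedded in the pvedaemon syslog line carries the task type,
# the VMID and the issuing user.
SYSLOG_TASK = re.compile(
    r"starting task UPID:[^:]+:[^:]+:[^:]+:[^:]+:"
    r"(?P<type>[^:]+):(?P<vmid>[^:]*):(?P<user>[^:]+):"
)

def task_from_syslog(line: str):
    """Return {'type', 'vmid', 'user'} for a 'starting task' line, else None."""
    m = SYSLOG_TASK.search(line)
    return m.groupdict() if m else None
```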
Is there a guide somewhere for what each of the fields means? And how come the clone action doesn't give you the ID of the newly cloned machine?

What is the number after pvedaemon?
The string after UPID I assume is the Proxmox node name in the cluster.
There are three colon-delimited strings directly after that - what are they?
And then the user is at the end.

The context for all of this is - we need to track which users created which VMs, and at what time, and how long each VM has been around for.

Is parsing /var/log/messages the best way to go about doing this?

Thanks,
Victor
 

oguz

Proxmox Staff Member
Staff member
Nov 19, 2018
1,249
138
63
What is the number after pvedaemon?
pid of daemon worker

The string after UPID I assume is the Proxmox node name in the cluster.
yes

There are three colon-delimited strings directly after that - what are they?
Is there a guide somewhere for what each of the fields mean?
it's documented in the source code for pve-manager or pve-common

And how come the clone action doesn't give you the ID for the new cloned machine?
you can find it in the detailed log for that task under /var/log/pve/tasks/

like this:

Code:
create full clone of drive scsi0 (vms:base-6000-disk-0)
  Using default stripesize 64.00 KiB.
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Logical volume "vm-8000-disk-0" created.
where vm-ID-disk-DISKID is the format
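assuming that vm-ID-disk-DISKID convention holds, the new VMID can be scraped from a clone task log like this (a sketch only - the task log format isn't a stable interface):

```python
import re

# Base/template volumes are named "base-<vmid>-disk-<n>", so matching only
# "vm-<vmid>-disk-<n>" picks up the newly created clone volume.
NEW_DISK = re.compile(r"\bvm-(\d+)-disk-\d+\b")

def cloned_vmid(task_log: str):
    """Return the target VMID found in a qmclone task log, or None."""
    m = NEW_DISK.search(task_log)
    return int(m.group(1)) if m else None
```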

general format for UPID:
Code:
UPID:$node:$pid:$pstart:$starttime:$dtype:$id:$user
where pid, pstart and starttime are hex encoded
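that format can be unpacked in a few lines of python; the hex decoding below just follows the field list above:

```python
from collections import namedtuple

UPID = namedtuple("UPID", "node pid pstart starttime dtype id user")

def parse_upid(upid: str) -> UPID:
    """Split a UPID:$node:$pid:$pstart:$starttime:$dtype:$id:$user: string,
    decoding the three hex-encoded fields to integers."""
    parts = upid.split(":")
    if parts[0] != "UPID" or len(parts) < 8:
        raise ValueError(f"not a UPID: {upid!r}")
    _, node, pid, pstart, starttime, dtype, task_id, user = parts[:8]
    return UPID(node, int(pid, 16), int(pstart, 16), int(starttime, 16),
                dtype, task_id, user)
```

for example, `parse_upid("UPID:syd1:000A506E:0112ACD6:5D1CA8B1:qmclone:104:root@pam:")` gives the epoch start time 1562159281.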

The context for all of this is - we need to track which users created which VMs, and at what time, and how long each VM has been around for.

Is parsing /var/log/messages the best way to go about doing this?
maybe...
but they only get logged as 'pvedaemon' if the gui/api was used to create the task. if for example i use the `qm` command to create/stop/clone, then it'll get logged as 'qm' and not 'pvedaemon'; similarly it'll be logged as 'pct' for containers, because that's the name of the process.

another option is to parse /var/log/pveproxy/access.log for POST/DELETE requests. you can also see human-readable date/time info there, along with what kind of request was made and the http response code
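a rough parser for those access-log lines, assuming the usual layout (`IP - user [date] "METHOD path HTTP/1.1" status size`) and only looking at write requests:

```python
import re
from datetime import datetime

# Only POST/PUT/DELETE requests correspond to state-changing actions.
ACCESS = re.compile(
    r'^(?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>POST|PUT|DELETE) (?P<path>\S+) [^"]+" (?P<status>\d+)'
)

def parse_access(line: str):
    """Return ip/user/method/path/status plus a parsed 'when' datetime,
    or None for non-matching (e.g. GET) lines."""
    m = ACCESS.match(line)
    if not m:
        return None
    d = m.groupdict()
    d["when"] = datetime.strptime(d.pop("ts"), "%d/%m/%Y:%H:%M:%S %z")
    return d
```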

edit:

forgot to mention that pveproxy log is also for gui actions
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
3,702
569
133
there is also "pvenode task list", e.g. with "pvenode task list --vmid 987 --type qmcreate" you should get the tasks that create VMs for this ID, unless they have been rotated out of the task archive already. it's also available over the API of course (GET /nodes/NODE/tasks)
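a sketch of turning such a task listing into a creation timestamp — this assumes the entries carry `type`, `id` and `starttime` fields matching the columns of the pvenode output; note that for qmclone tasks the listed ID is the *source* VM, so only qmcreate is matched here:

```python
def creation_time(tasks, vmid):
    """Return the epoch start time of the qmcreate task for this VMID,
    or None if no such task is (still) in the archive.

    For cloned VMs the qmclone task ID refers to the template, so their
    creation time has to come from the clone's task log instead.
    """
    created = [
        t for t in tasks
        if t.get("type") == "qmcreate" and str(t.get("id")) == str(vmid)
    ]
    return min((t["starttime"] for t in created), default=None)
```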
 

victorhooi

Member
Thank you for the detailed answer! I would never have discovered this otherwise. (Maybe I should document it somewhere?)

I used the info you provided to search the source code - it seems this part of the log line is constructed in pve-common/src/PVE/Tools.pm.

1. One question - what is “dtype”? Are the various possible values stored somewhere?

I just checked /var/log/pveproxy/access.log - here are the equivalent log lines there.

Starting a VM:
Code:
127.0.0.1 - root@pam [03/07/2019:23:06:17 +1000] "POST /api2/extjs/nodes/syd1/qemu/108/status/start HTTP/1.1" 200 81
Shutdown a VM:
Code:
127.0.0.1 - root@pam [03/07/2019:23:07:20 +1000] "POST /api2/extjs/nodes/syd1/qemu/108/status/shutdown HTTP/1.1" 200 84
Cloning a VM:
Code:
127.0.0.1 - root@pam [03/07/2019:23:08:01 +1000] "POST /api2/extjs/nodes/syd1/qemu/104/clone HTTP/1.1" 200 81
I had a look in /var/log/pve/tasks as well - it seems this is more detailed info, which is what I need. For example:
Code:
root@syd1:/var/log/pve/tasks/3# cat "UPID:syd1:0013547F:0201D71B:5D147AA3:qmclone:105:kyotani@anguslab.io:"
create linked clone of drive efidisk0 (vm-storage:base-105-disk-1)
clone base-105-disk-1: base-105-disk-1 snapname __base__ to vm-135-disk-0
create linked clone of drive sata0 (vm-storage:base-105-disk-0)
clone base-105-disk-0: base-105-disk-0 snapname __base__ to vm-135-disk-1
TASK OK
I had a look in /var/log/pve/tasks/index - however, I’m not sure how this file works.

2. How do you map a particular task to a subdirectory under /var/log/pve/tasks?

3. How long are tasks preserved here, before they are rotated out?

4. The friendly name for the clone doesn’t appear to be printed anywhere? Can you get this somewhere else?

5. Likewise, in the task log for qmclone - it mentions the disk (e.g. base-105-disk-0) - but how do you get the friendly name for VM 105?

I also looked at the pvenode task list command:
Code:
root@syd1:/var/log/pve/tasks/3# pvenode task list --type qmcreate
┌─────────────────────────────────────────────────────────────┬──────────┬─────┬──────────┬────────────┬────────────┬────────┐
│ UPID                                                        │ Type     │ ID  │ User     │  Starttime │    Endtime │ Status │
├─────────────────────────────────────────────────────────────┼──────────┼─────┼──────────┼────────────┼────────────┼────────┤
│ UPID:syd1:000B4ACF:012C738E:5D0F5189:qmcreate:131:root@pam: │ qmcreate │ 131 │ root@pam │ 1561285001 │ 1561285001 │ OK     │
└─────────────────────────────────────────────────────────────┴──────────┴─────┴──────────┴────────────┴────────────┴────────┘
root@syd1:/var/log/pve/tasks/3# pvenode task list --type qmclone
┌───────────────────────────────────────────────────────────────────────┬─────────┬─────┬─────────────────────┬────────────┬────────────┬────────┐
│ UPID                                                                  │ Type    │ ID  │ User                │  Starttime │    Endtime │ Status │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:000229D3:0037CCD3:5D0FE60D:qmclone:105:nsakala@anguslab.io: │ qmclone │ 105 │ nsakala@anguslab.io │ 1561323021 │ 1561323022 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:0008B98C:00E770FD:5D11A7B2:qmclone:104:root@pam:            │ qmclone │ 104 │ root@pam            │ 1561438130 │ 1561438130 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:000A506E:0112ACD6:5D1CA8B1:qmclone:104:root@pam:            │ qmclone │ 104 │ root@pam            │ 1562159281 │ 1562159281 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:000AC746:011F0FF5:5D1CC867:qmclone:105:rupais@anguslab.io:  │ qmclone │ 105 │ rupais@anguslab.io  │ 1562167399 │ 1562167399 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:001316E6:01FC7BEB:5D146CEC:qmclone:105:kyotani@anguslab.io: │ qmclone │ 105 │ kyotani@anguslab.io │ 1561619692 │ 1561619693 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:00131985:01FCAE8A:5D146D6E:qmclone:105:kyotani@anguslab.io: │ qmclone │ 105 │ kyotani@anguslab.io │ 1561619822 │ 1561619823 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:00132079:01FD3FAA:5D146EE2:qmclone:102:kyotani@anguslab.io: │ qmclone │ 102 │ kyotani@anguslab.io │ 1561620194 │ 1561620194 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:00134B36:020141CA:5D147924:qmclone:105:root@pam:            │ qmclone │ 105 │ root@pam            │ 1561622820 │ 1561622821 │ OK     │
├───────────────────────────────────────────────────────────────────────┼─────────┼─────┼─────────────────────┼────────────┼────────────┼────────┤
│ UPID:syd1:0013547F:0201D71B:5D147AA3:qmclone:105:kyotani@anguslab.io: │ qmclone │ 105 │ kyotani@anguslab.io │ 1561623203 │ 1561623203 │ OK     │
└───────────────────────────────────────────────────────────────────────┴─────────┴─────┴─────────────────────┴────────────┴────────────┴────────┘
It seems like an alternative way to get the data, versus /var/log/pve/tasks.

What are the pros/cons of this command, versus watching the log files? (I assume I’d have to set up a filesystem watch process to look for new files in /var/log/pve/tasks, versus just running pvenode task list every minute, etc.)
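If polling pvenode task list (or GET /nodes/NODE/tasks) every minute, the only state that needs to persist between runs is the set of UPIDs already processed. A sketch of that dedup step, assuming every task entry carries a unique `upid` field as in the tables above:

```python
def dedupe(seen: set, tasks):
    """Yield only tasks whose UPID has not been seen on an earlier poll.

    `seen` must persist between polls (kept in memory by a daemon, or
    serialized to disk between cron runs).
    """
    for task in tasks:
        if task["upid"] not in seen:
            seen.add(task["upid"])
            yield task
```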

Does it make sense for Proxmox to store this data in a database somewhere?

My other thought was using the new hookscript functionality - but not sure how easy it’d be to get all of these fields at VM creation time?

What do you think?
 

victorhooi

Member
Is anybody able to help with the above questions, i.e. with deciphering the tasks output?

In particular, I'm stuck on how to get the friendly name for clones (as they appear in the GUI), or friendly names for disks?

And - is there any interest in getting some kind of dashboard, or export of this data to some kind of logging/event database? (e.g. InfluxDB)
 

fabian

Proxmox Staff Member
Staff member
Sorry for missing your initial set of questions. You probably figured out most of them yourself by now ;) if not, just ask again!

the "friendly name" (I assume you mean hostname?) for clones is not contained in the clone task log, so you'd need an extra API/qm/pct call to retrieve it. note that the "VMXXX" placeholder is only set in the cluster-wide resource API call (that gets used to display the tree in the GUI, or the content of the search panel). either /cluster/resources, or /nodes/$NODE/lxc/$VMID/config resp. /nodes/$NODE/qemu/$VMID/config, should work.

I'd like to see more structured task information added to the mix (e.g., to make it possible to return a JSON object as task result, instead of just an enum for status and unstructured task log content). we've discussed this a few times on pve-devel already, but no patches have emerged yet.
 
