Avoid Reuse of VMID

Sahil

Member
Jun 14, 2022
Hi Team,

We are facing challenges maintaining backups in PBS due to the reuse of VMIDs.

VMIDs should be auto-incremental, like in a database.

Currently, when you remove a VM/CT, the ID becomes available again, and the next time you create a virtual machine it will be used (Proxmox suggests the lowest available ID).

Example: VM ID 100 is backed up in PBS. VM ID 100 is then deleted from PVE, and some time later you create a new VM. PVE will choose the free ID, which is 100 again, so when we take a backup it overwrites the old VMID 100's data.

So we need a solution for this: VMs should only be created incrementally (last created VMID + 1). If I delete VMID 100, the same ID 100 should not be used again in the future; instead, the next VM should be created as VMID 101.
 
See the changelog for PVE 7.2:
  • Support configuring the range that new VMIDs are selected from when creating a VM or CT.
You can set the upper and lower boundaries in the datacenter's options panel. Setting lower equal to upper disables auto-suggestion completely.
So if multiple clusters share the same PBS, you could give each cluster a non-overlapping VMID range to choose from.
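If I remember the syntax correctly, this range is the `next-id` datacenter option. A minimal sketch of the relevant line in /etc/pve/datacenter.cfg (the range values are made-up examples):

```
next-id: lower=10000,upper=19999
```

The same can be set from the GUI under Datacenter → Options, or on the command line via something like `pvesh set /cluster/options --next-id lower=10000,upper=19999`.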

And on the PBS side you could make use of the namespaces added in PBS 2.2, so that each cluster uses its own namespace. That way it is no problem for multiple guests to use the same VMIDs, and you still get a good deduplication ratio.
 
See the changelog for PVE 7.2:
I think OP's problem is unrelated to multi-cluster collision. If I understand the post correctly, OP has original VM100 that is backed up with retention=1, as an example. The VM is deleted later, but backup needs to be preserved.
A new VM is created and it is auto-assigned next available ID which is 100. On next backup the retention will cause original VM data to be discarded from PBS.

A short-term solution for the OP is to always specify the VM ID on creation and to keep track of it outside of PVE. Another option is to move the backed-up data elsewhere on VM removal if the backup needs to be preserved.
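"Keeping track of it outside of PVE" can be as small as a never-reuse allocator that persists a high-water mark. A minimal sketch (the class name, state-file format, and starting ID are all my own invention, not anything PVE provides):

```python
import json
import os

class VmidAllocator:
    """Hand out strictly ascending VMIDs and never reuse one.

    The highest VMID ever handed out is persisted in a small JSON
    state file, so deleting a VM does not free its ID.
    """

    def __init__(self, state_file, first_vmid=100):
        self.state_file = state_file
        self.first_vmid = first_vmid

    def next_vmid(self):
        # Load the last VMID we ever handed out (if any).
        if os.path.exists(self.state_file):
            with open(self.state_file) as f:
                last = json.load(f)["last_vmid"]
        else:
            last = self.first_vmid - 1
        vmid = last + 1
        # Persist before returning, so the ID stays "burned" even if
        # the VM it is assigned to is deleted later.
        with open(self.state_file, "w") as f:
            json.dump({"last_vmid": vmid}, f)
        return vmid
```

The returned ID would then be passed explicitly to the create call (e.g. `qm create <vmid> ...`) instead of accepting the auto-suggested one.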

@Sahil given that such functionality does not exist currently, you would need to file an enhancement request: https://forum.proxmox.com/threads/where-to-post-feature-requests.46317/


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
A similar problem occurs when you grant a user permission on a particular VMID. The permission remains valid after you delete the server, so that user gets access to a completely different server when the VMID is reused :(

Not to mention that we are trying to pair the VMID to billing in another system, and it creates problems with duplicate entries.
 
Not to mention that we are trying to pair the VMID to billing in another system, and it creates problems with duplicate entries.
I already answered you in the bug report as well, but this really sounds like you need something on top of PVE that does that integration and tracking for you.
 
@fabian We are currently building a Proxmox ManageIQ Provider together with the ManageIQ Team.

But once again we face the issue of reusing the VMIDs.

As you mentioned above, something outside of PVE should keep track of it. But PVE does not offer a unique ID to keep track of.
Is there a chance that the patch in Bug 4369 will be merged in the near future? Currently it's a blocker for the provider: https://github.com/ManageIQ/manageiq-providers-proxmox/issues/8


Aside from the provider, we see issues with reused VMIDs in our daily business. It affects metrics, backups, and everything else that is tied to the VMID.

Thanks,
Rene
 
As far as I can tell, there was never a new version of the patch series after the last round of reviews.
 
I don't know of any plans to pick up that particular patch series on our end, but we'd be happy to review if somebody else wants to drive it across the finish line. I think for your particular use case, you could use a tuple of (smbios uuid, vmid) as identifier, or use a different smbios uuid to invalidate/rename an old record for the same VMID?
 
I don't think this is a good or stable approach. And I know it's open source and we could contribute the technical side of the code. But this seems to be a pretty important architectural decision at the core of Proxmox, Backup Server, and much more.

What would be needed to start an internal discussion about this?
 
we've had plenty of discussions about this. we have no plans at the moment to switch the used IDs to a non-human-readable one, and there are already two IDs in the VM config that identify the VM and are generated on creation and re-generated at various life cycle points:
- smbios1 uuid
- vmgenid

these (or the creation metadata in the VM config) can be used to disambiguate VM instances reusing a particular VMID.
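As a rough illustration of using these generated IDs for disambiguation: both live in the VM config as plain `key: value` lines (as returned by e.g. `qm config <vmid>`). The helper below is my own sketch, not an official API, and assumes that text format:

```python
import re

def instance_identity(vmid, config_text):
    """Build a (vmid, smbios1-uuid, vmgenid) tuple from the raw text
    of a PVE VM config. The VMID alone is ambiguous across reuse;
    the two generated IDs disambiguate VM instances."""
    uuid = None
    vmgenid = None
    for line in config_text.splitlines():
        m = re.match(r"smbios1:\s*(.+)", line)
        if m:
            # smbios1 is a comma-separated property list,
            # e.g. "uuid=...,serial=..."
            for prop in m.group(1).split(","):
                key, _, value = prop.partition("=")
                if key.strip() == "uuid":
                    uuid = value.strip()
        m = re.match(r"vmgenid:\s*(\S+)", line)
        if m:
            vmgenid = m.group(1)
    return (vmid, uuid, vmgenid)
```

Two configs with the same VMID but different uuid/vmgenid values then compare as different logical guests.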
 
I am not talking about changing it to a non-human-readable format. I am talking about reusing the VMID, which is indeed a problem in a lot of use cases in your own software, speaking of backups, disks, and other things that are linked to a VMID.

When reusing the ID, some random backup or disk can become associated with my VM, which is neither helpful nor a desired mechanism.

I see the IDs you mention, but I can't "talk" to them via the API. I can't query by smbios1 uuid=xyz; I have to call the API at .../100/config.

And all of the IDs you mention can be changed by the user. Proxmox is missing an ID that is immutable and ascending. It would be enough to make the current VMID strictly ascending and not changeable.
 
And all of the IDs you mention can be changed by the user. Proxmox is missing an ID that is immutable and ascending. It would be enough to make the current VMID strictly ascending and not changeable.

"Burning" previously used IDs is what the referenced patch series is about, so if you want those semantics, somebody needs to pick it up and drive it over the finish line, as I wrote above.

When reusing the ID, some random backup or disk can become associated with my VM, which is neither helpful nor a desired mechanism.

I see the IDs you mention, but I can't "talk" to them via the API. I can't query by smbios1 uuid=xyz; I have to call the API at .../100/config.

Yes, reuse in the context of backup/restore is the most common issue (but not the only one; to avoid all of them you simply must never reuse an ID for a different "logical" guest). In that scenario you have both configs and can compare the IDs before restoring. I guess we could add a hint in the UI if there is a mismatch?
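The comparison described here can be sketched in a few lines. This is a hypothetical helper of my own, assuming both the backed-up and the live VM config are available as raw `key: value` text; neither the function name nor this workflow is part of any PVE/PBS API:

```python
def restore_is_safe(backup_config, live_config):
    """Compare the smbios1 uuid stored in a backed-up VM config with
    the one of the live VM that currently owns the same VMID.
    A mismatch means the VMID was reused for a different logical
    guest, so restoring (or letting retention prune the old backup)
    would mix two unrelated machines."""
    def smbios_uuid(text):
        for line in text.splitlines():
            if line.startswith("smbios1:"):
                # smbios1 holds a comma-separated property list
                for prop in line.split(":", 1)[1].split(","):
                    key, _, value = prop.partition("=")
                    if key.strip() == "uuid":
                        return value.strip()
        return None
    old, new = smbios_uuid(backup_config), smbios_uuid(live_config)
    # Only claim safety when both uuids are present and identical.
    return old is not None and old == new
```

A UI hint would be the inverse: warn whenever this check fails before a restore is confirmed.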
 
Hi Fabian,

When there is consensus about the problematic behavior, why is the "somebody" not someone on the Proxmox team? It would simplify the process a lot if you implemented it in house, would it not? On our side, we have never developed anything in the Proxmox codebase, and reading into the code, building a test system, and then using your mail-based Git workflow is a lot of work to begin with. Not getting the code into the main project because of architectural concerns is another risk we can't afford right now. To say it straight: we won't develop the patch, sorry.

I think a hint would be a good start, but maybe a setting where you can toggle a blocking mechanism so old VMIDs are not reused would be best. Or a switch to configure "use the next free ID (current reuse behaviour)", "random ID", or "unique next ID". Because unfortunately, "simply" never reusing an already-used VMID is not that simple in the current state.

I just want to revive the topic, because I think it is an ongoing problem and it makes it harder to integrate Proxmox into other (management, backup, deployment, ...) systems. And it seems like a solvable problem to me.