VMID conflicts on datastore

AX_florian
May 11, 2021
My department is currently encountering a known issue with identical VMIDs from different PVE servers on one PBS datastore.
We have several independent PVE instances (no clusters) and one PBS instance with two datastores. When two or more PVEs are creating snapshots on the same datastore, some of the VMIDs (starting with 100 on every server) overlap. See Link 1 and Link 2 for similar threads.

Both threads mention a "namespace" feature on the Roadmap:
"Backup to one (physical) datastore from multiple Proxmox VE clusters, avoiding backup naming conflicts"

My questions are:
  1. Is there an ETA for this specific feature?
  2. Since clusters are specifically mentioned, will they be needed for this feature to work?
  3. Is clustering considered a Best Practice and should we use it anyway?
  4. What workarounds are there, except using multiple datastores?
Also see my previous post for information on the systems in question. Thank you in advance.
 
My department is currently encountering a known issue with identical VMIDs from different PVE servers on one PBS datastore.
We have several independent PVE instances (no clusters) and one PBS instance with two datastores. When two or more PVEs are creating snapshots on the same datastore, some of the VMIDs (starting with 100 on every server) overlap. See Link 1 and Link 2 for similar threads.

Both threads mention a "namespace" feature on the Roadmap:
"Backup to one (physical) datastore from multiple Proxmox VE clusters, avoiding backup naming conflicts"

My questions are:
  1. Is there an ETA for this specific feature?
no
  2. Since clusters are specifically mentioned, will they be needed for this feature to work?
no - it applies to multiple standalone nodes as well ;)
  3. Is clustering considered a Best Practice and should we use it anyway?
it depends on your environment - are the nodes colocated (fast LAN between) and would you benefit from a single point of administration, potentially setting up HA? if so, you might want to consider creating a cluster. if you don't miss any of that so far, keeping the setup with multiple standalone nodes is of course perfectly fine as well.
  4. What workarounds are there, except using multiple datastores?
ensuring that VMIDs are not duplicated across nodes ;) multiple datastores are probably your best bet atm..
 
Thank you, that pretty much confirms my expectations.
Is there a (relatively) painless way to change IDs or is Backup/Restore still the way to go?
And for new VMs, can we set a different starting ID (i.e. begin counting from 700 instead of 100)?
 
Thank you, that pretty much confirms my expectations.
Is there a (relatively) painless way to change IDs or is Backup/Restore still the way to go?
And for new VMs, can we set a different starting ID (i.e. begin counting from 700 instead of 100)?
Hello,

Just edit the /usr/share/perl5/PVE/API2/Cluster.pm file.

Edit this line: for (my $i = 100; $i < 10000; $i++) { . Change 100 to the ID you want new VMs to start from.

After the change, you need to restart pveproxy. (systemctl restart pveproxy.service)
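For reference, that edit can be sketched as a sed one-liner. The demo below runs on a scratch copy of the relevant line so it is safe to try anywhere; on a real node the file is /usr/share/perl5/PVE/API2/Cluster.pm, and keep in mind that a pve-manager package upgrade will overwrite the file, so the change has to be re-applied after updates.

```shell
# Demo on a scratch copy of the loop line (the real file is
# /usr/share/perl5/PVE/API2/Cluster.pm; back it up before editing).
demo=/tmp/Cluster.pm.demo
printf 'for (my $i = 100; $i < 10000; $i++) {\n' > "$demo"

# Change the lower bound from 100 to 700:
sed -i 's/\$i = 100;/\$i = 700;/' "$demo"

cat "$demo"
# On the real node, finish with: systemctl restart pveproxy.service
```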
 
Thank you, that pretty much confirms my expectations.
Is there a (relatively) painless way to change IDs or is Backup/Restore still the way to go?
no, unless you feel comfortable with manually renaming storage volumes and changing the references inside config files accordingly..
And for new VMs, can we set a different starting ID (i.e. begin counting from 700 instead of 100)?
no
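To give an idea of what that manual rename involves, here is a hedged sketch of moving a VM from ID 100 to 700, simulated below on dummy files. On a real node the guest config lives in /etc/pve/qemu-server/, the VM must be stopped first, and the volume rename step depends on your storage type (for LVM-thin it would be lvrename); all names and paths here are illustrative only.

```shell
# Simulated on dummy files; real paths and storage names will differ.
confdir=/tmp/qemu-server-demo
mkdir -p "$confdir"
printf 'scsi0: local-lvm:vm-100-disk-0,size=32G\n' > "$confdir/100.conf"

# 1. Rename the storage volume itself
#    (real LVM-thin example: lvrename pve vm-100-disk-0 vm-700-disk-0)
# 2. Update the references inside the config file:
sed -i 's/vm-100-disk/vm-700-disk/g' "$confdir/100.conf"
# 3. Rename the config file so the VM gets the new ID:
mv "$confdir/100.conf" "$confdir/700.conf"

cat "$confdir/700.conf"
```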
 
