[SOLVED] VM ID numbers

dpearceFL

Member
Jun 1, 2020
Q1: I create and destroy a lot of VMs, so I reuse the automatically assigned VM ID numbers. Since PBS uses the VM ID numbers to store the backups under
Code:
/mnt/datastore/1TB/vm
will this confuse PBS, given that it uses incremental backups and deduplication?

Q2: If I have multiple PVE servers NOT in the same Datacenter, there will be duplicate VM IDs. Will this confuse PBS?
 
will this confuse PBS considering it uses "incremental backups, deduplication" methods?
Incremental backups and deduplication are not affected, but backups of different VMs that share the same ID will be grouped together, since PBS assumes they belong to the same VM. This can affect prune jobs and retention, among other things. It is strongly recommended to allocate new VMIDs for new VMs if you don't want to run into those kinds of problems.
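
One way to keep VMIDs from colliding across hosts is to restrict the IDs each PVE installation auto-suggests. As a sketch (the range values below are arbitrary examples, and this assumes a PVE version recent enough to support the `next-id` option in `/etc/pve/datacenter.cfg`):
Code:
# /etc/pve/datacenter.cfg on the first PVE host -- example range
next-id: lower=100,upper=999

# /etc/pve/datacenter.cfg on the second PVE host
next-id: lower=1000,upper=1999
With non-overlapping ranges per host, the automatically suggested IDs can no longer clash between installations.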

If I have multiple PVE servers NOT in the same Datacenter, there will be duplicate VM IDs. Will this confuse PBS?
For this purpose we support namespaces since PBS 2.2, so each cluster can use its own namespace for its backups [1].

[1] https://pbs.proxmox.com/docs/storage.html#backup-namespaces
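
As a sketch of how that can look (the namespace name, repository string, and storage ID below are placeholders, not values from this thread): a namespace can be created with `proxmox-backup-client`, and the PBS storage entry on each PVE cluster can then be pointed at it:
Code:
# Create a namespace on the PBS datastore
# (repository string is a placeholder):
proxmox-backup-client namespace create cluster-a \
    --repository backup@pbs@pbs.example.com:1TB

# On the PVE side, point the existing PBS storage at that namespace
# ("my-pbs" is a placeholder storage ID):
pvesm set my-pbs --namespace cluster-a
With a different namespace per cluster, backups from VMs that share a VMID no longer end up in the same backup group.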
 
Concerning the first answer, thank you. I will start using a range of VM IDs for each PVE host.

Where do I see the namespace and how do I make it unique?

All of my backups are stored under /mnt/datastore/1TB/vm with no differentiation between PVE hosts on the PBS server.
 
