Greetings, and thank you for sharing such a great product.
I have run into something in my home lab that I think others may encounter in a more production-oriented environment.
I have a Proxmox Backup Server running as a VM on a standalone physical server, which runs a single instance of Proxmox VE dedicated to backups and minimal DR / cluster-recovery duty in the event my primary Proxmox cluster is down. The backup server VM owns a number of SSDs provided to it via hardware passthrough and runs them as a ZFS pool for backup storage. All of this works great. For clarity, the Proxmox VE instance on the backup hardware is not part of my main cluster; it operates as a standalone node.
Where I have run into an issue is that the backup server uses the VM ID as the primary identifier for the backup group. In my case, the same VM IDs are reused between the Proxmox VE instance on the backup hardware and the Proxmox VE instances in my primary cluster. For example, I have a router pair using VM IDs 201 and 203 on both the cluster and the backup node. Even though backup jobs on both the cluster and the backup node include VMs 201 and 203, only the VMs from the cluster are actually being written to the backup storage pool.
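To illustrate the collision, here is a minimal toy sketch in Python (not actual PBS code; the `vm/<ID>` group format is just my understanding of how the datastore names its groups):

```python
# Toy model: PBS identifies a backup group by backup type + VM ID only,
# with no source/cluster component in the key.
groups = {}

def group_key(backup_type, vmid):
    # No per-source component in the key -- this is the crux of the issue.
    return f"{backup_type}/{vmid}"

for source, vmid in [("cluster", 201), ("cluster", 203),
                     ("backup-node", 201), ("backup-node", 203)]:
    groups.setdefault(group_key("vm", vmid), []).append(source)

print(groups)
# {'vm/201': ['cluster', 'backup-node'], 'vm/203': ['cluster', 'backup-node']}
# Both sources funnel into the same two backup groups.
```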
I do have unique users defined on the backup server, one for the cluster and one for the backup node, as I would expect to see in a typical production environment.
Perhaps a future feature could combine some kind of source identifier or user prefix with the VM ID, so the same VM ID can be stored from multiple sources; the sketch below shows roughly what I mean. I can see this kind of issue arising in a business where more than one cluster exists.
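A rough sketch of the idea (hypothetical naming, not an existing PBS feature):

```python
def group_key(source, backup_type, vmid):
    # Hypothetical: add a per-source prefix (the PBS user, a cluster
    # name, or similar) so identical VM IDs from different sources
    # no longer map to the same backup group.
    return f"{source}/{backup_type}/{vmid}"

print(group_key("cluster-a", "vm", 201))     # cluster-a/vm/201
print(group_key("backup-node", "vm", 201))   # backup-node/vm/201
```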
In my home lab I can work around this by changing the VM IDs on the affected VMs, but it could definitely become more of an issue in a busy production environment, and I could even see the potential for production data loss if someone were unaware of the VM ID conflict.