No common base snapshot on volume

eurowerfr

Hi,

I have 2 Proxmox servers with ZFS as storage.

I created one CT with its root disk on the SSD ZFS pool and one mount point on the HDD ZFS pool.

I created a replication rule to the 2nd server: everything is OK!

I did the "same thing" with another CT, but I always get the following error:

2024-11-09 13:55:11 5201-0: start replication job
2024-11-09 13:55:11 5201-0: guest => CT 5201, running => 1
2024-11-09 13:55:11 5201-0: volumes => local-zfs:subvol-5201-disk-0,zdata:subvol-5201-disk-0
2024-11-09 13:55:11 5201-0: end replication job with error: No common base snapshot on volume(s) local-zfs:subvol-5201-disk-0,zdata:subvol-5201-disk-0
Please remove the problematic volume(s) from the replication target or delete and re-create the whole job 5201-0

I tried deleting all snapshots, and I even destroyed and re-created my CT; zfs list -t snapshot shows no snapshot related to 5201.
I created the replication rule once again: same error.

This error (bug?) occurs on Proxmox 8.2.4, and I have the same problem on another cluster running the same Proxmox version.

I don't understand what is wrong!

Please, can you help me?
 
I've seen that also. I just did what it told me: "... or delete and re-create the whole job 5201-0". (In the GUI, without CLI magic.)
 
Exact same here.
I was not even able to remove the job; I had to use --force in the CLI.
Then I took a snapshot as a base and recreated the replication job.
Still the same.
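For reference, force-removing a stuck job from the CLI looks roughly like this (a sketch; the job id 5201-0 is taken from the log above):

# show the configured replication jobs and their current state
pvesr list
pvesr status

# remove the job even if cleaning up the target side fails
pvesr delete 5201-0 --force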
 
Solved.

I migrated the VM back to its original server and set up the replication again.
That did the trick; it replicates fine now.

But I'm not any wiser now ;-)
I don't know why it happened. I also checked whether the disk was full or had other problems, but found none.

So, if anyone can tell me what the problem behind this error message is, I would much appreciate it.
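If someone wants to reproduce that workaround from the CLI, it would look roughly like this (a sketch of the general idea, not the exact commands used above; the node names pve1/pve2 and the schedule are placeholders):

# move the guest back to its original node (use qm migrate for a VM)
pct migrate 5201 pve1 --restart

# re-create the replication job towards the second node
pvesr create-local-job 5201-0 pve2 --schedule "*/15"

# trigger a run right away and watch the outcome
pvesr schedule-now 5201-0
pvesr status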
 
With Proxmox I run into bug after bug; nothing "pro" about it :-(

And this poor sentence tells us... what???

If you have a new problem, start a new thread and describe exactly what that problem is - and describe your host/network/cluster/storage/xyz. The more details you post, the higher the chance of getting a useful answer.
 
I'm no longer looking for help; I'm disappointed by the bug after bug we keep running into for nothing, on freshly installed and up-to-date servers... I have seen a spectacular deterioration in how this software works over the last 15 years... That's all LOL
 
I had the same problem, but the solution from bratak didn't work for me.
When I took a closer look at the volumes on the 3 Proxmox machines, I found differences in their sizes.
On my prox1 (where the CT runs) there are 2 volumes, 35 GB and 12 GB.
On both of the others (where I tried to replicate it to) I found 2 volumes (on prox2) and 4 (on prox3), all of them 12 GB in size.
So I deleted them and recreated the replication jobs, and now everything works fine.
No idea what the reason for this was. Maybe a shutdown during a replication.
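What probably happened there is that stale, partially replicated datasets were left behind on the targets. Cleaning them up by hand would look roughly like this (a sketch; the pool and dataset names below are examples for a default local-zfs layout and must match your own pools, and zfs destroy permanently removes the copies on that target):

# on each target node, inspect the leftover volumes and their sizes
zfs list -o name,used,refer | grep subvol-5201

# destroy the stale copies together with their snapshots
zfs destroy -r rpool/data/subvol-5201-disk-0
zfs destroy -r zdata/subvol-5201-disk-0

# afterwards re-create the replication job so it starts with a full send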
 
I had the same problem and believe it was caused by a power interruption on a switch during replication. Deleting the replicated volume on the target node allowed the next replication to work smoothly. It would be prudent to have a fresh backup when you do this.
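Taking that fresh backup first is a good idea; something along these lines would do it (a sketch; the storage name backup-store is a placeholder for wherever you keep backups):

# back up the container before touching the replicated volumes
vzdump 5201 --storage backup-store --mode snapshot --compress zstd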