Often, especially when migrating VMs back and forth (A -> B and then B -> A) during maintenance, my ZFS replication gets into a state where it fails with errors like:
"volume 'ssdtank/vmdata/vm-117-disk-0' already exists"
or claims target and source don't have a common ancestor version.
"Already exists" is of course true, but there most definitely is a common ancestor -- the volume was just migrated from A->B, but migrating it back (B->A) now fails. Proxmox has clearly lost some metadata in the process, since it consider the volume existing on A a surprise ("already exists").
The problem is, vm-117-disk-0 is 10.5 TB, so I really can't always delete it and re-sync from scratch.
What might this lost metadata be, and is there a way to fix this state manually so that replication can resume?
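For reference, one way I can think of to verify the common-ancestor claim (assuming the dataset has the same path on both nodes, as the error suggests) is to compare snapshot GUIDs on A and B -- a snapshot pair with the same guid should be usable as an incremental base regardless of name:

# run on node A and node B and compare the guid columns
zfs list -H -t snapshot -o name,guid ssdtank/vmdata/vm-117-disk-0

But even if a matching snapshot exists on both sides, I don't know which Proxmox state (replication job state, snapshot naming) needs to be repaired so the GUI/replication runner picks it up again.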
"volume 'ssdtank/vmdata/vm-117-disk-0' already exists"
or claims target and source don't have a common ancestor version.
"Already exists" is of course true, but there most definitely is a common ancestor -- the volume was just migrated from A->B, but migrating it back (B->A) now fails. Proxmox has clearly lost some metadata in the process, since it consider the volume existing on A a surprise ("already exists").
The problem is, vm-117-disk-0 is 10.5 TB, so I really can't always delete and re-sync from the scratch.
What might this lost metadata be, and is there a way to fix this state manually so it that replication can resume?