Hello,
The backup of one container fails with the message in the title.
When I run "pct unlock" on it, the next backup works (I used the backup for development purposes, and I'm quite sure that the backup is complete).
That container is the largest one on the machine, one of the two important ones, and the one that has two filesystems (for quite stupid reasons). It is also replicated to another machine.
In the log files (log.txt, attached) I found this:
zfs error: cannot destroy snapshot tank/compressed/subvol-134-disk-1@vzdump: dataset is busy
Status as of now:
Bash:
# zfs list -tall -r /containers/compressed/subvol-134-disk-1
NAME                                                               USED  AVAIL  REFER  MOUNTPOINT
tank/compressed/subvol-134-disk-1                                  244G   157G   243G  /containers/compressed/subvol-134-disk-1
tank/compressed/subvol-134-disk-1@vzdump                           873M      -   243G  -
tank/compressed/subvol-134-disk-1@__replicate_134-0_1620709385__   102M      -   243G  -
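For the next occurrence, this is what I intend to check (just a sketch; I am only assuming that a hold, a clone, or an unfinished zfs send could be what keeps the snapshot busy):
Bash:
# any user/application holds pinning the snapshot?
zfs holds tank/compressed/subvol-134-disk-1@vzdump
# any clone depending on it?
zfs get clones tank/compressed/subvol-134-disk-1@vzdump
# any zfs send/recv still running against the pool?
ps aux | grep -E 'zfs (send|recv)'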
System:
pve 6.3-3 (Update coming, possibly even soon)
Linux 5.4.78-2-pve
Backup Server pbs 1.1-5
There was nothing in the current kernel/system logs, there are no ZFS entries, and there is just one log entry related to the volume in this month's kernel logs:
May 3 17:45:07 x9 pvesr[5687]: 134-0: got unexpected replication job error - command 'set -o pipefail && pvesm export compressed:subvol-134-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_134-0_1620055939__ -base __replicate_134-0_1620054180__ | /usr/bin/cstream -t 50000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=PBSHOST' root@IPV6 -- pvesm import compressed:subvol-134-disk-1 zfs - -with-snapshots 1 -allow-rename 0 -base __replicate_134-0_1620054180__' failed: exit code 255
The replication target server went down for hardware reasons.
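Since the job might simply be stuck from when the target was down, I would inspect the replication state like this (a sketch; 134-0 is the job ID from the log line above):
Bash:
# overview of all replication jobs and their last state
pvesr status
# re-run the job now that the target is back (job ID from the log above)
pvesr schedule-now 134-0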
Will updating to 6.4 help? If not, what can I do to debug this?
Regards, Uwe