Offline migration fails: failed: got signal 13

TheMrg

Well-Known Member
Aug 1, 2019
We successfully migrated from cluster26 -> cluster27.
Going back, however, does not work:

2022-03-22 10:51:19 starting migration of VM 100000 to node 'cluster26' (192.168.0.26)
2022-03-22 10:51:19 found local disk 'zfs_local:vm-100000-disk-0' (in current VM config)
2022-03-22 10:51:19 copying local disk images
2022-03-22 10:51:20 Unknown option: snapshot
2022-03-22 10:51:20 400 unable to parse option
2022-03-22 10:51:20 pvesm import <volume> <format> <filename> [OPTIONS]
2022-03-22 10:51:20 full send of rpool/vm-100000-disk-0@__migration__ estimated size is 29.1K
2022-03-22 10:51:20 total estimated size is 29.1K
2022-03-22 10:51:20 command 'zfs send -Rpv -- rpool/vm-100000-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2022-03-22 10:51:20 ERROR: storage migration for 'zfs_local:vm-100000-disk-0' to storage 'zfs_local' failed - command 'set -o pipefail && pvesm export zfs_local:vm-100000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=cluster26' root@192.168.0.26 -- pvesm import zfs_local:vm-100000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ -delete-snapshot __migration__ -allow-rename 1' failed: exit code 255
2022-03-22 10:51:20 aborting phase 1 - cleanup resources
2022-03-22 10:51:20 ERROR: migration aborted (duration 00:00:01): storage migration for 'zfs_local:vm-100000-disk-0' to storage 'zfs_local' failed - command 'set -o pipefail && pvesm export zfs_local:vm-100000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=cluster26' root@192.168.0.26 -- pvesm import zfs_local:vm-100000-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ -delete-snapshot __migration__ -allow-rename 1' failed: exit code 255
TASK ERROR: migration aborted
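
The decisive lines are "Unknown option: snapshot" and "400 unable to parse option": the pvesm import running on the receiving PVE 6.3 node does not understand the -snapshot option that the 7.1 sender passes, so it exits immediately, the SSH pipe closes, and zfs send is killed with signal 13 (SIGPIPE). To see which options the import on the old node actually accepts, something like the following should work from cluster27 (pvesm help is the standard help subcommand of the PVE CLI tools):

ssh root@192.168.0.26 pvesm help import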

cluster27: pve-manager/7.1-10
zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   27.5G   403G   104K  /rpool
rpool/ROOT              1.59G   403G    96K  /rpool/ROOT
rpool/ROOT/pve-1        1.59G   403G  1.59G  /
rpool/data                96K   403G    96K  /rpool/data
rpool/vm-100000-disk-0  5.16G   408G    56K  -

cluster26: pve-manager/6.3-3/eee5f901
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool              601G   221G   104K  /rpool
rpool/ROOT        13.2G   221G    96K  /rpool/ROOT
rpool/ROOT/pve-1  13.2G   221G  13.2G  /
rpool/data          96K   221G    96K  /rpool/data

Both nodes have a storage named zfs_local in the GUI.

We would be grateful for any tips.
 
Migration old -> new should always work. Migration new -> old is best-effort and can fail when incompatible changes are involved. 6.3 is a long way from 7.1, so there will be some things here that are not compatible.
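
If the VM really has to move back before cluster26 can be upgraded, a manual transfer can bypass the incompatible pvesm export/import options. A minimal sketch, run on cluster27 with the VM stopped, assuming both nodes are in the same cluster, rpool/vm-100000-disk-0 does not yet exist on cluster26, and @manualmove is an arbitrary snapshot name:

# create a throwaway snapshot and stream the disk to the old node
zfs snapshot rpool/vm-100000-disk-0@manualmove
zfs send rpool/vm-100000-disk-0@manualmove | ssh root@192.168.0.26 zfs recv rpool/vm-100000-disk-0
# hand the VM over by moving its config inside the clustered /etc/pve filesystem
mv /etc/pve/nodes/cluster27/qemu-server/100000.conf /etc/pve/nodes/cluster26/qemu-server/
# remove the throwaway snapshot on both sides
zfs destroy rpool/vm-100000-disk-0@manualmove
ssh root@192.168.0.26 zfs destroy rpool/vm-100000-disk-0@manualmove

The cleaner long-term fix is to bring cluster26 up to the same major version, since old -> new is the only direction that is guaranteed to work.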
 
