I'm trying to remote-migrate my LXC containers between 2 separate clusters but it keeps failing. Remote VM migrations do succeed (both online/offline).
Both boxes are freshly upgraded and running the same versions (pveversion -v output at the bottom of this post).
At this point I can't seem to find the exact point the migration fails at.
Things I have searched for:
- The error "failed: Insecure dependency in exec while running with -T switch at /usr/lib/x86_64-linux-gnu/perl-base/IPC/Open3.pm line 176." (as far as I understand, that is Perl's taint mode refusing to exec a command built from untrusted input), but I found no meaningful results
- "ERROR: found stale volume copy 'main_ssd_zfs:subvol-129-disk-0' on node 'blackbox'", but I could not find any volume matching this name on the target node. Maybe I didn't search in the right spot? (What I checked is sketched right after this list.)
This is the full output of the failed container migration:
Code:
root@n01c01:~# pct remote-migrate 129 129 'apitoken=PVEAPIToken=root@pam!root=...,host=10.10.20.10,fingerprint=xx:xx:xx' --target-bridge vmbr0 --target-storage local-zfs
Establishing API connection with remote at '10.10.20.10'
2023-12-03 14:07:04 remote: started tunnel worker 'UPID:blackbox:00005699:000AA514:656C7D78:vzmtunnel:129:root@pam!root:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2023-12-03 14:07:04 local WS tunnel version: 2
2023-12-03 14:07:04 remote WS tunnel version: 2
2023-12-03 14:07:04 minimum required WS tunnel version: 2
2023-12-03 14:07:04 websocket tunnel started
2023-12-03 14:07:04 starting migration of CT 129 to node 'blackbox' (10.10.20.10)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2023-12-03 14:07:04 found local volume 'main_ssd_zfs:subvol-129-disk-0' (in current VM config)
tunnel: -> sending command "disk-import" to remote
tunnel: <- got reply
tunnel: accepted new connection on '/run/pve/129.storage'
tunnel: requesting WS ticket via tunnel
2023-12-03 14:07:04 using a bandwidth limit of 78643200 bytes per second for transferring 'main_ssd_zfs:subvol-129-disk-0'
command 'set -o pipefail && pvesm export main_ssd_zfs:subvol-129-disk-0 zfs - -with-snapshots 1 -snapshot __migration__ | /usr/bin/cstream -t 78643200' failed: Insecure dependency in exec while running with -T switch at /usr/lib/x86_64-linux-gnu/perl-base/IPC/Open3.pm line 176.
tunnel: -> sending command "query-disk-import" to remote
tunnel: established new WS for forwarding '/run/pve/129.storage'
tunnel: done handling forwarded connection from '/run/pve/129.storage'
tunnel: <- got reply
2023-12-03 14:07:04 disk-import: cannot receive: failed to read from stream
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-12-03 14:07:05 disk-import: cannot open 'rpool/data/subvol-129-disk-0': dataset does not exist
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-12-03 14:07:06 disk-import: command 'zfs recv -F -- rpool/data/subvol-129-disk-0' failed: exit code 1
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2023-12-03 14:07:07 ERROR: unknown query-disk-import result: error
2023-12-03 14:07:07 ERROR: storage migration for 'main_ssd_zfs:subvol-129-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export main_ssd_zfs:subvol-129-disk-0 zfs - -with-snapshots 1 -snapshot __migration__ | /usr/bin/cstream -t 78643200' failed: Insecure dependency in exec while running with -T switch at /usr/lib/x86_64-linux-gnu/perl-base/IPC/Open3.pm line 176.
2023-12-03 14:07:07 aborting phase 1 - cleanup resources
2023-12-03 14:07:07 ERROR: found stale volume copy 'main_ssd_zfs:subvol-129-disk-0' on node 'blackbox'
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2023-12-03 14:07:08 start final cleanup
2023-12-03 14:07:08 ERROR: migration aborted (duration 00:00:04): storage migration for 'main_ssd_zfs:subvol-129-disk-0' to storage 'local-zfs' failed - command 'set -o pipefail && pvesm export main_ssd_zfs:subvol-129-disk-0 zfs - -with-snapshots 1 -snapshot __migration__ | /usr/bin/cstream -t 78643200' failed: Insecure dependency in exec while running with -T switch at /usr/lib/x86_64-linux-gnu/perl-base/IPC/Open3.pm line 176.
migration aborted
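If it helps with debugging, I guess the export command itself could be tested outside the migration tunnel. This is only a sketch of what I would try on the source node: the __migration__ snapshot only exists while a migration is running, so it uses a throwaway snapshot instead, and <pool> stands for whatever dataset backs main_ssd_zfs.
Code:
# on the source node 'n01c01' (sketch only)
zfs snapshot <pool>/subvol-129-disk-0@manualtest
pvesm export main_ssd_zfs:subvol-129-disk-0 zfs - -with-snapshots 1 -snapshot manualtest > /dev/null && echo "export OK"
zfs destroy <pool>/subvol-129-disk-0@manualtest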
This is the pool on the target node ('blackbox'); I can't see any subvol-129 dataset there:
Code:
root@blackbox:~# zfs list -r rpool
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        41.2G  1.64T   104K  /rpool
rpool/ROOT                   1.84G  1.64T    96K  /rpool/ROOT
rpool/ROOT/pve-1             1.84G  1.64T  1.84G  /
rpool/data                   30.5G  1.64T    96K  /rpool/data
rpool/data/vm-132-cloudinit    76K  1.64T    76K  -
rpool/data/vm-132-disk-0     2.01G  1.64T  2.01G  -
rpool/data/vm-133-disk-0     10.8G  1.64T  10.8G  -
rpool/data/vm-133-disk-1     17.7G  1.64T  17.7G  -
rpool/var-lib-vz             8.84G  1.64T  8.84G  /var/lib/vz
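The only other thing I could think of checking is whether the aborted attempts left an old __migration__ snapshot behind on the source side, e.g.:
Code:
# on the source node 'n01c01'
zfs list -t snapshot | grep 'subvol-129-disk-0@__migration__'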
Both nodes are running these versions (pveversion -v):
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-6-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-6
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve4