Hi all,
I've just created a new cluster with two nodes and am trying to migrate running CTs from one system to the other. When I try the migration, I get:
Code:
2020-09-23 15:11:52 shutdown CT 100
2020-09-23 15:11:55 starting migration of CT 100 to node 'cly-pm-1' (192.168.51.1)
2020-09-23 15:11:55 found local volume 'vm-storage:vm-100-disk-0' (in current VM config)
2020-09-23 15:11:56 blockdev: cannot open /dev/vm-storage/vm-100-disk-0: No such file or directory
2020-09-23 15:11:56 command '/sbin/blockdev --getsize64 /dev/vm-storage/vm-100-disk-0' failed: exit code 1
2020-09-23 15:11:56 import: no size found in export header, aborting.
send/receive failed, cleaning up snapshot(s)..
2020-09-23 15:11:56 ERROR: storage migration for 'vm-storage:vm-100-disk-0' to storage 'vm-storage' failed - command 'set -o pipefail && pvesm export vm-storage:vm-100-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=cly-pm-1' root@192.168.51.1 -- pvesm import vm-storage:vm-100-disk-0 raw+size - -with-snapshots 0 -allow-rename 0' failed: exit code 255
2020-09-23 15:11:56 aborting phase 1 - cleanup resources
2020-09-23 15:11:56 ERROR: found stale volume copy 'vm-storage:vm-100-disk-0' on node 'cly-pm-1'
2020-09-23 15:11:56 start final cleanup
2020-09-23 15:11:56 start container on source node
2020-09-23 15:11:57 ERROR: migration aborted (duration 00:00:05): storage migration for 'vm-storage:vm-100-disk-0' to storage 'vm-storage' failed - command 'set -o pipefail && pvesm export vm-storage:vm-100-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=cly-pm-1' root@192.168.51.1 -- pvesm import vm-storage:vm-100-disk-0 raw+size - -with-snapshots 0 -allow-rename 0' failed: exit code 255
TASK ERROR: migration aborted
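For reference, this is the pipeline the migration task is running (copied straight out of the log above). I'm assuming that running it by hand from the source node would reproduce the same failure outside of the migration task, but I haven't taken it much further than that:
Code:
# on the source node: same command the migration task runs, per the log above
set -o pipefail && pvesm export vm-storage:vm-100-disk-0 raw+size - -with-snapshots 0 \
  | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=cly-pm-1' root@192.168.51.1 \
    -- pvesm import vm-storage:vm-100-disk-0 raw+size - -with-snapshots 0 -allow-rename 0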
Looking at /etc/pve/storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvm: vm-storage
        vgname vm-storage
        content rootdir,images
        shared 0
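Since the storage is defined with shared 0, my understanding is that each node uses its own local VG called vm-storage and the migration copies the volume across. These are the sanity checks I'd run on each node to confirm the storage is enabled and see what it currently holds (just a sketch of the checks, not output from my systems):
Code:
# on each node: confirm the LVM storage is active and list the volumes it holds
pvesm status --storage vm-storage
pvesm list vm-storage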
vgdisplay on source:
Code:
--- Volume group ---
VG Name vm-storage
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 192.88 GiB
PE Size 4.00 MiB
Total PE 49378
Alloc PE / Size 43520 / 170.00 GiB
Free PE / Size 5858 / 22.88 GiB
VG UUID waJYdl-Ckox-UraJ-3cta-eev1-ih36-jENXiQ
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <39.50 GiB
PE Size 4.00 MiB
Total PE 10111
Alloc PE / Size 10111 / <39.50 GiB
Free PE / Size 0 / 0
VG UUID ML3WLS-4tOr-eOjy-MJfE-pIDd-mPxW-WzAw5c
vgdisplay on target:
Code:
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <59.50 GiB
PE Size 4.00 MiB
Total PE 15231
Alloc PE / Size 15231 / <59.50 GiB
Free PE / Size 0 / 0
VG UUID P8M2VG-I4oq-4Zyq-eLfb-k23K-nO27-xiE21q
--- Volume group ---
VG Name vm-storage
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <3.64 TiB
PE Size 4.00 MiB
Total PE 953829
Alloc PE / Size 0 / 0
Free PE / Size 953829 / <3.64 TiB
VG UUID VJvyQa-soX8-TKKz-TzX6-KA2K-JlLt-fmaPyS
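Since the log also complains about a stale volume copy on cly-pm-1, I'm assuming something like this on the target would show (and let me clean up) any leftover from a failed attempt; going by the Cur LV 0 above there's nothing there at the moment:
Code:
# on the target node: look for a leftover copy of the volume
lvs vm-storage
# if a stale vm-100-disk-0 shows up, I assume it can be removed with:
# lvremove vm-storage/vm-100-disk-0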
Running the failing command manually works fine on the source system:
Code:
# /sbin/blockdev --getsize64 /dev/vm-storage/vm-100-disk-0
21474836480
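Next I'm assuming it makes sense to run just the export half of the pipeline on its own, to see whether the failure is on the export (source) side or only shows up when it's piped into pvesm import on the target. Something like this (the /tmp path is just an example):
Code:
# on the source node: run only the export and check that it actually produces data
pvesm export vm-storage:vm-100-disk-0 raw+size - -with-snapshots 0 > /tmp/vm-100-disk-0.export
ls -lh /tmp/vm-100-disk-0.export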
Anyone come across this before? I'm at a bit of a loss...