Hi all! I've recently configured a second node in my Proxmox cluster. I'm trying to migrate some CTs to the new node, but every time I start the migration from the web interface I get this error:
Code:
2019-08-16 20:41:50 starting migration of CT 110 to node 'MELCHIOR' (192.168.2.119)
2019-08-16 20:41:50 found local volume 'local-lvm:vm-110-disk-0' (in current VM config)
Using default stripesize 64.00 KiB.
Logical volume "vm-110-disk-0" created.
131072+0 records in
131072+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 116.662 s, 73.6 MB/s
719+510380 records in
719+510380 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 125.539 s, 68.4 MB/s
2019-08-16 20:43:57 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=MELCHIOR' root@192.168.2.119 pvesr set-state 110 \''{}'\'
2019-08-16 20:44:04 ERROR: removing local copy of 'local-lvm:vm-110-disk-0' failed - lvremove 'pve/vm-110-disk-0' error: Logical volume pve/vm-110-disk-0 in use.
2019-08-16 20:44:04 start final cleanup
2019-08-16 20:44:05 ERROR: migration finished with problems (duration 00:02:15)
TASK ERROR: migration problems
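The copy itself seems to finish fine (see the dd output above), so I think the container actually ends up on the new node and only the cleanup of the old disk on the source fails. In case it matters, this is roughly what I could check to confirm that; I'm not sure it's the right way to look at it:
Code:
# my guess at how to verify the state after the failed cleanup
pct list                  # run on both nodes: where does CT 110 show up now?
pct config 110            # on the node that owns the CT: which storage does the rootfs point to?
lvs | grep vm-110         # run on both nodes: which node still has a vm-110-disk-0 LV?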
And then, when I try to migrate these CTs back to the original node, I get this other error:
Code:
2019-08-16 20:47:16 starting migration of CT 110 to node 'CASPER' (192.168.2.120)
2019-08-16 20:47:16 found local volume 'local-lvm:vm-110-disk-0' (in current VM config)
volume pve/vm-110-disk-0 already exists
send/receive failed, cleaning up snapshot(s)..
2019-08-16 20:47:17 ERROR: command 'set -o pipefail && pvesm export local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=CASPER' root@192.168.2.120 -- pvesm import local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
2019-08-16 20:47:17 aborting phase 1 - cleanup resources
2019-08-16 20:47:17 ERROR: found stale volume copy 'local-lvm:vm-110-disk-0' on node 'CASPER'
2019-08-16 20:47:17 start final cleanup
2019-08-16 20:47:17 ERROR: migration aborted (duration 00:00:02): command 'set -o pipefail && pvesm export local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=CASPER' root@192.168.2.120 -- pvesm import local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted
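If I read the two logs correctly, the old disk that could not be removed during the first migration ("Logical volume pve/vm-110-disk-0 in use") is still present on CASPER, and the migration back now fails because a volume with that name already exists there. I guess I would have to remove that leftover volume on CASPER first, with something like the commands below, but I'm not sure this is safe or even the right approach:
Code:
# on CASPER (the node reporting the stale copy) - just my guess, please confirm
lvs pve                                   # check that vm-110-disk-0 is really listed there
pvesm free local-lvm:vm-110-disk-0        # let Proxmox remove the leftover volume
# or directly with LVM:
# lvremove pve/vm-110-disk-0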
Both nodes are updated to Virtual Environment 5.4-11.
I forgot to mention that my nodes don't use ZFS; this is a standard installation with LVM.
I've checked the hard disks and the SMART status on both, and everything is fine.
I've also looked on the forum: some users solved a similar problem by deactivating the volume with this command, but for me it doesn't work:
Code:
lvchange -an -v /dev/pve/vm-100-disk-0
Deactivating logical volume pve/vm-100-disk-0.
Logical volume pve/vm-100-disk-0 in use.
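So the old volume is still reported as "in use" and I can't deactivate or remove it. If it helps, this is roughly what I could run on the node with the stuck volume to see what is keeping the device open (using CT 110's disk as an example; I'm not sure these are the right commands):
Code:
# my attempt at finding out what keeps the LV open
pct status 110                            # is the container still running or locked here?
grep vm-110 /proc/mounts                  # is the rootfs still mounted somewhere?
lsof /dev/mapper/pve-vm--110--disk--0     # any process holding the device open?
dmsetup info -c | grep vm-110             # device-mapper open count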
Can you help me? Thank you!