Error Migrating CT - Failed lremove in migrating process

stefano.zaniboni

New Member
Apr 22, 2019
Hi all! I've recently set up my second Proxmox node. I'm trying to migrate some CTs to the new node, but every time I start the migration from the browser interface I see this error:

Code:
Virtual Environment 5.4-11
Container 110 (SANET) on node 'MELCHIOR'
2019-08-16 20:41:50 starting migration of CT 110 to node 'MELCHIOR' (192.168.2.119)
2019-08-16 20:41:50 found local volume 'local-lvm:vm-110-disk-0' (in current VM config)
  Using default stripesize 64.00 KiB.
  Logical volume "vm-110-disk-0" created.
131072+0 records in
131072+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 116.662 s, 73.6 MB/s
719+510380 records in
719+510380 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 125.539 s, 68.4 MB/s
2019-08-16 20:43:57 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=MELCHIOR' root@192.168.2.119 pvesr set-state 110 \''{}'\'
2019-08-16 20:44:04 ERROR: removing local copy of 'local-lvm:vm-110-disk-0' failed - lvremove 'pve/vm-110-disk-0' error:   Logical volume pve/vm-110-disk-0 in use.
2019-08-16 20:44:04 start final cleanup
2019-08-16 20:44:05 ERROR: migration finished with problems (duration 00:02:15)
TASK ERROR: migration problems

And when I then try to migrate these CTs back to the original node, I get this other error:

Code:
2019-08-16 20:47:16 starting migration of CT 110 to node 'CASPER' (192.168.2.120)
2019-08-16 20:47:16 found local volume 'local-lvm:vm-110-disk-0' (in current VM config)
volume pve/vm-110-disk-0 already exists
send/receive failed, cleaning up snapshot(s)..
2019-08-16 20:47:17 ERROR: command 'set -o pipefail && pvesm export local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=CASPER' root@192.168.2.120 -- pvesm import local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
2019-08-16 20:47:17 aborting phase 1 - cleanup resources
2019-08-16 20:47:17 ERROR: found stale volume copy 'local-lvm:vm-110-disk-0' on node 'CASPER'
2019-08-16 20:47:17 start final cleanup
2019-08-16 20:47:17 ERROR: migration aborted (duration 00:00:02): command 'set -o pipefail && pvesm export local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=CASPER' root@192.168.2.120 -- pvesm import local-lvm:vm-110-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted
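From the second log, the first failed attempt left a copy of the disk behind on the target, so the retry aborts with "volume pve/vm-110-disk-0 already exists". One way to clean that up — a sketch, assuming the leftover disk on CASPER really is an orphan and is not referenced by any guest config — is to free the volume through the Proxmox storage layer on the target node:

```shell
# Run on the target node (CASPER). Double-check the volume is not
# referenced by any VM/CT config before removing it.
STORAGE=local-lvm
VOLID="$STORAGE:vm-110-disk-0"   # Proxmox volume id: <storage>:<volname>

if command -v pvesm >/dev/null 2>&1; then
  lvs pve/vm-110-disk-0          # confirm the stale logical volume exists
  pvesm free "$VOLID"            # remove it via PVE (wraps lvremove)
else
  echo "pvesm not found: run this directly on the Proxmox node"
fi
```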

My nodes are up to date at Virtual Environment 5.4-11.

I forgot to mention that my nodes don't use the ZFS file system; this is a standard installation.

I've checked the hard disks and the SMART status on both, and everything is OK.

I've searched the forum, and some users have used the following command to deactivate the logical volume, but for me it doesn't work:

Code:
lvchange -an -v /dev/pve/vm-100-disk-0
    Deactivating logical volume pve/vm-100-disk-0.
  Logical volume pve/vm-100-disk-0 in use.
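For what it's worth, "in use" from lvremove/lvchange usually means something still holds the device-mapper node open (a leftover mount, a kpartx/dm mapping, or a process). A rough way to find the holder — a sketch assuming the default `pve` volume group; the `dm_name` helper below is my own, not an LVM or Proxmox tool:

```shell
# Hypothetical helper: map "VG/LV" to the device-mapper name
# (device-mapper doubles every dash inside the VG and LV names,
# then joins them with a single dash).
dm_name() {
  local vg=${1%%/*} lv=${1#*/}
  printf '%s-%s\n' "${vg//-/--}" "${lv//-/--}"
}

dm_name pve/vm-110-disk-0   # -> pve-vm--110--disk--0

# Then, on the node holding the volume, inspect who keeps it open:
#   dmsetup info -c "$(dm_name pve/vm-110-disk-0)"   # check the "Open" count
#   ls /sys/class/block/$(basename "$(readlink -f /dev/pve/vm-110-disk-0)")/holders
#   fuser -vm /dev/pve/vm-110-disk-0                 # processes using the device
```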

Can you help me? Thank you!
 
I've rebooted both nodes, and after the reboot I was able to remove it on the source node: from the browser I removed vm-100-data from local-lvm. But why does this problem happen? I want to do my migrations without rebooting every time XD
 
