This is my first time working with Proxmox, so please bear with me. Yesterday I set up 3 new Proxmox machines, joined them all into a cluster last night, and called it a night after that. This morning I decided to play around with container migration between nodes, so I made a new CT on my first node and, once it was up, told my main node to migrate it to my second node. It errors out with this output:
2018-09-10 09:30:38 shutdown CT 100
2018-09-10 09:30:40 starting migration of CT 100 to node 'Charlemagne' (10.10.0.6)
2018-09-10 09:30:40 found local volume 'local-lvm:vm-100-disk-1' (in current VM config)
Volume group "pve" not found
Cannot process volume group pve
command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,lv_name,lv_size,lv_attr,pool_lv,data_percent,metadata_percent,snap_percent,uuid,tags pve' failed: exit code 5
send/receive failed, cleaning up snapshot(s)..
2018-09-10 09:30:41 ERROR: command 'set -o pipefail && pvesm export local-lvm:vm-100-disk-1 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=Charlemagne' root@10.10.0.6 -- pvesm import local-lvm:vm-100-disk-1 raw+size - -with-snapshots 0' failed: exit code 5
2018-09-10 09:30:41 aborting phase 1 - cleanup resources
2018-09-10 09:30:41 ERROR: found stale volume copy 'local-lvm:vm-100-disk-1' on node 'Charlemagne'
2018-09-10 09:30:41 start final cleanup
2018-09-10 09:30:41 start container on source node
2018-09-10 09:30:44 ERROR: migration aborted (duration 00:00:06): command 'set -o pipefail && pvesm export local-lvm:vm-100-disk-1 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=Charlemagne' root@10.10.0.6 -- pvesm import local-lvm:vm-100-disk-1 raw+size - -with-snapshots 0' failed: exit code 5
TASK ERROR: migration aborted
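One line that jumped out at me is the "found stale volume copy 'local-lvm:vm-100-disk-1' on node 'Charlemagne'" error during cleanup. My assumption is that once the storage on Charlemagne is healthy again, I'll have to remove that leftover disk before retrying the migration, with something like this on Charlemagne (pvesm list/free are the standard Proxmox storage commands, and the volume ID is just the one from the log above):

# list what local-lvm holds on Charlemagne, then delete the leftover copy
pvesm list local-lvm
pvesm free local-lvm:vm-100-disk-1

But right now I assume even that would fail, since the storage itself is the problem.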
I was also unable to create a new CT on one of the secondary nodes. That errors out saying:
TASK ERROR: no such volume group 'pve'
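Both failures seem to boil down to the same thing: LVM on the second node can't find a volume group named 'pve' (which, as far as I understand, is the default volume group a Proxmox install creates for local-lvm). To see what LVM can actually find there, I was going to run the usual read-only queries on that node, something like:

# on the second node: check what LVM can see
pvs     # physical volumes
vgs     # volume groups -- 'pve' should normally be listed here
lvs     # logical volumes, including the thin pool backing local-lvm
vgscan  # rescan all devices for volume group metadata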
Doing some googling, I also realized that the "local-lvm" storage on my second node has a question mark on it in the GUI. When I click on it and look at the Summary, it says "Active: No". Is there a way to reactivate it so the CT has somewhere to migrate to?
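From what I've read so far, if the volume group still exists but just isn't activated, something like this on the second node might bring it back (this is an assumption on my part; if vgs doesn't show 'pve' at all, I'd expect this to do nothing):

# try to activate the 'pve' volume group, then re-check Proxmox storage
vgchange -ay pve
pvesm status

If pvesm status then reports local-lvm as active, I'd hope the question mark in the GUI clears up too. Does that sound like the right approach, or is there a safer way to do this?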