Proxmox (6.1-7) cluster - migration between nodes not working

sboket

Hello everybody,
This might be an old topic, but I have a cluster (one master + 3 nodes) and migration between nodes is not working.
I should mention that each node only has local storage.
This is the error message I'm getting:

Task viewer: VM 3016 - Migrate (proxmox-worker1 ---> proxmox-worker3)


2020-03-04 17:29:17 use dedicated network address for sending migration traffic (10.40.146.13)
2020-03-04 17:29:17 starting migration of VM 3016 to node 'proxmox-worker3' (10.40.146.13)
2020-03-04 17:29:18 found local disk 'storage-nitro:3016/vm-3016-disk-0.qcow2' (via storage)
2020-03-04 17:29:18 found local disk 'storage-worker1:3016/vm-3016-disk-0.qcow2' (in current VM config)
2020-03-04 17:29:18 found local disk 'storage-worker2:3016/vm-3016-disk-0.qcow2' (via storage)
2020-03-04 17:29:18 found local disk 'storage-worker3:3016/vm-3016-disk-0.qcow2' (via storage)
2020-03-04 17:29:19 copying local disk images
cannot import format raw+size into a file of format qcow2
qemu-img: /dev/stdout: error while converting raw: Could not resize file: Invalid argument
command 'qemu-img convert -f qcow2 -O raw /storage-nitro/images/3016/vm-3016-disk-0.qcow2 /dev/stdout' failed: exit code 1
send/receive failed, cleaning up snapshot(s)..
2020-03-04 17:29:19 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export storage-worker3:3016/vm-3016-disk-0.qcow2 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox-worker3' root@10.40.146.13 -- pvesm import storage-worker3:3016/vm-3016-disk-0.qcow2 raw+size - -with-snapshots 0' failed: exit code 255
2020-03-04 17:29:19 aborting phase 1 - cleanup resources
2020-03-04 17:29:19 ERROR: found stale volume copy 'storage-worker3:3016/vm-3016-disk-0.qcow2' on node 'proxmox-worker3'
2020-03-04 17:29:19 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export storage-worker3:3016/vm-3016-disk-0.qcow2 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox-worker3' root@10.40.146.13 -- pvesm import storage-worker3:3016/vm-3016-disk-0.qcow2 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted

Any ideas?

Also, when I want to create a VM on a node using a template stored on a different node, I get the error "Node proxmox-worker3 is not allowed for this action".

If needed I can provide any configs - just tell me which commands to run or which file outputs to post, as I'm no expert.

Thanks!
 
Hi,

This seems like an issue with local migration of unused disks (the stale copy in your log).
If so, it should be fixed with qemu-server version 6.1-7, which is currently available in the pvetest repository.

See https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_test_repo for details on temporarily enabling it, if you want to test this.
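For reference, on a PVE 6.x node (based on Debian Buster) temporarily enabling pvetest boils down to something like the following sketch (the repository file name here is my own choice; the wiki page above is authoritative):

```shell
# Add the pvetest repository (PVE 6.x is based on Debian Buster)
echo "deb http://download.proxmox.com/debian/pve buster pvetest" \
    > /etc/apt/sources.list.d/pvetest.list

# Refresh the package index and pull in the updated package
apt update
apt install qemu-server

# Afterwards, disable the test repository again
rm /etc/apt/sources.list.d/pvetest.list
apt update
```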

Also, when I want to create a VM on a node using a template stored on a different node, I get the error "Node proxmox-worker3 is not allowed for this action".

Does this template use any local resources? Could you please post its config with qm config VMID?
 
2020-03-04 17:29:19 ERROR: found stale volume copy 'storage-worker3:3016/vm-3016-disk-0.qcow2' on node 'proxmox-worker3'

Alternatively, you could also investigate why this stale (left-over) volume is there, and, if it's really not used anywhere, move it off the storage. (I would delete it only after running the system for another few weeks, in case some hidden usage comes to light and you no longer have that volume.)
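A possible way to check that, assuming a directory-type storage mounted at /storage-worker3 (the paths here are examples; adjust them to your setup):

```shell
# List the volumes PVE knows about on that storage
pvesm list storage-worker3

# Check whether any VM config on any node still references the volume
grep -r 'vm-3016-disk-0' /etc/pve/nodes/

# If nothing references it, move it aside rather than deleting it immediately
mkdir -p /root/stale-volumes
mv /storage-worker3/images/3016/vm-3016-disk-0.qcow2 /root/stale-volumes/
```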
 
Does this template use any local resources? Could you please post its config with qm config VMID?
root@proxmox-worker1:~# qm config 3001
agent: 1
bootdisk: sata0
cores: 8
hotplug: disk,network,usb
memory: 65536
name: nx-apps1-vision1
net0: e1000=1A:46:1B:80:0A:28,bridge=vmbr1
numa: 1
onboot: 1
ostype: l26
parent: cia_snapshot
sata0: storage-worker1:3001/vm-3001-disk-0.qcow2,size=750G
scsihw: virtio-scsi-pci
smbios1: uuid=5e3ce86c-d829-4dc5-9165-1525ee20de9d
sockets: 2
vmgenid: 5b94728f-723f-456e-b73f-8ce60b91688c


And for the template - I tried with ide2 (cdrom) both attached and detached:

root@proxmox-worker1:~# qm config 9014
agent: 1
bootdisk: sata0
cores: 8
ide2: storage-worker1:iso/RHEL7751-0001.iso,media=cdrom
memory: 65536
name: w1-T-nx-vision-16C-64G-250G
net0: e1000=5A:7E:0E:95:B7:29,bridge=vmbr1,firewall=1
numa: 1
onboot: 1
ostype: l26
sata0: storage-worker1:9014/base-9014-disk-0.qcow2,size=250G
scsihw: virtio-scsi-pci
smbios1: uuid=e8896d42-a87f-4440-a7ae-dcdf1d389cfc
sockets: 2
template: 1
vmgenid: f32cbf5c-a9d6-442e-ace5-d0d3809d89cc

I just upgraded to qemu-server 6.1-6. But yes, it would be good to try 6.1-7 to check this.
 
Alternatively, you could also investigate why this stale (left-over) volume is there, and, if it's really not used anywhere, move it off the storage. (I would delete it only after running the system for another few weeks, in case some hidden usage comes to light and you no longer have that volume.)


Thanks for your reply, but I'm not sure I understand your answer :D
 
