[SOLVED] Proxmox Migration Error code 255

Y0nderBoi

Well-Known Member
Sep 23, 2019
Hello all,

I just installed Proxmox on a new OptiPlex to act as a new node for my little homelab. It has one NIC and one SSD, and I installed and configured the OS; everything seems to be working. I was able to create a cluster and then join the new node to it without any issues. The issue arises when I try to migrate a live VM from one node to the other. I get this error:

Code:
2020-03-21 10:48:19 starting migration of VM 113 to node 'proxmox-pve-optiplex980' (192.168.1.14)
2020-03-21 10:48:20 found local disk 'Storage:113/vm-113-disk-0.raw' (in current VM config)
2020-03-21 10:48:20 found local disk 'Storage:113/vm-113-disk-1.raw' (in current VM config)
2020-03-21 10:48:20 copying local disk images
2020-03-21 10:48:20 starting VM 113 on remote node 'proxmox-pve-optiplex980'
2020-03-21 10:48:21 [proxmox-pve-optiplex980] lvcreate 'pve/vm-113-disk-0' error:   Run `lvcreate --help' for more information.
2020-03-21 10:48:21 ERROR: online migrate failure - remote command failed with exit code 255
2020-03-21 10:48:21 aborting phase 2 - cleanup resources
2020-03-21 10:48:21 migrate_cancel
2020-03-21 10:48:22 ERROR: migration finished with problems (duration 00:00:03)
TASK ERROR: migration problems

Here is the output of pveversion -v on the new node:
Code:
root@proxmox-pve-optiplex980:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-22
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

And here is the output of the cat /etc/pve/storage.cfg command:
Code:
root@proxmox-pve-optiplex980:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: Storage
        path /mnt/pve/Storage
        content iso,rootdir,vztmpl,snippets,images
        is_mountpoint 1
        nodes proxmox-pve
        shared 0

lvmthin: VM-Storage
        thinpool VM-Storage
        vgname VM-Storage
        content images,rootdir
        nodes proxmox-pve


Anyone have any ideas what the issue might be? I am sure it has something to do with my storage configuration, but I am not super familiar with Linux storage configs. Any help or advice would be great!
 
Hi,

can you please send the config of the VM?

Code:
qm conf 113
 
I was actually just about to post that I solved the issue. The VM I was trying to migrate was simply larger than the available storage on the node I was trying to migrate it to, so naturally that is not going to work. Thanks for taking the time to help me though!
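
For anyone else hitting this: the disk sizes are listed right in the VM config, so something like the following (the VM ID 113 is just the one from this thread, adjust it and the storage names for your setup) should make the mismatch obvious:

Code:
# on the source node: list the VM's disks; each line ends with its size, e.g. size=64G
qm config 113 | grep -E '^(scsi|virtio|ide|sata|efidisk)'

# on the target node: total / used / available space per configured storage
pvesm status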
 
Hey, could you explain that in a bit more detail? Do you mean the available storage on the node you want to migrate to, or something else?
 
Hi,
I have the same issue, too.

Please share the full migration task log, the output of pveversion -v from the source and target nodes, and the VM configuration (qm config <ID>, replacing <ID> with the actual ID of the VM).
 
I'm sorry, but my problem has been resolved. It was simply because the host I wanted to migrate the VM to didn't have enough storage space. Thank you for asking!
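
In case it helps anyone, the free space on the target can be checked with something like this (run on the node you want to migrate to; pve is the default local-lvm volume group name, adjust if yours differs):

Code:
# available space per configured storage
pvesm status
# for LVM-thin storages: how full the thin pool is (Data% column)
lvs pve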
 