Migration Issue

okieunix1957

Member
Feb 11, 2020
I am trying to migrate VMs back to my new node but am not able to. Any suggestions?

2020-02-21 16:23:26 starting migration of VM 107 to node 'host10' (x.x.x.21)
2020-02-21 16:23:26 copying disk images
2020-02-21 16:23:26 starting VM 107 on remote node 'host10'
2020-02-21 16:23:30 start failed: command '/usr/bin/kvm -id 107 -name ivasilyeu -chardev 'socket,id=qmp,path=/var/run/qemu-server/107.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/107.pid -daemonize -smbios 'type=1,uuid=699200e4-0442-447c-a241-191aa62d2bb2' -smp '8,sockets=2,cores=4,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/107.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 40960 -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'vmgenid,guid=c4e9f4af-75f9-41ab-8482-06c6b8805c5f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1b6f9e9dcabb' -drive 'file=/mnt/pve/volume02/images/107/vm-107-cloudinit.qcow2,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/mnt/pve/volume02/images/107/vm-107-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap107i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=06:AE:0C:7F:86:20,netdev=net0,bus=pci.0,addr=0x12,id=net0' -machine 'type=pc-i440fx-2.12' -incoming unix:/run/qemu-server/107.migrate -S' failed: exit code 1
2020-02-21 16:23:30 ERROR: online migrate failure - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=host10' root@x.x.x.21 qm start 107 --skiplock --migratedfrom host07 --migration_type secure --stateuri unix --machine pc-i440fx-2.12' failed: exit code 255
2020-02-21 16:23:30 aborting phase 2 - cleanup resources
2020-02-21 16:23:30 migrate_cancel
2020-02-21 16:23:31 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems

Can anyone point me in the right direction? It looks like host keys may be causing the problem.
 
Please post the pveversion -v output of both source and target nodes, as well as the VM config (qm config 107) and the storage config (cat /etc/pve/storage.cfg).
Looks like the VM could not be started on the target node for some reason.
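
For anyone following along, these are the commands in question (a minimal sketch; run pveversion -v and the storage.cfg check on both nodes, and qm config 107 on the source node, where the VM currently resides):

pveversion -v                # package versions
qm config 107                # VM configuration (source node)
cat /etc/pve/storage.cfg     # storage config (clustered, identical on all nodes)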
 

Here is the source node:
host07:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-12
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-41
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-55
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
host07:~#

host07:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: volume02
        export /export/sc4-devops-qa-prxmx-kub2/volume01
        path /mnt/pve/volume02
        server x.x.x.223
        content images,iso,backup
        maxfiles 3
        options vers=3

host07:~#

Here is the destination node:

host10:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-25-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-13
pve-kernel-4.15.18-25-pve: 4.15.18-53
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-41
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-55
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
host10:~#

host10:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: volume02
        export /export/sc4-devops-qa-prxmx-kub2/volume01
        path /mnt/pve/volume02
        server x.x.x.223
        content images,iso,backup
        maxfiles 3
        options vers=3

host10:~#
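
Worth noting: both nodes point at the same NFS share (volume02), so an online migration shouldn't need to copy the disk at all. A quick sanity check that the shared storage is active and the VM's images are visible on the target (standard PVE tooling):

host10:~# pvesm status
host10:~# ls /mnt/pve/volume02/images/107/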
 

I forgot to post this:

host07:~# qm config 107 | grep -v ssh
boot: c
bootdisk: scsi0
ciuser: root
cores: 4
ide2: volume02:107/vm-107-cloudinit.qcow2,media=cdrom,size=4M
ipconfig0: ip=x.x.x.209/22,gw=x.x.x.254
memory: 40960
name: workfource-ivasilyeu
nameserver: x.x.x.15
net0: virtio=06:AE:0C:7F:86:20,bridge=vmbr0,firewall=1,tag=1109
numa: 0
ostype: l26
scsi0: volume02:107/vm-107-disk-0.qcow2,size=164G
scsihw: virtio-scsi-pci
searchdomain: zone
smbios1: uuid=699200e4-0442-447c-a241-191aa62d2bb2
sockets: 2
vmgenid: c4e9f4af-75f9-41ab-8482-06c6b8805c5f
host07:~#
 

Any updates?
 
There should be a task log on the target node for the start attempt during the migration. Please provide that as well.
 

I do not see a task log at all. See below:
host10:/etc/pve# ls -la
total 13
drwxr-xr-x 2 root www-data 0 Dec 31 1969 .
drwxr-xr-x 100 root root 195 Feb 24 02:16 ..
-rw-r----- 1 root www-data 451 Oct 21 04:39 authkey.pub
-r--r----- 1 root www-data 11999 Dec 31 1969 .clusterlog
-rw-r----- 1 root www-data 955 Feb 21 13:43 corosync.conf
-rw-r----- 1 root www-data 16 Oct 21 04:36 datacenter.cfg
-rw-r----- 1 root www-data 2 Dec 31 1969 .debug
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 local -> nodes/host10
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 lxc -> nodes/host10/lxc
-r--r----- 1 root www-data 634 Dec 31 1969 .members
drwxr-xr-x 2 root www-data 0 Oct 21 04:39 nodes
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 openvz -> nodes/host10/openvz
drwx------ 2 root www-data 0 Oct 21 04:39 priv
-rw-r----- 1 root www-data 2057 Oct 21 04:39 pve-root-ca.pem
-rw-r----- 1 root www-data 1679 Oct 21 04:39 pve-www.key
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 qemu-server -> nodes/host10/qemu-server
-r--r----- 1 root www-data 9121 Dec 31 1969 .rrd
-rw-r----- 1 root www-data 294 Nov 27 17:07 storage.cfg
-rw-r----- 1 root www-data 1738 Oct 31 08:42 user.cfg
-r--r----- 1 root www-data 725 Dec 31 1969 .version
-r--r----- 1 root www-data 3381 Dec 31 1969 .vmlist
-rw-r----- 1 root www-data 119 Oct 21 04:39 vzdump.cron
host10:/etc/pve#

BTW, I was able to get a VM to migrate to this node but it's the only one.
host10:/etc/pve# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       111 vm20                 running    16384             40.00 9255
host10:/etc/pve#
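
Note the MEM(MB) column: VM 111 is configured for 16384 MB, while VM 107 (per its config above) wants 40960 MB, a difference that turns out to matter later in this thread. Since the VM configs live under the clustered /etc/pve, one way to compare them from any node (a sketch, assuming the standard config layout):

host10:/etc/pve# grep '^memory:' nodes/*/qemu-server/*.conf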

Phillip
 
You can view the task log in the GUI on the target node (Node -> Task History). Otherwise you can find it in /var/log/pve/tasks.

Please post the config of VM 111.
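
If the GUI isn't handy, something along these lines should locate the start task on the target node (a sketch; the UPID file names encode the node, task type, and VMID, as seen further down in this thread):

host10:~# grep qmstart:107 /var/log/pve/tasks/active
host10:~# ls /var/log/pve/tasks/*/*qmstart:107*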
 
Below is the task log:
2020-02-25 19:34:31 starting migration of VM 107 to node 'host10' (x.x.x.21)
2020-02-25 19:34:31 copying disk images
2020-02-25 19:34:31 starting VM 107 on remote node 'host10'
2020-02-25 19:34:31 Debian GNU/Linux 9
2020-02-25 19:34:34 start failed: command '/usr/bin/kvm -id 107 -name ivasilyeu -chardev 'socket,id=qmp,path=/var/run/qemu-server/107.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/107.pid -daemonize -smbios 'type=1,uuid=699200e4-0442-447c-a241-191aa62d2bb2' -smp '8,sockets=2,cores=4,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/107.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 40960 -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'vmgenid,guid=c4e9f4af-75f9-41ab-8482-06c6b8805c5f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1b6f9e9dcabb' -drive 'file=/mnt/pve/volume02/images/107/vm-107-cloudinit.qcow2,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/mnt/pve/volume02/images/107/vm-107-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap107i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=xx:xx:xx:xx:xx:20,netdev=net0,bus=pci.0,addr=0x12,id=net0' -machine 'type=pc-i440fx-2.12' -incoming unix:/run/qemu-server/107.migrate -S' failed: exit code 1
2020-02-25 19:34:34 ERROR: online migrate failure - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=sc4-devops-qa-prxmx-host10' root@x.x.x.21 qm start 107 --skiplock --migratedfrom host07 --migration_type secure --stateuri unix --machine pc-i440fx-2.12' failed: exit code 255
2020-02-25 19:34:34 aborting phase 2 - cleanup resources
2020-02-25 19:34:34 migrate_cancel
2020-02-25 19:34:35 ERROR: migration finished with problems (duration 00:00:05)
TASK ERROR: migration problems
 
Please post the task log from the TARGET node; it should contain something like 'VM <vmid> - Start'. Any info on why the start failed would be of interest.
 
I see this in the active file in the /var/log/pve/tasks directory:
UPID:sc4-devops-qa-prxmx-host10:00005B74:01ED62BB:5E553D2F:qmstart:107:root@pam: 1 5E553D31 start failed: command '/usr/bin/kvm -id 107 -name workfource-ivasilyeu -chardev 'socket,id=qmp,path=/var/run/qemu-server/107.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/107.pid -daemonize -smbios 'type=1,uuid=699200e4-0442-447c-a241-191aa62d2bb2' -smp '8,sockets=2,cores=4,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/107.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 40960 -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'vmgenid,guid=c4e9f4af-75f9-41ab-8482-06c6b8805c5f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1b6f9e9dcabb' -drive 'file=/mnt/pve/volume02/images/107/vm-107-cloudinit.qcow2,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/mnt/pve/volume02/images/107/vm-107-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap107i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=06:AE:0C:7F:86:20,netdev=net0,bus=pci.0,addr=0x12,id=net0' -machine 'type=pc-i440fx-2.12' -incoming unix:/run/qemu-server/107.migrate -S' failed: exit code 1
 
Here is what I found in the tasks/0 directory:

host10:/var/log/pve/tasks/0# more *qmstart:107*
Total translation table size: 0
Total rockridge attributes bytes: 417
Total directory bytes: 0
Path table size(bytes): 10
Max brk space used 0
179 extents written (0 MB)
kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory

TASK ERROR: start failed: command '/usr/bin/kvm -id 107 -name workfource-ivasilyeu -chardev 'socket,id=qmp,path=/var/run/qemu-server/107.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/107.pid -daemonize -smbios 'type=1,uuid=699200e4-0442-447c-a241-191aa62d2bb2' -smp '8,sockets=2,cores=4,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/107.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 40960 -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'vmgenid,guid=c4e9f4af-75f9-41ab-8482-06c6b8805c5f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1b6f9e9dcabb' -drive 'file=/mnt/pve/volume02/images/107/vm-107-cloudinit.qcow2,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/mnt/pve/volume02/images/107/vm-107-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap107i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=06:AE:0C:7F:86:20,netdev=net0,bus=pci.0,addr=0x12,id=net0' -machine 'type=pc-i440fx-2.12' -incoming unix:/run/qemu-server/107.migrate -S' failed: exit code 1

So I don't know why this system cannot allocate the memory, as this is a new box with plenty of memory in it.
So now I know the problem:

host07:~# free -g
              total        used        free      shared  buff/cache   available
Mem:            188         105          53           0          30          81
Swap:             7           0           7
host07:~#


host10:~# free -g
              total        used        free      shared  buff/cache   available
Mem:             23          17           5           0           0           5
Swap:             0           0           0
host10:~#
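
That explains it: VM 107 is configured with 40960 MB (40 GB) of RAM, but host10 has only about 5 GB available. A pre-flight check along these lines would have caught this before the migration attempt (a sketch; the grep/awk patterns assume the standard qm config and free -m output formats):

# on the source node - configured RAM of the VM, in MB:
host07:~# qm config 107 | grep '^memory:'
memory: 40960
# on the target node - RAM actually available, in MB:
host10:~# free -m | awk '/^Mem:/ {print $7}'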

I'll have to add a lot more memory to the server or build a new server that has more memory. Now I know how to get answers to these migration issues.

Thanks,
 
Yes, the VM can't be started with only 5G available if it requires 40G.
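
If adding RAM isn't an option right away, one possible workaround (assuming the guest workload can tolerate less memory) would be to shrink the VM's memory before migrating, e.g.:

host07:~# qm set 107 --memory 16384    # example value; may only take full effect after a guest restart
host07:~# qm migrate 107 host10 --online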
 
