KVM online migration problems

Aug 29, 2019
Hi!

When I try to migrate a KVM guest online from one node to another, I get this error:
Code:
2020-04-29 12:27:19 starting migration of VM 5010 to node 'e01-srv-022' (172.20.10.22)
2020-04-29 12:27:19 starting VM 5010 on remote node 'e01-srv-022'
2020-04-29 12:27:22 start remote tunnel
2020-04-29 12:27:23 ssh tunnel ver 1
2020-04-29 12:27:23 starting online/live migration on unix:/run/qemu-server/5010.migrate
2020-04-29 12:27:23 migrate_set_speed: 8589934592
2020-04-29 12:27:23 migrate_set_downtime: 0.1
2020-04-29 12:27:23 set migration_caps
2020-04-29 12:27:23 set cachesize: 2147483648
2020-04-29 12:27:23 start migrate command to unix:/run/qemu-server/5010.migrate
2020-04-29 12:27:24 migration status error: failed
2020-04-29 12:27:24 ERROR: online migrate failure - aborting
2020-04-29 12:27:24 aborting phase 2 - cleanup resources
2020-04-29 12:27:24 migrate_cancel
2020-04-29 12:27:25 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems

On the remote node I can see some information in /var/log/syslog, but I cannot see what the problem is. I attached a file with the syslog output from the remote node.

Are there more logs in other place? Can I enable debug logs?
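(Side note for readers hitting the same issue: beyond /var/log/syslog, the usual places to look on a Proxmox node are the systemd journal and the per-task logs. A sketch of what to check, assuming the VMID 5010 from the log above and default Proxmox paths:)

```shell
# On both the source and the target node, follow the journal while retrying:
journalctl -f -u pvedaemon -u pveproxy

# Per-task logs (including migration tasks) are kept under /var/log/pve/tasks;
# search them for the VMID:
grep -rl 5010 /var/log/pve/tasks/
```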

Thanks

hi,

we need pveversion -v and qm config VMID for starters.

were you able to migrate this VM before?

if yes, when did it stop working?

do you have problems migrating other VMs?
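(For reference, the requested output can be collected on the source node like this, 5010 being the VMID from the task log above:)

```shell
pveversion -v      # package/version overview of the node
qm config 5010     # configuration of the affected guest
```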
 
I can migrate other KVM guests to/from this server without problems; only this one fails. I had never tried to migrate it before, since I created it on this node. I can move this guest's disks from one Ceph pool to another and to local disk, so I don't think the disks are the problem.

pveversion output:
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

qm config output:
Code:
agent: 1,fstrim_cloned_disks=1
boot: cdn
bootdisk: scsi0
cores: 3
description: 2020-03-03 09%3A33%3A03
hotplug: disk,network,usb,memory,cpu
ide2: none,media=cdrom
memory: 19456
name: emulti5010
net0: virtio=86:BE:8A:28:A3:01,bridge=vmbr0,tag=3225
net1: virtio=56:D8:5A:3C:AC:2C,bridge=vmbr0,tag=1055
net2: virtio=3E:8E:83:6B:F6:90,bridge=vmbr0,tag=2055
numa: 1
onboot: 1
ostype: l26
scsi0: ceph-ssd:vm-5010-disk-1,discard=on,size=100G
scsi1: ceph-ssd:vm-5010-disk-0,discard=on,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=3aba0531-1088-41be-bbd0-d0db7df124e0
sockets: 2
vmgenid: c1ebb5c0-4f2b-48fa-ad92-b2a0ea2cac44
 
I needed to stop the guest to work around this problem: after stopping it, I can migrate the VM without any issue.
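(The workaround above amounts to an offline migration; a sketch of the equivalent CLI steps, assuming the VMID and target node from this thread:)

```shell
qm shutdown 5010                # cleanly stop the guest on the source node
qm migrate 5010 e01-srv-022     # offline migration (no --online flag)
qm start 5010                   # run on the target node once migration finished
```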

All these problems started after a network outage during which the host lost connectivity for a few minutes.