Online migration fails with Cloud-Init

sa10 (Renowned Member, joined Feb 6, 2009, Canada)
Hello,

I need to online-migrate all VMs to the second node without downtime, but the migration fails.
I use local storage.
pve-manager/6.1-5/9bf06119 (running kernel: 5.3.13-2-pve)

qm migrate 105 node2 --online --with-local-disks
2020-02-12 00:14:47 use dedicated network address for sending migration traffic (100.10.10.2)
2020-02-12 00:14:47 starting migration of VM 105 to node 'node2' (100.10.10.2)
2020-02-12 00:14:47 found generated disk 'MainPoolFiles:105/vm-105-cloudinit.qcow2' (in current VM config)
2020-02-12 00:14:47 found local disk 'MainPoolFiles:105/vm-105-disk-0.raw' (in current VM config)
2020-02-12 00:14:47 found local disk 'MainPoolFiles:105/vm-105-disk-1.raw' (in current VM config)
2020-02-12 00:14:47 copying local disk images
2020-02-12 00:14:47 ERROR: Failed to sync data - can't live migrate VM with local cloudinit disk. use a shared storage instead
2020-02-12 00:14:47 aborting phase 1 - cleanup resources
2020-02-12 00:14:47 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't live migrate VM with local cloudinit disk. use a shared storage instead

It looks like I have to either move the Cloud-Init disks to shared storage or remove them from the VM configuration.
I would like to remove the Cloud-Init disks before the migration and recreate them afterwards, but I can't find a way to detach/remove the Cloud-Init disk without stopping the virtual machines.
The fastest workaround I have found is to hibernate the VM and then qm resume <vmid>, but that still interrupts the workload, and stopping the virtual machines is not an option.
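
To see which slot the Cloud-Init drive occupies before trying anything, I check the VM configuration (VM 105 here; the slot name on your setup may differ):

qm config 105 | grep -i cloudinit    # shows the slot (e.g. ide2) holding the generated Cloud-Init image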

Could you suggest a solution please?

pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.13-2-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-2
pve-kernel-helper: 6.1-2
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-10
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-2
pve-cluster: 6.1-3
pve-container: 3.0-18
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-1
qemu-server: 6.1-4
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

A similar problem was reported here:
https://bugzilla.proxmox.com/show_bug.cgi?id=1810
 
Removing it via the GUI is currently a pending change, but you can use the CLI to eject it: qm set <vmid> --ide2 none,media=cdrom
Replace 'ide2' with whichever slot your Cloud-Init disk is using.
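
Putting that together with the original goal, a rough end-to-end sequence for VM 105 could look like this. 'ide2' and the storage name 'MainPoolFiles' are only the values visible in the configuration above, and re-adding a drive on an IDE slot may only take effect at the next reboot, since IDE does not support hot-plug:

qm set 105 --ide2 none,media=cdrom                 # eject the generated Cloud-Init image so no local cloudinit disk is referenced
qm migrate 105 node2 --online --with-local-disks   # live-migrate the VM together with its local data disks
qm set 105 --ide2 MainPoolFiles:cloudinit          # on node2: recreate the Cloud-Init drive on the target storage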
 
