PVE 8.2.4: cannot live-migrate VM with cloud-init drive and multiple NICs

pixel

Renowned Member
Aug 6, 2014
This is on PVE 8.2.4, no-subscription repo (testing for work, where we do have a subscription).
When making cloud-init clones, they can be live-migrated as long as they have only one NIC. With more NICs, we get this error:

Code:
root@pve1:~# qm migrate 104 pve2 --online
2024-09-16 22:03:56 ERROR: migration aborted (duration 00:00:00): target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section
migration aborted

Once the cloud-init drive is removed, VMs can be live-migrated again.
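Based on that observation, a workaround sketch (not verified here) would be to detach the cloud-init drive before migrating and re-create it on the target afterwards. VMID 104, target node pve2, drive slot ide2 and the "ceph" storage are taken from this thread; the re-create step is my assumption:

```shell
# Workaround sketch: drop the cloud-init cdrom so the migration no longer
# trips over the cloudinit section, then re-create the drive on the target.
# Run on the source node; note the re-created drive loses pending changes.
qm set 104 --delete ide2                   # detach the cloud-init drive
qm migrate 104 pve2 --online               # per the report, this now succeeds
ssh pve2 qm set 104 --ide2 ceph:cloudinit  # re-create the drive on the target
```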
 
When making cloud init clones, they can be live migrated as long as they only have one nic. With more nics, we get this error
I tested this here on my Ceph cluster and it works normally. My cloud-init template has 3 NICs.

target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section
migration aborted

This message confuses me a little. Are both Proxmox servers on the same version? Check with pveversion -v.
 
I also tested with Ceph, but my cloud-init template only has one NIC; the two other NICs were added after the clone's initial boot.
The web UI shows blank entries for those NICs in the cloud-init data, but qm config does not. pvesh get .../cloudinit only shows the added NICs, even though get .../config shows the ipconfig line for net0. This may have something to do with why Proxmox gets confused. Offline migration is also blocked.
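The mismatch is easy to spot from a shell: the added NICs have net entries but no matching ipconfig entries. A sketch, using a sample config copied from this thread in place of the real /etc/pve/qemu-server/104.conf:

```shell
# Sketch: for each netX in a VM config, report whether a matching ipconfigX
# line exists. The sample file stands in for /etc/pve/qemu-server/104.conf.
cat > /tmp/104-sample.conf <<'EOF'
ipconfig0: ip=192.168.124.73/24,gw=192.168.124.1
net0: virtio=BC:24:11:6B:AC:6D,bridge=vmbr0
net1: virtio=BC:24:11:B2:9A:32,bridge=vmbr1,tag=3
net2: virtio=BC:24:11:49:1C:40,bridge=vmbr1,tag=4
EOF
for n in $(grep -oP '^net\K[0-9]+' /tmp/104-sample.conf); do
  if grep -q "^ipconfig$n:" /tmp/104-sample.conf; then
    echo "net$n: has ipconfig$n"
  else
    echo "net$n: no ipconfig$n (blank entry in the cloud-init view)"
  fi
done
```

On this sample it flags net1 and net2, matching the blank web UI entries above.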
Proxmox versions are the same.

Code:
[user@proxmox-controller ~]$ diff <(ssh pve1 pveversion -v) <(ssh pve2 pveversion -v)
[user@proxmox-controller ~]$
$ ssh pve1 pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-1
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 18.2.2-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.2
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.2-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

Code:
pvesh get /nodes/pve1/qemu/104/cloudinit --noborder
key     delete pending                                     value
net1           virtio=BC:24:11:B2:9A:32,bridge=vmbr1,tag=3
net2           virtio=BC:24:11:49:1C:40,bridge=vmbr1,tag=4
sshkeys

pvesh get /nodes/pve1/qemu/104/config --noborder
key value
boot c
bootdisk scsi0
cores 2
digest c59a9c2bc511e571f63a600634654e0b0b167d09
ide2 ceph:vm-104-cloudinit,media=cdrom,size=4M
ipconfig0 ip=192.168.124.73/24,gw=192.168.124.1
memory 2048
name eve
net0 virtio=BC:24:11:6B:AC:6D,bridge=vmbr0
net1 virtio=BC:24:11:B2:9A:32,bridge=vmbr1,tag=3
net2 virtio=BC:24:11:49:1C:40,bridge=vmbr1,tag=4
numa 0
scsi0 ceph:vm-104-disk-0,discard=on,size=10G
scsi1 ceph:vm-104-disk-1,size=20G
scsihw virtio-scsi-pci
serial0 socket
smbios1 uuid=9f65d84c-490f-44bd-8f74-657c6e3f524f
sockets 1
sshkeys
vga serial0
vmgenid d69a4b94-6ce0-448a-9f2d-ae1968404ee6

Template config:

Code:
boot: c
bootdisk: scsi0
cores: 2
ide2: ceph:vm-100-cloudinit,media=cdrom
memory: 2048
meta: creation-qemu=9.0.2,ctime=1726417016
name: ubuntu24
net0: virtio=BC:24:11:40:B5:90,bridge=vmbr0
numa: 0
scsi0: ceph:base-100-disk-0,discard=on,size=10G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=d6847588-46a5-46f2-bfdd-84ea08d9dc37
sockets: 1
template: 1
vga: serial0
vmgenid: 1bbe2166-465e-47c7-816b-f9ef07c4b323

Clone that can migrate (only one NIC):

Code:
[user@proxmox-controller ~]$ ssh pve1 qm config 103
boot: c
bootdisk: scsi0
cores: 2
ide2: ceph:vm-103-cloudinit,media=cdrom,size=4M
ipconfig0: ip=192.168.124.72/24,gw=192.168.124.1
memory: 2048
meta: creation-qemu=9.0.2,ctime=1726417016
name: bob
net0: virtio=BC:24:11:0D:E5:24,bridge=vmbr0
numa: 0
scsi0: ceph:vm-103-disk-0,discard=on,size=10G
scsi1: ceph:vm-103-disk-1,size=60G
scsi2: ceph:vm-103-disk-2,size=60G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=4780f873-0dbb-44ba-9c33-e20a388c3a10
sockets: 1
sshkeys:
vga: serial0
vmgenid: 28d33628-4e2f-4098-9877-d68f236a288b

Clone that can't migrate (three NICs):

Code:
boot: c
bootdisk: scsi0
cores: 2
ide2: ceph:vm-104-cloudinit,media=cdrom,size=4M
ipconfig0: ip=192.168.124.73/24,gw=192.168.124.1
memory: 2048
meta: creation-qemu=9.0.2,ctime=1726417016
name: eve
net0: virtio=BC:24:11:6B:AC:6D,bridge=vmbr0
net1: virtio=BC:24:11:B2:9A:32,bridge=vmbr1,tag=3
net2: virtio=BC:24:11:49:1C:40,bridge=vmbr1,tag=4
numa: 0
scsi0: ceph:vm-104-disk-0,discard=on,size=10G
scsi1: ceph:vm-104-disk-1,size=20G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=9f65d84c-490f-44bd-8f74-657c6e3f524f
sockets: 1
sshkeys:
vga: serial0
vmgenid: d69a4b94-6ce0-448a-9f2d-ae1968404ee6
 