migration aborted (duration 00:00:00): target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section

Upgraded one of our production clusters this weekend as we were adding a new host.

Went to live migrate a VM from one node to another and got:

2023-05-01 08:52:32 ERROR: migration aborted (duration 00:00:00): target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section

Source host:

Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-network-perl: 0.7.3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Target host:
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.104-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-network-perl: 0.7.3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Both hosts report identical package versions, so I don't understand why the target is considered too old. What should I do?
 
is "pvestatd" running correctly on both nodes? could you try restarting it?
 
Hey all,

The journal looks like this on all six nodes in the cluster (the timestamps correspond to when I ran the upgrade):

Code:
-- Journal begins at Mon 2022-12-19 06:05:25 CST, ends at Tue 2023-05-02 17:07:39 CDT. --
Apr 30 15:55:10 node444 systemd[1]: Reloading PVE Status Daemon.
Apr 30 15:55:10 node444 pvestatd[625311]: send HUP to 2840
Apr 30 15:55:10 node444 pvestatd[2840]: received signal HUP
Apr 30 15:55:10 node444 pvestatd[2840]: server shutdown (restart)
Apr 30 15:55:10 node444 systemd[1]: Reloaded PVE Status Daemon.
Apr 30 15:55:11 node444 pvestatd[2840]: restarting server
Apr 30 16:41:01 node444 pvestatd[2840]: local sdn network configuration is too old, please reload

I restarted the service on several of them, but ended up moving the VM via PBS instead, since I have to schedule a maintenance window to re-test the live migration. I failed to mention in my original post that when I attempted the live migration and got the failure, something caused the BGP sessions to my ToRs to flap (we use SDN), knocking a lot of things offline for about a minute (this is a production cluster serving ~10K individual customers).
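For anyone hitting the same "local sdn network configuration is too old, please reload" journal warning, a rough sketch of the reload from the CLI looks like the following (the pvesh call is, as far as I know, equivalent to the 'Apply' button under Datacenter -> SDN; verify on a non-production cluster first):

Code:
# restart the status daemon that logged the warning
systemctl restart pvestatd.service

# re-apply the SDN configuration cluster-wide (equivalent to the
# GUI 'Apply' under Datacenter -> SDN)
pvesh set /cluster/sdn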
 
Hi,

I had a similar issue when migrating from 7.3.3 to 8.0.3.

Rebooting the 8.0.3 node (which had been updated recently) changed how the problem presents.

Now I receive:

Code:
2023-08-09 16:22:10 ERROR: migration aborted (duration 00:00:00): internal error: cannot check version of invalid string '8.0.4' at /usr/share/perl5/PVE/QemuServer/Helpers.pm line 186.

TASK ERROR: migration aborted

but not for all VMs.

This VM can be migrated:

Code:
agent: 1
boot: order=sata0
cipassword: [removed]
citype: nocloud
ciuser: root
cores: 2
cpu: cputype=kvm64,flags=+pdpe1gb;+aes
ide0: hpvc1-fra1-nvme-sc1:vm-252-cloudinit,media=cdrom,size=4M
ide2: none,media=cdrom
ipconfig0: [removed]
keyboard: en-us
kvm: 1
machine: pc
memory: 8192
meta: creation-qemu=6.1.1,ctime=1650810644
migrate_downtime: 2
name: eaze
net0: e1000=16:10:de:c5:f5:da,bridge=vmbr0,firewall=1,link_down=0,rate=125
numa: 0
onboot: 1
ostype: l26
sata0: hpvc1-fra1-nvme-sc1:base-197-disk-0/vm-252-disk-0,iops_rd=1000,iops_wr=1000,mbps_rd=976,mbps_wr=976,size=80G
scsihw: lsi
serial0: socket
smbios1: uuid=1b7cf92d-6c6e-4f91-ab11-d200219e2fd9
sockets: 1
vmgenid: bfc431eb-e012-48aa-bc53-897352d68b34

while this one cannot be migrated:


Code:
agent: 1
boot: order=sata0
cipassword: [removed]
citype: nocloud
ciuser: root
cores: 2
cpu: cputype=kvm64,flags=+pdpe1gb;+aes
ide0: hpvc1-fra1-nvme-sc1:vm-133-cloudinit,media=cdrom,size=4M
ide2: none,media=cdrom
ipconfig0: [removed]
keyboard: en-us
kvm: 1
machine: pc
memory: 8192
meta: creation-qemu=6.1.1,ctime=1650810644
migrate_downtime: 2
name: marianooo
net0: e1000=16:f9:f1:61:10:df,bridge=vmbr0,firewall=1,link_down=0,rate=125
numa: 0
onboot: 1
ostype: l26
sata0: hpvc1-fra1-nvme-sc1:base-197-disk-0/vm-133-disk-0,iops_rd=1000,iops_wr=1000,mbps_rd=976,mbps_wr=976,size=80G
scsihw: lsi
serial0: socket
smbios1: uuid=f935b33c-f3b0-4a27-aed7-8d337993f944
sockets: 1
vmgenid: d75e4195-a882-4370-ae71-4a09d8e098f1

[special:cloudinit]
cipassword: [removed]
ipconfig0: [removed]
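
The only structural difference from the working VM is this pending [special:cloudinit] section, which seems to be the "new cloudinit section" the original error message refers to. If I understand the tooling correctly, the pending values can be inspected, and applied by regenerating the cloud-init drive (shown here for VM 133; please verify before running against production):

Code:
# show current and pending cloud-init values for VM 133
qm cloudinit pending 133

# regenerate the cloud-init drive, applying the pending values
qm cloudinit update 133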

This migration had already worked before the reboot of the 8.0.3 server.

So, is there any way to get around this?

Greetings
Oliver
 
I have also encountered this problem. How should I solve it?
2025-03-04 16:36:55 ERROR: migration aborted (duration 00:00:00): target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section
TASK ERROR: migration aborted
This is my host version:

Code:
root@tjdev02-node2090111:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-8-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-8
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.4
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-3
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
 
Hi,
Please upgrade to the latest version; this check is outdated nowadays and was removed in qemu-server >= 8.2.8. Otherwise, you can try running systemctl restart pvestatd.service on the migration target node. This should refresh the version information.
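For example, on the migration target node (assuming the standard Proxmox package repositories are configured):

Code:
# upgrade, pulling in qemu-server >= 8.2.8, where the check was removed
apt update && apt full-upgrade

# confirm the installed qemu-server version
pveversion -v | grep qemu-server

# alternatively, refresh the cached version information without upgrading
systemctl restart pvestatd.service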