migration aborted (duration 00:00:00): target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section

Upgraded one of our production clusters this weekend as we were adding a new host.

Went to live migrate a VM from one node to another and got:

2023-05-01 08:52:32 ERROR: migration aborted (duration 00:00:00): target node is too old (manager <= 7.2-13) and doesn't support new cloudinit section
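
As far as I understand it, the "new cloudinit section" the check refers to is the pending cloud-init data that newer qemu-server versions store in a [special:cloudinit] block inside the VM config. A quick, hedged way to see whether the VM in question carries such a block (the VMID 100 below is just a placeholder):

Code:
grep -A 5 '^\[special:cloudinit\]' /etc/pve/qemu-server/100.conf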

Source host:

Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-network-perl: 0.7.3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Target host:
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.104-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-network-perl: 0.7.3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

What should I do?
 
is "pvestatd" running correctly on both nodes? could you try restarting it?
 
Hey all,

On all six nodes in the cluster, the pvestatd journal shows the following (the timestamps line up with when I ran the upgrade):

Code:
-- Journal begins at Mon 2022-12-19 06:05:25 CST, ends at Tue 2023-05-02 17:07:39 CDT. --
Apr 30 15:55:10 node444 systemd[1]: Reloading PVE Status Daemon.
Apr 30 15:55:10 node444 pvestatd[625311]: send HUP to 2840
Apr 30 15:55:10 node444 pvestatd[2840]: received signal HUP
Apr 30 15:55:10 node444 pvestatd[2840]: server shutdown (restart)
Apr 30 15:55:10 node444 systemd[1]: Reloaded PVE Status Daemon.
Apr 30 15:55:11 node444 pvestatd[2840]: restarting server
Apr 30 16:41:01 node444 pvestatd[2840]: local sdn network configuration is too old, please reload

I restarted the service on many of them, but ended up moving the VM via PBS instead, since I will have to schedule a maintenance window to re-test the live migration. I failed to mention in my original post that when I attempted the live migration and got the failure, something caused the BGP sessions to my ToRs to flap (we use SDN), knocking a lot of things offline for about a minute (this is a production cluster serving roughly 10K individual customers).
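
For what it's worth, my understanding is that the "local sdn network configuration is too old, please reload" message should clear once the SDN configuration is re-applied; this is roughly what I plan to try during the maintenance window (the pvesh call is the one from the SDN chapter of the docs, ifreload comes with ifupdown2):

Code:
# re-apply the cluster-wide SDN configuration (same as SDN -> Apply in the GUI)
pvesh set /cluster/sdn
# reload the local network configuration in place (ifupdown2)
ifreload -a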
 
Hi,

I had a similar issue when migrating from 7.3.3 to 8.0.3.

Rebooting the 8.0.3 node (which had been updated recently) changed the error into a different one.

Now I receive:

Code:
2023-08-09 16:22:10 ERROR: migration aborted (duration 00:00:00): internal error: cannot check version of invalid string '8.0.4' at /usr/share/perl5/PVE/QemuServer/Helpers.pm line 186.

TASK ERROR: migration aborted

but not for all VMs.

This VM can be migrated:

Code:
agent: 1
boot: order=sata0
cipassword: [removed]
citype: nocloud
ciuser: root
cores: 2
cpu: cputype=kvm64,flags=+pdpe1gb;+aes
ide0: hpvc1-fra1-nvme-sc1:vm-252-cloudinit,media=cdrom,size=4M
ide2: none,media=cdrom
ipconfig0: [removed]
keyboard: en-us
kvm: 1
machine: pc
memory: 8192
meta: creation-qemu=6.1.1,ctime=1650810644
migrate_downtime: 2
name: eaze
net0: e1000=16:10:de:c5:f5:da,bridge=vmbr0,firewall=1,link_down=0,rate=125
numa: 0
onboot: 1
ostype: l26
sata0: hpvc1-fra1-nvme-sc1:base-197-disk-0/vm-252-disk-0,iops_rd=1000,iops_wr=1000,mbps_rd=976,mbps_wr=976,size=80G
scsihw: lsi
serial0: socket
smbios1: uuid=1b7cf92d-6c6e-4f91-ab11-d200219e2fd9
sockets: 1
vmgenid: bfc431eb-e012-48aa-bc53-897352d68b34

while this one cannot be migrated:


Code:
agent: 1
boot: order=sata0
cipassword: [removed]
citype: nocloud
ciuser: root
cores: 2
cpu: cputype=kvm64,flags=+pdpe1gb;+aes
ide0: hpvc1-fra1-nvme-sc1:vm-133-cloudinit,media=cdrom,size=4M
ide2: none,media=cdrom
ipconfig0: [removed]
keyboard: en-us
kvm: 1
machine: pc
memory: 8192
meta: creation-qemu=6.1.1,ctime=1650810644
migrate_downtime: 2
name: marianooo
net0: e1000=16:f9:f1:61:10:df,bridge=vmbr0,firewall=1,link_down=0,rate=125
numa: 0
onboot: 1
ostype: l26
sata0: hpvc1-fra1-nvme-sc1:base-197-disk-0/vm-133-disk-0,iops_rd=1000,iops_wr=1000,mbps_rd=976,mbps_wr=976,size=80G
scsihw: lsi
serial0: socket
smbios1: uuid=f935b33c-f3b0-4a27-aed7-8d337993f944
sockets: 1
vmgenid: d75e4195-a882-4370-ae71-4a09d8e098f1

[special:cloudinit]
cipassword: [removed]
ipconfig0: [removed]
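
As far as I can tell, the [special:cloudinit] block holds cloud-init changes that are still pending, i.e. not yet written to the cloud-init drive. Assuming a qemu-server recent enough to have the cloudinit subcommands, something like this should show and then apply them (VMID 133 taken from the config above):

Code:
qm cloudinit pending 133
qm cloudinit update 133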

This migration had already worked before the reboot of the 8.0.3 server.

So, is there any way to get around this?

Greetings
Oliver
 
Hi,

It seems this was solved by updating the 7.3.3 install, as described in the

Proxmox Bugtracker

And yes, the VM in question now migrates without issues :)

I hope all of them will.
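
For anyone else hitting the "cannot check version of invalid string" error: the fix here was simply bringing the older node fully up to date. The usual sequence, assuming the repositories are already configured:

Code:
apt update
apt full-upgrade
# confirm the relevant package versions afterwards
pveversion -v | grep -E 'pve-manager|qemu-server'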
 
