Template with Cloud-Init disk does not work properly after upgrade.

onegreyonewhite
Nov 8, 2019
I followed the Cloud-Init Support manual.

After the update, creating new instances from a template does not clone the cloud-init disk, which leaves the new instance non-operational.
The workaround for this problem is to re-create the cloud-init disk on each new instance.
BTW, is this a bug or a feature?

Code:
proxmox-ve: 6.0-2 (running kernel: 5.3.7-1-pve)
pve-manager: 6.0-11 (running version: 6.0-11/2140ef37)
pve-kernel-5.3: 6.0-11
pve-kernel-helper: 6.0-11
pve-kernel-5.0: 6.0-10
pve-kernel-5.3.7-1-pve: 5.3.7-1
pve-kernel-5.0.21-4-pve: 5.0.21-8
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.4-pve1
ceph-fuse: 14.2.4-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-3
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-6
libpve-guest-common-perl: 3.0-2
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-10
pve-docs: 6.0-8
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-4
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-4
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-13
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
I also have this issue, which is not convenient.
In addition, the cloned VM says 'No CloudInit Drive found' in the GUI.

So should we open an issue against qemu-server, since reverting solved the problem?
The bug tracker for qemu is here: https://bugs.launchpad.net/qemu/
EDIT: where are the bug tracker and code for qemu actually hosted? This Launchpad seems abandoned...
 
I have this issue, and 'not convenient' is an understatement. It broke my Ansible-based creation and provisioning of VMs.
Downgrading to qemu-server 6.0-9 did not work for me.
Upgrading the kernel to 5.3.7-1 did not work either.
Where is this issue being tracked? Any workarounds?
 
Had to downgrade qemu-server to 6.0-5.
This is the workaround that works for me for now.
I can also confirm that qemu-server 6.0-5 works (6.0-9 and 6.0-13 do not load cloud images correctly).

VMs created on the downgraded host with qemu-server 6.0-5 boot just fine after migrating them to another host running the latest qemu-server 6.0-13.

NB: the command to downgrade is `apt install qemu-server=6.0-5`
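To keep apt from pulling the package back up on the next upgrade, it can also be held - a minimal sketch, assuming a standard PVE 6.x apt setup (an alternative to the pinning file shown further down):

Code:
# downgrade to the known-good version, then keep apt from upgrading it again
apt install qemu-server=6.0-5
apt-mark hold qemu-server
# once the fix is released:
apt-mark unhold qemu-server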
 
I have this issue, and 'not convenient' is an understatement. It broke my Ansible-based creation and provisioning of VMs.
Downgrading to qemu-server 6.0-9 did not work for me.
Upgrading the kernel to 5.3.7-1 did not work either.
Where is this issue being tracked? Any workarounds?

Did you read this thread? I already mentioned the issue in post #4.

As you needed to downgrade to 6.0-5, your issue might be a different one, though. For me, downgrading to 6.0-9 was enough.
 
To prevent upgrading to a newer version of qemu-server until the issue is fixed, you can use apt pinning. Example:

Code:
root@tuxmaster:~# cat /etc/apt/preferences.d/qemu-server
Explanation: Otherwise cloned VMs do not have a cloud-init volume anymore, 12.11.2019
Explanation: Bug 2217 - don't copy cloudinit disk on VM clone
Explanation: https://bugzilla.proxmox.com/show_bug.cgi?id=2217
Package: qemu-server
Pin: version 6.0-9
Pin-Priority: 30000

Adapt the version number in case you need to downgrade to 6.0-5, and word the comment as you like.
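A quick sanity check that the pin is active - the Candidate line should now show the pinned version:

Code:
apt-cache policy qemu-server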
 
I upgraded my Proxmox host today to the latest Community Edition, and I can still use qemu-server 6.0-9 without downgrading to 6.0-5. Cloud-Init is working with Debian 10, Devuan 3, CentOS 7 + 8, SLES 15 SP1 and SLES 12 SP4 images.
 
FWIW, normally one should be able to work around this by adding a cloud-init drive again and then regenerating the image (this should happen automatically on start, but just to be sure). The cloud-init config is still there; only the disk is missing.
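A minimal sketch of that workaround (VMID 9100 and the storage name local-lvm are placeholders - adapt them to your clone):

Code:
# re-attach a cloud-init drive; the still-present cloud-init config is reused
qm set 9100 --ide2 local-lvm:cloudinit
# starting the VM regenerates the cloud-init image
qm start 9100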
 
And when will it be in `pve-no-subscription`? We are very much waiting for it.

I'd say Monday. The update is coupled with a few other package updates - as there were some cross-package changes, we need to move them all together, so I'm a bit reluctant to do a "what could go wrong Fridays" move and then go home :)

You can switch to pvetest, do the update, and then switch back to -no-subscription if you want to have the packages earlier.
This way, only that specific update set will be pulled from pvetest, and in the future you are again on the more stable no-subscription.
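For reference, a sketch of that temporary switch on PVE 6.x / Debian Buster (the file name is just an example):

Code:
# temporarily add the pvetest repository
echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update && apt full-upgrade
# then drop it again so future updates come from pve-no-subscription
rm /etc/apt/sources.list.d/pvetest.list && apt update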
 
Code:
TASK ERROR: clone failed: can't get size of '/dev/pve/vm-9100-cloudinit':   Failed to find logical volume "pve/vm-9100-cloudinit"

That always happens when I clone a template whose cloud-init disk is placed on a different storage. You're such hardcoders :eek::D
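To see the mismatch, one can compare which storage the template config references for the cloud-init drive with the volumes that actually exist there (VMID and storage name are placeholders):

Code:
# which storage does the template reference for the cloudinit drive?
qm config 9100 | grep cloudinit
# which volumes actually exist on that storage?
pvesm list <storagename>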

Code:
proxmox-ve: 6.0-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.0-15 (running version: 6.0-15/52b91481)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.3.7-1-pve: 5.3.7-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.4-pve1
ceph-fuse: 14.2.4-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-4
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-8
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-11
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-9
pve-cluster: 6.0-9
pve-container: 3.0-13
pve-docs: 6.0-9
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-8
pve-firmware: 3.0-4
pve-ha-manager: 3.0-5
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-16
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
That always happens when I clone a template whose cloud-init disk is placed on a different storage. You're such hardcoders :eek::D
Hmm, I have a VM with the disk on LVM-Thin and the cloud-init disk on ZFS, and it works like a charm.
Is this a cross-node clone, where the nodes have different storage definitions or the like?
 
