cloudinit disks not cleaned up

May 17, 2023
Hi

We are deploying cloud-init images from Terraform (telmate/proxmox) by cloning a template in Proxmox that is already configured with cloud-init.
The new machine is created with the next available VMID, and a small 4 MB disk is created to feed the cloud-init settings. The disk created for cloud-init is called vm-{VMID}-cloudinit.
When the cloud-init VM is deleted, it seems the small disk is not deleted and now lives on with no associated VM.

We did a lot of testing with Terraform, and therefore created and deleted a lot of machines. The VMID counter has now come back around to IDs where orphaned cloud-init disks are still lying around, and we cannot create new cloud-init VMs because a disk with that name already exists.

My suspicion is that terraform destroy is not deleting the cloud-init disk: the disk is created solely by Proxmox, so Terraform does not know about it, and we end up with all these leftover disks.

The storage is on Ceph, so I don't have access to an actual filesystem. How do I manually delete these disks?

Any experiences or tips on how to prevent this?

Best regards, Kasper
 
"pvesm free VOLID" should work - but please double check that you only run it on orphaned disks, it works on the storage layer and does not check whether a corresponding guest exists!
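For example, a minimal sketch of the workflow (the storage name "ceph_storage" and VMID 123 are made-up placeholders; substitute the VOLID you see in pvesm list):

```shell
# Inspect the cloud-init volumes first, then free one explicitly.
# The cloud-init volume ID has the form <storage>:vm-<VMID>-cloudinit.
volid="ceph_storage:vm-123-cloudinit"   # placeholder VOLID
echo "pvesm free $volid"                # drop the echo once you are sure the disk is orphaned
```

The echo is just a dry-run guard: verify the printed command targets the right volume before running it for real.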

how is terraform cleaning up the VMs (either the requests made from terraform logs, or at least the pveproxy access log with the API endpoints might give a clue)? which PVE version are you using (pveversion -v)?
 
Hi Fabian

Thank you for the quick reply.

root@maahcprox01:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)

I think Terraform is cleaning up based on the plan file, and this does not include the cloud-init disk, as that is created by the Proxmox cloud-init integration.

How can I check if a file is "orphaned" before I delete it?

Luckily, pvesm list ceph_storage tells me the size, so I can assume that all 4 MB disks belong to cloud-init, and it shouldn't be a disaster if I delete a wrong one.

BR Kasper
 
I think Terraform is cleaning up based on the plan file, and this does not include the cloud-init disk, as that is created by the Proxmox cloud-init integration.
yeah, but the question is how does it clean up? it must make some sort of request or command, and normally, the one for destroying a VM should also clean up all referenced disks ;)

How can I check if a file is "orphaned" before I delete it?
if no corresponding guest exists in the cluster, it is orphaned
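The comparison itself can be sketched with grep -vwf on two lists (the VMIDs below are sample data only; on a real cluster they would come from pvesh and pvesm):

```shell
# Sketch with sample data: vmids holds every VMID known to the cluster,
# init_vmids holds the VMIDs parsed out of cloud-init volume names.
printf '100\n101\n102\n' > vmids
printf '100\n101\n105\n' > init_vmids
# Print VMIDs whose volume exists but whose guest does not (whole-word match).
grep -vwf vmids init_vmids   # -> 105
```

-w avoids partial matches (e.g. VMID 10 matching 100), and -v inverts so only unmatched, i.e. orphaned, VMIDs are printed.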
 
I made a small script called findorphaninitfile.sh containing the following:

#!/bin/bash
# List every VMID known to the cluster
pvesh get cluster/resources --type vm --output-format json-pretty | jq -r '.[] | .vmid' > vmids
# Extract the VMID column from the cloud-init volumes on the storage
pvesm list ceph_storage | grep cloudinit | cut -d " " -f 35 | grep -v '^$' > initfiles
# Print any cloud-init VMID that has no matching guest
grep -vwf vmids initfiles

Any output would indicate an orphaned disk.

BR Kasper
 
