Added disks recover data from previous VMs, and wiping the disk is too slow

ludo76

New Member
Feb 15, 2024
I have a Proxmox cluster with several storage types (LVM thin, Ceph, ZFS raid1). I use Terraform for provisioning with the bpg/proxmox provider, with an extra disk on "local-disk".
I notice that terraform destroy followed by terraform apply correctly produces a new VM whose root disk is cloned from a template. BUT the added disks come back already formatted, because they recover the disk image of the previously destroyed VM. This happens even if I destroy manually and force-delete the unreferenced disks. Another thread: https://forum.proxmox.com/threads/z...isk-is-being-removed-takes-a-long-time.93755/
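
As far as I understand, plain LVM does not zero newly allocated logical volumes, so a fresh LV can land on the same extents as a removed one and still expose its data. A quick test that I think would reproduce this (the LV name stale-test is just an example):

root@totoddc1:~# lvcreate -L 1G -n stale-test local-disk
root@totoddc1:~# hexdump -C /dev/local-disk/stale-test | head   # leftover data from old volumes may show up here
root@totoddc1:~# lvremove -y local-disk/stale-test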

I notice that a wipe-on-remove option (saferemove) exists on this storage type, and it seems to work correctly, BUT the write speed appears to be limited to 10 MB/s, so even a small disk takes days to be deleted.
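
If I read storage.cfg(5) correctly, the LVM plugin has a saferemove_throughput option (passed to cstream -t, and it seems to default to -10485760, i.e. roughly 10 MB/s), which would explain that limit. Would raising it like this be the right fix? The value below (~200 MB/s) is just an example:

root@totoddc1:~# pvesm set local-disk --saferemove_throughput -209715200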

So, what is the best way to use Terraform and avoid recovering a previous disk: dd if=/dev/zero from inside the VM before destroy? Changing the wipe-removal speed? Or another way?
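
For reference, these are the two variants I am considering; the device and volume names are hypothetical, not from my setup:

# inside the guest, before terraform destroy (assuming the extra disk is /dev/sdb):
dd if=/dev/zero of=/dev/sdb bs=4M oflag=direct status=progress
# or on the PVE host, zero the LV before removal (vm-100-disk-1 is a placeholder):
blkdiscard -z /dev/local-disk/vm-100-disk-1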

PS: pvs, after deleting the VM and its disk, does not show the previous image, but a new provisioning gets exactly the same data image ???
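
(If I understand correctly, pvs only lists physical volumes, so it would not show disk images anyway. This is what I run to check for leftovers:)

root@totoddc1:~# lvs local-disk          # the logical volumes, i.e. the actual disk images
root@totoddc1:~# pvesm list local-disk   # the volumes Proxmox itself still references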

Thank you,

My config:
root@totoddc1:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph: 17.2.7-pve3
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
dnsmasq: 2.89-1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2


root@totoddc1:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content snippets,backup,iso,vztmpl
shared 0

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
nodes totoddc1,totoddc3,totoddc2

rbd: ssd-ceph
content images,rootdir
krbd 0
pool ssd-ceph

rbd: hdd-ceph
content rootdir,images
krbd 0
pool hdd-ceph

cephfs: cephfs
path /mnt/pve/cephfs
content backup,snippets,iso,vztmpl
fs-name cephfs

lvm: local-disk
vgname local-disk
content images,rootdir
nodes totoddc4,totoddc1,totoddc2,totoddc3
saferemove 1
shared 0

zfspool: raid1
pool raid1
content images,rootdir
mountpoint /raid1
nodes totoddc4,totoddc2,totoddc3,totoddc1
 
