New VMs are retaining lvm data from previously removed VMs

FreePenCollector

Somewhere, some data is being saved; I'm just not 100% sure where that would be.

A little while back I was restoring VM backups, one of which I cancelled as shown:

Code:
/var/log/pve/tasks# cat B/UPID:labhost:00041992:01758D18:5453CAAB:qmrestore:130:xxx@ad:
restore vma archive: zcat /labbackups/dump/vzdump-qemu-134-2014_10_17-10_51_08.vma.gz|vma extract -v -r /var/tmp/vzdumptmp268690.fifo - /var/tmp/vzdumptmp268690
CFG: size: 278 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-ide0
CTIME: Fri Oct 17 10:51:10 2014
  Logical volume "vm-130-disk-1" created
new volume ID is 'lvmGroupLAB:vm-130-disk-1'
map 'drive-ide0' to '/dev/xxxVolGrp/vm-130-disk-1' (write zeros = 1)
progress 1% (read 343605248 bytes, duration 3 sec)
progress 2% (read 687210496 bytes, duration 7 sec)
progress 3% (read 1030815744 bytes, duration 12 sec)

This was an Ubuntu 14.04 server VM. I'm finding now that when I create new VMs and try to install Ubuntu on them, the new "vm-###-disk" has VG and LV data that corresponds to the VM I was restoring on Oct 17th. This causes the install to fail with: "Because the volume group(s) on the selected device consist of physical volumes on other devices, it is not considered safe to remove its LVM data automatically. If you wish to use this device for partitioning, please remove its LVM data first."

There are ways around this (one rough sketch is below), but newly created VMs shouldn't be retaining info from old VMs or cancelled qmrestores.
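
For example, a minimal sketch, assuming the redacted VG name from the log above and using vm-XXX-disk-1 as a placeholder for whichever new disk is affected: blank the start of the freshly created LV from the Proxmox host before booting the installer, so the guest sees a clean disk.

Code:
# Run on the Proxmox host while the new VM is powered off.
# /dev/xxxVolGrp/vm-XXX-disk-1 is a placeholder; substitute your VG and disk names.
LV=/dev/xxxVolGrp/vm-XXX-disk-1

# Erase any partition-table/filesystem/LVM signatures visible on the LV itself ...
wipefs --all "$LV"

# ... and/or bluntly zero the first 16 MiB so the installer finds a blank disk.
dd if=/dev/zero of="$LV" bs=1M count=16

This only touches the beginning of the volume, so old data deeper in may survive, but with no partition table left the installer shouldn't stumble over it.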

I've tried updating Proxmox and rebooting to clear whatever file this data is being retained in, but with no success. Where could I look?
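
My best guess (only a guess) is that it isn't a file at all: the previous guest's partition table and LVM metadata are probably still sitting on the physical extents the new LV happens to reuse, since lvcreate only zeroes the first few KiB of a new volume. A quick way to check from the host, again with a placeholder LV path:

Code:
# Inspect a freshly created, not-yet-installed disk from the Proxmox host.
LV=/dev/xxxVolGrp/vm-XXX-disk-1   # placeholder: the new VM's disk

# Confirm the LV exists and check its size.
lvdisplay "$LV"

# See what partition table or filesystem the old contents still present.
file -s "$LV"

# LVM metadata is stored as plain text, so the old guest's VG/LV names should
# show up in a strings dump of the first few MiB if they are still there.
dd if="$LV" bs=1M count=8 2>/dev/null | strings | grep -i -B1 -A3 'lvm2' | head -n 40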

As it's likely to be requested:

Code:
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-33-pve: 2.6.32-138
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-1
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 
Well, this got a lot of attention.

For anyone who googles this in the future, a workaround for this bug is to restore a VM that uses a different OS (and partitioning scheme), e.g. Windows (an example restore is below). After that, any newly created VMs should install okay. It must clear or overwrite the bad data, wherever that is being erroneously saved.
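
In command form, the workaround boils down to restoring any backup with a different partition layout onto the same LVM storage and then discarding the temporary VM. The archive name and VMID 999 below are made-up examples; the dump directory and storage name are the ones from the log earlier in the post:

Code:
# Restore a Windows (or otherwise differently partitioned) backup to a spare VMID
# on the same LVM storage the failing installs use.
qmrestore /labbackups/dump/vzdump-qemu-windows-example.vma.gz 999 --storage lvmGroupLAB

# The restore writes over the old data on the volume group; the temporary VM
# can then be removed before creating the real VM.
qm destroy 999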

However, as I found after a power outage, if your Proxmox server reboots, the bad data will be back and you'll have to perform the bizarre workaround again.

Pretty cool bug. Hopefully a future update will address or clear it; I'm just glad this happened on my lab box.
 
