Unused disk problem with backup/restore

Andrew Hart

Member
Dec 1, 2017
I have backed up and restored a few VMs from another Proxmox host. After the restore completes, the VM's hardware sometimes shows two disks for the same image:
Hard Disk (virtio0) disk_vm:vm-108-disk,size=50G
Unused Disk 0 disk_ct:vm-108-disk-1

The Ceph pool is called "disk", and those are the default storages that were added. I know from trying that removing the "Unused Disk 0" results in a non-booting VM. The VM was never a container.
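
For reference, the same entries can be inspected with 'qm config 108'. Assuming the listing above, the relevant lines in /etc/pve/qemu-server/108.conf would look roughly like this (volume names copied from the GUI listing, otherwise hypothetical):

virtio0: disk_vm:vm-108-disk,size=50G
unused0: disk_ct:vm-108-disk-1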
 
Did you by any chance activate the 'Disk image' content type on the disk_ct storage?

When using our Ceph tools, we create two storage definitions, <POOL>_vm and <POOL>_ct, with the respective content types.

So if disk_ct has the 'Disk image' content type, it sees two disks for your VM: one on disk_vm and one on disk_ct.
Deleting either of them deletes the disk on the Ceph pool.
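
To illustrate, a minimal sketch of how the two storage definitions typically appear in /etc/pve/storage.cfg (pool name taken from this thread; the exact options are assumptions and may differ on your setup):

rbd: disk_vm
        content images
        pool disk
        krbd 0

rbd: disk_ct
        content rootdir
        pool disk
        krbd 1

With that layout, both storages point at the same Ceph pool, so if disk_ct also listed 'images' in its content line, every VM disk on the pool would show up twice, once per storage definition.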
 
No, disk_ct is set to "Container" and disk_vm is set to "Disk image". Also, it has only happened twice, not every time (out of 4 restores so far).
 
Can you post the output of 'pveversion -v'?
This sounds like a bug that we already fixed.
 
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.13-2-pve: 4.13.13-32
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
ceph: 12.2.2-pve1
 
Mhmm, yes, I can reproduce it. Can you please open a bug report?
For the moment, just ignore the unused disk (it does not do anything besides being there).
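
If you want to confirm that only one image actually exists on the pool and that both entries merely reference it, something like the following should work (pool and storage names taken from this thread):

rbd ls disk          # list the RBD images in the 'disk' pool; the VM disk should appear once
pvesm list disk_vm   # volumes visible through the VM storage definition
pvesm list disk_ct   # volumes visible through the CT storage definition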