Proxmox V5.2 - Ceph - Unused disk

TwiX

Hi,

proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
pve-manager: 5.2-3 (running version: 5.2-3/785ba980)
pve-kernel-4.15: 5.2-3
pve-kernel-4.15.17-3-pve: 4.15.17-13
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph: 12.2.5-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-34
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-12
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-29
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9

I built a Ceph pool with the 'Add storages' option checked.

I restored a KVM virtual machine onto the dedicated KVM Ceph storage (ceph_vm).
I noticed that the GUI shows an unused disk on the ceph_ct storage, which is the same disk image!

[Screenshot: Capture.jpg]
 
Can you please post your storage configuration?
 
Contents of ceph_ct and ceph_vm:

[Screenshot: Capture2.jpg]

[Screenshot: Capture3.jpg]


I only have 3 KVM virtual machines on this fresh cluster, and 2 of them show their actual disk image listed as an unused disk on the ceph_ct storage.

[Screenshot: Capture4.jpg]
 
Code:
rbd: ceph_vm
        content images
        krbd 0
        pool ceph

rbd: ceph_ct
        content rootdir
        krbd 1
        pool ceph
 
Try a 'qm rescan'; it may remove the entry from the config. Did the VM already exist on the cluster?
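For example (the VMID 100 is just a placeholder; without --vmid all guests on the node are rescanned):

Code:
# rescan storages and update the guest config (disk sizes, unused disk entries)
qm rescan --vmid 100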
 
Thanks for your reply. I just did it, but the unused disk is still there.

This is a new cluster without any VMs. I restored 3 VMs from .vma backups.
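
For reference, the restores were done roughly like this (the archive name and VMID below are just placeholders):

Code:
# restore a VM from its .vma backup onto the ceph_vm storage
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2018_07_01-12_00_00.vma 100 --storage ceph_vm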
 
Thanks, I will create 2 pools: one for VMs and the other for CTs.
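Roughly along these lines, assuming the existing storage entries are removed first (pool and storage names are only examples):

Code:
# create one Ceph pool per guest type
pveceph createpool ceph_vm_pool
pveceph createpool ceph_ct_pool

# add them as separate storages, krbd only where containers need it
pvesm add rbd ceph_vm --pool ceph_vm_pool --content images --krbd 0
pvesm add rbd ceph_ct --pool ceph_ct_pool --content rootdir --krbd 1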

In the meantime, you should ask the user in the GUI whether the pool is dedicated to CTs or VMs (not to both), and add the storage with or without krbd as needed. In short: one pool per virtualization type.
 
In the meantime, you should ask the user in the GUI whether the pool is dedicated to CTs or VMs (not to both), and add the storage with or without krbd as needed. In short: one pool per virtualization type.
This should not be necessary; the content type of the storage should work as a filter. If you don't need krbd (only KVM), then the storage 'ceph_vm' should suffice.
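
To check what each storage definition actually exposes through its content filter, something like this can be run on a node (storage names taken from the config above):

Code:
# list the volumes visible through each RBD storage definition
pvesm list ceph_vm
pvesm list ceph_ct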
 
Yes, but in that case the 'Add storages' option creates 2 storages, and people may be affected by this issue...
 
