Default Ceph storage creation feature makes unused disks appear for each VM

jeanlau

Renowned Member
May 2, 2014
Hi everyone !

I have already answered a few threads, but this is the first time I create one of my own, so sorry if I'm not in the right section. I searched the forum a lot and didn't find a similar subject; again, sorry if there is one I missed...

I set up a PVE Ceph cluster, which by the way performs very well; there are a lot of great features. I have used Proxmox for years and always liked it, but now it's on a new level: you can say it's as good as very expensive commercial products, unless you take a high level of subscription, in which case Proxmox stays great but isn't so cheap anymore... but that's another story... Anyway, I love its ton of functionality, its stability, its maturity, and so on...

Well, I'll stop telling my life story... the question is:

As I said, everything is alright and works very well... (for now, lol?)
I added the Ceph pools and directly created the corresponding storage definitions thanks to the new functionality that allows creating them directly from the pool creation section.
The system creates storages called "MyPool_VM" and "MyPool_CT", and you can see the names of the objects (disks) directly in both sections.
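For context, the two storage definitions the wizard creates look roughly like this in /etc/pve/storage.cfg (a sketch with my placeholder names, not my exact file; both entries point at the same pool):

```
rbd: MyPool_VM
        pool MyPool
        content images
        krbd 0

rbd: MyPool_CT
        pool MyPool
        content rootdir
        krbd 1
```

So every RBD image in the pool is visible through both storage IDs, which is why the same disk can show up twice.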

For the moment I only use KVM VMs, but I will soon begin to use LXC too; I just don't feel secure enough with it for now, but that's again another story...
My issue is that some of my VMs can see that there are disks with their ID name in both Ceph storages, and the CT version of my virtual disk(s) shows up in the VM config as an unused disk at the same time as it appears correctly attached to the VM through the "MyPool_VM" storage... (weirdly, not all VMs are affected; some don't see it...)

Moreover, at one point I had to issue a "qm rescan", and the described situation spread to almost all the VMs (still very weird that not all VMs are affected; it seems random to my eyes, but there should be an explanation...)

Anyway, I don't know if I was clear in the description of my problem; I can post config files and/or screenshots if needed.

Thank you in advance for any answer,
Best regards,
 
The system creates storages called "MyPool_VM" and "MyPool_CT", and you can see the names of the objects (disks) directly in both sections.
This is just a different storage configuration; they use the same pool underneath.

I don't quite understand your description, but let's start... What is your 'pveversion -v' and /etc/pve/storage.cfg? What does the config of an affected VM look like (qm config <vmid>)? What does 'rbd -p <pool> ls' show?
 
This is actually a bug, thanks for finding it :)

When rescanning (with qm rescan) we iterate through the list of disks and try to avoid adding an aliased disk as unused multiple times, but we missed the case where the disk was already present as an unused disk.
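In rough pseudo-Python (the real code is Perl in qemu-server, and all names here are illustrative, not the actual functions), the corrected rescan logic would look something like this: any image already referenced by the config, whether attached or unused, must be skipped when it turns up again under the second storage alias.

```python
# Hypothetical sketch of the alias-aware rescan described above.
# Two storages ("MyPool_VM", "MyPool_CT") expose the same RBD pool,
# so each disk image is listed once per storage (an alias).

def rescan(vm_config, storage_listings):
    """Add orphaned disks as 'unusedN' entries, skipping aliases.

    vm_config: dict mapping config keys (e.g. 'scsi0', 'unused0')
               to volume IDs like 'MyPool_VM:vm-100-disk-0'.
    storage_listings: dict mapping storage name -> list of image names.
    """
    # Collect image names already referenced, attached OR unused.
    # The bug: only attached disks were considered, so an image
    # already listed as unused via one storage alias was added
    # again when seen via the other alias.
    referenced = {volid.split(':', 1)[1] for volid in vm_config.values()}

    next_unused = 0
    while f'unused{next_unused}' in vm_config:
        next_unused += 1

    for storage, images in storage_listings.items():
        for image in images:
            if image in referenced:
                continue  # alias of a disk we already know about
            vm_config[f'unused{next_unused}'] = f'{storage}:{image}'
            referenced.add(image)  # so the alias in the other storage is skipped
            next_unused += 1
    return vm_config
```

With an attached vm-100-disk-0 and an orphaned vm-100-disk-1 visible through both storages, only one unusedN entry is created for the orphan and none for the attached disk.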
 
I was thinking it was a bug because, I suppose, the function searches for all objects that match the VMID, and as the object (disk-xxxx-x) is systematically present twice if we use the same pool for VMs & CTs, it is normal that it appears as unused...
And indeed, the detection algorithm should exclude the disk if it is already used in the VM... so it's logical that it's a bug...

It's not too serious, since it doesn't prevent anything from working correctly, but it's important to be aware of the situation because it could lead to potential data loss/corruption (in case a distracted user tries to delete a disk while it's in use...)

As it's a bug, I have some questions:

1. Do you need me to provide the info Alwin asked for anyway? I think it's not necessary since you found the bug.
2. There's no rush, just curiosity, but how long does it usually take for this kind of bug to get corrected?
3. Also out of curiosity, what are the distinct steps?

thank you very much for your answers :)
 
OK, thank you very much for your efficiency and how fast you are!

May I ask one last question? If I want to patch my system with your code, could you let me know how to do it? If you don't have time to explain, I will look into it by myself, but at least can you tell me whether you recommend not doing so and just waiting? (something I can live with)

Thank you again
 
