Ceph Pool Shows Full Even After Deleting All the VMs

techadmin

New Member
Hello,

I have created a 3-node cluster with High Availability using Ceph OSDs, with the same hardware configuration on all nodes.
The configuration of each node is as follows:
1 x 120 GB SSD for OS Installation
4 x 750 GB HDD for Ceph OSD
64 GB RAM
24 core CPU

I have configured 2 OSDs per node:
1 disk for the OSD and another one for the journal

We wanted to test the maximum number of VMs we can create. We created a few VMs and tested HA, and it went well.

But the Ceph pool shows full after we created 10 VMs with disk sizes of 100 GB and 50 GB. VMs were getting locked when we tried to create more, so we deleted them, and now the cluster has no VMs in it. But the Ceph pool still shows full.

Any ideas?
 
1 disk for the OSD and another one for the journal
With BlueStore there is no journal anymore. Instead, the DB/WAL is placed on a separate device. It should be a faster device, since the DB/WAL sees a lot of small reads/writes. If you used one of the 4x 750 GB HDDs as the DB/WAL device, you might want to reconsider and use it as an additional OSD to get more space.
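
As a rough sketch, on a recent Proxmox VE release you can create an OSD with the DB/WAL on a separate device like this (the device names /dev/sdb and /dev/sdc are just placeholders for your actual disks):

# OSD on /dev/sdb, with the DB/WAL placed on a separate (ideally faster) device
pveceph osd create /dev/sdb --db_dev /dev/sdc

# Plain OSD that keeps the DB/WAL on the same disk
pveceph osd create /dev/sdb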

But the Ceph pool shows full after we created 10 VMs with disk sizes of 100 GB and 50 GB. VMs were getting locked when we tried to create more, so we deleted them, and now the cluster has no VMs in it. But the Ceph pool still shows full.
Please check with rbd -p <pool> ls -l whether all the images have been removed. Depending on the IO load, the removal process might take some time or run into a timeout.
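
For example, something along these lines (the pool and image names are placeholders; vm-101-disk-0 is just a typical Proxmox naming scheme):

# List all RBD images in the pool with size and lock information
rbd -p <pool> ls -l

# Remove a leftover image manually
rbd rm <pool>/vm-101-disk-0

# Verify the pool usage afterwards
ceph df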
 
Hello,

Thanks for your reply. I tried the method and removed the images manually, and now the size looks fine. But I heard that keeping a separate disk for the DB/WAL will help to recover data in case of failure, instead of keeping the DB/WAL on the same disk as the OSD.

For example:

1x 750 GB HDD for both the OSD and the DB. In such a case, will it cause any issue in recovering the VMs if one hard disk fails?

Can you suggest a good solution?
 
The redundancy comes from the 3x replicas that are stored on different hosts in the cluster. If a disk dies, it doesn't matter whether the DB/WAL or the data disk is dead. You just replace the disk and re-create the OSD.
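
Replacing a failed OSD looks roughly like this (osd.2 and /dev/sdb are placeholders; Ceph recovers the data from the replicas on the other hosts):

# Mark the failed OSD out and stop its service (on the node holding it)
ceph osd out osd.2
systemctl stop ceph-osd@2

# Destroy the OSD and clean up the disk
pveceph osd destroy 2 --cleanup

# Re-create the OSD on the replacement disk
pveceph osd create /dev/sdb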
 
