Ceph OSD class and RBD total size problem

parker0909

Well-Known Member
Aug 5, 2019
Hello,

I have run into a strange problem with the OSDs in my Ceph cluster. The OSD class column shows a mix of hdd and ssd, but I have confirmed that all drives are SSDs. I am not sure why they are detected differently across the nodes (see the attached screenshot osd.png).
I am also facing another strange problem: yesterday I removed some VMs. As expected, the disk usage of the RBD volume dropped, but the total size dropped at the same time. I do not understand why the total size went down (see the attached screenshot size.png).

Thank you to anyone who can provide some suggestions. Sorry for my poor English.

Parker
 

Attachments

  • osd.png (60.5 KB)
  • size.png (17.3 KB)
Hi,
this thread has a command to change the device class at the very end; please also note Alwin's remark about RAID. As for the strange size behavior: PVE uses librados to request the used and max available size. The total is then just the sum of these two values, and depending on your allocations the total will change. But it is still strange that it drops as much as the image. Do you also have other pools, or is this the only one?
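To make the "total" behavior above concrete, here is a minimal sketch of the arithmetic PVE performs, using made-up byte counts (the real values come from librados per pool): the displayed total is simply used plus max available, so whenever Ceph recalculates how much it can still allocate, the total moves too.

```shell
# Hypothetical pool numbers, only to illustrate how the total is derived.
used_bytes=2199023255552        # 2 TiB currently stored in the pool
max_avail_bytes=8796093022208   # 8 TiB Ceph reports as still allocatable

# PVE's "total" is just the sum of the two values it queried.
total=$((used_bytes + max_avail_bytes))
echo "$total"                   # 10 TiB in bytes
```

If max_avail shrinks for any reason (reweighting, nearfull OSDs, changed allocations), the total shrinks with it even though no capacity was physically removed.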
 
Thank you.


May I know whether the command below could cause any OSD disconnection?
ceph osd crush set-device-class

About the RBD pool size: I have only one RBD pool across 45 OSDs. I have no idea why the total size drops so much.

Thank you.
Parker
 
Hi,

I am afraid something is wrong with Ceph, because the max size dropped so much after I removed some VMs. Is there any way to confirm that the Ceph and PVE status is normal? Thank you.

Parker
 
It turns out you have to first remove the old device-class with:
Code:
ceph osd crush rm-device-class osd.40 osd.41 osd.42 osd.43 osd.44
and then switch over to ssd
Code:
ceph osd crush set-device-class ssd osd.40 osd.41 osd.42 osd.43 osd.44

This can be done without restarting the OSDs and afterwards the OSDs will be treated as their new device class by the CRUSH algorithm.
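The two steps above can be sketched as a small shell loop that only prints the commands, so they can be reviewed before running them against the cluster (OSD ids 40-44 are taken from the example; substitute your own):

```shell
# Build the OSD list once; hypothetical ids from the example above.
osds=""
for id in 40 41 42 43 44; do
  osds="$osds osd.$id"
done

# Print the rm-device-class / set-device-class commands for review.
# Remove the quotes and run them directly once you are happy with them.
cmd_rm="ceph osd crush rm-device-class$osds"
cmd_set="ceph osd crush set-device-class ssd$osds"
echo "$cmd_rm"
echo "$cmd_set"
```

Printing first is just a precaution; as noted above, applying the class change itself does not require restarting the OSDs.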

Regarding the strange size change, you can use
Code:
ceph df
ceph osd df
to see the usage of your storages and pools, and of the individual OSDs, respectively.
 