Connection problem after installing Ceph via the Proxmox VE portal

parker0909

Well-Known Member
Aug 5, 2019
Hi All,

My name is Parker, and I am facing a problem related to Ceph. I tried to install Ceph via the Proxmox VE portal, but I got a disconnect message during the installation and was then unable to do the configuration. After refreshing the page, it seems the node's Ceph cannot connect to the existing Ceph cluster. May I know whether anyone has an idea how to fix this Ceph problem?

I have attached screenshots: node 2 is the Ceph init node, and node 3 is the node with the Ceph problem.

Thank you.
 

Attachments

  • Node2_Ceph.png
  • Node2_configuration.png
  • Node3_Ceph.png
  • Node3_Configuration.png
I have attached a screenshot of the "got timeout" message I received during the Ceph installation process.

Parker
 

Attachments

  • got timeout.png
Hi All,

The good news is that Ceph on all nodes should be normal now, but we are facing another question when I try to create an OSD with a DB disk. I get the message below.


'/dev/sde' is smaller than requested size '800156322201' bytes


May I know how the requested size for the DB disk is calculated? Thank you.

Parker
 
The DB partition size is 10% of the data disk when a separate DB/WAL device is selected.
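For reference, here is a rough calculation of where the number in the error comes from, assuming the 8 TB data disk reports the typical raw capacity of 8,001,563,222,016 bytes:

10% of 8,001,563,222,016 bytes = 800,156,322,201 bytes (the "requested size" in the error)
A nominal 800 GB SSD holds only 800,000,000,000 bytes

So the SSD falls just short of the 10% target, which is why '/dev/sde' is rejected.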
 
The DB partition size is 10% of the data disk when a separate DB/WAL device is selected.
Thank you. May I know whether one DB disk can be shared between different OSDs? For example, we have three OSDs (8 TB SATA), so we would share one 800 GB SSD as the DB disk. Is that a good idea for the DB disk setup?

Parker
 
Thank you. May I know whether one DB disk can be shared between different OSDs? For example, we have three OSDs (8 TB SATA), so we would share one 800 GB SSD as the DB disk. Is that a good idea for the DB disk setup?
Yes.
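For illustration, sharing one SSD as the DB device across several OSDs would look like the following, assuming the pveceph CLI and illustrative device names (/dev/sdb to /dev/sdd as data disks, /dev/sde as the shared SSD). Note that without a size override, each OSD will try to claim 10% of its 8 TB data disk on the SSD:

pveceph osd create /dev/sdb --db_dev /dev/sde
pveceph osd create /dev/sdc --db_dev /dev/sde
pveceph osd create /dev/sdd --db_dev /dev/sde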
 
I ran into a problem: it seems the OSDs cannot share the same SSD DB disk, because there is not enough free space. Does that mean I need to provide three 800 GB SSDs as DB disks if we have three 8 TB OSDs? Is that correct? Thank you.

3 OSDs
3 x 8 TB SATA disks
3 x 800 GB SSDs for the DB

The error when I try to create another 8 TB OSD:
vg 'ceph-527c49e7-0546-403c-8606-8b88cff8c0e5' has not enough free space

Parker
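
One way to check how much free space remains on the shared DB SSD is to query the LVM volume group named in the error message, for example:

vgs ceph-527c49e7-0546-403c-8606-8b88cff8c0e5

The VFree column shows the space still available for additional DB LVs.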
 
I ran into a problem: it seems the OSDs cannot share the same SSD DB disk, because there is not enough free space. Does that mean I need to provide three 800 GB SSDs as DB disks if we have three 8 TB OSDs? Is that correct? Thank you.
As said, the DB/WAL partition/LV will be 10% of the data disk's size. So a 4 TB data disk will result in a ~400 GB DB/WAL partition/LV. If the partitions get too big, then you will need to set the size in ceph.conf.
https://forum.proxmox.com/threads/where-can-i-tune-journal-size-of-ceph-bluestore.44000/#post-212147
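
A minimal sketch of such an override in /etc/pve/ceph.conf, assuming the standard bluestore_block_db_size option; the 240 GiB value is illustrative, chosen so that three DB LVs fit on one 800 GB SSD:

[osd]
# illustrative: cap each DB/WAL LV at 240 GiB (257698037760 bytes)
bluestore_block_db_size = 257698037760

With that in place, 3 x 240 GiB = 720 GiB, which fits within the ~745 GiB usable on a nominal 800 GB SSD.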
 
