Why PVECEPH only on Block Devices?

RaZZe

Renowned Member
Dec 12, 2013
Hi,

why does pveceph only accept full block devices and not just a partition? Ceph itself supports this, as I guess you know.

I would like to know the reason for this decision. And is there a workaround for it?

The problem here is:
We have a machine with 6x 500 GB SAS drives.
4 of those HDDs are OSDs and 2 are in a software RAID for the Proxmox OS. But at the end of these 2 devices there is a lot of unused space...

Device     Boot      Start        End    Sectors    Size  Id  Type
/dev/sda1             2048  262146047  262144000    125G  83  Linux
/dev/sda2        262146048  295700479   33554432     16G  83  Linux
/dev/sda3        295700480  975699967  679999488  324.3G  83  Linux

Is there a workaround? It can be dirty, that's no problem.

Greetings
RaZZe
 
The reason is most likely that it is possible with Ceph, but NOT RECOMMENDED. The recommendation is to
  1. use a full HDD + a journal partition on a stable SSD (see the sketch below), or
  2. use a full HDD + the journal on the same HDD if you cannot do 1).
As such, a decision was probably made not to sink more time into this fringe case when pveceph was written.
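
For reference, the recommended layouts would look roughly like this with pveceph itself. The device names are placeholders, and the journal option name can differ between pveceph versions, so check "pveceph help createosd" first:

Code:
# Option 1: whole HDD as the OSD, journal on a stable SSD
# (pveceph / ceph-disk creates a journal partition on the SSD for you).
pveceph createosd /dev/sdc -journal_dev /dev/sdb

# Option 2: whole HDD, journal co-located on the same disk.
pveceph createosd /dev/sdd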

If you wonder why it is a bad idea, head over to the Ceph documentation or one of the various blogs out there covering Ceph.



To answer your question "how do I do it with Proxmox regardless of the warnings" (and it's not messy at all, unless you count the whole proposal in the first place ...):

Code:
- You first initialize your Ceph node / cluster on Proxmox via the pveceph commands.
- When it is time to add OSDs, you forget about pveceph and instead use the normal "ceph-disk" and "ceph" commands, like you would on a regular Ceph node (see the example below).

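Concretely, for the leftover /dev/sda3 partition, step 2 could look something like this (ceph-disk era syntax, just a sketch, adjust to your setup):

Code:
# ceph-disk accepts an existing partition as the data argument; it formats
# the partition but does not repartition the rest of the disk.
ceph-disk prepare --fs-type xfs /dev/sda3
ceph-disk activate /dev/sda3

One caveat: ceph-disk's automatic activation at boot relies on GPT partition type UUIDs, and your sda uses an MBR partition table, so you may have to re-activate the OSD manually (or via a small unit/script) after a reboot.
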
We use the same methodology on our work clusters (82 nodes running Proxmox/Ceph in 3 separate clusters currently): pveceph for initialisation, then run and operate everything else via the standard Ceph commands (i.e. custom CRUSH rules, CRUSH hooks for custom CRUSH maps, custom EC-pools/R-pools, separate SSD/HDD OSDs, ceph-gui, etc ...).
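
To give an idea of what "everything else via standard ceph commands" looks like in practice, here is a sketch of creating a replicated pool on a custom CRUSH rule. All names are placeholders, and the "ssd" CRUSH root is assumed to already exist in your CRUSH map:

Code:
# Create a simple replicated CRUSH rule rooted at "ssd", failure domain "host".
ceph osd crush rule create-simple ssd-replicated ssd host
# Create a pool that uses it and set its replica count.
ceph osd pool create rbd-ssd 128 128 replicated ssd-replicated
ceph osd pool set rbd-ssd size 3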