Ceph OSD creation and SAS multipath

Whatever

Nov 19, 2012
Does Proxmox (GUI and pveceph) take into consideration, when creating an OSD, that a SAS disk could have multipath enabled/configured (with dm-multipath) - as it likely does in a correct server configuration?
 
Hm - I have never actively tried it - usually Ceph redundancy is achieved by adding more nodes to the Ceph cluster, each containing fewer disks/OSDs (that way the network load is also distributed better across the nodes).

A quick search indicates that running Ceph OSDs over multipath has been supported for quite a while:
* https://tracker.ceph.com/issues/11881
* https://www.suse.com/support/kb/doc/?id=7023110

Where would you have a multipath setup? (I would recommend against using iSCSI exports as OSDs)
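For reference, a minimal dm-multipath configuration for dual-ported SAS disks could look something like the fragment below. This is only a sketch: the WWID is a placeholder, the alias `disk1` is a hypothetical name, and sensible defaults vary by distribution and multipath-tools version.

```
# /etc/multipath.conf - minimal sketch for dual-ported SAS disks
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        wwid  3600508b4000156d700012000000b0000   # placeholder WWID
        alias disk1                               # hypothetical alias
    }
}
```

After editing, `multipath -ll` should list one map per disk with two active paths (one per HBA).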

Hope this helps!
 
Actually, in my environment (a 4-node cluster) each node has only SAS drives, with a dual-expander backplane connected to 2 different HBAs.
From my perspective, SAS multipath improves not only availability but overall performance as well.

In conclusion (correct me if I'm wrong), the correct way to add an OSD is:
1. Create the SAS multipath device (with device-mapper multipath)
2. Use pveceph createosd /dev/mapper/disk... -journal_dev /dev/sd[Y]
3. Do not create the OSD via the GUI (until this is supported in the GUI)
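The steps above can be sketched as a short shell sequence. This is hedged, not a tested procedure: `disk1` is the multipath alias used in this thread, and the journal device `/dev/sdy` is a placeholder - adjust both to your environment. The administrative commands are shown as comments since they must run as root on an actual Proxmox/Ceph node.

```shell
#!/bin/sh
# Sketch of the three steps above (assumed device names, adjust as needed).

OSD_DEV=/dev/mapper/disk1   # the dm-multipath mapped device (hypothetical alias)
JOURNAL_DEV=/dev/sdy        # assumed journal disk

# 1. Verify the multipath map exists before touching it:
#      multipath -ll "$OSD_DEV"
# 2. Try creating the OSD via pveceph (CLI, not GUI):
#      pveceph createosd "$OSD_DEV" -journal_dev "$JOURNAL_DEV"
# 3. If pveceph rejects the mapper name, fall back to ceph-disk:
#      ceph-disk prepare "$OSD_DEV" "$JOURNAL_DEV"
```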
 
Depending on the naming of the mapped device, you may need to use ceph-disk directly.
What I meant by that is: pveceph may not recognize 'disk1' as a valid device, in which case you need to use ceph-disk directly.