ceph: unable to create OSDs over iscsi

Spiros Pap

Well-Known Member
Aug 1, 2017
Hi all,

I have a dozen MD3800i 100TB storage arrays that I can reach over 10G iSCSI.
I would like to create Ceph OSDs on these 100TB iSCSI disks in order to get the benefit of Ceph's redundancy (replication, etc.) for backup purposes.

When I try to create the OSD, it fails:
# pveceph osd create /dev/mapper/md2d3    <== multipathed iSCSI disk
unable to get device info for '/dev/dm-15'
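
For reference, /dev/mapper/md2d3 is just a symlink to that /dev/dm-15 node, so pveceph apparently resolves the name and then can't get device info for the device-mapper device. To double-check the mapping (same device names as above):

# readlink -f /dev/mapper/md2d3    <== resolves to /dev/dm-15 here
# lsblk /dev/mapper/md2d3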

Any ideas? Do you know how to create the OSD on my setup?

I am on pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85) and ceph: 16.2.9-pve1

The cluster looks like this (one host now):
root@pxd:~# ceph -s
  cluster:
    id:     b5aee603-f7a1-4d20-b205-506b6263b418
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 2

  services:
    mon: 1 daemons, quorum pxd (age 24h)
    mgr: pxd(active, since 24h)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:


Thanx,
Sp
 
Well yes, the forums are full of advice for Ceph/ZFS that OSDs/disks should sit on local storage.
While both Ceph and ZFS are built around the assumption of local disks, for perfectly valid reasons, you can always weigh those reasons against your own environment, take your risks, and build something that suits your purpose. It's called engineering.
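
For instance, one route that should sidestep the pveceph device check (a sketch only, not yet tried on this array; the volume group and LV names are placeholders): wrap the multipath device in LVM manually and pass the logical volume straight to ceph-volume:

# pvcreate /dev/mapper/md2d3
# vgcreate ceph-md2d3 /dev/mapper/md2d3
# lvcreate -l 100%FREE -n osd-md2d3 ceph-md2d3
# ceph-volume lvm create --data ceph-md2d3/osd-md2d3

If that works, the resulting OSD should come up and join the cluster like any other, even though it wasn't created through pveceph.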

PS: The MD3800i arrays are RAID6, but that doesn't give me chassis redundancy or any other kind of flexibility.
 
