Ceph Jewel and iSCSI OSDs supported?

Belokan

Hello and happy new year,

I've gotten rid of an iSCSI DRBD9 configuration in order to put Ceph in place.
I've removed the old LUNs, provisioned fresh ones, installed Jewel following the new wiki, and I'm now at the point where I have to create the OSDs.

The problem is that pveceph createosd keeps failing with "unable to get device info for 'sdX'" ...

I've tried a basic DOS format, a GPT format through ceph-disk zap, and a wiped disk through dd, but nothing worked!

root@pve1:/sys/block# ls -l sdb
lrwxrwxrwx 1 root root 0 Jan 9 09:00 sdb -> ../devices/platform/host6/session1/target6:0:0/6:0:0:0/block/sdb

root@pve1:/sys/block# ls -l /dev/sdb
brw-rw---- 1 root disk 8, 16 Jan 9 09:06 /dev/sdb

root@pve1:/sys/block# fdisk -l /dev/sdb

Disk /dev/sdb: 128 GiB, 137438953472 bytes, 268435456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x8a2ba188

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 268435455 268433408 128G 83 Linux

root@pve1:/sys/block# pveceph createosd /dev/sdb
unable to get device info for 'sdb'

root@pve1:/sys/block# ceph-disk zap /dev/sdb

***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory.
***************************************************************


Warning! Secondary partition table overlaps the last partition by
33 blocks!
You will need to delete this partition or resize it in another utility.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.

root@pve1:/sys/block# fdisk -l /dev/sdb

Disk /dev/sdb: 128 GiB, 137438953472 bytes, 268435456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: gpt
Disk identifier: 331CB45D-8A88-43BB-97C2-45249C228408

root@pve1:/sys/block# pveceph createosd /dev/sdb
unable to get device info for 'sdb'

root@pve1:/sys/block# dd if=/dev/zero of=/dev/sdb bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00132699 s, 772 kB/s

root@pve1:/sys/block# pveceph createosd /dev/sdb
unable to get device info for 'sdb'
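
In case it helps to see what PVE's device detection is working with, here is how the LUN can be inspected from the kernel/udev side (just a sketch; I don't know exactly which check pveceph performs internally):

# how the kernel classifies the device and its transport
lsblk -o NAME,SIZE,TYPE,TRAN /dev/sdb
# udev properties reported for the device
udevadm info --query=property --name=/dev/sdb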


Any idea how to create OSDs over iSCSI LUNs?

Thanks in advance !

Olivier
 
BTW, I'm up to date:

proxmox-ve: 4.4-77 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.2.6-1-pve: 4.2.6-26
pve-kernel-4.4.8-1-pve: 4.4.8-52
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-85
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-71
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.1-1
pve-container: 1.0-90
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-5
lxcfs: 2.0.5-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
drbdmanage: not correctly installed
ceph: 10.2.5-1~bpo80+1
 
Hi, no, this is not supported via the PVE tools. You can of course do it manually (see the Ceph documentation).
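
If you go the manual route: under the hood pveceph uses ceph-disk, so on Jewel the manual steps look roughly like this (just a sketch, assuming /dev/sdb is the target disk and the journal is colocated on it; adjust to your layout):

# wipe the disk and create the OSD data + journal partitions
ceph-disk zap /dev/sdb
ceph-disk prepare --cluster ceph /dev/sdb
# activate the data partition that prepare created (usually the first one)
ceph-disk activate /dev/sdb1

Afterwards the new OSD should show up in "ceph osd tree", and the pool creation steps from the wiki should work as usual.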

But another point: why do you want the Ceph OSDs on iSCSI?
I don't think this makes any sense, since Ceph is already a distributed storage and expects its disks to be local.

Your setup would mean every disk read/write has to be sent over the network twice.
 
Hello Dominik,

Thanks for your answer !
I'll check the Ceph doc. Do I just need to create/initialize the OSDs outside of PVE and then return to the wiki starting from the "Ceph Pools" section?

Now of course your question makes sense, but my PVE HA environment is very humble: 2 Synology NAS units, 2 NUCs and a VM hosted on one of the NAS units (in order to have a valid 3-node quorum).
This environment is used at home to serve small services like DNS/DHCP/VPN, "download tools", a gateway switching between DSL/4G, etc., so it is very lightly I/O bound. I have a dedicated 1GB link between my NASes and the PVE nodes and it's more than enough. It worked so far with DRBD9, but I wanted to get snapshots back.

The thing is that I need redundant storage, because if those services go down (loss of a PVE node or a NAS) my whole home network is down.

Unfortunately, even though both NAS units have "HA capabilities", Synology decided that it only works between NAS units of the same generation... I have a 412+ and a 415+, so it does not.

So I chose to provide iSCSI from each NAS to each physical PVE node and then add a cluster filesystem solution on top of that.

Olivier
 
I'll check the Ceph doc. Do I just need to create/initialize the OSDs outside of PVE and then return to the wiki starting from the "Ceph Pools" section?
I guess it would work like this, but I think removing the OSDs will also not work via the PVE tools.
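
If you ever need to take such an OSD out again, the usual manual removal sequence from the Ceph docs is roughly this (a sketch, assuming the OSD id is 0; replace it with the real id):

# take the OSD out of the cluster and stop its daemon
ceph osd out 0
systemctl stop ceph-osd@0
# remove it from the CRUSH map, delete its auth key, and remove it from the OSD map
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0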

Your use case is really narrow, but I currently cannot think of a better solution for your setup...
 
You could also install Ceph directly on the 415+ (even Proxmox should work). I did not do this with a Synology, but with other Intel-based NASes it worked fine.

Jonas
 
But another point: why do you want the Ceph OSDs on iSCSI?
I don't think this makes any sense, since Ceph is already a distributed storage and expects its disks to be local.
In our case it is a way to mitigate the heterogeneity of our cluster. There are 2 "new" servers with Fibre Channel and one storage array, and most of the other servers have no Fibre Channel support.

Your setup would mean every disk read/write has to be sent over the network twice.
In our case we use two different, dedicated networks.

IMO, it would be helpful if pveceph could manage OSD creation on Fibre Channel (and iSCSI) LUNs.
Thanks,
 
Hi, no, this is not supported via the PVE tools. You can of course do it manually (see the Ceph documentation).
Ceph's documentation refers to ceph-deploy, but that is not installed with Proxmox's Ceph packages.

I tried to work out the necessary commands by reading the source code related to pveceph createosd, but I cannot determine which command is triggered after ceph-disk prepare [... args].
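
If I understand the ceph-disk workflow correctly, nothing has to be called explicitly after prepare: once the new partitions appear, udev triggers the activation (the same thing "ceph-disk activate" does by hand). A quick way to check where a disk stands (a sketch; partition numbers can differ):

# show how ceph-disk sees the disks and their prepared/active state
ceph-disk list
# once activated, the new OSD should appear here
ceph osd tree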
 
