Question about Ceph and partitioning host disks

100percentjake

New Member
Jan 3, 2017
I have an incredibly horrible, not-at-all-optimal cluster running on some older HP hardware. All three machines have RAID 10 with a hot spare, and all run Proxmox 4.4. Proxmox was installed on each machine with a 10GB limit, leaving the rest of each logical RAID drive unpartitioned. My hope is to carve this unpartitioned space into a 10GB partition to dump logs to, with the remainder formatted for use with Ceph. Each machine would then have 10GB for Proxmox, 10GB for logs (possibly overkill), and the remainder managed by Ceph. To that end, I have two questions:

How can I partition/format the unpartitioned space from within Proxmox? fdisk simply shows the existing partitions and a series of 64MB ram## devices - no /dev/sda or anything else I'm used to when working with Linux.

Will any of what I'm trying to do work? Almost all the docs and tutorials I can find on Ceph assume dedicated drives and identical RAID arrays on each node, which is fairly unrealistic for this cluster, since it's built out of hand-me-down, recycled, kludged-together hardware.


Thanks,
Jake
 
So I finally figured out how to create the partitions I wanted by pointing cfdisk at /dev/cciss/c0d0, but now that I have the partitions I want, no amount of buggery will make them show up in the Ceph GUI like they're supposed to.

Code:
root@px2:~# ceph-disk prepare /dev/cciss/c0d0p5
2017-01-11 15:42:10.473142 7f0e5a3c0780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.485767 7faae825a780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.498413 7fdf3d9ca780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.510746 7f39803b8780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.523361 7fa061ef7780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.535777 7fdabb207780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.564856 7fa0d82bd780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.577267 7f8324411780 -1 did not load config file, using default settings.
2017-01-11 15:42:10.589674 7f00db3d0780 -1 did not load config file, using default settings.
meta-data=/dev/cciss/c0d0p5      isize=2048   agcount=4, agsize=96337230 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=385348917, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=188158, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@px2:~# pveceph createosd /dev/cciss/c0d0p5
unable to get device info for 'cciss/c0d0p5'
root@px2:~#

And in the GUI I get "No Disks unused" when I go to Create: OSD. Super confused and not able to find anything through Google.
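For anyone landing here with the same controller, the partitioning step can also be done non-interactively with parted instead of cfdisk. This is only a sketch: the /dev/cciss/c0d0 device path comes from this thread, and the start/end offsets are illustrative assumptions - adjust them to your actual free space and partition table type.

```shell
# Show the current layout of the HP Smart Array logical drive
parted /dev/cciss/c0d0 print

# Carve the free space into a 10GB log partition and a Ceph partition
# (offsets are illustrative; "primary" assumes an msdos label with slots free)
parted -s /dev/cciss/c0d0 mkpart primary ext4 10GB 20GB
parted -s /dev/cciss/c0d0 mkpart primary xfs 20GB 100%

# Re-read the partition table so the new device nodes appear
partprobe /dev/cciss/c0d0
```

Note that creating the partitions is only the first half of the problem - as the rest of the thread shows, the Proxmox tooling still has to accept them as OSD candidates.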
 
Hi Jake,
AFAIK there was a thread a few days ago about device naming with Ceph and HP RAID (cciss)...
EDIT: Found it - not a thread, but in the wiki: http://pve.proxmox.com/wiki/Ceph_Server

About your Ceph plan... I don't think that's the best scenario...

I don't know if you can use the RAID controller in pass-through mode, but if you can, you could drop your hot-spare HDD and use that disk as a Ceph OSD.
Depending on the RAID controller (some use the cache + BBU for pass-through - in that case you don't need a separate journal partition), you could use a RAID 10 partition as the journal.

With this solution you'd only have 3 OSDs - which isn't really much, and the speed won't be surprisingly good...

Udo
 
My problem is that I can't create any OSDs on my /dev/cciss/c0d0p5 partition. I don't know if it needs to be mounted in a directory for Ceph to use it (I tried that to the best of my ability and it didn't work, since it appears Ceph expects a device), but "pveceph createosd /dev/cciss/c0d0p5" just keeps resulting in "unable to get device info for 'cciss/c0d0p5'" no matter what I try.

I'm not sure at all what you're suggesting with repurposing my hot spare. I suppose I could reconfigure the RAID array to have two spare disks, using one as a hot spare and the other as an independent disk that's not part of the array, and install Proxmox on it - but that would waste a very large amount of space (an entire disk versus my tiny 10GB partition on the RAID array).
 
ceph-deploy (and I assume pveceph too) needs whole disks for Ceph - not a partition.
About the hot spare: you don't need a hot spare if you can replace a failed drive fast enough (you need a cold spare, of course). Without the hot spare you'd have a free disk for an OSD - or not?

Udo
 
I still don't understand what getting rid of the hot spare does for me. Sure, then I'd have a physical drive to put an OSD on, but what's the point of doing that? I've got a whole RAID array I'm trying to use here.

Is there any way to get ceph to play nice with a partition instead of an entire physical array or disk?
 
ceph-deploy will allow partitions, or at least it used to, as we used to deploy that way. pveceph does not.
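For reference, the ceph-deploy invocation for a partition-backed OSD looked roughly like this in that era. A sketch only: the hostname px2 and the device path are taken from this thread, this is the old HOST:DEVICE syntax from the ceph-deploy/ceph-disk days, and later Ceph releases replaced that tooling with ceph-volume.

```shell
# Run from the admin node, against the node that owns the partition.
# Old-style ceph-deploy syntax: HOST:DEVICE
ceph-deploy osd prepare px2:/dev/cciss/c0d0p5
ceph-deploy osd activate px2:/dev/cciss/c0d0p5
```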
 
Hi,
using a RAID array under Ceph isn't recommended - and for good reasons. Search the Ceph mailing list for this topic; there are several threads about it.

About partitions... why not create a new RAID volume on the RAID set (the naming varies quite a bit between RAID vendors), so that Linux sees it as a normal disk?! But again, I'm sure Ceph is not the right choice for such a scenario.

Udo
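Udo's suggestion - carving a second logical drive out of the same physical array so Linux sees a plain block device - can be done on HP Smart Array controllers with hpacucli. A sketch under assumptions: the controller slot number, array letter, and size here are placeholders; check your own controller's configuration first, and note that newer tooling renamed the utility to hpssacli/ssacli.

```shell
# Inspect the controller, arrays, and existing logical drives
hpacucli ctrl all show config

# Create an additional logical drive on the existing array "A"
# (size is in MB; the remaining array space stays available for more volumes)
hpacucli ctrl slot=0 array A create type=ld size=10240
```

Each logical drive shows up to Linux as its own cciss block device, which sidesteps the "pveceph wants a whole disk, not a partition" problem discussed above.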
 
