[SOLVED] CEPH Cluster

ChrisJM

Mar 12, 2018
Hello,

This is driving me insane.

I have followed the Ceph video and documentation exactly, but it does not work.

When I create the OSD it appears to be created, but it does not show up in the GUI; it just shows "default".

I also created the pool, and it shows 0 space available.

I think some documentation needs to be updated.

I am using version 5.2-1
 
Works here. I think you should post more info about what you did and what is not working.
 
When I add the disk via the OSD dialog, it seems to add, but it does not show up in the OSD section; it just shows "default". It does this on all 3 nodes.

I am currently trying to do it manually.
 
I have just done it manually and it still does not work.

Code:
root@INETC1083:~# pveceph createosd /dev/sdc
create OSD on /dev/sdc (bluestore)
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.

I am also getting this:

Code:
root@INETC1083:~# ceph osd stat
0 osds: 0 up, 0 in
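
A couple of commands can help narrow down where OSD creation stalls. This is a general diagnostic sketch, not from the original posts; the exact output depends on your cluster:

```shell
# Show how ceph-disk sees the drives: partitions, and whether each one
# has been prepared/activated as an OSD.
ceph-disk list

# Show the CRUSH tree. An OSD whose partitions were created but which
# never registered with the cluster will be missing here.
ceph osd tree
```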
 
I have now fixed it. I needed to zap the disk and add the OSD via the command line.

For example:

Code:
ceph-disk zap /dev/sdb
pveceph createosd /dev/sdb

Then it should show up in ceph osd stat.

Example:

Code:
root@INETC1084:~# ceph osd stat
6 osds: 6 up, 6 in
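
For anyone hitting the same thing on several disks, the zap-then-create steps can be wrapped in a small loop. This is only a sketch: the device names /dev/sdb and /dev/sdc are examples, so adjust them for your nodes, and note that zapping destroys all data on the disk:

```shell
# WARNING: ceph-disk zap wipes the partition table of every disk listed.
for disk in /dev/sdb /dev/sdc; do
    ceph-disk zap "$disk"        # clear stale GPT/MBR data
    pveceph createosd "$disk"    # recreate the OSD
done

# Verify the OSDs came up:
ceph osd stat
```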
 
