[SOLVED] Ceph - Cannot configure journal

D0peX

May 5, 2017
Hi,

I've been messing with Ceph; I had it running previously on an all-HDD cluster (10k 149GB SAS2 drives). It was fairly performant, but the write speeds weren't there. Now I have bought SSDs for each of the three hosts. I have installed Proxmox on the SSD, set up networking, clustered, and installed Ceph, but now I am facing a problem.

I cannot use /dev/sda (the SSD) as a journal (which is an option in the GUI); I'm getting the error below. The disks are all RAID 0 (HP P410i controller, no passthrough, sadly). I had no issues with Ceph on 4.4 on the all-HDD cluster.

Code:
root@hv1:~# pveceph createosd /dev/sdb -journal_dev /dev/sda

The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
command '/sbin/zpool list -HPLv' failed: exit code 1

create OSD on /dev/sdb (xfs)
using device '/dev/sda' for journal
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
Setting name!
partNum is 3
REALLY setting name!
Could not create partition 4 from 34 to 10485793
Unable to set partition 4's name to 'ceph journal'!
Could not change partition 4's type code to 45b0969e-9b03-4f30-b4c6-b4b80ceff106!
Error encountered; not saving changes.
'/sbin/sgdisk --new=4:0:+5120M --change-name=4:ceph journal --partition-guid=4:8827ec92-5f5d-4ff2-852e-4bcf7c617fad --typecode=4:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sda' failed with status code 4
command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 305873a4-82a5-4e78-b4fd-bac929897526 --journal-dev /dev/sdb /dev/sda' failed: exit code 1

pveversion:
pve-manager/5.0-10/0d270679 (running kernel: 4.10.11-1-pve)
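
Side note, in case it helps with diagnosing: the failing call is `sgdisk --new=4:0:+5120M ... /dev/sda`, i.e. it asks for a 5120 MiB partition starting at the first free sector, and the "34 to 10485793" in the error suggests the start of /dev/sda is not actually free. A quick sketch for checking that (device name as in my setup):

Code:
# print the existing GPT partition table on the intended journal disk
sgdisk --print /dev/sda
# list the free regions in sectors as a cross-check
parted /dev/sda unit s print free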

Is there a way to create the journal on the SSD?

Thanks in advance!
 
What does "lsblk /dev/sd*" say?
 
Is the journal disk already initialized with GPT?
It seems it tries to create a partition (number 4) from 0 to 5 GB, which fails.
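
(The 5 GB comes from Ceph's `osd journal size`, which defaults to 5120 MB; a different journal size would have to be set before the OSDs are created. A minimal excerpt, assuming the stock filestore setup:)

Code:
# /etc/pve/ceph.conf (excerpt); journal partition size in MB, 5120 is the default
[osd]
osd journal size = 5120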
 
Is the journal disk already initialized with GPT?
It seems it tries to create a partition (number 4) from 0 to 5 GB, which fails.
Well, that obviously is not possible, since Proxmox itself is installed there.
 
OK, I've switched things up now: I'm using a 140GB 10k spinner for the Proxmox install. I can use the SSD as journal now, but only for one OSD, sadly.
For the other OSDs it also wants to create a short partition at the beginning of the disk :/
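
What eventually helped (see further down in the thread) was clearing all leftover partition structures from the journal SSD before creating any OSDs. A sketch, assuming the SSD is /dev/sda and holds nothing you need:

Code:
# WARNING: wipes the GPT and MBR structures (and effectively all data) on /dev/sda
sgdisk --zap-all /dev/sda
# write a fresh, empty GPT (the GUI's "Initialize Disk with GPT" should do the same)
sgdisk --clear /dev/sda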
 
We are using 1TB HDDs with an SSD journal. We can create 10 OSDs with a single SSD. We use the GUI only.
Code:
/dev/sdc :
 /dev/sdc1 ceph data, active, cluster ceph, osd.15, journal /dev/sdb1
/dev/sdd :
 /dev/sdd1 ceph data, active, cluster ceph, osd.16, journal /dev/sdb2
/dev/sde :
 /dev/sde1 ceph data, active, cluster ceph, osd.17, journal /dev/sdb3
/dev/sdf :
 /dev/sdf1 ceph data, active, cluster ceph, osd.18, journal /dev/sdb4
/dev/sdg :
 /dev/sdg1 ceph data, active, cluster ceph, osd.19, journal /dev/sdb5
It auto-creates the partitions on the SSD journal. The SSD journal size is 120GB.
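
The CLI equivalent of what we do in the GUI would be one createosd call per data disk, all pointing at the same journal device; ceph-disk then allocates the next free journal partition automatically. A sketch with the device names from above:

Code:
pveceph createosd /dev/sdc -journal_dev /dev/sdb
pveceph createosd /dev/sdd -journal_dev /dev/sdb
pveceph createosd /dev/sde -journal_dev /dev/sdb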
 
@John Wick That did not work at all for me; even partitioning the disks with fdisk did not work. The OSDs did not come online: they showed in the CRUSH map, but not in the GUI, and the cluster was complaining about backfilling and degraded PGs. Not sure what there was to backfill, since it is/was a clean install.
 
@John Wick Okay, I finally managed to do it, GUI only. I ran "sgdisk -Z /dev/sdX" on every disk, initialized GPT on them via the GUI, then added the OSDs via the GUI as well, and that worked. I had to reinstall every host, though; that was probably not needed, but I just wanted to be thorough :D
Thank you
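
To recap the fix as commands (a sketch; sdX stays generic, one run per Ceph disk):

Code:
# WARNING: destroys the partition table and all data on the disk
sgdisk -Z /dev/sdX
# then per node in the GUI: Disks -> "Initialize Disk with GPT",
# and Ceph -> OSD -> Create OSD with the SSD as journal disk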
 
