Ceph OSD with journal on LVM

mr.x

Hi all,

right now I am following this manual here.
My setup looks as follows: three Dell R530, each equipped with 2x 750GB SSDs and 6x 8TB HDDs.
The OS was already installed on both SSDs in RAID1 mode. The cluster is running perfectly. I would like to use the 6 HDDs for the OSDs.
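
For context, the journal LV referenced below was created beforehand, presumably with something like this (a sketch only; the "pve" volume group name and the 5 GB size are assumptions, based on the LV name in the commands below and the 5120M journal partition ceph-disk tries to create):
Code:
# sketch: carve a dedicated journal LV out of the SSD-backed "pve" VG
# (VG name and size are assumptions, not taken from the original post)
lvcreate -L 5G -n osd_journal_1 pve
ls -l /dev/mapper/pve-osd_journal_1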

According to manual:
Code:
Example: /dev/sdf as data disk (4TB) and /dev/sdb is the dedicated SSD journal disk

# pveceph createosd /dev/sdf -journal_dev /dev/sdb

Unfortunately, it is failing ...
Code:
# pveceph createosd /dev/sdc -journal_dev /dev/mapper/pve-osd_journal_1
command '/sbin/zpool list -HPLv' failed: open3: exec of /sbin/zpool list -HPLv failed at /usr/share/perl5/PVE/Tools.pm line 411.

create OSD on /dev/sdc (xfs)
using device '/dev/dm-5' for journal
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
Could not create partition 2 from 34 to 10485793
Unable to set partition 2's name to 'ceph journal'!
Setting name!
partNum is 1
REALLY setting name!
Could not change partition 2's type code to 45b0969e-9b03-4f30-b4c6-b4b80ceff106!
Error encountered; not saving changes.
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5047, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5000, in main
    main_catch(args.func, args)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5025, in main_catch
    func(args)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1812, in main
    Prepare.factory(args).prepare()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1801, in prepare
    self.prepare_locked()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1832, in prepare_locked
    self.data.prepare(self.journal)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2494, in prepare
    self.prepare_device(*to_prepare_list)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2670, in prepare_device
    to_prepare.prepare()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2003, in prepare
    self.prepare_device()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2095, in prepare_device
    num=num)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1554, in create_partition
    self.path,
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 446, in command_check_call
    return subprocess.check_call(arguments)
  File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/sbin/sgdisk', '--new=2:0:+5120M', '--change-name=2:ceph journal', '--partition-guid=2:95f4c721-0149-42fb-b9c7-17fee8618bfe', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/dm-5']' returned non-zero exit status 4
command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 217d7725-6ad8-4958-9ed6-94a39cd62482 --journal-dev /dev/sdc /dev/dm-5' failed: exit code 1

Can anybody help or does anyone have an idea?

Many thanks !
 
Hi,

as I learned, a zap doesn't help either... :-(

Code:
ceph-disk zap /dev/sdc
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.


root@PROX01:~# ceph-disk zap /dev/mapper/pve-osd_journal_1
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.


root@PROX01:~# pveceph createosd /dev/sdc -journal_dev /dev/mapper/pve-osd_journal_1
command '/sbin/zpool list -HPLv' failed: open3: exec of /sbin/zpool list -HPLv failed at /usr/share/perl5/PVE/Tools.pm line 411.

create OSD on /dev/sdc (xfs)
using device '/dev/dm-5' for journal
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
Setting name!
partNum is 0
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
ceph-disk: Error: partition 1 for /dev/dm-5 does not appear to exist
command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 217d7725-6ad8-4958-9ed6-94a39cd62482 --journal-dev /dev/sdc /dev/dm-5' failed: exit code 1
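
One way to check what the kernel actually sees here (a sketch only; kpartx is the usual tool for mapping partitions on a device-mapper device, but whether this gets ceph-disk any further on an LV is untested):
Code:
# sketch, untested: map the partitions the kernel currently knows about
# on the journal LV, then check whether partition 1 shows up at all
kpartx -a -v /dev/mapper/pve-osd_journal_1
ls -l /dev/mapper/ | grep osd_journal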
 
Hi,
don't put the journal on LVM storage. I did this before when I started with Ceph (a standalone Ceph cluster) because it felt like a good idea to me (I used a filesystem on LVM, too).

But the performance wasn't as expected, so I switched to pure partition-based journals later...

Udo
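
For anyone reading along, a minimal sketch of what such a pure-partition journal setup could look like (device names are assumptions: /dev/sdb stands for an SSD with free, unpartitioned space, which a RAID1 OS install may not leave; /dev/sdc and /dev/sdd are data HDDs):
Code:
# sketch, not a tested recipe: hand pveceph the raw SSD and let
# ceph-disk carve the journal partitions itself, as in the wiki example
pveceph createosd /dev/sdc -journal_dev /dev/sdb
pveceph createosd /dev/sdd -journal_dev /dev/sdb
# ceph-disk creates a 5120M 'ceph journal' partition per OSD on /dev/sdb
# (see the sgdisk call in the error output above), so several OSDs can
# share one journal SSD, but it must be a real block device, not an LV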
 
Hi Udo,

thanks for your reply.
Did you use "normal" SAS or SATA Disk's? I am running with SSD's and lvm.
Therefore I am expecting a huge performance boost but the script is not working as expected .-(

Br
Mr.X
 
Hi,
the OSDs were SATA and the journals were on SSDs (file-based on LVM). Later I switched to partition-based journals on the same (consumer) SSDs, and after that to "real" Ceph SSDs (Intel DC S3700).

Udo
 
Hi,

the journals were on SSDs (file-based on LVM)
How did you achieve this? This is what I am trying to do as well.
There is no slot left for additional SSDs.
Br
Mr.X
 
Hi,

I am also interested in using an LVM volume as a journal. Any update on this?

Best regards,

BJ
 
The OP's creation command should work. The error he's getting suggests that his Proxmox install was probably not original (he doesn't appear to have ZoL installed on his system).
Hi alexskysilk,

by ZoL you mean ZFS on Linux?

BR
Mr.X
 
I do. If Proxmox had been installed from the official installer, ZFS would be installed and the command '/sbin/zpool list -HPLv' would run instead of erroring out.

ZFS changed the module loading from automatic to opt-in, so this is in fact normal for a PVE 5.0 install. We might switch back to the old behaviour when upgrading to 0.7.1.
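
For reference, a quick way to check this on a node (a sketch; the zpool warning is harmless if you don't use ZFS, and loading the module manually is only needed if you do):
Code:
# sketch: check whether the ZFS kernel module is loaded
lsmod | grep zfs
# load it manually only if you actually use ZFS on this node
modprobe zfs
/sbin/zpool list -HPLv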
 
