Hi all,
Right now I am following this manual here.
My setup looks as follows: three Dell R530 servers, each equipped with 2x 750GB SSDs and 6x 8TB HDDs.
The OS is installed on the two SSDs in RAID 1, and the cluster is running perfectly. I would like to use the 6 HDDs for the OSDs.
According to the manual:
Code:
Example: /dev/sdf as data disk (4TB) and /dev/sdb is the dedicated SSD journal disk
# pveceph createosd /dev/sdf -journal_dev /dev/sdb
Unfortunately, it is failing ...
Code:
# pveceph createosd /dev/sdc -journal_dev /dev/mapper/pve-osd_journal_1
command '/sbin/zpool list -HPLv' failed: open3: exec of /sbin/zpool list -HPLv failed at /usr/share/perl5/PVE/Tools.pm line 411.
create OSD on /dev/sdc (xfs)
using device '/dev/dm-5' for journal
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
Could not create partition 2 from 34 to 10485793
Unable to set partition 2's name to 'ceph journal'!
Setting name!
partNum is 1
REALLY setting name!
Could not change partition 2's type code to 45b0969e-9b03-4f30-b4c6-b4b80ceff106!
Error encountered; not saving changes.
Traceback (most recent call last):
File "/usr/sbin/ceph-disk", line 9, in <module>
load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5047, in run
main(sys.argv[1:])
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5000, in main
main_catch(args.func, args)
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5025, in main_catch
func(args)
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1812, in main
Prepare.factory(args).prepare()
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1801, in prepare
self.prepare_locked()
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1832, in prepare_locked
self.data.prepare(self.journal)
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2494, in prepare
self.prepare_device(*to_prepare_list)
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2670, in prepare_device
to_prepare.prepare()
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2003, in prepare
self.prepare_device()
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2095, in prepare_device
num=num)
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1554, in create_partition
self.path,
File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 446, in command_check_call
return subprocess.check_call(arguments)
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/sbin/sgdisk', '--new=2:0:+5120M', '--change-name=2:ceph journal', '--partition-guid=2:95f4c721-0149-42fb-b9c7-17fee8618bfe', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/dm-5']' returned non-zero exit status 4
command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 217d7725-6ad8-4958-9ed6-94a39cd62482 --journal-dev /dev/sdc /dev/dm-5' failed: exit code 1
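The sector numbers in the sgdisk error line up exactly with a 5 GiB journal partition on a 512-byte-sector device, which is what `--new=2:0:+5120M` asks for. A quick sanity check of that arithmetic (assuming 512-byte sectors, which is what the "from 34 to 10485793" range implies):

```python
# Sector arithmetic for "sgdisk --new=2:0:+5120M" on a 512-byte-sector device.
SECTOR_BYTES = 512
size_sectors = 5120 * 1024 * 1024 // SECTOR_BYTES  # 5120 MiB expressed in sectors
first_lba = 34                                      # first usable LBA after the GPT header
last_lba = first_lba + size_sectors - 1

# Matches the failing range "from 34 to 10485793" in the error output above,
# so the target device appears to be too small to hold a full 5 GiB partition
# plus GPT overhead.
print(first_lba, last_lba)
```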
One thing I noticed: in the last line, ceph-disk seems to have received the devices swapped compared to what I typed (/dev/dm-5 as the data disk and /dev/sdc as the journal). Can anybody help or does anyone have an idea?
Many thanks!