3-node cluster, cannot create a new OSD

cxgl

New Member
Mar 31, 2019
Hi all,

I have recently rebuilt our two standalone Proxmox VE boxes into a 3-node cluster (I added a new PVE box).

I had successfully set up a Ceph store across all 3 nodes with the disks I had lying around -- so not exactly homogeneous. The intention was to order some nice 5TB drives, add them as OSDs, and let it all rebalance.

The problem is I can't add the 5TB drive as an OSD. I get:

Code:
Virtual Environment 5.3-11
Node 'atlas5'
create OSD on /dev/sdc (bluestore)
using device '/dev/sda' for block.db
wipe disk/partition: /dev/sdc
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 2.25668 s, 92.9 MB/s
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
prepare_device: OSD will not be hot-swappable if block.db is not the same device as the osd data
Could not create partition 4 from 0 to 2097151
Unable to set partition 4's name to 'ceph block.db'!
Could not change partition 4's type code to 30cd0809-c2b2-499c-8879-2d6b785292be!
Error encountered; not saving changes.
Setting name!
partNum is 3
REALLY setting name!
'/sbin/sgdisk --new=4:0:+1024M --change-name=4:ceph block.db --partition-guid=4:1acde9c6-4072-4bea-8eea-456a7f4b801f --typecode=4:30cd0809-c2b2-499c-8879-2d6b785292be --mbrtogpt -- /dev/sda' failed with status code 4
TASK ERROR: command 'ceph-disk prepare --zap-disk --cluster ceph --cluster-uuid 84965957-175f-4ab0-ada3-0df175c94a7f --bluestore --block.db /dev/sda /dev/sdc' failed: exit code 1

...and nothing I do (zapping, dd'ing the front and back of the disk, etc.) does any good.
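For reference, the wiping I tried looked roughly like this (a sketch from memory, assuming /dev/sdc is the data disk; the exact counts may have differed):

Code:
# zap any Ceph/GPT structures on the data disk
ceph-disk zap /dev/sdc
sgdisk --zap-all /dev/sdc

# overwrite the first and last 100 MiB of the disk
# (blockdev --getsz reports 512-byte sectors, so /2048 gives MiB)
dd if=/dev/zero of=/dev/sdc bs=1M count=100
dd if=/dev/zero of=/dev/sdc bs=1M count=100 \
   seek=$(( $(blockdev --getsz /dev/sdc) / 2048 - 100 ))

# have the kernel re-read the (now empty) partition table
partprobe /dev/sdc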

Further, I saw that someone else had a similar problem, and thinking it might be a journal issue, I removed the other OSD on this node to try re-adding an OSD -- and now I can't even re-add the original disk.

What should I be looking for?

Thanks in advance.


EDIT: I was trying to re-use the original journal drive as the journal for these two OSDs. When I just let Ceph put the journal on the same disk, it added them successfully. Non-optimal, I know... Could someone please point me in the right direction for correctly clearing the old, disused journals off of said old journal disk, so I can re-create the OSDs with the journals on the separate, faster disk?
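Judging from the "Could not create partition 4" line in the log, my guess is the old journal partitions are still filling /dev/sda, so sgdisk can't find room for a new 1024M partition. What I imagine the cleanup looks like is something along these lines (an untested sketch; /dev/sda is my old journal disk here, and the partition numbers are assumptions -- check the listing first):

Code:
# list the partitions to see which ones are stale journal/db partitions
sgdisk -p /dev/sda

# delete the stale partitions by number (1-3 here are just placeholders)
sgdisk -d 1 -d 2 -d 3 /dev/sda

# ...or, if nothing on the disk is still in use, wipe it entirely:
wipefs -a /dev/sda
sgdisk --zap-all /dev/sda

# have the kernel re-read the partition table
partprobe /dev/sda

Is that roughly the right direction, or is there a cleaner Ceph-side way to release the old journals?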
 
