[SOLVED] Proxmox VE 6.0: New Ceph OSD, but GPT = No

mihanson

Well-Known Member
Nov 1, 2018
Just upgraded a 3 node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for the PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI. After I created the OSD and the PGs began to do their dance, I noticed that in the "Disks" menu, GPT now said "No." I fired up gdisk on the node with this new OSD and it complained that the GPT table was damaged.

# sudo gdisk /dev/sdd
GPT fdisk (gdisk) version 1.0.3

Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!

Caution! After loading partitions, the CRC doesn't check out!
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: damaged

Found invalid MBR and corrupt GPT. What do you want to do? (Using the
GPT MAY permit recovery of GPT data.)
1 - Use current GPT
2 - Create blank GPT

Your answer:

I waited until the PGs finished shuffling around (~8 hours later) and tried to recreate the OSD by stopping it, marking it out, and destroying it. I used gdisk on the CLI to create a new, empty GPT table, and the disk appeared as GPT "Yes" in the Disks menu. However, once I created the OSD again, the same issue occurred: GPT "No."
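
For reference, roughly what I did (OSD ID 10 and /dev/sdd as examples; I did some of it via the GUI, so treat this as a sketch, not the exact commands):

ceph osd out 10
systemctl stop ceph-osd@10
pveceph osd destroy 10
gdisk /dev/sdd   (then 'o' to create a new empty GPT and 'w' to write it)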

Is this expected behavior? Here's the log of the second OSD creation:

create OSD on /dev/sdd (bluestore)
wipe disk/partition: /dev/sdd
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.39424 s, 150 MB/s
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 80e9da29-05d0-42e0-8216-6c440efbdaa0
Running command: /sbin/vgcreate -s 1G --force --yes ceph-34849157-ca5a-4d8e-819f-1c78c5533e67 /dev/sdd
stdout: Physical volume "/dev/sdd" successfully created.
stdout: Volume group "ceph-34849157-ca5a-4d8e-819f-1c78c5533e67" successfully created
Running command: /sbin/lvcreate --yes -l 100%FREE -n osd-block-80e9da29-05d0-42e0-8216-6c440efbdaa0 ceph-34849157-ca5a-4d8e-819f-1c78c5533e67
stdout: Logical volume "osd-block-80e9da29-05d0-42e0-8216-6c440efbdaa0" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /sbin/cryptsetup --batch-mode --key-file - luksFormat /dev/ceph-34849157-ca5a-4d8e-819f-1c78c5533e67/osd-block-80e9da29-05d0-42e0-8216-6c440efbdaa0
Running command: /sbin/cryptsetup --key-file - --allow-discards luksOpen /dev/ceph-34849157-ca5a-4d8e-819f-1c78c5533e67/osd-block-80e9da29-05d0-42e0-8216-6c440efbdaa0 fYdXIK-vALZ-Lwuw-jurx-Fmqa-YnES-L7nN7s
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-10
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph /dev/mapper/fYdXIK-vALZ-Lwuw-jurx-Fmqa-YnES-L7nN7s
Running command: /bin/chown -R ceph:ceph /dev/dm-6
Running command: /bin/ln -s /dev/mapper/fYdXIK-vALZ-Lwuw-jurx-Fmqa-YnES-L7nN7s /var/lib/ceph/osd/ceph-10/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-10/activate.monmap
stderr: 2019-07-20 12:15:04.176 7f083c188700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2019-07-20 12:15:04.176 7f083c188700 -1 AuthRegistry(0x7f083407ed58) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: got monmap epoch 23
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-10/keyring --create-keyring --name osd.10 --add-key AQAsaDNdk3ReBBAAY7DiNY/lVD2h6Jl9dBTzjA==
stdout: creating /var/lib/ceph/osd/ceph-10/keyring
added entity osd.10 auth(key=AQAsaDNdk3ReBBAAY7DiNY/lVD2h6Jl9dBTzjA==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-10/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-10/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 10 --monmap /var/lib/ceph/osd/ceph-10/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-10/ --osd-uuid 80e9da29-05d0-42e0-8216-6c440efbdaa0 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/sdd
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-10/lockbox.keyring --create-keyring --name client.osd-lockbox.80e9da29-05d0-42e0-8216-6c440efbdaa0 --add-key AQAsaDNdoGg3BRAA9DKz7FUi2b4h/v6Z6wwkVg==
stdout: creating /var/lib/ceph/osd/ceph-10/lockbox.keyring
added entity client.osd-lockbox.80e9da29-05d0-42e0-8216-6c440efbdaa0 auth(key=AQAsaDNdoGg3BRAA9DKz7FUi2b4h/v6Z6wwkVg==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-10/lockbox.keyring
Running command: /usr/bin/ceph --cluster ceph --name client.osd-lockbox.80e9da29-05d0-42e0-8216-6c440efbdaa0 --keyring /var/lib/ceph/osd/ceph-10/lockbox.keyring config-key get dm-crypt/osd/80e9da29-05d0-42e0-8216-6c440efbdaa0/luks
Running command: /sbin/cryptsetup --key-file - --allow-discards luksOpen /dev/ceph-34849157-ca5a-4d8e-819f-1c78c5533e67/osd-block-80e9da29-05d0-42e0-8216-6c440efbdaa0 fYdXIK-vALZ-Lwuw-jurx-Fmqa-YnES-L7nN7s
stderr: Device fYdXIK-vALZ-Lwuw-jurx-Fmqa-YnES-L7nN7s already exists.
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/mapper/fYdXIK-vALZ-Lwuw-jurx-Fmqa-YnES-L7nN7s --path /var/lib/ceph/osd/ceph-10 --no-mon-config
Running command: /bin/ln -snf /dev/mapper/fYdXIK-vALZ-Lwuw-jurx-Fmqa-YnES-L7nN7s /var/lib/ceph/osd/ceph-10/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-10/block
Running command: /bin/chown -R ceph:ceph /dev/dm-6
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
Running command: /bin/systemctl enable ceph-volume@lvm-10-80e9da29-05d0-42e0-8216-6c440efbdaa0
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-10-80e9da29-05d0-42e0-8216-6c440efbdaa0.service -> /lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@10
stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@10.service -> /lib/systemd/system/ceph-osd@.service.
Running command: /bin/systemctl start ceph-osd@10
--> ceph-volume lvm activate successful for osd ID: 10
--> ceph-volume lvm create successful for: /dev/sdd
TASK OK

Thanks for your help

Mike
 
I'm starting to decode what I think may be the answer to my question. According to the ceph-disk docs, ceph-disk, which was previously used for OSD creation, is deprecated in favor of ceph-volume, which Proxmox uses to create LVM volumes out of raw devices. There are definite gaps in my knowledge here, but after reading up on ceph-volume I'm leaning toward GPT = "No" being an expected outcome and a non-issue.
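
If I understand it right, the whole raw disk becomes an LVM physical volume, so there is no partition table on it at all. On my node I can see this with something like the following (VG name taken from my log above):

lsblk /dev/sdd
pvs /dev/sdd
lvs ceph-34849157-ca5a-4d8e-819f-1c78c5533e67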
 
but after reading up on ceph-volume I'm leaning toward GPT = "No" being an expected outcome and a non-issue.
Yes, this is OK and expected, since an LVM PV does not need or have a partition table on it.
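
You can check this on the node with something like:

ceph-volume lvm list /dev/sdd
pvs

which should show the OSD backed by an LV on the raw disk, not on a partition.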
 
Sorry for bringing this post back from the grave, but I'm currently testing on my PVE Ceph test cluster and some nodes (disks) show GPT "No" and some show GPT "Yes" after creating the OSDs.

So as far as I understand, it shouldn't matter what that column shows after the OSD is created, but it is indeed strange that it differs from one disk to another and from one node to another.

So is a damaged GPT no problem for Ceph OSDs? Or should I recreate the OSDs?
 
