GPT is not preserved by CreateOSD

PVEfan

New Member
May 15, 2021
Hello,

what could be the reason that a formatted 8 TB HDD including a GPT does not keep the GPT after CreateOSD via the PVE web GUI or via the terminal: pveceph createosd /dev/sdh (in this case)?

The GPT was created beforehand with fdisk (g creates a new empty GPT label, w writes it to disk):


fdisk /dev/sdh
g
w
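
For reference, the same empty GPT label could also be written non-interactively (an equivalent sketch, not the command that was actually run here):

Code:
parted -s /dev/sdh mklabel gpt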


fdisk -l /dev/sdh
Disk /dev/sdh: 7.3 TiB, 8001020755968 bytes, 15626993664 sectors
Disk model: MR9361-8i
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disklabel type: gpt
Disk identifier: D43BDE17-5638-8047-8641-95654A721FC9



pveceph createosd /dev/sdh

create OSD on /dev/sdh (bluestore)
wipe disk/partition: /dev/sdh
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.05071 s, 200 MB/s
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 29787a04-5a49-41b2-ba86-719228d02a9b
Running command: /sbin/vgcreate --force --yes ceph-0f7cc37b-55b7-498e-9609-845fb213f99d /dev/sdh
stdout: Physical volume "/dev/sdh" successfully created.
stdout: Volume group "ceph-0f7cc37b-55b7-498e-9609-845fb213f99d" successfully created
Running command: /sbin/lvcreate --yes -l 1907591 -n osd-block-29787a04-5a49-41b2-ba86-719228d02a9b ceph-0f7cc37b-55b7-498e-9609-845fb213f99d
stdout: Logical volume "osd-block-29787a04-5a49-41b2-ba86-719228d02a9b" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-27
--> Executable selinuxenabled not in PATH: /sbin:/bin:/usr/sbin:/usr/bin
Running command: /bin/chown -h ceph:ceph /dev/ceph-0f7cc37b-55b7-498e-9609-845fb213f99d/osd-block-29787a04-5a49-41b2-ba86-719228d02a9b
Running command: /bin/chown -R ceph:ceph /dev/dm-8
Running command: /bin/ln -s /dev/ceph-0f7cc37b-55b7-498e-9609-845fb213f99d/osd-block-29787a04-5a49-41b2-ba86-719228d02a9b /var/lib/ceph/osd/ceph-27/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-27/activate.monmap
stderr: 2021-08-26 19:31:48.807 7f8bf1961700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2021-08-26 19:31:48.807 7f8bf1961700 -1 AuthRegistry(0x7f8bec0831b8) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: got monmap epoch 35
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-27/keyring --create-keyring --name osd.27 --add-key AQAD0CdhfujQFRAAv8vx9RIWW2u2RPJHSSAKNw==
stdout: creating /var/lib/ceph/osd/ceph-27/keyring
added entity osd.27 auth(key=AQAD0CdhfujQFRAAv8vx9RIWW2u2RPJHSSAKNw==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-27/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-27/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 27 --monmap /var/lib/ceph/osd/ceph-27/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-27/ --osd-uuid 29787a04-5a49-41b2-ba86-719228d02a9b --setuser ceph --setgroup ceph
stderr: 2021-08-26 19:31:49.051 7fad7c585c80 -1 bluestore(/var/lib/ceph/osd/ceph-27/) _read_fsid unparsable uuid
--> ceph-volume lvm prepare successful for: /dev/sdh
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-27
Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0f7cc37b-55b7-498e-9609-845fb213f99d/osd-block-29787a04-5a49-41b2-ba86-719228d02a9b --path /var/lib/ceph/osd/ceph-27 --no-mon-config
Running command: /bin/ln -snf /dev/ceph-0f7cc37b-55b7-498e-9609-845fb213f99d/osd-block-29787a04-5a49-41b2-ba86-719228d02a9b /var/lib/ceph/osd/ceph-27/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-27/block
Running command: /bin/chown -R ceph:ceph /dev/dm-8
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-27
Running command: /bin/systemctl enable ceph-volume@lvm-27-29787a04-5a49-41b2-ba86-719228d02a9b
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-27-29787a04-5a49-41b2-ba86-719228d02a9b.service -> /lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@27
Running command: /bin/systemctl start ceph-osd@27
--> ceph-volume lvm activate successful for osd ID: 27
--> ceph-volume lvm create successful for: /dev/sdh



afterwards:

fdisk -l /dev/sdh
Disk /dev/sdh: 7.3 TiB, 8001020755968 bytes, 15626993664 sectors
Disk model: MR9361-8i
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
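
The "Disklabel type: gpt" line from before is gone, which is the difference being asked about. To see what the device carries instead, blkid and lsblk should report an LVM physical volume spanning the whole disk (a small sketch using the device from this thread; output omitted):

Code:
blkid /dev/sdh
lsblk -o NAME,TYPE,FSTYPE,SIZE /dev/sdh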



pveversion -v

proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 14.2.16-pve1
ceph-fuse: 14.2.16-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-4
pve-cluster: 6.2-1
pve-container: 3.3-3
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-4
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1


Thanks and regards,
PVEFan
 
If you are using a whole disk device for your physical volume, the disk must have no partition table.
Ceph OSDs use LVM - LVM does not need a GPT and uses the entire block device. Red Hat, for example, also recommends not having a partition table (see the quote above and the linked section 4.2.1.1. Setting the Partition Type). A quick way to check this on your own node is sketched after the example below.

https://access.redhat.com/documenta...l_volume_manager_administration/physvol_admin

Code:
root@pve01:~# lsblk
nvme0n1                                                                                    259:3     0 894.3G  0 disk
└─ceph--2792e970--95f6--4157--8c28--fa0c73a13c46-osd--block--1155a4a0--5894--495c--a8b0--8b3cbdc0d9b0
                                                                                           253:1     0 894.3G  0 lvm
root@pve01:~# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: WUS4BB096D7P3E4
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
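
If you want to double-check this yourself, the LVM tools should show the physical volume that ceph-volume created sitting directly on the whole device, without any partition table (a minimal sketch reusing the device and VG name from the createosd log earlier in the thread; your names will differ, output omitted):

Code:
pvs -o pv_name,vg_name,pv_size /dev/sdh
lvs ceph-0f7cc37b-55b7-498e-9609-845fb213f99d
ceph-volume lvm list /dev/sdh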
 
Hi jsterr,

thanks for the info and for the link to read up on!

Have a nice weekend!

PVEFan
 
