Ceph OSD on LVM logical volume.

kwinz

Active Member
Apr 18, 2020
Hi,

I know it's not recommended for performance reasons. But I want to create a Ceph OSD on a node with just a single NVMe SSD.
So I kept some free space on the SSD during the install, and then created a new logical volume with `lvcreate -n vz -V 10G pve`.

However, that volume does not show up when trying to create a new OSD via the GUI (screenshot: ceph-osd-no-disk.PNG).


[edit]
pveceph osd create /dev/mapper/pve-vz
results in:
unable to get device info for '/dev/dm-2'

[edit2]:
I will try ceph-disk according to https://forum.proxmox.com/threads/pveceph-unable-to-get-device-info.44927/#post-238545

How do I add a new OSD without having a dedicated disk for it?
 
So here's my little guide for everyone who wants to do this:

1. During install set maxvz to 0 to not create local storage and keep free space for Ceph on the OS drive. [GUIDE, 2.3.1 Advanced LVM Configuration Options ]
2. Setup Proxmox like usual and create a cluster
3. Install Ceph packages and do initial setup (network interfaces etc.) via GUI, also create Managers and Monitors
4. To create OSDs, open a shell on each node and:

4.a. bootstrap auth [4]:
ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring

4.b. Create a new logical volume from the remaining free space (note: `-n` takes the LV name, the VG is a separate argument):
lvcreate -l 100%FREE -n vz pve
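Before creating the LV, it can help to confirm how much free space the volume group actually has and that the LV name is unused. These are standard LVM commands; the VG name `pve` matches the guide, and the output will differ per node:

```shell
# Show total and free space in the "pve" volume group
vgs pve -o vg_name,vg_size,vg_free
# List existing logical volumes so the new "vz" name does not collide
lvs pve
```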

4.c. Create (i.e. prepare and activate) an OSD on the logical volume [2] [3]:
ceph-volume lvm create --data pve/vz
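After step 4.c, a quick sanity check with standard Ceph commands (OSD IDs and hostnames will differ per cluster) confirms the LV-backed OSD exists and is up:

```shell
# List the OSDs that ceph-volume created from logical volumes
ceph-volume lvm list
# The new OSD should appear as "up" in the CRUSH tree
ceph osd tree
```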

5. That's it. Now you can keep using the GUI to:
  • create Metadata Servers,
  • create a CephFS by clicking on a node in the cluster under "Ceph", and then add it in "Datacenter-Storage" for ISO images and backups. This will be mounted at /mnt/pve/cephfs/,
  • and in "Datacenter-Storage" add an "RBD" block device storage for virtual VM disks.

[GUIDE] https://pve.proxmox.com/pve-docs/pve-admin-guide.pdf
[2] https://docs.ceph.com/docs/master/ceph-volume/lvm/create/#ceph-volume-lvm-create
[3] https://docs.ceph.com/docs/master/ceph-volume/
[4] https://forum.proxmox.com/threads/p...ble-to-create-a-new-osd-id.55730/#post-257533
 

Followed these steps with Proxmox VE 8.2.5 and Ceph 18.2.2.
My cluster has 6 Proxmox nodes; nodes 1+2+3 run the Ceph monitors and managers.

When trying to add an OSD on nodes 4+5+6, an error occurs. These nodes will provide OSDs, but run no monitor or manager.
Bash:
root@proxmox05:# ceph-volume lvm create --data pve/vz
Running command: /usr/bin/ceph-authtool --gen-print-key
-->  RuntimeError: No valid ceph configuration file was loaded.

The cause of this issue is that the symlink /etc/ceph/ceph.conf (pointing to /etc/pve/ceph.conf) is missing on these nodes. This can be fixed by adding it manually (between steps 3 and 4 in the procedure):
Bash:
ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
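A slightly more defensive variant (same paths as above) only creates the link when the file is missing, and verifies the target afterwards:

```shell
# Create the symlink only if /etc/ceph/ceph.conf does not already exist
if [ ! -e /etc/ceph/ceph.conf ]; then
    ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
fi
# Verify it points at the cluster-wide config managed by pmxcfs
readlink /etc/ceph/ceph.conf
```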
 
