PVE GUI doesn't recognize kernel bcache device?

Hi,

Is there a bug in Proxmox that prevents it from correctly seeing bcache devices as a regular storage device? I'm using Proxmox PVE 6.4-14, Linux 5.4.174-2-pve.

bcache is a Linux kernel feature that lets you use a small, fast disk (flash, SSD, NVMe, Optane, etc.) as a cache for a large, slower disk (a spinning HDD, for example). It can greatly improve disk performance. There are also reports of performance improvements when using bcache under OS disks, LVM volumes and ZFS disks.
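For context, a bcache device like this is typically created with bcache-tools roughly along these lines (the device names and partitions below are just illustrative, not my exact layout):
Code:
# create the backing device (slow HDD) and the cache device (fast NVMe partition)
make-bcache -B /dev/sdb
make-bcache -C /dev/nvme0n1p1
# attach the cache set to the backing device, using the cache set UUID
# reported by 'bcache-super-show /dev/nvme0n1p1'
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# optional: enable writeback caching for better write performance
echo writeback > /sys/block/bcache0/bcache/cache_mode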

I have found several posts on the internet discussing the advantages of using bcache under Ceph OSDs, whether with Proxmox or not.

I tried to configure it, following some recipes, and ran into a problem in the Proxmox GUI: the 'bcache0' drive doesn't appear alongside the other drives, and it isn't offered when creating a new OSD either.

Other drives appear normally in the 'Disks' list. The physical disk (/dev/sdb) used as the bcache backing device is listed, and is shown as in use by bcache. But the device exposed by the kernel as the cached drive (/dev/bcache0) strangely does not appear in the GUI's disk list, nor is it offered when creating a volume, for example, whether or not a GPT partition table has been written to it.
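For reference, outside the GUI the kernel exposes the device normally; it can be inspected with, for example:
Code:
# the bcache device shows up nested under its backing disk
lsblk -o NAME,TYPE,SIZE,FSTYPE /dev/sdb
# bcache's own state for the cached device ('clean', 'dirty', ...)
cat /sys/block/bcache0/bcache/state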

So I went looking on the internet for a way to use bcache for Ceph within Proxmox, and found posts that guide you through the CLI. I tried the commands as the posts indicate, but I get the following error message: unable to get device info for '/dev/bcache0'

Let's see what happened:
Code:
root@pve-20:~# ls /dev/bcache*
/dev/bcache0
root@pve-20:~# ls /dev/nvme*
/dev/nvme0  /dev/nvme0n1  /dev/nvme0n1p1  /dev/nvme0n1p2  /dev/nvme0n1p3
root@pve-20:~# pveceph osd create /dev/bcache0 -db_dev /dev/nvme0n1p3
unable to get device info for '/dev/bcache0'
root@pve-20:~#

Faced with this problem, I looked for other solutions and found posts saying the opposite: that it would not work directly through the Proxmox tooling, but that it would work with plain Ceph commands (not using pveceph as above). So I ran the suggested commands as below:
Code:
root@pve-20:~# ceph-volume lvm prepare --bluestore --data /dev/bcache0 --block.db /dev/nvme0n1p3
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 657a7611-e091-406e-8575-377ead90642e
Running command: /usr/sbin/vgcreate --force --yes ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd /dev/bcache0
 stdout: Physical volume "/dev/bcache0" successfully created.
 stdout: Volume group "ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd" successfully created
Running command: /usr/sbin/lvcreate --yes -l 238467 -n osd-block-657a7611-e091-406e-8575-377ead90642e ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd
 stdout: Logical volume "osd-block-657a7611-e091-406e-8575-377ead90642e" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd/osd-block-657a7611-e091-406e-8575-377ead90642e
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Running command: /usr/bin/ln -s /dev/ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd/osd-block-657a7611-e091-406e-8575-377ead90642e /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: 2022-05-18T08:30:11.526-0300 7f4fbe287700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2022-05-18T08:30:11.526-0300 7f4fbe287700 -1 AuthRegistry(0x7f4fb8059750) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
 stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQDC2IRilumQAxAALfg+gDst15iqmnOIIFfTuQ==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth(key=AQDC2IRilumQAxAALfg+gDst15iqmnOIIFfTuQ==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/chown -R ceph:ceph /dev/nvme0n1p3
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --bluestore-block-db-path /dev/nvme0n1p3 --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 657a7611-e091-406e-8575-377ead90642e --setuser ceph --setgroup ceph
 stderr: 2022-05-18T08:30:12.982-0300 7fc0df7e3e00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: 2022-05-18T08:30:13.034-0300 7fc0df7e3e00 -1 freelist read_size_meta_from_db missing size meta in DB
--> ceph-volume lvm prepare successful for: /dev/bcache0
root@pve-20:~#

Honestly, it worked! Ceph reported that the OSD was created. In the Proxmox GUI, the Ceph dashboard counted the new OSD, and under the Disks > LVM list the OSD volume appeared, correctly pointing at the '/dev/bcache0' device.

But, unfortunately, the OSD list page in the Proxmox GUI does not show the created OSD (very strange). The node is listed, but the new OSD does not appear, even after clicking the "Reload" button. I don't know the reason, but I would like to. More than knowing the reason, I would really like to know how to solve it, because I imagine this could cause a lot of inconvenience when administering the cluster from Proxmox.
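In case it helps with the diagnosis, the OSD can also be confirmed from the CLI with plain Ceph commands (nothing Proxmox-specific), for example:
Code:
# list what ceph-volume created on the bcache device
ceph-volume lvm list /dev/bcache0
# show the OSD in the CRUSH tree (still down at this point, since it was only prepared)
ceph osd tree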

If anyone can help, thanks!
 
After the prepare step you need to activate the OSD. You can do both in one step with "ceph-volume lvm create".
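I.e., with the devices from your prepare command it would look something like this (same flags, just the one-step variant):
Code:
# prepare + activate in a single step
ceph-volume lvm create --bluestore --data /dev/bcache0 --block.db /dev/nvme0n1p3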
Hello.

Thank you for your help. Your tip was spot on!

Code:
root@pve-20:~# cat /var/lib/ceph/osd/ceph-0/fsid 
2f6b54af-aec8-414e-a231-3cce47249463
root@pve-20:~# ceph-volume lvm activate --bluestore 0 2f6b54af-aec8-414e-a231-3cce47249463
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-49171487-b030-43f1-bb1d-74590825f4be/osd-block-2f6b54af-aec8-414e-a231-3cce47249463 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-49171487-b030-43f1-bb1d-74590825f4be/osd-block-2f6b54af-aec8-414e-a231-3cce47249463 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ln -snf /dev/nvme0n1p3 /var/lib/ceph/osd/ceph-0/block.db
Running command: /usr/bin/chown -R ceph:ceph /dev/nvme0n1p3
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block.db
Running command: /usr/bin/chown -R ceph:ceph /dev/nvme0n1p3
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-2f6b54af-aec8-414e-a231-3cce47249463
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-2f6b54af-aec8-414e-a231-3cce47249463.service → /lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
Running command: /usr/bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
root@pve-20:~#

Indeed, after the command to activate the OSD, it appeared in the list.

But one question remains: is there really a bug in Proxmox that makes it not recognize the '/dev/bcache0' device provided by the kernel as a regular storage device?

The pveceph command refused to create the OSD, as shown by the error in my first message, and the bcache device also does not appear in the Disks list in the GUI.

Could it be a bug, or does Proxmox simply not intend to recognize bcache kernel devices?
 
Maybe because it's not a usual device name like /dev/sdc or because it is an aggregated device made out of two other block devices.
Maybe. But, thinking about it, it looks like a bug to me, because Proxmox should recognize this device normally, since bcache is part of the mainline Linux kernel itself.
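If it helps whoever looks into this, one thing that might be worth comparing is what udev reports for the bcache device versus a disk the GUI does list; my guess is that the detection code keys off properties that /dev/bcache0 doesn't provide:
Code:
# compare udev properties of the bcache device with a normally detected disk
udevadm info --query=property --name=/dev/bcache0
udevadm info --query=property --name=/dev/sdb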
 
I also ran into the same problem. I hope bcache support gets integrated, or at least becomes compatible, in the future.
 
Hello Adrian. Have you opened an issue on the bug tracker?

I have been using bcache+drbd+pacemaker+iscsi for at least 7 years as the shared storage for the VMs in a VMware vCenter 6.5 setup. It works great!

I hope Proxmox gets (GUI) support for bcache for Ceph, as it is a really good solution for those of us with old servers and low budgets.

Greetings from Chile, South America.

Fernando.
 
Hi,
Yes, there is a feature request now: https://bugzilla.proxmox.com/show_bug.cgi?id=4679. Unfortunately, nobody has had time to work on it yet.
 
