Hi,
Is there a bug in Proxmox that prevents it from correctly seeing bcache devices as regular storage devices? I'm using Proxmox PVE 6.4-14, Linux 5.4.174-2-pve.
bcache is a Linux kernel feature that lets you use a small, fast disk (flash, SSD, NVMe, Optane, etc.) as a cache for a large, slower disk (a spinning HDD, for example). It can greatly improve disk performance. There are also reports of performance improvements when using bcache under OS disks, LVM disks and ZFS disks.
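For anyone unfamiliar, a typical bcache setup looks something like the following. This is only a sketch: the device names are examples (here /dev/sdb is the slow backing disk and /dev/nvme0n1p2 is the fast cache partition), so adjust them to your hardware.

Code:
# format the slow disk as a backing device (this creates /dev/bcache0)
make-bcache -B /dev/sdb
# format the fast partition as a cache device
make-bcache -C /dev/nvme0n1p2
# find the cache set UUID
bcache-super-show /dev/nvme0n1p2 | grep cset.uuid
# attach the cache set to the backing device (replace <cset-uuid>)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach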
I have found several posts on the internet discussing the advantages of using bcache for Ceph OSDs, whether with Proxmox or not.
I'm trying to configure it, following some recipes, and I ran into a problem in the Proxmox GUI: the 'bcache0' device doesn't appear alongside the other drives, and it isn't offered for building a new OSD either.
Other drives appear normally in the 'Disks' list. Even the physical disk (/dev/sdb) used as the bcache backing device appears there, marked as in use by bcache. But the device the kernel exposes as the cached drive (/dev/bcache0) strangely does not appear in the GUI disk list, nor is it available for creating a volume, for example. This happens whether or not I create a GPT signature on it.
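To see exactly what Proxmox's disk scanning reports (the GUI list is built from the same data), you can query the disks API from the CLI and compare it with what the kernel exposes. This is just a diagnostic idea; 'pve-20' is my node name, so substitute yours.

Code:
# list the disks Proxmox itself detects on this node
pvesh get /nodes/pve-20/disks/list
# compare with what the kernel actually exposes
lsblk -o NAME,TYPE,SIZE,FSTYPE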
So I searched the internet for a way to use bcache for Ceph within Proxmox and found posts that suggest using the CLI. I tried the commands as the posts indicate, but the following error message appears: unable to get device info for '/dev/bcache0'
Let's see what happened:
Code:
root@pve-20:~# ls /dev/bcache*
/dev/bcache0
root@pve-20:~# ls /dev/nvme*
/dev/nvme0 /dev/nvme0n1 /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1p3
root@pve-20:~# pveceph osd create /dev/bcache0 -db_dev /dev/nvme0n1p3
unable to get device info for '/dev/bcache0'
root@pve-20:~#
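I suspect the error comes from how the device is classified rather than from Ceph itself. One way to dig further (again, just a diagnostic idea) is to check what the system's device database knows about /dev/bcache0, since Proxmox gathers device info from it:

Code:
# dump the udev properties recorded for the bcache device
udevadm info --query=property --name=/dev/bcache0
# check how the kernel classifies it (0 = non-rotational)
cat /sys/block/bcache0/queue/rotational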
Faced with this problem, I looked for other solutions and found posts saying the opposite: that it would not be possible to create the OSD directly through Proxmox, but that it would work if done with plain Ceph commands (not using pveceph as above). So I used the suggested commands, as below:
Code:
root@pve-20:~# ceph-volume lvm prepare --bluestore --data /dev/bcache0 --block.db /dev/nvme0n1p3
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 657a7611-e091-406e-8575-377ead90642e
Running command: /usr/sbin/vgcreate --force --yes ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd /dev/bcache0
stdout: Physical volume "/dev/bcache0" successfully created.
stdout: Volume group "ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd" successfully created
Running command: /usr/sbin/lvcreate --yes -l 238467 -n osd-block-657a7611-e091-406e-8575-377ead90642e ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd
stdout: Logical volume "osd-block-657a7611-e091-406e-8575-377ead90642e" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd/osd-block-657a7611-e091-406e-8575-377ead90642e
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Running command: /usr/bin/ln -s /dev/ceph-9c456212-8807-4c04-b7af-e5ddd6b2f3fd/osd-block-657a7611-e091-406e-8575-377ead90642e /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
stderr: 2022-05-18T08:30:11.526-0300 7f4fbe287700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2022-05-18T08:30:11.526-0300 7f4fbe287700 -1 AuthRegistry(0x7f4fb8059750) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQDC2IRilumQAxAALfg+gDst15iqmnOIIFfTuQ==
stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth(key=AQDC2IRilumQAxAALfg+gDst15iqmnOIIFfTuQ==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/chown -R ceph:ceph /dev/nvme0n1p3
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --bluestore-block-db-path /dev/nvme0n1p3 --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 657a7611-e091-406e-8575-377ead90642e --setuser ceph --setgroup ceph
stderr: 2022-05-18T08:30:12.982-0300 7fc0df7e3e00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
stderr: 2022-05-18T08:30:13.034-0300 7fc0df7e3e00 -1 freelist read_size_meta_from_db missing size meta in DB
--> ceph-volume lvm prepare successful for: /dev/bcache0
root@pve-20:~#
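One thing worth noting here: 'ceph-volume lvm prepare' only prepares the OSD; it does not activate it or start the daemon. If the ceph-osd service never started, that alone could explain an OSD that exists in the cluster but behaves oddly. A possible follow-up (the osd id 0 and the fsid are taken from the output above):

Code:
# activate the prepared OSD (starts the ceph-osd systemd unit)
ceph-volume lvm activate 0 657a7611-e091-406e-8575-377ead90642e
# or simply activate everything that has been prepared
ceph-volume lvm activate --all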
Honestly, it worked! Ceph reported that it had created the OSD. In the Proxmox GUI, the Ceph dashboard counted the newly created OSD, and under the Disks option, in the LVM volume list, the OSD volume appeared, correctly pointing to the '/dev/bcache0' device.
But, unfortunately, the OSD list page in the Proxmox GUI doesn't show the created OSD (very strange). The node is listed, but the created OSD does not appear, even after clicking the "Reload" button. I don't know the reason, but I would like to. And more than knowing the reason, I would really like to know how to solve the problem, because I imagine this could cause many inconveniences when administering the cluster in Proxmox.
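To narrow down whether this is only a GUI problem or the OSD really is missing from the cluster's view, it may help to compare what Ceph itself reports with what the daemon on the node is doing. A few diagnostic commands (assuming the new OSD got id 0):

Code:
# does the cluster know about the OSD, and where does it sit in the CRUSH tree?
ceph osd tree
# is the OSD daemon actually running on this node?
systemctl status ceph-osd@0
# what does ceph-volume think is deployed on this node?
ceph-volume lvm list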
If anyone can help, thanks!