[SOLVED] rbd: sysfs write failed on TPM disks

Hello everyone,

We are running a four-node Proxmox VE cluster: three nodes in a hyper-converged setup with Ceph, and a fourth node used only for virtualization, without its own OSDs. After creating a VM with a TPM state device on a Ceph pool, it fails to start with the error message:

rbd: sysfs write failed
TASK ERROR: start failed: can't map rbd volume vm-103-disk-1: rbd: sysfs write failed

Other VMs without TPM work fine on the same pool. It is even possible to remove and recreate the TPM state and to move the disk from one pool to another. Whenever the disk resides on Ceph, the start fails for the same reason, but when it is located on local storage the VM starts without trouble.
Judging from the error message, the TPM disk is mapped locally on the hypervisor, whereas the other drives are handled by QEMU directly, which makes the TPM quite unique. Spinning up a VM with the ceph-common tools installed and trying to map an arbitrary RBD image (not just TPM state volumes) from there fails as well.
After having no success in troubleshooting, we set up a test cluster on three old desktop PCs and ran into the same problem again.

The issue first occurred during the cluster's initial setup on PVE 7.4 with Ceph Quincy and persists after upgrading to PVE 8. During the installation of the VMs we put the TPM state on a local pool, but this way we lose failover once we enter the productive phase, and creating a snapshot with a local disk attached prevents easy live migration.

We are running out of ideas. Thanks a lot,
Finally solved: there were actually two issues stacked on top of each other.

1. The ongoing rework of our networking is meant to let all services run full dual-stack where possible. Since the Ceph documentation clearly states that Ceph is capable of this, we enabled the feature.
According to the OSDs and the monmap, Ceph itself works perfectly fine, binding to four addresses in total: the Messenger v1 and v2 ports, each on an IPv4 and an IPv6 address.
Unfortunately, not all client implementations are dual-stack-ready so far. While QEMU appears to work with this kind of setup, KRBD does not. This is not an IPv6-related issue: choosing either IPv4 or IPv6, but not both, works equally well.
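For reference, a dual-stack monitor setup like ours looks roughly like this in ceph.conf. This is only a sketch; the subnets are placeholders, and reverting to single-stack (which KRBD handles fine) means disabling one of the two bind options:

```ini
[global]
# Dual-stack: monitors and OSDs bind to both address families
ms_bind_ipv4 = true
ms_bind_ipv6 = true
public_network = 192.0.2.0/24, 2001:db8::/64

# Single-stack IPv6 instead (the workaround that made KRBD mapping work):
# ms_bind_ipv4 = false
# ms_bind_ipv6 = true
# public_network = 2001:db8::/64
```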
After this experience, I assume the Proxmox integration of TPM state on Ceph block devices uses only the KRBD implementation, as LXC containers do. Since we are not using containers on Proxmox and the VMs were otherwise working, we wrongly assumed the problem was with TPM instead.
In the meantime, the Ceph documentation has gained an info box stating that dual-stack is not supported by the messengers. However, this warning is still missing from the networking section of their docs.

Note: Since Ceph and many of its integrations, including QEMU and ceph-csi, support dual-stack, it might be possible to run an external cluster in dual-stack mode and configure only the machines with KRBD (and possibly other) integrations to bind to the monitors using a single stack. Unfortunately, I'm not able to try this.

2. During normal operation of our cluster in the meantime, with VMs being created and removed and block devices moved between pools, the partial availability of the Ceph backend caused some disks to not be deleted properly. This results in another message:
TASK ERROR: rbd error: rbd: listing images failed: (2) No such file or directory
As mentioned in a previous thread, the issue can be resolved by cross-checking the devices Proxmox expects to be present against the ones actually listed in Ceph. In total, there were three devices configured in Proxmox but not available in Ceph, and another two in Ceph with no reference in Proxmox. Removing them cleared the error.
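The cross-check can be sketched like this. The image names below are hypothetical stand-ins; in practice the two lists would come from `rbd ls <pool>` on the Ceph side and from the disk references in the VM configs under /etc/pve/qemu-server/:

```shell
# On a live system the lists would be produced roughly like this
# (pool name and paths depend on your setup):
#   rbd ls <pool> | sort > ceph-disks
#   grep -ho 'vm-[0-9]\+-disk-[0-9]\+' /etc/pve/qemu-server/*.conf | sort -u > pve-disks
# Hypothetical example data standing in for the real output:
printf 'vm-101-disk-0\nvm-103-disk-1\nvm-105-disk-0\n' > pve-disks
printf 'vm-101-disk-0\nvm-104-disk-2\n' > ceph-disks

# Configured in Proxmox but missing from Ceph
# (remove the stale references from the VM configs):
comm -23 pve-disks ceph-disks

# Present in Ceph but unreferenced by Proxmox
# (candidates for `rbd rm` after double-checking):
comm -13 pve-disks ceph-disks
```

`comm` requires sorted input, hence the `sort` in the comments above; `-23` keeps lines unique to the first file, `-13` lines unique to the second.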

