1. C

    [SOLVED] rbd: sysfs write failed on TPM disks

    Hello everyone, we are running a 4-node PVE cluster with 3 nodes in a hyper-converged setup with Ceph, and the 4th node just for virtualization without its own OSDs. After creating a VM with a TPM state device on a Ceph pool, the VM fails to start with the error message: rbd: sysfs write failed TASK...
  2. Y

    Ceph very slow rebalancing, ~300 KiB/s

    Hi, I have recreated an OSD in my hyper-converged cluster. I have a 10 Gbit link, so rebalancing should be really fast, but it seems to rebalance at only a few kilobytes per second. I have already set: ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4' ceph tell 'osd.*' injectargs...
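The injectargs lines above can be completed roughly as follows; a hedged sketch, noting that on Quincy and later the mClock scheduler may ignore these throttles unless its profile is changed:

```shell
# Raise the recovery/backfill throttles at runtime (pre-Quincy style):
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
ceph tell 'osd.*' injectargs '--osd-max-backfills 4'
ceph tell 'osd.*' injectargs '--osd-recovery-sleep 0'   # remove the per-op recovery delay

# Equivalent persistent settings via the config database (Octopus and later):
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 4
```

These commands need a running cluster, so treat the values as starting points and watch client latency while recovery runs.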
  3. N

    Ceph monitors

    Hello, I made the mistake of installing Ceph before I created my cluster, and now I cannot add, remove, or start my second monitor in Ceph. I am curious whether there is a command to remove it that I cannot find. **Edit:** I found the commands, but it still will not let me remove it. Can I delete all the ceph...
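For a monitor that refuses to go away via the GUI, the CLI route is usually one of the following; a hedged sketch, with `NODENAME` as a placeholder for the affected node:

```shell
# On a pveceph-managed cluster:
pveceph mon destroy NODENAME

# Or directly via Ceph, if the pveceph tooling refuses:
ceph mon remove NODENAME
# Then stop and disable the daemon on that node so it cannot rejoin:
systemctl stop ceph-mon@NODENAME
systemctl disable ceph-mon@NODENAME
```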
  4. C

    Ceph - broken configuration - rados_connect failed - No such file or directory (500)

    Well, I've messed up pretty badly this time. While following (again) the guide for reinstalling Ceph - https://dannyda.com/2021/04/10/how-to-completely-remove-delete-or-reinstall-ceph-and-its-configuration-from-proxmox-ve-pve/ - I stopped at step 1.16, `rm -r /etc/pve/ceph.conf` (while following steps only...
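Since `/etc/ceph/ceph.conf` on PVE is normally just a symlink to `/etc/pve/ceph.conf`, the 500 error is librados failing to find any config. A hedged sketch of checking the link and recreating a minimal config; the fsid and monitor addresses below are placeholders that must come from the real cluster (e.g. a surviving monitor's data directory):

```shell
# On PVE this should point at the pmxcfs copy:
ls -l /etc/ceph/ceph.conf          # expected: -> /etc/pve/ceph.conf

# A minimal replacement needs only the cluster fsid and the monitor addresses:
cat > /etc/pve/ceph.conf <<'EOF'
[global]
    fsid = 00000000-0000-0000-0000-000000000000
    mon_host = 192.0.2.11 192.0.2.12 192.0.2.13
EOF
```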
  5. M

    What is "Normal" Ceph performance?

    As the title suggests, I want to find out how other people's clusters perform. I'll start: Node count: 3. Networking: 1 GbE, shared with Proxmox. Disks: 3 x 3 TB 7200 RPM, 2 x 1 TB 7200 RPM, 1 x 1 TB 5400 RPM. Using this benchmarking tutorial, here's my result
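For comparable numbers across replies, the standard tool is `rados bench` against a dedicated test pool. A sketch (`testpool` is a placeholder):

```shell
rados bench -p testpool 60 write --no-cleanup   # 60 s write benchmark, keep objects
rados bench -p testpool 60 seq                  # sequential reads of the objects just written
rados bench -p testpool 60 rand                 # random reads
rados -p testpool cleanup                       # remove the benchmark objects afterwards
```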
  6. D

    Ceph + secure communications + TPM disk ⇒ scary looking kernel error, 'no match of type 1 in addrvec', 'corrupt full osdmap', even when krbd not set

    [ This follows on from my previous comment on a different thread, https://forum.proxmox.com/threads/pverados-segfault.130628/post-574807 ] I've just figured out the whats and whys of a problem I've been having trying to create a new VM that uses RBD disks hosted by an external Ceph cluster...
  7. DynFi User

    6 nodes CEPH cluster same LAN different location

    Hello, we are working on a configuration where we will have 6 nodes spread across two (very close) sites, all linked on the same LAN (25G). I wanted to know how you would design the solution with Ceph in order to have a working site in case one of the two sites fails. The idea is to have site A...
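One common approach to "one surviving copy per site" is a CRUSH rule whose failure domain is the site. A hedged sketch, where `site-a`/`site-b`, the node names, and `mypool` are all placeholders; note that a true two-site setup also needs a tiebreaker monitor outside both sites to keep quorum:

```shell
# Create two datacenter buckets and move the hosts into them:
ceph osd crush add-bucket site-a datacenter
ceph osd crush add-bucket site-b datacenter
ceph osd crush move site-a root=default
ceph osd crush move site-b root=default
ceph osd crush move node1 datacenter=site-a
ceph osd crush move node4 datacenter=site-b

# Replicated rule that spreads copies across datacenters, then apply it:
ceph osd crush rule create-replicated replicated-sites default datacenter
ceph osd pool set mypool crush_rule replicated-sites
```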
  8. B

    Recommended OS/Config Backup Strategies

    What are some strategies that people use to back up/clone/automate OS disks and configurations? I've read a number of threads on this topic, ranging from Clonezilla backups to automated config managers, zfs send cron jobs, and many more. Many are outdated, and I am curious if there are more...
  9. A

    Ceph based HCI doesn't 'like' single drive nodes?

    I've been using an HCI cluster in my home lab built from really small, low-power devices, mostly because they've become potent enough to host the various 24x7 services I've been accumulating. I've used Mini-ITX Atoms and NUCs and am currently trying to transition an HCI cluster made from...
  10. N

    PVE-Cluster Ceph: "rbd: delete error" "Structure needs cleaning"

    Hello guys, once again I encountered the issue: rbd error: rbd: listing images failed: (2) No such file or directory (500). Last time I was able to fix it via: rbd rm -p CEPH-POOL-NAME vm-ID-disk-ID. This time, however, it results in: Removing image: 0% complete...failed. rbd: delete error...
  11. M

    PVE 7 to 8: VM crashes after migrating, OSD not found

    I run a 3-node PVE cluster with Ceph. I migrated all VMs away from node 3, upgraded to the latest Ceph (Quincy), and then started the PVE 7-to-8 upgrade on node 3. After rebooting node 3 (now PVE 8), everything seemed to work well, so I migrated two VMs, one each from node 1 (still on PVE 7) and node 2...
  12. C

    Proxmox/Ceph Disk Layout

    Hello all, I am currently working on some hosts that have 8x 600 GB 10k SAS drives, and am planning to use some of these to install Proxmox and the rest for Ceph. Is there a best way to split out these drives by filesystem? Should I use the RAID controller on the server, or is it better to use the...
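For layouts like this, the usual advice is to expose the Ceph disks individually (HBA/JBOD mode, or single-disk RAID-0 if the controller offers nothing else), since Ceph handles redundancy itself. A sketch with `/dev/sdX` names as placeholders, assuming two drives were kept for the Proxmox install:

```shell
# Hand whole disks to Ceph as OSDs:
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
pveceph osd create /dev/sde

# Verify what ceph-volume created:
ceph-volume lvm list
```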
  13. R

    Proxmox HA and Ceph

    Hi there! I'm trying to learn more about Ceph storage so we can use it in an upcoming installation. We have a database running on Windows Server that most of the company relies upon. I was looking into getting a 4-blade server and running Proxmox VE on 3 of the blades and PBS on the last blade as...
  14. R

    [SOLVED] Guest migration via ceph copies disk to local-storage

    Hi guys, I have a single Debian VM with the guest agent, running on a 3-node cluster (each node has 2 OSDs forming the Ceph cluster (rpool1)). When I tried to online-migrate the VM, I got (you'll notice later that the disk does not reside on local storage but on Ceph (pool1), and this...
  15. T

    Ignoring custom ceph config for storage

    Good day, I was wondering how to get rid of this error: Jun 28 14:37:01 pve13 pvestatd[1495]: ignoring custom ceph config for storage 'CephData', 'monhost' is not set (assuming pveceph managed cluster)! Jun 28 14:37:12 pve13 pvestatd[1495]: ignoring custom ceph config for storage 'CephData'...
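That pvestatd message appears when a storage entry carries a custom Ceph config but no `monhost`, so PVE assumes a pveceph-managed cluster. A hedged sketch of a `/etc/pve/storage.cfg` entry for an externally managed cluster; the addresses and user are placeholders, and PVE expects the keyring at `/etc/pve/priv/ceph/<storeid>.keyring`:

```
rbd: CephData
    pool rbd
    content images
    monhost 192.0.2.11 192.0.2.12 192.0.2.13
    username admin
```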
  16. I

    For Best Performance - Proxmox Cluster with CEPH or ZFS?

    After months of planning, I came to the conclusion to assemble 3 Proxmox nodes and cluster them together. I'm mostly interested in mini PCs (NUC style) with dual 2.5 GbE LANs, but after building a 32-core Epyc Proxmox node, I'm aware of the performance boost with actual server hardware. Anyway, I will...
  17. H

    Ceph Dashboard on v8?

    Hi, the dashboard does not seem to work with the latest version of Proxmox (upgrade or clean install). I can access the dashboard for a few minutes after installing the plugin, but then the manager service quickly crashes. I have already tried reinstalling on the host I upgraded, and added a clean-installed...
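When the mgr dashboard crashes shortly after enabling it, a common triage sequence looks like the following; a sketch only, assuming a Debian-based PVE 8 host with the dashboard packaged as `ceph-mgr-dashboard`:

```shell
# Check for recorded mgr crashes and the module state first:
ceph crash ls
ceph mgr module ls

# Disable the module, reinstall its package, and re-enable it:
ceph mgr module disable dashboard
apt install --reinstall ceph-mgr-dashboard
ceph mgr module enable dashboard
```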
  18. L

    [SOLVED] How to remove old mds from ceph? (actually slow mds message)

    I had a failed node, which I replaced, but the MDS (for CephFS) that was on that node is still reported as slow in the GUI. How can I remove it? It's not in ceph.conf or storage.conf. MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs. mdssm1(mds.0): 6 slow metadata IOs are blocked > 30 secs...
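If the daemon really did die with the failed node, the stale state can usually be inspected and cleared from the CLI; a hedged sketch, where `mdssm1` is the MDS name taken from the health warning above:

```shell
ceph health detail        # confirm which MDS the warning refers to
ceph fs status            # list active and standby MDS daemons
ceph mds fail mdssm1      # mark the dead daemon failed so a standby takes over
```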
  19. F

    4 identical nodes | I need some recommendations

    Hello! I currently have 4 identical dedicated servers, each with: 2x E5-2680v4, 192 GB of RAM, 6x 1.2 TB SSD, 2x 10 Gbps SFP+. My question is: what is the recommended setup so that the data is replicated at least once (similar to RAID1 or RAID10, Ceph shared storage with 3 nodes) and at the same time...
  20. I

    Best approach to ceph mount on multiple vm?

    What is the best approach that will be easy to install and maintain through future upgrades? Currently on PVE 7.4 and Ceph 16.2.11. I now have 4 VMs (as a testing/PoC setup) but plan to grow to around 50 (I prefer to do it once, then clone the node if possible). For perspective, I have the...
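For mounting one CephFS inside many VMs, the kernel client plus an fstab entry is the low-maintenance route, since every clone carries the same two lines. A sketch; the monitor addresses, CephX user, secret file, and mount point are placeholders:

```shell
# One-off mount with the kernel CephFS client:
mount -t ceph 192.0.2.11:6789,192.0.2.12:6789:/ /mnt/cephfs \
    -o name=vmclient,secretfile=/etc/ceph/vmclient.secret

# Matching /etc/fstab entry so all clones come up identically:
# 192.0.2.11:6789,192.0.2.12:6789:/  /mnt/cephfs  ceph  name=vmclient,secretfile=/etc/ceph/vmclient.secret,_netdev  0  0
```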

