ceph

  1. Missing information in the documentation about creating a 2nd ring in Ceph, and the impossibility of creating one

    Hi, I run a 3-node Ceph cluster in my homelab: 3 mons, 3 mgrs, 3 MDS daemons. It has been running for about three years. Yesterday I installed three fresh nodes and migrated my cluster from the old nodes to the new ones, which are identical (GMKTEC M5 Plus, 24GB RAM each, beautiful devices, works like a charm so...
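
    Ceph itself has no ring concept in the corosync sense; what a "2nd ring" usually maps to is either a second corosync link or a separate Ceph cluster_network (see the config sketch under item 12). A minimal sketch of a second corosync link, assuming placeholder node names and subnets, edited in /etc/pve/corosync.conf (remember to bump config_version in the totem section):

      node {
          name: pve1
          nodeid: 1
          quorum_votes: 1
          ring0_addr: 10.10.0.11
          ring1_addr: 10.10.1.11    # second link on a separate subnet
      }
      # in the totem section, one interface block per link:
      interface {
          linknumber: 0
      }
      interface {
          linknumber: 1
      }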
  2. Low disk performance with Ceph pool storage

    Hi, I have disk performance issues with a Windows Server 2019 virtual machine. The storage pool of my Proxmox cluster is a Ceph pool (with SSD disks). The virtual machine runs software that writes in 4k block sizes, and the performance measured with benchmark tools is very low for 4k writes...
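
    Small sync writes on Ceph are latency-bound, so raw bandwidth figures say little here. A baseline fio run helps quantify it, e.g. from a Linux test VM on the same pool (or an equivalent fio build inside the Windows guest); a sketch, with the file path, size and queue depths as assumptions to adjust:

      fio --name=4k-randwrite --filename=/root/fio-test.dat --size=4G --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting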
  3. 3-Node Proxmox EPYC 7T83 (7763) • 10 × NVMe per host • 100 GbE Ceph + Flink + Kafka — sanity-check me!

    Hey folks, I'm starting a greenfield data pipeline project for my startup. I need real-time stream processing, so I'm upgrading some Milan hosts I have into a 3-node Proxmox + Ceph cluster. Per-node snapshot: Motherboard – Gigabyte MZ72-HB0; Compute & RAM – 2 × EPYC 7T83 (128c/256t) + 1 TB...
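
    With NVMe this dense and CPU to spare, a common Ceph tuning step is provisioning more than one OSD per NVMe device so a single OSD daemon doesn't bottleneck the drive. A sketch using ceph-volume, run per node, with device names as placeholders (pveceph osd create is the default one-OSD-per-device path):

      ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1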
  4. Slow disk migration from iSCSI (FlashME5) to CephStorage

    Hi, I’m experiencing very slow disk migration in my Proxmox cluster when moving a VM disk from FlashME5 (iSCSI multipath + LVM) to CephStorage (RBD). Migration from Ceph → FlashME5 is fast (~500–700 MB/s), but in reverse (FlashME5 → Ceph) I only get ~200 MB/s or less. Environment: Proxmox 8.x...
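
    To tell whether the ceiling is Ceph's write path or the migration pipeline itself, a raw pool benchmark is a quick first check; a sketch, with the pool name as a placeholder:

      rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
      rados -p testpool cleanup

    If rados writes far exceed ~200 MB/s, the limit is more likely the largely sequential, single-stream copy that disk moves use than Ceph itself.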
  5. Ceph and failed drives

    Good afternoon. We are new to Proxmox and looking to implement Ceph. We are planning to have 3 identical servers with 15 OSDs in each server. Each OSD will be an SSD with 1.6TB of storage. What I am after is how many drives can fail on one node before that node would be considered...
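
    Rough arithmetic for this layout, assuming the default replicated size 3 with host as the failure domain: each node holds one full copy, so 15 × 1.6 TB = 24 TB raw per node, 72 TB raw total, and about 24 TB usable cluster-wide. A failed OSD's data is re-replicated onto the surviving OSDs of the same node, so the practical limit is capacity, not redundancy: at 50% pool utilisation (~12 TB per node), a node needs roughly 12 TB ÷ 0.85 ≈ 14.1 TB of live capacity to stay under the default nearfull ratio, i.e. at least 9 of the 15 drives (9 × 1.6 = 14.4 TB), so up to 6 drives could fail over time. Redundancy itself survives the loss of a whole node either way.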
  6. [SOLVED] CephFS - SSD & HDD Pool

    Good evening, I have a three-node test cluster running PVE 8.4.1. pve-01 & pve-02: CPU(s) 56 x Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz (2 sockets); pve-03: CPU(s) 28 x Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz (1 socket); Kernel Version Linux 6.8.12-10-pve (2025-04-18T07:39Z)...
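
    The usual way to split SSD and HDD OSDs into separate pools is per-device-class CRUSH rules; a sketch, with the rule and pool names as placeholders:

      ceph osd crush rule create-replicated rule-ssd default host ssd
      ceph osd crush rule create-replicated rule-hdd default host hdd
      ceph osd pool set cephfs_data crush_rule rule-hdd
      ceph osd pool set cephfs_metadata crush_rule rule-ssd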
  7. [SOLVED] Ceph is kicking my ass

    I've tried installing Ceph, but I instantly just got a "Got Timeout 500". I tried deleting it altogether with these commands I found on this forum: rm -rf /etc/systemd/system/ceph*; killall -9 ceph-mon ceph-mgr ceph-mds; rm -rf /var/lib/ceph/mon/ /var/lib/ceph/mgr/ /var/lib/ceph/mds/; pveceph...
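
    Rather than hand-deleting paths, PVE ships a purge helper that removes a broken Ceph setup more cleanly; a sketch, run on each affected node, and only if no Ceph data matters:

      pveceph stop
      pveceph purge
      # if anything is left behind:
      rm -rf /etc/ceph /var/lib/ceph

    After that, pveceph install followed by pveceph init starts from a clean slate.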
  8. Feature request: Dedicated quorum node

    Hello, I'm a system engineer at an IT outsourcing company. My job is to design and deploy virtualization solutions. I've used Proxmox for many years now, but only recently has our company started to offer Proxmox as an alternative to ESXi/Hyper-V. I know Proxmox thoroughly and know its capabilities...
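
    Worth noting: for cluster quorum (as opposed to Ceph monitors), Proxmox already supports an external tie-breaker vote via corosync-qdevice, which can run on a small machine that is not a cluster member; a sketch, with the address as a placeholder:

      # on the external quorum host:
      apt install corosync-qnetd
      # on every cluster node:
      apt install corosync-qdevice
      # then, on one cluster node:
      pvecm qdevice setup 192.0.2.10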
  9. Ceph keeps crashing, but only on a single node

    I've been trying to figure this out for over a week and I'm getting nowhere. I have 3 machines with identical hardware, each with 3 enterprise NVMe drives: 2x 4TB Samsung M.2 PM983 and 1x 8TB Samsung U.2 PM983a (I think this is an OEM drive for Amazon). For some reason PVE2 keeps getting...
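
    Ceph keeps a record of daemon crashes that usually narrows this down quickly; a sketch (the crash ID and OSD number are placeholders, run on the affected node):

      ceph crash ls
      ceph crash info <crash-id>
      journalctl -u ceph-osd@2 -b --no-pager | tail -n 100

    If the same backtrace repeats only on PVE2, comparing dmesg and NVMe SMART data between the nodes is the next step.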
  10. [SOLVED] status unknown - vgs not responding

    Hi guys, I have a rather strange problem with my current Proxmox configuration. The status of 2 out of 3 nodes always goes to unknown about 3 minutes after restarting a node. In those 3 minutes the status is online. The node I restarted is working fine. Does anyone know what I have done wrong...
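
    A node showing unknown while actually working usually means pvestatd is blocked, often by a storage scan (the vgs in the title suggests an LVM/iSCSI device that stopped answering). A sketch of the first checks on an affected node:

      vgs                          # does this hang?
      systemctl status pvestatd
      systemctl restart pvestatd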
  11. Ceph on 10Gb NIC, which NVMe?

    Greetings, I have just created my account here, since I am assembling a homelab Proxmox cluster with 3 nodes, each having dual 10Gb NICs. I want to use Ceph as a backend for the VM storage for learning purposes. While I also wish to migrate my own infrastructure onto Proxmox soon, as I hope it...
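
    For Ceph, the drive property that matters more than headline speed is power-loss protection: OSDs issue constant small sync writes, and consumer NVMe without PLP collapses there. A sketch of the classic single-job sync-write check (device path is a placeholder; this writes to the raw device, so only use a blank disk):

      fio --name=plp-check --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

    Enterprise drives with PLP typically sustain tens of thousands of sync IOPS here; consumer drives often drop to a few hundred.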
  12. 3-Server Ceph Cluster, 100Gbit Backend / 10Gbit Frontend

    Hello, my hyperconverged Proxmox cluster with Ceph (19.2.1) has 3 servers. All have Threadripper Pro (Zen 3/Zen 4, 16-32 cores) and 256GB RAM, with initially 1 NVMe OSD (Kioxia CM7r) per server. The frontend network has multiple redundant 10Gbit NICs for VMs and clients; the backend network is only for Ceph...
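
    This split maps directly onto Ceph's two network options; a minimal sketch for /etc/pve/ceph.conf, with the subnets as placeholders (client/VM traffic uses public_network, OSD replication uses cluster_network):

      [global]
          public_network  = 10.10.10.0/24    # 10Gbit frontend
          cluster_network = 10.10.20.0/24    # 100Gbit backend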
  13. Network Config Suggestions w/Ceph

    I'm in need of some help from some of the seasoned professionals out there. I'm setting up a 5-node cluster with Ceph that will run around 30-40 VMs. Each node has two 4-port 10G NICs. I'm going to use LACP to bond one port from each NIC to create four connections on each server. However, this...
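
    A cross-NIC LACP bond in Proxmox's /etc/network/interfaces looks roughly like this (interface names, hash policy and address are placeholders; the switch ports must be configured as a matching LAG):

      auto bond0
      iface bond0 inet manual
          bond-slaves enp65s0f0 enp66s0f0
          bond-miimon 100
          bond-mode 802.3ad
          bond-xmit-hash-policy layer3+4

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.11/24
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0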
  14. Ceph networking guide

    In the docs there is just information about separating the storage and public Ceph networks, but it doesn't seem to be that easy, e.g. here: https://forum.proxmox.com/threads/vm-storage-traffic-on-ceph.117137/ VLAN setup: Proxmox MGMT, Ceph storage, Ceph monitors, VMs. When copying data between VMs...
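
    One point the linked thread hinges on: monitors and all client (VM disk) I/O live on the Ceph public network, while the cluster network carries only OSD replication and heartbeats, so placing monitors in a VLAN separate from the public network is what tends to break. A sketch of the two Ceph VLANs on one bond (VLAN IDs and addresses are placeholders):

      auto bond0.20
      iface bond0.20 inet static
          address 10.20.0.11/24    # Ceph public: mons + client I/O

      auto bond0.30
      iface bond0.30 inet static
          address 10.30.0.11/24    # Ceph cluster: OSD replication only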
  15. proxmox ceph slow ops, oldest one blocked for 2531 sec

    Hello, yesterday I updated all the hosts in my Proxmox cluster. After that, after restarting the OSDs one by one for the new version, client I/O in my Ceph cluster almost stopped. There are no problems on the network side or with disk health. Restarting all Ceph services and hosts did not solve the...
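
    For slow ops, the useful detail is which OSDs hold the blocked ops and at what stage they sit; a sketch (the OSD id is a placeholder, run on the node hosting it):

      ceph health detail
      ceph daemon osd.3 dump_ops_in_flight
      ceph daemon osd.3 dump_historic_slow_ops

    If ops are stuck peering after a rolling restart, marking the affected OSD down with ceph osd down <id> to force re-peering is often gentler than another round of full restarts.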
  16. 10G write speed in VM | Ceph?

    Hello everyone, I am currently planning a new Proxmox cluster for my homelab. Because my firewall will run in the cluster, the cluster needs shared storage for the VM disks and HA features. The firewall cannot run as a cluster, as only one external IP (PPPoE) is available and double...
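
    A rough sanity bound before picking hardware, assuming the default replicated size 3: every client write is stored three times, so if public and cluster traffic share one 10GbE link per node, sustained writes top out around 10 Gbit/s ÷ 3 ≈ 400 MB/s before replication saturates the NICs; a separate cluster network roughly doubles that ceiling. Hitting a full 10G of writes inside a single VM therefore takes faster links or link aggregation on the Ceph side.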
  17. New Proxmox cluster - config help

    Hi, I am currently installing a new Proxmox cluster to replace VMware and wanted to get some hints/pointers on the configuration. The setup is as follows: 7 hosts; 2x 128GB M.2 SSD for the OS per host; 12x 1.6TB SAS SSD per host; 4x 10Gb network interfaces. If I understand correctly, if you want shared storage...
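
    For the Ceph side, the bootstrap on a current PVE release is short; a sketch, with the repository choice, subnets and device name as placeholders:

      pveceph install --repository no-subscription
      pveceph init --network 10.0.10.0/24 --cluster-network 10.0.20.0/24
      pveceph mon create
      pveceph osd create /dev/sdb    # repeat per SAS SSD, on each node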
  18. Ceph MDS OOM killed on weekends

    Hi, I have a 4-node PVE cluster with CephFS deployed, and for a couple of months now I have been getting MDS OOM kills. Sometimes the MDS fails over to another node and gets stuck in clientreplay status, so I need to restart that MDS again to regain access to CephFS from all clients. I have checked scheduled jobs or...
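
    The first knob to check is the MDS cache limit versus node RAM: the MDS can transiently overshoot mds_cache_memory_limit (weekend backup jobs walking many files are a classic trigger), so the limit should sit well below available memory. A sketch (the 8 GiB value and MDS name are assumptions):

      ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB
      ceph tell mds.<name> cache status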
  19. Migrating Proxmox HA-Cluster with Ceph to new IP Subnet (Reup)

    Hey there, I am in the process of migrating my entire cluster, consisting of three nodes, to a new subnet. Old addresses: 10.10.20.11/24, 10.10.20.12/24, 10.10.20.13/24 New addresses: 10.10.0.10/24, 10.10.0.11/24, 10.10.0.12/24 I have already updated all necessary files, following a...
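
    The part that usually blocks these migrations is the monitor map: mon IPs are recorded in the monmap, not just in ceph.conf. A sketch of the documented re-IP procedure (stop the mons first; mon names are placeholders, and the legacy v1 port 6789 is shown):

      ceph-mon -i pve1 --extract-monmap /tmp/monmap
      monmaptool --print /tmp/monmap
      monmaptool --rm pve1 --rm pve2 --rm pve3 /tmp/monmap
      monmaptool --add pve1 10.10.0.10:6789 --add pve2 10.10.0.11:6789 --add pve3 10.10.0.12:6789 /tmp/monmap
      ceph-mon -i pve1 --inject-monmap /tmp/monmap   # repeat extract/inject per mon

    Afterwards, the mon_host entries in /etc/pve/ceph.conf need to point at the new subnet as well.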
  20. Upgrade cluster from 7.3-4 to 8.x with Ceph storage configuration

    Hello, I am running 7.3-4 in a cluster with 3 nodes and I plan to do an upgrade to 8.x. I know this full guide: https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#Actions_step-by-step but I can't find an answer to all my questions: should I first remove a server from the Ceph cluster ...
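
    There is no need to remove nodes from the Ceph cluster for this: the supported path is a rolling, in-place upgrade, with Ceph first brought to a release supported by both PVE versions (Pacific → Quincy on PVE 7), then PVE 7 → 8 node by node. A sketch of the per-node rhythm, under those assumptions:

      ceph osd set noout       # before starting the rolling upgrade
      pve7to8 --full           # checklist script, run on every node
      # per node: switch to the bookworm/8.x repos, apt dist-upgrade, reboot
      ceph osd unset noout     # once all nodes are done and Ceph is HEALTH_OK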