ceph

  1.

    Ceph: behavior when a node fails

    Good day, I have a question to check my understanding of how Ceph behaves when a node fails. Scenario: 3+ Ceph nodes in a 3/2 configuration; the Ceph storage, including CephFS, is 75+% full. On the sudden failure of a node, Ceph starts redistributing the PGs, or...
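
    The rebalancing described here starts once the failed node's OSDs are marked out, which by default happens after mon_osd_down_out_interval (600 s). For a planned outage the usual approach is to suppress that with the noout flag; a minimal sketch using standard Ceph commands:

      # Before planned maintenance: keep the down OSDs "in" so no backfill starts
      ceph osd set noout

      # ... reboot / service the node ...

      # Afterwards: clear the flag and watch recovery progress
      ceph osd unset noout
      ceph -w            # follow PG states (degraded -> active+clean)
      ceph osd df tree   # per-OSD fill level while backfill runs
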
  2.

    Ceph on HPE DL380 Gen10+ not working

    I have a Proxmox 8.4 cluster with two nodes and one qdevice, with Ceph Squid 19.2.1 recently installed and an additional device to maintain quorum for Ceph. Each node has one SATA SSD, so I have two OSDs (osd.18 and osd.19) created, and I have a pool called poolssd with both. Since ceph has been...
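
    One thing worth checking in a two-OSD setup like this: a replicated pool created with the default size 3 / min_size 2 can never place all three copies when only two hosts carry OSDs (the default CRUSH rule separates replicas by host), so its PGs stay undersized no matter what the qdevice does. A hedged check, assuming the pool name poolssd from the post:

      ceph health detail                 # look for undersized/degraded PGs
      ceph osd pool get poolssd size     # replica count the pool wants
      ceph osd pool get poolssd min_size # replicas required to accept I/O
      ceph osd tree                      # confirms only two hosts hold OSDs
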
  3.

    Sanity check for new installation

    Could we get some second and third opinions on a plan for a new datacenter deployment: 8 PVE hosts, each with two 16-core Xeons and 512 GB of registered RAM. We further have four 10GbE NICs in each machine; two of those will handle guest traffic, the other two are for storage traffic. Each machine will have...
  4.

    Ceph: number of placement groups for 5+ pools on 3 hosts x 1 OSD

    Hi. MY CONFIG: 3 hosts with PVE 8.4.1 and Ceph Reef, 10 Gb Ethernet dedicated Ceph network. Each host has a single OSD, an 8 TB CMR HDD. WHAT I DID: created 5 pools with default settings. WHAT I NEED TO DO: create 15 more pools. PROBLEM: Ceph started screaming "too many PGs per...
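
    For context on that warning: with 3 OSDs and 3 replicas, every OSD holds one copy of every PG, so the PG count per OSD is simply the sum of pg_num over all pools. Twenty pools at even the modest default of 32 PGs each would already be roughly 640 PGs per OSD, far above the default mon_max_pg_per_osd of 250. A sketch of the usual knobs, where the pool name and values are only illustrative:

      ceph osd pool autoscale-status                  # what the autoscaler would pick
      ceph osd pool set testpool pg_num 16            # shrink an oversized pool
      ceph osd pool set testpool pgp_num 16
      ceph config set global mon_max_pg_per_osd 500   # raise the ceiling (band-aid only)
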
  5.

    4-Node Stretched Cluster with Ceph

    Hey, I am planning to create a 4-node stretched cluster with Ceph. Having 4 nodes, i.e. 2 on each side, I need a quorum device for Proxmox and a tiebreaker monitor for Ceph. As I read it, the Ceph tiebreaker can even sit in the cloud or at another location, because no OSD is speaking with...
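
    For the Ceph side, stretch mode expects every monitor, including the tiebreaker, to be given a CRUSH location before it is enabled; the documented sequence looks roughly like the sketch below, with all monitor and site names invented for illustration:

      # Tag each monitor with its datacenter
      ceph mon set_location mon-a datacenter=site1
      ceph mon set_location mon-b datacenter=site1
      ceph mon set_location mon-c datacenter=site2
      ceph mon set_location mon-d datacenter=site2
      ceph mon set_location mon-tie datacenter=arbiter

      # Enable stretch mode with mon-tie as tiebreaker, using a CRUSH rule
      # that places replicas in both datacenters
      ceph mon enable_stretch_mode mon-tie stretch_rule datacenter
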
  6.

    [SOLVED] CEPH keeps recreating a pool named .mgr

    Hi all, as per the title, I have created a new 7-node Ceph cluster and noticed that there was a default pool named ".mgr". I deleted that pool and created a new one. After some restarts of the managers and monitors, I saw that the pool ".mgr" was recreated all by itself. Is this intended...
  7.

    Ceph - Which is faster/preferred?

    I am in the process of ordering new servers for our company to set up a 5-node cluster with all NVMe. I have a choice of either going with (4) 15.3TB drives or (8) 7.68TB drives. The cost is about the same. Are there any advantages/disadvantages in relation to Proxmox/Ceph performance? I think I...
  8.

    Missing documentation on creating a 2nd ring in Ceph, and inability to create a 2nd ring in Ceph.

    Hi, I run a 3-node cluster with Ceph in my homelab: 3 MONs, 3 MGRs, 3 MDSs. It has been running for about 3 years now. Yesterday I installed three fresh nodes and migrated my cluster from the old nodes to the new ones, which are identical (GMKTEC M5 Plus, 24 GB RAM each, beautiful devices, works like a charm so...
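
    A note on terminology that may save some searching: rings are a Corosync concept; Ceph itself has no rings, only an optional split into a public and a cluster (replication) network, both set in ceph.conf. A minimal sketch with invented subnets (OSDs need a restart to pick up a changed cluster_network):

      # /etc/pve/ceph.conf -- example subnets only
      [global]
          public_network  = 10.10.10.0/24   # client and MON traffic
          cluster_network = 10.10.20.0/24   # OSD replication and heartbeats
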
  9.

    Low disk performance with Ceph pool storage

    Hi, I have a disk performance issue with a Windows 2019 virtual machine. The storage pool of my Proxmox cluster is a Ceph pool (with SSD disks). The virtual machine runs software that writes with 4k block sizes, and the performance verified with benchmark tools is very low for 4k writes...
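
    To separate the Windows/virtio layer from the Ceph layer, it helps to benchmark 4k random writes directly against an RBD image; a hedged example with rbd bench, where the pool and a throwaway test image are placeholders:

      # 4k random writes straight to RBD, bypassing the guest entirely
      rbd bench --io-type write --io-size 4096 --io-pattern rand \
                --io-threads 16 --io-total 1G <pool>/<test-image>

    If the raw RBD numbers look reasonable, the usual guest-side suspects are the disk bus and cache mode; VirtIO SCSI with iothread and writeback cache tends to help small-block writes.
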
  10.

    3-Node Proxmox EPYC 7T83 (7763) • 10 × NVMe per host • 100 GbE Ceph + Flink + Kafka — sanity-check me!

    Hey folks, I'm starting a greenfield data pipeline project for my startup. I need real-time stream processing, so I'm upgrading some Milan hosts I have into a 3-node Proxmox + Ceph cluster. Per-node snapshot: Motherboard – Gigabyte MZ72-HB0; Compute & RAM – 2 × EPYC 7T83 (128 c/256 t) + 1 TB...
  11.

    Slow disk migration from iSCSI (FlashME5) to CephStorage

    Hi, I’m experiencing very slow disk migration in my Proxmox cluster when moving a VM disk from FlashME5 (iSCSI multipath + LVM) to CephStorage (RBD). Migration from Ceph → FlashME5 is fast (~500–700 MB/s), but in reverse (FlashME5 → Ceph) I only get ~200 MB/s or less. Environment: Proxmox 8.x...
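
    To tell whether the bottleneck is the Ceph write path or the migration job itself, a raw write benchmark against the target pool is a quick first check (the pool name is a placeholder):

      # 60 seconds of 4 MiB writes into the RBD pool, then clean up
      rados bench -p <poolname> 60 write -t 16 --no-cleanup
      rados -p <poolname> cleanup
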
  12.

    Ceph and failed drives

    Good afternoon. We are new to Proxmox and looking to implement Ceph. We are planning to have 3 identical servers with 15 OSDs in each server. Each OSD will be an SSD drive with 1.6 TB of storage. What I am after is how many drives can fail on one node before that node would be considered...
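
    With size 3 and the default failure domain of host, drive failures inside one node do not endanger the other two copies; the practical limit is capacity, because the PGs from the failed OSDs are backfilled onto that node's remaining drives. Rough arithmetic for this layout: after k failures, the surviving 15-k OSDs must absorb the node's data, so its utilisation rises by a factor of 15/(15-k) and has to stay under the nearfull/backfillfull/full ratios (0.85/0.90/0.95 by default). Quick checks:

      ceph osd dump | grep ratio   # current nearfull/backfillfull/full ratios
      ceph osd df tree             # per-OSD and per-host utilisation
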
  13.

    [SOLVED] CEPHFS - SSD & HDD Pool

    Good evening, I have a three-node test cluster running with PVE 8.4.1. pve-01 & pve-02: CPU(s) 56 x Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz (2 Sockets). pve-03: CPU(s) 28 x Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz (1 Socket). Kernel Version: Linux 6.8.12-10-pve (2025-04-18T07:39Z)...
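
    The usual way to run separate SSD and HDD pools on the same hosts is one CRUSH rule per device class; a sketch with made-up rule and pool names:

      # One replicated rule per device class (root=default, failure domain=host)
      ceph osd crush rule create-replicated rule-ssd default host ssd
      ceph osd crush rule create-replicated rule-hdd default host hdd

      # Pin the data pools (names are examples) to those rules
      ceph osd pool set cephfs_data_ssd crush_rule rule-ssd
      ceph osd pool set cephfs_data_hdd crush_rule rule-hdd

      ceph osd crush tree --show-shadow   # verify the per-class trees
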
  14.

    [SOLVED] Ceph is kicking my ass

    I've tried installing Ceph, but I instantly got a "Got Timeout 500". I tried deleting it altogether with these commands I found on this forum: rm -rf /etc/systemd/system/ceph*; killall -9 ceph-mon ceph-mgr ceph-mds; rm -rf /var/lib/ceph/mon/ /var/lib/ceph/mgr/ /var/lib/ceph/mds/; pveceph...
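
    For reference, the cleanup usually suggested on the forum for a half-installed Ceph ends with pveceph purge followed by a fresh install; a hedged sketch, to be run per node and only when there is no Ceph data worth keeping (the network CIDR is a placeholder):

      systemctl stop ceph.target                     # stop any leftover daemons
      pveceph purge                                  # wipe Ceph config/state on this node
      pveceph install --repository no-subscription   # reinstall the packages
      pveceph init --network <ceph-network-cidr>     # recreate /etc/pve/ceph.conf
      pveceph mon create
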
  15.

    Feature request: Dedicated quorum node

    Hello, I'm a system engineer at an IT outsourcing company. My job is to design and deploy virtualization solutions. I've used Proxmox for many years now, but only recently has our company started to offer Proxmox as an alternative to ESXi/Hyper-V. I know Proxmox thoroughly and know its capabilities...
  16.

    Ceph keeps crashing, but only on a single node

    I've been trying to figure this out for over a week and I'm getting nowhere. I have 3 machines with identical hardware, each with 3 enterprise NVMe drives: 2x 4TB Samsung M.2 PM983 and 1x 8TB Samsung U.2 PM983A (I think this is an OEM drive for Amazon). For some reason PVE2 keeps getting...
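
    When only one node misbehaves, the crash module plus the journal usually narrows it down; standard collection steps (the IDs are placeholders):

      ceph crash ls                       # crashes recorded cluster-wide
      ceph crash info <crash-id>          # backtrace and metadata for one crash
      journalctl -u ceph-osd@<id> -b -e   # per-OSD log on the affected node
      dmesg -T | grep -iE 'nvme|error'    # look for NVMe/PCIe errors underneath
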
  17.

    [SOLVED] status unknown - vgs not responding

    Hi guys, I have a rather strange problem with my current Proxmox configuration. The status of 2 out of 3 nodes always goes to unknown about 3 minutes after restarting a node. During those 3 minutes the status is online. The node I restarted is working fine. Does anyone know what I have done wrong...
  18.

    Ceph on 10Gb NIC, which NVMe?

    Greetings, I have just created my account here, since I am assembling a homelab Proxmox cluster with 3 nodes, each having dual 10Gb NICs. I want to use Ceph as a backend for the VM storage for learning purposes. While I also wish to migrate my own infrastructure onto Proxmox soon, as I hope it...
  19.

    3-Server Ceph Cluster with 100 Gbit Backend / 10 Gbit Frontend

    Hello, my hyperconverged Proxmox cluster with Ceph (19.2.1) has 3 servers. All have Threadripper Pro (Zen3/Zen4, 16-32 cores), 256 GB RAM, initially with 1 NVMe OSD (Kioxia CM7r) per server. The frontend network has multiple redundant 10 Gbit NICs for VMs and clients. The backend network is only for Ceph...
  20.

    Network Config Suggestions w/Ceph

    I'm in need of some help from some of the seasoned professionals out there. I'm setting up a 5-node cluster with Ceph that will run around 30-40 VMs. Each node has two 4-port 10G NICs. I'm going to use LACP to bond one port from each NIC to create four connections on each server. However, this...
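
    For reference, an LACP bond in /etc/network/interfaces on PVE typically looks like the sketch below; the interface names, address, and hash policy are assumptions, and the switch ports must be configured for 802.3ad as well. Keep in mind that LACP balances per flow, so a single stream (e.g. one live migration) still tops out at the speed of one member link.

      auto bond0
      iface bond0 inet static
          address 10.10.10.11/24            # storage/Ceph network, example subnet
          bond-slaves enp65s0f0 enp66s0f0   # one port from each NIC (example names)
          bond-miimon 100
          bond-mode 802.3ad
          bond-xmit-hash-policy layer3+4    # spread flows across both members
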