Ceph

  1.

    CEPH: small cluster with multiple OSDs per NVMe drive

    Hello community! We have deployed our first small Proxmox cluster along with Ceph and so far we've had a great experience with it. We're running a traditional VM workload (most VMs are idling, and most of the Ceph workload comes from bursts of small files, with the exception of a few SQL servers that...
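
    Splitting one NVMe into several OSDs is usually done with ceph-volume's batch mode; a minimal sketch, where the device path is an assumption:

        # hypothetical device path; creates two OSDs on one NVMe drive
        ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
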
  2.

    Questions Regarding Automation

    I have been working with Proxmox for about a year and a half now and feel pretty comfortable with the platform. I can create VMs/containers, manage storage (using Ceph), handle networking, create cloud-init templates, etc. Now I want to take the next step and automate my infrastructure. I have some...
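
    As a first automation step, the stock CLI already covers cloning cloud-init templates; a minimal sketch, where the VMIDs, name, and addresses are assumptions:

        # assumed IDs: template 9000, new VM 120
        qm clone 9000 120 --name web01 --full
        qm set 120 --ipconfig0 ip=192.168.1.120/24,gw=192.168.1.1
        qm set 120 --sshkeys ~/.ssh/id_rsa.pub
        qm start 120
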
  3.

    [SOLVED] New cluster - Ceph = got timeout (500)

    Hey, can someone please point me in the right direction? 4 nodes, all installed with the PVE ISO, so no firewall in play. Each node has a Ceph network: auto bond1 iface bond1 inet static address 10.10.10.1/24 (node1 10.10.10.1/24, node2 10.10.10.2/24, node3 10.10.10.3/24, etc.)...
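
    For comparison, a complete ifupdown2 stanza for a dedicated Ceph bond would look roughly like this; the slave NIC names and bond mode are assumptions:

        auto bond1
        iface bond1 inet static
            address 10.10.10.1/24
            bond-slaves eno3 eno4    # assumed NIC names
            bond-mode 802.3ad        # assumed; must match the switch config
            bond-miimon 100
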
  4.

    Ceph: Behavior when a node fails

    Good day, I have a comprehension question about Ceph's behavior when a node fails. Scenario: 3+ Ceph nodes in a 3/2 configuration; the Ceph storage, incl. CephFS, is 75+% full. On the sudden failure of a node, Ceph begins to redistribute the PGs, i.e...
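
    A quick capacity check shows why a 75+% fill level matters here (equal data distribution assumed):

        # 4 nodes, cluster 75% full: after one node fails,
        # 3 survivors must absorb the data of 4
        0.75 * 4 / 3 = 1.00   # survivors hit 100% -> cluster stalls
        # with exactly 3 nodes and size=3 there is no spare failure
        # domain, so PGs simply stay degraded until the node returns
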
  5.

    Ceph on HPE DL380 Gen10+ not working

    I have a Proxmox 8.4 cluster with two nodes and one qdevice, with Ceph Squid 19.2.1 recently installed and an additional device to maintain quorum for Ceph. Each node has one SATA SSD, so I have two OSDs created (osd.18 and osd.19), and I have a pool called poolssd with both. Since Ceph has been...
  6.

    Sanity check for new installation

    Could we get some 2nd and 3rd opinions on a plan for a new datacenter deployment? 8 PVE hosts, each with two 16-core Xeons and 512 GB of registered RAM. We further have 4 10GbE NICs in each machine; two of those should handle guest traffic, the other two are for storage traffic. Each machine will have...
  7.

    Ceph: number of placement groups for 5+ pools on 3 hosts x 1 OSD

    Hi. MY CONFIG: 3 hosts with PVE 8.4.1 and Ceph Reef, dedicated 10 Gb Ethernet Ceph network. Each host has a single OSD, an 8 TB CMR HDD. WHAT I DID: Created 5 pools with default settings. WHAT I NEED TO DO: Create 15 more pools. PROBLEM: Ceph started screaming "too many PGs per...
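
    The warning follows directly from the PG-replica count per OSD; a rough calculation, assuming the default pg_num of 32 per pool, size 3, and the default mon_max_pg_per_osd of 250:

        # PGs per OSD = sum over pools of (pg_num * size) / OSD count
        #  5 pools:  5 * 32 * 3 / 3 = 160 PGs per OSD (already high)
        # 20 pools: 20 * 32 * 3 / 3 = 640 PGs per OSD (far above 250)
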
  8.

    4-Node Stretched Cluster with Ceph

    Hey, I am planning to create a 4-node stretched cluster with Ceph. Having 4 nodes, 2 on each side, I need a quorum device for Proxmox and a tiebreaker monitor for Ceph. As I read, the Ceph tiebreaker can even be in the cloud or at another location, because no OSD speaks with...
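
    Per the Ceph stretch-mode documentation, the tiebreaker monitor simply gets its own location; a sketch, where the mon names, site names, and the pre-created stretch_rule CRUSH rule are assumptions:

        ceph mon set_location a datacenter=site1
        ceph mon set_location b datacenter=site1
        ceph mon set_location c datacenter=site2
        ceph mon set_location d datacenter=site2
        ceph mon set_location e datacenter=site3   # tiebreaker
        ceph mon enable_stretch_mode e stretch_rule datacenter
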
  9.

    [SOLVED] CEPH keeps recreating a pool named .mgr

    Hi all, as per the title, I have created a new 7-node Ceph cluster and noticed that there was a default pool named ".mgr" there. I deleted that pool and created a new one. After some restarts of the managers and monitors, I saw that the pool ".mgr" had been recreated all by itself. Is this intended...
  10.

    Ceph - Which is faster/preferred?

    I am in the process of ordering new servers for our company to set up a 5-node cluster with all NVMe. I have a choice of either going with (4) 15.3TB drives or (8) 7.68TB drives. The cost is about the same. Are there any advantages/disadvantages in relation to Proxmox/Ceph performance? I think I...
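
    One measurable difference is the recovery load per failed drive; rough numbers, assuming an 80% fill level:

        # data to re-replicate when a single OSD fails:
        # 4 x 15.3 TB layout: 15.3 * 0.8 ≈ 12.2 TB from one lost OSD
        # 8 x 7.68 TB layout: 7.68 * 0.8 ≈  6.1 TB, spread over more peers
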
  11.

    Missing documentation on creating a 2nd ring in Ceph, and inability to create a 2nd ring in Ceph.

    Hi, I run a 3-node cluster with Ceph in my homelab: 3 mons, 3 mgrs, 3 MDSs. I've been running it for about 3 years now. Yesterday I installed 3 fresh nodes and migrated my cluster from the old nodes to the new ones, which are identical (GMKTEC M5 Plus, 24 GB RAM each, beautiful devices, works like a charm so...
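
    Ceph itself has no "rings" in the corosync sense; the closest equivalent is separating the public and cluster networks in /etc/pve/ceph.conf. A minimal sketch, with assumed subnets:

        [global]
            public_network  = 10.10.10.0/24   # client/monitor traffic (assumed)
            cluster_network = 10.10.20.0/24   # OSD replication traffic (assumed)
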
  12.

    Low disk performance with CEPH pool storage

    Hi, I have disk performance issues with a Windows 2019 virtual machine. The storage pool of my Proxmox cluster is a Ceph pool (with SSD disks). The virtual machine runs software that writes using 4k block sizes, and the performance verified with benchmark tools is very low on 4k writes...
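
    To reproduce the numbers outside the application, a typical fio run for 4k random writes looks like this (file name and sizes are assumptions; inside a Windows guest, --ioengine=windowsaio replaces libaio):

        fio --name=4k-randwrite --filename=testfile --size=4G --bs=4k \
            --rw=randwrite --ioengine=libaio --direct=1 --iodepth=32 \
            --numjobs=1 --runtime=60 --time_based --group_reporting
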
  13.

    3-Node Proxmox EPYC 7T83 (7763) • 10 × NVMe per host • 100 GbE Ceph + Flink + Kafka — sanity-check me!

    Hey folks, I'm starting a greenfield data pipeline project for my startup. I need real-time stream processing, so I'm upgrading some Milan hosts I have into a 3-node Proxmox + Ceph cluster. Per-node snapshot: Motherboard – Gigabyte MZ72-HB0; Compute & RAM – 2 × EPYC 7T83 (128c/256t) + 1 TB...
  14.

    Slow disk migration from iSCSI (FlashME5) to CephStorage

    Hi, I’m experiencing very slow disk migration in my Proxmox cluster when moving a VM disk from FlashME5 (iSCSI multipath + LVM) to CephStorage (RBD). Migration from Ceph → FlashME5 is fast (~500–700 MB/s), but in reverse (FlashME5 → Ceph) I only get ~200 MB/s or less. Environment: Proxmox 8.x...
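
    A useful first step is to baseline the pool itself, separate from the migration path; a sketch using Ceph's built-in benchmark (pool name taken from the post):

        rados bench -p CephStorage 30 write --no-cleanup
        rados bench -p CephStorage 30 seq
        rados -p CephStorage cleanup   # remove the benchmark objects
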
  15.

    Ceph and failed drives

    Good afternoon. We are new to Proxmox and looking to implement CEPH. We are planning to have 3 identical servers with 15 OSDs in each server. Each OSD will be an SSD drive with 1.6 TB of storage. What I am after is how many drives can fail on one node before that node would be considered...
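
    With failure domain "host" and only 3 hosts at size 3, each host holds one full replica, so failed drives must be re-replicated within the same node; a rough bound, with the fill factor assumed:

        # 15 OSDs x 1.6 TB = 24 TB raw per node; at fill factor f,
        # surviving OSDs must hold the node's data: (15 - k) * 1.6 >= 24 * f
        # f = 0.60 -> k <= 6 drives to physically fit,
        # but only k <= 4 before the default 0.85 nearfull ratio is hit
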
  16.

    [SOLVED] CEPHFS - SSD & HDD Pool

    Good evening, I have a three-node test cluster running with PVE 8.4.1. pve-01 & pve-02: CPU(s) 56 x Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz (2 sockets); pve-03: CPU(s) 28 x Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz (1 socket); Kernel Version: Linux 6.8.12-10-pve (2025-04-18T07:39Z)...
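
    Steering CephFS data to SSD vs. HDD is done with device-class CRUSH rules plus a directory layout; a sketch, where the rule, pool, and mount-path names are assumptions:

        ceph osd crush rule create-replicated rule_ssd default host ssd
        ceph osd crush rule create-replicated rule_hdd default host hdd
        ceph osd pool set cephfs_data_ssd crush_rule rule_ssd
        ceph osd pool set cephfs_data_hdd crush_rule rule_hdd
        # the pool must first be added via: ceph fs add_data_pool <fs> cephfs_data_hdd
        setfattr -n ceph.dir.layout.pool -v cephfs_data_hdd /mnt/pve/cephfs/archive
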
  17.

    [SOLVED] Ceph is kicking my ass

    I've tried installing Ceph, but I instantly got a "Got Timeout (500)". I tried deleting it altogether with these commands I found on this forum: rm -rf /etc/systemd/system/ceph*; killall -9 ceph-mon ceph-mgr ceph-mds; rm -rf /var/lib/ceph/mon/ /var/lib/ceph/mgr/ /var/lib/ceph/mds/; pveceph...
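
    For reference, recent PVE releases ship a supported cleanup path instead of hand-deleting files; a sketch (the flags should be checked against the installed pveceph version):

        pveceph stop                  # stop all Ceph services on this node
        pveceph purge --crash --logs  # remove Ceph state, crash dumps and logs
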
  18.

    Feature request: Dedicated quorum node

    Hello, I'm a system engineer at an IT outsourcing company. My job is to design and deploy virtualization solutions. I've used Proxmox for many years now, but only recently has our company started to offer Proxmox as an alternative to ESXi/Hyper-V. I know Proxmox thoroughly and know its capabilities...
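
    The closest existing mechanism is a corosync QDevice on a small external host; a minimal sketch, with the QDevice IP assumed:

        # on every cluster node: apt install corosync-qdevice
        # on the external quorum host: apt install corosync-qnetd
        pvecm qdevice setup 192.168.1.50   # assumed QDevice IP
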
  19.

    Ceph keeps crashing, but only on a single node

    I've been trying to figure this out for over a week and I'm getting nowhere. I have 3 machines with identical hardware, each with 3 enterprise NVMe drives: 2x 4 TB Samsung M.2 PM983 and 1x 8 TB Samsung U.2 PM983A (I think this is an OEM drive for Amazon). For some reason PVE2 keeps getting...
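
    A sensible starting point is the crash module plus the failing daemon's logs; a sketch, with the OSD id and time window assumed:

        ceph crash ls                       # list collected crash reports
        ceph crash info <crash-id>          # backtrace for one report
        journalctl -u ceph-osd@2 --since "-1h"
        dmesg | grep -i nvme                # look for drive-level errors
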
  20.

    [SOLVED] status unknown - vgs not responding

    Hi guys, I have a rather strange problem with my current Proxmox configuration. The status of 2 out of 3 nodes always goes to unknown about 3 minutes after restarting a node. For those 3 minutes the status is online. The node I restarted is working fine. Does anyone know what I have done wrong...