cluster ceph scaling

  1. Proxmox 3-node cluster, 2 with NVMe, 1 with SATA SSD

    I have an older server with SATA SSDs (Intel S3510) and a 3rd-gen EPYC server with Intel P4510 NVMe drives. I have a budget for another server (4th-gen EPYC with NVMe SSDs). I'm planning to build a 3-node Ceph cluster. If I mix those three, would they be bottlenecked by the SATA SSD server (see the replication-latency sketch after this list)? I will run database-intensive...
  2. Replace 3TB drives with 1TB drives (Ceph)

    My Ceph cluster has 3x 3TB and 3x 1TB drives with SSD WAL and DB. The write speeds are kinda meh on my VMs; from what I understand, the 3TB drives will get 3x the write requests of the 1TB drives. Is my understanding correct (see the CRUSH-weight sketch after this list)? And would it be better if I swapped my 3TB drives with 1TB drives, making it 2...
  3. [SOLVED] CephFS max file size

    Hello everyone, I have 3 identical servers, each with a 16-core/32-thread AMD EPYC, 256GB RAM, 2x 1TB NVMe SSD in ZFS RAID1 as the OS, 4x 3.2TB NVMe SSD as Ceph storage for VM drives, and 2x 4TB HDD in RAID0 for fast local backup. These three servers are clustered together and connected with dedicated...
  4. [SOLVED] Some nodes ignore backup jobs

    Hi, we are currently building a Proxmox cluster based on Ceph storage and unfortunately keep running into more and more teething problems. Right now, for example, some nodes ignore all backup jobs. Nothing is logged; if you start the backups manually, they run through flawlessly, they just start...
  5. [SOLVED] Optimal number of Ceph monitor/manager/MDS

    Hi all, I'm currently running a cluster with 15 nodes and I plan to add more in the near future. As for Ceph, I have 5 monitors, 5 managers, and 5 metadata servers, which currently manage 60+ OSDs. Do you advise adding more monitors/managers/MDS? Should I stick with odd numbers because of quorum (see the quorum sketch after this list)...
  6. Evaluate my network setup?

    Hi all, pretty new to Proxmox and Ceph. We've been running a test cluster on three nodes, each on a Gigabit network that also carries the Ceph traffic, and so far we are satisfied with performance and resiliency. So we're planning to deploy a production cluster soon. The new cluster is starting...
  7. Proxmox Ceph converged (HCI) or external Ceph

    Hello, at the moment we have: 6x Proxmox nodes (2x 10 cores, 2 nodes have 2x 14 cores; 512 GB RAM; 4x 10 GbE, as 2x 10 GbE LACP for network and Corosync plus 2x 10 GbE LACP for storage), 3x Ceph monitors (dual core, 4 GB RAM, 2x 10 GbE LACP), 4x Ceph OSD nodes (2x 6 cores @ 2.6 GHz, 96 GB RAM, 4x 10 GbE (2x...
  8. New Proxmox cluster: separate or join to existing?

    Hi all, we have a running production cluster of 4 nodes on version 5.4 with Ceph over SATA disks, 4 monitors, and 1 Gbit/s network cards. We have purchased 4 new servers, all with SSDs and 10 GbE NICs. As at least half of the containers and VMs on the existing SATA cluster will have to be migrated to...
  9. Proxmox cluster scaling best practices

    Hello, I have a 5-node PVE cluster with Ceph (2 OSDs per node on Intel DC SSDs, on a dedicated 10G network). I am currently using it for LXC containers, and my idea is to always keep one node empty so I can move containers onto it in case of a node failure (see the spare-capacity sketch after this list). I will need to scale up in the near future, probably to a...
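
A quick illustration of the worry in thread 1: in a replicated Ceph pool a client write is acknowledged only after every replica OSD has committed it, so with three hosts and size=3 every write keeps one replica on the SATA box. The replication-latency sketch below is not Ceph code; the node names and latency figures are made up purely to show the max-of-replicas effect.

```python
# Minimal sketch (not Ceph code): replicated writes are acknowledged only
# after all replica OSDs have committed, so latency follows the slowest one.
# Node names and latencies are hypothetical, for illustration only.
LATENCY_MS = {"nvme-node-1": 0.3, "nvme-node-2": 0.3, "sata-node-3": 2.0}

def replicated_write_latency_ms(replica_nodes):
    """One write's commit latency = the slowest replica's commit latency."""
    return max(LATENCY_MS[node] for node in replica_nodes)

# With 3 hosts and pool size=3, every placement group has a replica on each
# host, so every single write waits on the SATA node:
print(replicated_write_latency_ms(["nvme-node-1", "nvme-node-2", "sata-node-3"]))
# -> 2.0 ms, i.e. writes land at roughly SATA speed despite the NVMe nodes.
```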
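For the 3TB-vs-1TB question in thread 2: by default an OSD's CRUSH weight equals its capacity, so data (and therefore writes) is distributed roughly in proportion to drive size. The CRUSH-weight sketch below is back-of-the-envelope arithmetic, not Ceph code, with the drive mix taken from the thread.

```python
# Minimal sketch (not Ceph code): with default CRUSH weights equal to
# capacity, each OSD's expected share of data/writes is weight / total.
def expected_shares(capacities_tb):
    total = sum(capacities_tb)
    return [c / total for c in capacities_tb]

osds_tb = [3, 3, 3, 1, 1, 1]          # the mix described in the thread
for capacity, share in zip(osds_tb, expected_shares(osds_tb)):
    print(f"{capacity} TB OSD -> ~{share:.0%} of writes")
# Each 3TB OSD takes ~25% of writes versus ~8% per 1TB OSD, i.e. the big
# drives really do see about 3x the request rate, as the poster suspected.
```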
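On the odd-number question in thread 5: Ceph monitors need a strict majority in quorum, so an even count adds a vote without adding fault tolerance. The quorum sketch below is plain Python showing that arithmetic, not Ceph code.

```python
# Minimal sketch: monitor quorum needs a strict majority of monitors up.
def quorum_size(mons: int) -> int:
    return mons // 2 + 1               # smallest strict majority

def tolerable_failures(mons: int) -> int:
    return mons - quorum_size(mons)

for n in (3, 4, 5, 6, 7):
    print(f"{n} mons: quorum={quorum_size(n)}, tolerates {tolerable_failures(n)} down")
# 3 and 4 monitors both tolerate only 1 failure; 5 and 6 both tolerate 2.
# The extra even monitor buys no resilience, which is why odd counts are advised.
```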
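And for the keep-one-node-empty plan in thread 9, the trade-off is N+1 arithmetic: the fraction of the cluster reserved for failover shrinks as the node count grows. The spare-capacity sketch below uses example node counts only.

```python
# Minimal sketch: usable fraction of an N-node cluster when one node is
# kept empty as a failover target (example node counts, not a recommendation).
def usable_fraction(nodes: int, spares: int = 1) -> float:
    return (nodes - spares) / nodes

for n in (5, 7, 10):
    print(f"{n} nodes, 1 spare -> {usable_fraction(n):.0%} usable")
# 5 nodes: 80% usable, 7 nodes: ~86%, 10 nodes: 90% -- the cost of the spare
# node gets relatively cheaper as the cluster scales out.
```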
