ceph cluster

  1. Using PBS as 4th Ceph node in 3-node Proxmox cluster

    Good afternoon. In my homelab I want to make a 3-node Proxmox cluster with Ceph. I also want to add a 4th separate host with PBS for backups. Each node in the Proxmox cluster will have an SSD for a 3/2 replicated Ceph pool for VM/CT disks. I also want to add a spinning HDD to each node for...
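
    A minimal sketch of how that layout might be created on the CLI, with the pool and rule names as placeholders; a device-class CRUSH rule keeps the HDD pool off the SSDs:

        # Replicated rules pinned to a device class (root "default", failure domain "host")
        ceph osd crush rule create-replicated ssd-rule default host ssd
        ceph osd crush rule create-replicated hdd-rule default host hdd

        # 3/2 replicated pool on the SSDs for VM/CT disks
        pveceph pool create vm-pool --size 3 --min_size 2 --crush_rule ssd-rule

        # Separate pool on the spinning HDDs
        pveceph pool create hdd-pool --size 3 --min_size 2 --crush_rule hdd-rule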
  2. 3-node HA Proxmox cluster with Ceph: node crashed, replace node

    We have a 3-node HA Proxmox hypervisor cluster with Ceph. One of the nodes (the third node) has a problem and is shown as offline in the cluster. We would like to examine the 3rd node. In order to do so we disconnected it from the cluster, but this causes problems for the two working nodes and we cannot...
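
    For orientation, the usual first checks and the eventual removal step, sketched under the assumption that the node will be rebuilt rather than repaired in place (names are placeholders):

        # Confirm the two surviving nodes still hold quorum (2 of 3 votes)
        pvecm status

        # Once the dead node will never return with the same identity, remove it
        pvecm delnode <nodename>

        # If it also hosted a Ceph monitor, drop that too
        ceph mon remove <nodename>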
  3. Ceph configuration disappeared?

    I was in the middle of rebalancing after a set of OSDs went offline and came back up, only to find that about 20 minutes later the entire Ceph cluster was unresponsive. Upon looking into the issue, ceph.conf is now completely empty and I have no clue how to proceed without having to manually...
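
    If the monitors themselves are still running, a minimal config can be regenerated from them instead of being rebuilt by hand; a sketch, with the monitor address as a placeholder (on Proxmox, /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf):

        # Point the CLI at a surviving monitor directly, then dump a minimal config
        ceph -m 10.0.0.11 config generate-minimal-conf > /etc/pve/ceph.conf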
  4. [SOLVED] CephFS max file size

    Hello everyone, I have 3 identical servers with a 16-core/32-thread AMD EPYC, 256GB RAM, 2x 1TB NVMe SSD in ZFS RAID1 as the OS, 4x 3.2TB NVMe SSD as Ceph storage for VM drives, and 2x 4TB HDD in RAID0 for fast local backup. These three servers are clustered together and connected with dedicated...
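
    For reference, CephFS enforces a per-filesystem max_file_size, 1 TiB by default, which can be inspected and raised; the filesystem name cephfs below is an assumption:

        # Show the current limit (in bytes); the default is 1 TiB
        ceph fs get cephfs | grep max_file_size

        # Raise it, e.g. to 16 TiB (the value is in bytes)
        ceph fs set cephfs max_file_size 17592186044416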
  5. Ceph cluster configuration to imitate RAID 1 on a cluster node

    Good day everyone, I am currently working my way into the configuration of a Ceph cluster as a storage solution. I have managed to create a simple Ceph cluster with the default configuration. 2 nodes with an identical setup: - 2 HDDs - 2 SSDs...
  6. Proxmox 8/Ceph Cluster - High error rate on Ceph network

    Hello Proxmox community, we are encountering a high error rate (errors, drops, overruns and frames) on the Ceph network interfaces of our newly set up four-machine Proxmox 8/Ceph cluster when writing data (e.g. from /dev/urandom to a file on a virtual machine for testing). bond0...
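
    When chasing counters like these, the kernel and NIC statistics are the usual first stop; a sketch, with the interface names as examples:

        # Per-interface RX/TX error, drop and overrun counters
        ip -s link show bond0

        # NIC-level statistics for each bond member
        ethtool -S enp65s0f0 | grep -iE 'err|drop|disc'

        # MTU mismatches between hosts and switches are a common culprit
        ip link show bond0 | grep mtu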
  7. ceph tuning

    First, a disclaimer: this is a lab, definitely not a reference design. The point was to do weird things, learn how Ceph reacts, and then learn how to get myself out of whatever weird scenario I ended up with. I've spent a few days on the forum; it seems many of the resolutions were people...
  8. Ceph traffic on Omni-Path or InfiniBand NICs and switches?

    Hi all, we are looking into deploying a new refurbished NVMe HCI Ceph Proxmox cluster. At this point we are looking at 7 nodes, each with 2 NVMe OSD drives and room to expand by 2 more NVMe OSDs. As we would quickly saturate a 25GbE link, we should be looking into 40/50/100 GbE links and switches...
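
    The back-of-the-envelope arithmetic behind that conclusion, assuming roughly 3 GB/s of sustained throughput per NVMe drive:

        2 drives x 3 GB/s = 6 GB/s  = 48 Gbit/s per node
        4 drives x 3 GB/s = 12 GB/s = 96 Gbit/s per node

    Replication traffic comes on top of that, so a single 25 GbE link saturates immediately and 100 GbE is the comfortable target.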
  9. 3 nodes cluster messed up

    Hi, here is the story: a three-node cluster of three Mac Minis, each with an internal disk and an external disk. Cluster configured, Ceph configured with replication, HA configured. After a few hours of running I discovered that the local-lvm disks on Node2 and Node3 are offline. A short...
  10. [SOLVED] Replacing a Ceph SSD in a 4-node cluster

    Hello, I am facing the challenge that some of the disks are near the end of their life according to SMART (SSD wearout). I have now moved the VMs to the other nodes and shut down the server. Then I swapped the disk and restarted the server. Is there a...
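
    The usual replacement flow on Proxmox, sketched with the OSD id (12) and device path as examples:

        # Take the worn OSD out and let Ceph rebalance off it
        ceph osd out osd.12

        # Once 'ceph -s' reports health OK again, stop and remove the OSD
        systemctl stop ceph-osd@12
        pveceph osd destroy 12 --cleanup

        # After swapping the disk, create the replacement OSD on it
        pveceph osd create /dev/sdX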
  11. Compute and Ceph Storage Cluster

    Hi everyone, I want to ask about best practice for the 4 nodes that I have: node-1 DL380, dual Xeon Gold, 256GB RAM; node-2 DL380, dual Xeon Silver, 96GB RAM; node-3 DL380, single Xeon Silver, 32GB RAM, 3x 1.92TB SSD; node-4 DL380, single Xeon Silver, 32GB RAM, 4x 1.92TB SSD...
  12. [SOLVED] Ceph health warning: unable to load:snappy

    Hello, after a server crash I was able to repair the cluster. The health check looks OK, but there is this warning for 68 OSDs: unable to load:snappy. All affected OSDs are located on the same cluster node, so I checked the version of the related file, libsnappy1v5; it was 1.1.9. Comparing this file...
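
    A quick way to check the installed library against the repository and repair it if the on-disk copy is damaged, sketched with standard Debian tooling:

        # Installed vs. candidate version of the compression library
        apt policy libsnappy1v5

        # Verify the package's files against their recorded checksums
        dpkg --verify libsnappy1v5

        # Reinstall if the on-disk file turns out to be corrupted
        apt install --reinstall libsnappy1v5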
  13. Multi-region Ceph + mixed host HW?

    I'm facing a bit of a challenge with my current project and I'm hoping that someone here might have some wisdom to share. For context, my end goal is to have a self-hosted S3 service replicated across 3 data centers (1 West coast, 2 East coast). I have 6 storage servers (2 for each DC) that are...
  14. ceph: 5 nodes with 16 drives vs 10 nodes with 8 drives

    Hi, I'm designing a Ceph cluster for our VFX studio. We have about 32 artist seats, and I need high sequential read and write speeds, not so much IOPS. I will use whatever it takes to put the best possible hardware inside each node, but I have to decide now whether to go with many nodes with fewer...
  15. Extend LVM of Ceph DB/WAL Disk

    I have a 3 node Ceph cluster running Proxmox 7.2. Each node has 4 x HDD OSDs and the 4 OSDs share an Intel Enterprise SSD for the Ceph OSD database (DB/WAL) on each node. I am going to be adding a 5th OSD HDD to each node and also add an additional Intel Enterprise SSD on each node for use with...
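
    For the record, a DB/WAL logical volume can be grown in place and BlueFS then told to claim the new space; a sketch, with the VG/LV names and OSD id as assumptions:

        # Grow the logical volume backing the OSD's DB (names are examples)
        lvextend -L +60G /dev/ceph-db/db-osd-0

        # With the OSD stopped, let BlueFS expand into the new space
        systemctl stop ceph-osd@0
        ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
        systemctl start ceph-osd@0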
  16. Create 2 pools on a 5-node Proxmox cluster with 20 OSDs

    Hi all, I have been running a 5-node Proxmox cluster for a while now. I have installed Ceph and have come to configure the Ceph pools: I need two pools, one for VMs and the other for containers. I need help with the configuration of those pools' size, min size and number of PGs, as pgcalc said to put it at 512 and from...
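
    The pgcalc arithmetic for this layout, sketched: with 20 OSDs, a target of ~100 PGs per OSD and replica size 3, the budget is 20 x 100 / 3 ≈ 667 PGs across all pools; 512 is the answer for a single pool, while two equal pools land at 256 PGs each (the power of two nearest 333). Pool names below are examples:

        pveceph pool create vm-pool --size 3 --min_size 2 --pg_num 256
        pveceph pool create ct-pool --size 3 --min_size 2 --pg_num 256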
  17. proxmox 6.2 ceph issue

    Hi all! I really hope someone can guide me with this issue because I do not know Ceph well and I am not sure about the next steps. I have a 3-node cluster, same hardware. Ceph was working without any issue until a reboot I had to do because of a network issue. Now, on one node, ceph is...
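
    The usual first diagnostics in a situation like this, with the OSD id as an example:

        # Overall cluster state and which daemons are missing
        ceph -s

        # Which Ceph services failed on the affected node
        systemctl --failed | grep ceph

        # Logs of a failed OSD (id 0 as an example)
        journalctl -u ceph-osd@0 --since "-1h"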
  18. Changing Ceph configuration

    Hello, I want to change the Ceph config and use the bridge instead of the VLANs I have created. But when I change the ceph.conf file to the IP network of the bridge, my Ceph doesn't work anymore. Any suggestions on how I can do that? My main issue is that I am not getting enough read/write on the...
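
    For context, the move happens via two keys in /etc/pve/ceph.conf (the subnets below are placeholders); note that the monitors keep their old addresses until they are destroyed and re-created on the new network:

        [global]
            # Client-facing traffic (monitors, clients, metadata)
            public_network = 10.10.10.0/24
            # OSD replication and heartbeat traffic
            cluster_network = 10.10.20.0/24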
  19. Ceph block.db and block.wal

    Hello, I'm looking over the Proxmox documentation for building a Ceph cluster here: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster There is a small section entitled block.db and block.wal which says... I was wondering if anyone knows how much of a performance advantage...
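
    That section boils down to placing the RocksDB metadata (block.db) and write-ahead log (block.wal) on a faster device when the OSD is created; a sketch with example device paths (if only a DB device is given, the WAL lives on it too):

        # Data on the HDD, DB and WAL together on the NVMe
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1

        # Or split all three onto separate devices
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --wal_dev /dev/nvme1n1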
  20. VM freezes for about 15 seconds, Ceph storage

    Hello everyone, I wanted to familiarize myself with Ceph in Proxmox and built a small 3-node cluster for testing purposes. In it I installed 2x SSDs per node for Ceph (3x2 in total). I split the Ceph network into public and...
