ceph

  1. Ceph: VM backup hangs, rbd info runs into a timeout

    Hi everyone! The affected cluster is an 8-node Proxmox cluster, 6 of whose nodes provide a Ceph cluster. On one of the nodes (vs4, which runs no Ceph services itself but uses the Ceph RBD pool for its VMs), I have the problem that the...
  2. Load balancing with multiple NAS units

    Greetings. I'm looking to move from Citrix Xencenter to something else and Proxmox has been highly recommended. My current setup involves clustering of three nodes with two NAS units (TrueNAS Scale) for load balancing storage. I would like to do something identical or similar using Proxmox but...
  3. Migrating VMs/containers (Ceph pool/RBD) and CephFS from PVE8/Ceph to a new PVE/Ceph cluster

    Hello, what is the best-practice way to migrate VMs + containers (Ceph pool/RBD) and CephFS from PVE8/Ceph to a new PVE/Ceph cluster? Is there a method other than backup and restore? Especially for VMs with TByte-sized RBD volumes. Containers via backup/restore are not an issue, they only hold the OS... (see the sketch for this thread below the list)
  4. Expected Ceph performance?

    Hi, What is the 'normal/expected' VM disk performance on Ceph? In this instance it is: 4 x nodes, each with 1 x NVMe (~3000 MB/s); a dedicated Ceph network; 20G bonded links between nodes/switch (iperf 17.5 Gbit/s); jumbo MTU. Here is an example rbd bench test (lowest 289 MiB/s)... (see the sketch for this thread below the list)
  5. [SOLVED] How to set Ceph - # of PGs? - Keeps falling back to 32.

    Hi, I'm not sure how, but my Ceph was set to (# of PGs: 32). I found this while investigating slow disk speed on VMs. Following the docs, I changed my PGs in the PVE GUI to 128; Ceph starts to rebalance, and then the PG count starts to fall again. It's back down to 32 now. How do I get... (see the sketch for this thread below the list)
  6. Removing HDDs from Ceph with different CRUSH rules (HDD/SSD)

    Hello, I have a cluster of 4 servers in a Proxmox datacenter. Ceph is configured on these. All the servers are added as monitors and managers, and everything is working properly. There is also one CRUSH rule for HDD and one for SSD storage. The Ceph version is 18.2.2. On 3 of the 4 servers there are HDDs and SSDs... (see the sketch for this thread below the list)
  7. Proxmox Ceph cluster network with MC-LAG - performance

    Hi everyone, I am currently building a 3-node Proxmox cluster with Ceph. Each node is connected via LACP to two Dell switches (100 Gbit). Network topology: Switch 1: → 100 Gbit → Node 1, → 100 Gbit → Node 2, → 100 Gbit → Node 3; Switch 2: → 100 Gbit → Node 1, → 100 Gbit → Node 2, →... (see the sketch for this thread below the list)
  8. Ceph monitor "out of quorum" on 3-node cluster, can I remove and re-add?

    I have a 3-node Proxmox cluster running Ceph. Recently it gave a warning that one of the three monitors is down or "out of quorum". root@pve-02:~# ceph -s cluster: id: f9b7ff0a-17b9-40d8-b897-cebfffb0ee8d health: HEALTH_WARN 1/3 mons down, quorum pve-01,pve-03... (see the sketch for this thread below the list)
  9. [SOLVED] Ceph Proxmox active+clean+inconsistent

    Hello, I need help with my Proxmox Ceph cluster. After scheduled PG deep scrubbing, I get errors like this in my ceph health detail: pg 2.f is active+clean+inconsistent, acting [4,3,0] pg 2.11 is active+clean+inconsistent, acting [3,0,5] pg 2.1e is active+clean+inconsistent, acting [1,5,3]... (see the sketch for this thread below the list)
  10. Ceph PG quantity - calculator vs autoscaler vs docs

    I'm a bit confused about the autoscaler and PGs. This cluster has Ceph 19.2.1, 18 OSDs, default 3/2 replicas and a default target of 100 PGs per OSD. BULK is false. Capacity is just under 18000G. A while back we set a target size of 1500G and we've been gradually approaching that, currently... (see the sketch for this thread below the list)
  11. Ceph: small cluster with multiple OSDs per NVMe drive

    Hello community! We have deployed our first small Proxmox cluster along with Ceph and so far we've had a great experience with it. We're running a traditional VM workload (most VMs are idling and most of the Ceph workload comes from bursts of small files, with the exception of a few SQL servers that... (see the sketch for this thread below the list)
  12. Questions Regarding Automation

    I have been working with Proxmox for about a year and a half now and feel pretty comfortable with the platform. I can create VMs/containers, manage storage (using Ceph), handle networking, create cloud-init templates, etc. Now I want to take the next step and automate my infrastructure. I have some... (see the sketch for this thread below the list)
  13. [SOLVED] New cluster - Ceph = got timeout (500)

    Hey, can someone please point me in the right direction? 4 nodes, all installed from the PVE ISO, so no firewall in play. Each node has a Ceph network: auto bond1 iface bond1 inet static address 10.10.10.1/24 (node1 10.10.10.1/24, node2 10.10.10.2/24, node3 10.10.10.3/24, etc.)... (see the sketch for this thread below the list)
  14. Ceph: behavior when a node fails

    Good day, I have a conceptual question about how Ceph behaves when a node fails. Scenario: 3+ Ceph nodes in a 3/2 configuration; the Ceph storage incl. CephFS is 75+% full. On the sudden failure of a node, Ceph starts to redistribute the PGs, i.e.... (see the sketch for this thread below the list)
  15. Ceph on HPE DL380 Gen10+ not working

    I have a Proxmox 8.4 cluster with two nodes and one qdevice, with Ceph Squid 19.2.1 recently installed and an additional device to maintain quorum for Ceph. Each node has one SATA SSD, so I have two OSDs (osd.18 and osd.19) created, and I have a pool called poolssd with both. Since ceph has been...
  16. Sanity check for new installation

    Could we get some 2nd and 3rd opinions on a plan for a new datacenter deployment: 8 PVE hosts, each with two 16-core Xeons and 512 GB of registered RAM. We further have 4 x 10GbE NICs in each machine; two of those should handle guest traffic, the other two are for storage traffic. Each machine will have...
  17. Ceph: number of placement groups for 5+ pools on 3 hosts x 1 OSD

    Hi. MY CONFIG: 3 hosts with PVE 8.4.1 and Ceph Reef, 10 Gb Ethernet dedicated Ceph network. Each host has a single OSD, an 8 TB CMR HDD. WHAT I DID: created 5 pools with default settings. WHAT I NEED TO DO: create 15 more pools. PROBLEM: Ceph started screaming "too many PGs per... (see the sketch for this thread below the list)
  18. 4-Node Stretched Cluster with Ceph

    Hey, I am planning to create a 4-node stretched cluster with Ceph. Having 4 nodes means 2 on each side, so I need a quorum device for Proxmox and a tiebreaker monitor for Ceph. As I read it, the Ceph tiebreaker can even be in the cloud or at another location, because no OSD talks with...
  19. [SOLVED] Ceph keeps recreating a pool named .mgr

    Hi all, as per the title, I have created a new 7-node Ceph cluster and noticed that there was a default pool named ".mgr" there. I deleted that pool and created a new one. After some restarts of the managers and monitors, I saw that the pool ".mgr" had been recreated all by itself. Is this intended...
  20. Ceph - Which is faster/preferred?

    I am in the process of ordering new servers for our company to set up a 5-node cluster with all NVME. I have a choice of either going with (4) 15.3TB drives or (8) 7.68TB drives. The cost is about the same. Are there any advantages/disadvantages in relation to Proxmox/Ceph performance? I think I...
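
For thread 3 (migrating RBD volumes and CephFS to a new cluster), a minimal sketch of copying a single RBD image between clusters over SSH; the pool, image, host and mount-path names are placeholders, not taken from the thread:

    # stream one image from the old cluster to the new one (full copy, VM powered off)
    rbd export rbdpool/vm-100-disk-0 - | ssh root@new-node rbd import - rbdpool/vm-100-disk-0

    # CephFS contents can be copied with plain rsync between mounted filesystems
    rsync -aHAX /mnt/pve/cephfs-old/ root@new-node:/mnt/pve/cephfs/

For large volumes, rbd export-diff / import-diff against a snapshot can shorten the final cutover window.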
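
For thread 4 (expected Ceph performance), a sketch of a repeatable rbd bench run so results can be compared between nodes; the pool name and image size are assumptions:

    rbd create cephpool/benchimg --size 10G            # throwaway test image
    rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G cephpool/benchimg
    rbd bench --io-type write --io-size 4K --io-threads 16 --io-total 1G cephpool/benchimg    # small-block view
    rbd rm cephpool/benchimg

Large sequential blocks show the network/NVMe ceiling; 4K results are usually far lower and closer to what a single VM sees.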
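
For thread 5 (PG count falling back to 32), the behavior looks like the PG autoscaler overriding the manual value. A sketch of the two usual ways to handle it; the pool name is a placeholder:

    ceph osd pool autoscale-status                       # shows the NEW PG_NUM the autoscaler wants
    # option A: tell the autoscaler how big the pool will become and let it pick pg_num
    ceph osd pool set cephpool target_size_ratio 0.8
    # option B: pin the value manually and switch the autoscaler to warn-only (or off)
    ceph osd pool set cephpool pg_autoscale_mode warn
    ceph osd pool set cephpool pg_num 128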
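
For thread 6 (removing HDDs that sit under a separate CRUSH rule), a sketch of draining and destroying one HDD OSD at a time; the OSD id is a placeholder, and any pool using the HDD rule still needs enough remaining HDD capacity to re-replicate:

    ceph osd df tree                      # confirm which OSDs carry the hdd device class
    ceph osd out 12                       # start draining; wait until all PGs are active+clean again
    ceph -s
    pveceph osd destroy 12 --cleanup      # stop the service, remove the OSD and wipe the disk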
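
For thread 7 (MC-LAG / LACP performance), a sketch of the bond stanza in /etc/network/interfaces with a layer3+4 hash policy, so that Ceph's many parallel TCP connections can spread across both links; the interface names and address are assumptions:

    auto bond0
    iface bond0 inet static
        address 10.0.0.1/24
        bond-slaves enp65s0f0np0 enp65s0f1np1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
        mtu 9000

A single TCP stream (e.g. one iperf connection without -P) is still limited to one 100 Gbit member; only aggregate traffic benefits from the LAG.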
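
For thread 8 (monitor out of quorum), a sketch of the usual check-then-recreate path, assuming the down monitor is the one on pve-02 (the quorum list in the preview shows pve-01 and pve-03):

    ceph mon stat                                  # which mon is missing from the quorum
    systemctl status ceph-mon@pve-02               # on the affected node; try a plain restart first
    systemctl restart ceph-mon@pve-02
    # if it will not rejoin, removing and re-adding the monitor is safe while 2 of 3 are healthy
    pveceph mon destroy pve-02                     # typically run on the node that hosts the monitor
    pveceph mon create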
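
For thread 9 (active+clean+inconsistent after deep scrub), a sketch of inspecting and repairing one of the PGs named in the health detail:

    rados list-inconsistent-obj 2.f --format=json-pretty   # which object/OSD holds the bad copy
    ceph pg repair 2.f                                      # rebuild the bad replica from the good copies
    ceph -w                                                 # watch until the repair completes

If the same OSD keeps appearing in the inconsistency reports, its SMART data is worth checking before repairing again.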
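
For thread 10 (calculator vs autoscaler), a sketch of where the different numbers come from; the figures below only reuse the values quoted in the preview and are an illustration, not a statement about that cluster:

    ceph osd pool autoscale-status    # RATIO, TARGET RATIO, EFFECTIVE RATIO and NEW PG_NUM per pool

    # classic calculator: 18 OSDs x 100 target PGs per OSD / 3 replicas = 600 -> rounded to 512
    # autoscaler: budgets per pool by its share of raw capacity, roughly
    #   pg_num ~ (pool's capacity share) x 18 OSDs x 100 / 3, rounded to a power of two,
    # and by default it only acts when the ideal differs from the current pg_num by about 3x.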
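
For thread 11 (multiple OSDs per NVMe), a sketch using ceph-volume's batch mode, which is a common way to split one fast device into several OSDs; the device path and count are assumptions:

    ceph-volume lvm batch --osds-per-device 2 --report /dev/nvme0n1   # dry run, shows the planned layout
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1            # actually create the two OSDs

Two OSDs per NVMe mainly helps when a single OSD daemon cannot saturate the drive; it also doubles the RAM and CPU that device consumes.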
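
For thread 12 (automation), a minimal sketch of the kind of step any automation tooling ends up wrapping: cloning a cloud-init template from the shell. The VMIDs, name and addresses are placeholders:

    qm clone 9000 201 --name web01 --full                 # 9000 = existing cloud-init template
    qm set 201 --cores 4 --memory 8192
    qm set 201 --ipconfig0 ip=192.168.1.201/24,gw=192.168.1.1
    qm start 201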
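
For thread 13 (Ceph "got timeout (500)" in the GUI), a sketch of the checks that usually narrow it down, reusing the 10.10.10.0/24 addresses from the preview:

    grep -E 'public_network|cluster_network|mon_host' /etc/pve/ceph.conf   # do the networks match bond1?
    ping -c 3 10.10.10.2                    # plain reachability between the Ceph addresses
    ss -tlnp | grep ceph-mon                # is a monitor listening on 3300/6789 on this node?
    ceph -s                                 # does the CLI respond even though the GUI times out?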
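
For thread 14 (behavior when a node fails on a 75+% full cluster), a sketch of the values that decide what happens, plus the flag used for planned downtime:

    ceph df                                                                  # raw and per-pool usage
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'   # defaults 0.95 / 0.90 / 0.85
    ceph config get mon mon_osd_down_out_interval                            # delay before a down OSD is marked out (default 600 s)
    ceph osd set noout                    # before planned maintenance: no rebalancing while the node is down
    ceph osd unset noout

With more than three nodes, recovery starts once the OSDs are marked out and only completes if the remaining nodes stay below the backfillfull/full ratios; with exactly three nodes and a host failure domain there is no third target, so the PGs stay degraded instead.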
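
For thread 17 (too many PGs per OSD with many pools on 3 OSDs), the arithmetic behind the warning, with assumed per-pool defaults:

    ceph osd df                       # the PGS column shows the per-OSD count Ceph is complaining about

    # rough budget: with 3-way replication on 3 OSDs, every pool PG lands on every OSD.
    # 20 pools x 32 PGs (a common default) = 640 PGs per OSD, far above the usual
    # limits (mon_target_pg_per_osd 100, mon_max_pg_per_osd 250).
    # fewer pools, or a much smaller pg_num per pool, keeps the total inside the budget:
    ceph osd pool set poolname pg_num 8        # 'poolname' is a placeholder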