ceph

  1. D

    How to disable cephfs?

    I set up a home lab of 3 machines to build a Proxmox cluster so I could learn how to do it. I got through the Ceph setup and started setting up a CephFS, but for some reason I cannot create any VMs on it and can only use the storage local to each host. I found the content...
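
    Worth noting for this one: a PVE storage backed by CephFS only holds ISOs, templates, backups and snippets, so VM disk images have to go on an RBD pool instead. If the CephFS itself should go away, a minimal sketch (assuming the default name cephfs and that nothing uses it anymore) could look like the following; the pveceph helper flags should be checked against the installed version before running anything destructive:

      # remove the CephFS storage entry, MDS and pools -- names are the PVE defaults, adjust as needed
      pveceph fs destroy cephfs --remove-storages --remove-pools
      # or step by step with plain Ceph tooling:
      pveceph mds destroy pve1                 # repeat per metadata server (node name is a placeholder)
      ceph fs fail cephfs
      ceph fs rm cephfs --yes-i-really-mean-it
      ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
      ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it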
  2. J

    [SOLVED] Migrating a LXC ceph mountpoint to a VM

    Hello! I have an LXC that's on a (deprecated by ourselves) erasure-coding pool. I want to move its 20 TB mountpoint to an SSD erasure-coding pool without taking down the container. First I tried taking the container down and doing a migration, but that was taking longer than a week, so...
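
    Since both source and target are Ceph pools registered as PVE storages, the storage-level move is usually a single pct call; a hedged sketch (VMID 101, mountpoint mp0 and the storage name ssd-ec are placeholders), keeping in mind that container volume moves are generally an offline operation:

      pct move-volume 101 mp0 ssd-ec          # copies the mountpoint volume to the target storage
      pct config 101 | grep mp0               # verify the mountpoint now references the new storage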
  3. C

    what do you think of this Proxmox/ceph Cluster ?

    So I have this cluster in mind. PLEASE keep in mind that the availability of hardware, be it new/used server/workstation hardware, is quite unorthodox here and different from what's available in the US/EU. For example, a single used EPYC 9654 costs 10x a 9950X, and that's just a single CPU, a single...
  4. D

    Ceph Managers Seg Faulting Post Upgrade (8 -> 9 upgrade)

    I upgraded my Proxmox cluster from the latest 8.4 version to 9.0, and post-upgrade most things went well. However, the Ceph cluster has not gone as smoothly. All monitors, OSDs, and metadata servers have upgraded to Ceph 19.2.3; however, all of my manager services have failed. They ran for a while...
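
    A hedged first pass at debugging this kind of mgr crash loop (the unit name follows the usual ceph-mgr@<hostname> pattern, pve1 is a placeholder):

      systemctl status ceph-mgr@pve1
      journalctl -u ceph-mgr@pve1 -b --no-pager | tail -n 100
      ceph crash ls                            # Ceph keeps its own crash reports
      ceph crash info <crash-id>               # backtrace for a specific crash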
  5. A

    CEPH Expected Scrubbing, Remap and Backfilling Speeds Post Node Shutdown/Restart

    Good morning. While we were doing upgrades to our cluster (upgrading memory on each of the 3 identical nodes from 256 to 512), doing one node at a time with all VMs removed from HA and switched off, we noticed that after a node comes back online it takes approximately 20-30 minutes for the Remap/Scrub/Clean...
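
    For planned single-node maintenance the usual pattern is to stop Ceph from rebalancing at all while the node is down, then watch recovery and, if needed, cautiously raise its throttles; a rough sketch (on recent releases the mClock scheduler may cap these values unless the recovery profile is changed):

      ceph osd set noout                       # before shutting the node down: no rebalancing while it is away
      # ... do the maintenance, boot the node ...
      ceph osd unset noout
      ceph -s                                  # watch recovery/backfill progress
      ceph config set osd osd_max_backfills 2            # cautiously raise backfill concurrency
      ceph config set osd osd_recovery_max_active 4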
  6. D

    Ceph: recovering OSDs from another install

    Hello community! I have a theoretical question for you: in case a Proxmox node dies and is reinstalled, can the Ceph OSDs with data on them be salvaged? Given we know the original fsid, reinstall Ceph on the reinstalled node, and reconfigure it to the original fsid, can we just do 'ceph-volume lvm...
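
    In principle yes, as long as /etc/ceph/ceph.conf with the original fsid and the keyrings are restored first; a hedged sketch of the re-activation step the post is hinting at:

      ceph-volume lvm list                     # shows the OSD ids/fsids recorded in LVM metadata on the disks
      ceph-volume lvm activate --all           # recreates the tmpfs mounts and starts the ceph-osd units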
  7. C

    Assigning cores to CEPH

    The PVE wiki says you should assign CPU cores to Ceph: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster It doesn't detail how this (excluding 25% of the cores from VMs) should be done. How do you run your systems?
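
    In practice the 25% guideline mostly means not handing every core to VMs; if you really want to pin the OSD daemons, a hypothetical systemd override (the core range is just an example) would be one way:

      # /etc/systemd/system/ceph-osd@.service.d/cpuaffinity.conf
      [Service]
      CPUAffinity=0-3

      systemctl daemon-reload
      systemctl restart ceph-osd.target        # apply the override to all OSD units on the node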
  8. D

    Ceph: VM backup hangs, rbd info runs into a timeout

    Hi everyone! The affected cluster is an 8-node Proxmox cluster, 6 of whose nodes provide a Ceph cluster. On one of the nodes (vs4, which provides no Ceph services itself but uses the Ceph RBD pool for VMs) I have the problem that the...
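
    A hedged starting point for narrowing down where the hang comes from, run on vs4 itself (pool and image names are placeholders):

      ceph -s                                  # does this client node reach the monitors at all?
      rbd -p rbd ls                            # does listing the pool work, or does it hang too?
      rbd info rbd/vm-100-disk-0 --debug-ms 1  # message-level debug shows which mon/OSD the request is stuck on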
  9. I

    Load balancing with multiple NAS units

    Greetings. I'm looking to move from Citrix XenCenter to something else, and Proxmox has been highly recommended. My current setup is a cluster of three nodes with two NAS units (TrueNAS Scale) for load-balanced storage. I would like to do something identical or similar using Proxmox but...
  10. L

    Migrating VMs/containers (Ceph pool/RBD) and CephFS from PVE8/Ceph to a new PVE/Ceph cluster

    Hello, what is the best-practice way to migrate VMs + containers (Ceph pool/RBD) and CephFS from PVE8/Ceph to a new PVE/Ceph cluster? Is there any method other than backup and restore? Especially for VMs with TByte-sized RBD volumes. Containers via backup/restore are not an issue, they only contain the OS...
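
    For TByte-sized RBD volumes the usual alternative to backup/restore is streaming the images directly between the clusters and copying the VM configs separately (rbd export-diff/import-diff can cut the downtime further); a hedged sketch in which pool, image and host names are placeholders:

      # full copy of one disk image, streamed over SSH to the new cluster
      rbd export rbd/vm-100-disk-0 - | ssh root@new-node rbd import - rbd/vm-100-disk-0
      # the matching VM config then just needs to be copied into /etc/pve/qemu-server on the new cluster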
  11. K

    Expected Ceph performance?

    Hi, what is the 'normal/expected' VM disk performance on Ceph? In this instance it's: - 4 x nodes, each with 1 x NVMe (3000 MB/s-ish) - dedicated Ceph network - 20G bonded links between nodes/switch (iperf 17.5 Gbit/s) - jumbo MTU. Here is an example rbd bench test (lowest 289 MiB/s)...
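
    For an apples-to-apples comparison it helps to benchmark the cluster underneath the VMs as well; a hedged sketch with rados bench against a test pool (the pool name is a placeholder, --no-cleanup keeps the objects around for the read test):

      rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup   # 60s of 4M writes, 16 threads
      rados bench -p testpool 60 rand -t 16                       # random reads against the same objects
      rados -p testpool cleanup                                   # drop the benchmark objects afterwards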
  12. K

    [SOLVED] How to set Ceph - # of PGs? - Keeps falling back to 32.

    Hi, I'm not sure how, but my Ceph was set to (# of PGs: 32). I found this while investigating slow disk speed on VMs. From the docs: I've changed my PGs in the PVE GUI to match the docs (128 PGs), Ceph starts to rebalance, and then the PG count starts to fall again. It's back down to 32 now: How do I get...
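
    That behaviour is the PG autoscaler shrinking the pool back down; a hedged sketch of the two usual ways out (the pool name vmpool is a placeholder):

      ceph osd pool set vmpool pg_autoscale_mode off   # stop the autoscaler from overriding the manual value
      ceph osd pool set vmpool pg_num 128
      # or keep the autoscaler but tell it how much data the pool will eventually hold:
      ceph osd pool set vmpool target_size_ratio 1.0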
  13. T

    Removing HDD from CEPH with different crush rules (HDD/SSD)

    Hello, I have a cluster of 4 servers in a Proxmox datacenter, with Ceph configured on them. All the servers are added as monitors and managers, and everything is working properly. There are also CRUSH rules for HDD and for SSD storage. The Ceph version is 18.2.2. On 3 of the 4 servers there are HDD and SSD...
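
    With separate device-class CRUSH rules the SSD pools shouldn't be touched; removing the HDD OSDs is then the standard drain-and-purge loop, roughly as follows (the OSD id 12 is a placeholder):

      ceph osd out 12                           # let the data drain off the OSD
      # wait until all PGs are active+clean again, then:
      systemctl stop ceph-osd@12
      ceph osd purge 12 --yes-i-really-mean-it  # removes it from the CRUSH map, auth and the OSD map
      # or the PVE wrapper: pveceph osd destroy 12 --cleanup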
  14. V

    Proxmox Ceph Cluster Network with MC-LAG - Performance

    Hi everyone, I'm currently building a 3-node Proxmox cluster with Ceph. The nodes are each connected via LACP to two Dell switches (100 Gbit). Network topology: Switch 1: → 100 Gbit → Node 1 → 100 Gbit → Node 2 → 100 Gbit → Node 3 Switch 2: → 100 Gbit → Node 1 → 100 Gbit → Node 2 →...
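
    For spreading Ceph traffic across both 100 Gbit links the bond hash policy matters; a hedged example of the ifupdown2 side (interface names and the address are placeholders):

      auto bond0
      iface bond0 inet static
          address 10.10.10.1/24
          bond-slaves enp65s0f0np0 enp65s0f1np1
          bond-mode 802.3ad
          bond-xmit-hash-policy layer3+4    # hashes per TCP connection, so Ceph's many sessions use both links
          bond-miimon 100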
  15. M

    Ceph monitor "out of quorum" on 3 node cluster, can I remove and readd?

    I have a 3-node Proxmox cluster running Ceph. Recently it gave a warning that one of the three monitors is down or "out of quorum". root@pve-02:~# ceph -s cluster: id: f9b7ff0a-17b9-40d8-b897-cebfffb0ee8d health: HEALTH_WARN 1/3 mons down, quorum pve-01,pve-03...
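
    Yes, destroying and recreating the one dead monitor is the usual fix as long as the other two keep quorum; a hedged sketch run on the affected node (pve-02 here, per the output above):

      pveceph mon destroy pve-02               # remove the dead monitor from the cluster
      # clear any leftover /var/lib/ceph/mon/ceph-pve-02 directory if the destroy complains, then:
      pveceph mon create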
  16. D

    [SOLVED] Ceph Proxmox active+clean+inconsistent

    Hello, I need help with my Proxmox Ceph cluster. After scheduled PG deep scrubbing, I have errors like this in my ceph health detail output: pg 2.f is active+clean+inconsistent, acting [4,3,0] pg 2.11 is active+clean+inconsistent, acting [3,0,5] pg 2.1e is active+clean+inconsistent, acting [1,5,3]...
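
    A hedged sketch of the usual next steps for scrub inconsistencies (it's also worth checking SMART data on the underlying disks, since these errors often point at a failing drive):

      rados list-inconsistent-obj 2.f --format=json-pretty   # which object/replica failed the deep scrub
      ceph pg repair 2.f                                     # ask Ceph to rewrite the bad copy from a good one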
  17. S

    Ceph PG quantity - calculator vs autoscaler vs docs

    I'm a bit confused about the autoscaler and PGs. This cluster has Ceph 19.2.1, 18 OSDs, default 3/2 replicas and default target 100 PGs per OSD. BULK is false. Capacity is just under 18000G. A while back we set a target size of 1500G and we've been gradually approaching that, currently...
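
    The autoscaler's own view of those numbers is visible per pool, which usually explains the gap between the calculator and what it actually does; for example (the pool name is a placeholder):

      ceph osd pool autoscale-status           # shows SIZE, TARGET SIZE, RATE, BIAS, PG_NUM and NEW PG_NUM per pool
      ceph osd pool set vmpool bulk true       # optional: tell the autoscaler to size for capacity up front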
  18. D

    CEPH: small cluster with multiple OSDs per one NVMe drive

    Hello community! We have deployed our first small Proxmox cluster along with Ceph and so far we've had a great experience with it. We're running a traditional VM workload (most VMs are idling and most of the Ceph workload comes from bursts of small files, with the exception of a few SQL servers that...
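
    For reference, splitting one NVMe into several OSDs is typically done at creation time with ceph-volume's batch mode; a hedged sketch (the device path is a placeholder, and whether it helps depends on how busy the drive already is):

      ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1   # carves the drive into 2 LVs, one OSD each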
  19. F

    Questions Regarding Automation

    I have been working with Proxmox for about a year and a half now and feel pretty comfortable with the platform. I can create VMs/containers, manage storage (using Ceph), handle networking, create cloud-init templates, etc. Now I want to take the next step and automate my infrastructure. I have some...
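
    Everything in the GUI is also reachable through the API, which is what most of the automation tooling (Ansible modules, Terraform providers) builds on; a small hedged example with pvesh, where the node name and VMIDs are placeholders:

      pvesh get /cluster/resources --type vm --output-format json        # inventory of all VMs in the cluster
      pvesh create /nodes/pve1/qemu/9000/clone --newid 120 --name web01  # clone a cloud-init template via the API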
  20. K

    [SOLVED] New cluster - Ceph = got timeout (500)

    Hey, please can someone point me in the right direction? 4 nodes, all installed with the PVE ISO, so no firewall in play. Each node has a Ceph network: auto bond1 iface bond1 inet static address 10.10.10.1/24 (node1 10.10.10.1/24, node2 10.10.10.2/24, node3 10.10.10.3/24, etc.)...
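
    The "got timeout (500)" from the GUI usually just means the node can't reach the monitors; a hedged checklist to run on each node (the addresses are the ones from the post):

      ceph -s                                               # does the CLI reach the mons from this node?
      grep -E 'mon_host|public_network' /etc/pve/ceph.conf  # is the mon/public network really 10.10.10.0/24?
      ping -c3 10.10.10.2                                   # is bond1 actually passing traffic between nodes?
      ss -tlnp | grep ceph-mon                              # are the monitors listening (ports 3300/6789)?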