ceph

  1. Unattended install of Ceph using pveceph

    I am attempting to write an Ansible playbook that sets up my Proxmox cluster. One issue I'm running into is that pveceph install doesn't have a non-interactive option. I've tried setting DEBIAN_FRONTEND=noninteractive on the task like this: - name: Install Ceph environment: DEBIAN_FRONTEND...
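
    A minimal sketch of the usual workaround, assuming the flags below exist on your PVE release (the --repository and --version values are examples, not verified against every version):

      # non-interactive apt frontend plus pveceph's own flags, so no prompt is expected
      DEBIAN_FRONTEND=noninteractive pveceph install --repository no-subscription --version squid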

  2. [SOLVED] PG stuck incomplete

    Hey Folks, Stashing this here as it's the only solution that worked for me and I will undoubtedly need it again. Given, $> ceph health detail ... [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive, 1 pg incomplete pg 7.188 is incomplete, acting [5,10,43] (reducing pool...
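
    For reference, a short sketch of the diagnostics that usually precede that fix (the PG ID 7.188 is taken from the snippet above; the commands are standard Ceph CLI):

      ceph health detail        # shows which PG is incomplete and its acting set
      ceph pg 7.188 query       # peering state and why the PG cannot complete
      ceph pg map 7.188         # the OSDs the PG currently maps to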

  3. Ceph Storage question

    Hi, this seems to be the best place/forum to ask questions about Ceph :) My understanding of Ceph is that the underlying storage is OSDs; these are distributed between nodes. Pools are then created that sit on top of OSDs ... I think OSDs are broken into PGs and PGs are assigned to pools, I think ...
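
    For what it's worth, the mapping goes the other way round: a pool is split into placement groups, and CRUSH maps each PG onto a set of OSDs. A few standard commands make the hierarchy visible (the pool name is a placeholder):

      ceph osd tree                     # OSDs and the hosts they sit on
      ceph osd pool ls detail           # pools, their pg_num and CRUSH rule
      ceph pg ls-by-pool <poolname>     # PGs of one pool and the OSDs they map to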

  4. ceph Erasure Coding temporary override failure domain

    I'm recreating my Ceph cluster due to a configuration corruption. I will be reusing the same hardware. The problem is I don't have enough hard drives for two Ceph clusters, but there is enough capacity. I know that you can't change the size of an erasure-coded pool, but is there any way to override...
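
    One commonly suggested route, sketched here with placeholder names, is to leave the pool's k/m untouched and point it at a CRUSH rule whose failure domain is osd instead of host:

      # new EC profile and rule with failure domain osd (names and k/m values are examples)
      ceph osd erasure-code-profile set ec-temp k=4 m=2 crush-failure-domain=osd
      ceph osd crush rule create-erasure ec-temp-rule ec-temp
      ceph osd pool set <ecpool> crush_rule ec-temp-rule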

  5. Linux Bond balance-rr and Ceph Squid 19.2.1 OSDs Lost after Setting Up Bond

    Here's what I did. When I first set up the 3-node cluster I only had one cable for the storage network, i.e. where each host is connected at 10Gb/s to a switch so they can use a shared storage scheme (Ceph, to which I'm fairly new and just learning). This week we got the extra cables ordered (because...
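
    For comparison, a hedged sketch of an LACP bond in /etc/network/interfaces (interface names and the address are placeholders); balance-rr is known to reorder packets, which Ceph's TCP traffic tolerates poorly, so 802.3ad or active-backup is usually preferred:

      auto bond0
      iface bond0 inet static
          # requires LACP configured on the switch; avoids the reordering of balance-rr
          address 10.10.10.1/24
          bond-slaves enp1s0f0 enp1s0f1
          bond-mode 802.3ad
          bond-miimon 100
          bond-xmit-hash-policy layer3+4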

  6. How to grant another Proxmox node access to Ceph on a Proxmox cluster

    Hi, I have built a Proxmox cluster and I'm running Ceph on there. I have another Proxmox node outside the cluster, and for now I don't want to join it to the cluster, but I want to share the Ceph storage, so the RBD pool and a CephFS. So I'm thinking I need to do something like this on the...
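
    A hedged sketch of the usual ingredients (client name, pool and paths are examples): create a restricted key on the cluster, then copy ceph.conf and the keyring to the external node so its storage definition can reach the same monitors:

      # on the Ceph cluster: a client key limited to one RBD pool
      ceph auth get-or-create client.extnode mon 'profile rbd' osd 'profile rbd pool=vmpool' \
          > /etc/ceph/ceph.client.extnode.keyring
      # copy ceph.conf plus this keyring to the external PVE node (e.g. under /etc/pve/priv/ceph/)
      # and add an RBD storage entry there pointing at the cluster's monitor addresses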

  7. VMs not migrating when Ceph is degraded in 3-Node Full-Mesh Cluster

    Hello community, I am currently setting up our new 3-node Proxmox cluster, pretty new to Proxmox itself. We are using full mesh with 25Gbit/s cards for Ceph, 10Gbit/s cards for Coro/VMBR and 18 (6 per node) SATA 6G enterprise SSDs. Ceph performance took a bit of testing, but we are now at a...

  8. How to migrate an OSD

    Hi, I've got a 6-node cluster and want to reduce the number of nodes. So let's say I have server A1 and server A4 out of A1..A6. I have Ceph installed on all nodes. I want to take the drives (the OSD ones) out of A1 and install them into A4 without having to resync the entire drive. I have read it's possible...
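
    A rough sketch of the usual sequence (the OSD ID is an example); the data stays on the disk and only its host changes, so only the CRUSH location needs to catch up:

      ceph osd set noout                    # stop rebalancing while the disks are moved
      systemctl stop ceph-osd@12.service    # on A1, stop the OSD(s) being moved
      # physically move the drive from A1 to A4, then on A4:
      ceph-volume lvm activate --all        # finds the OSD from its LVM metadata and starts it
      ceph osd unset noout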

  9. Ceph Pool readded and cannot find disk in Proxmox

    Hi guys, I have a Ceph pool that I removed by accident and then re-added to my Proxmox cluster. I had VM disks in there before I removed it, and when I added the Ceph pool back, they're not showing up. I can see that the data is there when I take a look at the storage metrics and run...
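
    A hedged sketch of how the images can be checked and re-attached (pool and VM ID are placeholders):

      rbd ls -p <pool>                    # confirm the vm-<vmid>-disk-* images still exist
      qm disk rescan --vmid <vmid>        # let PVE re-detect the volumes as unused disks
                                          # (older releases call this 'qm rescan')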

  10. How to disable cephfs?

    I set up a home lab of 3 machines as a Proxmox cluster so I could learn how to do it. I got through the Ceph setup and started setting up a CephFS, but when I set it up, for some reason, I cannot create any VMs on it, and can only use the storage local to each host. I found the content...
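
    In case it helps, a sketch of the content-type side of this (storage names are examples): VM disk images belong on an RBD storage, while a CephFS storage is typically limited to ISOs, templates and backups:

      pvesm set cephfs --content iso,vztmpl,backup    # keep CephFS for files, not VM disks
      pvesm status                                    # check which storages allow 'images' content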

  11. [SOLVED] Migrating an LXC Ceph mountpoint to a VM

    Hello! I have an LXC that's on a (deprecated by ourselves) erasure-coding pool. I want to move its 20TB mountpoint to an SSD erasure-coding pool, without taking down the container. First I tried to take down the container and do a migration, but this activity was taking longer than a week, so...
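
    For reference, one hedged option at the Proxmox layer is moving the volume to the target storage (volume and storage names are placeholders; whether this works with the container running depends on the PVE version):

      pct move-volume <vmid> mp0 <ssd-ec-storage>    # older releases spell it 'pct move_volume'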

  12. What do you think of this Proxmox/Ceph cluster?

    So I have this cluster in mind: PLEASE keep in mind the availability of hardware, be it new/used server/workstation hardware, is quite unorthodox and different here from what's available in the US/EU. For example a single used EPYC 9654 costs 10x a 9950X, and that's just a single CPU, a single...

  13. Ceph Managers Seg Faulting Post Upgrade (8 -> 9 upgrade)

    I upgraded my Proxmox cluster from the latest 8.4 version to 9.0; post-upgrade most things went well. However, the Ceph cluster has not gone as well. All monitors, OSDs, and metadata servers have upgraded to Ceph 19.2.3; however, all of my manager services have failed. They ran for a while...
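
    A hedged sketch of where the crash details usually surface (the hostname is a placeholder):

      journalctl -b -u ceph-mgr@<hostname>.service    # segfault backtrace of the failing manager
      ceph crash ls                                   # crash reports collected by the cluster
      ceph crash info <crash-id>                      # full metadata for one crash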

  14. CEPH Expected Scrubbing, Remap and Backfilling Speeds Post Node Shutdown/Restart

    Good morning. While we were doing upgrades to our cluster (upgraded the memory in each node from 256 to 512; 3 identical nodes), doing one node at a time with all VMs removed from HA and switched off, we noticed that after a node comes online it takes approximately 20-30 minutes for the remap/scrub/clean...
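
    If the wait itself is the concern, these are the knobs usually looked at first (the values are examples, not recommendations, and mClock-based releases may override some of them):

      ceph -s                                          # watch the remapped/backfilling PG counts
      ceph config set osd osd_max_backfills 2          # concurrent backfills per OSD
      ceph config set osd osd_recovery_max_active 4    # concurrent recovery ops per OSD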

  15. Ceph: recovering OSDs from another install

    Hello community! I have a theoretical question for you: in case a Proxmox node dies and is reinstalled, can the Ceph OSDs with data be salvaged? Given we know the original fsid, reinstall Ceph on the reinstalled node, and reconfigure it to the original fsid, can we do just 'ceph-volume lvm...
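
    A sketch of the activation step being described, assuming the reinstalled node already carries the original fsid and keyrings:

      ceph-volume lvm list             # OSD LVs found on the disks, with their osd id and fsid
      ceph-volume lvm activate --all   # recreate the systemd units and start the salvaged OSDs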

  16. Assigning cores to CEPH

    The PVE wiki says you should assign CPU cores to Ceph: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster It doesn't detail how this (excluding 25% of the cores from VMs) should be done. How do you run your systems?
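
    One hedged way to make such a reservation explicit, beyond simply not allocating those cores to VMs, is a systemd CPUAffinity override per OSD service (the core list is an example):

      # drop-in for one OSD; repeat per OSD, or template it via ceph-osd@.service
      systemctl edit ceph-osd@0.service
      #   [Service]
      #   CPUAffinity=0-3
      systemctl restart ceph-osd@0.service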

  17. Ceph: VM backup hangs, rbd info runs into a timeout

    Hi everyone! The affected cluster is an 8-node Proxmox cluster, of which 6 nodes provide a Ceph cluster. On one of the nodes (vs4, which provides no Ceph services itself but uses the Ceph RBD pool for VMs) I have the problem that the...
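
    A hedged sketch of the checks that usually narrow this down (pool and image names are placeholders), run from vs4:

      rbd -p <pool> info vm-<vmid>-disk-0     # the call that times out; try it manually
      ceph -s                                 # cluster health as seen from that node
      rbd status <pool>/vm-<vmid>-disk-0      # watchers on the image, e.g. a stuck backup client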

  18. Load balancing with multiple NAS units

    Greetings. I'm looking to move from Citrix Xencenter to something else and Proxmox has been highly recommended. My current setup involves clustering of three nodes with two NAS units (TrueNAS Scale) for load balancing storage. I would like to do something identical or similar using Proxmox but...

  19. Migrating VMs/containers (Ceph pool/RBD) and CephFS from PVE8/Ceph to a new PVE/Ceph cluster

    Hello, what is the best-practice way to migrate VMs + containers (Ceph pool/RBD) and CephFS from PVE8/Ceph to a new PVE/Ceph cluster? Is there another method besides backup and restore? Especially for VMs with TByte-sized RBD volumes. Containers via backup/restore are not an issue, they only contain the OS anyway...
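
    One hedged option for the large RBD volumes, besides backup and restore, is streaming the images directly between the clusters (pool, image and host names are placeholders):

      # export from the old cluster and import into the new one in a single pipe
      rbd export <oldpool>/vm-100-disk-0 - | ssh <new-node> rbd import - <newpool>/vm-100-disk-0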

  20. Expected Ceph performance?

    Hi, What is the 'normal/expected' VM disk performance on Ceph? In this instance, it's: - 4 x nodes each with 1 x NVMe (3000MB/s ish) - Dedicated Ceph network - 20G bonded links between nodes/switch (iperf 17.5Gbit/s) - MTU jumbo. Here is an example rbd bench test (lowest 289 MiB/s)...
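
    For anyone reproducing the numbers, a sketch of an rbd bench invocation (pool, image and sizes are examples):

      rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G <pool>/<image>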