osd

  1. [SOLVED] Ceph health warning: unable to load:snappy

    Hello, after a server crash I was able to repair the cluster. The health check looks OK, but there is this warning for 68 OSDs: "unable to load:snappy". All affected OSDs are located on the same cluster node, so I checked the version of the related library, libsnappy1v5; it was 1.1.9. Comparing this file... (package-check sketch after this list)
  2. Ceph: actual used space?

    Hi, I'm running a Proxmox 7.2-7 cluster with Ceph 16.2.9 "Pacific". I can't tell the difference between Ceph > Usage and Ceph > Pools > Used (see screenshots). Can someone please explain what the actual used space in my Ceph storage is? Do you think a 90% used pool is potentially dangerous... (ceph df sketch after this list)
  3. Adding smaller size OSDs to ceph cluster

    Hello, we currently have a Ceph cluster of 6 nodes; 3 of them are dedicated Ceph nodes. Proxmox build 7.2-4. There are 8 x 3.84 TiB drives in each Ceph node (24 total across the three nodes). We are running out of space in the Ceph pool, with 86%-87% usage. We currently do not have additional spare... (utilization sketch after this list)
  4. Is it possible to recover Ceph when Proxmox is dead?

    Hello, sorry, I'm French and I don't speak English fluently, so excuse my bad writing. I was wondering if I could rebuild a Ceph cluster without having the original monitors/managers, so I set up a test lab to try it. In this case, the OSD data is preserved. I made this test simple...
  5. [SOLVED] ceph problem - Reduced data availability: 15 pgs inactive

    Proxmox 7.1-8. Yesterday I executed a large delete operation on the CephFS pool (around 2 TB of data). The operation finished within a few seconds, successfully and without any noticeable errors. Then the following problem occurred: 7 out of 32 OSDs went down and out. Trying to set them in and... (PG diagnostics sketch after this list)
  6. Ceph ghost OSDs

    Hi all, after an upgrade, Proxmox would not start and I had to reinstall it completely. I made a backup of the config but presumably missed something: the ceph-mon service keeps crashing and 4 OSDs appear as ghosts (out/down). Proxmox version: 7.2-3, Ceph version: 15.2.16. Any help appreciated!
  7. Adding new Ceph OSD when using multiple Cluster LAN IPs

    We had problems adding disks as new Ceph OSDs with pveceph createosd /dev/sdX. The error was: command '/sbin/ip address show to '192.168.1.201/24 192.168.1.202/24' up' failed: exit code 1. The workaround was to temporarily deactivate the 2nd cluster IP in /etc/pve/ceph.conf: cluster_network =... (config sketch after this list)
  8. keyring: (2) No such file or directory

    Hello, we have a 3-node cluster up and running in production. One of our nodes went down, and when it came back up everything returned to normal, except that one OSD on this node was showing down/out under Ceph > OSD. The HDD backing that OSD is working... (keyring sketch after this list)
  9. Ceph pool config

    Hi, I currently have a pool of 3 x 1 TB OSDs across 3 nodes. I'm planning to add 3 x 3 TB hard disks and was wondering whether I should start a new pool with the 3 TB HDDs or add them to the existing pool. I only know that if I add them to the existing pool, then depending on how I adjust the weights I trade either IO or capacity.
  10. Add new OSD to existing CEPH POOL

    Hi all, I have 4 Proxmox nodes with Ceph; only three are monitors. Each node has 3 SSDs and 2 HDDs, and there are two different pools: one for SSD and one for HDD. Now I'm adding one OSD per node to the existing HDD pool, but it is taking more time than I expected. This is the...
  11. [SOLVED] Ceph pool shrinks rapidly after expansion with OSDs (cluster outage probably tomorrow)

    Hello everyone, after I added an SSD pool alongside my existing HDD pool, the HDD pool has been shrinking extremely fast, so a production outage is probably imminent tomorrow. Original environment: 3-node hyper-converged cluster (PVE version 6.3-6) with distributed Ceph (vers...
  12. CEPH Health warnings - how to resolve?

    I have configured two nodes with just one OSD each for the moment; they are 500 GB NVMe drives. I have some health warnings and wonder what they mean and how I should resolve them. My nodes each have 128 GB RAM and 64 x Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz (2 sockets). I'm on PVE 7.0-14+...
  13. Octopus Failed to update an OSD

    Hi, I'm running this environment: pve-kernel-5.4: 6.4-9 pve-kernel-helper: 6.4-9 pve-kernel-5.3: 6.1-6 pve-kernel-5.4.148-1-pve: 5.4.148-1 pve-kernel-5.4.143-1-pve: 5.4.143-1 pve-kernel-4.15: 5.4-6 pve-kernel-5.3.18-3-pve: 5.3.18-3 pve-kernel-4.15.18-18-pve: 4.15.18-44 ceph...
  14. Ceph OSDs marked out, but still rebalance when I remove them.

    Reading the Ceph manual tells me to mark an OSD as "out" and then allow the pool to rebalance the data away from it before removing it; removing it should then not require any further rebalancing. So I did this: I marked an OSD as out, added two OSDs to replace it, and let the... (removal sketch after this list)
  15. Ceph select specific OSD to form a Pool

    Hi there! I need a little help. I have a Proxmox + Ceph cluster with 4 nodes, with the following drives in each node: 2x 900 GB SAS 15,000 RPM, 2x 300 GB SAS 10,000 RPM, 2x 480 GB SSD, 2x 240 GB SSD. I need to make a pool for each class and size of disk; I know how to separate them by device class, but I... (CRUSH rule sketch after this list)
  16. OSD not stopping in GUI

    Hi, I have an empty test cluster in which 4 nodes run Ceph. PVE 7.0-1 + Ceph Pacific. The setup is 4 identical nodes: 1x Xeon CPU, 32 GB DDR4 ECC, 2x 1Gb + 2x 10Gb NICs, 1x system HDD, 2x 16 TB OSD HDD, 1x 1 TB enterprise-class SSD for WAL, an 8-port 10G dedicated switch for the Ceph backend, an 8-port 10G switch for cluster...
  17. [SOLVED] Cannot add OSD

    I had latency issues with a 4 TB disk, which I replaced with a 2 TB disk. I used @alexskysilk's procedure: https://forum.proxmox.com/threads/ceph-osd-disk-replacement.54591/ However, the new OSD does not start: "osd.1 0 failed to load OSD map for epoch 904831, got 0 bytes". I have Ceph 15.2.13 and...
  18. How to restore previous OSD of ceph?

    How do I restore the previous OSDs of Ceph? To study Ceph I have a cluster of 3 servers. Having created Ceph, the Ceph OSDs, and CephFS, everything was fine. Then I simulated a recovery scenario by running "pveceph purge", then reinstalling Ceph and the monitor. But how do I see the previous OSDs? After all, they... (reactivation sketch after this list)
  19. ceph osd replace SD card

    Hi everybody. I have a Ceph cluster. The OS of one OSD host, which runs from an SD card and holds 6 OSD HDDs, has crashed, so I need to replace the SD card. How can I reconnect the 6 OSD hard disks to the new OS without destroying the data? Thanks in advance. (host-rebuild sketch after this list)
  20. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    I asked a similar question around a year ago but could not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster based on 6.3-2, with a Ceph pool based on 24 SAS3 OSDs (4 or 8 TB each); more will be added soon (split across 3 nodes, 1 more node will be added this week). We plan to add more... (device-class sketch after this list)
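
A few command sketches for the threads above follow; all OSD ids, names, paths and addresses in them are examples and need to be adapted to your cluster. For the "unable to load:snappy" warning (thread 1), a first step is checking which snappy runtime is installed and whether it differs from the other nodes. A minimal sketch, assuming Debian-based Proxmox nodes:

    # installed vs. candidate version of the snappy runtime library
    apt policy libsnappy1v5

    # compare the shipped library file across nodes (run on each node)
    md5sum /usr/lib/x86_64-linux-gnu/libsnappy.so.1*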
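
For the "actual used space" question (thread 2): the GUI's Ceph > Usage reflects raw capacity across all OSDs, while Pools > Used is per-pool stored data multiplied by replication. A sketch of the CLI views that show both sides:

    # cluster-wide raw usage plus per-pool STORED / USED / %USED / MAX AVAIL
    ceph df detail

    # how full each individual OSD is
    ceph osd df tree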
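
For adding different-sized OSDs to an existing pool (threads 3 and 9): CRUSH weights each OSD by its raw size by default, so mixed sizes work capacity-wise, but larger disks receive proportionally more data and therefore more IO. A sketch for inspecting and, if needed, adjusting weights; the OSD id and weight are examples:

    # per-OSD size, CRUSH weight, utilisation and variance
    ceph osd df tree

    # optionally change the CRUSH weight of one OSD (roughly "size in TiB")
    ceph osd crush reweight osd.12 2.0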
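
For "Reduced data availability: N pgs inactive" (thread 5), these commands show which PGs are stuck and which OSDs they are waiting for; the PG id is an example:

    # which health checks are firing and which PGs/OSDs they name
    ceph health detail

    # PGs that are stuck inactive
    ceph pg dump_stuck inactive

    # inspect one PG to see which OSDs it is blocked on
    ceph pg 2.1a query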
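
For the multi-IP cluster network issue (thread 7), the workaround described boils down to leaving a single value in cluster_network while the OSD is created, then restoring the second one. A minimal /etc/pve/ceph.conf sketch using the address from the post:

    [global]
        # keep only one entry here while running pveceph createosd,
        # then re-add the second address afterwards
        cluster_network = 192.168.1.201/24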
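
For "keyring: (2) No such file or directory" (thread 8), an otherwise healthy OSD can refuse to start if its local keyring file is missing; it can usually be re-exported from the cluster's auth database. A sketch assuming the affected OSD is osd.5 (example id):

    # does the keyring file exist for this OSD?
    ls -l /var/lib/ceph/osd/ceph-5/keyring

    # re-export the OSD's key from the monitors into that file
    ceph auth get osd.5 -o /var/lib/ceph/osd/ceph-5/keyring
    chown ceph:ceph /var/lib/ceph/osd/ceph-5/keyring

    # try the OSD again
    systemctl restart ceph-osd@5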
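
For the mark-out-then-remove workflow (thread 14), the order that avoids a second rebalance is: mark the OSD out, wait until all PGs are active+clean again, confirm it is safe to destroy, and only then remove it. A sketch with osd.7 as an example id:

    ceph osd out 7                              # data starts migrating off osd.7
    ceph -s                                     # wait for all PGs to be active+clean again
    ceph osd safe-to-destroy osd.7              # confirms removal would lose no data
    systemctl stop ceph-osd@7                   # stop the daemon before purging
    ceph osd purge 7 --yes-i-really-mean-it     # removes it from CRUSH, auth and the OSD map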
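
For one pool per disk class (thread 15), Ceph's device classes plus class-specific CRUSH rules do exactly that. A sketch assuming the default CRUSH root and host failure domain; rule and pool names are examples:

    # classes Ceph assigned automatically (typically hdd, ssd, nvme)
    ceph osd crush class ls

    # one replicated rule per class
    ceph osd crush rule create-replicated rule-ssd default host ssd
    ceph osd crush rule create-replicated rule-hdd default host hdd

    # point an existing pool at the matching rule
    ceph osd pool set pool-ssd crush_rule rule-ssd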
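
For seeing the previous OSDs again after a purge and reinstall (thread 18), the OSD data lives in LVM volumes on the disks and can be rediscovered once Ceph is reinstalled and the cluster configuration and keyrings are back in place. A sketch assuming the OSDs were created with ceph-volume (the default on recent releases):

    # list the OSD LVM volumes that still exist on the disks
    ceph-volume lvm list

    # recreate their systemd units and tmpfs mounts and start the OSDs
    ceph-volume lvm activate --all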
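
For the crashed SD-card OS with six intact OSD disks (thread 19), the usual flow is: stop Ceph from rebalancing while the host is rebuilt, reinstall Proxmox and Ceph on the new SD card, rejoin the cluster, then reactivate the existing OSD volumes. A sketch under those assumptions:

    # on a surviving node: avoid rebalancing while the host is down
    ceph osd set noout

    # on the rebuilt host, after it has rejoined the Proxmox cluster
    pveceph install                    # reinstall the Ceph packages
    ceph-volume lvm activate --all     # bring the OSDs on the existing HDDs back up

    # once the OSDs are up and in again
    ceph osd unset noout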
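
For mixing SAS SSD, U.2 NVMe and PCIe NVMe OSDs (thread 20): Ceph only auto-detects hdd/ssd/nvme, but OSDs can be re-tagged with custom device classes so each media type can be targeted by its own rule and pool. A sketch; the class name and OSD id are examples:

    # drop the auto-assigned class, then assign a custom one
    ceph osd crush rm-device-class osd.30
    ceph osd crush set-device-class nvme-u2 osd.30

    # a rule (and therefore a pool) restricted to that class
    ceph osd crush rule create-replicated rule-nvme-u2 default host nvme-u2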
