osd

  1. F

    keyring: (2) No such file or directory

    Hello, we have a 3-node cluster up and running in production. One of our nodes went down, and when it came back up everything returned to normal. The only remaining issue is that one OSD on this node shows down/out under Ceph > OSD. The HDD backing the OSD is working...
  2. M

    Ceph pool config

    Hi, I currently have a pool of 3 1TB OSDs across 3 nodes. I'm planning to add 3 3TB hard disks and was wondering if I should start a new pool with the 3TB HDDs or add them to the existing pool. I only know that if I add them to the existing pool, then depending on how I adjust the weights I trade either IO or capacity.
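For context on the weight trade-off mentioned in this thread: CRUSH weights default to roughly the disk's capacity in TiB, so mixed 1TB/3TB OSDs receive proportionally different amounts of data. The weight can be adjusted per OSD; a minimal sketch, assuming the OSD ID and weight value are placeholders:

```shell
# Show current CRUSH weights (by default roughly the capacity in TiB)
ceph osd tree

# Lower a 3TB OSD's weight so it takes the same share as the 1TB disks,
# trading capacity for more even IO (ID and value are illustrative)
ceph osd crush reweight osd.4 1.0
```

These are live-cluster administration commands and require a running Ceph cluster.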
  3. N

    Add new OSD to existing CEPH POOL

    Hi all, I have 4 Proxmox nodes with Ceph; only three are monitors. Each node has 3 SSDs and 2 (normal) HDDs, and there are two different pools: one for SSD and one for HDD. Now I'm adding one OSD per node to the existing HDD pool, but it's taking more time than I thought. This is the...
  4. G

    [SOLVED] Ceph pool shrinking rapidly after adding OSDs (cluster outage probably tomorrow)

    Hi everyone, after I added an SSD pool to my existing HDD pool, the HDD pool has been shrinking extremely fast, so a production outage is probably imminent tomorrow. Original environment: 3-node hyper-converged cluster (PVE vers. 6.3-6) with distributed Ceph (vers...
  5. C

    CEPH Health warnings - how to resolve?

    I have configured two nodes with just one OSD each for the moment; they are 500GB NVMe storage. I have some health warnings and wondered what they mean and how I should resolve them. My nodes each have 128GB RAM and 64 x Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz (2 sockets). I'm on PVE 7.0-14+...
  6. F

    Octopus Failed to update an OSD

    Hi, I'm running with this environment : pve-kernel-5.4: 6.4-9 pve-kernel-helper: 6.4-9 pve-kernel-5.3: 6.1-6 pve-kernel-5.4.148-1-pve: 5.4.148-1 pve-kernel-5.4.143-1-pve: 5.4.143-1 pve-kernel-4.15: 5.4-6 pve-kernel-5.3.18-3-pve: 5.3.18-3 pve-kernel-4.15.18-18-pve: 4.15.18-44 ceph...
  7. A

    Ceph OSDs marked out, but still rebalance when I remove them.

    Okay, so the Ceph manual tells me to mark an OSD as "out" and then allow the pool to rebalance the data away from it before removing it; removing it should then not require any rebalancing. So I did this. I marked an OSD as out. I added two OSDs to replace it. I let the...
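For reference, the out-then-remove sequence this thread describes is usually done with commands along these lines. A sketch, assuming `osd.5` is a placeholder ID; exact flags vary slightly by Ceph release:

```shell
# Mark the OSD out so Ceph migrates its placement groups elsewhere
ceph osd out osd.5

# Wait for rebalancing to finish, then confirm removal is safe
ceph -s                          # watch until no misplaced/degraded objects
ceph osd safe-to-destroy osd.5   # reports whether data would be lost

# Stop the daemon and remove the OSD from the cluster and CRUSH map
systemctl stop ceph-osd@5
ceph osd purge osd.5 --yes-i-really-mean-it
```

Note that `purge` also removes the OSD from the CRUSH map, which changes the host's total CRUSH weight, so a second, smaller rebalance after removal is expected behaviour rather than a bug.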
  8. F

    Ceph select specific OSD to form a Pool

    Hi there! I need a little help. I have a Proxmox + Ceph cluster with 4 nodes, with the following drives in each node: 2x 900GB SAS 15,000 RPM, 2x 300GB SAS 10,000 RPM, 2x 480GB SSD, 2x 240GB SSD. I need to make a pool for each class and size of drive. I know how to separate the drives by class, but I...
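Separating pools by device class, which is what this thread is after, is typically done with per-class CRUSH rules. A sketch, assuming the custom class, rule, and pool names (`sas15k`, `rule-sas15k`, `pool-sas15k`) and the OSD ID are placeholders:

```shell
# List OSDs with their auto-detected device classes (hdd/ssd/nvme)
ceph osd tree

# Optionally assign a custom class to distinguish drive types/sizes
ceph osd crush rm-device-class osd.3
ceph osd crush set-device-class sas15k osd.3

# Create one replicated CRUSH rule per class...
ceph osd crush rule create-replicated rule-sas15k default host sas15k

# ...and create a pool bound to that rule
ceph osd pool create pool-sas15k 128 128 replicated rule-sas15k
```

Each pool then only places data on OSDs of its rule's device class.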
  9. N

    OSD not stopping in GUI

    Hi, I have an empty test cluster in which 4 nodes run Ceph. PVE 7.0-1 + Ceph Pacific. The setup is: 4 identical nodes, each with 1x Xeon CPU, 32GB DDR4 ECC, 2x 1Gb + 2x 10Gb NICs, 1x system HDD, 2x 16TB OSD HDDs, 1x 1TB enterprise-class SSD for WAL, an 8-port 10G dedicated switch for the Ceph backend, an 8-port 10G switch for cluster...
  10. K

    [SOLVED] Cannot add OSD

    I had latency issues with a 4TB disk, which I replaced with a 2TB disk. I used @alexskysilk's procedure: https://forum.proxmox.com/threads/ceph-osd-disk-replacement.54591/ However, the new OSD does not start: "osd.1 0 failed to load OSD map for epoch 904831, got 0 bytes". I have Ceph 15.2.13 and...
  11. L

    How to restore previous OSD of ceph?

    How do I restore a previous OSD in Ceph? To study Ceph I have a cluster of 3 servers. Having created Ceph, Ceph OSDs, and CephFS, everything was fine. Then I simulated restoring Proxmox Ceph via "pveceph purge" and reinstalled Ceph and the monitor. But how do I see the previous OSDs? After all, they...
  12. F

    ceph osd replace SD card

    Hi everybody. I have a Ceph cluster. The OS of one OSD host, which has 6 OSD HDDs, runs on an SD card that crashed, so I need to replace the SD card. How can I reconnect the 6 OSD hard disks to a new OS without destroying the data? Thanks in advance.
  13. I

    [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    I asked a similar question around a year ago but could not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster based on 6.3-2; the Ceph pool is based on 24 SAS3 OSDs (4 or 8 TB), with more to be added soon (split across 3 nodes; 1 more node will be added this week). We plan to add more...
  14. B

    DB/WAL Disk configuration

    Dear members, there is a cluster with 3 nodes with the following HW configuration: Dell R730, PERC H730 RAID controller, 256GB RAM, 4 x 1.9TB 12Gb SAS SSD for OSDs, 2 x 4TB 6Gb SAS HDD for OSDs, and 2 x 800GB 12Gb SAS SSD for DB/WAL. The RAID controller is working in HBA mode. In this mode...
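For a setup like this, placing the BlueStore DB/WAL on the faster SSDs is done at OSD-creation time. A minimal sketch, assuming `/dev/sdX` (data HDD) and `/dev/sdY` (DB/WAL SSD) are placeholder device paths:

```shell
# Create a BlueStore OSD with its data on the HDD and its RocksDB
# (which also holds the WAL unless split out separately) on the SSD
ceph-volume lvm create --bluestore --data /dev/sdX --block.db /dev/sdY
```

When the DB device is on flash, a separate `--block.wal` device is usually unnecessary, since the WAL lives with the DB by default.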
  15. jsterr

    OSDs are filestore at creation, although not specified

    I reinstalled one of the three Proxmox Ceph nodes with a new name and new IP. I removed all LVM data and wiped the filesystems of the old disks with: dmsetup remove_all; wipefs -af /dev/sda; ceph-volume lvm zap /dev/sda. Now when I create OSDs via the GUI or CLI they are always filestore, and I don't get it; the default should...
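For what it's worth, on recent Ceph releases `ceph-volume lvm create` defaults to BlueStore, and the objectstore can also be forced explicitly. A sketch, assuming `/dev/sda` is a placeholder device path:

```shell
# Create the OSD via the Proxmox tooling (BlueStore by default)
pveceph osd create /dev/sda

# Or via ceph-volume directly, requesting BlueStore explicitly
ceph-volume lvm create --bluestore --data /dev/sda
```

If OSDs still come up as filestore, it is worth checking the node's ceph.conf for a stale setting overriding the default objectstore.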
  16. G

    Crushmap vanished after Networking error

    Hi, I have a worst case: the OSDs in a 3-node cluster, each with 4 NVMes, won't start. We had an IP config change in the public network and the mons died, so we managed to bring the mons back with new IPs. Corosync on 2 rings is fine and all 3 mons are up, but the OSDs won't start. How do we get back to the pool? Already...
  17. F

    Ceph under Proxmox: OSDs are not always displayed

    Hello, I'm currently running Proxmox 6.2-1 with 3 nodes. All nodes are identically equipped and host Ceph. When I go to Ceph > OSD on one of the 3 nodes, the existing OSDs are not always displayed. Sometimes the view has to be refreshed several times before I...
  18. F

    [SOLVED] Ceph under Proxmox: create OSD

    Hello, I'm currently running Proxmox 6.2-1 with 3 nodes. All nodes are identically equipped and host Ceph. On one of the 3 nodes I have the problem that I cannot create an OSD. After I select the disk and click "Create", for a brief moment...
  19. I

    [SOLVED] OSD Create Failing

    Hey guys, I am having an issue when creating an OSD, and I'm not sure why. What would cause the error below? command 'ceph-volume lvm create --cluster-fsid 248fab2c-bd08-43fb-a562-08144c019785 --data /dev/sdd' failed: exit code 1
  20. O

    Recreating Ceph, cannot create OSD

    Hello! I've been playing with Ceph, and after initially getting it working I realised I had everything on the wrong network. So I deleted and removed everything, uninstalled and reinstalled Ceph, and began recreating the cluster using the correct network. I had to destroy/zap some drives on...
