OSD adding

  1. Ceph select specific OSD to form a Pool

    Hello there, I want to create two separate pools in my Ceph cluster. At the moment I have a configuration made of 4 nodes with M.2 NVMe drives as OSDs. My nodes also have SATA SSD drives which I'd like to use for the 2nd pool, but I don't see any option to select these OSDs, you just add them and that's it... (see the device-class sketch after this list)
  2. [SOLVED] Ceph OSD adding issues

    Greetings community! After a few months of using Ceph from Proxmox I decided to add a new disk and got stuck with this issue. ceph version 17.2.7 (2dd3854d5b35a35486e86e2616727168e244f470) quincy (stable) Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/ceph --cluster...
  3. Ceph OSD creation error

    Setting up Ceph on a three node cluster; all three nodes are fresh hardware and fresh installs of PVE. Getting an error on all three nodes when trying to create the OSD, either via GUI or CLI. create OSD on /dev/sdc (bluestore) wiping block device /dev/sdc 200+0 records in 200+0 records out 209715200... (a disk-wipe sketch follows this list)
  4. OSD reweight

    Hello, maybe often discussed, but a question from me too: since we set up our Ceph cluster we have seen uneven usage across all OSDs. 4 nodes with 7x1TB SSDs (1U, no space left) and 3 nodes with 8x1TB SSDs (2U, some space left) = 52 SSDs, pve 7.2-11. All Ceph nodes are showing us the same, like... (see the balancer sketch after this list)
  5. Error adding a new OSD on Ceph

    I am trying to add a new OSD without success. The logs and information follow: --- # pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve) pve-manager: 6.4-15 (running version...
  6. Problem adding a new OSD in Ceph

    Hi, I'm having trouble adding a new OSD to an existing Ceph cluster. --- pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve) pve-manager: 6.4-15 (running version...
  7. [SOLVED] Ceph issue after replacing OSD

    Hi, I have a 3 node cluster running Proxmox 6.4-8 with Ceph. 2 of the 3 nodes have 1.2TB for Ceph (each node has one 1.2TB disk for the OSD and one 1.2TB disk for the DB); the third node has the same configuration but with 900GB disks. I decided to stop, out and destroy the 900GB OSD to replace it with 1.2TB... (see the replacement sketch after this list)
  8. OSD replacement and adding while minimizing rebalances

    Hello! I have a 10 node hyperconverged cluster. Each node has 4 OSDs (40 OSDs in total). I have 2 different questions: QUESTION 1 - OSD REPLACEMENT (with identical SSD) Since I need to replace an SSD (one OSD has crashed 3 times in recent months, so I prefer to replace it with a brand new one)... (see the norebalance sketch after this list)
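
For item 1 (separate NVMe and SATA SSD pools), the usual route is CRUSH device classes rather than hand-picking OSDs. A minimal sketch, assuming the classes were auto-detected as nvme and ssd; the rule names, pool names, PG counts and the OSD ID are examples, not values from the thread:

    # Check which device class each OSD was assigned
    ceph osd tree

    # Correct a class only if auto-detection got it wrong (osd.4 is an example ID)
    ceph osd crush rm-device-class osd.4
    ceph osd crush set-device-class ssd osd.4

    # One replicated CRUSH rule per class, with host as the failure domain
    ceph osd crush rule create-replicated nvme-rule default host nvme
    ceph osd crush rule create-replicated ssd-rule default host ssd

    # Pools bound to those rules (PG counts are placeholders)
    ceph osd pool create nvme-pool 128 128 replicated nvme-rule
    ceph osd pool create ssd-pool 64 64 replicated ssd-rule

An existing pool can be moved later with ceph osd pool set <pool> crush_rule <rule>; Ceph migrates the data automatically.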
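
For item 3 the excerpt ends before the actual error, but OSD creation commonly fails when the target disk still carries old partition, LVM or Ceph metadata. A hedged first step, assuming /dev/sdc from the quoted output is the intended disk (this wipes it):

    # Remove leftover LVM volumes, partitions and Ceph signatures from the disk
    ceph-volume lvm zap /dev/sdc --destroy

    # Then retry the OSD creation with the Proxmox tooling
    pveceph osd create /dev/sdc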
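
For the uneven OSD usage in item 4, a sketch of the two usual remedies; the 110 threshold is an example value:

    # Show per-OSD utilisation and PG counts
    ceph osd df tree

    # Preferred: let the upmap balancer even out PG placement
    # (requires all clients to be Luminous or newer)
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

    # One-shot alternative: reweight OSDs sitting more than 10% above the average fill level
    ceph osd reweight-by-utilization 110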
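
For the replacement in item 7, a sketch of the stop/out/destroy sequence plus the step that is easy to miss, wiping the separate DB disk before reuse; OSD ID 2 and the device paths are hypothetical:

    # Take the old OSD out and remove it from the cluster
    ceph osd out 2
    systemctl stop ceph-osd@2
    ceph osd purge 2 --yes-i-really-mean-it

    # If the DB/WAL lived on its own disk, wipe the old LV before reusing the device
    ceph-volume lvm zap /dev/sdb --destroy

    # Create the replacement OSD, optionally pointing at the DB device (Proxmox CLI)
    pveceph osd create /dev/sdc --db_dev /dev/sdb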
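
For question 1 in item 8 (swapping an identical SSD with as little data movement as possible), a sketch using the cluster flags and an ID-preserving destroy; OSD ID 12 and the device path are hypothetical:

    # Pause backfill and rebalancing while the disk is swapped
    ceph osd set norebalance
    ceph osd set nobackfill

    # Destroy (not purge) the failed OSD so its ID and CRUSH weight are kept
    ceph osd destroy 12 --yes-i-really-mean-it

    # Recreate the OSD on the new disk, reusing the same ID, then resume data movement
    ceph-volume lvm create --osd-id 12 --data /dev/sdd
    ceph osd unset nobackfill
    ceph osd unset norebalance

Because the replacement keeps the same ID and CRUSH weight, only the PGs that lived on that OSD are backfilled; the rest of the map is untouched.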
