osd

  1. grin

    [pve6] ceph luminous to nautilus guide problem: Required devices (block and data) not present for bluestore

    # ceph-volume simple scan stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument Running...
  2. K

    Unable to create Ceph OSD

    Hello there, I recently upgraded from Proxmox 5 to 6, as well as Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs I have in my cluster. I ran into an issue with the second OSD I wanted to convert (the first went fine). Here's what I get after I zap the disk: pveceph...
  3. Z

    Proxmox 6.0 CEPH OSDs don't show down

    Hi Everyone, We have been running Proxmox 5.4 with Ceph for a while now without any issues. We recently built a brand new 5-node cluster using Proxmox 6.0. We had no issues getting our OSDs up and running; however, whenever we physically pull a drive, the associated OSD does not show down...
  4. M

    [SOLVED] Proxmox VE 6.0: New Ceph OSD, but GPT = No

    Just upgraded a 3-node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for the PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI...
  5. C

    Cannot stop OSD in WebUI

    Hi, I want to stop several OSDs to start node maintenance. However, I get an error message indicating a communication error. Please check the attached screenshot for details. Please note that I have defined different storage types (NVME, HDD, etc.) in the crush map in order to use different...
  6. S

    Proxmox Ceph OSD Partition Created With Only 10GB

    How do you define the Ceph OSD disk partition size? It always creates with only 10 GB of usable space. Disk size = 3.9 TB, partition size = 3.7 TB, using *ceph-disk prepare* and *ceph-disk activate* (see below). The OSD is created, but with only 10 GB, not 3.7 TB. Commands used: root@proxmox:~#...
  7. I

    Adding an OSD requires root

    We've split users into groups and allocated different roles to each group (essentially admins and non-admin users). Non-admin users are limited to performing VM functions, while users in the admin group have the built-in role "Administrator" applied. This should give them the ability to perform...
  8. P

    Analyzing Ceph load

    Hello, I want to understand if I am reaching a bottleneck in my hyperconverged Proxmox + Ceph cluster. In moments of high load (multiple LXC containers with high I/O on small files) I see one node with: IO delay at 40%, around 50% CPU usage, and a load of 200 (40 total cores / 2 x Intel(R) Xeon CPU E5-2660...
  9. P

    Ceph OSD disk replacement

    Hello, I am looking for updated documentation on the correct procedure for Ceph OSD disk replacement. The current Proxmox docs only cover OSD creation and lack management procedures using pveceph commands. I found this Ceph document, but some commands are not giving the same output (i.e....
  10. A

    RBD hangs with remote Ceph OSDs

    Hi there, I am running a 2-node Proxmox-Cluster and mounted RBD images on a remote Ceph cluster (latest Mimic release). Currently we are using the RBD image mount as backup storage for our VMs (mounted in /var/lib/backup). It all works fine unless an OSD or an OSD-Host (we have 3, each...
  11. R

    [SOLVED] CEPH - 3-node cluster with seven SSDs each

    Hello dear Proxmox friends, I have a question. After I removed two OSDs (the ceph-8 and ceph-13 OSDs) and then added them back (the ceph-21 & ceph-22 OSDs), the Ceph GUI status now shows Total: 23, as well as two OSDs being "down". From which osd table does it pull...
  12. A

    Ceph osd changes status: (down,up)

    Hi, on one of the Ceph cluster nodes a message appeared: 1 osds down. It appears and then disappears; the status constantly changes from down to up. What can be done about it? The SMART status of the disk shows OK. Package versions: proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve) pve-manager...
  13. B

    [SOLVED] Near-root permissions

    Good afternoon. I'm looking for a means to create superuser-type accounts, similar to root, that authenticate against Active Directory. I have AD authentication working, but I am unable to fine-tune the permissions to allow some users/groups to create, stop, mark out, and destroy OSDs...
  14. G

    ceph misplaced objects

    Hi all, I'm new to Ceph and ran into an interesting warning message which I don't think I can interpret correctly. I would appreciate any suggestion or comment on where to start unrolling the case, and on whether I am missing something. First of all, the configuration is structured as follows: 3 node...
  15. C

    Ceph OSD failure after host reboot

    Hi, I have configured a 3-node Ceph cluster. Each node has 2 RAID controllers, 4 SSDs and 48 HDDs. I used this syntax to create an OSD: pveceph osd create /dev/sdd -bluestore -journal_dev /dev/sdv1 pveceph osd create /dev/sde -bluestore -journal_dev /dev/sdw1 pveceph osd create /dev/sdf...
  16. C

    too few PGs per OSD (21 < min 30)

    Hello, I have added an extra disk on each node as an OSD; I now have a total of 3 disks per node across 3 nodes. I am now getting the following warning in my Ceph status: "too few PGs per OSD (21 < min 30)". Is there a way to resolve this?
  17. F

    Failing to add an OSD

    I have a repeating failure when installing an OSD on one node. Installing through the GUI seems to work... but the OSD is not visible. Installing through the command line seems to work... but the OSD is not visible either. E.g.: # ceph-disk zap /dev/sdb Caution: invalid backup GPT header, but valid main header...
  18. R

    [SOLVED] Proxmox Ceph implementation seems to be broken (OSD Creation)

    Preface: I have a hybrid Ceph environment using 16 SATA spinners and 2 Intel Optane NVMe PCIe cards (intended for DB and WAL). Because of enumeration issues on reboot, the NVMe cards can flip their /dev/{names}. This will cause a full cluster rebalance if the /dev/{names} flip. The...
  19. D

    VM/Containers not running after removing bad OSD

    Hi Everyone, I'm in a bit of a situation here. We identified a bad drive (still running, but failing) and decided we needed to remove it. We followed these instructions believing that it would work without a hitch and that all our containers/VMs would continue to run. Unfortunately, that was not the case...
  20. J

    [SOLVED] Proxmox Ceph - After power failure

    Hi, today there was an unexpected power outage where my servers are co-located; the entire datacenter went dark. Luckily I had fresh backups to simply restore, for the most part. However, I have an issue with one OSD on one server: the OSD is stuck in "active+recovery_wait+degraded". I have...
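
A few of the threads above lend themselves to short sketches. For the Luminous-to-Nautilus conversion error in thread 1, the `lsblk: not a block device` message suggests `ceph-volume simple scan` was pointed at the OSD mount point. A hedged sketch of the conversion step, where `/dev/sdX1` is a placeholder for the OSD's actual data partition (alternatively, run `scan` with no argument to scan all running OSDs):

```shell
# Sketch only: /dev/sdX1 is a hypothetical data partition; substitute the
# real one. Scanning the partition (not /var/lib/ceph/osd/ceph-2) avoids
# handing lsblk a mount path, which is what produces the error above.
convert_simple_osds() {
    ceph-volume simple scan /dev/sdX1    # records OSD metadata as JSON under /etc/ceph/osd/
    ceph-volume simple activate --all    # re-activates every scanned OSD via systemd
}
```

Call `convert_simple_osds` on the OSD node once per converted disk, then confirm the OSDs rejoin with `ceph osd tree`.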
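
The arithmetic behind thread 8's numbers is worth making explicit: a load average of 200 on a 40-core box means roughly 5 runnable-or-waiting tasks per core, and combined with 40% IO delay, most of that queue is likely blocked on Ceph I/O rather than CPU. A minimal check:

```shell
# Load per core for the figures quoted in the thread; a sustained value
# well above 1 together with high iowait points at a storage bottleneck.
cores=40
load=200
per_core=$(awk -v l="$load" -v c="$cores" 'BEGIN { printf "%.1f", l/c }')
echo "load per core: $per_core"
```

On a live node, substitute `cores=$(nproc)` and `load=$(cut -d' ' -f1 /proc/loadavg)` for a current snapshot.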
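
For the replacement procedure asked about in thread 9, a hedged sketch of the usual `pveceph`-based sequence is below. `osd.12` and `/dev/sdX` are placeholders for the failing OSD and its successor; verify the id against `ceph osd tree`, and note that `safe-to-destroy` needs a reasonably recent Ceph release:

```shell
# Hypothetical ids; adjust before use.
OSD_ID=12
NEW_DEV=/dev/sdX

replace_osd() {
    ceph osd out "$OSD_ID"                        # drain data off the OSD
    until ceph osd safe-to-destroy "$OSD_ID"; do
        sleep 60                                  # wait for backfill to complete
    done
    systemctl stop "ceph-osd@$OSD_ID"
    pveceph osd destroy "$OSD_ID" --cleanup       # removes auth key and CRUSH entry, wipes the disk
    pveceph osd create "$NEW_DEV"                 # create the replacement OSD
}
```

This is a sketch of the commonly recommended order (out, wait, stop, destroy, create), not an official Proxmox procedure.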
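
Thread 16's warning is pure arithmetic: with 3 nodes of 3 OSDs, pool size 3, and a pg_num of 64, each OSD carries 64 × 3 / 9 ≈ 21 PGs, below the default minimum of 30. The usual pgcalc rule of thumb targets on the order of 100 PGs per OSD, rounded up to a power of two:

```shell
# Rule-of-thumb sizing (assumes the thread's 9 OSDs and 3 replicas):
# total PGs ~= osds * target / replicas, rounded up to a power of two.
osds=9 replicas=3 target=100
raw=$(( osds * target / replicas ))                      # 300
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num: $pg"
```

The value can then be applied with `ceph osd pool set <pool> pg_num <n>` (and `pgp_num` on releases that do not adjust it automatically); pg_num can be raised but not easily lowered on older releases, so size conservatively.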
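
One common way around the `/dev/{name}` flipping described in thread 18 is to refer to the NVMe cards by their stable `/dev/disk/by-id/` links rather than `/dev/nvme0n1` when creating the DB/WAL devices, so a renumbered namespace cannot point an OSD at the wrong card. A small sketch to list the stable names:

```shell
# Print each stable NVMe identifier and the kernel name it currently
# resolves to; the if-guard keeps the loop harmless when no NVMe exists.
for link in /dev/disk/by-id/nvme-*; do
    if [ -e "$link" ]; then
        printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
    fi
done
```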
