osd

  1. C

    [SOLVED] How to define OSD weight in the CRUSH map

    Hi, after adding an OSD to Ceph it is advisable to create a matching entry in the CRUSH map with a weight that depends on the disk size. Example: ceph osd crush set osd.<id> <weight> root=default host=<hostname> Question: How is the weight defined depending on disk size? Which algorithm can be...
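
    By convention (and this is what ceph-disk/ceph-volume assign automatically; Ceph itself does not enforce it) the CRUSH weight is the disk capacity expressed in TiB, e.g. a 4 TB disk gives 4 * 10^12 / 2^40 ≈ 3.64. A minimal sketch, where osd.12, the weight and the hostname are made-up values:

      # ceph osd crush set osd.12 3.64 root=default host=node1
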
  2. C

    How to add a DB device to BlueFS

    Hi, I created my OSDs on HDDs without putting the DB on a faster drive. To improve performance I now have a single 3.8 TB SSD. Questions: How can I add a DB device for every single OSD on this new SSD? Which parameter in ceph.conf defines the size of the DB? Can you confirm that...
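
    For what it's worth, Nautilus ships ceph-bluestore-tool with a bluefs-bdev-new-db subcommand that can attach a DB device to an existing, stopped OSD, while bluestore_block_db_size in ceph.conf only affects OSDs created afterwards. A rough sketch only, with the OSD id and the LV name as placeholders:

      # systemctl stop ceph-osd@0
      # ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/ssd_vg/db-osd-0
      # systemctl start ceph-osd@0
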
  3. C

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    Hi, I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster. On 2 nodes I found that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting. Typically the content of this directory looks like this: root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/ total 60...
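
    Since the Luminous-to-Nautilus move from ceph-disk to ceph-volume, these directories are only populated when the OSDs are activated from the metadata in /etc/ceph/osd/. A hedged sketch of redoing that step on the affected nodes (the scan can also be pointed at a specific data partition):

      # ceph-volume simple scan
      # ceph-volume simple activate --all
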
  4. B

    [SOLVED] OSD Encryption

    Hi guys, I noticed that OSD encryption is available under Create OSD. Is there any mechanism to show which OSDs are encrypted, in the UI or on the command line? I made some encrypted and some unencrypted to gauge performance, but I am unable to tell which are which.
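
    One possible check from the command line: ceph-volume stores an "encrypted" flag in each OSD's LVM tags, so listing the OSDs on every node should reveal it. A small sketch (the grep pattern is only for readability):

      # ceph-volume lvm list | grep -E 'osd id|encrypted'
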
  5. I

    Ceph storage usage is wrong

    Hello everyone, I have a 3-node cluster with 3 OSDs in each of these nodes. My Ceph version is 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable). The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup that sits on an NFS share, the...
  6. H

    Ceph on Raid0

    Dear Proxmox Team, we have 3 Dell servers running Proxmox 6.0. Unfortunately we encountered an issue with setting up Ceph OSDs on individual drives. The main problem is that the servers' Perc H710 Mini adapter does not allow IT mode / JBOD passthrough of individual drives, so we're stuck...
  7. R

    No bootable device - migrated servers cannot be rebooted

    Hello everyone! We have about 20 VMs running in a three-node cluster with Ceph. Half of the VMs were migrated with Clonezilla (P2V); the other half I converted to raw from VMware. Now the problem: a few weeks ago an Ubuntu 18.04 VM hung; this one was from VMware...
  8. TwiX

    Ceph nautilus - Raid 0

    Hi, first things first: I know it is not recommended to run Ceph on top of RAID 0 disks, but that is what I did on 4 Dell R430 servers with a Perc 730 (with 6 15k SAS drives). I have pretty decent performance with it and absolutely no issues for the last 2 years. With full SSD nodes I don't use...
  9. grin

    [pve6] ceph luminous to nautilus guide problem: Required devices (block and data) not present for bluestore

    # ceph-volume simple scan
     stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
     stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
    Running...
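
    If the affected OSD is still a partition-based (ceph-disk) OSD, one thing worth trying is pointing the scan at its data partition instead of letting it walk /var/lib/ceph/osd; /dev/sdc1 below is only a placeholder for that partition:

      # ceph-volume simple scan /dev/sdc1
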
  10. K

    Unable to create Ceph OSD

    Hello there, I recently upgraded from Proxmox 5 to 6 as well as from Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs I have in my cluster. I ran into an issue with the second OSD I wanted to convert (the first went fine). Here's what I get after I zap the disk: pveceph...
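
    For comparison, a rough sketch of the usual Nautilus-era sequence for wiping and re-creating a single OSD disk; /dev/sdX is a placeholder, and --destroy clears the leftover LVM metadata that often blocks re-creation:

      # ceph-volume lvm zap /dev/sdX --destroy
      # pveceph osd create /dev/sdX
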
  11. Z

    Proxmox 6.0 Ceph OSDs don't show down

    Hi everyone, we have been running Proxmox 5.4 with Ceph for a while now without any issues. We recently built a brand new 5-node cluster using Proxmox 6.0. We had no issues getting our OSDs up and running. However, whenever we physically pull a drive, the associated OSD does not show down...
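
    To narrow this down, it can help to compare what the monitors report with what the node itself sees after the drive is pulled; the OSD id below is a placeholder:

      # ceph osd tree
      # systemctl status ceph-osd@7
      # journalctl -u ceph-osd@7 -n 50
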
  12. M

    [SOLVED] Proxmox VE 6.0: New Ceph OSD, but GPT = No

    Just upgraded a 3-node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for the PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI...
  13. C

    Cannot stop OSD in WebUI

    Hi, I want to stop several OSDs to start node maintenance. However, I get an error message indicating a communication error. Please check the attached screenshot for details. Please note that I have defined different storage types (NVMe, HDD, etc.) in the CRUSH map in order to use different...
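
    As a workaround while the GUI error is investigated, the same maintenance steps can be done from a shell on the node that hosts the OSDs; the OSD id below is a placeholder:

      # ceph osd set noout
      # systemctl stop ceph-osd@7
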
  14. S

    Proxmox Ceph OSD Partition Created With Only 10GB

    How do you define the Ceph OSD disk partition size? It is always created with only 10 GB of usable space.
    Disk size = 3.9 TB
    Partition size = 3.7 TB
    Using *ceph-disk prepare* and *ceph-disk activate* (see below), the OSD is created, but only with 10 GB, not 3.7 TB.
    Commands used: root@proxmox:~#...
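
    10 GB happens to be BlueStore's default bluestore_block_size, which is only used when the data store is backed by a file rather than a whole block device, so this looks like ceph-disk fell back to a file-backed setup. A hedged way to check (the OSD id is a placeholder): the block entry should be a symlink to a partition, not a 10 GiB file.

      # ls -l /var/lib/ceph/osd/ceph-0/block
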
  15. I

    Adding an OSD requires root

    We've split users into groups and allocated different roles to each group (essentially admins and non-admin users). Non-admin users are limited to performing VM functions, while users in the admin group have the built-in role "Administrator" applied. This should give them the ability to perform...
  16. P

    Analyzing Ceph load

    Hello, I want to understand if I am reaching a bottleneck in my hyperconverged Proxmox + Ceph cluster. In moments of high load (multiple LXC containers with high I/O on small files) I see one node with:
    - IO delay at 40%
    - around 50% CPU usage
    - load at 200 (40 total cores / 2 x Intel(R) Xeon CPU E5-2660...
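
    Two quick, read-only checks that may help separate Ceph latency from plain disk saturation on that node (the 5-second interval is arbitrary):

      # ceph osd perf
      # iostat -x 5
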
  17. P

    Ceph OSD disk replacement

    Hello, I am looking for up-to-date documentation on the correct procedure for Ceph OSD disk replacement. The current Proxmox docs only cover OSD creation and lack management procedures using pveceph commands. I found this Ceph document, but some commands do not give the same output (i.e....
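
    A rough sketch of a pveceph-based replacement flow on Nautilus, not official documentation; the OSD id 7 and /dev/sdX are placeholders:

      # ceph osd out 7
      # systemctl stop ceph-osd@7
      # pveceph osd destroy 7 --cleanup
      # pveceph osd create /dev/sdX
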
  18. A

    RBD hangs with remote Ceph OSDs

    Hi there, I am running a 2-node Proxmox cluster and have mounted RBD images from a remote Ceph cluster (latest Mimic release). Currently we use the RBD image mount as backup storage for our VMs (mounted in /var/lib/backup). It all works fine until an OSD or an OSD host (we have 3, each...
  19. R

    [SOLVED] Ceph - 3-node cluster with seven SSDs each

    Hello dear Proxmox friends, I have a question. After I removed two OSDs (the ceph-8 and ceph-13 OSDs) and then added them again (the ceph-21 & ceph-22 OSDs), the Ceph GUI now shows Total: 23 in the status and claims that two OSDs are "down". From which OSD table does it get...
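
    If ceph osd tree still lists the old IDs, the leftover entries usually have to be removed by hand; shown here for osd.8, to be repeated for osd.13:

      # ceph osd crush remove osd.8
      # ceph auth del osd.8
      # ceph osd rm osd.8
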
  20. A

    Ceph OSD changes status (down, up)

    Hi, on one of the Ceph cluster nodes the message "1 osds down" appeared; it shows up and then disappears, and the status constantly changes between down and up. What can be done about it? The SMART status of the disk shows OK. Package versions: proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve) pve-manager...
