Search results

  1.

    BlueFS spillover detected on 30 OSD(s)

    The workload for the OSDs of type HDD is only OLTP database backup / restore. This means that every DB server has mapped a single RBD to back up / restore its database. Would you confirm that a dedicated SSD for block.db is not required for this workload?
  2.

    Howto add DB device to BlueFS

    OK. I modified /etc/ceph/ceph.conf by adding bluestore_block_db_size = 53687091200 in [global]. This should create a RocksDB of 50GB. Then I wanted to move the DB to a new, unformatted SSD device (see the block.db sketch after these results): root@ld5505:~# ceph-bluestore-tool bluefs-bdev-new-db --path...
  3.

    Howto add DB device to BlueFS

    Hi, I have created OSDs on HDDs without putting the DB on a faster drive. To improve performance I now have a single 3.8TB SSD. Questions: How can I add a DB device on this SSD for every single OSD? Which parameter in ceph.conf defines the size of the DB? Can you confirm that...
  4.

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    Does this mean I need to create the json-files for LVM OSDs, too? If yes, how should I do this? If not, how can I ensure that OSD activation on startup works for LVM OSDs in case the files in /var/lib/ceph/osd/ceph-<id>/ are lost? (See the activation sketch after these results.)
  5.

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    All right. I've completed all activities on the servers with "simple" ceph-disk(s). The json-files in /etc/ceph/osd/ are complete now. I understand this as a precautionary measure in case the files in /var/lib/ceph/osd/ceph-<id>/ are lost (again). However, I don't understand how to fix a comparable issue...
  6.

    BlueFS spillover detected on 30 OSD(s)

    Based on my calculation I need much more SSD disk space: 260 HDDs x 2TB = 520TB total, 5% for the DB = 26TB, distributed over 4 nodes = 6.5TB per node (restated step by step after these results). Once I have the required SSD drives I will create the new DB storage location. Can you please advise how to proceed for the following 2 scenarios: 1. HDD - Single...
  7.

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    root@ld5507:~# ceph-volume simple scan /dev/sda1 Running command: /sbin/cryptsetup status /dev/sda1 --> OSD 172 got scanned and metadata persisted to file: /etc/ceph/osd/172-a7de0317-05da-4df5-be08-8b4401d76f10.json --> To take over management of this scanned OSD, and disable ceph-disk and udev...
  8.

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    No, I didn't create the json-files. And the directory /etc/ceph/osd/ does not exist. root@ld5508:~# ls -l /etc/ceph/ total 16 -rw------- 1 ceph ceph 161 May 28 14:33 ceph.client.admin.keyring lrwxrwxrwx 1 root root 18 May 28 14:33 ceph.conf -> /etc/pve/ceph.conf -rw-r----- 1 root root 704 Aug...
  9.

    BlueFS spillover detected on 30 OSD(s)

    Well, the partitions on SSD are created sequentially. The design now looks like this: sdbl 67:240 0 372,6G 0 disk ├─sdbl1 67:241 0 1G 0 part ├─sdbl2 67:242 0 1G 0 part ├─sdbl3 67:243 0 1G 0 part ├─sdbl4 67:244 0 1G 0 part ├─sdbl5 67:245...
  10.

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    Yes. I followed the upgrade guide and executed every single step, and actually everything was fine. However, since yesterday the issue has started. I cleaned up some packages from Debian 9, upgraded the PVE kernel and rebooted 2 of the 4 nodes. I also modified /etc/pve/ceph.conf in order to troubleshoot...
  11.

    BlueFS spillover detected on 30 OSD(s)

    Hi, thanks Alwin for the explanation. However, there's one thing that is not mentioned: with Nautilus all OSDs are now created using LVM when using the command pveceph createosd <device>. Before, this command created primary partitions with GPT. Or is this command obsolete now (see the pveceph sketch after these results)? It is still documented...
  12.

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    Hi, I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster. On 2 nodes I have identified that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting. Typically the content of this directory is this: root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/ total 60...
  13.

    BlueFS spillover detected on 30 OSD(s)

    Well, my issue is not OSD performance, therefore tuning was not my request. The issue is that my setup originated from Proxmox 5 + Ceph Luminous, where every OSD of type HDD has a 1GB journal on SSD. According to Ceph this is by far too small for block.db (see here): It is...
  14.

    BlueFS spillover detected on 30 OSD(s)

    Thanks for providing this link. This means my current Ceph setup is somehow obsolete, because the command pveceph osd create <hdd-device> --journal-dev <ssd-device> created a partition of size 1G on the SSD. What is the recommended procedure to correct this? THX
  15.

    BlueFS spillover detected on 30 OSD(s)

    Thank you for this hint. I corrected my posting accordingly.
  16.

    BlueFS spillover detected on 30 OSD(s)

    Hello! I'm facing the same issue, with the only difference that 192 OSD(s) are affected. When I created the OSD(s) in PVE 5 + Luminous, a 1GB partition was created on the SSD for the DB (metadata). Question: How can I determine the amount of spilled metadata (see the spillover sketch after these results)? I ran this command...
  17.

    [SOLVED] Restore / rebuild Ceph monitor after HW crash

    Hi, thanks for this input. After successfully removing the relevant node ld4464 from Ceph, the relevant error message is gone. What would be the next steps (a possible sequence is sketched after these results)? Do you advise re-adding this node ld4464 to the existing Ceph cluster? Or should I first fix the OSDs and ensure that they will start...
  18.

    [SOLVED] Restore / rebuild Ceph monitor after HW crash

    Hello! Due to an HD crash I was forced to rebuild a server node from scratch, meaning I installed the OS and Proxmox VE (apt install proxmox-ve postfix open-iscsi) fresh on the server. Then I installed Ceph (pveceph install) on this greenfield system. Then I ran pvecm add 192.168.10.11 -ring0_addr...
  19.

    Cannot stop OSD in WebUI

    Hi, I'm aware of the device classes (see the device-class sketch after these results). As far as I understand, Ceph can now identify the device class of a disk automatically. My intention was this: in order to ensure that a VM uses a specific disk type, say NVMe, I need to - define another root nvme - define a fake hostname, e.g. <hostname>-nvme -...
  20.

    Cannot stop OSD in WebUI

    Well, this means the CRUSH map is incorrect. But how should I reflect the different drive types (NVMe, SSD, HDD) in order to use the relevant rules?
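
A few sketches for the recurring how-to questions in the results above. Everything assumes Proxmox VE 6 with Ceph Nautilus, as in the quoted threads; device paths, OSD ids, volume names and sizes are placeholders unless they appear in the threads themselves.

For results 2, 3 and 14 (giving existing HDD OSDs a proper block.db on SSD), one possible sequence is to set the target DB size in ceph.conf, carve the shared SSD into one logical volume per OSD and attach each LV with ceph-bluestore-tool. This is only a sketch, not the single supported path:

    # /etc/ceph/ceph.conf, [global]: size used for newly created DB volumes
    # 53687091200 bytes = 50 GiB, as in the quoted post
    bluestore_block_db_size = 53687091200

    # Carve the shared SSD (/dev/sdz is a placeholder) into one LV per OSD
    pvcreate /dev/sdz
    vgcreate ceph-db /dev/sdz
    lvcreate -L 50G -n db-osd-70 ceph-db

    # Attach the LV as block.db to OSD 70 while that OSD is stopped
    systemctl stop ceph-osd@70
    ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-70 \
        --dev-target /dev/ceph-db/db-osd-70
    systemctl start ceph-osd@70
    # Note: ceph-volume managed OSDs may additionally need matching LVM tags
    # (ceph.db_device / ceph.db_uuid) so that activation finds the new DB device.

For OSDs that already have a too small DB partition (result 14), ceph-bluestore-tool also offers bluefs-bdev-migrate and bluefs-bdev-expand to move or grow an existing block.db; recreating the OSD with a larger DB is a valid alternative.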
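
For the empty /var/lib/ceph/osd/ceph-<id>/ directories (results 4, 5, 7, 8, 10 and 12), the activation path depends on how the OSD was created: ceph-disk based OSDs are taken over via ceph-volume simple and its json files, while LVM based OSDs need no json files because ceph-volume reads their metadata from LVM tags. A sketch, again assuming Nautilus:

    # ceph-disk (partition based) OSDs: scan, persist json to /etc/ceph/osd/, activate
    ceph-volume simple scan /dev/sda1     # or plain 'ceph-volume simple scan' for all running OSDs
    ceph-volume simple activate --all

    # LVM based OSDs: metadata is stored in LVM tags, no json files are required
    ceph-volume lvm list                  # show OSDs discovered from LVM tags
    ceph-volume lvm activate --all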
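
The sizing estimate in result 6, restated step by step; the 5% ratio is the poster's assumption, not a fixed rule:

    # 260 HDD OSDs x 2 TB      = 520 TB raw capacity
    # 5% of 2 TB per OSD       = 100 GB of block.db per OSD
    # 260 OSDs x 100 GB        = 26 TB of SSD in total
    # 26 TB over 4 nodes       = 6.5 TB of SSD per node (65 OSDs x 100 GB each)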
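
Regarding result 11: on Nautilus, pveceph osd create (the newer spelling of pveceph createosd) builds LVM based OSDs through ceph-volume. A sketch of creating an HDD OSD with its DB on an SSD; the --db_dev and --db_size option names are taken from the PVE 6 tooling, so verify them against the pveceph man page of your version:

    # BlueStore OSD on an HDD with block.db on an SSD (size in GiB);
    # /dev/sdX and /dev/sdY are placeholders
    pveceph osd create /dev/sdX --db_dev /dev/sdY --db_size 50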
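
For the question in result 16 (how much metadata has spilled over), the BlueFS counters on the OSD's admin socket show DB and slow-device usage. Run this on the node hosting the OSD; the grep pattern is only illustrative:

    # slow_used_bytes > 0 means RocksDB data has spilled onto the HDD
    ceph daemon osd.<id> perf dump bluefs | grep -E '"(db|slow)_(total|used)_bytes"'

    # cluster-wide view of the warning
    ceph health detail | grep -i spillover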
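
For results 17 and 18 (node rebuilt after the crash), one possible order for the next steps, assuming the PVE 6 command name pveceph mon create (pveceph createmon on PVE 5) and that the OSD data disks survived:

    # On the rebuilt node, after it has rejoined the Proxmox cluster with pvecm add:
    pveceph mon create        # re-create the monitor that was lost with the node
    ceph -s                   # verify that the monitor joined and quorum is healthy

The node's surviving OSDs can then be brought back with the activation commands from the sketch above.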
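
For results 19 and 20: instead of a fake hostname per drive type, CRUSH device classes can steer a pool onto NVMe, SSD or HDD OSDs directly. A sketch with a placeholder OSD id, rule name and pool name:

    # Inspect and, if needed, correct the auto-detected class of an OSD
    ceph osd crush tree --show-shadow
    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class nvme osd.12

    # Replicated rule restricted to the nvme class, then assigned to a pool
    ceph osd crush rule create-replicated nvme-only default host nvme
    ceph osd pool set vm-pool crush_rule nvme-only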
