osd

  1. After update of Nov 14 - monitors fail to start

    I ran the updates, which installed a new kernel. After the reboot the monitor did not start. Attempted to start it from the command line: systemctl status ceph-mon@proxp01.service ● ceph-mon@proxp01.service - Ceph cluster monitor daemon Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled...
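For failures like this, a first triage usually checks the unit state and the monitor's own log before anything else. A minimal sketch of that, assuming the hostname `proxp01` from the post (substitute your own) and that these commands run on the affected node:

```shell
# Check the monitor unit and its recent journal output.
systemctl status ceph-mon@proxp01.service
journalctl -u ceph-mon@proxp01.service -b --no-pager | tail -n 50

# The monitor's own log file often contains the actual error:
tail -n 50 /var/log/ceph/ceph-mon.proxp01.log

# If the failure was transient (e.g. the network was not up yet
# when the new kernel booted), a manual start may succeed:
systemctl start ceph-mon@proxp01.service
```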
  2. Ceph OSD db and wal size

    Hello guys! I have a big question about our Ceph cluster and I need your help or your opinion. I installed a simple 3-node setup with Ceph. Each node has 2x 146 GB HW RAID 1 + 18x 600 GB 10k SAS without RAID. (In total we have 54 OSD devices and we have to buy 3 SSDs for the journal.) And my big...
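For sizing questions like this one, the upstream BlueStore guidance is commonly summarized as a block.db of roughly 1-4% of the data device. A back-of-envelope sketch for the layout in the post (18x 600 GB per node, assuming one journal SSD per node — an assumption, since the post only says "3 SSD" for the 3 nodes):

```shell
# Rule-of-thumb sketch, assuming the common 1-4% block.db guidance.
osd_gb=600                       # one 600 GB SAS OSD from the post
db_min=$(( osd_gb / 100 ))       # 1% -> 6 GB per OSD
db_max=$(( osd_gb * 4 / 100 ))   # 4% -> 24 GB per OSD
ssd_gb=$(( 18 * db_max ))        # one SSD per node serving 18 OSDs
echo "$db_min $db_max $ssd_gb"   # per-OSD min, per-OSD max, per-SSD total
```

So at the 4% end, each node's SSD would need around 432 GB of DB space just for its 18 OSDs, before considering WAL or endurance.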
  3. Cannot start any KVM / LXC

    Hi, here I describe one of the two major issues I'm currently facing in my 8-node Ceph cluster (2x MDS, 6x OSD). The issue is that I cannot start any virtual machine (KVM) or container (LXC); the boot process just hangs after a few seconds. All these KVMs and LXCs have in common that their virtual...
  4. [SOLVED] ceph OSD creation using a partition

    Hi all, I need help creating an OSD on a partition. On our server we are provided with 2 NVMe drives in RAID-1. This is the partition layout: root@XXXXXXXX:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme1n1 259:0 0 1.8T 0 disk ├─nvme1n1p1 259:1 0 511M 0 part...
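The Proxmox tooling (`pveceph osd create`) only accepts whole disks, but on Nautilus an OSD can be created on an existing partition with `ceph-volume` directly. A sketch, assuming a hypothetical spare partition `/dev/nvme1n1p4` (adjust to your actual lsblk layout):

```shell
# /dev/nvme1n1p4 is a hypothetical partition name; use your own from lsblk.
ceph-volume lvm create --data /dev/nvme1n1p4

# Verify the new OSD was created and activated:
ceph-volume lvm list
```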
  5. [SOLVED] Howto define OSD weight in Crush map

    Hi, after adding an OSD to Ceph it is advisable to create a relevant entry in the CRUSH map using a weight depending on disk size. Example: ceph osd crush set osd.<id> <weight> root=default host=<hostname> Question: How is the weight defined depending on disk size? Which algorithm can be...
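The convention used by Ceph's own tooling is that the CRUSH weight equals the device's size in TiB. A small sketch computing it from a byte count (the 4 TB figure is a made-up example; `lsblk -b` reports the real size):

```shell
# CRUSH weight is conventionally size-in-bytes / 2^40 (i.e. size in TiB).
disk_bytes=4000787030016   # hypothetical 4 TB drive, as lsblk -b would report
weight=$(awk -v b="$disk_bytes" 'BEGIN { printf "%.4f", b / (1024 ^ 4) }')
echo "$weight"             # roughly 3.64 for a 4 TB drive
```

Note that `ceph-volume` assigns this weight automatically when an OSD is created, so a manual `ceph osd crush set` is mainly needed when correcting an existing entry.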
  6. Howto add DB device to BlueFS

    Hi, I have created OSDs on HDDs without putting the DB on a faster drive. In order to improve performance I now have a single SSD drive with 3.8 TB. Questions: How can I add a DB device for every single OSD on this new SSD drive? Which parameter in ceph.conf defines the size of the DB? Can you confirm that...
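On Nautilus, a DB device can be attached to an existing BlueStore OSD with `ceph-bluestore-tool`. A hedged sketch, with hypothetical OSD id and LV names, assuming the SSD is already a PV in a volume group `vg_ssd` (the OSD must be stopped during the operation):

```shell
systemctl stop ceph-osd@0

# Carve a hypothetical LV for this OSD's DB out of the 3.8 TB SSD:
lvcreate -L 300G -n db-osd0 vg_ssd

# Attach it to the OSD as a new block.db:
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/vg_ssd/db-osd0

systemctl start ceph-osd@0
```

As for ceph.conf: `bluestore_block_db_size` only influences the DB size chosen at OSD creation time; for an after-the-fact migration like this, the size is whatever you give the target LV.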
  7. [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    Hi, I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster. On 2 nodes I have identified that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting. Typically the content of this directory is this: root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/ insgesamt 60...
  8. [SOLVED] OSD Encryption

    Hi guys, I noticed that OSD encryption is available under "Create OSD". Is there any mechanism to show which OSDs are encrypted, in the UI or on the command line? I made some encrypted and some non-encrypted to gauge performance but am unable to differentiate which are which.
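The UI does not expose this, but `ceph-volume` records an encryption flag in its inventory, so a command-line check on each node could look like the following sketch:

```shell
# Full per-OSD detail; each OSD block includes an "encrypted" field
# (1 = dmcrypt in use, 0 = plain):
ceph-volume lvm list

# Or reduce the output to just the OSD headers and that field:
ceph-volume lvm list | grep -E 'osd id|encrypted'
```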
  9. Ceph storage usage is wrong

    Hello everyone, I have a 3-node cluster with 3 OSDs in each of these nodes. My Ceph version is: 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable). The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup stored on an NFS share, the...
  10. Ceph on Raid0

    Dear Proxmox team, we have 3 Dell servers running Proxmox 6.0. Unfortunately we encountered an issue with the setup of Ceph OSDs on individual drives. The main problem is that the given PERC H710 Mini adapter does not allow "IT mode" / JBOD passthrough of individual drives, and so we're stuck...
  11. No bootable device - migrated servers cannot be rebooted

    Hello everyone! We have about 20 VMs running in a three-node cluster with Ceph. Half of the VMs were migrated with Clonezilla (P2V), the other half I converted to raw from VMware. Now the problem: a few weeks ago an Ubuntu 18.04 VM hung up; this one was from VMware...
  12. Ceph nautilus - Raid 0 (posted by TwiX)

    Hi, first things first: I know it is not recommended to use RAID 0 disks beneath Ceph, however that's what I did on 4 Dell R430 servers with PERC 730 (with 6x 15k SAS drives). I have pretty decent performance with it and absolutely no issues for the last 2 years. With full-SSD nodes I don't use...
  13. [pve6] ceph lumi to nautilus guide problem: Required devices (block and data) not present for bluestore (posted by grin)

    # ceph-volume simple scan stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument Running...
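The error above suggests `ceph-volume simple scan` was pointed at the mount point rather than a device; the documented form during the Luminous-to-Nautilus migration takes the legacy OSD's data partition. A sketch with a hypothetical device name:

```shell
# Scan one legacy ceph-disk OSD by its data partition, not its mount point
# (/dev/sdd1 is a hypothetical example):
ceph-volume simple scan /dev/sdd1

# Then activate from the JSON metadata it wrote under /etc/ceph/osd/:
ceph-volume simple activate --all
```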
  14. Unable to create Ceph OSD

    Hello there, I recently upgraded from Proxmox 5 to 6, as well as from Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs I have in my cluster. I ran into an issue with the second OSD I wanted to convert (the first went fine). Here's what I get after I zap the disk: pveceph...
  15. Proxmox 6.0 CEPH OSD's don't show down

    Hi everyone, we have been running Proxmox 5.4 with Ceph for a while now without any issues. We recently built a brand-new 5-node cluster using Proxmox 6.0. We had no issues getting our OSDs up and running. However, whenever we physically pull a drive, the associated OSD does not show down...
  16. [SOLVED] Proxmox VE 6.0: New Ceph OSD, but GPT = No

    Just upgraded a 3-node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for the PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI...
  17. Cannot stop OSD in WebUI

    Hi, I want to stop several OSDs to start node maintenance. However, I get an error message indicating a communication error. Please check the attached screenshot for details. Note that I have defined different storage types (NVMe, HDD, etc.) in the CRUSH map in order to use different...
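When the GUI times out like this, the same maintenance steps work from a shell on the affected node. A common sequence (the OSD ids are hypothetical examples):

```shell
# Keep CRUSH from rebalancing while the node's OSDs are down:
ceph osd set noout

# Stop the OSDs on this node (ids are examples):
systemctl stop ceph-osd@3 ceph-osd@4

# ...perform the maintenance, then bring them back:
systemctl start ceph-osd@3 ceph-osd@4
ceph osd unset noout
```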
  18. Proxmox Ceph OSD Partition Created With Only 10GB

    How do you define the Ceph OSD disk partition size? It always creates only 10 GB of usable space. Disk size = 3.9 TB, partition size = 3.7 TB. Using *ceph-disk prepare* and *ceph-disk activate* (see below), the OSD is created, but only with 10 GB, not 3.7 TB. Commands used: root@proxmox:~#...
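This symptom matches BlueStore's behavior when `ceph-disk` falls back to a file-backed block device: the `bluestore_block_size` option defaults to 10 GiB, which is exactly the 10 GB the poster sees. OSDs created on a whole raw device are not affected; if a file-backed block really is intended, the size has to be raised in ceph.conf before the OSD is created. A config sketch (the byte value is a hypothetical example):

```
[osd]
# Only used when BlueStore creates its block device as a file;
# the default is 10 GiB (10737418240 bytes).
bluestore_block_size = 4000000000000   # ~3.6 TiB, hypothetical
```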
  19. Adding an OSD requires root

    We've split users into groups and allocated different roles to each group (essentially admins and non-admin users). Non-admin users are limited to performing VM functions, while users in the admin group have the built-in role "Administrator" applied. This should give them the ability to perform...
  20. Analyzing Ceph load

    Hello, I want to understand whether I am reaching a bottleneck in my hyperconverged Proxmox + Ceph cluster. In moments of high load (multiple LXCs with high I/O on small files) I see one node with: - IO delay at 40% - around 50% CPU usage - load at 200 (40 total cores / 2x Intel(R) Xeon CPU E5-2660...
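The usual way to tell whether the disks or the CPU are the bottleneck is to correlate per-OSD latency with per-device utilization on the busy node. A sketch of the common commands:

```shell
# Per-OSD commit/apply latency as the cluster sees it:
ceph osd perf

# Any slow ops or warnings the cluster is currently tracking:
ceph health detail

# Per-device utilization and await on the busy node
# (iostat comes from the sysstat package):
iostat -x 5 3
```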
