osd

  1. S

    [SOLVED] Ceph bluestore WAL and DB size – WAL on external SSD

    Hi All, I'm setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks: 7x 6TB 7200 RPM Enterprise SAS HDD, 2x 3TB Enterprise SAS SSD, 2x 400GB Enterprise SATA SSD. This setup was previously used for an old Ceph (Filestore) cluster, where it was configured to use the 2x 400GB SATA SSDs to... (a DB/WAL sizing sketch follows after this list)
  2. M

    Optimal Proxmox Ceph cluster settings for 5 nodes where 2 nodes are sometimes offline and exist only for OSD replica backup

    Hi, I have 5 Proxmox nodes + 5 OSDs + 5 monitors + 5 managers (all nodes have the same lineup but different HDD sizes and different hardware); the 5th was added last, and I see this: Ceph settings: I have only 1 pool, for all VMs. I sometimes disable 2 Proxmox nodes so that only 3 Proxmox nodes are available. You...
  3. M

    Smaller-sized OSD - bigger VM, what happens? Ceph OSD, replication settings.

    Hi! Please help me understand, just to be clear. In this example everything works fine, Ceph health is OK, VMs are stored in Ceph, and by default only node1 runs the VMs: node1: 1000 GB OSD (1 HDD), node2: 1000 GB OSD (1 HDD), node3: 1000 GB OSD (1 HDD), node4: 500 GB OSD (1 HDD), node5: 500 GB + 500 GB OSD (2...
  4. K

    Ceph OSD Renamed /dev/sdX

    During a disaster "test" (randomly removing drives), I pulled one OSD and one drive that held the journals for 4 other OSDs, so I assumed this would down/out 4, possibly 5, OSDs. Upon re-inserting the drives they were given different /dev/sd[a-z] names; now the journal disk has the sd[a-z] name... (a persistent-naming sketch follows after this list)
  5. A

    [SOLVED] OSDs still exist after removal of a node

    Hi, I removed one node from the cluster using 'pvecm delnode node_name'. (Before executing this, I removed the node from the monitor and manager lists and shut it down.) Now the OSDs are in down/out status and I am unable to remove them from the GUI (since the node has already been removed). How can I remove... (an OSD removal sketch follows after this list)
  6. E

    [SOLVED] Cannot create OSD for Ceph

    Cannot create an OSD for Ceph. Same error in GUI and terminal: # pveceph osd create /dev/nvme0n1 Error: any valid prefix is expected rather than "". command '/sbin/ip address show to '' up' failed: exit code 1 The only thing I can think of that changed since it last worked is that I now have two... (a network-config sketch follows after this list)
  7. C

    [SOLVED] Ceph - No such OSD

    Hello, I'm trying out Ceph to see if it would fit our needs. In my testing I tried removing all OSDs to change the disks to SSDs. The issue I'm facing is that when trying to delete the last OSD I get hit with the error "no such OSD" (see attached screenshot). The command line returns no OSDs, so it's like... (the OSD removal sketch after this list applies here as well)
  8. K

    Ceph under Proxmox: Need tips for expansion

    Hello, I am currently running Proxmox 5.4-13 with 3 nodes. All nodes are identically equipped and also host Ceph. Each node currently has 4 OSDs, each on a 1 TB SSD. Ceph runs 3/2; I gradually raised the placement groups from 256 to 896. Here, unfortunately, the...
  9. P

    After update of Nov 14 - monitors fail to start

    I ran the updates, which installed a new kernel. After the reboot the monitor did not start. Attempted to start it from the command line: systemctl status ceph-mon@proxp01.service ● ceph-mon@proxp01.service - Ceph cluster monitor daemon Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled... (a debugging sketch follows after this list)
  10. G

    Ceph OSD DB and WAL size

    Hello guys! I have a big question about the Ceph cluster and I need your help or your opinion. I installed a simple 3-node setup with Ceph. Each node has 2x 146 GB HW RAID 1 + 18x 600 GB 10k SAS without RAID. (In summary we have 54 OSD devices and we have to buy 3 SSDs for the journal.) And my big... (see the DB/WAL sizing sketch after this list)
  11. C

    Cannot start any KVM / LXC

    Hi, here I describe 1 of the 2 major issues I'm currently facing in my 8-node Ceph cluster (2x MDS, 6x OSD). The issue is that I cannot start any KVM virtual machine or LXC container; the boot process just hangs after a few seconds. All these KVMs and LXCs have in common that their virtual...
  12. K

    [SOLVED] ceph OSD creation using a partition

    Hi all. I need help creating an OSD on a partition. On our server we are provided with 2 NVMe drives in RAID-1. This is the partition layout: root@XXXXXXXX:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme1n1 259:0 0 1.8T 0 disk ├─nvme1n1p1 259:1 0 511M 0 part... (a partition-OSD sketch follows after this list)
  13. C

    [SOLVED] How to define OSD weight in the CRUSH map

    Hi, after adding an OSD to Ceph it is advisable to create a corresponding entry in the CRUSH map with a weight that depends on the disk size. Example: ceph osd crush set osd.<id> <weight> root=default host=<hostname> Question: How is the weight defined depending on the disk size? Which algorithm can be... (a weight-calculation sketch follows after this list)
  14. C

    How to add a DB device to BlueFS

    Hi, I have created OSDs on HDDs without putting the DB on a faster drive. In order to improve performance I now have a single 3.8TB SSD. Questions: How can I add a DB device on this new SSD for every single OSD? Which parameter in ceph.conf defines the size of the DB? Can you confirm that... (a DB-migration sketch follows after this list)
  15. C

    [SOLVED] Directory /var/lib/ceph/osd/ceph-<id>/ is empty

    Hi, I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster. On 2 nodes I have found that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting. Typically the content of this directory is: root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/ total 60... (an activation sketch follows after this list)
  16. B

    [SOLVED] OSD Encryption

    Hi guys, I notice that OSD encryption is available under 'Create OSD'. Is there any mechanism to show which OSDs are encrypted, in the UI or on the command line? I made some encrypted and some non-encrypted OSDs to gauge performance, but I am unable to tell which are which. (An encryption-check sketch follows after this list.)
  17. I

    Ceph storage usage is wrong

    Hello everyone, I have a 3-node cluster with 3 OSDs in each of these nodes. My Ceph version is 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable). The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup stored on an NFS share, the...
  18. H

    Ceph on Raid0

    Dear Proxmox team, we have 3 Dell servers running Proxmox 6.0. Unfortunately we encountered an issue with the setup of Ceph OSDs on individual drives. The main problem is that the PERC H710 Mini adapter does not allow "IT mode" / JBOD passthrough of individual drives, so we're stuck...
  19. R

    No bootable device - migrated servers cannot be rebooted

    Hello everyone! We have about 20 VMs running in a three-node cluster with Ceph. Half of the VMs were migrated with Clonezilla (P2V); the other half I converted to raw from VMware. Now the problem: a few weeks ago an Ubuntu 18.04 VM hung; this one was from VMware...
  20. TwiX

    Ceph Nautilus - RAID 0

    Hi, first things first: I know it is not recommended to run Ceph on top of RAID 0 disks; however, that's what I did on 4 Dell R430 servers with a PERC 730 (with 6x 15k SAS drives). I get pretty decent performance with it and have had absolutely no issues for the last 2 years. With full-SSD nodes I don't use...
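
For the WAL/DB sizing questions (threads 1 and 10), a minimal sketch of how the sizes can be set, assuming a PVE 6 / Nautilus BlueStore setup; the 60 GiB / 2 GiB figures and all device names are placeholders, not recommendations from the threads, and --db_dev/--db_size is the PVE 6 pveceph syntax.

    # Cluster-wide defaults for newly created OSDs, in bytes (example values):
    #   /etc/pve/ceph.conf, [global] section
    #   bluestore_block_db_size  = 64424509440   # 60 GiB block.db per OSD
    #   bluestore_block_wal_size = 2147483648    # 2 GiB block.wal, only if the WAL gets its own device
    # Or size the DB per OSD at creation time (db_size is in GiB):
    pveceph osd create /dev/sdX --db_dev /dev/nvme0n1 --db_size 60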
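
For the renamed /dev/sdX journal disk (thread 4): FileStore journals are normally linked by partition UUID rather than by the sdX name, so a rename by itself is usually harmless; a quick way to check where each OSD's journal actually points:

    # Journal symlinks point at persistent by-partuuid paths, which survive sdX renames
    ls -l /var/lib/ceph/osd/ceph-*/journal
    # Map the partition UUIDs back to the current /dev/sdX names
    ls -l /dev/disk/by-partuuid/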
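
For the leftover and "no such OSD" removals (threads 5 and 7), a minimal sketch, assuming Ceph Luminous or newer where ceph osd purge is available; replace 12 with the real OSD id shown by the tree:

    # See which OSD ids the cluster map still contains
    ceph osd tree
    # Mark the OSD out, then remove it from the CRUSH map, auth keys and OSD map in one step
    ceph osd out osd.12
    ceph osd purge 12 --yes-i-really-mean-it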
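
For the "any valid prefix is expected" error (thread 6): pveceph resolves the public/cluster network from /etc/pve/ceph.conf before creating the OSD, so an empty or missing entry is the usual suspect; a sketch of what to check, with 10.10.10.0/24 purely as a placeholder subnet:

    # Both entries must be set and must match an interface that is up on this node
    grep -E 'public_network|cluster_network' /etc/pve/ceph.conf
    # This is roughly the lookup that failed with the empty prefix:
    ip address show to 10.10.10.0/24 up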
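
For the monitor that no longer starts after the update (thread 9), systemctl status truncates the output; the actual error is in the journal, and the daemon can also be run in the foreground (proxp01 is the mon id from the thread):

    # Full log for the current boot, including the lines systemctl status cuts off
    journalctl -u ceph-mon@proxp01.service -b --no-pager
    # Run the monitor in the foreground to see the error directly
    ceph-mon -f --cluster ceph --id proxp01 --setuser ceph --setgroup ceph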
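
For the OSD-on-a-partition question (thread 12): pveceph osd create expects a whole unused disk, but ceph-volume can consume a partition directly; a minimal sketch, with the partition number purely an assumption since the lsblk output is truncated:

    # Create a BlueStore OSD on an existing partition (ceph-volume accepts partitions as well as LVs)
    ceph-volume lvm create --data /dev/nvme1n1p5
    # Verify the new OSD was picked up
    ceph-volume lvm list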
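
For the CRUSH weight question (thread 13): by convention the weight is simply the device capacity expressed in TiB, and Ceph sets this automatically when the OSD is created; osd.7 and the host name below are placeholders:

    # weight = capacity in TiB, e.g. a 6 TB disk: 6,000,000,000,000 B / 1,099,511,627,776 B/TiB ≈ 5.46
    ceph osd crush set osd.7 5.46 root=default host=node1
    # Check the resulting weights
    ceph osd tree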
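
For moving the DB of existing OSDs onto the new SSD (thread 14), a rough sketch, assuming Nautilus or newer; the VG/LV names and the 60 GiB size are assumptions, and depending on the exact release the ceph-volume metadata may need extra care afterwards, so test on one OSD first. The ceph.conf parameter that sizes the DB for newly created OSDs is bluestore_block_db_size; existing OSDs need a migration like the one below.

    # One volume group on the SSD and one logical volume per OSD (names and size are placeholders)
    vgcreate ceph-db /dev/sdY
    lvcreate -L 60G -n db-osd0 ceph-db
    # The OSD must be stopped before its BlueFS layout is changed
    systemctl stop ceph-osd@0
    # Attach the new LV as a dedicated block.db device
    ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/ceph-db/db-osd0
    systemctl start ceph-osd@0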
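
For the empty /var/lib/ceph/osd/ceph-<id>/ directories (thread 15): with ceph-volume BlueStore OSDs these directories are tmpfs mounts that get re-populated at activation time, so an empty directory after a reboot usually means activation did not run; a sketch:

    # Re-activate all ceph-volume managed OSDs on this node (re-creates the tmpfs contents)
    ceph-volume lvm activate --all
    # Check the per-OSD activation units that should run at boot
    systemctl list-units --all 'ceph-volume@*'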
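
For telling encrypted and plain OSDs apart (thread 16), a sketch using ceph-volume, which records an encrypted flag per OSD; the grep pattern assumes the usual ceph-volume output layout:

    # "encrypted  1" means a dmcrypt OSD, "encrypted  0" a plain one
    ceph-volume lvm list | grep -E 'osd id|encrypted'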
