osd

  1. jsterr

    OSDs are filestore at creation, although not specified

    I reinstalled one of the three Proxmox Ceph nodes with a new name and new IP. I removed all LVM data and wiped the filesystems of the old disks with: dmsetup remove_all; wipefs -af /dev/sda; ceph-volume lvm zap /dev/sda. Now when I create OSDs via GUI or CLI they are always filestore and I don't get it, the default should...
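
    As a reference for this recurring cleanup, a typical sequence to fully wipe a previously used disk before recreating it as a bluestore OSD might look like the sketch below (/dev/sda as in the post; on current releases pveceph defaults to bluestore, so getting filestore usually points at leftover state):

      # remove stale device-mapper/LVM state from the old OSD
      dmsetup remove_all
      ceph-volume lvm zap --destroy /dev/sda
      # clear any remaining filesystem or partition signatures
      wipefs -af /dev/sda
      # recreate the OSD; bluestore is the default object store
      pveceph osd create /dev/sda
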
  2. G

    Crushmap vanished after Networking error

    Hi, I have a worst case: in a 3-node cluster with 4 NVMe drives each, the OSDs won't start. We had an IP config change on the public network and the mons died, but we managed to bring the mons back with new IPs. Corosync on 2 rings is fine and all 3 mons are up, yet the OSDs won't start. How do we get back to the pool? Already...
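
    For context on this class of failure: OSDs locate the monitors through ceph.conf, so after a mon re-IP the first things to check are the mon_host entries and one OSD's startup log. A minimal sketch, with osd.0 standing in for any OSD ID:

      # make sure the new monitor IPs are listed here
      grep mon_host /etc/pve/ceph.conf
      # restart a single OSD and watch why it fails to join
      systemctl restart ceph-osd@0.service
      journalctl -u ceph-osd@0.service -f
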
  3. F

    Ceph under Proxmox: OSDs are not always displayed

    Hello, I am currently running Proxmox 6.2-1 with 3 nodes. All nodes are identically equipped and host Ceph. When I go to Ceph > OSD on one of the 3 nodes, the existing OSDs are not always shown. Sometimes I have to refresh the view several times before I...
  4. F

    [SOLVED] Ceph under Proxmox: create OSD

    Hello, I am currently running Proxmox 6.2-1 with 3 nodes. All nodes are identically equipped and host Ceph. On one of the 3 nodes I have the problem that I cannot create an OSD. After I select the disk and click "Create", for a brief moment...
  5. I

    [SOLVED] OSD Create Failing

    Hey guys, I am having an issue when creating an OSD, and I'm not sure why. What would cause the below error on creating the OSD? command 'ceph-volume lvm create --cluster-fsid 248fab2c-bd08-43fb-a562-08144c019785 --data /dev/sdd' failed: exit code 1
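
    When ceph-volume exits with code 1 and no useful message, its own log usually carries the real error; a short sketch (assuming leftover signatures on /dev/sdd, a common cause of exactly this failure):

      # the underlying failure is normally recorded here
      tail -n 50 /var/log/ceph/ceph-volume.log
      # if old LVM/filesystem remnants are the cause, zap the disk first
      ceph-volume lvm zap --destroy /dev/sdd
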
  6. O

    Recreating Ceph, cannot create OSD

    Hello! I've been playing with Ceph and, after initially getting it working, I realised I had everything on the wrong network. So I deleted and removed everything, uninstalled and reinstalled Ceph, and began recreating the cluster using the correct network. I had to destroy / zap some drives on...
  7. R

    Ceph OSD encryption with manual key entry

    Hello, we are currently evaluating a move to Proxmox in an enterprise environment. Encryption of all sensitive data is indispensable for us. Yes, we are aware of cold boot attacks, but we want to raise the bar further in the event of physical theft of the servers...
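
    For reference, PVE does support dmcrypt-encrypted OSDs out of the box, though the LUKS key is then stored in the monitors' config-key store rather than entered manually at boot, which is presumably the crux of this thread (/dev/sdX is a placeholder):

      # create an encrypted bluestore OSD; the dmcrypt key lives in the mon store
      pveceph osd create /dev/sdX --encrypted 1
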
  8. S

    [SOLVED] Ceph bluestore WAL and DB size – WAL on external SSD

    Hi all, I'm setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks: 7x 6TB 7200 rpm Enterprise SAS HDD, 2x 3TB Enterprise SAS SSD, 2x 400GB Enterprise SATA SSD. This setup was previously used for an old Ceph (filestore) cluster, where it was configured to use the 2x 400GB SATA SSDs to...
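
    As context for the question: with bluestore, pveceph can place the DB on a faster device, and the WAL follows the DB unless it is split off separately. A sketch, under the assumption that /dev/sdX is one of the HDDs and /dev/sdY one of the 400GB SSDs:

      # DB (and implicitly WAL) on the SSD; db_size is in GiB
      pveceph osd create /dev/sdX --db_dev /dev/sdY --db_size 60
      # the WAL can also go to a third device if desired
      pveceph osd create /dev/sdX --db_dev /dev/sdY --wal_dev /dev/sdZ
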
  9. M

    Optimal Proxmox Ceph cluster settings for 5 nodes, where 2 nodes are sometimes offline and exist only for OSD replica backup

    Hi, I have 5 Proxmox nodes + 5 OSDs + 5 monitors + 5 managers (all nodes have the same lineup but different HDD sizes and different hardware; the 5th was added last), and I see this: Ceph settings: I have only 1 pool, for all VMs. I sometimes disable 2 Proxmox nodes so that only 3 Proxmox nodes are available. You...
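
    For context: with the default one-replica-per-host CRUSH rule and size 3 / min_size 2, taking 2 of 5 nodes offline leaves any PG that had replicas on both of them below min_size, blocking IO until Ceph re-replicates onto the remaining three hosts. The settings in question are set per pool (pool name is a placeholder):

      ceph osd pool set <poolname> size 3
      ceph osd pool set <poolname> min_size 2
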
  10. M

    Smaller OSD, bigger VM: what happens? Ceph OSD and replication settings

    Hi! Please help me understand this clearly. Example where everything works fine: Ceph health is OK, VMs are stored in Ceph, and by default only node1 runs VMs: node1: 1000 GB OSD (1 HDD), node2: 1000 GB OSD (1 HDD), node3: 1000 GB OSD (1 HDD), node4: 500 GB OSD (1 HDD), node5: 500 GB + 500 GB OSD (2...
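
    Rough arithmetic for this layout: roughly 4500 GB raw across the five nodes divided by 3 replicas gives at most ~1500 GB usable, but since CRUSH places one replica per host, the single 500 GB OSD on node4 fills first and effectively caps the pool earlier. These commands show the real numbers:

      # per-OSD fill level and CRUSH weight; the fullest OSD is the limit
      ceph osd df tree
      # MAX AVAIL here already accounts for replication and imbalance
      ceph df
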
  11. K

    Ceph OSD Renamed /dev/sdX

    During a disaster "test" (randomly removing drives), I pulled one OSD and one drive that held the journals for 4 other OSDs, so I assumed this would mark 4, possibly 5, OSDs down/out. Upon re-inserting the drives they were given different /dev/sd[a-z] labels, and now the journal disk has the /dev/sd[a-z] label...
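
    Assuming these are filestore OSDs created by ceph-disk, the journal is a symlink that should point at a stable /dev/disk/by-partuuid/... name rather than a raw /dev/sdX node, which is how OSDs normally survive device renames; a quick check:

      # each OSD's journal symlink and its stable target
      ls -l /var/lib/ceph/osd/ceph-*/journal
      ls -l /dev/disk/by-partuuid/
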
  12. A

    [SOLVED] OSDs still exist after removal of a node

    Hi, I removed one node from the cluster using 'pvecm delnode node_name'. (Before executing this, I removed the node from the monitor and manager lists and shut it down.) Now the OSDs are in down/out status and I am unable to remove them from the GUI (since the node is already removed). How can I remove...
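
    Once the node itself is gone, leftover OSDs can only be removed via the Ceph CLI; a minimal sketch, with osd.7 standing in for each stale ID:

      ceph osd crush remove osd.7   # drop it from the CRUSH map
      ceph auth del osd.7           # remove its cephx key
      ceph osd rm osd.7             # delete the OSD entry itself
      # on Luminous and later, 'ceph osd purge osd.7 --yes-i-really-mean-it'
      # combines all three steps
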
  13. E

    [SOLVED] Cannot create OSD for Ceph

    Cannot create an OSD for Ceph; same error in GUI and terminal: # pveceph osd create /dev/nvme0n1 Error: any valid prefix is expected rather than "". command '/sbin/ip address show to '' up' failed: exit code 1 The only thing I can think of, since the last time it worked, is that I now have two...
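
    That failing '/sbin/ip address show to '' up' call suggests pveceph found no network to derive the OSD address from, typically an empty public_network/cluster_network in ceph.conf; one way to check (the CIDR below is only an example):

      grep -E 'public_network|cluster_network' /etc/pve/ceph.conf
      # expected form, adjusted to your Ceph subnet:
      #   public_network = 192.168.10.0/24
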
  14. C

    [SOLVED] Ceph - No such OSD

    Hello, I'm trying out Ceph to see if it would fit our needs. In my testing I tried removing all OSDs to change the disks to SSDs. The issue I'm facing is that when trying to delete the last OSD I get the error "no such OSD" (see attached screenshot). The command line returns no OSD, so it's like...
  15. K

    Ceph under Proxmox: need tips on expansion

    Hello, I am currently running Proxmox 5.4-13 with 3 nodes. All nodes are identically equipped and also host Ceph. Each node currently has 4 OSDs, each on a 1 TB SSD. Ceph runs 3/2. I have gradually raised the placement groups from 256 to 896. Unfortunately, here the...
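
    For sizing reference, the classic rule of thumb is pg_num ≈ (number of OSDs × 100) / replica count, rounded up to a power of two; with 12 OSDs and size 3 that gives ~400 → 512, so 896 (not a power of two) overshoots it:

      # 12 OSDs × 100 / 3 ≈ 400 -> next power of two is 512
      ceph osd pool set <poolname> pg_num 512
      # note: before Nautilus, pg_num can only be increased, never decreased
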
  16. P

    After update of Nov 14 - monitors fail to start

    I ran the updates, which installed a new kernel. After the reboot the monitor did not start. I attempted to start it from the command line: systemctl status ceph-mon@proxp01.service ● ceph-mon@proxp01.service - Ceph cluster monitor daemon Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled...
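
    To see why the mon actually dies, the journal plus a foreground run give the full trace; a sketch mirroring the unit's stock invocation (mon ID proxp01 taken from the post):

      journalctl -xeu ceph-mon@proxp01.service
      # run the monitor in the foreground to capture the full error
      /usr/bin/ceph-mon -f --cluster ceph --id proxp01 --setuser ceph --setgroup ceph
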
  17. G

    Ceph OSD DB and WAL size

    Hello guys! I have a big question about the Ceph cluster and I need your help or your opinion. I installed a simple 3-node setup with Ceph. Each node has 2x 146 GB HW RAID 1 + 18x 600 GB 10k SAS without RAID. (In total we have 54 OSD devices, and we have to buy 3 SSDs for journals.) And my big...
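
    As a sizing anchor, Ceph's documentation has recommended a block.db of roughly 1-4% of the data device, so a 600 GB OSD lands at about 6-24 GB of DB; a sketch with hypothetical device names:

      # 600 GB data disk -> ~6-24 GB DB at the 1-4% rule of thumb
      pveceph osd create /dev/sdc --db_dev /dev/sdb --db_size 24
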
  18. C

    Cannot start any KVM / LXC

    Hi, here I describe 1 of the 2 major issues I'm currently facing in my 8-node Ceph cluster (2x MDS, 6x OSD). The issue is that I cannot start any KVM virtual machine or LXC container; the boot process just hangs after a few seconds. All these KVMs and LXCs have in common that their virtual...
  19. K

    [SOLVED] ceph OSD creation using a partition

    Hi all, I need help creating an OSD on a partition. On our server we are provided with 2 NVMe drives in RAID-1. This is the partition layout: root@XXXXXXXX:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme1n1 259:0 0 1.8T 0 disk ├─nvme1n1p1 259:1 0 511M 0 part...
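
    pveceph wants a whole, unused disk, but ceph-volume accepts a partition (or an existing LV) directly; a sketch assuming a spare partition nvme1n1p4 (hypothetical name):

      # create the OSD straight on the partition
      ceph-volume lvm create --data /dev/nvme1n1p4
      # or hand over a pre-built logical volume
      ceph-volume lvm create --data vgname/lvname
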
  20. C

    [SOLVED] How to define OSD weight in the Crush map

    Hi, after adding an OSD to Ceph it is advisable to create a corresponding entry in the Crush map, using a weight that depends on the disk size. Example: ceph osd crush set osd.<id> <weight> root=default host=<hostname> Question: how is the weight defined depending on disk size? Which algorithm can be...
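
    By convention the CRUSH weight is simply the device capacity in TiB (bytes / 2^40): a 6 TB disk gives 6e12 / 2^40 ≈ 5.46, and a 1 TB disk ≈ 0.91. A sketch with placeholder names (osd.12, node1):

      # weight = capacity in TiB: 6 TB disk -> 6e12 / 2^40 ≈ 5.46
      ceph osd crush set osd.12 5.46 root=default host=node1
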
