ceph osd

  1. Ceph OSD using wrong device identifier

    Hello! I have been messing around with Ceph to see if it will properly augment my NAS in a small subset of tasks, but I noticed that if a disk is removed and put back in, the Ceph cluster doesn't detect it until reboot. This is because it is defined using the /dev/sdX format instead of the...
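
    The usual fix is to reference disks by a persistent identifier rather than /dev/sdX. A minimal sketch for checking this, assuming the disk currently sits at /dev/sdb (hypothetical path):

      # Persistent names survive re-enumeration, unlike /dev/sdX:
      ls -l /dev/disk/by-id/ | grep sdb
      # OSDs created via ceph-volume are addressed by LVM/UUID, not /dev/sdX:
      ceph-volume lvm list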
  2. Recover Data from Ceph OSDs

    Is there a way to recover data from OSDs if the monitors and managers don't work anymore but the OSDs still start up? Just for clarification: I don't want to rebuild the cluster; I just want to copy data from the OSDs to another HDD.
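
    If the OSDs still start, their placement groups can usually be exported offline with ceph-objectstore-tool; a hedged sketch, with the OSD data path and PG id as placeholders:

      # List the PGs held by the (stopped) OSD:
      ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs
      # Export one PG to a file on another disk:
      ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
          --pgid 2.0 --op export --file /mnt/backup/pg2.0.export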
  3. PVE GUI doesn't recognize kernel bcache device?

    Hi, is there a bug in Proxmox that prevents it from correctly seeing bcache devices as regular storage devices? I'm using Proxmox PVE 6.4-14, Linux 5.4.174-2-pve. bcache is a Linux kernel feature that allows you to use a small, fast disk (flash, SSD, NVMe, Optane, etc.) as a "cache" for a...
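
    For context, a minimal bcache setup sketch (device paths are hypothetical; bcache-tools assumed installed):

      # /dev/sdb as backing device, /dev/nvme0n1p1 as cache device:
      make-bcache -C /dev/nvme0n1p1 -B /dev/sdb
      # The composite device then shows up as /dev/bcache0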
  4. [SOLVED] Ceph Overlapping Roots

    Hi, I noticed an error inside my ceph-mgr log:
      2022-02-02T15:51:21.405+0100 7f13aaf3e700 0 [pg_autoscaler ERROR root] pool 13 has overlapping roots: {-2, -1}
      2022-02-02T15:52:21.413+0100 7f13aaf3e700 0 [pg_autoscaler ERROR root] pool 13 has overlapping roots: {-2, -1}...
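
    Overlapping roots typically appear when some pools follow a device-class CRUSH rule while others still use the default root. A hedged sketch of moving a pool onto a class-specific rule (the rule and pool names are placeholders):

      # Create a replicated rule restricted to one device class:
      ceph osd crush rule create-replicated ssd-only default host ssd
      # Assign the affected pool to it:
      ceph osd pool set mypool crush_rule ssd-only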
  5. How to re-add a Ceph disk

    Hi guys! I experimented with Ceph's behavior when one of the disks fails. The disk was "hot" removed from the server. After a couple of days, the disk was inserted back. Its status was down/out. I clicked the IN button, but the down status didn't change. The server has been rebooted, but...
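
    Marking an OSD "in" doesn't start its daemon; "up" requires the ceph-osd process to be running. A minimal sketch, with osd.5 as a placeholder id:

      systemctl start ceph-osd@5
      ceph osd in osd.5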
  6. Ceph trying to reset class

    I have noticed these errors on my cluster (on every node, for all my OSDs):
      Dec 20 22:00:28 pve1 ceph-osd[3991]: 2021-12-20T22:00:28.350+0100 7fbd20fa4f00 -1 osd.2 1257 mon_cmd_maybe_osd_create fail: 'osd.2 has already bound to class 'nvme', can not reset class to 'ssd'; use 'ceph osd crush...
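
    The truncated log message points at the device-class workflow: the old class has to be removed before a new one can be set. A minimal sketch for osd.2, as named in the log:

      ceph osd crush rm-device-class osd.2
      ceph osd crush set-device-class nvme osd.2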
  7. [SOLVED] OSD Crash after upgrade from 5.4 to 6.3

    Hi all, we have recently upgraded our cluster from version 5.4 to version 6.3. Our cluster is composed of 6 Ceph nodes and 5 hypervisors. All servers have the same package versions. Here are the details:
      pveversion -v
      proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve)
      pve-manager: 6.3-4...
  8. OSD disk problem? Hmm...

    Hi, I have 3 servers (2x DELL R540 & 1x DELL R510) in a cluster with Ceph. Everything works fine, but... I replaced one old machine with a new one and syslog shows errors such as:
      Nov 29 05:35:22 pve05 ceph-osd[3957]: 2019-11-29 05:35:22.754 7fc5324f2700 -1 bdev(0x55a4dd670000...
  9. Unable to start OSD - crashes while loading pgs

    Hi, I did a few things to our (Proxmox) Ceph cluster: added an additional node with three more HDD OSDs (yielding a 3-node cluster with 3 HDDs each), and increased pg_num and pgp_num for one of the pools (from 128 to 256, with size 3 and min_size 1). I shouldn't have fiddled with pg_num and...
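
    For reference, the pg_num/pgp_num change described above is done like this (the pool name is a placeholder):

      ceph osd pool set mypool pg_num 256
      ceph osd pool set mypool pgp_num 256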
  10. Adding an OSD requires root

    We've split users into groups and allocated different roles to each group (essentially admins and non-admin users). Non-admin users are limited to performing VM functions, while users in the admin group have the built-in role "Administrator" applied. This should give them the ability to perform...
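
    A hedged sketch of how such an assignment looks with pveum (the group name is a placeholder):

      # Grant the built-in Administrator role to a group at the root path:
      pveum aclmod / -group admins -role Administrator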
  11. [SOLVED] Ceph - 3-node cluster with seven SSDs each

    Hello dear Proxmox friends, I have a question. After I removed two OSDs (the ceph-8 and ceph-13 OSDs) and then added them back (the ceph-21 & ceph-22 OSDs), the Ceph GUI now shows Total: 23 in the status, as well as two OSDs being "down". From which OSD table does it pull...
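
    A stale total usually means the removed IDs were never fully deleted from the CRUSH map and auth database; a hedged cleanup sketch for one leftover id (osd.8, as mentioned above):

      ceph osd crush remove osd.8
      ceph auth del osd.8
      ceph osd rm osd.8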
  12. Issue removing Ceph OSD

    Hello, I had to temporarily remove a drive (OSD) from Ceph, but I think I did it wrong.
    - I set noout
    - I outed the OSD, but I saw data rebalancing start all the same (I thought setting noout avoided that)
    - I stopped the osd.8 service
    Then I extracted the hot-swap bay. When I inserted the disk back, the device...
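
    Note that noout only prevents down OSDs from being marked out automatically; manually marking an OSD out still triggers rebalancing. A minimal sketch of the intended maintenance flow, using osd.8 from the post:

      ceph osd set noout
      systemctl stop ceph-osd@8   # leave the OSD "in", just "down"
      # ...swap the hardware...
      systemctl start ceph-osd@8
      ceph osd unset noout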
  13. Ceph OSD reuse after reinstall

    Hi all, I have a Proxmox node that was a member of a Ceph cluster and had to be reinstalled from scratch. I have reinstalled the node (same Proxmox version as the other nodes) and added it back to the Proxmox cluster. Everything is fine as far as Proxmox is concerned. The node was hosting 7...
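
    Assuming the OSD disks are intact and the cluster keyrings are back in place, ceph-volume can usually rediscover them; a hedged sketch:

      # Scan the surviving LVM volumes and start their OSDs:
      ceph-volume lvm activate --all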
  14. [Ceph] unable to run OSDs

    My apologies in advance for the length of this post! During a new hardware install, our Ceph node/server is a Dell PowerEdge R7415:
      1x AMD EPYC 7251 8-Core Processor
      128GB RAM
      HBA330 disk controller (LSI/Broadcom SAS3008, running FW 15.17.09.06 in IT mode)
      4x Toshiba THNSF8200CCS 200GB...
  15. [SOLVED] New OSD doesn't come up

    I added a new OSD to a new node (pmx 4.4 cluster), but for some reason the OSD doesn't complete its startup. Please check the log below:
      2018-08-16 06:57:05.321371 7f5ffc9cc880 0 ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af), process ceph-osd, pid 1679
      2018-08-16...
  16. [SOLVED] Ceph OSD change DB Disk

    Hi, I've had issues when I put in new journal disks and wanted to move existing OSDs from the old journal disk to the new ones. The issue was: I set the OSD to Out, then stopped the OSD and destroyed it. Recreating the OSD with the new DB device makes the OSD never show up! This is a...
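
    On current Proxmox versions the destroy-and-recreate path would look roughly like this (the OSD id and device paths are placeholders):

      pveceph osd destroy 8 --cleanup
      pveceph osd create /dev/sdc --db_dev /dev/nvme0n1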
  17. Proxmox v5 and Ceph + ZFS

    Hi, I have 3 nodes with the latest version of Proxmox 5.x. These 3 nodes have 3 local hard drives each: 1 hard drive used for boot and the Proxmox OS, and 2x Samsung SSD SM863 used for a ZFS pool in RAID1. Now I'm trying to install Ceph, and: 1. during the installation I got this error: root@server203:~#...
  18. Failed to create OSD

    Hey guys, I've got an issue here: why can't I create a Ceph OSD? Did I miss something? When I try to create a Ceph OSD, the popup window's Disk column shows "NO DISK UNUSED", while the Journal Disk column shows "use OSD disk". Any suggestions?
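
    "NO DISK UNUSED" usually means every disk still carries a partition table or filesystem signature. A hedged sketch of clearing one disk (the path is a placeholder; this destroys its data):

      # WARNING: wipes /dev/sdb completely
      ceph-volume lvm zap /dev/sdb --destroy
      # or, more generically:
      wipefs -a /dev/sdb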
  19. 4-Node Ceph Cluster

    Hello, I'm testing a 4-node Ceph cluster: each node has two SATA HDDs and two SSDs for journal.
      ceph -w
        cluster 1126f843-c89b-4a28-84cd-e89515b10ea2
        health HEALTH_OK
        monmap e4: 4 mons at...
  20. pveceph createosd makes "partitions" instead of "osd.x"

    Just setting up a new 8-node cluster. Each node offers two OSDs. What I am experiencing is that I seem to be capped at 14 OSDs for the whole cluster. I was curious whether this is just a change to Ceph.pm, because I found this line: pg_bits => { description =>...
