2 OSDs showing down

Aug 16, 2023
Dear All,
We are using Proxmox 8 EE. When checking the Ceph status we see 2 OSDs down. Is this normal, or could there be a serious issue with the hard disks?
Do we need to assume there is a problem with a disk? A screenshot and the command-line status are given below.
Guidance requested.


root@pve-5:~# pveceph status
  cluster:
    id:     6346d7b8-713e-4a84-be38-7fd483f49da0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum pve-1,pve-2,pve-3 (age 11d)
    mgr: pve-1(active, since 11d)
    osd: 56 osds: 54 up (since 11m), 54 in (since 72m)

  data:
    pools:   2 pools, 513 pgs
    objects: 1.49M objects, 5.5 TiB
    usage:   16 TiB used, 62 TiB / 79 TiB avail
    pgs:     513 active+clean

  io:
    client: 0 B/s rd, 4.9 MiB/s wr, 0 op/s rd, 149 op/s wr

Thanks
Joseph John
 

Attachments

  • 2-CFD-Down.png (46.9 KB)
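For anyone diagnosing the same symptom, the following commands can help narrow down whether a disk is actually failing; a minimal sketch using standard Ceph and smartmontools commands (/dev/sdX is a placeholder for the device backing the down OSD):

# list only the OSDs the cluster currently sees as down
ceph osd tree down

# show health warnings in detail, including which OSDs are affected
ceph health detail

# query SMART data on the physical disk behind a suspect OSD (placeholder device)
smartctl -a /dev/sdX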
Hello, could you try starting the OSDs? Should an error occur, please post the output of journalctl -u ceph-osd@OSD_NUMBER.service. To start an OSD from the web UI, go to Datacenter->{node}->Ceph->OSD (the start button is in the top right).

Note that the cluster is healthy and managed to recover, meaning that no data was lost :).
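For reference, the same steps can also be done from the shell on the node hosting the OSD; a minimal sketch, assuming the down OSD has ID 12 (a placeholder, substitute the ID shown as down):

# start the stopped OSD daemon (12 is a placeholder OSD ID)
systemctl start ceph-osd@12.service

# if it fails to start, inspect its recent journal entries
journalctl -u ceph-osd@12.service --since "1 hour ago"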
 
Just to update: from the GUI, I clicked the OSD and clicked Start, and it is working now, all the OSDs are up.

root@pve-3:~# pveceph status
  cluster:
    id:     6346d7b8-713e-4a84-be38-7fd483f49da0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum pve-1,pve-2,pve-3 (age 11d)
    mgr: pve-1(active, since 11d)
    osd: 64 osds: 64 up (since 2m), 64 in (since 2m); 33 remapped pgs

  data:
    pools:   2 pools, 513 pgs
    objects: 1.49M objects, 5.5 TiB
    usage:   16 TiB used, 77 TiB / 93 TiB avail
    pgs:     80797/4461456 objects misplaced (1.811%)
             479 active+clean
             20  active+remapped+backfilling
             14  active+remapped+backfill_wait

  io:
    client:   0 B/s rd, 8.2 MiB/s wr, 0 op/s rd, 100 op/s wr
    recovery: 2.0 GiB/s, 529 objects/s

root@pve-3:~#
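The 33 remapped PGs and the backfilling states above mean Ceph is still moving data back onto the restarted OSDs, which is expected. A minimal sketch for watching the recovery finish, using standard Ceph commands (nothing Proxmox-specific assumed):

# refresh the cluster status every 5 seconds until all PGs are active+clean
watch -n 5 ceph status

# or stream cluster events, including recovery progress, continuously
ceph -w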
Thanks
Joseph John
 
Thanks a lot!
 
