Hello,
I am running a 3-node Proxmox cluster on version 8.3.0 with Ceph 19.2.0 installed on each node. There are 3 monitors and 3 managers, and the health status is OK. Each node has a Samsung 990 Pro NVMe drive dedicated to a Ceph OSD. No matter what I try, and no matter what order I do things in, I always end up with the OSD as a ghost.
I click on Ceph > OSD > Create OSD. The system offers the unused Samsung drive, and I don't change anything else before clicking Create. The task runs fine with no errors, but afterwards I cannot see the created OSD on that page; only the overview page shows that I have osd.0, as a ghost.
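For reference, my understanding (an assumption on my part, not something the task log states explicitly) is that the GUI step is equivalent to running the following on the node, and these are the commands I would use to check whether the OSD actually registered:

pveceph osd create /dev/nvme0n1   # what I believe the GUI wraps
ceph osd tree                     # osd.0 should appear under its host here if it registered
ceph-volume lvm list              # shows whether an LVM-backed OSD actually exists on the disk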
What am I doing wrong?
PS: Before I started creating the OSDs, I erased the drive on each node with: ceph-volume lvm zap /dev/nvme0n1 --destroy
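If it helps anyone reproduce this, the drive state after the zap can be checked with standard tools (nothing Ceph-specific, just the sanity checks I would run):

lsblk /dev/nvme0n1    # should show no partitions or LVM volumes left on the disk
wipefs /dev/nvme0n1   # should list no remaining filesystem or RAID signatures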
Attached are screenshots:
create osd - how I create it
log file from the successful task
ceph_after_create - OSD configuration; the default view is blank, nothing there
gohst_osd - the ghost OSD visible on the dashboard