Hi,
I have a 5-node Proxmox/Ceph cluster. The OSDs are located on 3 of those 5 machines, 4 OSDs per node, with an Intel P3700 NVMe SSD as journal device.
My question: ceph-disk list should print the OSDs with some info about their journals, but the journal info for every OSD is empty. Is this OK?
root@proxceph-c-03:~# ceph-disk list
/dev/nvme0n1 :
/dev/nvme0n1p1 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p2 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p3 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p4 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/sda :
...
/dev/sdc :
/dev/sdc1 ceph data, active, cluster ceph, osd.8
/dev/sdd :
/dev/sdd1 ceph data, active, cluster ceph, osd.9
/dev/sde :
/dev/sde1 ceph data, active, cluster ceph, osd.10
/dev/sdf :
/dev/sdf1 ceph data, active, cluster ceph, osd.11
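For reference, the partition type GUID that ceph-disk prints for the NVMe partitions can be inspected with sgdisk (a sketch, assuming partition 1 of /dev/nvme0n1; adjust to your layout):

```shell
# Print GPT details for the first partition of the NVMe device,
# including the "Partition GUID code" (the type GUID ceph-disk looks at)
sgdisk -i 1 /dev/nvme0n1
```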
I thought it would look more like this:
/dev/sdc1 ceph data, active, cluster ceph, osd.8, journal /dev/nvme0n1p1
I use nmon for device access monitoring, and there are only a few writes on the journal device, while the OSDs are between 40% and 100% write load.
Something is wrong... I already tried to re-create the journal with
ceph-osd -i X --mkjournal ... no error shows up ...
Howto: https://www.sebastien-han.fr/blog/2012/08/17/ceph-storage-node-maintenance/
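The journal re-creation procedure from that howto, as I understand it, looks roughly like this (a sketch, assuming osd.8 with its journal on /dev/nvme0n1p1 and a systemd setup; adapt IDs, device paths, and service commands to your environment):

```shell
# Stop the OSD before touching its journal (assumed: osd.8)
systemctl stop ceph-osd@8

# Flush the old journal contents into the OSD data store
ceph-osd -i 8 --flush-journal

# Point the OSD at the NVMe partition (assumed: /dev/nvme0n1p1)
rm -f /var/lib/ceph/osd/ceph-8/journal
ln -s /dev/nvme0n1p1 /var/lib/ceph/osd/ceph-8/journal

# Initialize the new journal and bring the OSD back up
ceph-osd -i 8 --mkjournal
systemctl start ceph-osd@8

# Verify: the journal symlink should resolve to the NVMe partition
ls -l /var/lib/ceph/osd/ceph-8/journal
```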
thx