I have a 4-node Proxmox cluster with Ceph, 4 OSDs per node.
When I run 'cat /sys/kernel/debug/ceph/*/osdmap' on each node, I get the following on 3 of the 4 nodes:
epoch 7125 barrier 0 flags 0x588000
pool 1 'Ceph-CT-VM' type 1 size 3 min_size 2 pg_num 256 pg_num_mask 255 flags 0x1 lfor 0 read_tier -1 write_tier -1
pool 4 'test' type 1 size 3 min_size 2 pg_num 64 pg_num_mask 63 flags 0x1 lfor 0 read_tier -1 write_tier -1
osd0 (1)10.10.3.11:6810 100% (exists, up) 100%
osd1 (1)10.10.3.11:6805 100% (exists, up) 100%
osd2 (1)10.10.3.11:6811 100% (exists, up) 100%
osd3 (1)10.10.3.11:6801 100% (exists, up) 100%
osd4 (1)10.10.3.12:6806 100% (exists, up) 100%
osd5 (1)10.10.3.12:6805 100% (exists, up) 100%
osd6 (1)10.10.3.12:6807 100% (exists, up) 100%
osd7 (1)10.10.3.12:6803 100% (exists, up) 100%
osd8 (1)10.10.3.13:6801 100% (exists, up) 100%
osd9 (1)10.10.3.13:6809 100% (exists, up) 100%
osd10 (1)10.10.3.13:6813 100% (exists, up) 100%
osd11 (1)10.10.3.13:6805 100% (exists, up) 100%
osd12 (1)10.10.3.14:6813 100% (exists, up) 100%
osd13 (1)10.10.3.14:6803 100% (exists, up) 100%
osd14 (1)10.10.3.14:6801 100% (exists, up) 100%
osd15 (1)10.10.3.14:6808 100% (exists, up) 100%
On 1 of the 4 nodes I get:
cat: '/sys/kernel/debug/ceph/*/osdmap': No such file or directory
Shouldn't all 4 nodes show the same info? If so, any pointers on fixing the odd node out?
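My understanding (which may be wrong) is that the entries under /sys/kernel/debug/ceph/ are created per in-kernel Ceph client session (krbd mappings or kernel CephFS mounts), so a node with no active kernel-client session would have nothing there even though its OSDs are fine. A few checks I can run to compare the nodes, assuming debugfs is mounted and the standard rbd/mount tools are installed:

```shell
# If this directory is empty or missing, there may simply be no
# in-kernel Ceph client session on this node (krbd or kernel CephFS).
ls /sys/kernel/debug/ceph/ 2>/dev/null || echo "no kernel ceph client sessions"

# Check for krbd-mapped images on this node (empty output = none mapped).
rbd showmapped 2>/dev/null

# Check for kernel CephFS mounts.
mount -t ceph

# Confirm debugfs itself is mounted (it usually is by default).
mount -t debugfs
```

If the odd node shows no mapped RBD images and no CephFS mounts, the missing osdmap file would be expected rather than a fault.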
Package Versions:
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.5-pve1
ceph-fuse: 14.2.5-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: not correctly installed
ifupdown2: 1.2.8-1+pve4
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-15
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2