We are having an inconsistent experience, with one of the monitors sometimes appearing to misbehave. Ceph health shows a warning with slow operations:
Code:
[admin@kvm6b ~]# ceph -s
  cluster:
    id:     2a554db9-5d56-4d6a-a1e2-e4f98ef1052f
    health: HEALTH_WARN
            17 slow ops, oldest one blocked for 337272 sec, mon.kvm6a has slow ops

  services:
    mon: 3 daemons, quorum kvm6a,kvm6b,kvm6c (age 3d)
    mgr: kvm6a(active, since 4d), standbys: kvm6b, kvm6c
    mds: cephfs:1 {0=kvm6c=up:active} 2 up:standby
    osd: 24 osds: 24 up (since 3d), 24 in

  task status:
    scrub status:
        mds.kvm6c: idle

  data:
    pools:   6 pools, 225 pgs
    objects: 602.79k objects, 1.9 TiB
    usage:   4.9 TiB used, 30 TiB / 35 TiB avail

    pgs:     225 active+clean

  io:
    client: 15 KiB/s rd, 21 MiB/s wr, 33 op/s rd, 561 op/s wr
    cache:  14 MiB/s flush
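Before restarting anything, it may be worth checking which operations are actually blocked. A minimal sketch, assuming the monitor's admin socket is available at its default location on kvm6a:
Code:
# name the daemon holding the slow ops
ceph health detail
# dump the monitor's in-flight/blocked operations via its admin socket (run on kvm6a)
ceph daemon mon.kvm6a ops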
We then log in to kvm6a and run:
Code:
systemctl restart ceph-mon@kvm6a
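After the restart, one can confirm the monitor has rejoined quorum before trusting the health output; a quick check (not from the original post):
Code:
# show the monitor map epoch and current quorum membership
ceph mon stat
# more detail, including each monitor's rank
ceph quorum_status --format json-pretty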
Thereafter everything is immediately healthy:
Code:
[admin@kvm6b ~]# ceph -s
  cluster:
    id:     2a554db9-5d56-4d6a-a1e2-e4f98ef1052f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum kvm6a,kvm6b,kvm6c (age 12m)
    mgr: kvm6a(active, since 13m), standbys: kvm6b, kvm6c
    mds: cephfs:1 {0=kvm6c=up:active} 2 up:standby
    osd: 24 osds: 24 up (since 4d), 24 in

  task status:
    scrub status:
        mds.kvm6c: idle

  data:
    pools:   6 pools, 225 pgs
    objects: 605.09k objects, 1.9 TiB
    usage:   4.9 TiB used, 30 TiB / 35 TiB avail
    pgs:     225 active+clean

  io:
    client: 9.3 KiB/s rd, 6.7 MiB/s wr, 30 op/s rd, 75 op/s wr
    cache:  4.7 MiB/s flush, 0 op/s promote
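To verify the slow ops do not immediately return, the cluster log can be streamed for a while after the restart; a simple approach:
Code:
# follow cluster log messages live (Ctrl-C to stop)
ceph -w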
All updates have been installed on all servers:
Code:
[admin@kvm6a ~]# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.65-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
ceph: 15.2.6-pve1
ceph-fuse: 15.2.6-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-6
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
PS: Any news on Ceph 15.2.7?