I noticed OSDs down on the PVE web page. I tried to start them, but they fail.
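In case it matters: is starting a single OSD directly from the shell the right way to see why it fails? I assume it would be something like this (12 is just an example OSD ID, not necessarily one of mine):
Code:
# systemctl start ceph-osd@12
# systemctl status ceph-osd@12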
On a node where Ceph is up:
Code:
# ceph -s
  cluster:
    id:     220b9a53-4556-48e3-a73c-28deff665e45
    health: HEALTH_WARN
            noout flag(s) set
            10 osds down
            1 host (10 osds) down
            Degraded data redundancy: 1165923/4919307 objects degraded (23.701%), 92 pgs degraded, 92 pgs undersized

  services:
    mon: 3 daemons, quorum pve11,pve4,pve2 (age 25m)
    mgr: pve2(active, since 27m), standbys: pve11, pve4
    osd: 41 osds: 31 up (since 21m), 41 in (since 2w)
         flags noout

  data:
    pools:   2 pools, 129 pgs
    objects: 1.64M objects, 6.1 TiB
    usage:   18 TiB used, 131 TiB / 149 TiB avail
    pgs:     1165923/4919307 objects degraded (23.701%)
             92 active+undersized+degraded
             37 active+clean

  io:
    client: 3.5 MiB/s rd, 2.5 MiB/s wr, 228 op/s rd, 262 op/s wr
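To narrow down which OSDs and which host are affected, I assume the right commands on a healthy node would be something like:
Code:
# ceph health detail
# ceph osd tree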
On the node where Ceph is down (I had to press Ctrl+C after a minute, it was stuck):
Code:
# ceph -s
^CCluster connection aborted
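Since the ceph client hangs on that node, I assume the local daemons and their logs have to be checked directly. My guess is something like the following (ceph-osd@12 is just an example unit name for one of the down OSDs):
Code:
# systemctl list-units 'ceph*'
# systemctl status ceph-osd.target
# journalctl -u ceph-osd@12 --since "2 hours ago"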
Then I tried this:
Code:
# pveceph install
This will install Ceph 19.2 Squid - continue (y/N)? y
update available package list
start installation
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ceph is already the newest version (19.2.3-pve1).
ceph-common is already the newest version (19.2.3-pve1).
ceph-fuse is already the newest version (19.2.3-pve1).
ceph-mds is already the newest version (19.2.3-pve1).
ceph-volume is already the newest version (19.2.3-pve1).
gdisk is already the newest version (1.0.10-2).
nvme-cli is already the newest version (2.13-2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
installed Ceph 19.2 Squid successfully!
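So the packages were already the newest version and this changed nothing. Would simply restarting the OSD services on the affected node be a reasonable next step, something like the line below, or is that risky while noout is set?
Code:
# systemctl restart ceph-osd.target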
Any advice on how to debug and fix this?