Adding a new OSD not possible; load > 60

SveNoh

Member
Nov 15, 2019
Munich, Germany
Hello,

Maybe someone here can help me. After an OSD failed, I wanted to add a new OSD with pveceph createosd /dev/nvme6n1, but nothing happens. The node has a load of >60.
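
To see whether the createosd call is stuck somewhere, checks along these lines should help (I am not sure whether pveceph uses ceph-disk or ceph-volume underneath, so the grep covers both; the device name is just the one from above):

Code:
# is the device visible and still unused?
lsblk /dev/nvme6n1

# is anything from the createosd attempt still running?
ps aux | grep -E 'pveceph|ceph-volume|ceph-disk' | grep -v grep

# any OSD/bootstrap related errors in the journal?
journalctl -xe | grep -iE 'nvme6n1|osd' | tail -n 50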

Code:
~# top
top - 12:42:10 up 517 days, 14:21,  2 users,  load average: 69.76, 69.67, 69.71
Tasks: 624 total,   3 running, 335 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.5 us,  0.6 sy,  0.0 ni, 93.1 id,  4.8 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13190825+total, 35690452 free, 71265760 used, 24952048 buff/cache
KiB Swap:  8388604 total,  8387836 free,      768 used. 88346288 avail Mem

~# ceph health
HEALTH_ERR 5 backfillfull osd(s); 1 pool(s) backfillfull; 468094/1546710 objects misplaced (30.264%); Degraded data redundancy: 983/1546710 objects degraded (0.064%), 1 pg degraded, 1 pg undersized; Degraded data redundancy (low space): 236 pgs backfill_toofull

Code:
~# ceph health detail
HEALTH_ERR 5 backfillfull osd(s); 1 pool(s) backfillfull; 468094/1546710 objects misplaced (30.264%); Degraded data redundancy: 983/1546710 objects degraded (0.064%), 1 pg degraded, 1 pg undersized; Degraded data redundancy (low space): 236 pgs backfill_toofull
OSD_BACKFILLFULL 5 backfillfull osd(s)
    osd.0 is backfill full
    osd.1 is backfill full
    osd.2 is backfill full
    osd.18 is backfill full
    osd.19 is backfill full
POOL_BACKFILLFULL 1 pool(s) backfillfull
    pool 'haifischbecken' is backfillfull
OBJECT_MISPLACED 468094/1546710 objects misplaced (30.264%)
PG_DEGRADED Degraded data redundancy: 983/1546710 objects degraded (0.064%), 1 pg degraded, 1 pg undersized
    pg 1.88 is stuck undersized for 4830.554121, current state active+undersized+degraded+remapped+backfill_toofull, last acting [15,16]
PG_DEGRADED_FULL Degraded data redundancy (low space): 236 pgs backfill_toofull
    pg 1.bb is active+remapped+backfill_toofull, acting [6,14,0]
    pg 1.be is active+remapped+backfill_toofull, acting [15,0,16]
    pg 1.c0 is active+remapped+backfill_toofull, acting [9,1,17]
    pg 1.c1 is active+remapped+backfill_toofull, acting [0,11,5]
    pg 1.c2 is active+remapped+backfill_toofull, acting [8,5,19]
    pg 1.c5 is active+remapped+backfill_toofull, acting [8,5,1]
    pg 1.198 is active+remapped+backfill_toofull, acting [2,15,16]
    pg 1.19a is active+remapped+backfill_toofull, acting [5,10,18]
    pg 1.19b is active+remapped+backfill_toofull, acting [1,8,16]
    pg 1.19c is active+remapped+backfill_toofull, acting [0,9,16]
    pg 1.19e is active+remapped+backfill_toofull, acting [7,11,18]
    pg 1.1a3 is active+remapped+backfill_toofull, acting [4,0,15]
    pg 1.1a5 is active+remapped+backfill_toofull, acting [10,7,18]
    pg 1.1a8 is active+remapped+backfill_toofull, acting [14,17,6]
    pg 1.1a9 is active+remapped+backfill_toofull, acting [14,19,4]
    pg 1.1ac is active+remapped+backfill_toofull, acting [15,16,18]
    pg 1.1ad is active+remapped+backfill_toofull, acting [1,8,7]
    pg 1.1ae is active+remapped+backfill_toofull, acting [17,8,1]
    pg 1.1b0 is active+remapped+backfill_toofull, acting [10,2,17]
    pg 1.1b3 is active+remapped+backfill_toofull, acting [7,9,18]
    pg 1.1b5 is active+remapped+backfill_toofull, acting [14,7,19]
    pg 1.1b6 is active+remapped+backfill_toofull, acting [20,22,19]
    pg 1.1b9 is active+remapped+backfill_toofull, acting [10,18,6]
    pg 1.1bc is active+remapped+backfill_toofull, acting [1,9,4]
    pg 1.1bd is active+remapped+backfill_toofull, acting [0,11,16]
    pg 1.1be is active+remapped+backfill_toofull, acting [2,11,16]
    pg 1.1c0 is active+remapped+backfill_toofull, acting [4,1,10]
    pg 1.1c3 is active+remapped+backfill_toofull, acting [17,14,1]
    pg 1.1c8 is active+remapped+backfill_toofull, acting [2,9,17]
    pg 1.1c9 is active+remapped+backfill_toofull, acting [22,20,1]
    pg 1.1ce is active+remapped+backfill_toofull, acting [23,21,14]
    pg 1.1d2 is active+remapped+backfill_toofull, acting [0,15,5]
    pg 1.1d3 is active+remapped+backfill_toofull, acting [15,17,19]
    pg 1.1d6 is active+remapped+backfill_toofull, acting [5,22,0]
    pg 1.1da is active+remapped+backfill_toofull, acting [21,11,1]
    pg 1.1dd is active+remapped+backfill_toofull, acting [11,7,19]
    pg 1.1e0 is active+remapped+backfill_toofull, acting [4,2,14]
    pg 1.1e1 is active+remapped+backfill_toofull, acting [11,5,0]
    pg 1.1e4 is active+remapped+backfill_toofull, acting [14,0,6]
    pg 1.1e6 is active+remapped+backfill_toofull, acting [17,18,10]
    pg 1.1e7 is active+remapped+backfill_toofull, acting [6,11,19]
    pg 1.1e9 is active+remapped+backfill_toofull, acting [7,10,18]
    pg 1.1eb is active+remapped+backfill_toofull, acting [0,16,15]
    pg 1.1ee is active+remapped+backfill_toofull, acting [6,14,18]
    pg 1.1ef is active+remapped+backfill_toofull, acting [11,7,18]
    pg 1.1f2 is active+remapped+backfill_toofull, acting [22,20,14]
    pg 1.1f3 is active+remapped+backfill_toofull, acting [0,6,10]
    pg 1.1f5 is active+remapped+backfill_toofull, acting [23,20,1]
    pg 1.1fa is active+remapped+backfill_toofull, acting [21,22,1]
    pg 1.1fb is active+remapped+backfill_toofull, acting [20,15,0]
    pg 1.1fc is active+remapped+backfill_toofull, acting [8,7,2]

Code:
~# ceph osd tree
ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       13.18875 root default
-3        2.45572     host pve1
0  nvme  0.40929         osd.0      up  0.79999 1.00000
1  nvme  0.40929         osd.1      up  0.50000 1.00000
2  nvme  0.40929         osd.2      up  0.79999 1.00000
3  nvme  0.40929         osd.3    down        0 1.00000
18  nvme  0.40929         osd.18     up  0.79999 1.00000
19  nvme  0.40929         osd.19     up  0.79999 1.00000
-5        5.36652     host pve2
4  nvme  0.40929         osd.4      up  1.00000 1.00000
5  nvme  0.40929         osd.5      up  0.90002 1.00000
6  nvme  0.40929         osd.6      up  1.00000 1.00000
7  nvme  0.40929         osd.7      up  1.00000 1.00000
16  nvme  0.40929         osd.16     up  1.00000 1.00000
17  nvme  0.40929         osd.17     up  1.00000 1.00000
20  nvme  1.45540         osd.20     up  1.00000 1.00000
21  nvme  1.45540         osd.21     up  1.00000 1.00000
-7        5.36652     host pve3
8  nvme  0.40929         osd.8      up  1.00000 1.00000
9  nvme  0.40929         osd.9      up  1.00000 1.00000
10  nvme  0.40929         osd.10     up  1.00000 1.00000
11  nvme  0.40929         osd.11     up  1.00000 1.00000
14  nvme  0.40929         osd.14     up  1.00000 1.00000
15  nvme  0.40929         osd.15     up  1.00000 1.00000
22  nvme  1.45540         osd.22     up  1.00000 1.00000
23  nvme  1.45540         osd.23     up  1.00000 1.00000

Code:
~# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   USE   AVAIL  %USE  VAR  PGS
0  nvme 0.40929  0.79999   419G  381G 38888M 90.94 1.99  98
1  nvme 0.40929  0.50000   419G  378G 41176M 90.40 1.98  97
2  nvme 0.40929  0.79999   419G  380G 39847M 90.71 1.98  97
3  nvme 0.40929        0      0     0      0     0    0   0
18  nvme 0.40929  0.79999   419G  378G 42000M 90.21 1.97  97
19  nvme 0.40929  0.79999   419G  378G 41715M 90.28 1.97  97
4  nvme 0.40929  1.00000   419G  172G   246G 41.21 0.90  44
5  nvme 0.40929  0.90002   419G  187G   231G 44.75 0.98  48
6  nvme 0.40929  1.00000   419G  218G   200G 52.11 1.14  56
7  nvme 0.40929  1.00000   419G  206G   212G 49.33 1.08  53
16  nvme 0.40929  1.00000   419G  234G   184G 55.96 1.22  60
17  nvme 0.40929  1.00000   419G  223G   195G 53.36 1.17  57
20  nvme 1.45540  1.00000  1490G  443G  1046G 29.79 0.65 114
21  nvme 1.45540  1.00000  1490G  330G  1160G 22.15 0.48  85
8  nvme 0.40929  1.00000   419G  210G   208G 50.17 1.10  54
9  nvme 0.40929  1.00000   419G  222G   196G 53.07 1.16  57
10  nvme 0.40929  1.00000   419G  206G   212G 49.33 1.08  53
11  nvme 0.40929  1.00000   419G  234G   184G 55.88 1.22  60
14  nvme 0.40929  1.00000   419G  233G   185G 55.69 1.22  60
15  nvme 0.40929  1.00000   419G  203G   215G 48.52 1.06  52
22  nvme 1.45540  1.00000  1490G  447G  1042G 30.05 0.66 115
23  nvme 1.45540  1.00000  1490G  315G  1174G 21.19 0.46  81

EDIT:

Package versions:
Code:
proxmox-ve: 5.4-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-8
pve-kernel-4.13: 5.2-2
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.4-1-pve: 4.13.4-26
ceph: 12.2.12-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-54
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-6
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-40
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
3 nvme 0.40929 osd.3 down 0 1.00000
The OSD is no longer running; the recovery will certainly get smaller once the OSD is back in service. Since the REWEIGHT values differ, this can lead to more 'misplaced objects'.
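
The per-OSD REWEIGHT values can be checked and set back roughly like this (OSD ID and weight are only examples; the already backfillfull OSDs 0/1/2/18/19 should only be raised again once osd.3 is back and there is free space):

Code:
# utilization and REWEIGHT per OSD, grouped by host
ceph osd df tree

# set the REWEIGHT of a single OSD back to 1.0 (example: osd.1)
ceph osd reweight 1 1.0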

top - 12:42:10 up 517 days, 14:21, 2 users, load average: 69.76, 69.67, 69.71
There should be more to see in top. The OSD processes are probably running at full throttle.
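
A batch-mode top sorted by CPU usage should show them right at the top, for example (assuming the procps top shipped with PVE, which supports -o):

Code:
# busiest processes first, non-interactive
top -b -n 1 -o %CPU | head -n 20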
 
The OSD is no longer running; the recovery will certainly get smaller once the OSD is back in service. Since the REWEIGHT values differ, this can lead to more 'misplaced objects'.

The only question is: how do I get OSD 3 replaced now? Taking it back in/up does not work. Is it enough to simply install a new disk at this point? Can/should I prepare it on another node beforehand?

There should be more to see in top. The OSD processes are probably running at full throttle.

That assumption is correct :)

%CPU between 1% and 2.3%, 1.7% on average (across 40 cores)

Code:
top - 11:52:04 up 522 days, 13:31,  3 users,  load average: 69.93, 69.75, 69.81
Tasks: 626 total,   2 running, 337 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.7 us,  0.6 sy,  0.0 ni, 92.7 id,  5.0 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem : 13190825+total, 35377880 free, 71537328 used, 24993052 buff/cache
KiB Swap:  8388604 total,  8387836 free,      768 used. 88453360 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
...
2364727 root      20   0  709376  37596  21400 R   4.6  0.0   0:00.14 ceph
...
   2394 ceph      20   0  806792 410776  22492 S   1.7  0.3   9866:33 ceph-mon
   3809 ceph      20   0 6136672 5.090g  29372 S   1.7  4.0   5910:07 ceph-osd
3241717 ceph      20   0 6310880 5.227g  29824 S   1.7  4.2   5974:31 ceph-osd
2018578 ceph      20   0 6039212 4.997g  30176 S   1.0  4.0   6085:49 ceph-osd
   2946 ceph      20   0 6317268 5.263g  29672 S   0.7  4.2   7546:50 ceph-osd
   3257 ceph      20   0 6088076 5.030g  29404 S   0.7  4.0   6360:40 ceph-osd
 
%CPU between 1% and 2.3%, 1.7% on average (across 40 cores)
That rather looks like something is hanging on the CPU, unless a few processes do show up that generate 100% CPU load. Best to reboot the node.
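
One way to check that: processes in uninterruptible sleep (state D) count towards the load average without using any CPU, which would fit a load of ~70 with >90% idle. For example:

Code:
# list processes stuck in uninterruptible sleep (D state);
# if ceph-osd or kernel threads pile up here, they are blocked on I/O, not busy on the CPU
ps -eo state,pid,comm,wchan:32 | awk '$1 ~ /^D/'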

The only question is: how do I get OSD 3 replaced now?
Since the OSD is (I assume) on pve1 (load >60), the reboot might be enough to bring the OSD back to life.
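
A sketch of how that could look, with the usual noout flag around the reboot and, only if osd.3 really does not come back, a destroy-and-recreate on the replacement disk (OSD id 3 and /dev/nvme6n1 taken from this thread; commands as in PVE 5.x):

Code:
# avoid unnecessary rebalancing while the node is rebooting
ceph osd set noout
reboot

# after the node is back up
systemctl status ceph-osd@3
ceph osd unset noout

# only if osd.3 stays down: take it out, remove it and create a new OSD on the new disk
ceph osd out 3
pveceph destroyosd 3
pveceph createosd /dev/nvme6n1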
 
