VMs stop for no apparent reason

tommytom

Active Member
Aug 22, 2015
I have a cluster with 6 servers.
Proxmox: latest version - with subscription -
Ceph Hammer is running. 66 OSDs.
About 60 VMs, all running Debian.

For about 2-3 weeks now, VMs have been stopping at random. They are then simply "off", as if they had been "shut down". Starting the VMs fixes the problem, but unfortunately that is not particularly reliable. A website that runs spread across many VMs therefore keeps going partially or even completely down because of this. Does anyone have an idea what could cause it? I did not have this problem before.
In addition, OSDs also go down every now and then; they can be restarted and then run again. Different ones each time... I am somewhat at a loss and would be grateful for any tips.
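When guests vanish without any shutdown being logged inside them, the host kernel's OOM killer is a common suspect. A minimal check on each node would be to grep the kernel log; the path is the Debian/Proxmox default, and the sample line below is taken from later in this thread just to demonstrate the pattern:

```shell
# On a live Proxmox 4.x host you would run:
#   grep -i "out of memory" /var/log/syslog
# Demonstrated here on a sample log line:
log='Jan 26 08:04:52 srv4332 kernel: Out of memory: Kill process 47918 (kvm) score 399 or sacrifice child'
printf '%s\n' "$log" | grep -i "out of memory"
```

If the match names a `kvm` process, the "stopped" VM was killed by the host, not shut down from inside the guest.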
 

tommytom
pveversion -v:

proxmox-ve: 4.4-78 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.35-2-pve: 4.4.35-78
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-85
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-71
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-10
pve-container: 1.0-90
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.6-5
lxcfs: 2.0.5-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
ceph: 0.94.9-1~bpo80+1


-----

and in the syslog I then found a problem... "out of memory" - hmm, the servers all have 128GB

On one of the servers, for example, a database VM with 80GB RAM is running,
plus 2 smaller ones with 4GB each.
I suspect Ceph simply needs too much memory. Sometimes OSDs also get kicked out.
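The arithmetic above can be sketched. The per-OSD figure is an assumption, not a measurement: Hammer-era OSDs commonly used on the order of 1-2 GB RSS each, noticeably more during recovery:

```python
# Rough memory budget for one 128 GB host (all figures in GB).
# ram_per_osd is an assumed worst-case ballpark for Ceph Hammer, not measured.
host_ram = 128
vm_ram = 80 + 4 + 4            # database VM plus two small VMs
osds_per_host = 66 // 6        # 66 OSDs spread over 6 servers -> 11 each
ram_per_osd = 2                # assumed per-OSD RSS under recovery load
ceph_ram = osds_per_host * ram_per_osd

# KVM adds per-VM overhead on top of configured guest RAM, and the kernel
# wants page cache as well; the hard allocations alone leave only:
headroom = host_ram - vm_ram - ceph_ram
print(headroom)  # 18 GB for host, KVM overhead and page cache
```

With an 80 GB guest plus eleven OSDs per node, a recovery or a cache spike can plausibly push the host over the edge, which matches the OOM messages.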
 

tommytom
Excerpt from the syslog:

Jan 26 08:04:52 srv4332 kernel: [1576280.778280] rados invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0
Jan 26 08:04:52 srv4332 kernel: [1576280.778283] rados cpuset=/ mems_allowed=0-1
Jan 26 08:04:52 srv4332 kernel: [1576280.778289] CPU: 50 PID: 54943 Comm: rados Tainted: P O 4.4.35-1-pve #1
Jan 26 08:04:52 srv4332 kernel: [1576280.778290] Hardware name: HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 09/13/2016

Jan 26 08:04:52 srv4332 kernel: [1576280.778348] Mem-Info:
Jan 26 08:04:52 srv4332 kernel: [1576280.778355] active_anon:13955859 inactive_anon:1096039 isolated_anon:0
Jan 26 08:04:52 srv4332 kernel: [1576280.778355] active_file:5940581 inactive_file:9165219 isolated_file:17
Jan 26 08:04:52 srv4332 kernel: [1576280.778355] unevictable:4764 dirty:269 writeback:0 unstable:0
Jan 26 08:04:52 srv4332 kernel: [1576280.778355] slab_reclaimable:725865 slab_unreclaimable:590025
Jan 26 08:04:52 srv4332 kernel: [1576280.778355] mapped:170888 shmem:17681 pagetables:54531 bounce:0
Jan 26 08:04:52 srv4332 kernel: [1576280.778355] free:108089 free_pcp:60 free_cma:0
Jan 26 08:04:52 srv4332 kernel: [1576280.778358] Node 0 DMA free:15012kB min:4kB low:4kB high:4kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15996kB managed:15904kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:816kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Jan 26 08:04:52 srv4332 kernel: [1576280.778362] lowmem_reserve[]: 0 1787 64247 64247 64247
Jan 26 08:04:52 srv4332 kernel: [1576280.778364] Node 0 DMA32 free:257532kB min:636kB low:792kB high:952kB active_anon:406088kB inactive_anon:406308kB active_file:584kB inactive_file:13740kB unevictable:988kB isolated(anon):0kB isolated(file):0kB present:1940480kB managed:1859592kB mlocked:988kB dirty:4kB writeback:0kB mapped:2560kB shmem:2024kB slab_reclaimable:472756kB slab_unreclaimable:227132kB kernel_stack:34128kB pagetables:3048kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan 26 08:04:52 srv4332 kernel: [1576280.778367] lowmem_reserve[]: 0 0 62459 62459 62459
Jan 26 08:04:52 srv4332 kernel: [1576280.778369] Node 0 Normal free:76200kB min:22276kB low:27844kB high:33412kB active_anon:42291420kB inactive_anon:2115380kB active_file:8186568kB inactive_file:8188376kB unevictable:13008kB isolated(anon):0kB isolated(file):0kB present:65011712kB managed:63958980kB mlocked:13008kB dirty:388kB writeback:0kB mapped:323712kB shmem:28268kB slab_reclaimable:754192kB slab_unreclaimable:710372kB kernel_stack:42736kB pagetables:114136kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan 26 08:04:52 srv4332 kernel: [1576280.778373] lowmem_reserve[]: 0 0 0 0 0
Jan 26 08:04:52 srv4332 kernel: [1576280.778375] Node 1 Normal free:83612kB min:23008kB low:28760kB high:34512kB active_anon:13125928kB inactive_anon:1862468kB active_file:15575172kB inactive_file:28458760kB unevictable:5060kB isolated(anon):0kB isolated(file):68kB present:67108864kB managed:66055388kB mlocked:5060kB dirty:684kB writeback:0kB mapped:357280kB shmem:40432kB slab_reclaimable:1676512kB slab_unreclaimable:1421780kB kernel_stack:52512kB pagetables:100940kB unstable:0kB bounce:0kB free_pcp:240kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan 26 08:04:52 srv4332 kernel: [1576280.778377] lowmem_reserve[]: 0 0 0 0 0
Jan 26 08:04:52 srv4332 kernel: [1576280.778379] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 0*64kB 5*128kB (U) 2*256kB (U) 3*512kB (U) 2*1024kB (U) 1*2048kB (M) 2*4096kB (UM) = 15012kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778386] Node 0 DMA32: 30186*4kB (UME) 17100*8kB (UMEH) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 257544kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778391] Node 0 Normal: 19344*4kB (UME) 46*8kB (UME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 77744kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778395] Node 1 Normal: 21081*4kB (UME) 63*8kB (UME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 84828kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778401] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778402] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778402] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778403] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778404] 15157528 total pagecache pages
Jan 26 08:04:52 srv4332 kernel: [1576280.778405] 32592 pages in swap cache
Jan 26 08:04:52 srv4332 kernel: [1576280.778406] Swap cache stats: add 7008111, delete 6975519, find 17798671/18711393
Jan 26 08:04:52 srv4332 kernel: [1576280.778407] Free swap = 4024636kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778408] Total swap = 8388604kB
Jan 26 08:04:52 srv4332 kernel: [1576280.778409] 33519263 pages RAM
Jan 26 08:04:52 srv4332 kernel: [1576280.778409] 0 pages HighMem/MovableOnly
Jan 26 08:04:52 srv4332 kernel: [1576280.778410] 546797 pages reserved
Jan 26 08:04:52 srv4332 kernel: [1576280.778410] 0 pages cma reserved
Jan 26 08:04:52 srv4332 kernel: [1576280.778411] 0 pages hwpoisoned

.
.
.


Jan 26 08:04:52 srv4332 kernel: [1576280.778536] Out of memory: Kill process 47918 (kvm) score 399 or sacrifice child
Jan 26 08:04:52 srv4332 kernel: [1576280.779239] Killed process 47918 (kvm) total-vm:86127896kB, anon-rss:57644100kB, file-rss:11608kB
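The score in the kill message can be roughly reproduced from the numbers in the same log. The kernel's badness heuristic is approximately the task's memory footprint as a permille of total RAM pages (plus `oom_score_adj`); the real formula also counts swap entries and page-table pages, so this sketch only lands in the right ballpark:

```python
# Approximate the OOM badness score from the log values above.
# badness ~= (task memory in pages) / (total RAM pages) * 1000
anon_rss_kb = 57644100              # anon-rss from the kill message
file_rss_kb = 11608                 # file-rss from the kill message
total_ram_pages = 33519263          # "pages RAM" from the log, 4 kB pages

task_pages = (anon_rss_kb + file_rss_kb) // 4
score = task_pages * 1000 // total_ram_pages
print(score)  # ~430, the same ballpark as the logged score of 399
```

A score near 400 means this single kvm process held roughly 40% of the host's RAM, so it was the natural victim once memory ran out.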
 

tom
Proxmox Staff Member
I assume you are not running the current kernel:
proxmox-ve: 4.4-78 (running kernel: 4.4.35-1-pve)

a known and already fixed problem.

Install the current kernel.
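The mismatch is visible in the pveversion output above: `pve-kernel-4.4.35-2-pve` is installed, but the running kernel is 4.4.35-1-pve, so the node was upgraded without a reboot. A sketch that detects this from version strings like those in the post (`sort -V` does the version-aware comparison; the sample data is copied from the pveversion output):

```shell
# Compare the running kernel against the newest installed pve-kernel.
# On a live host: running=$(uname -r); here we use the thread's values.
running="4.4.35-1-pve"
installed="4.4.35-1-pve
4.4.35-2-pve
4.4.21-1-pve
4.4.24-1-pve
4.4.19-1-pve"

newest=$(printf '%s\n' "$installed" | sort -V | tail -n 1)
if [ "$running" != "$newest" ]; then
    echo "reboot needed: running $running, newest installed $newest"
fi
```

After pulling in the kernel update (on Proxmox, via the usual apt upgrade path), the fix only takes effect once the node is rebooted into the new kernel.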
 

tommytom
Indeed, I had not done that on 3 of the 6 servers - rebooted. Curious to see whether the cluster runs stably again now.

Thanks for your quick help.
 
