LXC Container Crashes

n1ete

New Member
Feb 26, 2017
Hello there again. I have a Proxmox machine running a FreeNAS VM with an LSI controller attached via PCI passthrough, and an Emby LXC container based on Debian 8.

FreeNAS exports asynchronous NFS shares; these are mounted on the Proxmox host and bind-mounted into unprivileged LXC containers such as the Emby one.

My container and VM images live on a separate ZFS pool.
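For reference, the bind mounts into the container look roughly like this in the container config (the paths here are just placeholders, not my exact share names):
Code:
# /etc/pve/lxc/102.conf (excerpt, paths are examples)
# mp0 bind-mounts a host directory (the NFS mount) into the container
mp0: /mnt/pve/freenas-media,mp=/mnt/media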

Unfortunately, the LXC container crashes from time to time. Here is the log from shortly after a crash:
Code:
Mar 06 18:14:46 proxmox rrdcached[2441]: flushing old values
Mar 06 18:14:46 proxmox rrdcached[2441]: rotating journals
Mar 06 18:14:46 proxmox rrdcached[2441]: started new journal /var/lib/rrdcached/journal/rrd.journal.1488820486.740906
Mar 06 18:14:46 proxmox rrdcached[2441]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1488813286.740895
Mar 06 18:14:47 proxmox smartd[2418]: Device: /dev/sda [SAT], open() failed: No such device
Mar 06 18:14:47 proxmox smartd[2418]: Device: /dev/sdb [SAT], open() failed: No such device
Mar 06 18:14:47 proxmox smartd[2418]: Device: /dev/sdc [SAT], open() failed: No such device
Mar 06 18:17:01 proxmox CRON[16814]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 06 18:17:01 proxmox CRON[16815]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Mar 06 18:17:01 proxmox CRON[16814]: pam_unix(cron:session): session closed for user root
Mar 06 18:18:18 proxmox pvedaemon[2623]: <root@pam> successful auth for user 'root@pam'
Mar 06 18:22:41 proxmox pveproxy[28479]: worker exit
Mar 06 18:22:41 proxmox pveproxy[32638]: worker 28479 finished
Mar 06 18:22:41 proxmox pveproxy[32638]: starting 1 worker(s)
Mar 06 18:22:41 proxmox pveproxy[32638]: worker 17392 started
Mar 06 18:26:33 proxmox systemd-timesyncd[2003]: interval/delta/delay/jitter/drift 2048s/-0.003s/0.020s/0.003s/+31ppm
Mar 06 18:29:44 proxmox pvedaemon[18088]: shutdown CT 102: UPID:proxmox:000046A8:00644623:58BD9C88:vzshutdown:102:root@pam:
Mar 06 18:29:44 proxmox pvedaemon[2623]: <root@pam> starting task UPID:proxmox:000046A8:00644623:58BD9C88:vzshutdown:102:root@pam:
Mar 06 18:29:47 proxmox kernel: vmbr0: port 3(veth102i0) entered disabled state
Mar 06 18:29:47 proxmox kernel: vmbr0: port 3(veth102i0) entered disabled state
Mar 06 18:29:47 proxmox kernel: device veth102i0 left promiscuous mode
Mar 06 18:29:47 proxmox kernel: vmbr0: port 3(veth102i0) entered disabled state
Mar 06 18:29:48 proxmox pvedaemon[2621]: unable to get PID for CT 102 (not running?)
Mar 06 18:29:48 proxmox pvedaemon[2623]: <root@pam> end task UPID:proxmox:000046A8:00644623:58BD9C88:vzshutdown:102:root@pam: OK
Mar 06 18:29:48 proxmox systemd[1]: lxc@102.service: main process exited, code=exited, status=1/FAILURE
Mar 06 18:29:48 proxmox systemd[1]: Unit lxc@102.service entered failed state.
Mar 06 18:29:50 proxmox pvedaemon[2622]: <root@pam> starting task UPID:proxmox:00004882:0064488E:58BD9C8E:vzstart:102:root@pam:
Mar 06 18:29:50 proxmox pvedaemon[18562]: starting CT 102: UPID:proxmox:00004882:0064488E:58BD9C8E:vzstart:102:root@pam:
Mar 06 18:29:50 proxmox systemd[1]: Starting LXC Container: 102...
Mar 06 18:29:50 proxmox kernel: IPv6: ADDRCONF(NETDEV_UP): veth102i0: link is not ready
Mar 06 18:29:51 proxmox kernel: device veth102i0 entered promiscuous mode
Mar 06 18:29:51 proxmox kernel: eth0: renamed from vethHAMM2K
Mar 06 18:29:51 proxmox systemd[1]: Started LXC Container: 102.
Mar 06 18:29:51 proxmox pvedaemon[2622]: <root@pam> end task UPID:proxmox:00004882:0064488E:58BD9C8E:vzstart:102:root@pam: OK
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.439:50): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/sys/fs/pstore/" pid=18754 comm="mount" fstype="pstore" srcname="pstore"
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.439:51): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/sys/fs/pstore/" pid=18754 comm="mount" fstype="pstore" srcname="pstore" flags="ro"
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.471:52): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=18863 comm="mount" flags="rw, remount, silent"
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.475:53): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=18864 comm="mount" flags="rw, remount"
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.487:54): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/run/" pid=18919 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.491:55): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/run/lock/" pid=18930 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.511:56): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/run/shm/" pid=19011 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"
Mar 06 18:29:51 proxmox kernel: audit: type=1400 audit(1488821391.515:57): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/dev/pts/" pid=19019 comm="mount" flags="rw, nosuid, noexec, remount, relatime"
Mar 06 18:29:51 proxmox kernel: vmbr0: port 3(veth102i0) entered forwarding state
Mar 06 18:29:51 proxmox kernel: vmbr0: port 3(veth102i0) entered forwarding state
Mar 06 18:30:53 proxmox pveproxy[32640]: worker exit
Mar 06 18:30:53 proxmox pveproxy[32638]: worker 32640 finished
Mar 06 18:30:53 proxmox pveproxy[32638]: starting 1 worker(s)
Mar 06 18:30:53 proxmox pveproxy[32638]: worker 20070 started
Mar 06 18:33:18 proxmox pvedaemon[2621]: <root@pam> successful auth for user 'root@pam'
Mar 06 18:34:40 proxmox pvedaemon[2622]: worker exit
Mar 06 18:34:40 proxmox pvedaemon[2620]: worker 2622 finished
Mar 06 18:34:40 proxmox pvedaemon[2620]: starting 1 worker(s)
Mar 06 18:34:40 proxmox pvedaemon[2620]: worker 20750 started
Mar 06 18:39:35 proxmox pvedaemon[21612]: shutdown CT 102: UPID:proxmox:0000546C:00652D30:58BD9ED7:vzshutdown:102:root@pam:
Mar 06 18:39:35 proxmox pvedaemon[20750]: <root@pam> starting task UPID:proxmox:0000546C:00652D30:58BD9ED7:vzshutdown:102:root@pam:
Mar 06 18:39:43 proxmox kernel: vmbr0: port 3(veth102i0) entered disabled state
Mar 06 18:39:43 proxmox kernel: vmbr0: port 3(veth102i0) entered disabled state
Mar 06 18:39:43 proxmox kernel: device veth102i0 left promiscuous mode
Mar 06 18:39:43 proxmox kernel: vmbr0: port 3(veth102i0) entered disabled state
Mar 06 18:39:44 proxmox pvedaemon[2621]: unable to get PID for CT 102 (not running?)
Mar 06 18:39:44 proxmox pvedaemon[20750]: <root@pam> end task UPID:proxmox:0000546C:00652D30:58BD9ED7:vzshutdown:102:root@pam: OK
Mar 06 18:39:44 proxmox systemd[1]: lxc@102.service: main process exited, code=exited, status=1/FAILURE
Mar 06 18:39:44 proxmox systemd[1]: Unit lxc@102.service entered failed state.
Mar 06 18:40:04 proxmox pvedaemon[2623]: <root@pam> starting task UPID:proxmox:0000565E:0065387E:58BD9EF4:vzstart:102:root@pam:
Mar 06 18:40:04 proxmox pvedaemon[22110]: starting CT 102: UPID:proxmox:0000565E:0065387E:58BD9EF4:vzstart:102:root@pam:
Mar 06 18:40:04 proxmox systemd[1]: Starting LXC Container: 102...
Mar 06 18:40:05 proxmox kernel: IPv6: ADDRCONF(NETDEV_UP): veth102i0: link is not ready
Mar 06 18:40:05 proxmox kernel: device veth102i0 entered promiscuous mode
Mar 06 18:40:05 proxmox kernel: eth0: renamed from veth8EMLEA
Mar 06 18:40:05 proxmox systemd[1]: Started LXC Container: 102.
Mar 06 18:40:05 proxmox pvedaemon[2623]: <root@pam> end task UPID:proxmox:0000565E:0065387E:58BD9EF4:vzstart:102:root@pam: OK
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.721:58): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/sys/fs/pstore/" pid=22300 comm="mount" fstype="pstore" srcname="pstore"
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.721:59): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/sys/fs/pstore/" pid=22300 comm="mount" fstype="pstore" srcname="pstore" flags="ro"
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.757:60): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=22409 comm="mount" flags="rw, remount, silent"
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.757:61): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=22410 comm="mount" flags="rw, remount"
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.773:62): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/run/" pid=22465 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.773:63): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/run/lock/" pid=22476 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.797:64): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/run/shm/" pid=22557 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"
Mar 06 18:40:05 proxmox kernel: audit: type=1400 audit(1488822005.797:65): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/dev/pts/" pid=22565 comm="mount" flags="rw, nosuid, noexec, remount, relatime"
Mar 06 18:40:05 proxmox kernel: vmbr0: port 3(veth102i0) entered forwarding state
Mar 06 18:40:05 proxmox kernel: vmbr0: port 3(veth102i0) entered forwarding state
Mar 06 18:40:22 proxmox pvedaemon[20750]: <root@pam> starting task UPID:proxmox:00005BB9:00653F91:58BD9F06:aptupdate::root@pam:
Mar 06 18:40:24 proxmox pvedaemon[23481]: update new package list: /var/lib/pve-manager/pkgupdates
Mar 06 18:40:26 proxmox pvedaemon[20750]: <root@pam> end task UPID:proxmox:00005BB9:00653F91:58BD9F06:aptupdate::root@pam: OK
Mar 06 18:44:47 proxmox smartd[2418]: Device: /dev/sda [SAT], open() failed: No such device
Mar 06 18:44:47 proxmox smartd[2418]: Device: /dev/sdb [SAT], open() failed: No such device
Mar 06 18:44:47 proxmox smartd[2418]: Device: /dev/sdc [SAT], open() failed: No such device
Mar 06 18:44:47 proxmox smartd[2418]: Device: /dev/sdd [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 67 to 64
Mar 06 18:44:47 proxmox smartd[2418]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 111 to 109
Mar 06 18:44:47 proxmox smartd[2418]: Device: /dev/sdf [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 112 to 111
Mar 06 18:48:18 proxmox pvedaemon[2621]: <root@pam> successful auth for user 'root@pam'
Mar 06 18:49:31 proxmox pveproxy[32639]: worker exit
Mar 06 18:49:32 proxmox pveproxy[32638]: worker 32639 finished
Mar 06 18:49:32 proxmox pveproxy[32638]: starting 1 worker(s)
Mar 06 18:49:32 proxmox pveproxy[32638]: worker 26104 started
Mar 06 18:51:54 proxmox pvedaemon[2621]: worker exit
Mar 06 18:51:54 proxmox pvedaemon[2620]: worker 2621 finished
Mar 06 18:51:54 proxmox pvedaemon[2620]: starting 1 worker(s)
Mar 06 18:51:54 proxmox pvedaemon[2620]: worker 27276 started
Mar 06 18:54:30 proxmox kernel: Threadpool work invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=0
Mar 06 18:54:30 proxmox kernel: Threadpool work cpuset=102 mems_allowed=0
Mar 06 18:54:30 proxmox kernel: CPU: 7 PID: 23381 Comm: Threadpool work Tainted: P O 4.4.40-1-pve #1
Mar 06 18:54:30 proxmox kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C224D4I-14S, BIOS P3.20 05/29/2015
Mar 06 18:54:30 proxmox kernel: 0000000000000286 00000000006de7ac ffff88001180fc90 ffffffff813f9a83
Mar 06 18:54:30 proxmox kernel: ffff88001180fd68 ffff880125415800 ffff88001180fcf8 ffffffff8120ac6b
Mar 06 18:54:30 proxmox kernel: ffff88001180fcc8 ffffffff8119205b ffff88001a3ab800 ffff88001a3ab800
Mar 06 18:54:30 proxmox kernel: Call Trace:
Mar 06 18:54:30 proxmox kernel: [<ffffffff813f9a83>] dump_stack+0x63/0x90
Mar 06 18:54:30 proxmox kernel: [<ffffffff8120ac6b>] dump_header+0x67/0x1d5
Mar 06 18:54:30 proxmox kernel: [<ffffffff8119205b>] ? find_lock_task_mm+0x3b/0x80
Mar 06 18:54:30 proxmox kernel: [<ffffffff81192625>] oom_kill_process+0x205/0x3c0
Mar 06 18:54:30 proxmox kernel: [<ffffffff811fe84f>] ? mem_cgroup_iter+0x1cf/0x380
Mar 06 18:54:30 proxmox kernel: [<ffffffff81200818>] mem_cgroup_out_of_memory+0x2a8/0x2f0
Mar 06 18:54:30 proxmox kernel: [<ffffffff812015b7>] mem_cgroup_oom_synchronize+0x347/0x360
Mar 06 18:54:30 proxmox kernel: [<ffffffff811fc5e0>] ? mem_cgroup_begin_page_stat+0x90/0x90
Mar 06 18:54:30 proxmox kernel: [<ffffffff81192d24>] pagefault_out_of_memory+0x44/0xc0
Mar 06 18:54:30 proxmox kernel: [<ffffffff8106af1f>] mm_fault_error+0x7f/0x160
Mar 06 18:54:30 proxmox kernel: [<ffffffff8106b723>] __do_page_fault+0x3e3/0x410
Mar 06 18:54:30 proxmox kernel: [<ffffffff8106b772>] do_page_fault+0x22/0x30
Mar 06 18:54:30 proxmox kernel: [<ffffffff8185e4b8>] page_fault+0x28/0x30
Mar 06 18:54:30 proxmox kernel: Task in /lxc/102 killed as a result of limit of /lxc/102
Mar 06 18:54:30 proxmox kernel: memory: usage 1048576kB, limit 1048576kB, failcnt 643603
Mar 06 18:54:30 proxmox kernel: memory+swap: usage 1048576kB, limit 1572864kB, failcnt 0
Mar 06 18:54:30 proxmox kernel: kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Mar 06 18:54:30 proxmox kernel: Memory cgroup stats for /lxc/102: cache:84KB rss:1048492KB rss_huge:0KB mapped_file:36KB dirty:12KB writeback:12KB swap:0KB inactive_anon:524368KB active_anon:524168KB inactive_file:8KB active_file:0KB unevictable:0KB
Mar 06 18:54:30 proxmox kernel: [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
Mar 06 18:54:30 proxmox kernel: [22160] 100000 22160 3873 41 13 3 0 0 init
Mar 06 18:54:30 proxmox kernel: [22936] 100000 22936 9270 98 24 3 0 0 rpcbind
Mar 06 18:54:30 proxmox kernel: [23052] 100000 23052 11190 107 27 3 0 0 su
Mar 06 18:54:30 proxmox kernel: [23073] 100000 23073 46235 116 27 3 0 0 rsyslogd
Mar 06 18:54:30 proxmox kernel: [23119] 100103 23119 624453 224014 789 7 0 0 Main
Mar 06 18:54:30 proxmox kernel: [23164] 100000 23164 4756 42 13 3 0 0 atd
Mar 06 18:54:30 proxmox kernel: [23184] 100102 23184 10531 100 23 3 0 0 dbus-daemon
Mar 06 18:54:30 proxmox kernel: [23254] 100000 23254 13796 180 29 3 0 0 sshd
Mar 06 18:54:30 proxmox kernel: [23259] 100000 23259 6476 56 18 3 0 0 cron
Mar 06 18:54:30 proxmox kernel: [23370] 100000 23370 9042 141 22 3 0 0 master
Mar 06 18:54:30 proxmox kernel: [23374] 100100 23374 9558 133 23 3 0 0 pickup
Mar 06 18:54:30 proxmox kernel: [23375] 100100 23375 9570 134 23 4 0 0 qmgr
Mar 06 18:54:30 proxmox kernel: [23420] 100000 23420 1060 25 8 3 0 0 startpar
Mar 06 18:54:30 proxmox kernel: [23427] 100000 23427 3166 48 12 3 0 0 getty
Mar 06 18:54:30 proxmox kernel: [23428] 100000 23428 3166 47 12 3 0 0 getty
Mar 06 18:54:30 proxmox kernel: [28284] 100103 28284 268660 33620 158 4 0 0 ffmpeg
Mar 06 18:54:30 proxmox kernel: [28449] 100103 28449 134425 2619 51 4 0 0 ffmpeg
Mar 06 18:54:30 proxmox kernel: Memory cgroup out of memory: Kill process 23119 (Main) score 857 or sacrifice child
Mar 06 18:54:30 proxmox kernel: Killed process 28284 (ffmpeg) total-vm:1074640kB, anon-rss:134480kB, file-rss:0kB
Mar 06 18:54:47 proxmox kernel: Threadpool work invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=0
Mar 06 18:54:47 proxmox kernel: Threadpool work cpuset=102 mems_allowed=0
Mar 06 18:54:47 proxmox kernel: CPU: 6 PID: 23454 Comm: Threadpool work Tainted: P O 4.4.40-1-pve #1
Mar 06 18:54:47 proxmox kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C224D4I-14S, BIOS P3.20 05/29/2015
Mar 06 18:54:47 proxmox kernel: 0000000000000286 00000000dd59b5e4 ffff8800229b7c90 ffffffff813f9a83
Mar 06 18:54:47 proxmox kernel: ffff8800229b7d68 ffff880125415800 ffff8800229b7cf8 ffffffff8120ac6b
Mar 06 18:54:47 proxmox kernel: ffff8800229b7cc8 ffffffff8119205b ffff880212827000 ffff880212827000
Mar 06 18:54:47 proxmox kernel: Call Trace:
Mar 06 18:54:47 proxmox kernel: [<ffffffff813f9a83>] dump_stack+0x63/0x90
Mar 06 18:54:47 proxmox kernel: [<ffffffff8120ac6b>] dump_header+0x67/0x1d5
Mar 06 18:54:47 proxmox kernel: [<ffffffff8119205b>] ? find_lock_task_mm+0x3b/0x80
Mar 06 18:54:47 proxmox kernel: [<ffffffff81192625>] oom_kill_process+0x205/0x3c0
Mar 06 18:54:47 proxmox kernel: [<ffffffff811fe84f>] ? mem_cgroup_iter+0x1cf/0x380
Mar 06 18:54:47 proxmox kernel: [<ffffffff81200818>] mem_cgroup_out_of_memory+0x2a8/0x2f0
Mar 06 18:54:47 proxmox kernel: [<ffffffff812015b7>] mem_cgroup_oom_synchronize+0x347/0x360
Mar 06 18:54:47 proxmox kernel: [<ffffffff811fc5e0>] ? mem_cgroup_begin_page_stat+0x90/0x90
Mar 06 18:54:47 proxmox kernel: [<ffffffff81192d24>] pagefault_out_of_memory+0x44/0xc0
Mar 06 18:54:47 proxmox kernel: [<ffffffff8106af1f>] mm_fault_error+0x7f/0x160
Mar 06 18:54:47 proxmox kernel: [<ffffffff8106b723>] __do_page_fault+0x3e3/0x410
Mar 06 18:54:47 proxmox kernel: [<ffffffff8106b772>] do_page_fault+0x22/0x30
Mar 06 18:54:47 proxmox kernel: [<ffffffff8185e4b8>] page_fault+0x28/0x30
Mar 06 18:54:47 proxmox kernel: Task in /lxc/102 killed as a result of limit of /lxc/102
Mar 06 18:54:47 proxmox kernel: memory: usage 1048576kB, limit 1048576kB, failcnt 960921
Mar 06 18:54:47 proxmox kernel: memory+swap: usage 1048576kB, limit 1572864kB, failcnt 0
Mar 06 18:54:47 proxmox kernel: kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Mar 06 18:54:47 proxmox kernel: Memory cgroup stats for /lxc/102: cache:184KB rss:1048392KB rss_huge:0KB mapped_file:28KB dirty:0KB writeback:12KB swap:0KB inactive_anon:529012KB active_anon:519400KB inactive_file:0KB active_file:0KB unevictable:0KB
Mar 06 18:54:47 proxmox kernel: [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
Mar 06 18:54:47 proxmox kernel: [22160] 100000 22160 3873 50 13 3 0 0 init
Mar 06 18:54:47 proxmox kernel: [22936] 100000 22936 9270 98 24 3 0 0 rpcbind
Mar 06 18:54:47 proxmox kernel: [23052] 100000 23052 11190 107 27 3 0 0 su
Mar 06 18:54:47 proxmox kernel: [23073] 100000 23073 46235 116 27 3 0 0 rsyslogd
Mar 06 18:54:47 proxmox kernel: [23119] 100103 23119 639557 238988 848 7 0 0 Main
Mar 06 18:54:47 proxmox kernel: [23164] 100000 23164 4756 42 13 3 0 0 atd
Mar 06 18:54:47 proxmox kernel: [23184] 100102 23184 10531 100 23 3 0 0 dbus-daemon
Mar 06 18:54:47 proxmox kernel: [23254] 100000 23254 13796 180 29 3 0 0 sshd
Mar 06 18:54:47 proxmox kernel: [23259] 100000 23259 6476 56 18 3 0 0 cron
Mar 06 18:54:47 proxmox kernel: [23370] 100000 23370 9042 141 22 3 0 0 master
Mar 06 18:54:47 proxmox kernel: [23374] 100100 23374 9558 133 23 3 0 0 pickup
Mar 06 18:54:47 proxmox kernel: [23375] 100100 23375 9570 134 23 4 0 0 qmgr
Mar 06 18:54:47 proxmox kernel: [23420] 100000 23420 1060 25 8 3 0 0 startpar
Mar 06 18:54:47 proxmox kernel: [23427] 100000 23427 3166 48 12 3 0 0 getty
Mar 06 18:54:47 proxmox kernel: [23428] 100000 23428 3166 47 12 3 0 0 getty
Mar 06 18:54:47 proxmox kernel: [28581] 100103 28581 253662 21317 124 4 0 0 ffmpeg
Mar 06 18:54:47 proxmox kernel: Memory cgroup out of memory: Kill process 23119 (Main) score 914 or sacrifice child
Mar 06 18:54:47 proxmox kernel: Killed process 28581 (ffmpeg) total-vm:1014648kB, anon-rss:85268kB, file-rss:0kB
Mar 06 18:54:52 proxmox kernel: init invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=0
Mar 06 18:54:52 proxmox kernel: init cpuset=102 mems_allowed=0
Mar 06 18:54:52 proxmox kernel: CPU: 6 PID: 22160 Comm: init Tainted: P O 4.4.40-1-pve #1
Mar 06 18:54:52 proxmox kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C224D4I-14S, BIOS P3.20 05/29/2015
Mar 06 18:54:52 proxmox kernel: 0000000000000286 00000000f83414d3 ffff88012e91fc90 ffffffff813f9a83
Mar 06 18:54:52 proxmox kernel: ffff88012e91fd68 ffff880125415800 ffff88012e91fcf8 ffffffff8120ac6b
Mar 06 18:54:52 proxmox kernel: ffff88012e91fcc8 ffffffff8119205b ffff88001a3a9c00 ffff88001a3a9c00
Mar 06 18:54:52 proxmox kernel: Call Trace:
Mar 06 18:54:52 proxmox kernel: [<ffffffff813f9a83>] dump_stack+0x63/0x90
Mar 06 18:54:52 proxmox kernel: [<ffffffff8120ac6b>] dump_header+0x67/0x1d5
Mar 06 18:54:52 proxmox kernel: [<ffffffff8119205b>] ? find_lock_task_mm+0x3b/0x80
Mar 06 18:54:52 proxmox kernel: [<ffffffff81192625>] oom_kill_process+0x205/0x3c0
Mar 06 18:54:52 proxmox kernel: [<ffffffff811fe84f>] ? mem_cgroup_iter+0x1cf/0x380
Mar 06 18:54:52 proxmox kernel: [<ffffffff81200818>] mem_cgroup_out_of_memory+0x2a8/0x2f0
Mar 06 18:54:52 proxmox kernel: [<ffffffff812015b7>] mem_cgroup_oom_synchronize+0x347/0x360
Mar 06 18:54:52 proxmox kernel: [<ffffffff811fc5e0>] ? mem_cgroup_begin_page_stat+0x90/0x90
Mar 06 18:54:52 proxmox kernel: [<ffffffff81192d24>] pagefault_out_of_memory+0x44/0xc0
Mar 06 18:54:52 proxmox kernel: [<ffffffff8106af1f>] mm_fault_error+0x7f/0x160
Mar 06 18:54:52 proxmox kernel: [<ffffffff8106b723>] __do_page_fault+0x3e3/0x410
Mar 06 18:54:52 proxmox kernel: [<ffffffff8106b772>] do_page_fault+0x22/0x30
Mar 06 18:54:52 proxmox kernel: [<ffffffff8185e4b8>] page_fault+0x28/0x30
Mar 06 18:54:52 proxmox kernel: Task in /lxc/102 killed as a result of limit of /lxc/102
Mar 06 18:54:52 proxmox kernel: memory: usage 962900kB, limit 1048576kB, failcnt 1721354
Mar 06 18:54:52 proxmox kernel: memory+swap: usage 962900kB, limit 1572864kB, failcnt 0
Mar 06 18:54:52 proxmox kernel: kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Mar 06 18:54:52 proxmox kernel: Memory cgroup stats for /lxc/102: cache:1120KB rss:961676KB rss_huge:0KB mapped_file:1080KB dirty:0KB writeback:0KB swap:0KB inactive_anon:529012KB active_anon:432708KB inactive_file:4KB active_file:1008KB unevictable:0KB
Mar 06 18:54:52 proxmox kernel: [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
Mar 06 18:54:52 proxmox kernel: [22160] 100000 22160 3873 56 13 3 0 0 init
Mar 06 18:54:52 proxmox kernel: [22936] 100000 22936 9270 98 24 3 0 0 rpcbind
Mar 06 18:54:52 proxmox kernel: [23052] 100000 23052 11190 107 27 3 0 0 su
Mar 06 18:54:52 proxmox kernel: [23073] 100000 23073 46235 116 27 3 0 0 rsyslogd
Mar 06 18:54:52 proxmox kernel: [23119] 100103 23119 639557 239362 848 7 0 0 Main
Mar 06 18:54:52 proxmox kernel: [23164] 100000 23164 4756 42 13 3 0 0 atd
Mar 06 18:54:52 proxmox kernel: [23184] 100102 23184 10531 100 23 3 0 0 dbus-daemon
Mar 06 18:54:52 proxmox kernel: [23254] 100000 23254 13796 180 29 3 0 0 sshd
Mar 06 18:54:52 proxmox kernel: [23259] 100000 23259 6476 56 18 3 0 0 cron
Mar 06 18:54:52 proxmox kernel: [23370] 100000 23370 9042 141 22 3 0 0 master
Mar 06 18:54:52 proxmox kernel: [23374] 100100 23374 9558 133 23 3 0 0 pickup
Mar 06 18:54:52 proxmox kernel: [23375] 100100 23375 9570 134 23 4 0 0 qmgr
Mar 06 18:54:52 proxmox kernel: [23420] 100000 23420 1060 25 8 3 0 0 startpar
Mar 06 18:54:52 proxmox kernel: [23427] 100000 23427 3166 48 12 3 0 0 getty
Mar 06 18:54:52 proxmox kernel: [23428] 100000 23428 3166 47 12 3 0 0 getty
Mar 06 18:54:52 proxmox kernel: Memory cgroup out of memory: Kill process 23119 (Main) score 916 or sacrifice child
Mar 06 18:54:52 proxmox kernel: Killed process 23119 (Main) total-vm:2558228kB, anon-rss:956060kB, file-rss:1560kB

A few further questions:

1. What is this AppArmor message in the log?
2. sda, sdb and sdc are my passthrough HDDs; how can I stop Proxmox (smartd) from probing them? A sketch of what I would try is right below this list.
3. Why is Emby crashing?
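Regarding question 2, this is roughly what I would put in /etc/smartd.conf to make smartd skip the passed-through disks, if that is even the right place. The device names are my guess, and I'm not sure every smartmontools version supports "-d ignore":
Code:
# /etc/smartd.conf (sketch, not tested)
# skip the disks that are passed through to FreeNAS...
/dev/sda -d ignore
/dev/sdb -d ignore
/dev/sdc -d ignore
# ...and keep scanning everything else (exact default options may differ)
DEVICESCAN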


I gave Emby 1 GB of RAM, which should be enough, but I think it crashes exactly when RAM usage hits 100%.
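If it really is just running out of memory, I suppose I could simply raise the limits, e.g. something like this (2 GB RAM / 1 GB swap is just an example value, not a recommendation):
Code:
# give CT 102 more memory and swap (values in MB, example only)
pct set 102 -memory 2048 -swap 1024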

Thanks again for your help ;)

n1ete
 
I'm facing similar problems. My container was killed even though it had enough memory; I cannot see memory consumption increase before the crash, neither in the graphs nor in my monitoring. The failing process in my case was a cron job that runs every 4 hours, 99% of the time without problems. The job itself tars and compresses a directory into a small file (less than 1 MB). The whole container has 1 GB of RAM and a filesystem smaller than that.
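In case it helps anyone comparing notes, this is roughly how I check on the host whether the container's memory cgroup was the culprit (cgroup v1 paths as in the log above; <CTID> stands for the container ID):
Code:
# look for OOM killer activity in the host kernel log for the current boot
journalctl -k -b | grep -iE 'oom|killed process'
# peak usage and limit-hit counter for the container's memory cgroup
cat /sys/fs/cgroup/memory/lxc/<CTID>/memory.max_usage_in_bytes
cat /sys/fs/cgroup/memory/lxc/<CTID>/memory.failcnt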
 
