Proxmox system rebooting without any reason? ^@ logged to syslog

SPQRInc

Hello,

yesterday my Proxmox server hard-rebooted without any reason that I could see.

The logs are showing
Code:
Sep  1 15:13:47 example kernel: [4514929.741761] Firewall: *UDP_IN Blocked* IN=vmbr0 OUT= MAC=ff:ff:ff:ff:ff:ff:0c:c4:7a:77:38:28:08:00 SRC=123.123.123.123 DST=255.255.255.255 LEN=173 TOS=0x00 PREC=0x00 TTL=64 ID=47121 DF PROTO=UDP SPT=17500 DPT=17500 LEN=153
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^$
Sep  1 15:17:50 example systemd-modules-load[817]: Module 'fuse' is builtin
Sep  1 15:17:50 example systemd-modules-load[817]: Inserted module 'vhost_net'
Sep  1 15:17:50 example hdparm[856]: Setting parameters of disc: (none).
Sep  1 15:17:50 example systemd-fsck[1015]: /dev/sda3: Journal wird wiederhergestellt
Sep  1 15:17:50 example systemd-fsck[1015]: /dev/sda3: sauber, 314/62592 Dateien, 40617/250112 Blöcke
As I learned yesterday, the ^@ stands for a NULL byte. But why is it logged? Any explanations for that?
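The NUL bytes typically appear because the hard reset left log blocks that were allocated but never flushed; after the journal replay they come back zero-filled. A quick way to confirm and strip them - the file names below are only a demo stand-in, on a real box you would point the same commands at /var/log/syslog or wherever your syslog daemon writes:

```shell
# Demo file standing in for the real syslog: the crash left
# two NUL bytes at the point where logging stopped
printf 'Sep  1 15:13:47 example kernel: last line before crash\0\0\n' > /tmp/demo.log

# Count the NUL bytes in the file
tr -cd '\0' < /tmp/demo.log | wc -c

# Strip them out so the file is grep-friendly again
tr -d '\0' < /tmp/demo.log > /tmp/demo.clean
```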
 
Install netconsole logging or install kdump (and configure it properly). Post mortem analysis is almost impossible without any logs.
 
Hi LnxBil,

thanks for your reply.

That could be an idea - but the system was logging, wasn't it? It went down at 15:13:xx and was available again at 15:17:xx.


Sep 1 15:13:47 example kernel: [4514929.741761] Firewall: *UDP_IN Blocked* IN=vmbr0 OUT= MAC=ff:ff:ff:ff:ff:ff:0c:c4:7a:77:38:28:08:00 SRC=123.123.123.123 DST=255.255.255.255 LEN=173 TOS=0x00 PREC=0x00 TTL=64 ID=47121 DF PROTO=UDP SPT=17500 DPT=17500 LEN=153
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^$
Sep 1 15:17:50 example systemd-modules-load[817]: Module 'fuse' is builtin
 
Yeah, something is logged, but nothing useful. Output from a real Linux kernel crash usually never reaches the disk: the system has crashed, so logging has crashed along with it.

Crashes are not logged - if your disk driver breaks, for example, how would anything be written? Install netconsole and make sure it works. Install kdump, manually trigger a crash to verify that it captures a dump, and then wait for the next crash to occur.
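For reference, a sketch of both setups. All addresses, interface names, and ports below are placeholders you must replace with your own; the sysrq crash trigger WILL reboot the machine, so only run it in a maintenance window:

```shell
# Load netconsole so kernel messages leave the box over UDP even when
# the disk is gone. Syntax: src-port@src-ip/dev,dst-port@dst-ip/dst-mac
#
#   modprobe netconsole netconsole=6666@192.168.1.10/eth0,514@192.168.1.20/aa:bb:cc:dd:ee:ff
#
# On the log host, capture the messages with e.g.:
#
#   nc -u -l -p 514
#
# After installing and configuring kdump, verify that it really
# captures a dump by forcing a test crash (reboots the machine!):
#
#   echo 1 > /proc/sys/kernel/sysrq
#   echo c > /proc/sysrq-trigger
```

If the test crash produces a vmcore in the kdump dump directory, you can be reasonably confident the next real crash will too.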
 
Same problem here.
Low memory? I have 8 GB of RAM and run just two KVM guests with 512 MB of memory each.
2016-09-05T09:21:56-04:00 vps kernel: [170112.115933] Hardware name: Intel Corporation S1200SP/S1200SP, BIOS S1200SP.86B.01.02.0001.111120150000 11/11/2015
2016-09-05T09:21:56-04:00 vps kernel: [170112.115934] 0000000000000286 000000000389b3aa ffff8801ea2dbc90 ffffffff813eb0e3
2016-09-05T09:21:56-04:00 vps kernel: [170112.115935] ffff8801ea2dbd68 ffff88024a793000 ffff8801ea2dbcf8 ffffffff812087fb
2016-09-05T09:21:56-04:00 vps kernel: [170112.115936] ffff8801ea2dbcc8 ffffffff811903db ffff88022963e740 ffff88022963e740
2016-09-05T09:21:56-04:00 vps kernel: [170112.115938] Call Trace:
2016-09-05T09:21:56-04:00 vps kernel: [170112.115942] [<ffffffff813eb0e3>] dump_stack+0x63/0x90
2016-09-05T09:21:56-04:00 vps kernel: [170112.115946] [<ffffffff811903db>] ? find_lock_task_mm+0x3b/0x80
2016-09-05T09:21:56-04:00 vps kernel: [170112.115949] [<ffffffff811fc640>] ? mem_cgroup_iter+0x1d0/0x380
2016-09-05T09:21:56-04:00 vps kernel: [170112.115951] [<ffffffff811ff2d7>] mem_cgroup_oom_synchronize+0x347/0x360
2016-09-05T09:21:56-04:00 vps kernel: [170112.115954] [<ffffffff811910a4>] pagefault_out_of_memory+0x44/0xc0
2016-09-05T09:21:56-04:00 vps kernel: [170112.115956] [<ffffffff8106b733>] __do_page_fault+0x3e3/0x410
2016-09-05T09:21:56-04:00 vps kernel: [170112.115959] [<ffffffff81849878>] page_fault+0x28/0x30
2016-09-05T09:21:56-04:00 vps kernel: [170112.115962] memory: usage 262144kB, limit 262144kB, failcnt 1421265
2016-09-05T09:21:56-04:00 vps kernel: [170112.115964] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
2016-09-05T09:21:56-04:00 vps kernel: [170112.116058] Memory cgroup out of memory: Kill process 10657 (python) score 926 or sacrifice child
2016-09-05T09:21:57-04:00 vps kernel: [170113.172641] python invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=0
2016-09-05T09:21:57-04:00 vps kernel: [170113.172646] CPU: 2 PID: 10658 Comm: python Tainted: P O 4.4.6-1-pve #1
2016-09-05T09:21:57-04:00 vps kernel: [170113.172648] 0000000000000286 00000000da35f20b ffff8801ea2dbc90 ffffffff813eb0e3
2016-09-05T09:21:57-04:00 vps kernel: [170113.172651] ffff8801ea2dbcc8 ffffffff811903db ffff88022963e740 ffff88022963e740
2016-09-05T09:21:57-04:00 vps kernel: [170113.172656] [<ffffffff813eb0e3>] dump_stack+0x63/0x90
2016-09-05T09:21:57-04:00 vps kernel: [170113.172660] [<ffffffff811903db>] ? find_lock_task_mm+0x3b/0x80
2016-09-05T09:21:57-04:00 vps kernel: [170113.172663] [<ffffffff811fc640>] ? mem_cgroup_iter+0x1d0/0x380
2016-09-05T09:21:57-04:00 vps kernel: [170113.172666] [<ffffffff811ff2d7>] mem_cgroup_oom_synchronize+0x347/0x360
2016-09-05T09:21:57-04:00 vps kernel: [170113.172668] [<ffffffff811910a4>] pagefault_out_of_memory+0x44/0xc0
2016-09-05T09:21:57-04:00 vps kernel: [170113.172671] [<ffffffff8106b733>] __do_page_fault+0x3e3/0x410
2016-09-05T09:21:57-04:00 vps kernel: [170113.172673] [<ffffffff81849878>] page_fault+0x28/0x30
2016-09-05T09:21:57-04:00 vps kernel: [170113.172677] memory: usage 261972kB, limit 262144kB, failcnt 1421293
2016-09-05T09:21:57-04:00 vps kernel: [170113.172678] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
2016-09-05T09:21:57-04:00 vps kernel: [170113.172685] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
2016-09-05T09:21:57-04:00 vps kernel: [170113.172741] [ 5543] 1 5543 2086 1 9 3 32 0 portmap
2016-09-05T09:21:57-04:00 vps kernel: [170113.172743] [ 5702] 0 5702 4173 3 12 3 38 0 atd
2016-09-05T09:21:57-04:00 vps kernel: [170113.172745] [ 5800] 0 5800 2615 1 9 3 32 0 inetd
2016-09-05T09:21:57-04:00 vps kernel: [170113.172747] [ 5854] 104 5854 7456 1 19 3 64 0 dbus-daemon
2016-09-05T09:21:57-04:00 vps kernel: [170113.172749] [ 5914] 0 5914 1050 2 8 3 35 0 mysqld_safe
2016-09-05T09:21:57-04:00 vps kernel: [170113.172751] [ 6242] 0 6242 1027 1 7 3 25 0 logger
2016-09-05T09:21:57-04:00 vps kernel: [170113.172753] [ 6586] 101 6586 10004 24 24 3 111 0 qmgr
2016-09-05T09:21:57-04:00 vps kernel: [170113.172755] [ 6589] 0 6589 3649 2 13 3 38 0 getty
2016-09-05T09:21:57-04:00 vps kernel: [170113.172758] [ 7254] 1000 7254 4757 2 15 3 408 0 bash
2016-09-05T09:21:57-04:00 vps kernel: [170113.172760] [ 7736] 1000 7736 4761 2 15 3 412 0 bash
2016-09-05T09:21:57-04:00 vps kernel: [170113.172764] [26093] 1000 26093 256647 64152 436 4 117508 0 python
2016-09-05T09:21:57-04:00 vps kernel: [170113.172768] [10653] 0 10653 13008 161 31 3 0 0 sshd
2016-09-05T09:22:03-04:00 vps kernel: [170119.382910] Call Trace:
2016-09-05T09:22:03-04:00 vps kernel: [170119.382914] [<ffffffff813eb0e3>] dump_stack+0x63/0x90
2016-09-05T09:22:03-04:00 vps kernel: [170119.382916] [<ffffffff812087fb>] dump_header+0x67/0x1d5
2016-09-05T09:22:03-04:00 vps kernel: [170119.382919] [<ffffffff811903db>] ? find_lock_task_mm+0x3b/0x80
2016-09-05T09:22:03-04:00 vps kernel: [170119.382920] [<ffffffff811909a5>] oom_kill_process+0x205/0x3c0
2016-09-05T09:22:03-04:00 vps kernel: [170119.382921] [<ffffffff811fc640>] ? mem_cgroup_iter+0x1d0/0x380
2016-09-05T09:22:03-04:00 vps kernel: [170119.382923] [<ffffffff811fe6b4>] mem_cgroup_out_of_memory+0x2a4/0x2e0
2016-09-05T09:22:03-04:00 vps kernel: [170119.382924] [<ffffffff811ff2d7>] mem_cgroup_oom_synchronize+0x347/0x360
2016-09-05T09:22:03-04:00 vps kernel: [170119.382926] [<ffffffff811fa6d0>] ? mem_cgroup_css_online+0x240/0x240
2016-09-05T09:22:03-04:00 vps kernel: [170119.382927] [<ffffffff811910a4>] pagefault_out_of_memory+0x44/0xc0
2016-09-05T09:22:03-04:00 vps kernel: [170119.382928] [<ffffffff8106af2f>] mm_fault_error+0x7f/0x160
2016-09-05T09:22:03-04:00 vps kernel: [170119.382930] [<ffffffff8106b733>] __do_page_fault+0x3e3/0x410
2016-09-05T09:22:03-04:00 vps kernel: [170119.382931] [<ffffffff8106b782>] do_page_fault+0x22/0x30
2016-09-05T09:22:03-04:00 vps kernel: [170119.382933] [<ffffffff81849878>] page_fault+0x28/0x30
2016-09-05T11:42:13-04:00 vps kernel: [178529.031699] python invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=0
2016-09-05T11:42:13-04:00 vps kernel: [178529.031701] python cpuset=105 mems_allowed=0
2016-09-05T11:42:13-04:00 vps kernel: [178529.031704] CPU: 3 PID: 30407 Comm: python Tainted: P O 4.4.6-1-pve #1
2016-09-05T11:42:13-04:00 vps kernel: [178529.031705] Hardware name: Intel Corporation S1200SP/S1200SP, BIOS S1200SP.86B.01.02.0001.111120150000 11/11/2015
2016-09-05T11:42:13-04:00 vps kernel: [178529.031708] ffff880169c4fd68 ffff88024a793000 ffff880169c4fcf8 ffffffff812087fb
2016-09-05T11:42:13-04:00 vps kernel: [178529.031709] ffff880169c4fcc8 ffffffff811903db ffff8800813c5880 ffff8800813c5880
2016-09-05T11:42:13-04:00 vps kernel: [178529.031710] Call Trace:
2016-09-05T11:42:13-04:00 vps kernel: [178529.031714] [<ffffffff813eb0e3>] dump_stack+0x63/0x90
2016-09-05T11:42:13-04:00 vps kernel: [178529.031716] [<ffffffff812087fb>] dump_header+0x67/0x1d5
2016-09-05T11:42:13-04:00 vps kernel: [178529.031718] [<ffffffff811903db>] ? find_lock_task_mm+0x3b/0x80
2016-09-05T11:42:13-04:00 vps kernel: [178529.031719] [<ffffffff811909a5>] oom_kill_process+0x205/0x3c0
2016-09-05T11:42:13-04:00 vps kernel: [178529.031721] [<ffffffff811fc640>] ? mem_cgroup_iter+0x1d0/0x380
2016-09-05T11:42:13-04:00 vps kernel: [178529.031722] [<ffffffff811fe6b4>] mem_cgroup_out_of_memory+0x2a4/0x2e0
2016-09-05T11:42:13-04:00 vps kernel: [178529.031724] [<ffffffff811ff2d7>] mem_cgroup_oom_synchronize+0x347/0x360
2016-09-05T11:42:13-04:00 vps kernel: [178529.031725] [<ffffffff811fa6d0>] ? mem_cgroup_css_online+0x240/0x240
2016-09-05T11:42:13-04:00 vps kernel: [178529.031726] [<ffffffff811910a4>] pagefault_out_of_memory+0x44/0xc0
2016-09-05T11:42:13-04:00 vps kernel: [178529.031728] [<ffffffff8106af2f>] mm_fault_error+0x7f/0x160
2016-09-05T11:42:13-04:00 vps kernel: [178529.031729] [<ffffffff8106b733>] __do_page_fault+0x3e3/0x410
2016-09-05T11:42:13-04:00 vps kernel: [178529.031731] [<ffffffff81003885>] ? syscall_trace_enter_phase1+0xc5/0x140
2016-09-05T11:42:13-04:00 vps kernel: [178529.031732] [<ffffffff8106b782>] do_page_fault+0x22/0x30
2016-09-05T11:42:13-04:00 vps kernel: [178529.031734] [<ffffffff81849878>] page_fault+0x28/0x30
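Note that the trace above says "Memory cgroup out of memory" with "limit 262144kB" - a single cgroup hit its own 256 MB cap, not the host's 8 GB. The "cpuset=105" hint suggests it is container 105 that ran out. A sketch of how to check and raise the cap on a Proxmox VE host (VMID 105 is only a guess taken from the log):

```shell
# The cgroup limit from the log, converted from kB to MB:
echo $((262144 / 1024))   # -> 256

# On the Proxmox host, inspect and (if needed) raise the container's
# memory limit -- adjust the VMID to your own container:
#   pct config 105 | grep -i memory
#   pct set 105 -memory 1024
```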
 
2016-09-07T06:46:57-04:00 vps systemd-modules-load[1434]: Module 'fuse' is builtin
2016-09-07T06:46:57-04:00 vps systemd-modules-load[1434]: Inserted module 'vhost_net'
2016-09-07T06:46:57-04:00 vps hdparm[1470]: Setting parameters of disc: (none).
2016-09-07T06:46:57-04:00 vps keyboard-setup[1468]: Setting preliminary keymap...done.
2016-09-07T06:46:57-04:00 vps zpool[2288]: no pools available to import
2016-09-07T06:46:57-04:00 vps mv[2907]: /bin/mv: cannot stat '/etc/network/interfaces.new': No such file or directory
2016-09-07T06:46:57-04:00 vps pvepw-logger[2978]: starting pvefw logger
2016-09-07T06:46:57-04:00 vps networking[2918]: Configuring network interfaces...
2016-09-07T06:46:57-04:00 vps networking[2918]: Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
2016-09-07T06:25:01-04:00 vps CRON[28321]: pam_unix(cron:session): session opened for user root by (uid=0)
2016-09-07T06:25:01-04:00 vps CRON[28322]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
2016-09-07T06:25:21-04:00 vps pveproxy[23977]: received signal TERM
2016-09-07T06:25:21-04:00 vps pveproxy[23977]: server closing
2016-09-07T06:25:21-04:00 vps pveproxy[16892]: worker exit
2016-09-07T06:25:21-04:00 vps pveproxy[12610]: worker exit
2016-09-07T06:25:21-04:00 vps pveproxy[18036]: worker exit
2016-09-07T06:25:21-04:00 vps pveproxy[23977]: worker 12610 finished
2016-09-07T06:25:21-04:00 vps pveproxy[23977]: worker 16892 finished
2016-09-07T06:25:21-04:00 vps pveproxy[23977]: worker 18036 finished
2016-09-07T06:25:21-04:00 vps pveproxy[23977]: server stopped
2016-09-07T06:25:24-04:00 vps pveproxy[28481]: starting server
2016-09-07T06:25:24-04:00 vps pveproxy[28481]: starting 3 worker(s)
2016-09-07T06:25:24-04:00 vps pveproxy[28481]: worker 28482 started
2016-09-07T06:25:24-04:00 vps pveproxy[28481]: worker 28483 started
2016-09-07T06:25:24-04:00 vps pveproxy[28481]: worker 28484 started
2016-09-07T06:25:25-04:00 vps sshd[28479]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.229.172.108 user=root
2016-09-07T06:25:25-04:00 vps spiceproxy[24000]: received signal TERM
2016-09-07T06:25:25-04:00 vps spiceproxy[24000]: server closing
2016-09-07T06:25:25-04:00 vps spiceproxy[24001]: worker exit
2016-09-07T06:25:25-04:00 vps spiceproxy[24000]: worker 24001 finished
2016-09-07T06:25:25-04:00 vps spiceproxy[24000]: server stopped
2016-09-07T06:25:27-04:00 vps spiceproxy[28509]: starting server
2016-09-07T06:25:27-04:00 vps spiceproxy[28509]: starting 1 worker(s)
2016-09-07T06:25:27-04:00 vps spiceproxy[28509]: worker 28510 started
 
Hmm, that is really strange. Please update the old 4.4.6-1-pve kernel to the newest one and check whether the problem persists.
 
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
 
