Proxmox v7.1-8 problems

lps90 (Member, joined May 21, 2020):
Hi

Here are some of the problems I'm facing after the last update.
(v7.1-7 was working fine!)

"Dec 14 17:38:47 Server audit[1645]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-100_</var/lib/lxc>" pid=1645 comm="apparmor_parser"
Dec 14 17:38:47 Server kernel: audit: type=1400 audit(1639503527.196:20): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-100_</var/lib/lxc>" pid=1645 comm="apparmor_parser" "

"Dec 14 17:33:39 Server audit[3279]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-252_</var/lib/lxc>" pid=3279 comm="apparmor_parser"
Dec 14 17:33:39 Server kernel: audit: type=1400 audit(1639503219.479:23): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-252_</var/lib/lxc>" pid=3279 comm="apparmor_parser" "

"Dec 14 17:26:41 Server audit[7137]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-402_</var/lib/lxc>" pid=7137 comm="apparmor_parser"
Dec 14 17:26:41 Server kernel: audit: type=1400 audit(1639502801.707:26): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-402_</var/lib/lxc>" pid=7137 comm="apparmor_parser" "

"Dec 14 17:32:07 Server kernel: EXT4-fs warning (device dm-5): ext4_multi_mount_protect:326: MMP interval 42 higher than expected, please wait."

"Dec 14 17:26:42 Server pmxcfs[1264]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/Server/local: -1
Dec 14 17:26:42 Server pmxcfs[1264]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/Server/ServerNVME: -1
Dec 14 17:26:42 Server pmxcfs[1264]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/Server/local-lvm: -1
Dec 14 17:26:42 Server pmxcfs[1264]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/Server/ServerSSD: -1"

"Dec 14 17:26:43 Server systemd-udevd[7147]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 14 17:26:43 Server systemd-udevd[7147]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 14 17:26:43 Server systemd-udevd[7163]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable."

Startup finished in 15.317s (firmware) + 6.754s (loader) + 3.394s (kernel) + 5min 34.851s (userspace) = 6min 317ms.

Six minutes to start everything is far too long.
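To narrow down which part of userspace is eating those five and a half minutes, systemd's own tooling can break the boot down. A quick sketch; these are standard systemd commands, not Proxmox-specific:

```shell
# Overall split by firmware / loader / kernel / userspace
# (the same summary line quoted above):
systemd-analyze time

# Units sorted by how long each one took to start:
systemd-analyze blame | head -n 20

# Only the chain of units the default target actually waited on:
systemd-analyze critical-chain
```

Whichever unit dominates `critical-chain` is usually the one worth investigating first.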

I won't update Proxmox again until I've seen feedback from other users.
Is there any way to roll back this update?
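There is no single "undo" for an apt upgrade, but two things usually get close: booting the previous kernel from GRUB's "Advanced options" menu, and downgrading individual packages to versions apt still knows about. A rough sketch; the package name and version below are illustrative, not exact:

```shell
# List the versions apt knows about for a given package:
apt-cache policy pve-manager

# Downgrade one package to a specific version (illustrative version string):
apt install pve-manager=7.1-7

# Old kernel packages normally stay installed side by side; check what is
# available, then pick the previous one from GRUB's boot menu:
ls /boot/vmlinuz-*
```

Note that downgrading across a Proxmox point release is not officially supported, so a backup before experimenting is advisable.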
 

lps90:
This time the Proxmox machine crashed completely.
Here are more logs; maybe they help:

"Dec 14 23:45:33 Server kernel: unchecked MSR access error: WRMSR to 0x33 (tried to write 0x0000000020000000) at rIP: 0xffffffff8527faf4 (native_write_msr+0x4/0x30)
Dec 14 23:45:33 Server kernel: Call Trace:
Dec 14 23:45:33 Server kernel: ? switch_to_sld+0x33/0x40
Dec 14 23:45:33 Server kernel: __switch_to_xtra+0x12c/0x510
Dec 14 23:45:33 Server kernel: __switch_to+0x261/0x460
Dec 14 23:45:33 Server kernel: ? __switch_to_asm+0x36/0x70
Dec 14 23:45:33 Server kernel: __schedule+0x2fa/0x910
Dec 14 23:45:33 Server kernel: schedule+0x4f/0xc0
Dec 14 23:45:33 Server kernel: futex_wait_queue_me+0xbb/0x120
Dec 14 23:45:33 Server kernel: futex_wait+0x105/0x250
Dec 14 23:45:33 Server kernel: ? __hrtimer_init+0xd0/0xd0
Dec 14 23:45:33 Server kernel: do_futex+0x162/0xb80
Dec 14 23:45:33 Server kernel: ? default_wake_function+0x1a/0x30
Dec 14 23:45:33 Server kernel: ? pollwake+0x72/0x90
Dec 14 23:45:33 Server kernel: ? wake_up_q+0xa0/0xa0
Dec 14 23:45:33 Server kernel: ? __wake_up_common+0x7e/0x140
Dec 14 23:45:33 Server kernel: ? _copy_from_user+0x2e/0x60
Dec 14 23:45:33 Server kernel: __x64_sys_futex+0x81/0x1d0
Dec 14 23:45:33 Server kernel: do_syscall_64+0x61/0xb0
Dec 14 23:45:33 Server kernel: ? exit_to_user_mode_prepare+0x37/0x1b0
Dec 14 23:45:33 Server kernel: ? syscall_exit_to_user_mode+0x27/0x50
Dec 14 23:45:33 Server kernel: ? __x64_sys_write+0x1a/0x20
Dec 14 23:45:33 Server kernel: ? do_syscall_64+0x6e/0xb0
Dec 14 23:45:33 Server kernel: ? exit_to_user_mode_prepare+0x8f/0x1b0
Dec 14 23:45:33 Server kernel: ? syscall_exit_to_user_mode+0x27/0x50
Dec 14 23:45:33 Server kernel: ? do_syscall_64+0x6e/0xb0
Dec 14 23:45:33 Server kernel: ? asm_sysvec_call_function_single+0xa/0x20
Dec 14 23:45:33 Server kernel: entry_SYSCALL_64_after_hwframe+0x44/0xae
Dec 14 23:45:33 Server kernel: RIP: 0033:0x7f89ada97388
Dec 14 23:45:33 Server kernel: Code: 9e 09 00 00 44 89 e6 b9 ca 00 00 00 45 31 c0 41 89 c5 81 f6 89 01 00 00 49 89 da 31 d2 41 b9 ff ff ff ff 48 89 ef 89 c8 0f 05 <48> 89 c3 44 89 ef 48 3d 00 f0 ff ff 77 1a e8 c5 09 00 00 31 c0 48
Dec 14 23:45:33 Server kernel: RSP: 002b:00007f81f7df82c0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
Dec 14 23:45:33 Server kernel: RAX: ffffffffffffffda RBX: 00007f81f7df8350 RCX: 00007f89ada97388
Dec 14 23:45:33 Server kernel: RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055dc3f34b3a8
Dec 14 23:45:33 Server kernel: RBP: 000055dc3f34b3a8 R08: 0000000000000000 R09: 00000000ffffffff
Dec 14 23:45:33 Server kernel: R10: 00007f81f7df8350 R11: 0000000000000246 R12: 0000000000000000
Dec 14 23:45:33 Server kernel: R13: 0000000000000000 R14: fffffffeffffffff R15: 00007f8198000b90
Dec 14 23:45:33 Server kernel: traps: kvm[356772] general protection fault ip:7ffd2fafc6b5 sp:7f81f7df8328 error:0
Dec 14 23:45:33 Server kernel: traps: srcds_linux[16693] general protection fault ip:f7f96599 sp:ff93e390 error:0
Dec 14 23:45:33 Server kernel: traps: CIPCServer::Thr[30109] general protection fault ip:f7edf599 sp:eddc5f30 error:0
Dec 14 23:45:33 Server kernel: traps: CIPCServer::Thr[23580] general protection fault ip:f7eeb599 sp:ed12df30 error:0
Dec 14 23:45:33 Server kernel: traps: srcds_run[367866] general protection fault ip:7ff7a7b417e5 sp:7ffe20009220 error:0 in libc-2.28.so[7ff7a7a9d000+148000]
Dec 14 23:45:33 Server kernel: traps: srcds_run[367867] general protection fault ip:7f13e33067e5 sp:7ffe4e74a6e0 error:0 in libc-2.28.so[7f13e3262000+148000]
Dec 14 23:45:33 Server kernel: traps: CNet Encrypt:0[23897] general protection fault ip:f7fc5599 sp:e71fefc0 error:0
Dec 14 23:45:33 Server kernel: traps: CIPCServer::Thr[34089] general protection fault ip:f7fc6599 sp:e77f1f30 error:0
Dec 14 23:45:34 Server kernel: traps: CHTTPCacheFileT[34656] general protection fault ip:f7eec599 sp:e28fefc0 error:0
Dec 14 23:45:34 Server kernel: traps: CIPCServer::Thr[28548] general protection fault ip:f7f42599 sp:eae51f30 error:0
Dec 14 23:45:34 Server systemd-udevd[367900]: dm-1: Process '/sbin/dmsetup splitname --nameprefixes --noheadings --rows ServerNVME-vm--1002--disk--0' terminated by signal SEGV.
Dec 14 23:45:34 Server systemd-udevd[367900]: dm-1: Failed to wait for spawned command '/sbin/dmsetup splitname --nameprefixes --noheadings --rows ServerNVME-vm--1002--disk--0': Input/output error
Dec 14 23:45:34 Server systemd-udevd[367900]: dm-1: /usr/lib/udev/rules.d/56-lvm.rules:21 Failed to execute '/sbin/dmsetup splitname --nameprefixes --noheadings --rows ServerNVME-vm--1002--disk--0', ignoring: Input/output error
Dec 14 23:45:36 Server kernel: fwbr1002i0: port 2(tap1002i0) entered disabled state
Dec 14 23:45:37 Server kernel: fwbr1002i0: port 2(tap1002i0) entered disabled state
Dec 14 23:45:37 Server systemd[1]: 1002.scope: Succeeded.
Dec 14 23:45:37 Server systemd[1]: 1002.scope: Consumed 8h 4min 12.867s CPU time.
Dec 14 23:45:37 Server qmeventd[367913]: Starting cleanup for 1002
Dec 14 23:45:37 Server pvedaemon[1372]: worker 1374 finished
Dec 14 23:45:37 Server pvedaemon[1372]: starting 1 worker(s)
Dec 14 23:45:37 Server pvedaemon[1372]: worker 367915 started
Dec 14 23:45:37 Server kernel: fwbr1002i0: port 1(fwln1002i0) entered disabled state
Dec 14 23:45:37 Server kernel: vmbr0: port 9(fwpr1002p0) entered disabled state
Dec 14 23:45:37 Server kernel: device fwln1002i0 left promiscuous mode
Dec 14 23:45:37 Server kernel: fwbr1002i0: port 1(fwln1002i0) entered disabled state
Dec 14 23:45:37 Server kernel: device fwpr1002p0 left promiscuous mode
Dec 14 23:45:37 Server kernel: vmbr0: port 9(fwpr1002p0) entered disabled state
Dec 14 23:45:37 Server qmeventd[367913]: Finished cleanup for 1002
Dec 14 23:45:38 Server spiceproxy[1392]: worker 1393 finished
Dec 14 23:45:38 Server spiceproxy[1392]: starting 1 worker(s)
Dec 14 23:45:38 Server spiceproxy[1392]: worker 367922 started
Dec 14 23:45:42 Server kernel: show_signal: 13 callbacks suppressed
Dec 14 23:45:42 Server kernel: traps: pvestatd[1350] general protection fault ip:7ffcf71d26b5 sp:7ffcf7188e08 error:0
Dec 14 23:45:42 Server systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
Dec 14 23:45:42 Server systemd[1]: pvestatd.service: Failed with result 'signal'.
Dec 14 23:45:42 Server systemd[1]: pvestatd.service: Consumed 4min 55.025s CPU time.
Dec 14 23:45:44 Server kernel: traps: srcds_linux[367945] general protection fault ip:f7f57599 sp:ffa25cd0 error:0
Dec 14 23:45:44 Server kernel: traps: srcds_run[367950] general protection fault ip:7ff7a7b417e5 sp:7ffe20009220 error:0 in libc-2.28.so[7ff7a7a9d000+148000]
Dec 14 23:45:44 Server systemd[1]: lxcfs.service: Main process exited, code=killed, status=11/SEGV
Dec 14 23:45:44 Server kernel: traps: lxcfs[1732] general protection fault ip:7ffe405c66b5 sp:7efdd37fdad0 error:0
Dec 14 23:45:44 Server systemd[1]: lxcfs.service: Failed with result 'signal'.
Dec 14 23:45:44 Server kernel: traps: fusermount[367956] general protection fault ip:7f75fee79df4 sp:7ffe272a83f0 error:0 in ld-2.31.so[7f75fee79000+20000]
Dec 14 23:45:45 Server systemd[1]: lxcfs.service: Scheduled restart job, restart counter is at 1.
Dec 14 23:45:45 Server systemd[1]: Stopped FUSE filesystem for LXC.
Dec 14 23:45:45 Server systemd[1]: Started FUSE filesystem for LXC.
Dec 14 23:45:45 Server lxcfs[367958]: Running constructor lxcfs_init to reload liblxcfs
Dec 14 23:45:45 Server lxcfs[367958]: mount namespace: 5
Dec 14 23:45:45 Server lxcfs[367958]: hierarchies:
Dec 14 23:45:45 Server lxcfs[367958]: 0: fd: 6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Dec 14 23:45:45 Server lxcfs[367958]: Kernel supports pidfds
Dec 14 23:45:45 Server lxcfs[367958]: Kernel supports swap accounting
Dec 14 23:45:45 Server lxcfs[367958]: api_extensions:
Dec 14 23:45:45 Server lxcfs[367958]: - cgroups
Dec 14 23:45:45 Server lxcfs[367958]: - sys_cpu_online
Dec 14 23:45:45 Server lxcfs[367958]: - proc_cpuinfo
Dec 14 23:45:45 Server lxcfs[367958]: - proc_diskstats
Dec 14 23:45:45 Server lxcfs[367958]: - proc_loadavg
Dec 14 23:45:45 Server lxcfs[367958]: - proc_meminfo
Dec 14 23:45:45 Server lxcfs[367958]: - proc_stat
Dec 14 23:45:45 Server lxcfs[367958]: - proc_swaps
Dec 14 23:45:45 Server lxcfs[367958]: - proc_uptime
Dec 14 23:45:45 Server lxcfs[367958]: - shared_pidns
Dec 14 23:45:45 Server lxcfs[367958]: - cpuview_daemon
Dec 14 23:45:45 Server lxcfs[367958]: - loadavg_daemon
Dec 14 23:45:45 Server lxcfs[367958]: - pidfds
Dec 14 23:45:45 Server lxcfs[367958]: fuse: bad mount point `/var/lib/lxcfs': Transport endpoint is not connected
Dec 14 23:45:45 Server lxcfs[367958]: Running destructor lxcfs_exit
Dec 14 23:45:45 Server systemd[1]: lxcfs.service: Main process exited, code=exited, status=1/FAILURE
Dec 14 23:45:45 Server systemd[1]: var-lib-lxcfs.mount: Succeeded.
Dec 14 23:45:45 Server systemd[1]: lxcfs.service: Failed with result 'exit-code'.
Dec 14 23:45:45 Server kernel: traps: srcds_linux[367944] general protection fault ip:f7f70599 sp:fff9f0d0 error:0
Dec 14 23:45:45 Server systemd[1]: lxcfs.service: Scheduled restart job, restart counter is at 2.
Dec 14 23:45:45 Server systemd[1]: Stopped FUSE filesystem for LXC.
Dec 14 23:45:45 Server systemd[1]: Started FUSE filesystem for LXC.
Dec 14 23:45:45 Server lxcfs[367964]: Running constructor lxcfs_init to reload liblxcfs
Dec 14 23:45:45 Server lxcfs[367964]: mount namespace: 5
Dec 14 23:45:45 Server lxcfs[367964]: hierarchies:
Dec 14 23:45:45 Server lxcfs[367964]: 0: fd: 6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Dec 14 23:45:45 Server lxcfs[367964]: Kernel supports pidfds
Dec 14 23:45:45 Server lxcfs[367964]: Kernel supports swap accounting
Dec 14 23:45:45 Server lxcfs[367964]: api_extensions:
Dec 14 23:45:45 Server lxcfs[367964]: - cgroups
Dec 14 23:45:45 Server lxcfs[367964]: - sys_cpu_online
Dec 14 23:45:45 Server lxcfs[367964]: - proc_cpuinfo
Dec 14 23:45:45 Server lxcfs[367964]: - proc_diskstats
Dec 14 23:45:45 Server lxcfs[367964]: - proc_loadavg
Dec 14 23:45:45 Server lxcfs[367964]: - proc_meminfo
Dec 14 23:45:45 Server lxcfs[367964]: - proc_stat
Dec 14 23:45:45 Server lxcfs[367964]: - proc_swaps
Dec 14 23:45:45 Server lxcfs[367964]: - proc_uptime
Dec 14 23:45:45 Server lxcfs[367964]: - shared_pidns
Dec 14 23:45:45 Server lxcfs[367964]: - cpuview_daemon
Dec 14 23:45:45 Server lxcfs[367964]: - loadavg_daemon
Dec 14 23:45:45 Server lxcfs[367964]: - pidfds
Dec 14 23:45:45 Server kernel: traps: srcds_linux[367949] general protection fault ip:f7ef0599 sp:ffb53d30 error:0
Dec 14 23:45:45 Server kernel: traps: srcds_linux[367952] general protection fault ip:f7fba599 sp:ffae5d00 error:0
Dec 14 23:45:45 Server kernel: traps: srcds_linux[367954] general protection fault ip:f7efd599 sp:ffd63fb0 error:0
Dec 14 23:45:47 Server kernel: srcds_linux[367947]: segfault at 0 ip 00000000f7c6987b sp 00000000ffe02230 error 6 in libtier0.so[f7c54000+3e000]
Dec 14 23:45:47 Server kernel: Code: 00 00 5e 5f 5d c3 89 f6 8d bc 27 00 00 00 00 55 89 e5 53 83 ec 14 8b 5d 08 a1 1c fe e7 f7 89 04 24 e8 a9 2e 0a 00 85 db 74 0a <c7> 05 00 00 00 00 01 00 00 00 89 1c 24 e8 b8 c3 0f 00 8d 76 00 55
Dec 14 23:45:49 Server kernel: traps: CIPCServer::Thr[367992] general protection fault ip:f7f7f599 sp:ed332fd0 error:0
Dec 14 23:45:49 Server kernel: traps: srcds_run[367999] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3ea40 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:49 Server kernel: traps: srcds_run[368000] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e9a0 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:49 Server kernel: traps: srcds_linux[367953] general protection fault ip:f7f9a599 sp:ffab9b30 error:0
Dec 14 23:45:49 Server kernel: traps: srcds_run[368004] general protection fault ip:7f69e153f7e5 sp:7ffe59db0be0 error:0 in libc-2.28.so[7f69e149b000+148000]
Dec 14 23:45:49 Server kernel: traps: srcds_run[368005] general protection fault ip:7f69e153f7e5 sp:7ffe59db0b40 error:0 in libc-2.28.so[7f69e149b000+148000]
Dec 14 23:45:49 Server kernel: traps: srcds_run[368006] general protection fault ip:7f69e153f7e5 sp:7ffe59db0a20 error:0 in libc-2.28.so[7f69e149b000+148000]
Dec 14 23:45:49 Server kernel: traps: srcds_run[368007] general protection fault ip:7f69e153f7e5 sp:7ffe59db0be0 error:0 in libc-2.28.so[7f69e149b000+148000]
Dec 14 23:45:49 Server kernel: traps: srcds_run[368008] general protection fault ip:7f69e153f7e5 sp:7ffe59db0b40 error:0 in libc-2.28.so[7f69e149b000+148000]
Dec 14 23:45:49 Server kernel: traps: srcds_run[368010] general protection fault ip:7f69e153f7e5 sp:7ffe59db0be0 error:0 in libc-2.28.so[7f69e149b000+148000] "
 

lps90:
" Dec 14 23:45:58 Server kernel: show_signal: 3 callbacks suppressed
Dec 14 23:45:58 Server kernel: traps: srcds_linux[368052] general protection fault ip:f7df5efb sp:ecf2f370 error:0 in libpthread-2.28.so[f7df4000+10000]
Dec 14 23:45:59 Server kernel: traps: CNet Encrypt:0[368066] general protection fault ip:f7f56599 sp:e65c8fc0 error:0
Dec 14 23:45:59 Server kernel: traps: srcds_linux[368070] general protection fault ip:f7dfbefb sp:ee77b370 error:0 in libpthread-2.28.so[f7dfa000+10000]
Dec 14 23:45:59 Server kernel: traps: srcds_run[368073] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e9a0 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:59 Server kernel: traps: srcds_run[368075] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3ea40 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:59 Server kernel: traps: srcds_run[368076] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e9a0 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:59 Server kernel: traps: srcds_run[368077] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e880 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:59 Server kernel: traps: srcds_run[368078] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3ea40 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:59 Server kernel: traps: srcds_run[368079] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e9a0 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:45:59 Server kernel: traps: srcds_run[368080] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e880 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:46:11 Server kernel: show_signal: 9 callbacks suppressed
Dec 14 23:46:11 Server kernel: traps: srcds_linux[368123] general protection fault ip:f7ddfefb sp:ef4bd370 error:0 in libpthread-2.28.so[f7dde000+10000]
Dec 14 23:46:12 Server kernel: traps: srcds_run[368126] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e9a0 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:46:12 Server kernel: traps: srcds_run[368127] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e880 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:46:13 Server kernel: traps: srcds_linux[368164] general protection fault ip:f7e51efb sp:ebebd370 error:0 in libpthread-2.28.so[f7e50000+10000]
Dec 14 23:46:22 Server pve-firewall[1348]: status update error: command 'iptables-save' failed: got signal 11
Dec 14 23:46:22 Server kernel: traps: iptables-save[368169] general protection fault ip:7f5715e30df4 sp:7ffc4eb91710 error:0 in ld-2.31.so[7f5715e30000+20000]
Dec 14 23:46:22 Server kernel: traps: srcds_linux[368171] general protection fault ip:f7dfcefb sp:f17f2370 error:0 in libpthread-2.28.so[f7dfb000+10000]
Dec 14 23:46:23 Server kernel: traps: awk[368174] general protection fault ip:7feb351d3df4 sp:7fff49f412b0 error:0 in ld-2.31.so[7feb351d3000+20000]
Dec 14 23:46:23 Server ksmtuned[1009]: /usr/sbin/ksmtuned: line 101: [: too many arguments
Dec 14 23:46:23 Server ksmtuned[1009]: /usr/sbin/ksmtuned: line 107: [: -lt: unary operator expected
Dec 14 23:46:23 Server ksmtuned[1009]: /usr/sbin/ksmtuned: line 123: 368179 Segmentation fault sleep $KSM_MONITOR_INTERVAL
Dec 14 23:46:23 Server kernel: traps: sleep[368179] general protection fault ip:7f606faeddf4 sp:7ffc21476f80 error:0 in ld-2.31.so[7f606faed000+20000]
Dec 14 23:46:23 Server ksmtuned[1009]: /usr/sbin/ksmtuned: line 123: 368184 Segmentation fault sleep $KSM_MONITOR_INTERVAL
Dec 14 23:46:23 Server kernel: traps: sleep[368184] general protection fault ip:7f257da0edf4 sp:7ffeb60c0450 error:0 in ld-2.31.so[7f257da0e000+20000]
Dec 14 23:46:23 Server kernel: traps: awk[368185] general protection fault ip:7fc36baafdf4 sp:7ffe5bce0e20 error:0 in ld-2.31.so[7fc36baaf000+20000]
Dec 14 23:46:23 Server kernel: traps: awk[368188] general protection fault ip:7f3d9c243df4 sp:7ffddef12e40 error:0 in ld-2.31.so[7f3d9c243000+20000]
Dec 14 23:46:23 Server ksmtuned[1009]: /usr/sbin/ksmtuned: line 101: [: too many arguments
Dec 14 23:46:23 Server ksmtuned[1009]: /usr/sbin/ksmtuned: line 107: [: -lt: unary operator expected
Dec 14 23:46:24 Server kernel: traps: srcds_linux[368193] general protection fault ip:f7e89efb sp:f1861370 error:0 in libpthread-2.28.so[f7e88000+10000]
Dec 14 23:46:27 Server kernel: traps: srcds_linux[368208] general protection fault ip:f7e0cefb sp:ee78a370 error:0 in libpthread-2.28.so[f7e0b000+10000]
Dec 14 23:46:32 Server pve-firewall[1348]: status update error: command 'iptables-save' failed: got signal 11
Dec 14 23:46:32 Server kernel: traps: iptables-save[368211] general protection fault ip:7fa11bf45df4 sp:7ffc1c3931d0 error:0 in ld-2.31.so[7fa11bf45000+20000]
Dec 14 23:46:39 Server kernel: traps: CIPCServer::Thr[368226] general protection fault ip:f7ef8599 sp:ebf17f30 error:0
Dec 14 23:46:52 Server kernel: traps: srcds_linux[368290] general protection fault ip:f7e61efb sp:ee98c370 error:0 in libpthread-2.28.so[f7e60000+10000]
Dec 14 23:46:52 Server kernel: traps: sleep[368292] general protection fault ip:7fecf755eea4 sp:7ffe57364a50 error:0 in ld-2.28.so[7fecf755e000+1e000]
Dec 14 23:46:53 Server kernel: traps: CHTTPClientThre[368247] general protection fault ip:f7fa4599 sp:ec6fcfc0 error:0
Dec 14 23:46:55 Server kernel: traps: srcds_linux[368299] general protection fault ip:f7d9fefb sp:ee68a370 error:0 in libpthread-2.28.so[f7d9e000+10000]
Dec 14 23:47:02 Server pve-firewall[1348]: status update error: command 'iptables-save' failed: got signal 11
Dec 14 23:47:02 Server kernel: traps: iptables-save[368304] general protection fault ip:7ff0425addf4 sp:7ffcc7f19c90 error:0 in ld-2.31.so[7ff0425ad000+20000]
Dec 14 23:47:06 Server kernel: traps: srcds_linux[368315] general protection fault ip:f7e6cefb sp:ee909370 error:0 in libpthread-2.28.so[f7e6b000+10000]
Dec 14 23:47:06 Server pvedaemon[1373]: <root@pam> successful auth for user 'root@pam'
Dec 14 23:47:08 Server kernel: srcds_linux[368312]: segfault at 0 ip 00000000f7bc387b sp 00000000ffb36a80 error 6 in libtier0.so[f7bae000+3e000]
Dec 14 23:47:08 Server kernel: Code: 00 00 5e 5f 5d c3 89 f6 8d bc 27 00 00 00 00 55 89 e5 53 83 ec 14 8b 5d 08 a1 1c 9e dd f7 89 04 24 e8 a9 2e 0a 00 85 db 74 0a <c7> 05 00 00 00 00 01 00 00 00 89 1c 24 e8 b8 c3 0f 00 8d 76 00 55
Dec 14 23:47:08 Server kernel: traps: sh[368320] general protection fault ip:7f2b84b05df4 sp:7ffca9d33cb0 error:0 in ld-2.31.so[7f2b84b05000+20000]
Dec 14 23:47:18 Server kernel: traps: srcds_linux[368342] general protection fault ip:f7d9eefb sp:f1761370 error:0 in libpthread-2.28.so[f7d9d000+10000]
Dec 14 23:47:19 Server kernel: traps: srcds_linux[368345] general protection fault ip:f7e38efb sp:f7632370 error:0 in libpthread-2.28.so[f7e37000+10000]
Dec 14 23:47:22 Server pve-firewall[1348]: status update error: command 'iptables-save' failed: got signal 11
Dec 14 23:47:22 Server kernel: traps: iptables-save[368348] general protection fault ip:7ff29f409df4 sp:7fff61b4ea30 error:0 in ld-2.31.so[7ff29f409000+20000]
Dec 14 23:47:23 Server kernel: traps: ps[368351] general protection fault ip:7f6c75291df4 sp:7ffd4c8080e0 error:0 in ld-2.31.so[7f6c75291000+20000]
Dec 14 23:47:24 Server pvedaemon[368354]: starting vnc proxy UPID:Server:00059EE2:0020CA7E:61B92D0C:vncproxy:1002:root@pam:
Dec 14 23:47:24 Server pvedaemon[1373]: <root@pam> starting task UPID:Server:00059EE2:0020CA7E:61B92D0C:vncproxy:1002:root@pam:
Dec 14 23:47:28 Server qm[368358]: VM 1002 qmp command failed - VM 1002 not running
Dec 14 23:47:28 Server pvedaemon[368354]: Failed to run vncproxy.
Dec 14 23:47:28 Server pvedaemon[1373]: <root@pam> end task UPID:Server:00059EE2:0020CA7E:61B92D0C:vncproxy:1002:root@pam: Failed to run vncproxy.
Dec 14 23:47:31 Server kernel: traps: srcds_linux[368362] general protection fault ip:f7e4befb sp:f180c370 error:0 in libpthread-2.28.so[f7e4a000+10000]
Dec 14 23:47:31 Server kernel: traps: srcds_linux[368365] general protection fault ip:f7e22efb sp:f761c370 error:0 in libpthread-2.28.so[f7e21000+10000]
Dec 14 23:47:32 Server pve-firewall[1348]: status update error: command 'iptables-save' failed: got signal 11
Dec 14 23:47:32 Server kernel: traps: iptables-save[368368] general protection fault ip:7fa416d85df4 sp:7ffe7fb24480 error:0 in ld-2.31.so[7fa416d85000+20000]
Dec 14 23:47:36 Server pvedaemon[367915]: <root@pam> starting task UPID:Server:00059EF1:0020CF00:61B92D18:vncproxy:1002:root@pam:
Dec 14 23:47:36 Server pvedaemon[368369]: starting vnc proxy UPID:Server:00059EF1:0020CF00:61B92D18:vncproxy:1002:root@pam:
Dec 14 23:47:37 Server qm[368371]: VM 1002 qmp command failed - VM 1002 not running
Dec 14 23:47:37 Server pvedaemon[368369]: Failed to run vncproxy.
Dec 14 23:47:37 Server pvedaemon[367915]: <root@pam> end task UPID:Server:00059EF1:0020CF00:61B92D18:vncproxy:1002:root@pam: Failed to run vncproxy.
Dec 14 23:47:42 Server kernel: traps: srcds_linux[368373] general protection fault ip:f7fa3599 sp:ff9e71d0 error:0
Dec 14 23:47:43 Server pvedaemon[367915]: VM 1002 qmp command failed - VM 1002 not running
Dec 14 23:47:43 Server pvedaemon[367915]: VM 1002 not running
Dec 14 23:47:43 Server kernel: traps: srcds_linux[368393] general protection fault ip:f7e96efb sp:f7690370 error:0 in libpthread-2.28.so[f7e95000+10000] "
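As an aside, the `ksmtuned` messages in the log above (`[: too many arguments`, `[: -lt: unary operator expected`) are almost certainly fallout from the segfaults rather than a separate bug: the `awk`/`sleep` helpers the script relies on are crashing, so the variables it feeds to `[` come back empty. The failure mode is easy to reproduce in plain bash (a sketch, not ksmtuned's actual code):

```shell
#!/bin/bash
# When a helper pipeline dies, its output is empty, and an unquoted empty
# variable makes `[` see too few arguments -- the same errors as in the log.

free_mem=$(false)            # stand-in for a segfaulted awk/ps pipeline

# Unguarded test: with $free_mem empty this expands to `[ -lt 1000 ]`,
# and bash prints "[: -lt: unary operator expected" (suppressed here).
if [ $free_mem -lt 1000 ] 2>/dev/null; then
    echo "low memory"
fi

# Defensive variant: quote the variable and supply a default.
if [ "${free_mem:-0}" -lt 1000 ]; then
    echo "guarded branch taken"    # prints "guarded branch taken"
fi
```

So once whatever is corrupting processes on this host is fixed, those ksmtuned errors should disappear on their own.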
 

lps90:
" Dec 14 23:47:47 Server pvedaemon[1373]: VM 1002 qmp command failed - VM 1002 not running
Dec 14 23:47:47 Server pvedaemon[1373]: VM 1002 not running
Dec 14 23:47:52 Server pvedaemon[1375]: <root@pam> starting task UPID:Server:00059F0D:0020D548:61B92D28:qmstop:1002:root@pam:
Dec 14 23:47:52 Server pvedaemon[368397]: stop VM 1002: UPID:Server:00059F0D:0020D548:61B92D28:qmstop:1002:root@pam:
Dec 14 23:47:52 Server pvedaemon[1375]: <root@pam> end task UPID:Server:00059F0D:0020D548:61B92D28:qmstop:1002:root@pam: OK
Dec 14 23:47:53 Server kernel: traps: srcds_linux[368418] general protection fault ip:f7e42efb sp:f180c370 error:0 in libpthread-2.28.so[f7e41000+10000]
Dec 14 23:47:59 Server kernel: traps: srcds_linux[368417] general protection fault ip:f7f0f599 sp:ffba4490 error:0
Dec 14 23:48:06 Server kernel: traps: srcds_linux[368466] general protection fault ip:f7e2aefb sp:ee98c370 error:0 in libpthread-2.28.so[f7e29000+10000]
Dec 14 23:48:09 Server kernel: traps: srcds_run[368469] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e9a0 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:48:11 Server pvedaemon[368472]: stop VM 1002: UPID:Server:00059F58:0020DCB1:61B92D3B:qmstop:1002:root@pam:
Dec 14 23:48:11 Server pvedaemon[1373]: <root@pam> starting task UPID:Server:00059F58:0020DCB1:61B92D3B:qmstop:1002:root@pam:
Dec 14 23:48:11 Server pvedaemon[1373]: <root@pam> end task UPID:Server:00059F58:0020DCB1:61B92D3B:qmstop:1002:root@pam: OK
Dec 14 23:48:13 Server kernel: traps: mini-journalrea[368491] general protection fault ip:7f54f8e33df4 sp:7ffc84472860 error:0 in ld-2.31.so[7f54f8e33000+20000]
Dec 14 23:48:19 Server kernel: traps: srcds_run[368501] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e9a0 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:48:30 Server kernel: traps: srcds_linux[368544] general protection fault ip:f7e78efb sp:f7672370 error:0 in libpthread-2.28.so[f7e77000+10000]
Dec 14 23:48:32 Server kernel: traps: iptables-save[368547] general protection fault ip:7f25efdb7df4 sp:7fff885d9370 error:0 in ld-2.31.so[7f25efdb7000+20000]
Dec 14 23:48:32 Server pve-firewall[1348]: status update error: command 'iptables-save' failed: got signal 11
Dec 14 23:48:35 Server pvedaemon[368548]: starting vnc proxy UPID:Server:00059FA4:0020E643:61B92D53:vncproxy:1002:root@pam:
Dec 14 23:48:35 Server pvedaemon[1375]: <root@pam> starting task UPID:Server:00059FA4:0020E643:61B92D53:vncproxy:1002:root@pam:
Dec 14 23:48:37 Server qm[368550]: VM 1002 qmp command failed - VM 1002 not running
Dec 14 23:48:37 Server pvedaemon[368548]: Failed to run vncproxy.
Dec 14 23:48:37 Server pvedaemon[1375]: <root@pam> end task UPID:Server:00059FA4:0020E643:61B92D53:vncproxy:1002:root@pam: Failed to run vncproxy.
Dec 14 23:48:43 Server kernel: traps: srcds_linux[368572] general protection fault ip:f7df5efb sp:f75ef370 error:0 in libpthread-2.28.so[f7df4000+10000]
Dec 14 23:48:47 Server sshd[368575]: Accepted password for root from 192.168.10.71 port 64075 ssh2
Dec 14 23:48:47 Server sshd[368575]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Dec 14 23:48:47 Server systemd[1]: Created slice User Slice of UID 0.
Dec 14 23:48:47 Server systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 14 23:48:47 Server systemd-logind[954]: New session 320 of user root.
Dec 14 23:48:47 Server systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 14 23:48:47 Server systemd[1]: Starting User Manager for UID 0...
Dec 14 23:48:47 Server systemd[368578]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 14 23:48:47 Server systemd[368580]: /usr/lib/systemd/user-environment-generators/90gpg-agent terminated by signal SEGV.
Dec 14 23:48:47 Server kernel: traps: 90gpg-agent[368582] general protection fault ip:7f8404feddf4 sp:7ffc419a2770 error:0 in ld-2.31.so[7f8404fed000+20000]
Dec 14 23:48:47 Server systemd[368578]: Queued start job for default target Main User Target.
Dec 14 23:48:47 Server systemd[368578]: Created slice User Application Slice.
Dec 14 23:48:47 Server systemd[368578]: Reached target Paths.
Dec 14 23:48:47 Server systemd[368578]: Reached target Timers.
Dec 14 23:48:47 Server systemd[368578]: Listening on GnuPG network certificate management daemon.
Dec 14 23:48:47 Server systemd[368578]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Dec 14 23:48:47 Server systemd[368578]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Dec 14 23:48:47 Server systemd[368578]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Dec 14 23:48:47 Server systemd[368578]: Listening on GnuPG cryptographic agent and passphrase cache.
Dec 14 23:48:47 Server systemd[368578]: Reached target Sockets.
Dec 14 23:48:47 Server systemd[368578]: Reached target Basic System.
Dec 14 23:48:47 Server systemd[368578]: Reached target Main User Target.
Dec 14 23:48:47 Server systemd[368578]: Startup finished in 73ms.
Dec 14 23:48:47 Server systemd[1]: Started User Manager for UID 0.
Dec 14 23:48:47 Server systemd[1]: Started Session 320 of user root.
Dec 14 23:48:54 Server kernel: traps: srcds_linux[368610] general protection fault ip:f7f85599 sp:ffa6a190 error:0
Dec 14 23:48:54 Server kernel: traps: srcds_run[368611] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e880 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:49:00 Server kernel: traps: server[1337] general protection fault ip:7ffc42f7f6b5 sp:7f81f9fd19e0 error:0
Dec 14 23:49:00 Server systemd[1]: pve-cluster.service: Main process exited, code=killed, status=11/SEGV
Dec 14 23:49:00 Server systemd[1]: pve-cluster.service: Failed with result 'signal'.
Dec 14 23:49:00 Server kernel: traps: systemd-logind[954] general protection fault ip:7ffd397e76b5 sp:7ffd397b4bc0 error:0
Dec 14 23:49:00 Server systemd[1]: pve-cluster.service: Consumed 22.213s CPU time.
Dec 14 23:49:00 Server systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Dec 14 23:49:00 Server pveproxy[353093]: ipcc_send_rec[1] failed: Connection refused
Dec 14 23:49:00 Server pveproxy[353093]: ipcc_send_rec[2] failed: Connection refused
Dec 14 23:49:00 Server pveproxy[353093]: ipcc_send_rec[3] failed: Connection refused
Dec 14 23:49:00 Server pvescheduler[368613]: replication: Connection refused
Dec 14 23:49:00 Server pvescheduler[368614]: jobs: cfs-lock 'file-jobs_cfg' error: pve cluster filesystem not online.
Dec 14 23:49:00 Server systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 1.
Dec 14 23:49:00 Server systemd[1]: Stopped The Proxmox VE cluster filesystem.
Dec 14 23:49:00 Server systemd[1]: pve-cluster.service: Consumed 22.213s CPU time.
Dec 14 23:49:00 Server systemd[1]: Starting The Proxmox VE cluster filesystem...
Dec 14 23:49:00 Server systemd[1]: systemd-logind.service: Main process exited, code=killed, status=11/SEGV
Dec 14 23:49:00 Server systemd[1]: systemd-logind.service: Failed with result 'signal'.
Dec 14 23:49:00 Server systemd[1]: systemd-logind.service: Scheduled restart job, restart counter is at 1.
Dec 14 23:49:00 Server systemd[1]: Stopped User Login Management.
Dec 14 23:49:00 Server systemd[1]: Starting Load Kernel Module drm...
Dec 14 23:49:00 Server systemd[1]: modprobe@drm.service: Succeeded.
Dec 14 23:49:00 Server systemd[1]: Finished Load Kernel Module drm.
Dec 14 23:49:00 Server systemd[368578]: etc-pve.mount: Succeeded.
Dec 14 23:49:00 Server systemd[1]: Starting User Login Management...
Dec 14 23:49:00 Server systemd[1]: etc-pve.mount: Succeeded.
Dec 14 23:49:00 Server systemd-logind[368620]: New seat seat0.
Dec 14 23:49:00 Server systemd-logind[368620]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 14 23:49:00 Server systemd-logind[368620]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 14 23:49:00 Server systemd-logind[368620]: Watching system buttons on /dev/input/event0 (Sleep Button)
Dec 14 23:49:00 Server systemd[1]: Started User Login Management.
Dec 14 23:49:00 Server systemd-logind[368620]: New session 320 of user root.
Dec 14 23:49:01 Server systemd[1]: Started The Proxmox VE cluster filesystem.
Dec 14 23:49:01 Server systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Dec 14 23:49:02 Server kernel: traps: sh[368652] general protection fault ip:7fdbe5ab3df4 sp:7ffce2f98690 error:0 in ld-2.31.so[7fdbe5ab3000+20000]
Dec 14 23:49:05 Server kernel: traps: srcds_linux[368655] general protection fault ip:f7f44599 sp:ffe85e80 error:0
Dec 14 23:49:05 Server kernel: traps: srcds_run[368657] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3ea40 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:49:13 Server kernel: traps: CJobMgr::m_Work[368691] general protection fault ip:f7f4a599 sp:ec7fdfc0 error:0
Dec 14 23:49:13 Server kernel: traps: srcds_run[368697] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3e880 error:0 in libc-2.28.so[7f2ab1598000+148000]
Dec 14 23:49:19 Server smartd[953]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 42 to 40
Dec 14 23:49:19 Server smartd[953]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 59 to 60
Dec 14 23:49:20 Server kernel: traps: rrdcached[1275] general protection fault ip:7ffc243c66b5 sp:7fc32a811bb8 error:0
Dec 14 23:49:20 Server kernel: traps: in:imklog[1017] general protection fault ip:7ffdef7f26b5 sp:7f4c8b00d368 error:0
Dec 14 23:49:20 Server systemd[1]: rsyslog.service: Main process exited, code=killed, status=11/SEGV
Dec 14 23:49:20 Server systemd[1]: rsyslog.service: Failed with result 'signal'.
 

lps90

Member
May 21, 2020
168
7
23
So, is there any way to roll back this update?
I'm tired of seeing my VMs randomly crashing.
I want to go back to v7.1-7 so I stop facing this version's bugs / problems.
If there is no way, please tell me so I can format my dedicated server with the available v7.1-2 .iso and solve all the problems ;)
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
15,616
956
163
Please add:

> pveversion -v

and your CT config:

> pct config CTID
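If it helps, the running pve-manager version can also be pulled out of the `pveversion -v` output programmatically; a minimal sketch (the sample line mirrors the usual output format, and the version/hash shown are placeholders, not taken from this thread):

```shell
# Assumed sample line from `pveversion -v` output on a 7.1 install
line='pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)'
# Extract just the package version (first whitespace-delimited token after the package name)
ver=$(printf '%s\n' "$line" | sed -n 's/^pve-manager: \([^ ]*\).*/\1/p')
echo "$ver"   # 7.1-8
```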
 

lps90

Member
May 21, 2020
168
7
23
The PVE version is given in the thread title.
It's the latest available one.

Why the CT config?
The problem is not with CTs.
The problem is with VMs, and the dedicated server randomly reboots after some time.
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
15,616
956
163
The PVE version is given in the thread title.
No, you always need to provide the full package version list:

> pveversion -v (or copy the full list from the GUI)

Why the CT config?
Because you posted logs showing issues with your LXC containers ("lxc-100.."), I just read what you posted.

Anyway, if you expect clear answers, you should post clear questions and logs; otherwise no one will understand your issues.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
5,496
1,744
164
South Tyrol/Italy
shop.proxmox.com
I'll not update Proxmox anymore before waiting for other users' feedback.
What's your actual issue? There's no actual error in the logs you posted in the first post...

And the follow-up shows errors in third-party software, so I'd suggest asking their support channels for help.

Dec 14 23:45:45 Server kernel: traps: srcds_linux[367949] general protection fault ip:f7ef0599 sp:ffb53d30 error:0
Dec 14 23:45:45 Server kernel: traps: srcds_linux[367952] general protection fault ip:f7fba599 sp:ffae5d00 error:0
Dec 14 23:45:45 Server kernel: traps: srcds_linux[367954] general protection fault ip:f7efd599 sp:ffd63fb0 error:0
Dec 14 23:45:47 Server kernel: srcds_linux[367947]: segfault at 0 ip 00000000f7c6987b sp 00000000ffe02230 error 6 in libtier0.so[f7c54000+3e000]
Dec 14 23:45:47 Server kernel: Code: 00 00 5e 5f 5d c3 89 f6 8d bc 27 00 00 00 00 55 89 e5 53 83 ec 14 8b 5d 08 a1 1c fe e7 f7 89 04 24 e8 a9 2e 0a 00 85 db 74 0a <c7> 05 00 00 00 00 01 00 00 00 89 1c 24 e8 b8 c3 0f 00 8d 76 00 55
Dec 14 23:45:49 Server kernel: traps: CIPCServer::Thr[367992] general protection fault ip:f7f7f599 sp:ed332fd0 error:0
Dec 14 23:45:49 Server kernel: traps: srcds_run[367999] general protection fault ip:7f2ab163c7e5 sp:7ffe29f3ea40 error:0 in libc-2.28.so[7f2ab1598000+148000]
 

pawanosman

New Member
Apr 3, 2021
2
0
1
25
I have the same issue: the Proxmox server is crashing randomly, and this is all I see in the logs!!

My Proxmox version is 6.4-13.

Code:
Dec 18 13:06:37 Proxmox-VE kernel:  do_syscall_64+0x58/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? exit_to_user_mode_prepare+0x170/0x1c0
Dec 18 13:06:37 Proxmox-VE kernel:  ? syscall_exit_to_user_mode+0x1c/0x30
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? sysvec_apic_timer_interrupt+0x4b/0xa0
Dec 18 13:06:37 Proxmox-VE kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Dec 18 13:06:37 Proxmox-VE kernel: RIP: 0033:0x7f6f3e4b6413
Dec 18 13:06:37 Proxmox-VE kernel: Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 8a d2 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
Dec 18 13:06:37 Proxmox-VE kernel: RSP: 002b:00007f6ef54e49d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: RAX: ffffffffffffffda RBX: 00007f6ef54e6b30 RCX: 00007f6f3e4b6413
Dec 18 13:06:37 Proxmox-VE kernel: RDX: 0000000000000000 RSI: 00007f6ef54e4a60 RDI: 00007f6ef54e4a60
Dec 18 13:06:37 Proxmox-VE kernel: RBP: 00007f6ef8e1c790 R08: 0000000000000000 R09: 0000000000000000
Dec 18 13:06:37 Proxmox-VE kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: R13: 0000000000000002 R14: 00007f6f0a992820 R15: 00007f6ef8e43780
Dec 18 13:06:37 Proxmox-VE kernel: Call Trace:
Dec 18 13:06:37 Proxmox-VE kernel:  switch_to_sld+0x33/0x40
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to_xtra+0x120/0x510
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to+0x35a/0x430
Dec 18 13:06:37 Proxmox-VE kernel:  ? __switch_to_asm+0x36/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  __schedule+0xbd7/0x1250
Dec 18 13:06:37 Proxmox-VE kernel:  ? tick_program_event+0x44/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_reprogram+0x9a/0xa0
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_start_range_ns+0x121/0x300
Dec 18 13:06:37 Proxmox-VE kernel:  schedule+0x3e/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  do_nanosleep+0x90/0x170
Dec 18 13:06:37 Proxmox-VE kernel:  hrtimer_nanosleep+0x94/0x130
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_init_sleeper+0x80/0x80
Dec 18 13:06:37 Proxmox-VE kernel:  __x64_sys_nanosleep+0x99/0xd0
Dec 18 13:06:37 Proxmox-VE kernel:  do_syscall_64+0x58/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? syscall_exit_to_user_mode+0x1c/0x30
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? sysvec_apic_timer_interrupt+0x4b/0xa0
Dec 18 13:06:37 Proxmox-VE kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Dec 18 13:06:37 Proxmox-VE kernel: RIP: 0033:0x7f6f3e4b6413
Dec 18 13:06:37 Proxmox-VE kernel: Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 8a d2 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
Dec 18 13:06:37 Proxmox-VE kernel: RSP: 002b:00007f6ef54e49d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: RAX: ffffffffffffffda RBX: 00007f6ef54e6b30 RCX: 00007f6f3e4b6413
Dec 18 13:06:37 Proxmox-VE kernel: RDX: 0000000000000000 RSI: 00007f6ef54e4a60 RDI: 00007f6ef54e4a60
Dec 18 13:06:37 Proxmox-VE kernel: RBP: 00007f6ef8e1c790 R08: 0000000000000000 R09: 0000000000000000
Dec 18 13:06:37 Proxmox-VE kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: R13: 0000000000000002 R14: 00007f6f0a992820 R15: 00007f6ef8e43780
Dec 18 13:06:37 Proxmox-VE kernel: Call Trace:
Dec 18 13:06:37 Proxmox-VE kernel:  switch_to_sld+0x33/0x40
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to_xtra+0x120/0x510
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to+0x35a/0x430
Dec 18 13:06:37 Proxmox-VE kernel:  ? __switch_to_asm+0x36/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  __schedule+0xbd7/0x1250
Dec 18 13:06:37 Proxmox-VE kernel:  ? timerqueue_add+0x62/0x90
Dec 18 13:06:37 Proxmox-VE kernel:  ? enqueue_hrtimer+0x36/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_start_range_ns+0x121/0x300
Dec 18 13:06:37 Proxmox-VE kernel:  schedule+0x3e/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  do_nanosleep+0x90/0x170
Dec 18 13:06:37 Proxmox-VE kernel:  hrtimer_nanosleep+0x94/0x130
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_init_sleeper+0x80/0x80
Dec 18 13:06:37 Proxmox-VE kernel:  __x64_sys_nanosleep+0x99/0xd0
Dec 18 13:06:37 Proxmox-VE kernel:  do_syscall_64+0x58/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? switch_fpu_return+0x56/0xc0
Dec 18 13:06:37 Proxmox-VE kernel:  ? exit_to_user_mode_prepare+0x170/0x1c0
Dec 18 13:06:37 Proxmox-VE kernel:  ? syscall_exit_to_user_mode+0x1c/0x30
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? syscall_exit_to_user_mode+0x1c/0x30
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Dec 18 13:06:37 Proxmox-VE kernel: RIP: 0033:0x7fc2958d0413
Dec 18 13:06:37 Proxmox-VE kernel: Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 8a d2 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
Dec 18 13:06:37 Proxmox-VE kernel: RSP: 002b:00007fc278169a28 EFLAGS: 00000246 ORIG_RAX: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: RAX: ffffffffffffffda RBX: 00007fc27816bb30 RCX: 00007fc2958d0413
Dec 18 13:06:37 Proxmox-VE kernel: RDX: 0000000000000000 RSI: 00007fc278169ab0 RDI: 00007fc278169ab0
Dec 18 13:06:37 Proxmox-VE kernel: RBP: 00007fc27ae18730 R08: 0000000000000000 R09: 0000000000000000
Dec 18 13:06:37 Proxmox-VE kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: R13: 0000000000000001 R14: 00007fc27b179b10 R15: 00007fc27ae3f720
Dec 18 13:06:37 Proxmox-VE kernel: Call Trace:
Dec 18 13:06:37 Proxmox-VE kernel:  switch_to_sld+0x33/0x40
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to_xtra+0x120/0x510
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to+0x35a/0x430
Dec 18 13:06:37 Proxmox-VE kernel:  ? __switch_to_asm+0x36/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  __schedule+0xbd7/0x1250
Dec 18 13:06:37 Proxmox-VE kernel:  ? __schedule+0xbdf/0x1250
Dec 18 13:06:37 Proxmox-VE kernel:  ? timerqueue_add+0x62/0x90
Dec 18 13:06:37 Proxmox-VE kernel:  ? enqueue_hrtimer+0x36/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_start_range_ns+0x121/0x300
Dec 18 13:06:37 Proxmox-VE kernel:  schedule+0x3e/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  do_nanosleep+0x90/0x170
Dec 18 13:06:37 Proxmox-VE kernel:  hrtimer_nanosleep+0x94/0x130
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_init_sleeper+0x80/0x80
Dec 18 13:06:37 Proxmox-VE kernel:  __x64_sys_nanosleep+0x99/0xd0
Dec 18 13:06:37 Proxmox-VE kernel:  do_syscall_64+0x58/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  ? do_syscall_64+0x67/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Dec 18 13:06:37 Proxmox-VE kernel: RIP: 0033:0x7fc2958d0413
Dec 18 13:06:37 Proxmox-VE kernel: Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 8a d2 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
Dec 18 13:06:37 Proxmox-VE kernel: RSP: 002b:00007fc277d64a28 EFLAGS: 00000246 ORIG_RAX: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: RAX: ffffffffffffffda RBX: 00007fc277d66b30 RCX: 00007fc2958d0413
Dec 18 13:06:37 Proxmox-VE kernel: RDX: 0000000000000000 RSI: 00007fc277d64ab0 RDI: 00007fc277d64ab0
Dec 18 13:06:37 Proxmox-VE kernel: RBP: 00007fc27adf1740 R08: 0000000000000000 R09: 0000000000000000
Dec 18 13:06:37 Proxmox-VE kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000023
Dec 18 13:06:37 Proxmox-VE kernel: R13: 0000000000000001 R14: 00007fc27b179cb0 R15: 00007fc27ae18730
Dec 18 13:06:37 Proxmox-VE kernel: Call Trace:
Dec 18 13:06:37 Proxmox-VE kernel:  switch_to_sld+0x33/0x40
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to_xtra+0x120/0x510
Dec 18 13:06:37 Proxmox-VE kernel:  __switch_to+0x35a/0x430
Dec 18 13:06:37 Proxmox-VE kernel:  ? __switch_to_asm+0x36/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  __schedule+0xbd7/0x1250
Dec 18 13:06:37 Proxmox-VE kernel:  ? timerqueue_add+0x62/0x90
Dec 18 13:06:37 Proxmox-VE kernel:  ? enqueue_hrtimer+0x36/0x70
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_start_range_ns+0x121/0x300
Dec 18 13:06:37 Proxmox-VE kernel:  schedule+0x3e/0xb0
Dec 18 13:06:37 Proxmox-VE kernel:  do_nanosleep+0x90/0x170
Dec 18 13:06:37 Proxmox-VE kernel:  hrtimer_nanosleep+0x94/0x130
Dec 18 13:06:37 Proxmox-VE kernel:  ? hrtimer_init_sleeper+0x80/0x80
Dec 18 13:06:37 Proxmox-VE kernel:  __x64_sys_nanosleep+0x99/0xd0
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
5,496
1,744
164
South Tyrol/Italy
shop.proxmox.com
Is
Code:
systemd-udevd[943485]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable
nothing to worry about?
Depends completely on the context; the full journalctl -b output of the boot, HW details, and ethtool outputs would be good to have as a starter.

FWIW, the OP has segfaults, general protection faults, and split lock detection triggered from a user-space tool (srcds_linux, IIRC Steam game server related); it may be that a newer kernel got stricter on detecting/enforcing such bugs, but it's still the fault of the userspace tools.
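To see which binaries are actually faulting, the kernel's "traps:" lines in the journal can be summarized; a minimal sketch using one of the log lines from this thread (the sed pattern is an assumption about the message format, which is "traps: NAME[PID] general protection fault ..."):

```shell
# One of the "traps:" lines from the journal posted above
line='Dec 14 23:49:05 Server kernel: traps: srcds_linux[368655] general protection fault ip:f7f44599 sp:ffe85e80 error:0'
# Pull out the name of the faulting binary: the text between "traps: " and "[PID]"
bin=$(printf '%s\n' "$line" | sed -n 's/.*traps: \([^[]*\)\[.*/\1/p')
echo "$bin"   # srcds_linux
```

Run over `journalctl -b`, the same pattern piped through `sort | uniq -c` would show whether the faults cluster in one tool or spread across many.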

Anyhow, why do you ask?
 
May 18, 2019
198
9
23
Varies
Depends completely on the context; the full journalctl -b output of the boot, HW details, and ethtool outputs would be good to have as a starter.

FWIW, the OP has segfaults, general protection faults, and split lock detection triggered from a user-space tool (srcds_linux, IIRC Steam game server related); it may be that a newer kernel got stricter on detecting/enforcing such bugs, but it's still the fault of the userspace tools.

Anyhow, why do you ask?
Because we have the said error on an (entirely up-to-date) Proxmox. There are no non-Proxmox userspace tools running outside of the VMs/CTs.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
5,496
1,744
164
South Tyrol/Italy
shop.proxmox.com
Because we have the said error on an (entirely up-to-date) Proxmox. There are no non-Proxmox userspace tools running outside of the VMs/CTs.
Is it "just" that error showing up in the log or are there more ill effects to it? Note that CT processes run effectively on the host directly, and thus can also produce such split lock detection warning messages (i.e., I don't think OP runs srcds_linux natively on the host, but rather in a CT).

As said, HW details, full journal, ethtool NIC and ethtool -i NIC output, ... would be good to have if there are actual ill effects, so that they could be better investigated.
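When sharing `ethtool -i NIC` output, the `driver` field is usually the most relevant detail; it can be extracted like so (the sample output below is an assumed example, real fields and values vary per NIC):

```shell
# Assumed sample of `ethtool -i <NIC>` output (values are placeholders)
info='driver: igb
version: 5.15.35-1-pve
firmware-version: 1.63, 0x80000dda'
# Grab just the driver name from the "driver:" line
driver=$(printf '%s\n' "$info" | awk -F': ' '/^driver:/ {print $2}')
echo "$driver"   # igb
```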
 
May 18, 2019
198
9
23
Varies
Is it "just" that error showing up in the log or are there more ill effects to it? Note that CT processes run effectively on the host directly, and thus can also produce such split lock detection warning messages (i.e., I don't think OP runs srcds_linux natively on the host, but rather in a CT).

As said, HW details, full journal, ethtool NIC and ethtool -i NIC output, ... would be good to have if there are actual ill effects, so that they could be better investigated.
 

Attachments

  • ethtool.txt
    5.7 KB · Views: 4
