VMs randomly restarting themselves

rpereyra
Hi all!

I have an issue with Proxmox: VMs are randomly restarting by themselves. I've updated to the latest version, but the issue doesn't seem to be fixed. It's the VMs that restart, not the node itself.

This is my /var/log/syslog output from when a crash occurred:



Mar 30 07:41:45 pm1 corosync[5460]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 30 07:41:47 pm1 kernel: [605780.786251] Tainted: P O 4.4.24-1-pve #1
Mar 30 07:41:47 pm1 kernel: [605780.786274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 30 07:41:47 pm1 kernel: [605780.786314] kswapd0 D ffff88094f62b5a8 0 52 2 0x00000000
Mar 30 07:41:47 pm1 kernel: [605780.786319] ffff88094f62b5a8 0000000180010001 ffff880035a9d280 ffff8809540c5280
Mar 30 07:41:47 pm1 kernel: [605780.786323] ffff88094f62c000 ffff88097fd57180 7fffffffffffffff ffff8803099637b8
Mar 30 07:41:47 pm1 kernel: [605780.786325] 0000000000000001 ffff88094f62b5c0 ffffffff81854cd5 0000000000000000
Mar 30 07:41:47 pm1 kernel: [605780.786328] Call Trace:
Mar 30 07:41:47 pm1 kernel: [605780.786336] [<ffffffff81854cd5>] schedule+0x35/0x80
Mar 30 07:41:47 pm1 kernel: [605780.786340] [<ffffffff81857f05>] schedule_timeout+0x235/0x2d0
Mar 30 07:41:47 pm1 kernel: [605780.786346] [<ffffffff810c3da4>] ? __wake_up+0x44/0x50
Mar 30 07:41:47 pm1 kernel: [605780.786411] [<ffffffffc01fec10>] ? zio_taskq_member.isra.6+0x80/0x80 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.786416] [<ffffffff810f5abc>] ? ktime_get+0x3c/0xb0
Mar 30 07:41:47 pm1 kernel: [605780.786418] [<ffffffff818541cb>] io_schedule_timeout+0xbb/0x140
Mar 30 07:41:47 pm1 kernel: [605780.786430] [<ffffffffc0094cb3>] cv_wait_common+0xb3/0x130 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.786433] [<ffffffff810c41c0>] ? wait_woken+0x90/0x90
Mar 30 07:41:47 pm1 kernel: [605780.786440] [<ffffffffc0094d88>] __cv_wait_io+0x18/0x20 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.786488] [<ffffffffc020199f>] zio_wait+0x10f/0x1f0 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.786535] [<ffffffffc01fbb14>] zil_commit.part.11+0x414/0x790 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.786582] [<ffffffffc01fbea7>] zil_commit+0x17/0x20 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.786628] [<ffffffffc020c909>] zvol_request+0x399/0x670 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.786634] [<ffffffff813c8430>] generic_make_request+0x110/0x1f0
Mar 30 07:41:47 pm1 kernel: [605780.786637] [<ffffffff813c8586>] submit_bio+0x76/0x180
Mar 30 07:41:47 pm1 kernel: [605780.786642] [<ffffffff811d334a>] __swap_writepage+0x22a/0x270
Mar 30 07:41:47 pm1 kernel: [605780.786645] [<ffffffff811e4f64>] ? __mmu_notifier_invalidate_page+0x64/0x70
Mar 30 07:41:47 pm1 kernel: [605780.786648] [<ffffffff811d894c>] ? __frontswap_store+0x8c/0x120
Mar 30 07:41:47 pm1 kernel: [605780.786651] [<ffffffff811d33c9>] swap_writepage+0x39/0x70
Mar 30 07:41:47 pm1 kernel: [605780.786655] [<ffffffff811a19eb>] pageout.isra.43+0x16b/0x280
Mar 30 07:41:47 pm1 kernel: [605780.786658] [<ffffffff811a3a69>] shrink_page_list+0x3f9/0x790
Mar 30 07:41:47 pm1 kernel: [605780.786661] [<ffffffff811a449f>] shrink_inactive_list+0x20f/0x520
Mar 30 07:41:47 pm1 kernel: [605780.786665] [<ffffffff811a5109>] shrink_lruvec+0x579/0x750
Mar 30 07:41:47 pm1 kernel: [605780.786669] [<ffffffff8102d736>] ? __switch_to+0x256/0x5c0
Mar 30 07:41:47 pm1 kernel: [605780.786673] [<ffffffff811fd96f>] ? mem_cgroup_iter+0x1cf/0x380
Mar 30 07:41:47 pm1 kernel: [605780.786676] [<ffffffff811a53cb>] shrink_zone+0xeb/0x2d0
Mar 30 07:41:47 pm1 kernel: [605780.786679] [<ffffffff811a6753>] kswapd+0x583/0xa40
Mar 30 07:41:47 pm1 kernel: [605780.786682] [<ffffffff811a61d0>] ? mem_cgroup_shrink_node_zone+0x1c0/0x1c0
Mar 30 07:41:47 pm1 kernel: [605780.786685] [<ffffffff810a107a>] kthread+0xea/0x100
Mar 30 07:41:47 pm1 kernel: [605780.786687] [<ffffffff810a0f90>] ? kthread_park+0x60/0x60
Mar 30 07:41:47 pm1 kernel: [605780.786691] [<ffffffff8185918f>] ret_from_fork+0x3f/0x70
Mar 30 07:41:47 pm1 kernel: [605780.786693] [<ffffffff810a0f90>] ? kthread_park+0x60/0x60
Mar 30 07:41:47 pm1 kernel: [605780.786751] INFO: task txg_sync:1342 blocked for more than 120 seconds.
Mar 30 07:41:47 pm1 kernel: [605780.786777] Tainted: P O 4.4.24-1-pve #1
Mar 30 07:41:47 pm1 kernel: [605780.786800] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 30 07:41:47 pm1 kernel: [605780.786838] txg_sync D ffff88094448bac8 0 1342 2 0x00000000
Mar 30 07:41:47 pm1 kernel: [605780.786841] ffff88094448bac8 ffffffff810abd2d ffff880035a9c4c0 ffff8809452de040
Mar 30 07:41:47 pm1 kernel: [605780.786844] ffff88094448c000 ffff88097fd17180 7fffffffffffffff ffff8806d6db8360
Mar 30 07:41:47 pm1 kernel: [605780.786847] 0000000000000001 ffff88094448bae0 ffffffff81854cd5 0000000000000000
Mar 30 07:41:47 pm1 kernel: [605780.786850] Call Trace:
Mar 30 07:41:47 pm1 kernel: [605780.786854] [<ffffffff810abd2d>] ? ttwu_do_activate.constprop.89+0x5d/0x70
Mar 30 07:41:47 pm1 kernel: [605780.786857] [<ffffffff81854cd5>] schedule+0x35/0x80
Mar 30 07:41:47 pm1 kernel: [605780.786859] [<ffffffff81857f05>] schedule_timeout+0x235/0x2d0
Mar 30 07:41:47 pm1 kernel: [605780.786862] [<ffffffff8102d736>] ? __switch_to+0x256/0x5c0
Mar 30 07:41:47 pm1 kernel: [605780.786866] [<ffffffff810f5abc>] ? ktime_get+0x3c/0xb0
Mar 30 07:41:47 pm1 kernel: [605780.786868] [<ffffffff818541cb>] io_schedule_timeout+0xbb/0x140
Mar 30 07:41:47 pm1 kernel: [605780.786876] [<ffffffffc0094cb3>] cv_wait_common+0xb3/0x130 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.786880] [<ffffffff810c41c0>] ? wait_woken+0x90/0x90
Mar 30 07:41:47 pm1 kernel: [605780.786886] [<ffffffffc0094d88>] __cv_wait_io+0x18/0x20 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.786933] [<ffffffffc020199f>] zio_wait+0x10f/0x1f0 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.786968] [<ffffffffc018cff8>] dsl_pool_sync+0xb8/0x450 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787008] [<ffffffffc01a5629>] spa_sync+0x369/0xb20 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787011] [<ffffffff810acbc2>] ? default_wake_function+0x12/0x20
Mar 30 07:41:47 pm1 kernel: [605780.787054] [<ffffffffc01b8974>] txg_sync_thread+0x3c4/0x610 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787058] [<ffffffff810ac769>] ? try_to_wake_up+0x49/0x400
Mar 30 07:41:47 pm1 kernel: [605780.787099] [<ffffffffc01b85b0>] ? txg_sync_stop+0xe0/0xe0 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787105] [<ffffffffc008fe9a>] thread_generic_wrapper+0x7a/0x90 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.787111] [<ffffffffc008fe20>] ? __thread_exit+0x20/0x20 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.787113] [<ffffffff810a107a>] kthread+0xea/0x100
Mar 30 07:41:47 pm1 kernel: [605780.787116] [<ffffffff810a0f90>] ? kthread_park+0x60/0x60
Mar 30 07:41:47 pm1 kernel: [605780.787119] [<ffffffff8185918f>] ret_from_fork+0x3f/0x70
Mar 30 07:41:47 pm1 kernel: [605780.787121] [<ffffffff810a0f90>] ? kthread_park+0x60/0x60
Mar 30 07:41:47 pm1 kernel: [605780.787130] INFO: task ksmtuned:2210 blocked for more than 120 seconds.
Mar 30 07:41:47 pm1 kernel: [605780.787155] Tainted: P O 4.4.24-1-pve #1
Mar 30 07:41:47 pm1 kernel: [605780.787179] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 30 07:41:47 pm1 kernel: [605780.787217] ksmtuned D ffff880035ac7460 0 2210 1 0x00000000
Mar 30 07:41:47 pm1 kernel: [605780.787220] ffff880035ac7460 ffff8807fb882a80 ffff880954f75280 ffff880945470dc0
Mar 30 07:41:47 pm1 kernel: [605780.787222] ffff880035ac8000 ffff88093d729000 ffff88093d729028 ffff88093d7291c8
Mar 30 07:41:47 pm1 kernel: [605780.787225] 0000000000000000 ffff880035ac7478 ffffffff81854cd5 ffff88093d7291c0
Mar 30 07:41:47 pm1 kernel: [605780.787228] Call Trace:
Mar 30 07:41:47 pm1 kernel: [605780.787230] [<ffffffff81854cd5>] schedule+0x35/0x80
Mar 30 07:41:47 pm1 kernel: [605780.787237] [<ffffffffc0094cf4>] cv_wait_common+0xf4/0x130 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.787240] [<ffffffff810c41c0>] ? wait_woken+0x90/0x90
Mar 30 07:41:47 pm1 kernel: [605780.787247] [<ffffffffc0094d45>] __cv_wait+0x15/0x20 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.787293] [<ffffffffc01fb776>] zil_commit.part.11+0x76/0x790 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787340] [<ffffffffc01e6269>] ? zfs_range_unlock+0x179/0x300 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787387] [<ffffffffc01fbea7>] zil_commit+0x17/0x20 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787433] [<ffffffffc020c909>] zvol_request+0x399/0x670 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.787438] [<ffffffff813c8430>] generic_make_request+0x110/0x1f0
Mar 30 07:41:47 pm1 kernel: [605780.787441] [<ffffffff813c8586>] submit_bio+0x76/0x180
Mar 30 07:41:47 pm1 kernel: [605780.787445] [<ffffffff811d334a>] __swap_writepage+0x22a/0x270
Mar 30 07:41:47 pm1 kernel: [605780.787447] [<ffffffff811e4f64>] ? __mmu_notifier_invalidate_page+0x64/0x70
Mar 30 07:41:47 pm1 kernel: [605780.787450] [<ffffffff811d894c>] ? __frontswap_store+0x8c/0x120
Mar 30 07:41:47 pm1 kernel: [605780.787453] [<ffffffff811d33c9>] swap_writepage+0x39/0x70
Mar 30 07:41:47 pm1 kernel: [605780.787456] [<ffffffff811a19eb>] pageout.isra.43+0x16b/0x280
Mar 30 07:41:47 pm1 kernel: [605780.787459] [<ffffffff811a3a69>] shrink_page_list+0x3f9/0x790
Mar 30 07:41:47 pm1 kernel: [605780.787462] [<ffffffff811a449f>] shrink_inactive_list+0x20f/0x520
Mar 30 07:41:47 pm1 kernel: [605780.787465] [<ffffffff811a5109>] shrink_lruvec+0x579/0x750
Mar 30 07:41:47 pm1 kernel: [605780.787468] [<ffffffff811fd96f>] ? mem_cgroup_iter+0x1cf/0x380
Mar 30 07:41:47 pm1 kernel: [605780.787471] [<ffffffff811a53cb>] shrink_zone+0xeb/0x2d0
Mar 30 07:41:47 pm1 kernel: [605780.787474] [<ffffffff811a5733>] do_try_to_free_pages+0x183/0x480
Mar 30 07:41:47 pm1 kernel: [605780.787476] [<ffffffff811a0ed3>] ? throttle_direct_reclaim+0xa3/0x260
Mar 30 07:41:47 pm1 kernel: [605780.787479] [<ffffffff811a5b05>] try_to_free_pages+0xd5/0x180
Mar 30 07:41:47 pm1 kernel: [605780.787484] [<ffffffff811978eb>] __alloc_pages_nodemask+0x65b/0xba0
Mar 30 07:41:47 pm1 kernel: [605780.787489] [<ffffffff813fd6f2>] ? radix_tree_lookup_slot+0x22/0x50
Mar 30 07:41:47 pm1 kernel: [605780.787491] [<ffffffff81190262>] ? filemap_fault+0xb2/0x3e0
Mar 30 07:41:47 pm1 kernel: [605780.787495] [<ffffffff81197e7b>] alloc_kmem_pages_node+0x4b/0xd0
Mar 30 07:41:47 pm1 kernel: [605780.787499] [<ffffffff8107efc3>] copy_process+0x1c3/0x1c00
Mar 30 07:41:47 pm1 kernel: [605780.787503] [<ffffffff813927f0>] ? apparmor_file_alloc_security+0x60/0x240
Mar 30 07:41:47 pm1 kernel: [605780.787506] [<ffffffff81347af3>] ? security_file_alloc+0x33/0x50
Mar 30 07:41:47 pm1 kernel: [605780.787509] [<ffffffff81080b90>] _do_fork+0x80/0x360
Mar 30 07:41:47 pm1 kernel: [605780.787514] [<ffffffff810917af>] ? sigprocmask+0x6f/0xa0
Mar 30 07:41:47 pm1 kernel: [605780.787516] [<ffffffff81080f19>] SyS_clone+0x19/0x20
Mar 30 07:41:47 pm1 kernel: [605780.787519] [<ffffffff81858df6>] entry_SYSCALL_64_fastpath+0x16/0x75
Mar 30 07:41:47 pm1 kernel: [605780.787523] INFO: task pvestatd:2374 blocked for more than 120 seconds.
Mar 30 07:41:47 pm1 kernel: [605780.787547] Tainted: P O 4.4.24-1-pve #1
Mar 30 07:41:47 pm1 kernel: [605780.788655] [<ffffffff81347af3>] ? security_file_alloc+0x33/0x50
Mar 30 07:41:47 pm1 kernel: [605780.788658] [<ffffffff81080b90>] _do_fork+0x80/0x360
Mar 30 07:41:47 pm1 kernel: [605780.788662] [<ffffffff81003885>] ? syscall_trace_enter_phase1+0xc5/0x140
Mar 30 07:41:47 pm1 kernel: [605780.788665] [<ffffffff81080f19>] SyS_clone+0x19/0x20
Mar 30 07:41:47 pm1 kernel: [605780.788668] [<ffffffff81858df6>] entry_SYSCALL_64_fastpath+0x16/0x75
Mar 30 07:41:47 pm1 kernel: [605780.788674] INFO: task kvm:32008 blocked for more than 120 seconds.
Mar 30 07:41:47 pm1 kernel: [605780.788699] Tainted: P O 4.4.24-1-pve #1
Mar 30 07:41:47 pm1 kernel: [605780.788722] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 30 07:41:47 pm1 kernel: [605780.788760] kvm D ffff880104c5baa8 0 32008 1 0x00000000
Mar 30 07:41:47 pm1 kernel: [605780.788763] ffff880104c5baa8 00000085be5ec000 ffff8808f5fad280 ffff880945a4c4c0
Mar 30 07:41:47 pm1 kernel: [605780.788766] ffff880104c5c000 ffff88097fd17180 7fffffffffffffff ffff8808300162f0
Mar 30 07:41:47 pm1 kernel: [605780.788768] 0000000000000001 ffff880104c5bac0 ffffffff81854cd5 0000000000000000
Mar 30 07:41:47 pm1 kernel: [605780.788771] Call Trace:
Mar 30 07:41:47 pm1 kernel: [605780.788773] [<ffffffff81854cd5>] schedule+0x35/0x80
Mar 30 07:41:47 pm1 kernel: [605780.788776] [<ffffffff81857f05>] schedule_timeout+0x235/0x2d0
Mar 30 07:41:47 pm1 kernel: [605780.788779] [<ffffffff810c3da4>] ? __wake_up+0x44/0x50
Mar 30 07:41:47 pm1 kernel: [605780.788824] [<ffffffffc01fec10>] ? zio_taskq_member.isra.6+0x80/0x80 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.788831] [<ffffffffc0091731>] ? taskq_dispatch_ent+0x51/0x120 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.788834] [<ffffffff818541cb>] io_schedule_timeout+0xbb/0x140
Mar 30 07:41:47 pm1 kernel: [605780.788842] [<ffffffffc0094cb3>] cv_wait_common+0xb3/0x130 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.788845] [<ffffffff810c41c0>] ? wait_woken+0x90/0x90
Mar 30 07:41:47 pm1 kernel: [605780.788852] [<ffffffffc0094d88>] __cv_wait_io+0x18/0x20 [spl]
Mar 30 07:41:47 pm1 kernel: [605780.788897] [<ffffffffc020199f>] zio_wait+0x10f/0x1f0 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.788941] [<ffffffffc01fbb14>] zil_commit.part.11+0x414/0x790 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.788984] [<ffffffffc01fbea7>] zil_commit+0x17/0x20 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.789028] [<ffffffffc020c8c9>] zvol_request+0x359/0x670 [zfs]
Mar 30 07:41:47 pm1 kernel: [605780.789032] [<ffffffff813c7bf3>] ? generic_make_request_checks+0x243/0x4f0
Mar 30 07:41:47 pm1 kernel: [605780.789036] [<ffffffff813c8430>] generic_make_request+0x110/0x1f0
Mar 30 07:41:47 pm1 kernel: [605780.789039] [<ffffffff813c8586>] submit_bio+0x76/0x180
Mar 30 07:41:47 pm1 kernel: [605780.789042] [<ffffffff8118f361>] ? __filemap_fdatawrite_range+0xd1/0x100
Mar 30 07:41:47 pm1 kernel: [605780.789044] [<ffffffff813beb33>] submit_bio_wait+0x63/0x90
Mar 30 07:41:47 pm1 kernel: [605780.789047] [<ffffffff813cc18d>] blkdev_issue_flush+0x5d/0x90
Mar 30 07:41:47 pm1 kernel: [605780.789051] [<ffffffff81248e95>] blkdev_fsync+0x35/0x50
Mar 30 07:41:47 pm1 kernel: [605780.789055] [<ffffffff81241f5d>] vfs_fsync_range+0x3d/0xb0
Mar 30 07:41:47 pm1 kernel: [605780.789060] [<ffffffff811035c5>] ? SyS_futex+0x85/0x180
Mar 30 07:41:47 pm1 kernel: [605780.789063] [<ffffffff8124202d>] do_fsync+0x3d/0x70
Mar 30 07:41:47 pm1 kernel: [605780.789066] [<ffffffff812422e3>] SyS_fdatasync+0x13/0x20
Mar 30 07:41:47 pm1 kernel: [605780.789069] [<ffffffff81858df6>] entry_SYSCALL_64_fastpath+0x16/0x75
Mar 30 07:42:20 pm1 systemd[1]: Starting Cleanup of Temporary Directories...
Mar 30 07:42:21 pm1 systemd[1]: Started Cleanup of Temporary Directories.
Mar 30 07:42:29 pm1 pve-firewall[2375]: firewall update time (175.099 seconds)
Mar 30 07:42:29 pm1 rrdcached[2195]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pm1/local-zfs) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pm1/local-zfs: illegal attempt to update using time 1490870216 when last update time is 1490870340 (minimum one second step))
Mar 30 07:42:29 pm1 rrdcached[2195]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pm1/freenas) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pm1/freenas: illegal attempt to update using time 1490870216 when last update time is 1490870340 (minimum one second step))
Mar 30 07:42:29 pm1 rrdcached[2195]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pm1/local) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pm1/local: illegal attempt to update using time 1490870216 when last update time is 1490870340 (minimum one second step))
Mar 30 07:42:29 pm1 pvestatd[2374]: status update time (178.523 seconds)
Mar 30 07:42:29 pm1 pmxcfs[5529]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pm1/freenas: -1
Mar 30 07:42:29 pm1 pmxcfs[5529]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pm1/local: -1
Mar 30 07:42:29 pm1 pmxcfs[5529]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pm1/local-zfs: -1
Mar 30 07:42:29 pm1 pmxcfs[5529]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pm1/server6: -1
Mar 30 07:42:59 pm1 rrdcached[2195]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pm1/server6) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pm1/server6: illegal attempt to update using time 1490870286 when last update time is 1490870549 (minimum one second step))
Mar 30 07:47:42 pm1 pmxcfs[5529]: [dcdb] notice: data verification successful


and here is my pveversion output:

root@pm1:~# pveversion -v
proxmox-ve: 4.4-84 (running kernel: 4.4.24-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.10-1-pve: 4.4.10-54
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-109
pve-firmware: 1.1-10
libpve-common-perl: 4.0-94
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-1
pve-docs: 4.4-3
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-96
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80


Thanks for any help.

roberto
 
rpereyra said:
...
proxmox-ve: 4.4-84 (running kernel: 4.4.24-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.10-1-pve: 4.4.10-54
Hi,
it looks like you forgot to reboot the node, so the new kernel was never activated.

The old kernel is still running.
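
A quick way to verify this (a minimal sketch, assuming a root shell on the node; the dpkg pattern is just illustrative):

# Compare the running kernel against the installed pve-kernel packages
uname -r                           # here: 4.4.24-1-pve
dpkg -l 'pve-kernel-*' | grep ^ii  # shows 4.4.44-1-pve installed but not booted
# Once the newer kernel is confirmed installed, activate it with a reboot:
reboot

After the reboot, pveversion -v should report the newer kernel as the running one.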

Udo
 
