PVE 6 cluster nodes randomly hang (10GbE network down)

Whatever

Since installing a PVE 6.x cluster that uses a 10GbE network for inter-cluster and storage (NFS) traffic, I've noticed that cluster nodes randomly hang: the node stays available over the 1GbE Ethernet network but NOT over the main 10GbE one, so neither the cluster nor the storage is reachable.
Yesterday it happened to node2, tonight to node3.
All nodes use the same shared NFS storage, and all backup jobs go to that storage only.

The node is still accessible via its 1GbE IP, but even the shutdown command freezes: guest shutdown is blocked by "no quorum" for good.
/etc/init.d/network restart does not help either.

All the logs are listed below (I was able to collect them online via the 1GbE network).
The node could only be rebooted with an IPMI reset.
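
For what it's worth, Debian/PVE 6 has no /etc/init.d/network script, so here is a rough sketch of what could be checked and tried over the 1GbE connection the next time the 10GbE side drops (the interface name enp5s0 is taken from the `ip a` output below; ifreload only applies if the optional ifupdown2 package is installed):

Code:
# check physical link and driver state of the 10GbE NIC
ip -s link show enp5s0                    # carrier state plus RX/TX error counters
ethtool enp5s0 | grep "Link detected"     # NIC-level link status
dmesg -T | grep -i -e ixgbe -e enp5s0 | tail -n 50

# restart networking the Debian/PVE way
systemctl restart networking              # classic ifupdown
#ifreload -a                              # only with the optional ifupdown2 package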
 
Code:
[Sun Sep  8 04:23:20 2019] fwbr143i0: port 2(tap143i0) entered disabled state
[Sun Sep  8 04:23:20 2019] fwbr143i0: port 2(tap143i0) entered blocking state
[Sun Sep  8 04:23:20 2019] fwbr143i0: port 2(tap143i0) entered forwarding state
[Sun Sep  8 07:25:56 2019] perf: interrupt took too long (2625 > 2500), lowering kernel.perf_event_max_sample_rate to 76000
[Sun Sep  8 09:30:12 2019] perf: interrupt took too long (3292 > 3281), lowering kernel.perf_event_max_sample_rate to 60750
[Sun Sep  8 11:19:15 2019] perf: interrupt took too long (4147 > 4115), lowering kernel.perf_event_max_sample_rate to 48000
[Sun Sep  8 15:08:08 2019] perf: interrupt took too long (5201 > 5183), lowering kernel.perf_event_max_sample_rate to 38250
[Mon Sep  9 00:24:03 2019] perf: interrupt took too long (6502 > 6501), lowering kernel.perf_event_max_sample_rate to 30750
[Mon Sep  9 00:53:44 2019] device tap114i0 entered promiscuous mode
[Mon Sep  9 00:53:44 2019] vmbr0: port 13(tap114i0) entered blocking state
[Mon Sep  9 00:53:44 2019] vmbr0: port 13(tap114i0) entered disabled state
[Mon Sep  9 00:53:44 2019] vmbr0: port 13(tap114i0) entered blocking state
[Mon Sep  9 00:53:44 2019] vmbr0: port 13(tap114i0) entered forwarding state
[Mon Sep  9 00:54:19 2019] vmbr0: port 13(tap114i0) entered disabled state
[Wed Sep 11 03:06:43 2019] nfs: server 172.16.253.252 not responding, still trying
...
[Wed Sep 11 03:09:29 2019] nfs: server 172.16.253.252 not responding, still trying
[Wed Sep 11 03:09:36 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:09:36 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:09:36 2019] nfs: server storageB not responding, timed out
[Wed Sep 11 03:09:36 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:09:44 2019] INFO: task kworker/2:1:3004219 blocked for more than 120 seconds.
[Wed Sep 11 03:09:44 2019]       Tainted: P           O      5.0.21-1-pve #1
[Wed Sep 11 03:09:44 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Sep 11 03:09:44 2019] kworker/2:1     D    0 3004219      2 0x80000000
[Wed Sep 11 03:09:44 2019] Workqueue: events slab_caches_to_rcu_destroy_workfn
[Wed Sep 11 03:09:44 2019] Call Trace:
[Wed Sep 11 03:09:44 2019]  __schedule+0x2d4/0x870
[Wed Sep 11 03:09:44 2019]  ? pcpu_cnt_pop_pages+0x45/0x60
[Wed Sep 11 03:09:44 2019]  schedule+0x2c/0x70
[Wed Sep 11 03:09:44 2019]  schedule_timeout+0x258/0x360
[Wed Sep 11 03:09:44 2019]  ? pcpu_free_area+0x1ec/0x2f0
[Wed Sep 11 03:09:44 2019]  ? __x2apic_send_IPI_dest+0x32/0x36
[Wed Sep 11 03:09:44 2019]  ? x2apic_send_IPI+0x2b/0x30
[Wed Sep 11 03:09:44 2019]  wait_for_completion+0xb7/0x140
[Wed Sep 11 03:09:44 2019]  ? wake_up_q+0x80/0x80
[Wed Sep 11 03:09:44 2019]  rcu_barrier+0x112/0x180
[Wed Sep 11 03:09:44 2019]  slab_caches_to_rcu_destroy_workfn+0x93/0xe0
[Wed Sep 11 03:09:44 2019]  process_one_work+0x20f/0x410
[Wed Sep 11 03:09:44 2019]  worker_thread+0x34/0x400
[Wed Sep 11 03:09:44 2019]  kthread+0x120/0x140
[Wed Sep 11 03:09:44 2019]  ? process_one_work+0x410/0x410
[Wed Sep 11 03:09:44 2019]  ? __kthread_parkme+0x70/0x70
[Wed Sep 11 03:09:44 2019]  ret_from_fork+0x35/0x40
[Wed Sep 11 03:10:42 2019] nfs: server 172.16.253.252 not responding, still trying
[Wed Sep 11 03:10:50 2019] nfs: server 172.16.253.252 not responding, still trying
[Wed Sep 11 03:10:59 2019] nfs: server 172.16.253.252 not responding, still trying
[Wed Sep 11 03:10:59 2019] nfs: server 172.16.253.252 not responding, still trying
[Wed Sep 11 03:10:59 2019] nfs: server 172.16.253.252 not responding, still trying
[Wed Sep 11 03:11:06 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:11:45 2019] INFO: task kworker/2:1:3004219 blocked for more than 120 seconds.
[Wed Sep 11 03:11:45 2019]       Tainted: P           O      5.0.21-1-pve #1
[Wed Sep 11 03:11:45 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Sep 11 03:11:45 2019] kworker/2:1     D    0 3004219      2 0x80000000
[Wed Sep 11 03:11:45 2019] Workqueue: events slab_caches_to_rcu_destroy_workfn
[Wed Sep 11 03:11:45 2019] Call Trace:
[Wed Sep 11 03:11:45 2019]  __schedule+0x2d4/0x870
[Wed Sep 11 03:11:45 2019]  ? pcpu_cnt_pop_pages+0x45/0x60
[Wed Sep 11 03:11:45 2019]  schedule+0x2c/0x70
[Wed Sep 11 03:11:45 2019]  schedule_timeout+0x258/0x360
[Wed Sep 11 03:11:45 2019]  ? pcpu_free_area+0x1ec/0x2f0
[Wed Sep 11 03:11:45 2019]  ? __x2apic_send_IPI_dest+0x32/0x36
[Wed Sep 11 03:11:45 2019]  ? x2apic_send_IPI+0x2b/0x30
[Wed Sep 11 03:11:45 2019]  wait_for_completion+0xb7/0x140
[Wed Sep 11 03:11:45 2019]  ? wake_up_q+0x80/0x80
[Wed Sep 11 03:11:45 2019]  rcu_barrier+0x112/0x180
[Wed Sep 11 03:11:45 2019]  slab_caches_to_rcu_destroy_workfn+0x93/0xe0
[Wed Sep 11 03:11:45 2019]  process_one_work+0x20f/0x410
[Wed Sep 11 03:11:45 2019]  worker_thread+0x34/0x400
[Wed Sep 11 03:11:45 2019]  kthread+0x120/0x140
[Wed Sep 11 03:11:45 2019]  ? process_one_work+0x410/0x410
[Wed Sep 11 03:11:45 2019]  ? __kthread_parkme+0x70/0x70
[Wed Sep 11 03:11:45 2019]  ret_from_fork+0x35/0x40
[Wed Sep 11 03:12:36 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:13:45 2019] INFO: task kworker/2:1:3004219 blocked for more than 120 seconds.
[Wed Sep 11 03:13:45 2019]       Tainted: P           O      5.0.21-1-pve #1
[Wed Sep 11 03:13:45 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Sep 11 03:13:45 2019] kworker/2:1     D    0 3004219      2 0x80000000
[Wed Sep 11 03:13:45 2019] Workqueue: events slab_caches_to_rcu_destroy_workfn
[Wed Sep 11 03:13:45 2019] Call Trace:
[Wed Sep 11 03:13:45 2019]  __schedule+0x2d4/0x870
[Wed Sep 11 03:13:45 2019]  ? pcpu_cnt_pop_pages+0x45/0x60
[Wed Sep 11 03:13:45 2019]  schedule+0x2c/0x70
[Wed Sep 11 03:13:45 2019]  schedule_timeout+0x258/0x360
[Wed Sep 11 03:13:45 2019]  ? pcpu_free_area+0x1ec/0x2f0
[Wed Sep 11 03:13:45 2019]  ? __x2apic_send_IPI_dest+0x32/0x36
[Wed Sep 11 03:13:45 2019]  ? x2apic_send_IPI+0x2b/0x30
[Wed Sep 11 03:13:45 2019]  wait_for_completion+0xb7/0x140
[Wed Sep 11 03:13:45 2019]  ? wake_up_q+0x80/0x80
[Wed Sep 11 03:13:45 2019]  rcu_barrier+0x112/0x180
[Wed Sep 11 03:13:45 2019]  slab_caches_to_rcu_destroy_workfn+0x93/0xe0
[Wed Sep 11 03:13:45 2019]  process_one_work+0x20f/0x410
[Wed Sep 11 03:13:45 2019]  worker_thread+0x34/0x400
[Wed Sep 11 03:13:45 2019]  kthread+0x120/0x140
[Wed Sep 11 03:13:45 2019]  ? process_one_work+0x410/0x410
[Wed Sep 11 03:13:45 2019]  ? __kthread_parkme+0x70/0x70
[Wed Sep 11 03:13:45 2019]  ret_from_fork+0x35/0x40
[Wed Sep 11 03:14:06 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:15:36 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:15:46 2019] INFO: task kworker/2:1:3004219 blocked for more than 120 seconds.
[Wed Sep 11 03:15:46 2019]       Tainted: P           O      5.0.21-1-pve #1
[Wed Sep 11 03:15:46 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Sep 11 03:15:46 2019] kworker/2:1     D    0 3004219      2 0x80000000
[Wed Sep 11 03:15:46 2019] Workqueue: events slab_caches_to_rcu_destroy_workfn
...
[Wed Sep 11 03:19:48 2019]  ret_from_fork+0x35/0x40
[Wed Sep 11 03:20:06 2019] nfs: server storageB not responding, still trying
[Wed Sep 11 03:21:36 2019] nfs: server storageB not responding, still trying
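
The "not responding, still trying" lines above are what a hard NFS mount looks like while it retries indefinitely; the kworker blocked in D state follows from that. As a sketch (not tested here), one could confirm the mount flags and, if the workload tolerates I/O errors during outages, soften the mount via the storage config; the export path below is hypothetical:

Code:
# show the options the NFS mounts are actually using (hard mounts retry
# indefinitely and leave tasks in uninterruptible D state, as in the trace)
nfsstat -m
grep nfs /proc/mounts

# hypothetical /etc/pve/storage.cfg entry -- "soft" makes requests fail after
# timeo/retrans instead of blocking forever (risky for VM disks, more
# acceptable for backup-only storage)
#nfs: storageB
#        server 172.16.253.252
#        export /path/to/export
#        path /mnt/pve/storageB
#        options soft,timeo=150,retrans=3
#        content backup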
 
Code:
root@pve-node3:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:25:90:88:a6:58 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:25:90:88:a6:58 brd ff:ff:ff:ff:ff:ff
4: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 9c:69:b4:60:ec:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.16.253.103/24 brd 172.16.253.255 scope global enp5s0
       valid_lft forever preferred_lft forever
    inet6 fe80::9e69:b4ff:fe60:ece0/64 scope link
       valid_lft forever preferred_lft forever
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 00:25:90:88:a6:58 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:90:88:a6:58 brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.103/24 brd 172.16.252.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::225:90ff:fe88:a658/64 scope link
       valid_lft forever preferred_lft forever
7: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr102i0 state UNKNOWN group default qlen 1000
    link/ether 9e:93:c8:a3:88:6f brd ff:ff:ff:ff:ff:ff
8: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 06:f4:93:62:2f:76 brd ff:ff:ff:ff:ff:ff
9: fwpr102p0@fwln102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 12:0d:b8:ce:11:11 brd ff:ff:ff:ff:ff:ff
10: fwln102i0@fwpr102p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether 06:f4:93:62:2f:76 brd ff:ff:ff:ff:ff:ff
11: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether b6:6b:fc:ed:73:75 brd ff:ff:ff:ff:ff:ff
12: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr104i0 state UNKNOWN group default qlen 1000
    link/ether 96:22:f8:2f:08:69 brd ff:ff:ff:ff:ff:ff
13: fwbr104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 76:12:e4:4c:20:b0 brd ff:ff:ff:ff:ff:ff
14: fwpr104p0@fwln104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fa:28:df:e3:2a:d8 brd ff:ff:ff:ff:ff:ff
15: fwln104i0@fwpr104p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
    link/ether 76:12:e4:4c:20:b0 brd ff:ff:ff:ff:ff:ff
16: tap107i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr107i0 state UNKNOWN group default qlen 1000
    link/ether c6:21:d9:7f:94:c6 brd ff:ff:ff:ff:ff:ff
17: fwbr107i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 56:49:aa:1e:20:87 brd ff:ff:ff:ff:ff:ff
18: fwpr107p0@fwln107i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether ba:3f:f9:02:a1:5e brd ff:ff:ff:ff:ff:ff
19: fwln107i0@fwpr107p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr107i0 state UP group default qlen 1000
    link/ether 56:49:aa:1e:20:87 brd ff:ff:ff:ff:ff:ff
20: tap111i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr111i0 state UNKNOWN group default qlen 1000
    link/ether 22:b4:70:d1:f2:6b brd ff:ff:ff:ff:ff:ff
21: fwbr111i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether d6:d2:41:4d:f9:df brd ff:ff:ff:ff:ff:ff
22: fwpr111p0@fwln111i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 2a:ad:2e:4d:1a:73 brd ff:ff:ff:ff:ff:ff
23: fwln111i0@fwpr111p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr111i0 state UP group default qlen 1000
    link/ether d6:d2:41:4d:f9:df brd ff:ff:ff:ff:ff:ff
24: tap115i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether f6:d1:d8:b0:4a:24 brd ff:ff:ff:ff:ff:ff
25: tap120i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr120i0 state UNKNOWN group default qlen 1000
    link/ether 76:a5:bb:b7:87:7a brd ff:ff:ff:ff:ff:ff
26: fwbr120i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether ee:01:66:b0:8b:ce brd ff:ff:ff:ff:ff:ff
27: fwpr120p0@fwln120i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether a2:01:e4:84:5e:23 brd ff:ff:ff:ff:ff:ff
28: fwln120i0@fwpr120p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr120i0 state UP group default qlen 1000
    link/ether ee:01:66:b0:8b:ce brd ff:ff:ff:ff:ff:ff
29: tap123i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 4a:6b:0a:59:cc:e2 brd ff:ff:ff:ff:ff:ff
30: tap132i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr132i0 state UNKNOWN group default qlen 1000
    link/ether 72:c3:5f:28:54:53 brd ff:ff:ff:ff:ff:ff
31: fwbr132i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether e6:32:25:21:9e:c3 brd ff:ff:ff:ff:ff:ff
32: fwpr132p0@fwln132i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 2e:57:f9:25:2b:da brd ff:ff:ff:ff:ff:ff
33: fwln132i0@fwpr132p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr132i0 state UP group default qlen 1000
    link/ether e6:32:25:21:9e:c3 brd ff:ff:ff:ff:ff:ff
34: tap133i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr133i0 state UNKNOWN group default qlen 1000
    link/ether f6:11:d1:5e:3a:63 brd ff:ff:ff:ff:ff:ff
35: fwbr133i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether e6:b2:69:ee:9b:0c brd ff:ff:ff:ff:ff:ff
36: fwpr133p0@fwln133i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 3a:14:3e:43:eb:9f brd ff:ff:ff:ff:ff:ff
37: fwln133i0@fwpr133p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr133i0 state UP group default qlen 1000
    link/ether e6:b2:69:ee:9b:0c brd ff:ff:ff:ff:ff:ff
38: tap143i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master fwbr143i0 state UNKNOWN group default qlen 1000
    link/ether 42:40:cf:7c:29:22 brd ff:ff:ff:ff:ff:ff
39: fwbr143i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether a6:8b:c4:8a:29:6d brd ff:ff:ff:ff:ff:ff
40: fwpr143p0@fwln143i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 32:7c:e6:04:56:3e brd ff:ff:ff:ff:ff:ff
41: fwln143i0@fwpr143p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master fwbr143i0 state UP group default qlen 1000
    link/ether a6:8b:c4:8a:29:6d brd ff:ff:ff:ff:ff:ff
root@pve-node3:~#
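
Everything above runs with MTU 9000, so an MTU mismatch anywhere on the 10GbE path (switch port, NFS server) would break large packets while small pings still pass. A quick sanity check, using the NFS server address from the logs (the second target is a hypothetical peer on vmbr0's subnet; adjust):

Code:
# 8972 = 9000 bytes MTU - 20 (IP header) - 8 (ICMP header);
# -M do forbids fragmentation, so this fails if any hop has a smaller MTU
ping -M do -s 8972 -c 3 172.16.253.252    # NFS server on the storage network
ping -M do -s 8972 -c 3 172.16.252.101    # hypothetical other node on vmbr0; adjust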
 
Code:
root@pve-node3:~# uname -a
Linux pve-node3 5.0.21-1-pve #1 SMP PVE 5.0.21-2 (Wed, 28 Aug 2019 15:12:18 +0200) x86_64 GNU/Linux
root@pve-node3:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-7 (running version: 6.0-7/28984024)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-4.15.18-19-pve: 4.15.18-45
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.10.15-1-pve: 4.10.15-15
ceph: 14.2.2-pve1
ceph-fuse: 14.2.2-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.11-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-8
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2
root@pve-node3:~#
 
Code:
root@pve-node3:~# lspci
00:00.0 Host bridge: Intel Corporation Xeon E5/Core i7 DMI2 (rev 07)
00:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 1a (rev 07)
00:02.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 2a (rev 07)
00:03.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 3a in PCI Express Mode (rev 07)
00:03.2 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 3c (rev 07)
00:04.0 System peripheral: Intel Corporation Xeon E5/Core i7 DMA Channel 0 (rev 07)
...
00:05.0 System peripheral: Intel Corporation Xeon E5/Core i7 Address Map, VTd_Misc, System Management (rev 07)
00:05.2 System peripheral: Intel Corporation Xeon E5/Core i7 Control Status and Global Errors (rev 07)
00:05.4 PIC: Intel Corporation Xeon E5/Core i7 I/O APIC (rev 07)
00:11.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express Virtual Root Port (rev 06)
00:16.0 Communication controller: Intel Corporation C600/X79 series chipset MEI Controller #1 (rev 05)
00:16.1 Communication controller: Intel Corporation C600/X79 series chipset MEI Controller #2 (rev 05)
00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 (rev 06)
00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 (rev 06)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a6)
00:1f.0 ISA bridge: Intel Corporation C600/X79 series chipset LPC Controller (rev 06)
00:1f.2 SATA controller: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller (rev 06)
00:1f.3 SMBus: Intel Corporation C600/X79 series chipset SMBus Host Controller (rev 06)
00:1f.6 Signal processing controller: Intel Corporation C600/X79 series chipset Thermal Management Controller (rev 06)
01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
05:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
08:01.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 (rev 0a)
7f:08.0 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link 0 (rev 07)
7f:08.3 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 0 (rev 07)
7f:08.4 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 0 (rev 07)
7f:09.0 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link 1 (rev 07)
7f:09.3 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 1 (rev 07)
7f:09.4 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 1 (rev 07)
7f:0a.0 System peripheral: Intel Corporation Xeon E5/Core i7 Power Control Unit 0 (rev 07)
...
7f:0b.0 System peripheral: Intel Corporation Xeon E5/Core i7 Interrupt Control Registers (rev 07)
7f:0b.3 System peripheral: Intel Corporation Xeon E5/Core i7 Semaphore and Scratchpad Configuration Registers (rev 07)
7f:0c.0 System peripheral: Intel Corporation Xeon E5/Core i7 Unicast Register 0 (rev 07)
...
7f:0c.6 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller System Address Decoder 0 (rev 07)
7f:0c.7 System peripheral: Intel Corporation Xeon E5/Core i7 System Address Decoder (rev 07)
7f:0d.0 System peripheral: Intel Corporation Xeon E5/Core i7 Unicast Register 0 (rev 07)
...
7f:0d.6 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller System Address Decoder 1 (rev 07)
7f:0e.0 System peripheral: Intel Corporation Xeon E5/Core i7 Processor Home Agent (rev 07)
7f:0e.1 Performance counters: Intel Corporation Xeon E5/Core i7 Processor Home Agent Performance Monitoring (rev 07)
7f:0f.0 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Registers (rev 07)
7f:0f.1 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller RAS Registers (rev 07)
7f:0f.2 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Target Address Decoder 0 (rev 07)
...
7f:10.0 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 0 (rev 07)
7f:10.1 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 1 (rev 07)
7f:10.2 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 0 (rev 07)
7f:10.3 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 1 (rev 07)
7f:10.4 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 2 (rev 07)
7f:10.5 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 3 (rev 07)
7f:10.6 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 2 (rev 07)
7f:10.7 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 3 (rev 07)
7f:11.0 System peripheral: Intel Corporation Xeon E5/Core i7 DDRIO (rev 07)
7f:13.0 System peripheral: Intel Corporation Xeon E5/Core i7 R2PCIe (rev 07)
7f:13.1 Performance counters: Intel Corporation Xeon E5/Core i7 Ring to PCI Express Performance Monitor (rev 07)
7f:13.4 Performance counters: Intel Corporation Xeon E5/Core i7 QuickPath Interconnect Agent Ring Registers (rev 07)
7f:13.5 Performance counters: Intel Corporation Xeon E5/Core i7 Ring to QuickPath Interconnect Link 0 Performance Monitor (rev 07)
7f:13.6 System peripheral: Intel Corporation Xeon E5/Core i7 Ring to QuickPath Interconnect Link 1 Performance Monitor (rev 07)
80:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 1a (rev 07)
80:02.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 2a (rev 07)
80:03.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 3a in PCI Express Mode (rev 07)
80:04.0 System peripheral: Intel Corporation Xeon E5/Core i7 DMA Channel 0 (rev 07)
...
80:05.0 System peripheral: Intel Corporation Xeon E5/Core i7 Address Map, VTd_Misc, System Management (rev 07)
80:05.2 System peripheral: Intel Corporation Xeon E5/Core i7 Control Status and Global Errors (rev 07)
80:05.4 PIC: Intel Corporation Xeon E5/Core i7 I/O APIC (rev 07)
ff:08.0 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link 0 (rev 07)
ff:08.3 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 0 (rev 07)
ff:08.4 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 0 (rev 07)
ff:09.0 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link 1 (rev 07)
ff:09.3 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 1 (rev 07)
ff:09.4 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 1 (rev 07)
ff:0a.0 System peripheral: Intel Corporation Xeon E5/Core i7 Power Control Unit 0 (rev 07)
...
ff:0b.0 System peripheral: Intel Corporation Xeon E5/Core i7 Interrupt Control Registers (rev 07)
ff:0b.3 System peripheral: Intel Corporation Xeon E5/Core i7 Semaphore and Scratchpad Configuration Registers (rev 07)
ff:0c.0 System peripheral: Intel Corporation Xeon E5/Core i7 Unicast Register 0 (rev 07)
...
ff:0c.6 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller System Address Decoder 0 (rev 07)
ff:0c.7 System peripheral: Intel Corporation Xeon E5/Core i7 System Address Decoder (rev 07)
ff:0d.0 System peripheral: Intel Corporation Xeon E5/Core i7 Unicast Register 0 (rev 07)
...
ff:0d.6 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller System Address Decoder 1 (rev 07)
ff:0e.0 System peripheral: Intel Corporation Xeon E5/Core i7 Processor Home Agent (rev 07)
ff:0e.1 Performance counters: Intel Corporation Xeon E5/Core i7 Processor Home Agent Performance Monitoring (rev 07)
ff:0f.0 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Registers (rev 07)
ff:0f.1 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller RAS Registers (rev 07)
ff:0f.2 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Target Address Decoder 0 (rev 07)
...
ff:10.0 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 0 (rev 07)
ff:10.1 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 1 (rev 07)
ff:10.2 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 0 (rev 07)
ff:10.3 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 1 (rev 07)
ff:10.4 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 2 (rev 07)
ff:10.5 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller Channel 0-3 Thermal Control 3 (rev 07)
ff:10.6 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 2 (rev 07)
ff:10.7 System peripheral: Intel Corporation Xeon E5/Core i7 Integrated Memory Controller ERROR Registers 3 (rev 07)
ff:11.0 System peripheral: Intel Corporation Xeon E5/Core i7 DDRIO (rev 07)
ff:13.0 System peripheral: Intel Corporation Xeon E5/Core i7 R2PCIe (rev 07)
ff:13.1 Performance counters: Intel Corporation Xeon E5/Core i7 Ring to PCI Express Performance Monitor (rev 07)
ff:13.4 Performance counters: Intel Corporation Xeon E5/Core i7 QuickPath Interconnect Agent Ring Registers (rev 07)
ff:13.5 Performance counters: Intel Corporation Xeon E5/Core i7 Ring to QuickPath Interconnect Link 0 Performance Monitor (rev 07)
ff:13.6 System peripheral: Intel Corporation Xeon E5/Core i7 Ring to QuickPath Interconnect Link 1 Performance Monitor (rev 07)
root@pve-node3:~#
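
The 10GbE port is an Intel 82599ES handled by the ixgbe driver. When comparing against other ixgbe hang reports, it may help to record the exact driver/firmware pair and any non-zero error counters; a small sketch:

Code:
ethtool -i enp5s0                      # driver, version, firmware of the 82599ES
ethtool -S enp5s0 | grep -v ': 0$'     # only statistics with non-zero values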
 
Code:
root@pve-node3:~# dmesg -T | grep Intel
[Sun Sep  8 04:22:18 2019]   Intel GenuineIntel
[Sun Sep  8 04:22:19 2019] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (family: 0x6, model: 0x2d, stepping: 0x7)
[Sun Sep  8 04:22:19 2019] Performance Events: PEBS fmt1+, SandyBridge events, 16-deep LBR, full-width counters, Intel PMU driver.
[Sun Sep  8 04:22:21 2019] intel_pstate: Intel P-state driver initializing
[Sun Sep  8 04:22:21 2019] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.4.0-k
[Sun Sep  8 04:22:21 2019] igb: Copyright (c) 2007-2014 Intel Corporation.
[Sun Sep  8 04:22:21 2019] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[Sun Sep  8 04:22:21 2019] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[Sun Sep  8 04:22:21 2019] igb 0000:01:00.0: Intel(R) Gigabit Ethernet Network Connection
[Sun Sep  8 04:22:21 2019] igb 0000:01:00.1: Intel(R) Gigabit Ethernet Network Connection
[Sun Sep  8 04:22:21 2019] ixgbe 0000:05:00.0: Intel(R) 10 Gigabit Network Connection
[Sun Sep  8 04:22:25 2019] ioatdma: Intel(R) QuickData Technology Driver 4.00
 
There was no unexpected activity on that node at the time of the hang.
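
Since the node stays reachable over 1GbE while wedged, capturing the kernel log and link state at that moment might narrow this down next time; a sketch (the file path is illustrative):

Code:
# run over the 1GbE connection while the node is hung
journalctl -k --since "30 min ago" > /tmp/hang-state.log   # recent kernel messages
ip -s link show enp5s0 >> /tmp/hang-state.log              # 10GbE carrier/error counters
cat /proc/net/bonding/bond0 >> /tmp/hang-state.log         # 1GbE bond state, for contrast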
 

Attachment: Screenshot_1.jpg
I don't know whether this is related, but the following was observed during boot:

Code:
[Wed Sep 11 04:37:27 2019] ACPI: Using IOAPIC for interrupt routing
[Wed Sep 11 04:37:27 2019] HEST: Table parsing has been initialized.
[Wed Sep 11 04:37:27 2019] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[Wed Sep 11 04:37:27 2019] ACPI: Enabled 4 GPEs in block 00 to 3F
[Wed Sep 11 04:37:27 2019] ACPI BIOS Error (bug): Could not resolve [\_SB.PRAD], AE_NOT_FOUND (20181213/psargs-330)
[Wed Sep 11 04:37:27 2019]
                           Initialized Local Variables for Method [_L24]:
[Wed Sep 11 04:37:27 2019]   Local0: (____ptrval____) <Obj>           Integer 0000000000000303
[Wed Sep 11 04:37:27 2019]   Local1: (____ptrval____) <Obj>           Integer 0000000080040011
[Wed Sep 11 04:37:27 2019] No Arguments are initialized for method [_L24]
[Wed Sep 11 04:37:27 2019] ACPI Error: Method parse/execution failed \_GPE._L24, AE_NOT_FOUND (20181213/psparse-531)
[Wed Sep 11 04:37:27 2019] ACPI Error: AE_NOT_FOUND, while evaluating GPE method [_L24] (20181213/evgpe-509)
[Wed Sep 11 04:37:27 2019] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7e])
[Wed Sep 11 04:37:27 2019] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[Wed Sep 11 04:37:27 2019] acpi PNP0A08:00: _OSC: platform does not support [SHPCHotplug PME AER LTR]
[Wed Sep 11 04:37:27 2019] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PCIeCapability]
 
