Kernel reports problems with the network card

CoolTux

Hello everyone,
Can someone perhaps explain this log output to me? At the moment it only affects one node, but I have been seeing this message in the log on and off for a few days now, and my Check_Mk then also reports the node as down.
Code:
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] link: host: 4 link: 0 is down
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] link: host: 3 link: 0 is down
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] link: host: 2 link: 0 is down
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] link: host: 1 link: 0 is down
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 4 has no active links
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 3 has no active links
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 2 has no active links
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Dec 09 18:54:02 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 1 has no active links
Dec 09 18:54:04 n5-pve-cluster corosync[1544]:   [TOTEM ] Token has not been received in 2212 ms
Dec 09 18:54:04 n5-pve-cluster kernel: e1000e 0000:00:19.0 eno1: Detected Hardware Unit Hang:
Dec 09 18:54:04 n5-pve-cluster corosync[1544]:   [TOTEM ] A processor failed, forming new configuration.
Dec 09 18:54:06 n5-pve-cluster kernel: e1000e 0000:00:19.0 eno1: Detected Hardware Unit Hang:
MAC Status             <80083>
PHY Status             <796d>
PHY 1000BASE-T Status  <3800>
PHY Extended Status    <3000>
PCI Status             <10>
MAC Status             <80083>
PHY Status             <796d>
PHY 1000BASE-T Status  <3800>
PHY Extended Status    <3000>
PCI Status             <10>
Dec 09 18:54:10 n5-pve-cluster kernel: e1000e 0000:00:19.0 eno1: Reset adapter unexpectedly
Dec 09 18:54:10 n5-pve-cluster kernel: vmbr0: port 1(eno1) entered disabled state
Dec 09 18:54:10 n5-pve-cluster kernel: vmbr34: port 1(eno1.34) entered disabled state
Dec 09 18:54:10 n5-pve-cluster kernel: vmbr40: port 1(eno1.40) entered disabled state
Dec 09 18:54:10 n5-pve-cluster kernel: vmbr60: port 1(eno1.60) entered disabled state
Dec 09 18:54:10 n5-pve-cluster kernel: vmbr80: port 1(eno1.80) entered disabled state
Dec 09 18:54:10 n5-pve-cluster kernel: vmbr90: port 1(eno1.90) entered disabled state
Dec 09 18:54:10 n5-pve-cluster kernel: vmbr120: port 1(eno1.120) entered disabled state
Dec 09 18:54:13 n5-pve-cluster pvestatd[1634]: got timeout
Dec 09 18:54:13 n5-pve-cluster ntpd[1276]: Deleting interface #7 eno1.321, 10.32.1.9#123, interface stats: received=0, sent=0, dropped=0, active_time=190797 secs
Dec 09 18:54:13 n5-pve-cluster ntpd[1276]: Deleting interface #8 eno1.322, 10.32.2.9#123, interface stats: received=0, sent=0, dropped=0, active_time=190797 secs
Dec 09 18:54:14 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: members: 5/1534
Dec 09 18:54:14 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: all data is up to date
Dec 09 18:54:14 n5-pve-cluster kernel: e1000e: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Dec 09 18:54:14 n5-pve-cluster kernel: vmbr0: port 1(eno1) entered blocking state
Dec 09 18:54:14 n5-pve-cluster kernel: vmbr0: port 1(eno1) entered forwarding state
Dec 09 18:54:14 n5-pve-cluster kernel: vmbr120: port 1(eno1.120) entered blocking state
Dec 09 18:54:14 n5-pve-cluster kernel: vmbr120: port 1(eno1.120) entered forwarding state
Dec 09 18:54:16 n5-pve-cluster ntpd[1276]: Listen normally on 9 eno1.321 10.32.1.9:123
Dec 09 18:54:16 n5-pve-cluster ntpd[1276]: Listen normally on 10 eno1.322 10.32.2.9:123
Dec 09 18:54:16 n5-pve-cluster ntpd[1276]: new interface(s) found: waking up resolver
Dec 09 18:54:16 n5-pve-cluster corosync[1544]:   [KNET  ] rx: host: 2 link: 0 is up
Dec 09 18:54:16 n5-pve-cluster corosync[1544]:   [KNET  ] rx: host: 1 link: 0 is up
Dec 09 18:54:16 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Dec 09 18:54:16 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Dec 09 18:54:16 n5-pve-cluster corosync[1544]:   [TOTEM ] A new membership (1.268) was formed. Members joined: 1 2 3 4
Dec 09 18:54:16 n5-pve-cluster corosync[1544]:   [TOTEM ] Retransmit List: 4
Dec 09 18:54:16 n5-pve-cluster corosync[1544]:   [TOTEM ] Retransmit List: 4 8
Dec 09 18:54:16 n5-pve-cluster ceph-mon[1549]: 2019-12-09 18:54:16.662 7f6686acf700 -1 mon.n5-pve-cluster@4(electing) e7 failed to get devid for : fallback method has serial ''but no model
Dec 09 18:54:16 n5-pve-cluster ceph-mon[1549]: 2019-12-09 18:54:16.670 7f6686acf700 -1 mon.n5-pve-cluster@4(electing) e7 failed to get devid for : fallback method has serial ''but no model
Dec 09 18:54:17 n5-pve-cluster corosync[1544]:   [TOTEM ] Retransmit List: 4 8
Dec 09 18:54:17 n5-pve-cluster corosync[1544]:   [KNET  ] rx: host: 4 link: 0 is up
Dec 09 18:54:17 n5-pve-cluster corosync[1544]:   [KNET  ] rx: host: 3 link: 0 is up
Dec 09 18:54:17 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Dec 09 18:54:17 n5-pve-cluster corosync[1544]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Dec 09 18:54:17 n5-pve-cluster pmxcfs[1534]: [status] notice: cpg_send_message retry 10
Dec 09 18:54:17 n5-pve-cluster corosync[1544]:   [TOTEM ] Retransmit List: 4 8
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [CPG   ] downlist left_list: 0 received
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [CPG   ] downlist left_list: 0 received
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [CPG   ] downlist left_list: 0 received
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [CPG   ] downlist left_list: 0 received
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [CPG   ] downlist left_list: 0 received
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: members: 1/1313, 2/1343, 3/1359, 4/1332, 5/1534
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: starting data syncronisation
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: members: 1/1313, 2/1343, 3/1359, 4/1332, 5/1534
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: starting data syncronisation
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [QUORUM] This node is within the primary component and will provide service.
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [QUORUM] Members[5]: 1 2 3 4 5
Dec 09 18:54:18 n5-pve-cluster corosync[1544]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: node has quorum
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: cpg_send_message retried 17 times
Dec 09 18:54:18 n5-pve-cluster pvestatd[1634]: status update time (10.199 seconds)
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: received sync request (epoch 1/1313/00000009)
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: received sync request (epoch 1/1313/00000009)
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: received all states
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: leader is 1/1313
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: synced members: 1/1313, 2/1343, 3/1359, 4/1332
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: waiting for updates from leader
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: dfsm_deliver_queue: queue length 6
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: update complete - trying to commit (got 11 inode updates)
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: all data is up to date
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: dfsm_deliver_sync_queue: queue length 6
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [dcdb] notice: dfsm_deliver_queue: queue length 1
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: received all states
Dec 09 18:54:18 n5-pve-cluster pve-ha-lrm[1774]: successfully acquired lock 'ha_agent_n5-pve-cluster_lock'
Dec 09 18:54:18 n5-pve-cluster pve-ha-lrm[1774]: status change lost_agent_lock => active
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: all data is up to date
Dec 09 18:54:18 n5-pve-cluster pmxcfs[1534]: [status] notice: dfsm_deliver_queue: queue length 8
Dec 09 18:54:18 n5-pve-cluster systemd[1]: check_mk@6054-10.6.4.9:6556-10.6.6.38:50704.service: Succeeded.
Dec 09 18:54:23 n5-pve-cluster pve-ha-crm[1716]: status change wait_for_quorum => slave

Unfortunately I had to cut quite a bit out of the log.
Many thanks
Regards
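
The hang messages point at the onboard Intel NIC (e1000e, eno1), which also carries the corosync link. As a first step, the driver/firmware version and the currently enabled offload features can be checked with ethtool; a minimal sketch, assuming eno1 as in the log above:
Code:
# Show driver, version, firmware and PCI bus info for the onboard NIC
ethtool -i eno1
# Show which offload features (tso, gso, gro, sg, ...) are currently enabled
ethtool -k eno1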
 
I have moved my Check_Mk container off that node for now; the message always appeared right after a Check_Mk service start. Let's see whether that improves things.
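
A workaround that is frequently suggested for e1000e "Detected Hardware Unit Hang" followed by "Reset adapter unexpectedly" is to disable segmentation offloading on the affected port. Sketched below for eno1 (the interface from the log); whether offloading is really the trigger here is an assumption, but the correlation with a Check_Mk service start (a burst of agent traffic) would fit that pattern:
Code:
# Disable TCP and generic segmentation offload on eno1
# (takes effect immediately, lost on reboot)
ethtool -K eno1 tso off gso off

# To persist across reboots, e.g. add a post-up line to the
# "iface eno1 inet manual" stanza in /etc/network/interfaces:
#   post-up /sbin/ethtool -K eno1 tso off gso off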
 
