Problems with the Proxmox network

Sascha72036

Member
Aug 22, 2016
Hello,

we operate a large cluster with 41 nodes (CPU: AMD EPYC 7402P), each with a separate 2x 10G NIC for Ceph + cluster traffic (corosync etc.).
Cluster, Ceph and so on are fully separated from the normal VM traffic. We use two redundant switches (Arista) for Ceph and two redundant switches (Juniper) to connect the VMs to the outside.
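
For context, the per-node network layout looks roughly like this (a minimal sketch of /etc/network/interfaces; the interface and bridge names match the logs below, while the bond mode, uplink name, and address are placeholders/assumptions):

auto bond1
iface bond1 inet manual
    bond-slaves eth2 eth3
    bond-miimon 100
    bond-mode 802.3ad
    # bond mode is an assumption; the actual mode may differ

auto vmbr1
iface vmbr1 inet static
    # Ceph + corosync network on the bonded 10G ports (address is a placeholder)
    address 10.10.10.39/24
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
    # VM traffic towards the Juniper switches (uplink name is a placeholder)
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0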

Unfortunately, it happens every now and then that after a restart of a large VM (>100 GB RAM), the 10G network card for Ceph + corosync drops out:
[Sat Mar 20 10:15:03 2021] vmbr0: port 5(tap148i0) entered disabled state
[Sat Mar 20 10:15:04 2021] fwbr148i1: port 2(tap148i1) entered disabled state
[Sat Mar 20 10:15:04 2021] fwbr148i1: port 1(fwln148i1) entered disabled state
[Sat Mar 20 10:15:04 2021] vmbr1: port 2(fwpr148p1) entered disabled state
[Sat Mar 20 10:15:04 2021] device fwln148i1 left promiscuous mode
[Sat Mar 20 10:15:04 2021] fwbr148i1: port 1(fwln148i1) entered disabled state
[Sat Mar 20 10:15:04 2021] device fwpr148p1 left promiscuous mode
[Sat Mar 20 10:15:04 2021] vmbr1: port 2(fwpr148p1) entered disabled state
[Sat Mar 20 10:15:13 2021] device bond1 left promiscuous mode
[Sat Mar 20 10:15:13 2021] device eth2 left promiscuous mode
[Sat Mar 20 10:15:13 2021] device eth3 left promiscuous mode
[Sat Mar 20 10:16:10 2021] libceph: osd28 down
[Sat Mar 20 10:16:14 2021] libceph: osd28 up
[Sat Mar 20 10:16:49 2021] libceph: osd16 down
[Sat Mar 20 10:16:55 2021] libceph: osd17 down
[Sat Mar 20 10:16:55 2021] libceph: osd16 up

A short time later the whole cluster falls apart, as the nodes' 10G network cards shut down one after another, presumably due to high corosync broadcast traffic.
The network cards are either Intel X520-DA2 or Mellanox ConnectX-3.
Can anyone here help us?
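
For reference, this is roughly how we inspect the bond and NIC state after such an event (standard Linux tools; device names as on our nodes):

# bond and slave state on the affected node
cat /proc/net/bonding/bond1
ip -s link show bond1
# link state and error/drop counters of the 10G ports
ethtool eth2
ethtool -S eth2 | grep -i -e err -e drop
# recent NIC/driver messages (ixgbe = X520, mlx4 = ConnectX-3)
dmesg -T | grep -i -e ixgbe -e mlx4 -e bond1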

Best regards
 
Here is another excerpt from the syslog:

Mar 20 10:15:37 prox39 kernel: [1843705.835893] vmbr0: port 5(tap148i0) entered disabled state
Mar 20 10:15:37 prox39 pmxcfs[1711]: [status] notice: received log
Mar 20 10:15:37 prox39 pmxcfs[1711]: [status] notice: received log
Mar 20 10:15:37 prox39 pmxcfs[1711]: [status] notice: received log
Mar 20 10:15:37 prox39 kernel: [1843706.172555] fwbr148i1: port 2(tap148i1) entered disabled state
Mar 20 10:15:37 prox39 pmxcfs[1711]: [status] notice: received log
Mar 20 10:15:37 prox39 kernel: [1843706.218129] fwbr148i1: port 1(fwln148i1) entered disabled state
Mar 20 10:15:37 prox39 kernel: [1843706.236257] vmbr1: port 2(fwpr148p1) entered disabled state
Mar 20 10:15:37 prox39 kernel: [1843706.257099] device fwln148i1 left promiscuous mode
Mar 20 10:15:37 prox39 kernel: [1843706.257741] fwbr148i1: port 1(fwln148i1) entered disabled state
Mar 20 10:15:37 prox39 kernel: [1843706.297980] device fwpr148p1 left promiscuous mode
Mar 20 10:15:37 prox39 kernel: [1843706.298478] vmbr1: port 2(fwpr148p1) entered disabled state
Mar 20 10:15:37 prox39 pvedaemon[176919]: <root@pam> successful auth for user 'admin@pve'
Mar 20 10:15:38 prox39 lldpd[1446]: MSAP has changed for port tap148i1, sending a shutdown LLDPDU
Mar 20 10:15:38 prox39 lldpd[1446]: unable to send packet on real device for tap148i1: No such device or address
Mar 20 10:15:38 prox39 lldpd[1446]: unable to send packet on real device for fwln148i1: No such device or address
Mar 20 10:15:38 prox39 lldpd[1446]: unable to send packet on real device for tap148i0: No such device or address
Mar 20 10:15:38 prox39 pmxcfs[1711]: [status] notice: received log
Mar 20 10:15:43 prox39 pvedaemon[176929]: VM 148 qmp command failed - VM 148 qmp command 'query-proxmox-support' failed - unable to connect to VM 148 qmp socket - timeout after 31 retries
Mar 20 10:15:45 prox39 pvestatd[1761]: VM 148 qmp command failed - VM 148 qmp command 'query-proxmox-support' failed - unable to connect to VM 148 qmp socket - timeout after 31 retries
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] link: host: 40 link: 0 is down
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] link: host: 26 link: 0 is down
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] link: host: 25 link: 0 is down
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] host: host: 40 (passive) best link: 0 (pri: 1)
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] host: host: 40 has no active links
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] host: host: 26 (passive) best link: 0 (pri: 1)
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] host: host: 26 has no active links
Mar 20 10:15:45 prox39 corosync[1719]: [KNET ] host: host: 25 (passive) best link: 0 (pri: 1)
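
In case it helps with the diagnosis, this is roughly how we look at the knet link and quorum state on an affected node (standard corosync/PVE tools):

corosync-cfgtool -s   # knet link status per cluster host
pvecm status          # quorum/membership as PVE sees it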
 
