It only crashes on the 3 Ceph nodes with additional NVMe devices and Mellanox ConnectX-3 - the compute nodes do not crash. Same boards, same CPU generation: all X10DRU-i+ with E5-26xx v4.
My X540-AT2-based X10DRU-i+ boards run fine when they have no NVMe storage, no ConnectX-3, and no ceph-osd.
So I do not see the Intel NIC as the culprit.
I have 11 boxes here with Supermicro X10DRU-i+ and BIOS 3.5.
The CPUs are E5-2620 v4 in the 3 Ceph nodes, and E5-2667 v4 and E5-2683 v4 in the compute nodes.
The Ceph nodes have NVMe storage and ConnectX-3 cards; the compute nodes only have the onboard X540-AT2.
6.8.4 works on the compute nodes without...
Same here, sometimes the opcode bytes are listed though:
[Mon Jul 17 05:04:28 2023] pverados[740828]: segfault at 55a4a3d5d030 ip 000055a4a3d5d030 sp 00007ffecd408178 error 14 in perl[55a4a3d31000+195000] likely on CPU 1 (core 2, socket 0)
[Mon Jul 17 05:04:28 2023] Code: Unable to access...
Could you please elaborate a bit? How do you configure the VLAN filter?
I am trying something similar, but with an Intel X553, ixgbe (Linux/Proxmox) and iavf (OPNsense). So far I have not succeeded in getting a trunk port to work in OPNsense.
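For context, this is roughly what per-VF VLAN assignment looks like with iproute2 on the hypervisor side (the PF name is a placeholder). As far as I understand, this pins the VF to a single access VLAN rather than a trunk, which may be exactly the limitation I am running into:

# Placeholder PF name; adjust to the X553 port in use.
# Pin VF 0 to VLAN 100 (hardware VLAN filter, single access VLAN):
ip link set enp3s0f0 vf 0 vlan 100
# Let the guest manage its own MAC/VLAN settings, driver permitting:
ip link set enp3s0f0 vf 0 trust on
# VID 0 removes the VLAN filter again:
ip link set enp3s0f0 vf 0 vlan 0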
The multicast issue for CARP can be solved with...
Actually, to my understanding you don't. You can run all VXLAN IDs in the same multicast group; it is counterproductive to spawn one multicast group per VXLAN.
The default on Linux is a maximum of 20 IGMP memberships...
net.ipv4.igmp_max_memberships = 20
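As a minimal sketch (device names and group address assumed), several VNIs can share one multicast group, so only one group has to be joined no matter how many VXLANs are added:

# Two VNIs, one shared multicast group -> only one group joined:
ip link add vxlan100 type vxlan id 100 group 239.1.1.1 dstport 4789 dev eth0
ip link add vxlan101 type vxlan id 101 group 239.1.1.1 dstport 4789 dev eth0
# If more groups are really needed, the limit can be raised:
sysctl -w net.ipv4.igmp_max_memberships=64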
Yes, I had a look at BGP-EVPN.
Currently I am...
I have some thoughts regarding the SDN VXLAN implementation.
The VXLAN interface is configured with vxlan_remoteip <peerips>. To my understanding this means that every BUM frame is replicated from one node to all the others, so the replication load grows with the number of peers. This has implications for scaling.
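For illustration, a minimal ifupdown2-style stanza with assumed addresses; each vxlan_remoteip line becomes an all-zeros FDB entry, so every BUM frame is sent once per listed peer (head-end replication):

auto vxlan200
iface vxlan200
    vxlan-id 200
    vxlan-local-tunnelip 10.0.0.1
    # one copy of every BUM frame goes to each peer below:
    vxlan_remoteip 10.0.0.2
    vxlan_remoteip 10.0.0.3
    vxlan_remoteip 10.0.0.4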
Another approach would be to use a...