Three servers, three NICs each: one onboard the motherboard, two on a 10GbE PCIe card.
They can each ping one another from the management NIC (onboard the motherboard).
All of these NICs were previously fully functional for testing.
All three servers were migrated to a new rack chassis - it is certainly possible that the 10GbE card moved to a different PCIe slot... I did not expect that to be an issue.
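From what I've read, a PCIe slot change can rename interfaces under the predictable-names scheme (enpXsY-style names), so /etc/network/interfaces may still reference names that no longer exist. A quick way to check (a sketch; the exact names on your nodes will differ):

```
# List current interface names and link state in brief form
ip -br link

# See which names the bridge config still expects
grep -E 'iface|bridge-ports' /etc/network/interfaces

# Kernel log of interface renames at boot
dmesg | grep -i renamed
```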
One thing I notice is that the `ip addr` output contains varying counts (44, 32, 17) of fwpr, fwln, and veth entries. I really don't remember that being the case - but it could just be from the VMs and network complexity added after the initial setup; Google results seem to support that.
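Aside: the fwpr/fwln/veth pairs appear to be the per-VM firewall and virtual-NIC plumbing Proxmox creates, so counts that track the VM count would be expected. To cut the noise and see only the physical NICs and bridges (the filter pattern is just my guess at the naming):

```
# Hide per-VM virtual interfaces, keep physical NICs and bridges
ip -br link | grep -Ev 'veth|fwbr|fwpr|fwln|tap'
```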
My primary concern is: how do I get these NICs communicating again? No node can ping another, even when I plug them directly into each other. I have tried multiple cables and a 10G switch, and I'm pretty stumped at this point.
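For the direct-link test, one way to take the bridges out of the equation is to put temporary IPs straight on the 10GbE ports; if that works, it's a config problem rather than hardware. A rough sketch, assuming the port is enp1s0f0 (substitute the real name from `ip -br link`):

```
# On node A (replace enp1s0f0 with the actual interface name)
ip link set enp1s0f0 up
ip addr add 192.168.99.1/24 dev enp1s0f0

# On node B, same idea with a second address
ip link set enp1s0f0 up
ip addr add 192.168.99.2/24 dev enp1s0f0

# Confirm the link negotiated, then ping across
ethtool enp1s0f0 | grep -E 'Speed|Link detected'
ping -c 3 192.168.99.1
```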
thx
edit: this looks pretty relevant:
https://forum.proxmox.com/threads/changing-video-card-broke-networking.40848/
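If it is the same issue (NICs renamed after a hardware change), the fix would presumably be pointing the bridge at the new name in /etc/network/interfaces and reloading. Sketch only - vmbr1, the address, and enp5s0f0 are placeholders for whatever the config and `ip -br link` actually show:

```
# /etc/network/interfaces - update bridge-ports to the new NIC name
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0
```

Then apply with `ifreload -a` (Proxmox ships ifupdown2) or a reboot.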