Screenshot 1 shows your NIC is disconnected - all links are showing DOWN status.
From screenshot 2, it appears the name of the NIC has changed on the new system - it was 'eno1', now it's 'enp6s0' - so you'd need to edit the config to match. It should look like this:
iface lo inet...
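For reference, a minimal sketch of the relevant part of /etc/network/interfaces with the new name - the addresses here are placeholders, your bridge and IP details will differ:

```
auto lo
iface lo inet loopback

# physical NIC renamed from eno1 to enp6s0 on the new hardware
iface enp6s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp6s0   # was: bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

After editing, `ifreload -a` (or a reboot) should bring the links up.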
Of course there are arguments for (simplicity, efficiency) and against (modification of the host environment, complex CLI-based administration), but I've never seen any of the Proxmox staff advise against running a system this way. I've had my systems set up both ways in the past...
The Intel-branded ones should work, as the hardware vendor of the NIC is Intel (X520). An email to FS.com support will confirm. Not sure where you are based, but FS.com ship from Germany, so you may have to pay import taxes if you're not in the eurozone. You can mix manufacturers - e.g. HP to Cisco or...
Before anything else: you won't get 20Gbit/sec between a single PC and the server, so don't be disappointed in your transfer speeds. However, if you have one server and two PCs, then with a bonded interface you could see each PC running at 10Gbit simultaneously, giving a combined throughput of 20Gbit.
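A bonded interface on the Proxmox side might look something like this - a sketch assuming LACP (802.3ad) with the switch configured to match; the interface names enp6s0f0/enp6s0f1 and addresses are placeholders:

```
auto bond0
iface bond0 inet manual
        bond-slaves enp6s0f0 enp6s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```

Note that any single flow still tops out at one link's speed - the hash policy only spreads different flows across the two links.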
Accessibility via TeamViewer only proves the VM is still running
"when restarting the router proxmox also performed a restart of the nodes" - so is this a cluster setup? Why would restarting the router reset the nodes?
Are your IPs statically assigned, or is DHCP involved?
Your network setup seemed sensible and, just out of curiosity, I generated traffic to one container while running Wireshark in the other and saw no sign of any leakage. I was going to suggest that you consider multiple bridges in a routed config, but it seems that's not necessary.
But you're not enabling IP forwarding:
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
and you're not routing via a 'real' interface - e.g.
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
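Putting the two together, the bridge stanza would look roughly like this - a sketch only; vmbr1, the 10.10.10.0/24 subnet and eno1 are taken from the lines above, and the address is an example:

```
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
```

The post-down line just removes the NAT rule again when the bridge goes down, so repeated ifup/ifdown cycles don't stack duplicate rules.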
If LAN1 is local to that node (i.e. it has no physical connection) then DHCP broadcasts are not going to reach other nodes. I don't know if or how it might be done via iptables. Given that the VMs are local to the node, the easiest way might be to run a DHCP server on each node - you should not need...
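As a sketch, dnsmasq is one lightweight way to do that on each node. Assuming the node-local bridge is vmbr1 on 10.10.10.1 (the range below is just an example), a minimal /etc/dnsmasq.d/vmbr1.conf might be:

```
# serve DHCP only on the node-local bridge
interface=vmbr1
bind-interfaces
dhcp-range=10.10.10.100,10.10.10.200,12h
dhcp-option=option:router,10.10.10.1
```

Each node would run its own copy with its own range, since the broadcasts never leave the node anyway.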
You have your public IP and gateway on enp4s0, *but* you also have the same IP assigned to vmbr0 (...ok....), yet vmbr0 is not linked to a physical NIC, *and* then you seem to be routing via vmbr0?
As I said, your config is not making any sense to me...
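For comparison, the usual working layout is to put the public IP on the bridge and enslave the NIC to it - not the IP on both. A sketch, with placeholder addresses:

```
# NIC carries no IP of its own
iface enp4s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports enp4s0   # bridge is linked to the physical NIC
        bridge-stp off
        bridge-fd 0
```

The alternative (IP on the NIC, a separate unlinked bridge for a routed setup) is also valid, but mixing the two as described above is what's breaking things.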