[SOLVED] Dual NIC questions

bferrell

Well-Known Member
I'm realizing that I don't know as much about Linux networking as I need to, and try as I might I can't seem to glean what I need to know from searches. Here's a picture of what I'm trying to accomplish. I think this is the actual issue that I started working on in this post.

192.168.100.0/24 (BRIDGE): 10G bridge for R720 PVE hosts and guests; 4 cluster HA PVE nodes
192.168.101.0/24: 10G storage network; 4 cluster PVE nodes' storage (NFS) and the cluster FreeNAS
192.168.102.0/24: 1G Corosync network; 4 cluster HA PVE nodes

I set up the Corosync network back on 5.x when I was occasionally getting odd TOTEM errors, and they went away, probably because it's configured in the corosync file. I configured migration in the datacenter file, and I'm getting good migration speed (800 MB/s or better). But my backups are slow (about 100 MB/s) and I don't think I'm getting good overall storage speed, though I'm not sure how to objectively measure that. I can check all 3 interfaces with iperf3 and I get 9+ Gbit/s on both 10G networks and right at 1G on the Corosync network, so the hardware and the switch are working fine; I just don't have it configured properly in PVE.
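In case it helps anyone, the best idea I've found for getting an objective storage number is running fio from a node directly against the NFS mount, bypassing the page cache (the mount path below is just a placeholder for whatever PVE created for the storage):

Code:
# sequential write test straight to the NFS mount; --direct=1 bypasses the client page cache
root@svr-03:~# fio --name=seqwrite --filename=/mnt/pve/freenas-nfs/fio.test \
  --rw=write --bs=1M --size=4G --direct=1 --numjobs=1 --group_reporting

Comparing that number to the iperf3 result should show whether the bottleneck is the network or the storage itself.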

My general understanding was that I only needed a bridge if the VMs needed to talk on the interface. Since PVE manages the ISO and image storage, I don't think most of my VMs need that, but my Plex VM does (it accesses a large SMB share for media), so I believe it needs to have two VirtIO network cards, one attached to each bridge, correct?
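If I've got the syntax right, adding the second card would be something like this (VM ID 100 is just a placeholder for the Plex VM):

Code:
# first NIC on the main bridge, second NIC on the storage bridge
root@svr-03:~# qm set 100 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1

Then inside the guest the second interface would get a static 192.168.101.x address with no gateway, so only storage traffic uses it.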

How do I get my storage communication onto the 101 network? Do I need to add a static route? Since the UniFi switch can't do L3 routing, I want to keep the traffic on the switch rather than having it go up to the USG-XG-8 and back.
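For reference, the NFS storage is defined against the FreeNAS address on the 101 network, roughly like this in /etc/pve/storage.cfg (export path anonymized):

Code:
nfs: freenas-nfs
        server 192.168.101.101
        export /mnt/tank/pve
        path /mnt/pve/freenas-nfs
        content images,iso,backup

My assumption is that since the server address is in the 101 subnet, the node should pick the NIC on that subnet automatically.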


Code:
root@svr-03:~# iperf3 -c 192.168.101.104
Connecting to host 192.168.101.104, port 5201
[ 5] local 192.168.101.13 port 59874 connected to 192.168.101.104 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.09 GBytes 9.40 Gbits/sec 0 1.35 MBytes
[ 5] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.35 MBytes
[ 5] 2.00-3.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.35 MBytes
[ 5] 3.00-4.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.35 MBytes
[ 5] 4.00-5.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.35 MBytes
[ 5] 5.00-6.00 sec 1.09 GBytes 9.41 Gbits/sec 0 1.42 MBytes
[ 5] 6.00-7.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.42 MBytes
[ 5] 7.00-8.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.42 MBytes
[ 5] 8.00-9.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.42 MBytes
[ 5] 9.00-10.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.42 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 11.0 GBytes 9.41 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 11.0 GBytes 9.41 Gbits/sec receiver

iperf Done.
root@svr-03:~#

Code:
2020-03-07 21:12:46 migration speed: 992.97 MB/s - downtime 105 ms
2020-03-07 21:12:46 migration status: completed
2020-03-07 21:12:50 migration finished successfully (duration 00:01:16)
TASK OK

Code:
root@svr-03:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.100.1 0.0.0.0 UG 0 0 0 vmbr0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
192.168.101.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr1
192.168.102.0 0.0.0.0 255.255.255.0 U 0 0 0 eno1
root@svr-03:~#
 

Attachments

  • net_cfg.jpg (118.9 KB)
  • diagram.jpg (111.1 KB)
  • NFS_backup_cfg.jpg (35.6 KB)
So, if I check the route, it seems like it should be going out the right bridge interface to get to my FreeNAS servers on the 101 network.

Code:
root@svr-04:~# ip route get 192.168.101.101
192.168.101.101 dev vmbr1 src 192.168.101.14 uid 0
cache
root@svr-04:~# ip route get 192.168.100.12
192.168.100.12 dev vmbr0 src 192.168.100.14 uid 0
cache
 
Can anyone confirm that this is fine for routing to my storage LAN (192.168.101.0/24), and that I don't need to add any routing commands?

If your NIC is on the same network as the destination, routing is normally not involved due to the direct connection.
 
@LnxBil - Thanks, I wasn't sure how it was handled when the host has dual NICs. Like I mentioned above, it looks like it's routing to the NIC properly, but since my switch (UniFi XG-16) can't do L3 routing, and my router only has one 10G port (router-on-a-stick), I would like to confirm it's not going up there and coming back down. I'm clearly not a Linux master, but I think this tells me it's going directly out of the node's secondary NIC, right (NIC1 on 192.168.100.0/24 and NIC2 on 192.168.101.0/24)?

Code:
root@svr-04:~# ip route get 192.168.101.101
192.168.101.101 dev vmbr1 src 192.168.101.14 uid 0
cache
root@svr-04:~# ip route get 192.168.100.12
192.168.100.12 dev vmbr0 src 192.168.100.14 uid 0
cache
 
Again, there is no routing involved if all nodes are in the same subnet. The data cannot go out of a NIC if the subnet does not match (actually it can, but you need to configure that manually, because it is not standard).

Can't you monitor your throughput on your switch?

And if you're unsure whether you're using routing or not ... do a traceroute. There should not be a host in between if there is no routing (and yes, there are also exceptions to this, but only if you deliberately build it like that).
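For completeness, the manual configuration I mentioned is just an explicit device route, e.g. (the subnet here is only an example):

Code:
# force traffic for a foreign subnet out a specific interface, no gateway involved
ip route add 10.10.10.0/24 dev eno1

But for your setup there is no need for anything like this.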
 
So, that output isn't showing me the route - I thought that it was, in fact, showing me the routing via each bridge being used on the host?

Anyway, the traceroute result is below, and it doesn't show the interface that's being used. But I don't understand your comment about traffic not being able to go out of the NIC if the subnet doesn't match; all internet traffic is on a different subnet and it goes out the cluster NIC no problem. But, given that neither output below shows my router being involved, that does narrow it down. And the screenshot below does show that when I kick the backup off, traffic spikes on the SAN/101 network, confirming it is being utilized. So, I guess I can close this thread as "not a network problem". I'm still not sure why the read/write speeds are so low.

Code:
root@svr-04:~# traceroute 192.168.101.101
traceroute to 192.168.101.101 (192.168.101.101), 30 hops max, 60 byte packets
1  192.168.101.101 (192.168.101.101)  0.488 ms  0.431 ms  0.412 ms
root@svr-04:~# traceroute 192.168.100.11
traceroute to 192.168.100.11 (192.168.100.11), 30 hops max, 60 byte packets
1  svr-01.bdfserver.com (192.168.100.11)  0.500 ms  0.443 ms  0.422 ms
root@svr-04:~#
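Another way to confirm it, without the controller graphs, would be watching the byte counters on the storage bridge while a backup runs:

Code:
# RX/TX byte counters; compare before and after kicking off a backup
root@svr-04:~# ip -s link show vmbr1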
 

Attachments

  • server_04_networking_perf.jpg (53.3 KB)
So, that output isn't showing me the route - I thought that it was, in fact, showing me the routing via each bridge being used on the host?

It shows you that the traffic is not routed; it just leaves the network adapter whose subnet matches the destination.

Maybe an example is better than trying to explain it over and over:

Code:
proxmox-laptop:~# ip route get 192.168.1.1
192.168.1.1 dev vmbr0 src 192.168.1.6 uid 0

proxmox-laptop:~# ip route get 8.8.8.8
8.8.8.8 via 192.168.1.1 dev vmbr0 src 192.168.1.6 uid 0

The first one is just like yours, stating that the packets leave vmbr0 directly. The second is routed via 192.168.1.1 because the subnet does not match.

Anyway, traceroute result is below, and it doesn't show the interface that's being used.

Of course not, because that is - as I tried to explain repeatedly - not routing. Routing is only necessary between different network segments; you are part of the destination network the whole time, therefore no routing is performed.

But I don't understand your comment about traffic not being able to go out of the NIC if the subnet doesn't match, all internet traffic is on a different subnet and it goes out the cluster NIC no problem.

If you want to reach a computer in the same network, the packet is just sent directly to the destination. If you want to reach a computer in a non-local network (e.g. the internet), then you cannot send directly and have to send via a gateway. Therefore, in the local scenario, a router is not involved, and that is what you see in your output.
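You can also see this in the neighbour table: for a local destination the node resolves the destination's own MAC address, while for remote destinations only the gateway's MAC ever shows up:

Code:
# local destinations appear here with their own MAC; remote ones never do, only the gateway
ip neigh show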

I'm still not sure why the read/write speeds are so low.

We're dissecting this there.
 
