Howto: Network with 3 NICs on separate subnets

Udbytossen

Member
Apr 25, 2019
Hi Forum.
I've just bought a new firewall (Netgate XG-7100-1U) with pfSense installed.
My server is an HPE ProLiant with one onboard NIC and two NICs on a PCIe card. I hope this is possible somehow, so I'll try to describe my setup and plans for the network.

Administration interface (172.16.10.0/24) - administration LAN for the switch and Proxmox --> the onboard NIC
DMZ interface (192.168.19.0/24) - all VMs are placed in this zone.
I'm thinking of creating the cluster on the administration interface, so I can connect to Proxmox VE (all in subnet 172.16.10.0/24), and then bond the two PCIe NICs and connect the bond to vmbr1 (as LACP).
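That bonding plan could look something like this in /etc/network/interfaces (a sketch only; the interface names enp2s0f0/enp2s0f1 are taken from my ip address output further down, and the LACP options assume the switch/pfSense side is configured for 802.3ad as well):

Code:
# sketch: LACP bond over the two PCIe ports, attached to vmbr1
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0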

Since I cannot get Proxmox to use a static DHCP lease instead of a static IP, I'm having this issue:
On my PVE host I've set the main NIC (the onboard one) as vmbr0, with the interface enp3s0 as bridge port.
From there I can ping 172.16.10.1 (and the other hosts on this net), but I cannot ping google.dk or any external IP. Since it's a static IP, this causes some issues, as there is only one gateway.

I've tried to bond the two PCIe LAN interfaces as balance-rr, both as a normal Linux bond and with Open vSwitch, but with the same result each time.
https://forum.proxmox.com/attachments/nic-png.11184/?temp_hash=7c6756063506b14229b34481ed5b3855

I can ping 172.16.10.1 (gateway = pfSense), but not google - it says "Network is unreachable".
I cannot ping 192.168.19.1 (gateway = pfSense) - no network.
But when I run the command ip address, I get this:

Code:
root@pve02:~# ping google.dk
connect: Network is unreachable
root@pve02:~# ping google.dk
connect: Network is unreachable
root@pve02:~# ping 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=0.190 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=64 time=0.199 ms
^C
--- 172.16.10.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 4ms
rtt min/avg/max/mdev = 0.190/0.194/0.199/0.014 ms
root@pve02:~# ping 192.168.19.100
PING 192.168.19.100 (192.168.19.100) 56(84) bytes of data.
64 bytes from 192.168.19.100: icmp_seq=1 ttl=64 time=0.067 ms
^C
--- 192.168.19.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
root@pve02:~# ping 192.168.19.1
PING 192.168.19.1 (192.168.19.1) 56(84) bytes of data.
From 192.168.19.100 icmp_seq=1 Destination Host Unreachable
From 192.168.19.100 icmp_seq=2 Destination Host Unreachable
From 192.168.19.100 icmp_seq=3 Destination Host Unreachable
^C
--- 192.168.19.1 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 93ms
pipe 4
root@pve02:~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 68:b5:99:79:fc:e3 brd ff:ff:ff:ff:ff:ff
3: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP group default qlen 1000
    link/ether 00:26:55:db:65:66 brd ff:ff:ff:ff:ff:ff
4: enp2s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP group default qlen 1000
    link/ether 00:26:55:db:65:67 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 68:b5:99:79:fc:e3 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::6ab5:99ff:fe79:fce3/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:26:55:db:65:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.19.100/24 brd 192.168.19.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::226:55ff:fedb:6566/64 scope link
       valid_lft forever preferred_lft forever

root@pve02:~# route add -net 192.168.19.0/24 gw 192.168.19.1 dev vmbr1
SIOCADDRT: Network is unreachable
root@pve02:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.10.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
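The routing table above contains no default route, which is exactly what "Network is unreachable" for external IPs means. Until a gateway line is added in /etc/network/interfaces, a default route can be added by hand to test (a sketch, assuming 172.16.10.1 on vmbr0 is the intended management gateway, as in the output above; this change is lost on reboot):

Code:
# temporary: point the default route at pfSense on the management net
ip route add default via 172.16.10.1 dev vmbr0
# verify
ip route show

Note also that the failed "route add -net 192.168.19.0/24 gw 192.168.19.1" attempt is not needed: a route to a directly connected network comes from the interface's own address (192.168.19.100/24 on vmbr1), not from a gateway.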

So my question: I'd like something similar to a VMware setup, where you have a primary (administration) interface and bridge the PCIe NICs to be usable as a bond/switch for PVE.
How can I handle this in Proxmox? I've played around with Proxmox VE for some time, but I cannot add a second gateway for this type of setup, so I cannot make it work as intended.
How do I add these two NICs (likely as balance-rr / LACP 802.3ad) for use in the DMZ zone? My plan is to have several VMs attached to this bridge/bond to get the best throughput for all VMs in the DMZ zone.
Hopefully you get my point here.
* It seems you have not defined a default route (/etc/network/interfaces contains no gateway line).
* You need to set _one_ default gateway if you want your PVE node to be able to reach anything not directly connected.
* You cannot set more than one default gateway (without some manual intervention), and usually you don't need to:
** Choose which interface is the management interface for the PVE node, and configure it with a gateway.
** For the other interface, do not configure any IP on the bridge (a bridge is a layer-2 switch; you don't need an IP on it for guests to be able to send packets out).
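Put together, an /etc/network/interfaces along these lines would implement that advice (a sketch only; addresses and interface names are taken from your post, and the bond for the DMZ side is an assumption based on your LACP plan):

Code:
auto lo
iface lo inet loopback

iface enp3s0 inet manual

# management bridge - the ONE place with an IP and the default gateway
auto vmbr0
iface vmbr0 inet static
        address 172.16.10.11/24
        gateway 172.16.10.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0

auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode 802.3ad

# DMZ bridge - no IP here; it just switches guest traffic
auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

The guests attached to vmbr1 then configure 192.168.19.1 (pfSense) as their own gateway; the PVE host itself never needs a second gateway.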

I hope this helps!