Hi Forum.
I've just bought a new firewall (Netgate XG-7100-1U) with pfSense installed.
One server is an HPE ProLiant with one onboard NIC and two NICs on a PCIe card. I'm hoping this is possible somehow, so I'll try to describe my setup/plans for the network.
Administration interface (172.16.10.0/24) - administration LAN for the switch and Proxmox --> the onboard NIC
DMZ interface (192.168.19.0/24) - all VMs are placed in this zone.
I'm thinking of creating the cluster on the administration interface, so I can connect to Proxmox VE there (all in subnet 172.16.10.0/24), and then bonding the two PCIe NICs and connecting the bond to vmbr1 (as LACP).
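To make it concrete, the kind of /etc/network/interfaces setup I have in mind for the DMZ side is roughly this (just a sketch; the interface names are the ones from my ip address output below, and the bond options are my best guess):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet static
        address 192.168.19.100
        netmask 255.255.255.0
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0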
Since I cannot get Proxmox to use a static DHCP lease instead of a static IP, I'm having this issue:
On my PVE host I've set the main NIC (the onboard one) up as vmbr0, with the interface enp3s0 as its slave.
From here I can ping 172.16.10.1 (and the other hosts on this net) and should have internet access etc., but I cannot ping google.dk or any external IP. Since it's a static IP this causes some issues, because the host can only have one gateway.
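As far as I understand, with a static setup only one interface in /etc/network/interfaces can carry the default gateway, e.g. something like:

Code:
iface vmbr0 inet static
        address 172.16.10.11
        netmask 255.255.255.0
        gateway 172.16.10.1
        # only one default gateway is possible, so vmbr1 cannot get its own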
I've tried bonding the two PCIe LAN interfaces as balance-rr, both as a normal Linux bond and with Open vSwitch, but with the same result each time.
https://forum.proxmox.com/attachments/nic-png.11184/?temp_hash=7c6756063506b14229b34481ed5b3855
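The plain Linux bond attempt looked roughly like this (reconstructed from memory; for the Open vSwitch test I used an OVS bond instead, with the closest matching mode, since OVS has no balance-rr):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode balance-rr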
I can ping 172.16.10.1 (gateway = pfSense), but not Google - it says "Network is unreachable".
I cannot ping 192.168.19.1 (gateway = pfSense) - no network.
And when I run the command ip address, I'm getting this:
Code:
root@pve02:~# ping google.dk
connect: Network is unreachable
root@pve02:~# ping google.dk
connect: Network is unreachable
root@pve02:~# ping 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=0.190 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=64 time=0.199 ms
^C
--- 172.16.10.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 4ms
rtt min/avg/max/mdev = 0.190/0.194/0.199/0.014 ms
root@pve02:~# ping 192.168.19.100
PING 192.168.19.100 (192.168.19.100) 56(84) bytes of data.
64 bytes from 192.168.19.100: icmp_seq=1 ttl=64 time=0.067 ms
^C
--- 192.168.19.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
root@pve02:~# ping 192.168.19.1
PING 192.168.19.1 (192.168.19.1) 56(84) bytes of data.
From 192.168.19.100 icmp_seq=1 Destination Host Unreachable
From 192.168.19.100 icmp_seq=2 Destination Host Unreachable
From 192.168.19.100 icmp_seq=3 Destination Host Unreachable
^C
--- 192.168.19.1 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 93ms
pipe 4
root@pve02:~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 68:b5:99:79:fc:e3 brd ff:ff:ff:ff:ff:ff
3: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP group default qlen 1000
link/ether 00:26:55:db:65:66 brd ff:ff:ff:ff:ff:ff
4: enp2s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP group default qlen 1000
link/ether 00:26:55:db:65:67 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 68:b5:99:79:fc:e3 brd ff:ff:ff:ff:ff:ff
inet 172.16.10.11/24 brd 172.16.10.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::6ab5:99ff:fe79:fce3/64 scope link
valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:26:55:db:65:66 brd ff:ff:ff:ff:ff:ff
inet 192.168.19.100/24 brd 192.168.19.255 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::226:55ff:fedb:6566/64 scope link
valid_lft forever preferred_lft forever
root@pve02:~# route add -net 192.168.19.0/24 gw 192.168.19.1 dev vmbr1
SIOCADDRT: Network is unreachable
root@pve02:~# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.16.10.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
So what I'm after must look like a VMware setup, where you have a primary interface (administration) and bridge the PCIe NICs so they're usable as a bond/switch for PVE.
How can I handle this in Proxmox? I've played around with Proxmox VE for some time, but I cannot add the second gateway for this type of setup, and I cannot make it work as intended.
How do I add these two NICs (likely as balance-rr / LACP 802.3ad) for use in the DMZ zone? My plan is to have several VMs attached to this bridge/bond to get the best performance for all VMs in the DMZ zone.
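The idea is that each DMZ VM would simply get its virtual NIC attached to that bridge, for example (VMID 101 is just an example):

Code:
# attach a VM's first NIC to the DMZ bridge
qm set 101 -net0 virtio,bridge=vmbr1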
Hopefully you'll get my point here.