Second vmbr and NIC breaks WebUI/SSH access on the first

MarioG

I have a host with 5 NICs. The first NIC, nic0, is assigned to vmbr0 with a CIDR of 192.168.50.43/24. I can access the web UI and SSH into the host fine.
I then configured nic1 with vmbrWS, with a CIDR of 10.10.10.43/24 and no default gateway. I want the host to access an NFS share on the 10.10.10.x network.
nic0 is physically connected to the 192.168.50.x switch, and nic1 is physically connected to the 10.10.10.x switch.
Once I apply the settings, I lose web UI and SSH access on 192.168.50.43, but can then access both through 10.10.10.43.
But I want my web UI and cluster qdevice on the 192.168.50.x network (which is for servers).
I read that there are ARP settings for multi-homed hosts and tried changing some of them, without luck.

So, how can I have the web UI, SSH, and cluster devices all on the 192.168.50.x network (vmbr0 and nic0), but still have the Proxmox host access an NFS share on the 10.10.10.x network (vmbrWS, nic1, CIDR 10.10.10.43/24, no default gateway)?

Note: When I had Linux bridges like vmbrWS set up with no CIDR, it worked great for VMs. I could have a VM with 2 network cards, each tied to a different network, and the VM could communicate with both networks. It's just the Proxmox host talking to more than one network that gives me a problem.
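
For example, a VM gets one NIC on each bridge like this (VM ID 100 is just a placeholder):

Code:
qm set 100 --net0 virtio,bridge=vmbr0
qm set 100 --net1 virtio,bridge=vmbrWS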

Thanks in advance.
 
Why do you need to give the Proxmox host an IP address on more than one virtual bridge? Can you show the actual /etc/network/interfaces (in CODE-tags) that you want and that is giving you problems? EDIT: Are you also adjusting the /etc/hosts file for the multiple IP addresses of the Proxmox host?
 
Thank you for your response.

I want a second IP address because the second virtual bridge is connected to a second physical NIC and network:
#1 I want web UI and SSH access on 192.168.50.43 (a physical switch and network card).
#2 I want to mount, from the Proxmox host, an NFS share that is hosted on the 10.10.10.x network (also a physical switch and NIC), so I can do backups (simple ones, not PBS yet) to that device (a quick check from the host shell follows below).
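
Assuming the showmount tool (from nfs-common) is available on the host, that check would be:

Code:
# can the host reach the NAS through vmbrWS at all?
ping -c 3 -I vmbrWS 10.10.10.24
# list the exports the NAS offers
showmount -e 10.10.10.24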

For background, I am trying to phase out VMware, where I have vmkernel devices on both networks, and the ESXi host can access both networks (192.168.50.x and 10.10.10.x), as well as VMs mapped to virtual switches linked to physical NICs. VMware is set up to accomplish #1 and #2.

This is the /etc/network/interfaces file that works fine for #1 (nic1 and vmbrWS aren't used by VMs yet) but gives no NFS access (breaks #2):

Code:
auto lo
iface lo inet loopback

iface nic0 inet manual

iface nic1 inet manual

iface nic2 inet manual

iface nic3 inet manual

iface nic4 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.50.43/24
        gateway 192.168.50.1
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0

auto vmbrWS
iface vmbrWS inet manual
        bridge-ports nic1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*

When I add an IP address to vmbrWS, I can no longer access the web UI or SSH on the 192.168.50.43 address (breaking #1). I can, however, see the NFS server on 10.10.10.x, and I can access the web UI and SSH from 10.10.10.x machines, but I don't want that. The /etc/network/interfaces file then looks like this:

Code:
auto lo
iface lo inet loopback

iface nic0 inet manual

iface nic1 inet manual

iface nic2 inet manual

iface nic3 inet manual

iface nic4 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.50.43/24
        gateway 192.168.50.1
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0

auto vmbrWS
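# the only change from the working config above: vmbrWS now gets a static address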
iface vmbrWS inet static
        address 10.10.10.43/24
        bridge-ports nic1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*

I understand that the second bridge should not have a gateway, and it doesn't. I've read that ARP behavior needs to be corrected in a scenario like this, but the settings I found don't seem to correct the issue.
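
The settings in question are sysctls along these lines (the ones commonly suggested for multi-homed Linux hosts; I can't say this combination is right for this case):

Code:
# /etc/sysctl.d/99-multihome.conf
# answer ARP only for addresses configured on the interface the request came in on
net.ipv4.conf.all.arp_ignore = 1
# use the outgoing interface's own address as the ARP source
net.ipv4.conf.all.arp_announce = 2
# reply to ARP only if the kernel would route packets for that source out the same interface
net.ipv4.conf.all.arp_filter = 1

They get applied with sysctl --system.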

I am on Proxmox 9.0.11 with a new install. I want to make sure all the networking is correct before I add my second node and disaster-recovery host machines, which will all need a similar networking setup (all have 4 NICs and connections to physical switches). I will also be adding a qdevice that I want on that same 192.168.50.x network, but it obviously won't need 10.10.10.x.

Here is an example of adding NFS storage: the NFS server on 10.10.10.x shows up once I add the 10.10.10.43 IP to the second bridge, vmbrWS. Without that IP, the Export list is not populated.
[Screenshot: Datacenter - Storage - Add NFS dialog with the Export list populated]
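
The same check can be done from the CLI (the storage ID and export path below are placeholders):

Code:
# list the exports PVE can see -- this is what populates the Export dropdown
pvesm scan nfs 10.10.10.24
# equivalent of the dialog above
pvesm add nfs syn-nfs --server 10.10.10.24 --export /volume1/backups --content backup
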
So, I want my Proxmox host to be able to access 2 networks. I have no problem doing this from VMs that are linked to each virtual switch (and have an appropriate IP address).

Maybe I should ask a simpler question: with Proxmox 9.0.11 and 2 network cards on 2 physical networks, can I give the Proxmox host access to both networks without an external router/firewall? Any advice is appreciated.
 
Just to add a comment: I did not change the hosts file. I am not referring to anything by name yet, only by IP address. Once I get the IP addresses working, I will add certificates as needed and define host names (for things like each Proxmox host and the NFS server). Currently, the hosts file lists 192.168.50.43 for the host name (the FQDN and the short version).
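
It currently looks something like this (hostname is a placeholder):

Code:
127.0.0.1 localhost.localdomain localhost
192.168.50.43 pve1.mydomain.local pve1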
 
I am not using this kind of config variant, so I can only theorize that it looks OK.
But I use VLANs everywhere and never assign an IP to a bridge; I use a subinterface every time.

Anyway, PVE can access multiple networks without a firewall/router.
For NFS access you don't even need a bridge, if the physical card serves only one subnet.
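
Something like this, as an untested sketch with your naming (the VLAN ID 10 is only an example):

Code:
# variant 1: plain IP on the physical NIC, no bridge
auto nic1
iface nic1 inet static
        address 10.10.10.43/24

# variant 2: VLAN subinterface, if the switch port is tagged (e.g. VLAN 10)
auto nic1.10
iface nic1.10 inet static
        address 10.10.10.43/24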
 
Thank you for your response, czechsys. The physical card nic1 is only for the 10.10.10.x network, physically connected to a switch dedicated to 10.10.10.x.

I also tried just assigning the IP address to nic1 with no bridge. While this restored the web UI and SSH on 192.168.50.43, I still couldn't get a list of the NFS shares (like I could in the attached image), and I can't create VMs that connect to that network.

I agree that Proxmox can access multiple networks without a firewall/router, as I was able to create VMs that could access both vmbr0 and vmbrWS as long as they have 2 NICs in their hardware config. The problem starts when I try Datacenter - Storage - Add NFS for a Synology NAS at 10.10.10.24. Without the IP address on the vmbrWS bridge, VMs can access the same NFS device that the Proxmox host can't.

I should say that while I am familiar with VLANs on switches, I have never used one via Proxmox, and my first test went badly: I removed the vmbrWS bridge and added a Linux VLAN named vlanWS, linked it (vlan-raw-device) to nic1, with address 10.10.10.43/24 and a vlan-id of 1. When I applied the networking changes, I lost web UI and SSH access via both networks (192.168.50.x and 10.10.10.x) and had to resort to a remote IP KVM device to regain control of the host and undo the changes.
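
Reconstructed from memory, the config I tried looked like this:

Code:
auto vlanWS
iface vlanWS inet static
        address 10.10.10.43/24
        vlan-id 1
        vlan-raw-device nic1

If the switch port carries VLAN 1 untagged (as the native VLAN), a tagged vlan-id 1 interface would not match that traffic, which may be part of why this test failed.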

So I still have a problem and can't achieve my goals:
1. Connect to 192.168.50.43 for the web UI and SSH (vmbr0 and nic0, connected to the physical 192.168.50.x switch).
2. Allow VMs to also access that same network, via vmbr0.
3. Allow VMs to access the 10.10.10.x network (vmbrWS and nic1, connected to the physical 10.10.10.x switch).
4. Allow the Proxmox host to connect to an NFS share on a Synology located at 10.10.10.24 (I had thought via an IP on vmbrWS).

I can get any of the above working on its own, but I can't get all 4 working at once; #4 is where things break.

I would even be willing to dedicate 2 NICs to the 10.10.10.x network, one for VMs and one for the host, but I assume that as soon as I gave that new bridge an IP address, I would be back in the same boat. I only need 3 physical NICs for 3 networks, and I have 5.

Again, any input is appreciated.