VM unable to see secondary network

mikeely

New Member
Jan 3, 2025
Greetings. We have a cluster with the following network configuration. Most of it is working correctly, but when I try to add a second NIC to a VM, the VM can't see the network. Here's the layout:

Host level - everything works here:
bond0 (used for management only)
bond1 -> bond1.110 (VLAN, MTU 9000) -> netappbr0 (bridge with IP, MTU 9000, connects to storage)
bond1 -> vmbr1 (VLAN aware, MTU 9000, used for VM connectivity)

Cluster level SDN:
vmbr1 -> Zone Data (vlan, MTU 1500) -> VNet VLAN200 (tag 200, vlan aware, works fine)
vmbr1 -> Zone Data (vlan, MTU 1500) -> VNet VLAN210 (tag 210, vlan aware, works fine)
vmbr1 -> Zone Storage (vlan, MTU 9000) -> VNet VLAN110 (tag 110, vlan aware, DOES NOT WORK)

I can verify that the jumbo MTU works without fragmentation from netappbr0, and that VMs connected to either VLAN200 or VLAN210 (both at MTU 1500) have connectivity. What am I missing here? Why do connections using VLAN110 see nothing on that network?
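
For reference, by "without fragmentation" I mean a don't-fragment ping sized for the jumbo MTU, along these lines (the target address here is just an example host on the storage network):

Code:
# 8972 = 9000-byte MTU minus 20 bytes (IP header) and 8 bytes (ICMP header)
ping -M do -s 8972 172.16.10.1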
 
Please post the contents of the following files:
  • /etc/pve/qemu-server/<VMID_of_VM_needing_second_NIC>.conf
  • /etc/network/interfaces
  • /etc/network/interfaces.d/sdn
 
Sure thing, thanks:
  • /etc/pve/qemu-server/<VMID_of_VM_needing_second_NIC>.conf
    • Code:
      agent: 1
      boot: order=scsi0;ide2;net0
      cores: 1
      cpu: x86-64-v2-AES
      ide2: none,media=cdrom
      memory: 4096
      meta: creation-qemu=9.0.2,ctime=1735870522
      name: proxmox-client1.sr
      net0: virtio=BC:24:11:E9:70:B9,bridge=VLAN210
      net1: virtio=BC:24:11:94:E2:F2,bridge=VLAN110,mtu=1
      numa: 0
      onboot: 1
      ostype: l26
      scsi0: NetApp:100/vm-100-disk-0.qcow2,iothread=1,size=80G
      scsihw: virtio-scsi-single
      smbios1: uuid=902e1de7-3456-4e73-9fcf-88c9882a1c35
      sockets: 2
      vmgenid: b46ad4cd-54ea-4106-9bb1-28d26b563b97
  • /etc/network/interfaces
    • Code:
      # network interface settings; autogenerated
      # Please do NOT modify this file directly, unless you know what
      # you're doing.
      #
      # If you want to manage parts of the network configuration manually,
      # please utilize the 'source' or 'source-directory' directives to do
      # so.
      # PVE will preserve these directives, but will NOT read its network
      # configuration from sourced files, so do not attempt to move any of
      # the PVE managed interfaces into external files!
      
      auto lo
      iface lo inet loopback
      
      auto eno1
      iface eno1 inet manual
      
      auto eno2
      iface eno2 inet manual
      
      auto enp1s0f0
      iface enp1s0f0 inet manual
      
      auto enp1s0f1
      iface enp1s0f1 inet manual
      
      auto bond0
      iface bond0 inet static
              bond-slaves eno1 eno2
              bond-miimon 100
              bond-mode 802.3ad
              bond-xmit-hash-policy layer2+3
              bond-downdelay 200
              bond-updelay 200
              bond-lacp-rate fast
      
      auto bond0.210
      iface bond0.210 inet manual
      
      auto bond1
      iface bond1 inet manual
              bond-slaves enp1s0f0 enp1s0f1
              bond-miimon 100
              bond-mode 802.3ad
              bond-xmit-hash-policy layer2+3
              mtu 9000
              bond-downdelay 200
              bond-updelay 200
              bond-lacp-rate fast
      
      auto bond1.110
      iface bond1.110 inet manual
              mtu 9000
      #NetApp
      
      auto vmbr0
      iface vmbr0 inet static
              address (redacted).142/27
              gateway (redacted).129
              bridge-ports bond0.210
              bridge-stp off
              bridge-fd 0
      
      auto vmbr1
      iface vmbr1 inet manual
              bridge-ports bond1
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094
              mtu 9000
      
      auto netappbr0
      iface netappbr0 inet static
              address 172.16.10.111/24
              bridge-ports bond1.110
              bridge-stp off
              bridge-fd 0
              mtu 9000
      
      source /etc/network/interfaces.d/*
  • /etc/network/interfaces.d/sdn
    • Code:
      #version:15
      
      auto VLAN110
      iface VLAN110
              bridge_ports vmbr1.110
              bridge_stp off
              bridge_fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094
              mtu 9000
              alias NetApp
      
      auto VLAN200
      iface VLAN200
              bridge_ports vmbr1.200
              bridge_stp off
              bridge_fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094
              mtu 1500
              alias Kickstart
      
      auto VLAN210
      iface VLAN210
              bridge_ports vmbr1.210
              bridge_stp off
              bridge_fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094
              mtu 1500
              alias Initial Public
 
Thank you for sharing the contents of those files.

At first glance, it seems like it should be working. Please try changing:

net1: virtio=BC:24:11:94:E2:F2,bridge=VLAN110,mtu=1

to the following, which bridges directly to vmbr1 and adds the tag for VLAN 110:

net1: virtio=BC:24:11:94:E2:F2,bridge=vmbr1,mtu=1,tag=110

If this works, there is likely an issue in the config we have not identified. If the bridge with the VLAN tag does not work, I would look at the configuration of the switch connected to bond1.
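
If it is easier, the same change can be made from the CLI. Going by the disk name in the conf above, the VMID appears to be 100:

Code:
qm set 100 --net1 virtio=BC:24:11:94:E2:F2,bridge=vmbr1,mtu=1,tag=110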


How are you testing connections using VLAN110 and determining they "aren't seeing anything on that network"? Ping? Curl? An application?
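
It would also be worth watching the uplink on the Proxmox host while pinging from the VM, to see whether tagged frames are actually leaving the host. Something like:

Code:
# -e prints link-level headers so the VLAN tag is visible
tcpdump -nei bond1 vlan 110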
 
I've tried your suggestion and restarted the VM, but still no joy. I'm using ping to verify connectivity. The switch config is pretty straightforward, and the VLAN does work from the Proxmox host; I can verify that because the NFS store is able to connect.

Code:
vm-dist1-1.sr> show configuration interfaces xe-0/0/24     
description "[Sys] to a.proxmox.sr data";
ether-options {
    802.3ad ae33;
}

{master:0}
vm-dist1-1.sr> show configuration interfaces ae33 
apply-groups std-mclag-active-v1;
description "[Sys] to a.proxmox.sr data";
aggregated-ether-options {
    lacp {
        admin-key 33;
    }
    mc-ae {
        mc-ae-id 33;
        redundancy-group 1;
    }
}
unit 0 {
    family ethernet-switching {
        vlan {
            members [ 110 200 210-219 420 1001-1003 1100-1103 3001-3006 ];
        }
    }
}
 
I ended up creating an Open vSwitch (OVS) configuration for the faster bond, then connecting the SDN zones to the OVS bridge. Now all is well.
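
For anyone who finds this later, here is a minimal sketch of the kind of OVS setup I mean, reusing the interface names from the configs above (the exact ovs_options will depend on your switch, and the openvswitch-switch package must be installed):

Code:
auto bond1
iface bond1 inet manual
        ovs_bridge vmbr1
        ovs_type OVSBond
        ovs_bonds enp1s0f0 enp1s0f1
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
        mtu 9000

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond1
        mtu 9000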
 