Bond interface shared for VLAN and bridge | VM does not get a network connection

MisterDeeds

Hello everyone

I have a PVE host with a bond interface that I use for management and clustering, but also as a NIC for the VMs.

[Screenshots: host network configuration in the PVE GUI]

In the VMs, I use the bridge as the NIC, and each VM has its own VLAN tag.

[Screenshot: VM network device using vmbr0 with a VLAN tag]

Unfortunately, however, the VM does not get a network connection. What am I doing wrong?

Code:
root@PVE004:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-helper: 7.3-3
pve-kernel-5.15: 7.3-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-15
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.4.174-2-pve: 5.4.174-2
pve-kernel-5.4.143-1-pve: 5.4.143-1
ceph: 16.2.9-pve1
ceph-fuse: 16.2.9-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.2-1
proxmox-backup-file-restore: 2.3.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Code:
root@PVE004:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno3 inet manual
#1G

auto eno1
iface eno1 inet manual
#10G

auto eno2
iface eno2 inet manual
#10G

iface eno4 inet manual
#1G

auto enp131s0f0
iface enp131s0f0 inet manual
#25G

auto enp131s0f1
iface enp131s0f1 inet manual
#25G

iface idrac inet manual

auto enp133s0f0
iface enp133s0f0 inet manual
#25G

auto enp133s0f1
iface enp133s0f1 inet manual
#25G

auto bond0
iface bond0 inet static
        address 192.168.17.163/24
        bond-slaves enp131s0f0 enp131s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        mtu 9000

auto bond1
iface bond1 inet manual
        bond-slaves enp133s0f0 enp133s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        mtu 9000

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

auto vlan10
iface vlan10 inet static
        address 192.168.10.163/24
        gateway 192.168.10.1
        mtu 9000
        vlan-raw-device bond1

Code:
root@PVE004:~# cat /etc/pve/nodes/PVE004/qemu-server/400.conf
agent: 1,fstrim_cloned_disks=1
args: -uuid 00000000-0000-0000-0000-000000000400
bios: ovmf
boot: order=sata0;ide0
cores: 2
cpu: host
efidisk0: PVNAS1-Vm:400/vm-400-disk-1.qcow2,size=128K
hostpci0: 0000:82:00.0,device-id=0x1e30,mdev=nvidia-266,sub-device-id=0x129e,sub-vendor-id=0x10de,vendor-id=0x10de,x-vga=1
ide0: none,media=cdrom
machine: pc-q35-7.1
memory: 22528
name: vPC62
net0: virtio=6A:14:2E:07:10:65,bridge=vmbr0,tag=100
numa: 1
onboot: 1
ostype: win10
sata0: PVNAS1-Vm:400/vm-400-disk-0.qcow2,cache=writeback,discard=on,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=82c4495f-dd27-4ff7-aae4-fcd73bf8d785
sockets: 2
tpmstate0: PVNAS1-Vm:400/vm-400-disk-0.raw,size=4M,version=v2.0
vmgenid: ac40cb95-1645-40eb-9e43-2f230833bda8

Thank you and best regards
 
What is the network configuration inside the guest?
Where are you trying to connect to in order to test the configuration?

What is the status of the networking service, and did the new configuration apply?
Code:
systemctl status networking
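If it did not apply, ifupdown2 ships a command to re-apply the whole configuration:
Code:
ifreload -a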
 
Dear Stefan

Thank you for the answer. The network settings have been applied.

Code:
root@PVE004:~# systemctl status networking
● networking.service - Network initialization
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2023-02-13 07:50:57 CET; 2h 48min ago
       Docs: man:interfaces(5)
             man:ifup(8)
             man:ifdown(8)
    Process: 862268 ExecStart=/usr/share/ifupdown2/sbin/start-networking start (code=exited, status=0/SUCCESS)
   Main PID: 862268 (code=exited, status=0/SUCCESS)
        CPU: 2.781s

Feb 13 07:50:53 PVE004 systemd[1]: Starting Network initialization...
Feb 13 07:50:53 PVE004 networking[862268]: networking: Configuring network interfaces
Feb 13 07:50:53 PVE004 networking[862280]: warning: bond0: attribute bond-min-links is set to '0'
Feb 13 07:50:54 PVE004 networking[862280]: warning: bond1: attribute bond-min-links is set to '0'
Feb 13 07:50:57 PVE004 systemd[1]: Finished Network initialization.

The guest operating systems are set to DHCP and should automatically receive an IP from the server, but they only get an APIPA address.

The "VLAN aware" checkbox on the bridge means that the VLAN tags are taken from the guest, correct?
[Screenshot: bridge edit dialog with the "VLAN aware" checkbox]

Thank you and best regards
 

The guest operating systems are set to DHCP and should automatically receive an IP from the server, but they only get an APIPA address.
Do they? Have you checked whether this is actually the case, e.g. via ipconfig?
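For example, on a Windows guest (the posted config has ostype win10) one way to inspect and renew the lease is with the standard commands:
Code:
ipconfig /all
ipconfig /release
ipconfig /renew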

The "VLAN aware" checkbox on the bridge means that the VLAN tags are taken from the guest, correct?
Yes, this means that this bridge can handle VLAN tags. The tag set on the VM's network device should then automatically tag all outgoing traffic with the respective VLAN tag.

How are you trying to verify whether this is working? Are you trying to ping an IP address? Internet access?
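On the host itself, a quick way to check whether the LACP bond negotiated correctly and which VLANs the bridge allows on its ports (standard Linux tooling, using the interface names from your config):
Code:
cat /proc/net/bonding/bond1
bridge vlan show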
 
Dear Stefan

Thank you very much for the answer. From the VM I try to ping the gateway, for example. I have also assigned a static IP address as a test.

[Screenshots: ping to the gateway failing from the VM]

I have now redone it so that the VLAN interface is directly part of the bridge (as per the documentation):
[Screenshots: VLAN interface configured directly on the bridge]
As soon as I set it up like this, it works (vmbr1 is then assigned to the VM):
[Screenshot: vmbr1 assigned to the VM]
So the switch ports should be correctly configured as trunks. The switch port to which the eno3 interface is connected, however, is statically defined as an access port for VLAN 10, and with that it works.

It looks to me like bond1 has a problem when it gets the VLAN tag both "statically" (via vmbr0.10) and via the VLAN-aware bridge from the guest.

Would you have another idea? Thank you and best regards
 
In general this should be no problem at all, which is why I suspected some issue with the testing in the first place.

The way I would generally set it up is by creating a VLAN-aware vmbrX that has the respective bond as its bridge port, then tagging the network devices of the VMs with the respective VLAN tag. This largely seems to be the setup you currently have.

Can you set up a vmbrX.99 / vmbrX.100 and configure an IP for the host there? Does pinging the host from the guests work then?
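A minimal sketch of such a test interface in /etc/network/interfaces, assuming a free VLAN 99 and a hypothetical test address:
Code:
auto vmbr0.99
iface vmbr0.99 inet static
        # hypothetical test address on VLAN 99; pick a free subnet
        address 192.168.99.163/24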

You can always debug the whole interface / bridge via tcpdump and look at the packets; it can often give good indications of what is wrong with the setup:

Code:
tcpdump -i vmbrX -e vlan
tcpdump -i bondX -e vlan
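With -e the link-level headers are printed, so tagged frames show their VLAN ID in the output; if the tags are visible on the bond but not on the bridge (or vice versa), that narrows down where they get lost.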
 
Dear Stefan, dear Alex

Thank you for your answers. Unfortunately, I did not get a notification, which is why I am only getting back to you now.

I have now solved it as follows:

[Screenshot: final host network configuration]

I use bond0 to carry my two VLANs (10 and 17) and have assigned bond1 to the VM bridge. The VLAN tag can now be set on the guest network devices, and it works.
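For reference, the relevant part of the resulting /etc/network/interfaces would roughly look like this (a sketch reconstructed from the description above and the addresses posted earlier, not a verbatim copy of the final config):
Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp131s0f0 enp131s0f1
        bond-miimon 100
        bond-mode 802.3ad

# host addresses now live on VLAN interfaces on top of bond0
auto vlan17
iface vlan17 inet static
        address 192.168.17.163/24
        vlan-raw-device bond0

auto vlan10
iface vlan10 inet static
        address 192.168.10.163/24
        gateway 192.168.10.1
        vlan-raw-device bond0

# bond1 is reserved for the VLAN-aware VM bridge
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094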

[Screenshot: VLAN tag set on the guest network device]

Thanks for your help and have a nice weekend
 
