Using VLANs on the host and inside VM

jetchko
Jun 28, 2013
Hello,
Currently I'm testing Proxmox VE and I have the following problem (KVM virtualization only):
With the basic network setup (vmbr0 with eth0 as its port) and virtual machines connected to vmbr0 without a dedicated VLAN, everything works fine. But as soon as I configure a VLAN on the host interface (eth0), I lose connectivity on VLANs configured inside the virtual machines. If I instead configure the host VLAN interface on the bridge (vmbr0), everything works fine, until I try to attach a virtual machine to a dedicated VLAN; then things go wrong again and I lose connectivity to the VLAN configured on vmbr0.

I hope someone here has more experience with this than I do and can tell me what I'm doing wrong.

Linux proxmox-2 2.6.32-23-pve #1 SMP Tue Jul 23 07:58:26 CEST 2013 x86_64 GNU/Linux
(pvetest repo)
 
cat /etc/network/interfaces
 
Here is my /etc/network/interfaces. I simplified it while trying to track down the problem.
Originally eth0 and eth1 were bonded into bond0 (active-backup mode), bond0 was the port of vmbr0, and VLAN 301 was set up as bond0.301.

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  10.104.65.224
        netmask  255.255.254.0
        gateway  10.104.65.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto eth0.301
iface eth0.301 inet static
        address  172.16.2.10
        netmask  255.255.255.0
        broadcast  172.16.2.255
        network 172.16.2.0
        vlan_raw_device eth0
Here is the virtual machine config file:
Code:
balloon: 512
bootdisk: virtio0
cores: 2
cpu: host
ide2: none,media=cdrom
memory: 1024
name: ibm-bomc
net0: virtio=4E:A8:7C:B9:55:F4,bridge=vmbr0
ostype: l26
sockets: 1
vga: qxl
virtio0: local:2000/base-2000-disk-1.qcow2/2001/vm-2001-disk-1.qcow2,format=qcow2,size=16G
Network setup inside the VM:
Code:
vm$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 4e:a8:7c:b9:55:f4 brd ff:ff:ff:ff:ff:ff
3: vlan500@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 4e:a8:7c:b9:55:f4 brd ff:ff:ff:ff:ff:ff

vm$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 4e:a8:7c:b9:55:f4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4ca8:7cff:feb9:55f4/64 scope link 
       valid_lft forever preferred_lft forever
3: vlan500@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 4e:a8:7c:b9:55:f4 brd ff:ff:ff:ff:ff:ff
    inet 10.80.44.3/26 brd 10.80.44.63 scope global vlan500
    inet6 fe80::4ca8:7cff:feb9:55f4/64 scope link 
       valid_lft forever preferred_lft forever

vm$ ip route
10.80.44.0/26 dev vlan500  proto kernel  scope link  src 10.80.44.3 
default via 10.80.44.1 dev vlan500
And now the problem:
Code:
vm$ ping -q -c 10 10.80.44.1
PING 10.80.44.1 (10.80.44.1) 56(84) bytes of data.
--- 10.80.44.1 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 18999ms

proxmox# ifdown eth0.301
Removed VLAN -:eth0.301:-

vm$ ping -q -c 10 10.80.44.1
PING 10.80.44.1 (10.80.44.1) 56(84) bytes of data.
--- 10.80.44.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.538/0.588/0.617/0.026 ms
 
OK, I think I found a solution, but maybe also a bug in PVE network management.
If the host needs access to VLANs, those VLANs must be configured on the bridge interface (vmbr0) and NOT on the bridge ports.
Here is my /etc/network/interfaces which is working:
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
        address  10.104.65.224
        netmask  255.255.254.0
        gateway  10.104.65.254
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

auto vmbr0.301
iface vmbr0.301 inet static
        address  172.16.2.10
        netmask  255.255.255.0
        broadcast  172.16.2.255
        network 172.16.2.0
        vlan_raw_device vmbr0
With this setup I have access to the native VLAN and to VLAN 301 from the host (via vmbr0 and vmbr0.301 respectively).
Also, if you attach a KVM machine to vmbr0 without a VLAN tag, e.g.:
Code:
net0: virtio=4E:A8:7C:B9:55:F4,bridge=vmbr0
you can use the default VLAN and do any VLAN tagging inside the virtual machine.
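For reference, here is one way to reproduce the guest-side VLAN setup shown in the ip output above by hand. This is just a sketch using the addresses and VLAN ID from my VM, adjust them for your guest; a persistent setup would normally go into the guest's /etc/network/interfaces instead:
Code:
# inside the guest: tagged sub-interface on eth0 for VLAN 500
ip link add link eth0 name vlan500 type vlan id 500
ip link set vlan500 up
ip address add 10.80.44.3/26 brd 10.80.44.63 dev vlan500
ip route add default via 10.80.44.1 dev vlan500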

Things break when you try to attach a virtual machine's network interface to a specific VLAN, e.g.:
Code:
net0: virtio=0A:B4:CF:69:22:6D,bridge=vmbr0,tag=500
PVE's network implementation creates something like:
Code:
# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.e41f13ca39cc      no               bond0
                                                        tap2001i0
vmbr0v500       8000.e41f13ca39cc       no              bond0.500
                                                        tap2002i0
and this VLAN sub-interface on the bridge port (bond0.500) breaks the tagging configured on the bridge itself.

I hacked /usr/share/perl5/PVE/Network.pm to create the VLAN sub-interface on the bridge itself instead of on the bridge port:
Code:
# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.e41f13ca39cc       no              bond0
                                                        tap2001i0
vmbr0v500       8000.e41f13ca39cc       no              tap2002i0
                                                        vmbr0.500
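For clarity, this is roughly the end state my change produces for tag=500, expressed as manual shell commands rather than the actual Perl code (just a sketch of what the bridge layout above amounts to):
Code:
# tagged sub-interface created on the bridge itself, not on bond0
ip link add link vmbr0 name vmbr0.500 type vlan id 500
ip link set vmbr0.500 up
# per-VLAN bridge that carries the tagged traffic for the VM
brctl addbr vmbr0v500
brctl addif vmbr0v500 vmbr0.500
ip link set vmbr0v500 up
# the VM's tap device (tap2002i0) is then attached to vmbr0v500 as before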
Now everything works as expected, both on the host and inside the virtual machine.
As a result, I'm not sure whether this is a bug in the network setup implementation or whether there is a specific reason things are done that (in my opinion wrong) way.
 