Trunk in KVM

un1x0d

Renowned Member
Mar 14, 2012
Hello.

I'm using a VLAN trunk inside a KVM guest.

This is the network configuration on the node:
Code:
cat /etc/network/interfaces 
# network interface settings

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address  x.x.x.x
    netmask  x.x.x.x
    gateway  x.x.x.x
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

Network configuration of the Debian guest in KVM:
Code:
cat /etc/network/interfaces 
auto lo
iface lo inet loopback

auto eth0.1000
iface eth0.1000 inet static
    address x.x.x.x
    netmask x.x.x.x
    gateway x.x.x.x
    vlan_raw_device eth0

I'm adding a VLAN interface to the configuration on the node:
Code:
iface bond0.2000 inet manual
    vlan_raw_device bond0

When I bring up the bond0.2000 interface on the node
Code:
ifup bond0.2000
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Added VLAN with VID == 2000 to IF -:bond0:-
the trunk inside the KVM guest goes down (ping timeout).
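A quick way to see what state the node is in at that point is to check which VLAN interfaces the kernel has registered and which ports are attached to the bridge (a diagnostic sketch; brctl comes from the bridge-utils package):
Code:
# list VLAN interfaces known to the 8021q module
cat /proc/net/vlan/config
# show which ports are attached to the bridge
brctl show vmbr0
If bond0.2000 shows up in /proc/net/vlan/config while the guest's trunk is down, that points at the host stealing the tagged frames before they reach the bridge.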

Proxmox 1.9 works with this configuration. Why does it not work on Proxmox 2.0?

I was wrong. It does not work on either version. Why?

Proxmox version on node:
Code:
pveversion 
pve-manager/2.0/18400f07
 
Hi,
is eth0.1000 your guest VM's network configuration?

Maybe you can try tagging the VLAN on the host side.
In your VM's network card configuration, choose bridge=vmbr0 and set vlan=1000.

This automatically creates a new bond0.1000 interface on the Proxmox host and a new bridge "vmbr0v1000", and attaches the guest's network card to it.
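The result should be roughly equivalent to this /etc/network/interfaces fragment (a sketch of what Proxmox sets up at VM start; the names assume bridge vmbr0 on top of bond0 and VLAN tag 1000):
Code:
# VLAN interface on top of the bond (created automatically)
auto bond0.1000
iface bond0.1000 inet manual
    vlan_raw_device bond0

# per-VLAN bridge the guest NIC is attached to
auto vmbr0v1000
iface vmbr0v1000 inet manual
    bridge_ports bond0.1000
    bridge_stp off
    bridge_fd 0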
 
Hi,
is eth0.1000 your guest VM's network configuration?
Yes. It carries the trunk from the network.

Maybe you can try tagging the VLAN on the host side.
In your VM's network card configuration, choose bridge=vmbr0 and set vlan=1000.

This automatically creates a new bond0.1000 interface on the Proxmox host and a new bridge "vmbr0v1000", and attaches the guest's network card to it.

That would give the VM an "access port", but I want to use a trunk inside the VM.
 
Hi, I'm not sure you can mix bond0.2000 on the host and VLAN tagging inside the VM.

Maybe the VLAN tags from the VMs are stripped at the output of bond0 on the host if other VLANs exist.

Can you ping between two VMs (each with eth0.1000) on the same bridge vmbr0?


Maybe you can try putting VLAN 2000 on the physical node interfaces:

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth0.2000 inet manual

iface eth1.2000 inet manual

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode 802.3ad

auto bond2000
iface bond2000 inet manual
    slaves eth0.2000 eth1.2000
    bond_miimon 100
    bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address  x.x.x.x
    netmask  x.x.x.x
    gateway  x.x.x.x
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
 
I'm not mixing bond0.2000 and the VLAN trunk inside the VM. When I bring bond0.2000 up on the node, the trunk inside the VM stops working. When I bring bond0.2000 down on the node, the trunk inside the VM comes back and pings succeed.

Your method is not bad, but I want to do as little manual work on the console as possible.

I found a similar error on one of the forums:
By default, the tagged packets are 'brouted' into the bridge code before the vlan code gets to see them. To stop this behaviour, you need an ebtables rule like:
ebtables -t broute -A BROUTING -p 802_1Q -i eth0 -j DROP
which tells the bridge code not to touch any 802.1Q packets, which in turn lets the vlan code see them.

But this solution did not help me.
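Adapted to this node, the rule would target bond0 (the bridge port here) rather than eth0 (a sketch, assuming the ebtables and 8021q modules are loaded):
Code:
# in the broute table, DROP means "route, don't bridge": 802.1Q-tagged
# frames arriving on bond0 bypass the bridge so the host's vlan code sees them
ebtables -t broute -A BROUTING -p 802_1Q -i bond0 -j DROP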
 
