[SOLVED] Trunking VM not working

fmoreira86

Member
Aug 7, 2013
I want to configure pfSense as my lab's virtual firewall.

My physical server has 4 NICs, but for this purpose I'm using only two, bonded with LACP.

My Cisco config:



Code:
!
interface FastEthernet0/45
 switchport trunk native vlan 192
 switchport mode trunk
 channel-group 1 mode passive
end

!
interface FastEthernet0/46
 switchport trunk native vlan 192
 switchport mode trunk
 channel-group 1 mode passive
end

interface Port-channel1
 switchport trunk native vlan 192
 switchport mode trunk
!
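
Since both physical ports use channel-group 1 mode passive, the server side has to negotiate LACP actively (two passive ends never form a bundle; the Linux 802.3ad bonding driver does default to active). Once the host bond is up, the aggregation state can be checked from the switch with standard IOS show commands, for example:

```
show etherchannel 1 summary   ! Po1 should show (SU), with Fa0/45-46 flagged (P)
show lacp neighbor            ! shows the partner (server) system-id and port state
```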




Proxmox config:

Code:
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


iface eth2 inet manual


iface eth3 inet manual


auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode 802.3ad


auto vmbr0
iface vmbr0 inet static
    address  192.168.192.9
    netmask  255.255.255.0
    gateway  192.168.192.253
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0


auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0

My VM config:

proxmox1.png

My PFSense Config

pfsense.png

I set "any any allow" rules on all the interfaces, so pfSense is only routing now.

Problem: I can't get the trunk to work. For instance, I can't ping 192.168.200.1 or 192.168.201.1.

SOMETIMES it starts working; other times just VLAN 200 works, other times only VLAN 201 works, sometimes both work, and MOST of the time neither works...

The WAN interface (em0 in pfSense, net0 in Proxmox) always works.

Any hint?

NOTE: This is a LAB, for my own fun, to test Proxmox + pfSense and evaluate both as possible production tools...
 
Code:
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1



Could you please post the output of the pveversion -v command?
 
Hi,

As far as I remember, it's a bug in Linux bridges.

You can't mix tagging on the host side and tagging inside the VM at the same time.

In your case, I think you have:

bond0 --> vmbr0 <---- tagging inside VM

and

bond0.202 --> vmbr0v202 ---> VM

The bond0.202 breaks tagging inside the VM.
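
In /etc/network/interfaces terms, the conflict looks roughly like this (a hypothetical sketch of what the legacy Proxmox VLAN handling auto-creates when another vNIC on vmbr0 is given tag 202; the names follow spirit's description, not actual generated output):

```
# auto-created for a vNIC on vmbr0 with tag=202 (sketch)
iface bond0.202 inet manual     # kernel VLAN subinterface: strips tag 202

iface vmbr0v202 inet manual
    bridge_ports bond0.202      # that VM's tap is attached here, untagged
```

Because tag 202 is already stripped by bond0.202, a guest on vmbr0 that tries to tag VLAN 202 itself never sees its frames, which would match the intermittent behaviour described above.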



I think you can try with Open vSwitch; it should work out of the box.
 
Ok,

I installed Open vSwitch, but now my VMs' network is not working.

My config:

Code:
# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto eth3
iface eth3 inet manual

allow-vmbr1 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_options lacp=active bond_mode=balance-tcp

iface vmbr0 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address  192.168.192.9
    netmask  255.255.255.0
    gateway  192.168.192.253
    ovs_type OVSBridge
    ovs_ports bond0

Code:
  pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-2 (running version: 3.3-2/995e687e)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-35
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-9
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Also, when I reboot Proxmox, bond0 and vmbr1 don't come up.

If I run /etc/init.d/network restart they show up and I can ping Proxmox, but I still can't get network into the VMs.

Any hint?
 
A working Open vSwitch setup with bonding, bridging, and VLANs is shown in this thread, along with a workaround for the interfaces not coming up on boot:
http://forum.proxmox.com/threads/19...ulticast-issues-(cluster-keeps-losing-quorum)
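
For reference, the boot problem reported in that era usually came down to ordering: ifupdown processed the OVS stanzas before the openvswitch-switch daemon was running, so the bridge and bond were never created. A commonly posted workaround (an assumption here, not necessarily the exact fix from the linked thread) was to kick networking again once OVS is up, e.g. from /etc/rc.local:

```
# /etc/rc.local (hypothetical workaround: re-run networking after OVS has started)
service openvswitch-switch start    # no-op if already running
service networking restart          # re-processes the OVS stanzas
exit 0
```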


Did not work

Code:
# network interface settings
auto lo
iface lo inet loopback


auto eth0
iface eth0 inet manual


auto eth1
iface eth1 inet manual


auto eth2
iface eth2 inet manual


auto eth3
iface eth3 inet manual


allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options bond_mode=balance-tcp lacp=active


auto vmbr0
iface vmbr0 inet static
    address  192.168.192.9
    netmask  255.255.255.0
    gateway  192.168.192.253
    ovs_type OVSBridge
    ovs_ports bond0

Tried the fix for getting the interfaces to come up on boot, and that also did not work...

I feel that even if it works (and I believe it would), the administrative work to get this running and all the workarounds needed (rather than a simple GUI config) might be a disadvantage for some IT teams.
 
You would need to give an example of how your network is laid out for anyone to comment on whether your configuration is valid.

The only time you would assign an IP address to the bridge itself with OVS is if you're trying to use an untagged VLAN.

I've never used the GUI for network configuration. The beauty of Open vSwitch is that the config is minimal; then you just assign VLANs to your VMs.

So you have a very basic configuration at the host level, any sysadmin should be able to handle that.

I just wrote an Open vSwitch wiki:
http://pve.proxmox.com/wiki/Open_vSwitch
 
Hi, as I said in the first post, I am evaluating Proxmox among other free "tools" for our production environment.

You can check my switch config on the first post.

Basically, what I am trying to achieve is to deliver a trunk to a VM, something like ESXi's "VLAN 4095".

To achieve this I was trying a simple LACP bond between two physical interfaces. This bond receives VLAN 192 untagged; all the others are tagged.
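
For what it's worth, the VM-side half of an ESXi-4095-style trunk in Proxmox is simply a vNIC with no VLAN tag set, so every tagged frame on the bridge reaches the guest and pfSense can define the VLANs itself. A minimal sketch of the VM config file (the VM id and MACs below are placeholders):

```
# /etc/pve/qemu-server/100.conf (excerpt; id and MACs hypothetical)
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr1   # WAN, untagged
net1: virtio=DE:AD:BE:EF:00:02,bridge=vmbr0   # trunk: no tag=, the guest does the tagging
```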

If this were a production environment I would never set it up like this, but, as I said, this is for testing only... And with VMware it is bloody simple to achieve.


Sent from my iPad using Tapatalk
 
Hi, here is the Open vSwitch config for your setup.


Code:
auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 management


allow-vmbr0 bond0
iface bond0 inet manual
        ovs_bonds eth0 eth1
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_options lacp=active bond_mode=balance-tcp


allow-vmbr0 management
iface management inet static
        address  192.168.192.9
        netmask  255.255.255.0
        gateway  192.168.192.253
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=XXX   >> optional, if you want a VLAN tag on your admin IP
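
If traffic still doesn't flow with this config, Open vSwitch can report the bond and LACP state directly; these are standard OVS commands:

```
ovs-vsctl show              # bridge/port layout as OVS sees it
ovs-appctl bond/show bond0  # bond mode and per-member status
ovs-appctl lacp/show bond0  # actor/partner LACP negotiation details
```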
 
Hi,

The behaviour is the same.

I can ping Proxmox, access the web GUI, etc., but my VMs don't have any network connection.

Result:

proxmoxnetwork.png

And I still have the problem of the interfaces not coming up after reboot:

Photo_14_10_14_11_12_37.jpg
 
Please verify what version of openvswitch you have installed ... e.g. dpkg -l | grep openvswitch

The only other thing I see is that the config spirit provided is missing

allow-ovs vmbr0

prior to the vmbr0 definition. The OVS docs state it is required; I've never tried without it.
 
Here:

Code:
dpkg -l | grep openvswitch 
openvswitch-common               2.3.0-1                       amd64        Open vSwitch common components
openvswitch-switch               2.3.0-1                       amd64        Open vSwitch switch implementations

 
Ok, your version looks good, but you didn't mention whether you added the "allow-ovs vmbr0".
 
Solved.

Reinstalled everything from scratch: apt-get update, upgrade, and then dist-upgrade, all with the http://download.proxmox.com/debian repo.

Installed Open vSwitch, installed my machines, and now everything works.

Let's continue our proxmox evaluation ;)


Sent from my iPad using Tapatalk
 
Hi, here is the Open vSwitch config for your setup.


Code:
auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 management


allow-vmbr0 bond0
iface bond0 inet manual
        ovs_bonds eth0 eth1
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_options lacp=active bond_mode=balance-tcp


allow-vmbr0 management
iface management inet static
        address  192.168.192.9
        netmask  255.255.255.0
        gateway  192.168.192.253
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=XXX   >> optional, if you want a VLAN tag on your admin IP

Hi spirit

For a long time I have tried to find out why you use the "management" port in the stanza "ovs_ports bond0 management", but I can't find an answer anywhere, not even in this link: http://pve.proxmox.com/wiki/Open_vSwitch

Can you explain what that option does?

Best regards
Cesar
 
