Migrating to Open vSwitch

I reconfigured the host to use OVS for everything in order to gain access to eth0. The OVS bridge itself was not given an IP address -- instead, addresses were assigned to the individual IntPorts. Now I can ping the routers at both 192.168.44.1 and 172.16.88.1 from the host. Unfortunately, this arrangement does not allow venet addresses to leave the box. The host and the CTs can communicate, but the CTs can only see each other and the host.

-------------- /etc/network/interfaces ------------------
# network interface settings
allow-vmbr1 vlan11
iface vlan11 inet static
    address 172.16.11.20
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=11

allow-vmbr1 open44
iface open44 inet static
    address 192.168.44.20
    netmask 255.255.255.0
    gateway 192.168.44.1
    ovs_type OVSIntPort
    ovs_bridge vmbr1

allow-vmbr1 open88
iface open88 inet static
    address 172.16.88.20
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1

auto lo
iface lo inet loopback

allow-vmbr1 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports vlan11 open44 eth0 open88

iface vmbr0 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0



---------------- ovs-vsctl show ----------------------------

336878e3-a754-4886-87b6-767062daebd8
    Bridge "vmbr1"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
        Port "vmbr1"
            Interface "vmbr1"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "open44"
            Interface "open44"
                type: internal
    ovs_version: "2.0.90"
 
Moving the host IP address into the bridge rather than an IntPort has restored connectivity to the CTs with venet addresses. I re-ordered the interfaces file for readability, manually removed the vestigial vmbr0 info, and renamed the OVS bridge to vmbr0:

----------------- /etc/network/interfaces -----------------
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
    address 192.168.44.20
    netmask 255.255.255.0
    gateway 192.168.44.1
    ovs_type OVSBridge
    ovs_ports vlan11 eth0 open88

allow-vmbr0 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

allow-vmbr0 open88
iface open88 inet static
    address 172.16.88.20
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr0

allow-vmbr0 vlan11
iface vlan11 inet static
    address 172.16.11.20
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=11



------------------- ovs-vsctl show -------------------------------
dec04e84-e3e5-4ebb-8601-5a122983c4cf
    Bridge "vmbr0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
    ovs_version: "2.0.90"


I still don't seem to fully understand how the venet addresses are handled inside the host. Is there a way to get them to work using an IntPort for the host address? It seems almost like there's something inside OpenVZ that only understands the "bridge with an address" configuration.
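As far as I can tell, venet traffic is routed rather than bridged: vzctl adds a host route per CT address pointing at venet0 and relies on the host answering ARP for those addresses on the outward-facing interface, which might be why the address has to sit on the bridge itself rather than on a separate IntPort. A quick way to look at what vzctl set up (output will obviously differ per host):

ip route | grep venet0
ip neigh show proxy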
 
Now I have connectivity between the host and a KVM guest, on both the untagged and the vlan11 interfaces:

root@pve1:~# ping -c 5 172.16.11.2
PING 172.16.11.2 (172.16.11.2) 56(84) bytes of data.
64 bytes from 172.16.11.2: icmp_req=1 ttl=64 time=0.403 ms
64 bytes from 172.16.11.2: icmp_req=2 ttl=64 time=0.165 ms
64 bytes from 172.16.11.2: icmp_req=3 ttl=64 time=0.186 ms
64 bytes from 172.16.11.2: icmp_req=4 ttl=64 time=0.239 ms
64 bytes from 172.16.11.2: icmp_req=5 ttl=64 time=0.208 ms

--- 172.16.11.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.165/0.240/0.403/0.085 ms

root@pve1:~# ping -c 5 172.16.88.2
PING 172.16.88.2 (172.16.88.2) 56(84) bytes of data.
64 bytes from 172.16.88.2: icmp_req=1 ttl=64 time=0.900 ms
64 bytes from 172.16.88.2: icmp_req=2 ttl=64 time=0.196 ms
64 bytes from 172.16.88.2: icmp_req=3 ttl=64 time=0.193 ms
64 bytes from 172.16.88.2: icmp_req=4 ttl=64 time=0.235 ms
64 bytes from 172.16.88.2: icmp_req=5 ttl=64 time=0.181 ms

--- 172.16.88.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.181/0.341/0.900/0.280 ms


Both can reach the router on the untagged interface, but neither can reach the router on vlan11:

root@pve1:~# ping -c 5 172.16.88.1
PING 172.16.88.1 (172.16.88.1) 56(84) bytes of data.
64 bytes from 172.16.88.1: icmp_req=1 ttl=64 time=0.655 ms
64 bytes from 172.16.88.1: icmp_req=2 ttl=64 time=0.177 ms
64 bytes from 172.16.88.1: icmp_req=3 ttl=64 time=0.177 ms
64 bytes from 172.16.88.1: icmp_req=4 ttl=64 time=0.178 ms
64 bytes from 172.16.88.1: icmp_req=5 ttl=64 time=0.172 ms

--- 172.16.88.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.172/0.271/0.655/0.192 ms

root@pve1:~# ping -c 5 172.16.11.1
PING 172.16.11.1 (172.16.11.1) 56(84) bytes of data.
From 172.16.11.20 icmp_seq=1 Destination Host Unreachable
From 172.16.11.20 icmp_seq=2 Destination Host Unreachable
From 172.16.11.20 icmp_seq=3 Destination Host Unreachable
From 172.16.11.20 icmp_seq=4 Destination Host Unreachable
From 172.16.11.20 icmp_seq=5 Destination Host Unreachable

--- 172.16.11.1 ping statistics ---
5 packets transmitted, 0 received, +5 errors, 100% packet loss, time 4007ms
pipe 3


Is there a trick to getting tagged vlan packets out of the host and onto the wire?
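One thing probably worth checking (just a guess at this point): the eth0 port on the OVS bridge needs to act as a trunk for VLAN 11, and the physical switch port has to carry VLAN 11 tagged as well, otherwise the frames are dropped before they ever reach the router. For example:

ovs-vsctl list port eth0      # "tag" should be empty; an empty "trunks" means all VLANs are carried
ovs-appctl fdb/show vmbr0     # shows which VLAN each learned MAC address was seen on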
 
I did not succeed with venet addresses either - but if I use veth (i.e. defining just a virtual NIC and not an address when creating the CT) it works. Connecting to OVS bridges then has to be done with console commands.

e.g.

ovs-vsctl add-port vmbr0 veth100.0 tag=11


in /etc/pve/openvz/100.conf you have in that case

NETIF="ifname=eth0,bridge=vmbr0,mac=xx:xx:xx:xx:xx:xx,host_ifname=veth100.0,host_mac=xx:xx:xx:xx:xx:xx"
 
We seem to be in need of better documentation on VLANs.

I added a veth named eth0.11 to the CT using the GUI, which resulted in

NETIF="ifname=eth0.11,bridge=vmbr0,mac=B2:22:94:55:C5:1A,host_ifname=veth103.0,host_mac=5E:40:2E:73:86:77"

When I start that CT from the GUI, I see an 'OK' in the task log.

When I start that CT from the command line, I get

Configure veth devices: veth103.0
Adding interface veth103.0 to bridge vmbr0 on CT0 for CT103
can't add veth103.0 to bridge vmbr0: Operation not supported
Container start in progress...


What I hoped to do was bridge the CT with the vlan11 IntPort (already tagged) somehow.
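As far as I can tell an IntPort is just another port on the bridge rather than a bridge of its own, so there is nothing to attach the veth to there; the closest equivalent seems to be attaching the veth to vmbr0 itself with the same tag:

ovs-vsctl add-port vmbr0 veth103.0 tag=11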
 
Manually adding veth103.0 to the bridge produces a similar result:

NETIF="ifname=eth0.11,bridge=vmbr0,mac=1A:51:BD:C6:72:C4,host_ifname=veth103.0,host_mac=1E:B7:F2:56:64:F1"

root@pve1:~# ovs-vsctl add-port vmbr0 veth103.0 tag=11

root@pve1:~# vzctl start 103
Starting container ...
Container is mounted
Setting CPU units: 1000
Setting CPUs: 2
Configure veth devices: veth103.0
Adding interface veth103.0 to bridge vmbr0 on CT0 for CT103
can't add veth103.0 to bridge vmbr0: Operation not supported
Container start in progress...

root@pve1:~# ovs-vsctl show
dec04e84-e3e5-4ebb-8601-5a122983c4cf
    Bridge "vmbr0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "tap100i0"
            tag: 11
            Interface "tap100i0"
        Port "tap100i1"
            Interface "tap100i1"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
        Port "veth103.0"
            tag: 11
            Interface "veth103.0"
    ovs_version: "2.0.90"
 
I was wondering if this might be easier using a Fake Bridge (http://blog.scottlowe.org/2012/10/19/vlans-with-open-vswitch-fake-bridges/), but it looks like Fake Bridges might actually be the same thing as IntPorts. vlan11 was created using the GUI as an OVS IntPort. I added a Fake Bridge called vmbr0.11 and it shows up the same way in OVS:

root@pve1:~# ovs-vsctl add-br vmbr0.11 vmbr0 11
root@pve1:~# ovs-vsctl show
dec04e84-e3e5-4ebb-8601-5a122983c4cf
    Bridge "vmbr0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "tap100i0"
            tag: 11
            Interface "tap100i0"
        Port "tap100i1"
            Interface "tap100i1"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
        Port "vmbr0.11"
            tag: 11
            Interface "vmbr0.11"
                type: internal
    ovs_version: "2.0.90"

When I remove the new Fake Bridge, it destroys the old IntPort in the process:

root@pve1:~# ovs-vsctl del-br vmbr0.11
root@pve1:~# ovs-vsctl show
dec04e84-e3e5-4ebb-8601-5a122983c4cf
    Bridge "vmbr0"
        Port "eth0"
            Interface "eth0"
        Port "tap100i1"
            Interface "tap100i1"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
    ovs_version: "2.0.90"

Adding back a vlan11 Fake Bridge recreates the vlan11 port, but tap100i0 is still gone. Not exactly sure what that did/does. I can't find a lot of documentation on tap interfaces, other than a bunch of mentions related to standard Linux bridging.

root@pve1:~# ovs-vsctl add-br vlan11 vmbr0 11
root@pve1:~# ovs-vsctl show
dec04e84-e3e5-4ebb-8601-5a122983c4cf
    Bridge "vmbr0"
        Port "eth0"
            Interface "eth0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "tap100i1"
            Interface "tap100i1"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
    ovs_version: "2.0.90"

 
More info: the vlan11 stanza was still in /etc/network/interfaces after removing it with ovs-vsctl, so I rebooted the host. Now vlan11 is back, but both tap100 interfaces are gone:

-------------- ovs-vsctl show ---------------------
7ab123e0-eb12-4059-83df-2a55452c7075
    Bridge "vmbr0"
        Port "eth0"
            Interface "eth0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
    ovs_version: "2.0.90"
 
Obviously it is not configurable via the GUI - when you start the CT you always get the kind of message you mentioned (I guess it tries to connect to a Linux bridge of that name):

root@pve1:~# ovs-vsctl add-port vmbr0 veth103.0 tag=11

root@pve1:~# vzctl start 103
Starting container ...
Container is mounted
Setting CPU units: 1000
Setting CPUs: 2
Configure veth devices: veth103.0
Adding interface veth103.0 to bridge vmbr0 on CT0 for CT103
can't add veth103.0 to bridge vmbr0: Operation not supported
Container start in progress...


But it is resolvable by calling ovs-vsctl from the CLI as above after the CT has started! Moreover: if the port is already in the bridge when the CT starts, it has no effect: delete it first (with ovs-vsctl del-port) and add it again!

Note that in your case the interface name (inside the container) is eth0 rather than eth0.11.

On the OVS host side, use veth103.0, which is the host-side counterpart of the container's eth0.
You have no VLAN tagging at all there -- the tagged VLAN 11 from the physical NIC is
converted into an untagged interface by OVS.

The solution works (I've checked the LAN traffic, and especially the VLAN tags, with wireshark and tcpdump); it would be interesting if somebody has a better idea ...
 
Thanks -- still not quite clear on the sequence you used. Create the veth before the CT is started, then start the CT, then add to the bridge the veth that the add script created?

Edit -- tried that and it seems to work:

root@pve1:~# vzctl start 103
Starting container ...
Container is mounted
Setting CPU units: 1000
Setting CPUs: 2
Configure veth devices: veth103.0
Adding interface veth103.0 to bridge vmbr0 on CT0 for CT103
can't add veth103.0 to bridge vmbr0: Operation not supported
Container start in progress...

root@pve1:~# ovs-vsctl add-port vmbr0 veth103.0 tag=11

root@pve1:~# ovs-vsctl show
7ab123e0-eb12-4059-83df-2a55452c7075
    Bridge "vmbr0"
        Port "eth0"
            Interface "eth0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "veth103.0"
            tag: 11
            Interface "veth103.0"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
    ovs_version: "2.0.90"


Time to do some more testing. Will this config survive a reboot?
 
Same results as before -- CT can ping the host in VLAN 11, but not the router out on the wire. Is there a trick to getting tagged packets to leave the box?

[root@sipx /]# ifconfig eth0 172.16.11.23 netmask 255.255.255.0

[root@sipx /]# ifconfig
eth0 Link encap:Ethernet HWaddr BE:A3:F8:E2:73:E0
inet addr:172.16.11.23 Bcast:172.16.11.255 Mask:255.255.255.0
inet6 addr: fe80::bca3:f8ff:fee2:73e0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:384 (384.0 b)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6183 errors:0 dropped:0 overruns:0 frame:0
TX packets:6183 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1599749 (1.5 MiB) TX bytes:1599749 (1.5 MiB)

[root@sipx /]# ping -c 5 172.16.11.20
PING 172.16.11.20 (172.16.11.20) 56(84) bytes of data.
64 bytes from 172.16.11.20: icmp_seq=1 ttl=64 time=1.15 ms
64 bytes from 172.16.11.20: icmp_seq=2 ttl=64 time=0.030 ms
64 bytes from 172.16.11.20: icmp_seq=3 ttl=64 time=0.025 ms
64 bytes from 172.16.11.20: icmp_seq=4 ttl=64 time=0.031 ms
64 bytes from 172.16.11.20: icmp_seq=5 ttl=64 time=0.034 ms

--- 172.16.11.20 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.025/0.255/1.159/0.452 ms

[root@sipx /]# ping -c 5 172.16.11.1
PING 172.16.11.1 (172.16.11.1) 56(84) bytes of data.
From 172.16.11.23 icmp_seq=2 Destination Host Unreachable
From 172.16.11.23 icmp_seq=3 Destination Host Unreachable
From 172.16.11.23 icmp_seq=4 Destination Host Unreachable
From 172.16.11.23 icmp_seq=5 Destination Host Unreachable

--- 172.16.11.1 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 14000ms
pipe 3



FYI, there seems to be some kind of issue with CentOS's setup scripts, possibly related to running in a CT:

#system-config-network

/usr/share/system-config-network/netconfpkg/NCHostsList.py:100: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
badlines.append((num, value_exception.message))
/usr/share/system-config-network/netconfpkg/NCHostsList.py:105: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
""" % (value_exception.message, num)
/usr/share/system-config-network/netconfpkg/NCProfileList.py:142: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
self.error = e.message





┌──────┤ Select Action ├──────┐
│ │
│ Device configuration │
│ DNS configuration │
│ │
│ │
│ │
│ ┌───────────┐ ┌──────┐ │
│ │ Save&Quit │ │ Quit │ │
│ └───────────┘ └──────┘ │
│ │
│ │
└─────────────────────────────┘


And of course no apparent awareness of the existence of eth0 inside the container until it is explicitly added to the network config.
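For reference, a minimal /etc/sysconfig/network-scripts/ifcfg-eth0 inside the CT along these lines (using the test address from above) seems to be enough to make the interface persistent -- treat it as a sketch rather than the exact file:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.11.23
NETMASK=255.255.255.0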
 
Thanks -- still not quite clear on the sequence you used. Create the veth before the CT is started, then start the CT, then add to the bridge the veth that the add script created?

Yes - but the veth is already created automatically.

Unfortunately the "manual" configuration "ovs-vsctl add-port" does not survive a reboot - not a CT´s and of course not a host´s too. In that case you have to repeat "del-port" and "add-port"
 
Adding an untagged veth to the CT did not work as expected either:

NETIF="ifname=eth0,bridge=vmbr0,mac=BE:A3:F8:E2:73:E0,host_ifname=veth103.0,host_mac=F6:5A:BA:DE:60:0E;ifname=eth1,bridge=vmbr0,mac=A2:38:7F:B8:C3:D4,host_ifname=veth103.1,host_mac=46:CC:6C:C7:DA:3F"

root@pve1:~# ovs-vsctl add-port vmbr0 veth103.1

root@pve1:~# vzctl enter 103
entered into CT 103

[root@sipx /]# ping -c 5 172.16.88.20
PING 172.16.88.20 (172.16.88.20) 56(84) bytes of data.
From 172.16.88.23 icmp_seq=2 Destination Host Unreachable
From 172.16.88.23 icmp_seq=3 Destination Host Unreachable
From 172.16.88.23 icmp_seq=4 Destination Host Unreachable
From 172.16.88.23 icmp_seq=5 Destination Host Unreachable

--- 172.16.88.20 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 14000ms
pipe 3

[root@sipx /]# exit
logout
exited from CT 103

root@pve1:~# ovs-vsctl show
7ab123e0-eb12-4059-83df-2a55452c7075
    Bridge "vmbr0"
        Port "eth0"
            Interface "eth0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "veth103.0"
            tag: 11
            Interface "veth103.0"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "veth103.1"
            Interface "veth103.1"
        Port "open88"
            Interface "open88"
                type: internal
    ovs_version: "2.0.90"


No luck adding it to the IntPort (Fake Bridge) either:

root@pve1:~# ovs-vsctl del-port vmbr0 veth103.1

root@pve1:~# ovs-vsctl show
7ab123e0-eb12-4059-83df-2a55452c7075
    Bridge "vmbr0"
        Port "eth0"
            Interface "eth0"
        Port "vlan11"
            tag: 11
            Interface "vlan11"
                type: internal
        Port "veth103.0"
            tag: 11
            Interface "veth103.0"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "open88"
            Interface "open88"
                type: internal
    ovs_version: "2.0.90"

root@pve1:~# ovs-vsctl add-port open88 veth103.1
ovs-vsctl: no bridge named open88

 
Adding an untagged veth to the CT did not work as expected either:

NETIF="ifname=eth0,bridge=vmbr0,mac=BE:A3:F8:E2:73:E0,host_ifname=veth103.0,host_mac=F6:5A:BA:DE:60:0E;ifname=eth1,bridge=vmbr0,mac=A2:38:7F:B8:C3:D4,host_ifname=veth103.1,host_mac=46:CC:6C:C7:DA:3F"

root@pve1:~# ovs-vsctl add-port vmbr0 veth103.1

root@pve1:~# vzctl enter 103
entered into CT 103

[root@sipx /]# ping -c 5 172.16.88.20
PING 172.16.88.20 (172.16.88.20) 56(84) bytes of data.



As I understand it, 172.16.88.20/24 is on (physical) NIC eth0 with VLAN tag 88.

So you need:

- an internal port (you called it "open88") on vmbr0 (where eth0 is also "connected") with tag=88 (missing in your example) for the host's address in this network

- veth103.1 with tag=88 (missing in your example) for the CT's connection to its eth0 (where it works untagged!)
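
Roughly, assuming 172.16.88.0/24 really is tagged as VLAN 88 on the wire, that would mean adding "ovs_options tag=88" to the open88 stanza in /etc/network/interfaces and attaching the CT's port with something like:

ovs-vsctl add-port vmbr0 veth103.1 tag=88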
 
open88 is actually an untagged interface -- mostly there for testing to see if tags are causing the issues.

I'm seeing the same results you are within the box -- including the rather annoying need to re-establish OVS bridges after restarting containers.

However, I'm not seeing VLAN packets from CTs leave the box.

root@pve1:~# tcpdump -i vlan11
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vlan11, link-type EN10MB (Ethernet), capture size 65535 bytes

>>>>>>>>> Here's the output during a ping from CT103 to the host IP in VLAN 11

15:47:40.923886 ARP, Request who-has 172.16.11.20 tell 172.16.11.23, length 28
15:47:40.923924 ARP, Reply 172.16.11.20 is-at 66:ea:73:1d:ba:74 (oui Unknown), length 28
15:47:40.924220 IP 172.16.11.23 > 172.16.11.20: ICMP echo request, id 35881, seq 1, length 64
15:47:40.924240 IP 172.16.11.20 > 172.16.11.23: ICMP echo reply, id 35881, seq 1, length 64
15:47:41.923824 IP 172.16.11.23 > 172.16.11.20: ICMP echo request, id 35881, seq 2, length 64
15:47:41.923845 IP 172.16.11.20 > 172.16.11.23: ICMP echo reply, id 35881, seq 2, length 64
15:47:42.924812 IP 172.16.11.23 > 172.16.11.20: ICMP echo request, id 35881, seq 3, length 64
15:47:42.924839 IP 172.16.11.20 > 172.16.11.23: ICMP echo reply, id 35881, seq 3, length 64
15:47:43.924980 IP 172.16.11.23 > 172.16.11.20: ICMP echo request, id 35881, seq 4, length 64
15:47:43.924998 IP 172.16.11.20 > 172.16.11.23: ICMP echo reply, id 35881, seq 4, length 64
15:47:44.925118 IP 172.16.11.23 > 172.16.11.20: ICMP echo request, id 35881, seq 5, length 64
15:47:44.925144 IP 172.16.11.20 > 172.16.11.23: ICMP echo reply, id 35881, seq 5, length 64
15:47:45.923673 ARP, Request who-has 172.16.11.23 tell 172.16.11.20, length 28
15:47:45.923812 ARP, Reply 172.16.11.23 is-at e6:b9:ff:77:f3:24 (oui Unknown), length 28

>>>>>>>>> Here's the output during a ping from CT103 to the router in VLAN 11

15:47:52.874890 ARP, Request who-has 172.16.11.1 tell 172.16.11.23, length 28
15:47:53.874799 ARP, Request who-has 172.16.11.1 tell 172.16.11.23, length 28
15:47:54.874798 ARP, Request who-has 172.16.11.1 tell 172.16.11.23, length 28
15:47:56.875800 ARP, Request who-has 172.16.11.1 tell 172.16.11.23, length 28
15:47:57.875800 ARP, Request who-has 172.16.11.1 tell 172.16.11.23, length 28
15:47:58.875802 ARP, Request who-has 172.16.11.1 tell 172.16.11.23, length 28

>>>>>>>>> Here's a ping from the router in VLAN 11 to the host (same results to the CT) in VLAN11

172.16.11.1 Destination Host Unreachable
172.16.11.1 Destination Host Unreachable
172.16.11.1 Destination Host Unreachable
172.16.11.1 Destination Host Unreachable
172.16.11.1 Destination Host Unreachable


Invalid ping data (5 packets transmitted, 0 received, +5 errors, 100% packet loss, time 4008ms)
Invalid ping data (pipe 3)

No output from tcpdump when this happened. Same results using 'tcpdump net 172.16.11.0/24'
I could not get tcpdump to bind to eth0 or vmbr0 since they did not have IP addresses in the range.
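As far as I know tcpdump does not actually need an IP address on the capture interface, so something like this should show whether the tagged frames ever reach the physical NIC (-e prints the link-level headers, including the 802.1Q tag):

tcpdump -e -n -i eth0 vlan 11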
 
Seems like the NIC is not responding to ARPs, which led me to http://ckdake.com/content/2008/vlans-in-openvz.html (admittedly a 5+ year old post).

but:

root@pve1:/proc/sys/net/ipv4/conf# cat vlan11/proxy_arp
0
root@pve1:/proc/sys/net/ipv4/conf# cat eth0/proxy_arp
0
root@pve1:/proc/sys/net/ipv4/conf# cat default/proxy_arp
0
root@pve1:/proc/sys/net/ipv4/conf# cat venet0/proxy_arp
0
root@pve1:/proc/sys/net/ipv4/conf# cat all/proxy_arp
0
root@pve1:/proc/sys/net/ipv4/conf# cat eth0/proxy_arp
0
root@pve1:/proc/sys/net/ipv4/conf# cat ovs-system/proxy_arp
0
root@pve1:/proc/sys/net/ipv4/conf# cat vmbr0/proxy_arp
0


How is proxy ARP configured in Proxmox?
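I'm not even sure proxy ARP is the missing piece for a veth setup, but for the record it can be switched on per interface with sysctl, e.g.:

sysctl -w net.ipv4.conf.vlan11.proxy_arp=1

(or a net.ipv4.conf.vlan11.proxy_arp = 1 line in /etc/sysctl.conf to make it persistent).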
 
Now that the internal VLAN is working, it needs to be able to survive a reboot. Adding

allow-vmbr0 veth103.0
iface veth103.0 inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=11


to /etc/network/interfaces didn't quite work.

auto veth103.0
allow-vmbr0 veth103.0
iface veth103.0 inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=11

did, but the syntax differs from that used for other interface types.

Also:
Even though
ifdown veth103.0
returns
Cannot find device "veth103.0"
it does remove the iface.

Maybe we need an ifup script tailored for veth interfaces?
 
Surviving a reboot mainly means surviving a VM's or CT's reboot!

Why?

Ports for VMs and CTs only make sense when the respective VM or CT is up. Only for the internal ports (such as "open88" in your case) is the host's reboot the criterion. And this works without problems.

For CTs and VMs it is not the host's /etc/network/interfaces that is responsible, but the startup scripts for the CTs and VMs. As far as I have seen it works fine for VMs; the respective ports ("tap<vm-id>i..") are up after the VM starts.


Where it's not working automatically is with CTs. Responsible would be /usr/sbin/vzctl, which should issue the proper ovs-vsctl commands when the container starts (and stops; the host's ifup, ifdown as well as /etc/network/interfaces are not involved here).

To automate it for CTs I made a special routine (let's call it "my-vzctl") with parameters identical to vzctl:

#!/bin/bash
# call the real vzctl, but keep a copy of its output for the OVS helper below
/usr/sbin/vzctl "$@" 2>&1 | tee "$HOME/vzctl-call.log"
# re-attach any veth ports mentioned in the output to the right OVS bridge
./vzovs "$HOME"



where vzovs is a perl script in the same directory:

#!/usr/bin/perl
# Scan the captured vzctl output for lines like
#   "can't add veth103.0 to bridge vmbr0: Operation not supported"
# and redo the bridging with ovs-vsctl instead.

open(vzop, "$ARGV[0]/vzctl-call.log");
while (($xz = readline(*vzop)) ne "") {

    $fica = index($xz, 't add');                    # matches the "can't add ..." message
    if ($fica != -1) {
        $bica = index($xz, 'vmbr');
        $fuca = index($xz, ' to', $fica);
        $buca = index($xz, ': ', $bica);
        $pona = substr($xz, $fica + 6, $fuca - $fica - 6);   # port name, e.g. veth103.0
        $brina = substr($xz, $bica, $buca - $bica);          # bridge name, e.g. vmbr0
        $tag = "";
        $vix = index($pona, '-');
        if ($vix != -1) {                           # e.g. "veth305.0-16" -> VLAN tag 16
            $tag = substr($pona, $vix + 1);
            $tag = "tag=$tag";
        }
        $ovscall2 = "ovs-vsctl add-port $brina $pona $tag";
        $ovscall1 = "ovs-vsctl del-port $brina $pona 2> /dev/zero";
        print "activate $pona at $brina\n";

        system($ovscall1);
        system($ovscall2);
    }
}
close(vzop);


Instead of starting a machine via the GUI you call

./my-vzctl start 305

in order to start container 305.

Note: in the current configuration files there is no syntax defined for a VLAN tag - this script interprets a value after "-" in a veth port name as the VLAN tag, e.g. veth305.0-16 (in that case edit the interface via the GUI and rename the Host-Ifname) is bridged with VLAN tag 16.
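
For example (hypothetical CT 305 bridged into VLAN 16), the relevant part of /etc/pve/openvz/305.conf would then look something like:

NETIF="ifname=eth0,bridge=vmbr0,mac=xx:xx:xx:xx:xx:xx,host_ifname=veth305.0-16,host_mac=xx:xx:xx:xx:xx:xx"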
 
Ports for VMs and CTs only make sense when the respective VM or CT is up. Only for the internal ports (such as "open88" in your case) is the host's reboot the criterion. And this works without problems.

No argument there.

Where it's not working automatically is with CTs. Responsible would be /usr/sbin/vzctl, which should issue the proper ovs-vsctl commands when the container starts (and stops; the host's ifup, ifdown as well as /etc/network/interfaces are not involved here).

Yes, this is indeed the issue. Something needs to add the veth to the bridge. It should probably be done by the GUI when the veth is created.

I think the VLAN problems deserve a new thread. I suspect few people are reading all the way to the bottom of this one. Thanks again for your assistance -- I am slowly starting to understand the basic model.
 
