OpenVZ (venet) containers on different interfaces and networks

jinjer

I would like to run OpenVZ containers using the venet device on different networks.

I normally host containers on a private network, protected with firewalls, but I also need some of these containers to bypass the firewalls and use a different routing/gateway.

I managed to achieve a half-baked setup that works by modifying the routing tables on the Proxmox server to use policy-based routing as follows:

1. Add an interface for the Proxmox server on each network. I'm not sure whether an IP address is required, but I assigned one to each interface.
2. Add a routing table to /etc/iproute2/rt_tables in case it's not there
3. Add rules and a route entry for each vz container with:

ip rule add from $ip table vztable
ip rule add to $ip table vztable
ip route add $ip/32 dev venet0 table vztable
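
Putting the steps together as a script would look roughly like this (a sketch only; the table number, IP addresses and gateway below are placeholders, adjust for your setup):

#!/bin/bash
# policy routing for a single container IP (example values)
CT_IP=192.168.10.50        # IP of the container
TABLE=vztable              # dedicated routing table
GW=10.0.0.1                # gateway on the second network

# step 2: register the table in /etc/iproute2/rt_tables if missing
grep -qw "$TABLE" /etc/iproute2/rt_tables || echo "100 $TABLE" >> /etc/iproute2/rt_tables

# step 3: steer traffic from/to the container IP through that table
ip rule add from $CT_IP table $TABLE
ip rule add to $CT_IP table $TABLE
ip route add $CT_IP/32 dev venet0 table $TABLE

# the alternative gateway the container should use for outbound traffic
ip route add default via $GW table $TABLE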

This is easy enough when the containers stay on a single Proxmox server, but the setup is cumbersome to maintain on a cluster with migrating containers.

It would be nice to modify the OpenVZ network scripts so that they create/delete the "from" and "to" rules for the container's IP as per the scheme above.

Any hints on where to act would be appreciated (I admit I have not tried to dig into the vz documentation).

Thank you.
 
If you put the rules in a bash file and save it in /etc/pve/priv on one of the nodes, it will be automatically distributed to all nodes in the cluster through Proxmox's cluster file system. Then you need to activate the rules in this file on demand when loading a CT.
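
One way to trigger that when a container starts (an untested sketch, assuming the standard vzctl per-container action scripts) is a mount script such as /etc/vz/conf/<veid>.mount, which vzctl runs when the CT is mounted and which receives VEID and VE_CONFFILE in its environment:

#!/bin/bash
# /etc/vz/conf/<veid>.mount - sketch of a per-container action script
# sourcing the CT config gives us IP_ADDRESS
. /etc/vz/vz.conf
. "$VE_CONFFILE"

TABLE=vztable              # example table name
for ip in $IP_ADDRESS; do
    ip rule add from $ip table $TABLE 2>/dev/null
    ip rule add to $ip table $TABLE 2>/dev/null
    ip route add $ip/32 dev venet0 table $TABLE 2>/dev/null
done
exit 0

A matching <veid>.umount script would delete the same rules, and both could simply source the shared bash file kept in /etc/pve/priv so the logic stays identical on every node.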
 
I'm close to solving the "issue".

There is one last problem, and it is ARP.

OpenVZ is not answering ARP requests on the second interface, where the other network is connected. I see the ARP request coming in, but there is no ARP reply.

Any idea where to look for this?

EDIT: It seems that the vzarp function in /usr/lib/vzctl/scripts has something to do with this.
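
If I read the script correctly, what that function effectively does is manage proxy ARP (neighbour) entries on the host devices, roughly the manual equivalent of the following (sketch; the IP and bridge are just examples):

ip neigh add proxy 192.0.2.10 dev vmbr1    # host answers ARP for the CT IP on this bridge
ip neigh del proxy 192.0.2.10 dev vmbr1    # inverse, when the CT is stopped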
 
Ok, it seems there is a bug in the detection of the proper device on which to respond to ARP requests.

In the default <veid>.conf we have:

NEIGHBOUR_DEVS=detect

This is supposed to work, but it does not. When I change it to the correct interface, I get proper ARP answers:

NEIGHBOUR_DEVS=vmbr1
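
A quick way to check that the host now proxies ARP on the right device (the interface name is just my example):

ip neigh show proxy              # the container IP should appear against vmbr1
tcpdump -ni vmbr1 arp            # both the request and the reply should be visible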
 
I'd like to add a final comment on the above. After restoring the OpenVZ container from a backup, the NEIGHBOUR_DEVS parameter disappeared from the <veid>.conf (it was removed by Proxmox or the vz tools).

With that parameter gone, I have no ARP issues anymore. The correct interface is detected and the host answers the ARP queries properly.

So... this was an issue with upgrading from old versions of Proxmox to new ones.
 
