routed lxc setup


we are trying to solve the following issue:

we created a linux vmbr2 bridge and assigned a dummy IP to it:

we also created an LXC container using this vmbr2 as the bridge for its veth (eth0),
assigned the address on its eth0, and set the default gateway.

the container setup is fine; PVE writes post-up and pre-down ip routes into the container's interfaces config.

but on the host machine we'd need some "ip route add dev vmbr2" after LXC startup and a matching "del" of this route before LXC shutdown.

after manually adding this route on the host the LXC is reachable fine.
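To make the pattern concrete: with hypothetical RFC 5737 addresses standing in for the redacted ones, the manual host-side step looks like this (the commands are printed rather than executed here, since they would need root on the node):

```shell
#!/bin/sh
# Hypothetical addresses; the thread's real ones were redacted.
CT_IP="192.0.2.10"   # container's /32 address
BRIDGE="vmbr2"

# after LXC startup, run on the host:
echo "ip route add $CT_IP/32 dev $BRIDGE"
# before LXC shutdown, run on the host:
echo "ip route del $CT_IP/32 dev $BRIDGE"
```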

please advise,
Why do you add single IPs (/32) as the only routes?
The container (actually, every network stack) needs to know how to reach its default gateway.
It would be easier to just add '' to vmbr2
and then give the container one IP from '' and set '' as the default gateway.
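For comparison, the suggested subnet-per-bridge variant would look roughly like this, with hypothetical RFC 5737 addresses in place of the redacted ones:

```
# on the node, /etc/network/interfaces
auto vmbr2
iface vmbr2 inet static
    address 192.0.2.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

# the container then gets e.g. 192.0.2.10/24 on eth0
# with 192.0.2.1 as its default gateway: plain layer-2
# switching on the bridge, no per-container host routes.
```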

well, if you want to be able to move containers from one hardware node to another without adjusting IPs and routing, this is what we do, especially with scarce real IPv4 addresses.

we'd greatly appreciate the ability (or even an automatism) to generate those /32 routes per container on each node without having to add manual configuration.

hmm -
a) are both/all PVE-nodes the routers for your containers, or
b) do you have an external router/switch/....?

if b), do all nodes see the same router?
In that case you should be able to just configure this router inside the container: a linux bridge is a layer 2 'switch' and forwards ARP etc.

if a) you might want to look at VXLAN and frr and running this on PVE - still experimental - @spirit sent some documentation patches and packaged frr - see;hb=HEAD

else please post your `/etc/network/interfaces` from your nodes and a container and describe your setup some more
we are going for a): all nodes are routers, and no layer-2 broadcast domains for containers are wanted.

we already run routing protocols on the hardware nodes, thx for the suggestion.

in order to get things really automated, the node still needs to set up the routes for its containers; this is what we are asking for.

building a vxlan bridge is an interesting idea for geographically separated proxmox clusters or any other layer-2 connection; in fact, being a service provider, we'd rather use traditional vpls, but this is beside the point.

the exemplary node interface config is, as already mentioned:

auto vmbr2
iface vmbr2 inet static
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # test-ct: this route should in fact only be added after starting the container, and this is where the fix is needed:
    post-up ip rou add dev vmbr2
    # CT gateway
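Filled in with hypothetical addresses (the originals were redacted above), the routed variant on the node would read roughly:

```
auto vmbr2
iface vmbr2 inet static
    address 192.0.2.1/32          # dummy IP, also the CT gateway
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # test-ct: should really only exist while the CT runs
    post-up ip route add 192.0.2.10/32 dev vmbr2
```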

the interfaces config in the container is entirely autogenerated by proxmox; no additional config is needed there:

iface eth0 inet static
# --- BEGIN PVE ---
post-up ip route add dev eth0
post-up ip route add default via dev eth0
pre-down ip route del default via dev eth0
pre-down ip route del dev eth0
# --- END PVE ---
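With the same hypothetical addresses as above (the real values were redacted), the PVE-generated container config would come out roughly as:

```
auto eth0
iface eth0 inet static
    address 192.0.2.10/32
# --- BEGIN PVE ---
    post-up ip route add 192.0.2.1 dev eth0
    post-up ip route add default via 192.0.2.1 dev eth0
    pre-down ip route del default via 192.0.2.1 dev eth0
    pre-down ip route del 192.0.2.1 dev eth0
# --- END PVE ---
```

i.e. an on-link host route to the gateway first, then the default route via it.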
well, we believe we could finally use these hookscripts; the open question is how to get the VE's IPv4 and IPv6 config in a portable way while the hookscript is being called. any hint is appreciated.
Hi again,
well in the meantime the hookscript magic has been created.

But - we are still looking for a way to get the VE netconfig parameters (bridge, IP) correctly during hookscript execution.

any hint would be appreciated.
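One way we could imagine (a sketch, not verified against every PVE version): `pct config <vmid>` prints the container's netX lines, which can be parsed with sed. The sample line below is hypothetical stand-in output:

```shell
#!/bin/sh
# Parse bridge and IP out of a "pct config <vmid>" net0 line.
# In a hookscript you would use:  netline="$(pct config "$1" | grep '^net0:')"
# Here a hypothetical sample line stands in for real output:
netline='net0: name=eth0,bridge=vmbr2,hwaddr=BC:24:11:00:00:01,ip=192.0.2.10/32,type=veth'

bridge="$(printf '%s\n' "$netline" | sed -n 's/.*bridge=\([^,]*\).*/\1/p')"
ctip="$(printf '%s\n' "$netline" | sed -n 's/.*[ ,]ip=\([^,]*\).*/\1/p')"

echo "bridge=$bridge ctip=$ctip"
```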
well, if anyone reads this thread in the future - we did the following:
create a hook script in /var/lib/vz/snippets/ and put a perl script like this there:

#!/usr/bin/perl

use strict;
use warnings;

print "GUEST HOOK: " . join(' ', @ARGV). "\n";
# First argument is the vmid
my $vmid = shift;
# Second argument is the phase
my $phase = shift;
# return-value
my $return;

if ($phase eq 'pre-start') {
  print "$vmid is starting, doing preparations.\n";
  $return=qx{ip route add dev vmbr2};
} elsif ($phase eq 'post-start') {
  print "$vmid started successfully.\n";
} elsif ($phase eq 'pre-stop') {
  print "$vmid will be stopped.\n";
} elsif ($phase eq 'post-stop') {
  print "$vmid stopped. Doing cleanup.\n";
  $return=qx{ip route del dev vmbr2};
} else {
  die "got unknown phase '$phase'\n";
}

and finally hooked it up to the VEID:
pct set 101 -hookscript local:snippets/

there is no real error handling; it would be a pretty good thing to check whether the vmbr2 bridge is actually there, and adding multiple routes works too, though the $return value is definitely clobbered if any qx{} before the last one fails. it would therefore still be nice to have the option to handle routing not only inside but also outside the container, but at least we have an idea how to simulate the behaviour of openvz containers.
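A sketch of what that error handling could look like, written as a shell hookscript instead of Perl. Addresses are hypothetical, and run() only prints the commands here so the flow can be followed without root; substitute `run() { "$@" || exit 1; }` on a real node:

```shell
#!/bin/sh
# Hedged sketch: hookscript with a bridge check and per-command handling.
# Addresses are hypothetical; derive CT_IP from "pct config $vmid" in real use.
CT_IP="192.0.2.10"
BRIDGE="vmbr2"

run() { echo "+ $*"; }   # dry run; replace with: run() { "$@" || exit 1; }

hook() {
    vmid="$1"; phase="$2"
    case "$phase" in
        pre-start)
            echo "$vmid is starting, doing preparations."
            run ip link show "$BRIDGE"   # fails early if the bridge is missing
            run ip route add "$CT_IP/32" dev "$BRIDGE"
            ;;
        post-start)
            echo "$vmid started successfully."
            ;;
        pre-stop)
            echo "$vmid will be stopped."
            ;;
        post-stop)
            echo "$vmid stopped. Doing cleanup."
            run ip route del "$CT_IP/32" dev "$BRIDGE"
            ;;
        *)
            echo "got unknown phase '$phase'" >&2
            return 1
            ;;
    esac
}

hook "${1:-101}" "${2:-pre-start}"
```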

in order to get those routes exported, for example to ospf, one can run frr on the host and use "redistribute kernel" in ospf.
an alternative solution would be to add those routes via frr as static routes, but for lack of time we decided to go with the more basic setup, not adding another layer to the basic VE config and setup.
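The frr side of the "redistribute kernel" approach is only a few lines of frr.conf; a minimal sketch (the router-id is an assumption, and ospfd must be enabled in /etc/frr/daemons):

```
# /etc/frr/frr.conf (sketch)
router ospf
 ospf router-id 192.0.2.1
 redistribute kernel
```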

