Containers & VMs take 'forever' to get network connectivity

tymanthius

New Member
Nov 22, 2015
I'm running pfSense on a Proxmox 4 install.

99% of it is running great.

However, when I reboot a VM (haven't checked with real hardware yet), it takes 10+ minutes (sometimes much more) to get network connectivity outside its LAN.

All my VMs are on 192.168.5.xxx with gateway .5.1, and they are containers using a Debian 8 template.

Immediately on boot I can ping any other .5.xxx address, but nothing on .0.xxx (real machines) or the internet. DNS does resolve (if I ping google.com it returns an IPv4 address, but no ping replies). Correction: containers can ping the host at 192.168.0.42.

Once connectivity comes up, when I do 'ping google.com' it prefers IPv6 and I get good results.
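
A few checks from inside a container right after boot could help narrow down whether the delay is IPv4-only; this is just a diagnostic sketch, the local addresses come from the description above and 8.8.8.8 is merely an example external IPv4 address:

Code:
# does the local gateway answer, and has its MAC been learned?
ping -c 3 192.168.5.1
ip neigh show
# plain IPv4 to the outside, no DNS involved (example public address)
ping -c 3 8.8.8.8
# the IPv6 path, which reportedly works once things settle
ping6 -c 3 google.com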

vmbr0 = eth0
vmbr1 = eth1
vmbr2 = eth2
vmbr5 = virtual only, for the VMs.

I track IPv6 with a /56.
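
Since the containers request IPv6 over DHCP (see the container config further down) and the prefix is a tracked /56, it may also help to confirm what v6 address and default route a container actually ends up with; a quick check, nothing specific to this setup:

Code:
# inside a container: which IPv6 address and default route were obtained?
ip -6 addr show dev eth0
ip -6 route show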

What other info can I give that will help with resolution?
 


Post the result of

Code:
cat /etc/network/interfaces
route -n

for both the host and the container(s)
 
Host:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

iface eth4 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    # post-up /sbin/ethtool -K $IFACE tx off

auto vmbr1
iface vmbr1 inet static
    address 192.168.0.42
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    # post-up /sbin/ethtool -K $IFACE tx off

auto vmbr5
iface vmbr5 inet static
    address 192.168.5.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    # post-up /sbin/ethtool -K $IFACE tx off

auto vmbr2
iface vmbr2 inet static
    address 192.168.2.1
    netmask 255.255.255.0
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
    # post-up /sbin/ethtool -K $IFACE tx off


The post-up lines were added because pfSense needs the hardware offloads disabled. They didn't make a difference, so I commented them back out.
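
Whether those offloads are actually off can be verified directly; a small sketch, assuming the bridge and NIC names from the config above:

Code:
# lowercase -k only displays the offload settings; uppercase -K changes them
ethtool -k vmbr1 | grep -E 'tx-checksumming|segmentation'
ethtool -k eth1  | grep -E 'tx-checksumming|segmentation'
# example of actually turning TX checksumming off on the physical port:
# ethtool -K eth1 tx off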
Route:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 vmbr1
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr2
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr5

A container (spot-checked 3 containers; they are the same, with different static IPs of course):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.5.104
    netmask 255.255.255.0
    gateway 192.168.5.1

iface eth0 inet6 dhcp

route:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.5.1     0.0.0.0         UG    0      0        0 eth0
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

Looking at that, my guess is that the containers lack routes to anything but the .5.0 network. I just don't know how to fix it.
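
Two quick checks could confirm or rule that out (just a diagnostic sketch; 8.8.8.8 is an example external address): which route a container actually picks for an outside destination, and whether the machine holding 192.168.5.1 forwards IPv4 at all.

Code:
# inside a container: route and source address chosen for an outside IP
ip route get 8.8.8.8
# on the Proxmox host (which also carries 192.168.5.1 on vmbr5 per the
# config above): is IPv4 forwarding enabled?
sysctl net.ipv4.ip_forward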

But that's part of why I'm doing this - to learn. :)
 

The host and the container use different gateways, i.e. the host works as a router for the container. It now depends on how the host communicates externally (to the internet), probably via a NAT router. That NAT router then sees an address like 192.168.5.x, but it usually expects 192.168.0.x.

Whether this is accepted or not depends on the NAT router's settings.

If possible (it also depends on whether the NAT router accepts the container's virtual NIC's MAC), bridge to vmbr1 instead of vmbr5 and obtain a 192.168.0.x address for the container as well,

or

define a NAT in the host as follows


Code:
iptables -t nat -A POSTROUTING -o vmbr1 ! -d 192.168.0.0/24 -j MASQUERADE
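
Neither option is spelled out in full here, so the following is only a rough sketch of what each could look like on a PVE 4.x host. The container ID 104 is hypothetical (chosen to match the .104 container above), the pct option string is an assumption for illustration rather than something taken from this thread, and the NAT variant additionally needs IPv4 forwarding enabled on the host.

Code:
# Option A (hypothetical container ID 104, assumed pct syntax): re-bridge
# the container's NIC to vmbr1 and give it a 192.168.0.x address
pct set 104 -net0 name=eth0,bridge=vmbr1,ip=192.168.0.104/24,gw=192.168.0.1

# Option B: keep the containers on vmbr5 and use the MASQUERADE rule above,
# but make sure the host actually forwards IPv4, otherwise the rule is moot
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist across reboots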
 
My router is another VM running pfSense.

It has vmbr5 on 192.168.5.1, and the containers use it as their gateway. What is so odd to me is that they eventually do get full connectivity; I just don't understand why it takes so long.
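
One way to see the exact moment the path comes up (and what happens on the bridge in the meantime) could be to watch ARP and ICMP on vmbr5 while a container boots; a diagnostic sketch, assuming tcpdump is installed on the host:

Code:
# on the Proxmox host: watch ARP and ICMP on the VM-only bridge
tcpdump -ni vmbr5 arp or icmp

# inside the booting container: keep probing the gateway and an outside
# address (8.8.8.8 is just an example)
ping 192.168.5.1
ping 8.8.8.8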

It sounds like I missed a crucial point with containers: they will always live on the same network as the host.

So if I were to change all their IPs from .5.xxx to .0.xxx, they should work too?

Could I still have them connected to vmbr5 (which has no physical NIC, so gets maximum speed through the virtio drivers)?

Or is there something in pfSense I should change? Honestly, if I had NO connectivity between the LAN and the VM network, I think I'd understand it better.

And thank you for the help.

EDIT: For those following along at home.

I tried a new container on a .0.xxx address, and there was no difference whether it was on vmbr1 or vmbr5.

Haven't applied the routing command yet.
 