Fresh Install, LXC Container Networking doesn't work...

mantisgroove

Member
Nov 19, 2014
Hello all,

I've used Proxmox v3 quite a bit, and its OpenVZ containers extensively.

I'm trying to set up a 4.1-based system, but so far I'm having trouble with basic network connectivity for containers (LXC).

It's a fresh install of Proxmox 4.1; the Proxmox host has IP xxx.xxx.xxx.151 (I'm masking my actual IPs with x's, for paranoia I suppose).

I download a vanilla debian-8-standard template from the built-in template catalog.

Then I create a container from it.

I name its network interface eth0, select vmbr0 as the bridge (the only one available), let the MAC address be auto-generated, leave the firewall box unchecked, and leave the VLAN setting untouched. I select a static IPv4 address and enter xxx.xxx.xxx.156/27 in the IPv4/CIDR box (.156 is on the same physical subnet as the Proxmox host's .151 address, and is one of the IPs available for me to assign to VMs). I enter xxx.xxx.xxx.129 as the IPv4 gateway (it's my gateway, the same IP I have configured as the gateway for my Proxmox host).

For IPv6, I leave it set to the DHCP radio button as I don't have any IPv6 info for it.
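
For completeness, the command-line equivalent of what I'm doing in the GUI should look roughly like this (the VMID, the template filename, and the 192.0.2.x addresses are placeholders standing in for my masked values):
Code:
pct create 101 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz \
  --hostname testct \
  --net0 name=eth0,bridge=vmbr0,ip=192.0.2.156/27,gw=192.0.2.129,ip6=dhcp \
  --storage local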

The command line "pct list" shows my PXC container as running. And I can use pct enter <vmid> to get into the command prompt for the running container. But I cannot so much as eternally ping the container's IP, or get so much as a ping response out from inside the container (except to the proxmox host, and the containers own ip).

ifconfig from within the container shows the eth0 interface as "UP" and having the IP I assigned to it.

The container's /etc/network/interfaces file shows:
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address xxx.xxx.xxx.156
netmask 255.255.255.224
gateway xxx.xxx.xxx.129

iface eth0 inet6 dhcp
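
In case it helps, these are roughly the checks I'm running from inside the container via pct enter (xxx.xxx.xxx.129 stands for my masked gateway):
Code:
# address and link state actually assigned to eth0
ip addr show eth0
# routing table; the default route should point at the gateway
ip route
# can the gateway be reached at all?
ping -c 3 xxx.xxx.xxx.129
# has the gateway's MAC been learned? FAILED/INCOMPLETE means ARP isn't getting out
ip neigh show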


Any idea why I can't get any traffic either way, from outside in, or inside out?

I've tried several containers, from several templates, and it just doesn't work.

I'm quite frustrated at this point.
 
I tried, but tcpdump wasn't in the template, and with no networking I couldn't install it. I tried to scp the tcpdump binary from the Proxmox host into the container, but oddly, while ICMP pings work and the Proxmox host can ping the container's IP, I can't SSH (or therefore scp) into the container. I used pct enter <vmid> to get into the container, and then, oddly, was able to ssh to the Proxmox host. I copied over the tcpdump binary, placed it in /usr/sbin/, and gave it execute permissions, but it still wouldn't work, as the container template also lacked a required library (libpcap.so.0.8).
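
As far as I can tell, a workaround that avoids needing tcpdump inside the container at all is to capture on the container's veth endpoint from the Proxmox host (sketch only; I'm assuming the host side of the container interface follows the usual veth<vmid>i0 naming, e.g. veth101i0 for container 101):
Code:
# on the Proxmox host: watch the container's ICMP traffic at its veth endpoint
tcpdump -ni veth101i0 icmp
# and compare with what actually shows up on the bridge
tcpdump -ni vmbr0 icmp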

Any other ideas? Is this normal? Do I need to set up some kind of firewall or routing rules inside the container or on the Proxmox host? In my experience with OpenVZ containers on Proxmox 3.x, I'd create a container, assign it an IP, and it would work with zero issues out of the box.
 
Forgive me if I've done the capture incorrectly; I don't have a lot of experience with what I'm looking for in this instance. Is what I'm seeing default behavior? This is a vanilla install, all the VMs I've created within this subnet can talk to each other fine, and the exact same config worked fine on Proxmox 3.x, but with 4.1 (a fresh install, not an upgrade) I can't get any network from the container to anywhere except the Proxmox host. I even tried different templates.
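
One check I can do on the host, sketched here, is whether the container's veth is actually attached to vmbr0 and whether the bridge itself looks sane:
Code:
# list bridge members; the container's veth<vmid>i0 should appear under vmbr0
brctl show vmbr0
# the same information via iproute2
bridge link show
# confirm the bridge carries the host IP and is up
ip addr show vmbr0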


So, as root on the Proxmox host, I ran "tcpdump -i vmbr0 -vvv", and here's what it saw (I was simply trying to ping 8.8.8.8 from inside the container at the time; below is a sample, the full capture is in the attached txt file):
Code:
tcpdump: listening on vmbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:54:45.567755 IP (tos 0x10, ttl 64, id 47478, offset 0, flags [DF], proto TCP (6), length 176)
proxmox1-dc.simplymac-support.net.ssh > c-69-180-217-59.hsd1.tn.comcast.net.60814: Flags [P.], cksum 0x25a8 (incorrect -> 0x9824), seq 515215730:515215854, ack 1927170704, win 325, options [nop,nop,TS val 524819071 ecr 949291219], length 124
23:54:45.587914 IP (tos 0x0, ttl 55, id 39536, offset 0, flags [DF], proto TCP (6), length 52)
c-69-180-217-59.hsd1.tn.comcast.net.60814 > proxmox1-dc.simplymac-support.net.ssh: Flags [.], cksum 0x6ab9 (correct), seq 1, ack 124, win 4092, options [nop,nop,TS val 949291247 ecr 524819071], length 0
23:54:45.654263 IP (tos 0x0, ttl 64, id 22457, offset 0, flags [DF], proto UDP (17), length 148)
proxmox1-dc.simplymac-support.net.5404 > proxmox2-dc.simplymac-support.net.5405: [bad udp cksum 0x0cbe -> 0x0b6d!] UDP, length 120
23:54:45.657761 IP (tos 0x0, ttl 64, id 37683, offset 0, flags [DF], proto UDP (17), length 148)

..........and on, and on.......

^C
270 packets captured
305 packets received by filter
0 packets dropped by kernel
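
To cut the cluster and SSH noise out of that, a narrower capture like the one below should show whether the container's pings ever reach the bridge at all, and with which source MAC (eth0 here stands for whatever physical NIC is enslaved to vmbr0 on the host):
Code:
# only ICMP to/from 8.8.8.8, with link-layer (MAC) headers shown
tcpdump -eni vmbr0 icmp and host 8.8.8.8
# the same filter on the physical port of the bridge
tcpdump -eni eth0 icmp and host 8.8.8.8
If the echo requests show up on vmbr0 but never leave via the physical port (or the replies come back to the port but never reach the bridge), that would point at the host/physical side rather than the container.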



Any ideas, or suggestions?
 

Attachments

  • vmbr0-cap.txt
    64.4 KB
Oh, in case I wasn't clear: when I said my other VMs in this subnet work, I mean that this particular host is sitting in a datacenter attached to a subnet/VLAN all my own, I have ESXi hosts that I can assign these IPs to, and they all talk fine; before doing this fresh 4.1 install, I was running Proxmox 3.x on this same box and its OpenVZ containers could also talk fine. I didn't mean something was working on this install. Since my very first install of Proxmox 4.1, I can cluster it with other Proxmox 4.1 hosts, but any containers I create, on any of the hosts, have this same networking issue... just no network...
 
So, I destroyed the Proxmox cluster and built a single Proxmox host again from scratch. Complete erase and re-install, totally fresh. Unlike last time, where I used ZFS as the root file system, I kept EVERYTHING at defaults (ext4). Once installed, rather than adding NFS stores for the containers like last time, I just used the single default "local" storage location. I downloaded a Debian template with the built-in downloader, created a container, started it up, and hit exactly the same problem: no working network. I then rebuilt a Proxmox 3.4 host on the exact same machine (same physical NIC in the datacenter, etc.), and the OpenVZ container I created had working networking immediately upon starting. It worked perfectly. Went back to 4.1: nope, no networking from the container except the limited ICMP I described above.

The one thing I noticed is that when creating a container on Proxmox 3.4, the default networking was routed mode (venet), and assigning an IP was all that was necessary; no subnet, gateway, etc. On Proxmox 4.1 it looks like bridged is all that's available. Should I be setting the gateway in my containers to the IP of my Proxmox host, and then making some kind of network config accommodations to "route" the container traffic? All this time I've been assuming that I should set the subnet to my actual subnet and the gateway to my actual subnet's gateway IP. Is that correct?

This is just crazy that something so unbelievably basic just completely doesn't work at all. Any answers?
 
Already done. My initial setup (which I've restored) is a "cluster" of 3 Proxmox hosts. I can communicate with each of them, and they can communicate with each other, but if I create a container as described above on any of the 3 hosts (which are all running on separate physical servers, each with its own NIC), the same result happens. I'm willing to bet I could even "move" a running container from one host to another while it continued to run, with no problem... but all the while the container would not have network access to the rest of the world...
 
If my container can access my Proxmox host but nothing else, this really makes me suspect that the Proxmox host is supposed to be forwarding/routing traffic for the container. Is this true? If so, how would I set it up?
 
Is the container's IP a public IP or a private IP?

If it's public, did you set a route for this public IP to your vmbr0? If not, do that :)
The gateway for the container must be the IP of vmbr0.

Can you post your /etc/network/interfaces from the host and the container?
 
Yes, the container's IP is public. No, I did not manually create a route; do you have a reference to anywhere this is documented, or an example? The official Proxmox people responding didn't mention that.

This Proxmox instance is actually a VMware VM, and I just noticed that if I put the ESXi NIC into "promiscuous mode" (I think that's what it's called; I'm basically turning off some kind of security on the NIC), it now works.

This is NOT necessary with Proxmox 3.x, so it has me a little worried.
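
In case it helps anyone else running Proxmox inside ESXi: I believe the same setting can be flipped from the ESXi shell roughly like this (vSwitch0 is just the switch name on my box, and the usual place for this is the vSwitch/port group security settings in the vSphere client; forged transmits and MAC address changes may also need to be set to Accept for bridged guests):
Code:
# allow the Proxmox VM's vNIC to see and send frames for the containers' extra MACs
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-forged-transmits=true
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-mac-change=true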
 
Hi,
I don't know if this is documented anywhere, but it's the only way I've found to get my containers running with a second public IP like in Proxmox 3 (venet).

It's a combination of routed and bridged networking...

The command to set the route looks like this:
Code:
route add -host 5.5.5.5/32 dev vmbr0
You must replace the IP and vmbr0 with your own values.
To make it work after a host reboot, your host network config should look like this (in my case the host's first public IP is 5.5.5.4 and the second public IP is 5.5.5.5):
Code:
auto lo eth0 vmbr0
iface lo inet loopback

iface eth0 inet static
        address 5.5.5.4
        netmask 255.255.255.0
        gateway 5.5.5.1
        pointopoint 5.5.5.1
        post-up iptables-restore < /etc/iptables.up.rules
        up route add -host 5.5.5.5/32 dev vmbr0
        down route del -host 5.5.5.5/32 dev vmbr0

iface vmbr0 inet static
        address 192.168.0.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

And in the container:

Code:
# This configuration file is auto-generated.
#
# WARNING: Do not edit this file, your changes will be lost.
# Please create/edit /etc/network/interfaces.head and
# /etc/network/interfaces.tail instead, their contents will be
# inserted at the beginning and at the end of this file, respectively.
#
# NOTE: it is NOT guaranteed that the contents of /etc/network/interfaces.tail
# will be at the very end of this file.
#
# Auto generated lo interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 5.5.5.5
netmask 255.255.255.0
post-up ip route add 192.168.0.1 dev eth0
post-up ip route add default via 192.168.0.1
pre-down ip route del default via 192.168.0.1
pre-down ip route del 192.168.0.1 dev eth0


A complete how-to is published here:
https://www.sugar-camp.com/lxc-venet-konfiguration-bei-proxmox-4/

It's in German though, so let me know if it works for you too and whether you need further help.
 
I have a similar problem with a Proxmox 4.1 setup on top of a minimal Debian 8:
  • from the Proxmox host I can ping into the LXC container
  • from within the LXC container I can ping the Proxmox host

  • from within the LXC container I cannot ping the gateway or any external IP
  • from an external IP I cannot ping into the LXC container

I followed the guide from https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
and the GUI is working fine.

My Proxmox host's /etc/network/interfaces is as follows:
Code:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 222.19.34.237
        netmask 255.255.255.224
        gateway 222.19.34.225
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

The host's routing table is as follows:
Code:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         222.19.34.225    0.0.0.0         UG    0      0        0 vmbr0
222.19.34.224    0.0.0.0         255.255.255.224 U     0      0        0 vmbr0

I created LXC container 100, and after starting it, its /etc/network/interfaces is as follows:
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 222.19.34.248
        netmask 255.255.255.224
        gateway 222.19.34.225

The routing table inside the LXC container is as follows:
Code:
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         222.19.34.225    0.0.0.0         UG    0      0        0 eth0
222.19.34.224    0.0.0.0         255.255.255.224 U     0      0        0 eth0

Here is my lxc.conf:
Code:
root@prox01 /etc/pve/nodes/prox01/lxc # cat 100.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: test
memory: 512
net0: bridge=vmbr0,gw=222.19.34.225,hwaddr=62:32:33:39:38:33,ip=222.19.34.248/27,name=eth0,type=veth
ostype: debian
rootfs: data:100/vm-100-disk-1.raw,size=3G
swap: 512

I managed to install ngrep on the Proxmox machine and inside the LXC container, and ran:
Code:
ngrep -d any '' icmp -W byline
  • when pinging between the Proxmox host and the LXC container, everything is fine
  • when pinging from an external IP into the LXC container,
    • I can see ICMP packets on the Proxmox host
    • I cannot see any packets inside the LXC container
  • when pinging from inside the LXC container to an external gateway,
    • I can see ICMP packets on the Proxmox host going to the external IP and coming back
    • I cannot see any packets coming back inside the LXC container
So I focus on: why aren't packets being forwarded into the LXC container?

I also tried:
Code:
route add -host 222.19.34.248/32 dev veth100i0
and/or
Code:
route add -host 222.19.34.248/32 dev vmbr0
but this did not help either. I am not able to see any packets arriving inside the LXC container when the source is external (when the source is the Proxmox host, however, it works).

So I am lost here. Any help is appreciated.

By the way: I have two other Proxmox 4.1 machines running here in a local LAN with LXC containers and they are working just fine.
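
Since I have those two working 4.1 hosts to compare against, my next step is to diff the forwarding/filtering state between a good host and this broken one, roughly along these lines (standard kernel and iptables knobs, nothing Proxmox-specific; the MAC is the one from my 100.conf above):
Code:
# run on both a working host and the broken one, then compare
sysctl net.ipv4.ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# any FORWARD rules whose drop counters increase while pinging?
iptables -L FORWARD -n -v
# is the container's MAC being learned on the bridge?
bridge fdb show br vmbr0 | grep -i 62:32:33:39:38:33
If all of that matches, the next suspect would be the network outside the host rather than the host itself.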
 
Delete this route:
route add -host 222.19.34.248/32 dev veth100i0

and try this in the container:
iface eth0 inet static
address 222.19.34.248
netmask 255.255.255.224
gateway 222.19.34.237

Edit:

If 222.19.34.248 is not a public IP, you must activate IP forwarding and masquerading...
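
A minimal sketch of what I mean (assuming vmbr0 is the interface that faces the outside, and using your .248 address as the source to masquerade):
Code:
# enable routing on the host (add net.ipv4.ip_forward=1 to /etc/sysctl.conf to persist it)
echo 1 > /proc/sys/net/ipv4/ip_forward
# only needed if the container IP is private: hide it behind the host when leaving vmbr0
iptables -t nat -A POSTROUTING -s 222.19.34.248/32 -o vmbr0 -j MASQUERADE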
 
While describing your problem, you find the clue... :)

I added
Code:
net.ipv4.ip_forward=1
to /etc/sysctl.conf
and ran
Code:
echo 1 > /proc/sys/net/ipv4/ip_forward

Now I can ping into the container.
 
Right :)
A good description is the first step toward a good clue.

Yes, that's what I'm always telling my colleagues. Documentation and describing the problem are the most important parts of a project. I generally write down all attempts and errors in the project document, including the questions I ask myself. That's the only way to handle problems that are more than trivial.
And later everyone can read and follow the thoughts that led to the solution, including myself.
 
