No network in CT (all CT are in same network as CT0)

Greg750

New Member
Mar 1, 2016
Hi,

My CT has no network access:

- starting from a fresh Jessie install, I added proxmox-ve 4.1-37
- the host has 3 network cards:
- eth0 : 10.200.83.85
- eth1 : 10.200.156.95
- eth2 : 10.200.19.86 (this one will be used for vmbr2)

On the host:
  • I can ping all the other machines on the same LAN
  • I can ping the CT
  • I can't ping www.google.fr
From the CT, it's worse:
  • I can ping the host
  • I can't ping the other machines on the same LAN
  • I can't ping www.google.fr
All CTs will use the host's eth2 for in/out traffic.

The first (and for now only) CT (101) will get the IP 10.200.19.88.
/etc/pve/lxc/101.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: gv-batchqlf01-prp
memory: 512
net2: bridge=vmbr2,hwaddr=3A:65:34:35:64:37,ip=10.200.19.88/27,name=eth2,type=veth
ostype: debian
rootfs: AllCT:101/vm-101-disk-1.raw,size=9G
swap: 512

Active network settings in /proc:
for i in /proc/sys/net/ipv4/conf/*/forwarding /proc/sys/net/ipv4/conf/*/proxy_arp ; do echo "$(cat $i) = $i" ; done | sort
0 = /proc/sys/net/ipv4/conf/all/proxy_arp
0 = /proc/sys/net/ipv4/conf/default/proxy_arp
0 = /proc/sys/net/ipv4/conf/eth0/proxy_arp
0 = /proc/sys/net/ipv4/conf/eth1/proxy_arp
0 = /proc/sys/net/ipv4/conf/lo/proxy_arp
0 = /proc/sys/net/ipv4/conf/veth101i2/proxy_arp
0 = /proc/sys/net/ipv4/conf/vmbr2/proxy_arp
1 = /proc/sys/net/ipv4/conf/all/forwarding
1 = /proc/sys/net/ipv4/conf/default/forwarding
1 = /proc/sys/net/ipv4/conf/eth0/forwarding
1 = /proc/sys/net/ipv4/conf/eth1/forwarding
1 = /proc/sys/net/ipv4/conf/eth2/forwarding
1 = /proc/sys/net/ipv4/conf/eth2/proxy_arp
1 = /proc/sys/net/ipv4/conf/lo/forwarding
1 = /proc/sys/net/ipv4/conf/veth101i2/forwarding
1 = /proc/sys/net/ipv4/conf/vmbr2/forwarding

Relevant part of /etc/network/interfaces on the host:
iface eth2 inet static
address 10.200.19.86
netmask 255.255.255.224
network 10.200.19.64
gateway 10.200.19.65

auto vmbr2
iface vmbr2 inet static
address 10.200.19.86
netmask 255.255.255.224
gateway 10.200.19.65
#bridge_ports none
bridge_ports eth2
bridge_stp off
bridge_fd 0

I made lots of changes and don't remember them all; maybe I did some good things and some (very) bad things, like:
- routes, currently on CT-0:
route -n | grep -e vmbr2 -e 0.0.0.0 -e default
0.0.0.0 10.200.19.65 0.0.0.0 UG 0 0 0 vmbr2
10.200.19.64 0.0.0.0 255.255.255.224 U 0 0 0 vmbr2
10.200.83.64 0.0.0.0 255.255.255.224 U 0 0 0 eth0
10.200.156.64 0.0.0.0 255.255.255.192 U 0 0 0 eth1

- route in CT-101
10.200.19.64 * 255.255.255.224 U 0 0 0 eth2

- iptables
iptables-save | grep -v -e "#"
*nat
:PREROUTING ACCEPT [8:504]
:INPUT ACCEPT [8:504]
:OUTPUT ACCEPT [5:300]
:POSTROUTING ACCEPT [5:300]
-A POSTROUTING -o vmbr2 -j MASQUERADE
COMMIT
*mangle
:PREROUTING ACCEPT [599:277933]
:INPUT ACCEPT [599:277933]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [524:366156]
:POSTROUTING ACCEPT [524:366156]
COMMIT
*filter
:INPUT ACCEPT [599:277933]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [524:366156]
COMMIT
 
You don't seem to have the gateway configured in your container, so it doesn't know how to reach the outside world. Your host seems to use 10.200.19.65, so you could add that as gateway to the container config as well.
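
Concretely, the gateway can be added to the container's network definition from the host with `pct` — a sketch reusing the values from the 101.conf above (gateway taken from the host's interfaces file; adjust if yours differs):

```
# Sketch: add gw= to the container's net2 definition from the host
pct set 101 -net2 name=eth2,bridge=vmbr2,hwaddr=3A:65:34:35:64:37,ip=10.200.19.88/27,gw=10.200.19.65
```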
 
Hi,
You don't seem to have the gateway configured in your container, so it doesn't know how to reach the outside world. Your host seems to use 10.200.19.65, so you could add that as gateway to the container config as well.
I did this in CT101's /etc/network/interfaces:
auto eth2
iface eth2 inet static
address 10.200.19.88
netmask 255.255.255.224
gateway 10.200.19.65 # also tried 10.200.19.86 (the host's / CT-0's IP)
# post-up route add default gw 10.200.19.65
# post-up route add default dev eth2
Then restarted CT101:
==> nothing better
route -n
0.0.0.0 10.200.19.65 0.0.0.0 UG 0 0 0 eth2
10.200.19.64 0.0.0.0 255.255.255.224 U 0 0 0 eth2

BTW: the network is 10.200.19.64/27
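
As a sanity check on that subnet (illustrative shell, not from the thread): the usable host range in 10.200.19.64/27 is .65–.94, so the gateway (.65), the host (.86) and the CT (.88) all fit:

```shell
#!/bin/sh
# Illustrative check: a /27 starting at 10.200.19.64 covers hosts .65-.94.
# The gateway, host and CT addresses from this thread should all fall inside it.
for ip in 10.200.19.65 10.200.19.86 10.200.19.88; do
    last=${ip##*.}                        # last octet of the address
    if [ "$last" -ge 65 ] && [ "$last" -le 94 ]; then
        echo "$ip: inside 10.200.19.64/27"
    else
        echo "$ip: OUTSIDE 10.200.19.64/27"
    fi
done
```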
 
Same here. Nothing changed in network configuration but after today's updates to system packages (incl. PVE kernel etc.) CT networking is totally broken.

Typical CT networking for me is like this:

62.210.7.XXX is the public IP of a CT
195.154.150.YYY is the gateway

Code:
auto lo
iface lo inet loopback

# public
auto eth0
iface eth0 inet static
        address 62.210.7.XXX
        netmask 255.255.255.0
        post-up ip route add 195.154.150.YYY dev eth0
        post-up ip route add default via 195.154.150.YYY
        pre-down ip route del default via 195.154.150.YYY
        pre-down ip route del 195.154.150.YYY dev eth0
        pointopoint 195.154.150.YYY
        mtu 9000

# private
auto eth1
iface eth1 inet static
        address 172.16.0.19
        netmask 255.255.255.0
        post-up route add -net 10.90.0.0 netmask 255.255.0.0 dev eth1
        post-up route add -net 10.90.0.0 netmask 255.255.0.0 gw 172.16.0.1
        mtu 9000

Likewise I can ping internally between CTs using their 172.16.0.xxx addresses but nothing with the outside world.

Networking on the host works just fine.

P.S. I did notice that after the upgrade, a package called "ifenslave" was marked for removal if that helps at all (as I see it's related to networking).
 
For the record:

$ pveversion -v
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-15 (running version: 4.1-15/8cd55b52)
pve-kernel-4.2.8-1-pve: 4.2.8-39
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-33
qemu-server: 4.0-62
pve-firmware: 1.1-7
libpve-common-perl: 4.0-49
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-42
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-8
pve-container: 1.0-46
pve-firewall: 2.0-18
pve-ha-manager: 1.0-23
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1

All VMs are LXC CTs (just verifying again).
 
I've actually resolved mine; it seems to boil down partly to the hosting provider and partly to stricter networking on Proxmox's or Debian's side (I guess).

I had to remove the line "pointopoint 195.154.150.YYY" from the eth0 block, as this was causing the primary issue (Debian or Proxmox related? I honestly have no idea...).

The other change was the netmask, from 255.255.255.0 to 255.255.255.255, again inside the eth0 block. This is related to the hosting provider, but it didn't make any difference before the Proxmox upgrade I did just a few hours ago.
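
For reference, the working eth0 block would then presumably look like this (reconstructed from the two changes described above — pointopoint removed, /32 netmask — not copied from a live config):

```
# public (reconstructed: no pointopoint line, host-only netmask)
auto eth0
iface eth0 inet static
        address 62.210.7.XXX
        netmask 255.255.255.255
        post-up ip route add 195.154.150.YYY dev eth0
        post-up ip route add default via 195.154.150.YYY
        pre-down ip route del default via 195.154.150.YYY
        pre-down ip route del 195.154.150.YYY dev eth0
        mtu 9000
```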

I hope this helps others.
 