Network issue, setting up two networks (OpenVZ container)

hakim

Hi,

I am trying to set up, on a Proxmox box (pve-manager/1.6/5087), an OpenVZ container using the debian5 template.

On my Proxmox host, I have three NICs:
- two connected to an internal network 192.168.0.0 (eth0 and eth2)
- one connected to an external network 192.168.1.0 (eth1)

After setting up my Proxmox box, I ended up with the following "interfaces" file:

# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.0.221
        netmask 255.255.255.0
        gateway 192.168.0.254

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

iface eth4 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.222
        netmask 255.255.255.0
        gateway 192.168.1.254
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.0.222
        netmask 255.255.255.0
        gateway 192.168.0.254
        bridge_ports eth2
        bridge_stp off
        bridge_fd 0


I added the gateway settings myself, since from the UI I could only set a gateway for vmbr0.

I created my Debian container, associated with vmbr0 (eth1, the external network).

The problem I have is that my Debian container does not have access to the Internet. It is as if the network traffic on eth1 is not forwarded to the container.

You will find below some information about the network settings. Do you have any idea what I am doing wrong?

Thanks for your help,
Hakim


On my Proxmox box, the network settings and routing table are as follows:

proxmox1:~# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:13:21:ae:85:6c
inet addr:192.168.0.221 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::213:21ff:feae:856c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3396 errors:0 dropped:0 overruns:0 frame:0
TX packets:3238 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:606335 (592.1 KiB) TX bytes:1813240 (1.7 MiB)
Interrupt:17

eth1 Link encap:Ethernet HWaddr 00:13:21:78:26:c4
inet6 addr: fe80::213:21ff:fe78:26c4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3400 errors:0 dropped:0 overruns:0 frame:0
TX packets:529 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:1282100 (1.2 MiB) TX bytes:40650 (39.6 KiB)

eth2 Link encap:Ethernet HWaddr 00:13:21:78:26:c5
inet6 addr: fe80::213:21ff:fe78:26c5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4332 errors:0 dropped:0 overruns:0 frame:0
TX packets:913 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:501798 (490.0 KiB) TX bytes:68385 (66.7 KiB)

eth3 Link encap:Ethernet HWaddr 00:13:21:78:26:c6
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

eth4 Link encap:Ethernet HWaddr 00:13:21:78:26:c7
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:4416 errors:0 dropped:0 overruns:0 frame:0
TX packets:4416 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:914458 (893.0 KiB) TX bytes:914458 (893.0 KiB)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

veth101.0 Link encap:Ethernet HWaddr 00:18:51:a7:cc:85
inet6 addr: fe80::218:51ff:fea7:cc85/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:3099 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vmbr0 Link encap:Ethernet HWaddr 00:13:21:78:26:c4
inet addr:192.168.1.222 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::213:21ff:fe78:26c4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3400 errors:0 dropped:0 overruns:0 frame:0
TX packets:523 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1234500 (1.1 MiB) TX bytes:40182 (39.2 KiB)

vmbr1 Link encap:Ethernet HWaddr 00:13:21:78:26:c5
inet addr:192.168.0.222 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::213:21ff:fe78:26c5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2168 errors:0 dropped:0 overruns:0 frame:0
TX packets:907 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:235852 (230.3 KiB) TX bytes:67917 (66.3 KiB)

proxmox1:~# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.1.0 * 255.255.255.0 U 0 0 0 vmbr0
192.168.0.0 * 255.255.255.0 U 0 0 0 eth0
192.168.0.0 * 255.255.255.0 U 0 0 0 vmbr1
default 192.168.0.254 0.0.0.0 UG 0 0 0 vmbr1
default 192.168.1.254 0.0.0.0 UG 0 0 0 vmbr0
default 192.168.0.254 0.0.0.0 UG 0 0 0 eth0

If I try to open an SSH connection to the Debian container from the Proxmox host, it opens a connection to the Proxmox box itself, not to the Debian container.

proxmox1:~# ssh 192.168.1.222
root@192.168.1.222's password:
Linux proxmox1 2.6.32-4-pve #1 SMP Mon Sep 20 11:36:51 CEST 2010 x86_64
(...)
Last login: Mon Nov 1 13:44:36 2010 from ...
proxmox1:~# exit
logout
Connection to 192.168.1.222 closed.
proxmox1:~#



On my Debian container, I have the following information (using the console):

debian1:/# ifconfig -a
eth0 Link encap:Ethernet HWaddr e2:59:7e:fe:8f:55
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
BROADCAST POINTOPOINT NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

debian1:/# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
debian1:/#
 
Just to be more precise about my problem: it is not only that the Debian container does not have access to the Internet; it does not have access to the network at all (it cannot even ping the gateway).

Thanks for your help,
Hakim
 
You need to configure the network inside the container if you use bridged veth mode.

Besides, you must not have 2 gateways on the host side (you only need one 'default' gateway).
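For example, a minimal /etc/network/interfaces inside the container could look like this (just a sketch; the address 192.168.1.10 is only an illustration for your external subnet):

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.254

And on the host, keep the gateway line in only one stanza of /etc/network/interfaces (for example under vmbr0) and remove it from the eth0 and vmbr1 stanzas.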
 
I also could not get two veth interfaces to work in a container with kernel 2.6.32-4-pve. Only one worked at a time, depending on which one I put the gateway on. I banged my head against the wall until I tried booting with 2.6.18-4-pve, and the interfaces worked like a charm. However, the KVM machines did not boot with that kernel; they stopped at PXE boot, and I had to change to 2.6.24-12-pve. I would like to use .18 or .32, so I am trying to figure out what's wrong.
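One workaround I have seen suggested for this symptom is source-based policy routing inside the container, so that each interface replies via its own gateway. A hypothetical sketch (the addresses are illustrative, not from my setup):

Code:
# send replies out the interface their source address belongs to
ip route add default via 192.168.1.254 dev eth0 table 101
ip rule add from 192.168.1.10 table 101
ip route add default via 192.168.0.254 dev eth1 table 102
ip rule add from 192.168.0.10 table 102

I have not confirmed whether this behaves any better on 2.6.32-4-pve, though.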
 
KVM on 2.6.18 worked well; I assume you did not install the right KVM packages. Try 2.6.18 again (aptitude install proxmox-ve-2.6.18) and post the output of pveversion -v.
 
KVM on 2.6.18 worked well; I assume you did not install the right KVM packages. Try 2.6.18 again (aptitude install proxmox-ve-2.6.18) and post the output of pveversion -v.

I booted into .18 and got only this result. The KVM guests did not boot.

:~# pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.18-4-pve
pve-kernel-2.6.32-3-pve: 2.6.32-18
pve-kernel-2.6.32-4-pve: 2.6.32-25
pve-kernel-2.6.18-4-pve: 2.6.18-8
pve-kernel-2.6.24-12-pve: 2.6.24-25
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1dso1
 
Yes, that is because you did not have the KVM package pve-qemu-kvm-2.6.18 installed. pveversion should look like this:

Code:
pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.18-4-pve
proxmox-ve-2.6.18: 1.6-8
pve-kernel-2.6.32-4-pve: 2.6.32-24
pve-kernel-2.6.18-4-pve: 2.6.18-8
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-8
 
So far, OpenVZ has always followed the RHEL releases.

RHEL4 - 2.6.9, stable OpenVZ, still maintained
RHEL5 - 2.6.18, stable OpenVZ, the current stable (Proxmox 2.6.18 uses this)
RHEL6 - 2.6.32, where as far as I can see they are working hard on a stable release. RHEL6 currently has release-candidate status; I expect Red Hat will release it later this year, and I hope that in 2011 OpenVZ will be in a position to release a stable OpenVZ as well - but you had better ask the OpenVZ devs directly, I am just guessing here.
 
