Proxmox 3. Network borked. Help me fix it!

Chris_C

New Member
Jul 28, 2012
Help.
I have Proxmox 3 running on a bare-metal machine.
The 65GB OpenVZ container cannot reach the internet,
even though its DNS servers are configured correctly.
During a Debian upgrade (I'm not sure exactly when), something changed the network config of the containers, possibly the bridge, venet, or vmbr0?!
Whatever happened, the OpenVZ container can't reach the internet!
I need the container back online so that I can back up the OpenVZ container externally, then wipe the bare-metal machine, install Proxmox 5 with ZFS on it, and import/restore the OpenVZ container into a new LXC container.
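For reference, the migration path I have in mind is roughly the following; the CTID 100, the dump directory, and the storage name "local-zfs" are placeholders rather than my real values:
Code:
# on the current Proxmox 3 node: full vzdump backup of the container
vzdump 100 --compress gzip --dumpdir /mnt/external-backup

# later, on the fresh Proxmox 5 + ZFS install: restore the archive into a new LXC container
pct restore 100 /mnt/external-backup/vzdump-openvz-100-<timestamp>.tar.gz --storage local-zfs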

Here's the network config of the bare-metal machine:
Code:
root@proxmox:~# ifconfig
eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/64 Scope:Global
          inet6 addr: fe80::xxxx:xxxx:xxxx:xxxx/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:170816 errors:0 dropped:0 overruns:0 frame:0
          TX packets:95042 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:19499959 (18.5 MiB)  TX bytes:9268083 (8.8 MiB)
          Interrupt:18

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1432 (1.3 KiB)  TX bytes:1432 (1.3 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:112490 errors:0 dropped:0 overruns:0 frame:0
          TX packets:31 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8988280 (8.5 MiB)  TX bytes:2634 (2.5 KiB)

vmbr0     Link encap:Ethernet  HWaddr 0e:19:92:71:3d:cc
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::c19:92ff:fe71:3dcc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2562 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:220284 (215.1 KiB)

And here's the network config inside the OpenVZ container:
Code:
root@container000:~# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:436636 errors:0 dropped:0 overruns:0 frame:0
          TX packets:436636 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:233415463 (222.6 MiB)  TX bytes:233415463 (222.6 MiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:76012 errors:0 dropped:798 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:6104456 (5.8 MiB)

venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.1.114  P-t-P:192.168.1.114  Bcast:192.168.1.114  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

From the proxmox machine:
* DNS works.
* Internet traffic works.

Inside the container:
* DNS servers are configured correctly
* DNS doesn't work - "nslookup yahoo.com" gives: ";; connection timed out; no servers could be reached".
* PING works only to the proxmox IP (192.168.0.100) and to the local internet gateway (192.168.0.1)
* PING fails both for internet DNS names and for internet IP addresses.

Help! How do I fix this broken network config inside the OpenVZ container?
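For what it's worth, here's the kind of check I can run on the node while pinging from inside the container, to see whether the packets even leave eth0 (assuming tcpdump is installed; 8.8.8.8 is just a test target):
Code:
# on the node, while running "ping 8.8.8.8" inside the container
tcpdump -ni eth0 icmp and host 8.8.8.8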
 
Hi,
pve3 was a very long time ago... and I don't use OpenVZ containers (only ran some tests)... anyway.

How does the routing look inside the VM and on the node?
Code:
ip route
what does iptables say?
Code:
# on node
iptables -L
How is the nameserver configured inside the CT?
Code:
cat /etc/resolv.conf
Udo
 
Ahoy @udo!

1. Here's the "iptables" config on the node:
Code:
root@NODE:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

2. Here's the DNS config in the OpenVZ container:
Code:
root@CONTAINER:~# cat /etc/resolv.conf
search mydomain.com
#nameserver 192.168.0.1
nameserver 8.8.8.8
nameserver 1.1.1.1
nameserver 8.8.4.4
 
@udo
3. Here's the NODE "ip route" output:
Code:
root@NODE:~# ip route
192.168.1.114 dev venet0  scope link
10.0.0.0/24 dev vmbr0  proto kernel  scope link  src 10.0.0.1
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.100
default via 192.168.0.1 dev eth0

4. And here's the CONTAINER "ip route" output:
Code:
root@CONTAINER:~# ip route
default dev venet0  scope link
 
Hi,
is any device connected to vmbr0?
Code:
brctl show
Having no NAT at all can't be right. Take a look here:
https://openvz.org/Using_NAT_for_container_with_private_IPs

Udo
 

Good idea, @udo. Here's the "brctl show" output:
Code:
root@NODE:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.000000000000       no

Here's another command, "service networking reload"; notice the interesting errors it reports:
Code:
root@NODE:~# service networking reload
Reloading network interfaces configuration...
Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
RTNETLINK answers: Network is unreachable
ifup: interface lo already configured
done.
 
@udo
Nothing is physically connected to vmbr0 (IP address 10.0.0.1).
When Proxmox 3 was installed on this NODE, the network settings were configured so that the OpenVZ containers would use a BRIDGED connection to the NODE's NIC on eth0 (192.168.0.100).
Each container would then have a static IP in the range 192.168.0.101-199.
Somehow this has broken.
How do I fix it?
Add a "route"?
Reconfigure an "ip" on a virtual device (vmbr0 or venet0) to match the relevant "netmask" so that routing works?
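For example, would simply putting the container back onto its original subnet be enough, something like the following (the CTID 100 and the .114 address are guesses on my part), or is NAT still needed either way?
Code:
# replace the CT's venet IP with one in the original 192.168.0.x range
vzctl set 100 --ipdel all --ipadd 192.168.0.114 --save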
 
Hi,
you must NAT the OpenVZ traffic.

It looks like you ignored the link I already posted: https://openvz.org/Using_NAT_for_container_with_private_IPs
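The gist of that page, with the addresses from this thread filled in, is roughly this (an untested sketch; the iptables rule is not persistent across reboots unless you save it):
Code:
# 1) make sure the node forwards IPv4 (often already enabled on an OpenVZ host)
sysctl -w net.ipv4.ip_forward=1

# 2) source-NAT the CT's private address out of the node's uplink
#    192.168.1.114 = the CT's venet IP, 192.168.0.100 = the node's eth0 address
iptables -t nat -A POSTROUTING -s 192.168.1.114 -o eth0 -j SNAT --to-source 192.168.0.100

# or masquerade instead of SNAT:
# iptables -t nat -A POSTROUTING -s 192.168.1.114 -o eth0 -j MASQUERADE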

Udo
 
