Without Internet access in VM

carles

New Member
Jul 29, 2010
Sorry, but I am a newbie with Proxmox.
I'm using Proxmox VE 1.5 with kernel 2.6.18. I configured one virtual machine using venet. The local IP of my Proxmox host is 192.168.1.10 and the local IP of my router is 192.168.1.2. I created a virtual machine with the IP 10.10.1.101 and DNS IP 192.168.1.2 using venet networking.

The relevant portion of /etc/vz/conf/101.conf is this:
# CPU fair sheduler parameter
CPUUNITS="1000"
CPUS="1"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/$VEID"
OSTEMPLATE="centos-5-standard_5.2-1_i386"
ORIGIN_SAMPLE="pve.auto"
IP_ADDRESS="10.10.1.101"
HOSTNAME="G5.desarrolladores.eu"
NAMESERVER="192.168.1.2"
SEARCHDOMAIN="desarrolladores.eu"

I have local connectivity from the VM (10.10.1.101) to 192.168.1.10, the local IP of my Proxmox host, and vice versa, but from the VM I cannot access the internet. Do I need to add a route or something to the network configuration?

Thanks in advance,
 
Do I need to add a route or something to the network configuration?

It should work out of the box if you use the '192.168.1.X' net for your VMs.

For the '10.X.X.X' net, try using 'tcpdump' to see where the packets get lost.
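As a minimal sketch of that diagnosis (the interface names venet0/vmbr0 and the container IP 10.10.1.101 are taken from this thread; adjust them to your setup), run tcpdump on the host while pinging from the container:

```shell
# On the Proxmox host, watch the container's traffic at each hop.
# venet0 carries the container side, vmbr0 the uplink side (assumed names).
tcpdump -n -i venet0 host 10.10.1.101   # do the packets leave the container?
tcpdump -n -i vmbr0 host 10.10.1.101    # do they reach the bridge, and do replies come back?
```

If the echo requests show up on vmbr0 but no replies return, the upstream router most likely has no route back to the 10.10.1.101 address.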
 
Thanks dietmar,

I've tried using VM IPs in the 192.168.1.x range, but it doesn't work either.

Also, if I run "route" inside the VM, there are some destination networks that are not configured in the interface files:
Code:
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.0.2.0       *               255.255.255.0   U     0      0        0 venet0
169.254.0.0     *               255.255.0.0     U     0      0        0 venet0
default         192.0.2.1       0.0.0.0         UG    0      0        0 venet0

I have not configured these two networks or this gateway, yet they appear in the routing table.

I don't understand why they appear in the routing table, but I do have connectivity to the main machine: ping and ssh work from the virtual machine (IP 10.10.1.101) to the main machine's IP 192.168.1.10.

;)
 
Same problem here.
I fired up a brand-new CentOS 5.2 OpenVZ VM using venet and IP 192.168.1.10, but I can't even ping my gateway, although I didn't change any network config:
Code:
venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.1.10  P-t-P:192.168.1.10  Bcast:192.168.1.10  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

[root@containertest /]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 venet0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 venet0
0.0.0.0         192.0.2.1       0.0.0.0         UG    0      0        0 venet0
[root@containertest /]# ping google.de
^C
[root@containertest /]# ping 74.125.43.103   # one of the IPs i got with "nslookup google.de" from proxmox server
PING 74.125.43.103 (74.125.43.103) 56(84) bytes of data.  
[root@containertest /]# ping 192.0.2.1
connect: Invalid argument
[root@containertest /]#
This is what I get when running tcpdump on my Proxmox server while running "ping 74.125.43.103" in the VM:
Code:
proxmox:~# tcpdump host 192.168.1.10
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
22:30:09.882151 IP 192.168.1.10 > bw-in-f103.1e100.net: ICMP echo request, id 34158, seq 150, length 64
22:30:10.882125 IP 192.168.1.10 > bw-in-f103.1e100.net: ICMP echo request, id 34158, seq 151, length 64
22:30:11.882119 IP 192.168.1.10 > bw-in-f103.1e100.net: ICMP echo request, id 34158, seq 152, length 64
This is what the VM says:
Code:
[root@containertest /]# tcpdump -i any
tcpdump: WARNING: Promiscuous mode not supported on the "any" device
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes    
01:04:28.145076 IP containertest.heiwu.de.35437 > 195.71.90.106.domain:  24283+ PTR? 103.43.125.74.in-addr.arpa. (44)
01:04:28.145089 IP 195.71.90.106 > containertest.heiwu.de: ICMP 195.71.90.106 udp port domain unreachable, length 80
Do I really have to configure something on the Proxmox server to be able to get a ping reply?
And if so, why can't I reach my (autoconfigured) default gateway?
I even tried changing the VM's IP to 192.0.2.10 -> exactly the same problems.

KVM VMs are working like a charm.

Any ideas?
 
Do I really have to configure something on the Proxmox server to be able to get a ping reply?
And if so, why can't I reach my (autoconfigured) default gateway?

Can you please post the network config from the host (/etc/network/interfaces)?

And which version exactly are you running:

# pveversion -v
 
I've already solved it.

Let me explain what the problem was.

The problem was that I had defined IPADDR in the VM's venet0:0 interface file, and that is not necessary. The interface configuration file (CentOS 5) should contain the following:

#/etc/sysconfig/network-scripts/ifcfg-venet0:0
DEVICE=venet0:0
ONBOOT=yes
NETMASK=255.255.255.255


Then the /etc/resolv.conf file must have the main server's address as the primary DNS, which in this case is 192.168.1.10:

#/etc/resolv.conf
nameserver 192.168.1.10
nameserver 80.58.61.250

Here is the /etc/network/interfaces file of the main server:

# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
slaves eth0 eth1 eth2
bond_miimon 100
bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.2
bridge_ports bond0
bridge_stp off


Finally, if I run the following command to list the routes created on my main server, I see that it defines a link route to 192.168.1.51, which is the IP of my VM.

# ip route list
192.168.1.51 dev venet0 scope link
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.10
default via 192.168.1.2 dev vmbr0

And that's all. Thanks, dietmar, for your help, and I hope this explanation helps you, rootkid.

;)
 
/etc/network/interfaces
Code:
cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
        address  195.71.90.106
        netmask  255.255.255.128
        gateway  195.71.90.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
pveversion
Code:
pveversion -v
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5
 
I see you have kernel 2.6.18-5 installed; you'd better upgrade to a newer one using this command:

# apt-get upgrade && apt-get install proxomox-ve-2.6.24

;)
 
head -> desk ;)

I upgraded (I think you meant proxmox, not proxomox); everything came up again, but I got these messages:
Code:
kvm: 4899: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079
kvm: 4899: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffcf2b8c
kvm: 4899: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079
kvm: 4920: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079
kvm: 4920: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffcf2b8c
kvm: 4920: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079

Should I worry about this?

I'll test whether my containers work now as soon as I have the time...
 
Now (after upgrading the kernel) I created a new CentOS 5.2 OpenVZ container with venet IP 192.168.1.15.
I can ping my Proxmox host (vmbr0, inet addr 195.71.90.106) but cannot ping my DNS server (195.71.90.1):
Code:
tcpdump -i any icmp
22:44:23.021567 IP 192.168.1.15 > 195.71.90.1: ICMP echo request, id 63489, seq 4, length 64
22:44:23.021567 IP 192.168.1.15 > 195.71.90.1: ICMP echo request, id 63489, seq 4, length 64
22:44:23.021583 IP 192.168.1.15 > 195.71.90.1: ICMP echo request, id 63489, seq 4, length 64
22:44:23.021586 IP 192.168.1.15 > 195.71.90.1: ICMP echo request, id 63489, seq 4, length 64
22:44:23.021589 IP 192.168.1.15 > 195.71.90.1: ICMP echo request, id 63489, seq 4, length 64
1. I suppose I don't get answers because Proxmox doesn't NAT the container's IP.
2. Did I get it right that venet containers with private IPs can't be reached from "outside" my environment, while venet VMs with public IPs are handled like KVM VMs with a bridged network interface?
3. Why does my container still have that ugly routing table?
Code:
[root@containertest3 /]# route -n                                                            
Kernel IP routing table                                                                      
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 venet0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 venet0
0.0.0.0         192.0.2.1       0.0.0.0         UG    0      0        0 venet0
4. And how can it reach the Proxmox host if there's no entry for it in the table?
5. If I have set up my firewall (including NAT), can I set it as the container's new default gateway so I can ping the DNS server (which has a public IP)?
 
1. I suppose I don't get answers because Proxmox doesn't NAT the container's IP.

Do the packets arrive at the destination? Is there a route back?

2. Did I get it right that venet containers with private IPs can't be reached from "outside" my environment, while venet VMs with public IPs are handled like KVM VMs with a bridged network interface?

venet is a routed network setup:

http://wiki.openvz.org/Differences_between_venet_and_veth

3. Why does my container still have that ugly routing table?

It seems you have some custom network config - I guess this does not happen when you create a new CentOS container?

check /etc/sysconfig/network-scripts/ifcfg-venet*


4. And how can it reach the Proxmox host if there's no entry for it in the table?

There is an entry ('169.254.0.0' is used as alias for the host)


5. If I have set up my firewall (including NAT), can I set it as the container's new default gateway so I can ping the DNS server (which has a public IP)?

No - if you use venet, packets are always routed to your host. The routing table is set up by vzctl and should look like:

Code:
# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
169.254.0.0     *               255.255.0.0     U     0      0        0 venet0
default         *               0.0.0.0         U     0      0        0 venet0
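If the goal is nevertheless to give containers with private IPs internet access, a common approach with venet is forwarding plus NAT on the host. This is a sketch under assumptions, not something prescribed in this thread: it assumes the containers sit in 192.168.1.0/24 and the host's uplink bridge is vmbr0.

```shell
# On the Proxmox host (assumed: containers in 192.168.1.0/24, uplink vmbr0)

# Allow the host to forward packets between venet0 and the uplink
echo 1 > /proc/sys/net/ipv4/ip_forward

# Rewrite the containers' private source addresses to the host's public IP
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o vmbr0 -j MASQUERADE
```

With this in place, replies return to the host's public address and the host routes them back to the container over venet0.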
 
"Do the packets arrive at the destination? Is there a route back?" - No, the packets don't arrive at the destination host.
I created several CentOS and Debian containers using venet and veth, but neither was successful.
And I understand that I have to use veth if I want to set up a custom network, for example if I want to use a default gateway that is not my Proxmox host.

This is all confusing me a lot, and http://wiki.openvz.org/Differences_between_venet_and_veth doesn't help me much.
So, one last try (I can't believe it is so hard to configure):

My Setup:
- Nameserver and default GW for my Proxmox host: 195.71.90.1
- Proxmox host:
-- bond0 containing eth0 and eth1
-- vmbr0 with IP 195.71.90.106

How must I create and configure a container (CentOS or Debian, preferably CentOS) so that it has an internet connection?
Could someone PLEASE tell me the steps in the GUI and in the container? I'm really lost :(
 
How must I create and configure a container (CentOS or Debian, preferably CentOS) so that it has an internet connection?
Could someone PLEASE tell me the steps in the GUI and in the container? I'm really lost :(

Simply use venet with an address in the '195.71.90.XXX' range.
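On the command line, the same setup can be sketched with vzctl (the container ID 105 and the address 195.71.90.110 are hypothetical examples; pick free values from your range):

```shell
# Give the container a public IP from the host's subnet and a nameserver,
# then restart so vzctl recreates the venet routes
# (ID 105 and IP 195.71.90.110 are illustrative, not from this thread)
vzctl set 105 --ipadd 195.71.90.110 --save
vzctl set 105 --nameserver 195.71.90.1 --save
vzctl restart 105
```

The same values can of course be entered in the Proxmox web GUI when creating the container; vzctl is just the underlying tool.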
 
