Version 1.6 route-venet0

everclear

New Member
Feb 27, 2010
I upgraded to 1.6 and found that one of the OpenVZ instances couldn't connect to the network. The system was BlueOnyx on a CentOS5 template (migrated from a physical machine in the past).

In version 1.6, /etc/vz/dists/scripts/redhat-add_ip.sh now removes /etc/sysconfig/network-scripts/route-venet0.

That means that /etc/sysconfig/network can't contain GATEWAY="192.0.2.1", but must instead contain GATEWAYDEV="venet0".

On this kind of setup, BlueOnyx assumes it should set GATEWAY="192.0.2.1" again on every boot.

Just noting this in case anyone else runs into it. It would be good if Proxmox didn't remove the file; I don't know whether there is a reason it needs to be removed.
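
If you want to confirm this on a given host, the script mentioned above can be checked directly, e.g.:

grep -n 'route-venet0' /etc/vz/dists/scripts/redhat-add_ip.sh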
 
It looks like this change comes with the newer vzctl version, so I don't think we can fix it on our side. Did you get any feedback from BlueOnyx?
 
I installed Proxmox 1.6 a few days ago on a new server and have it running locally at 192.168.1.101.
I am able to access the "master node" with no problem.

I then created a new OpenVZ container from a BlueOnyx template and put it at 192.168.1.111. It's set up with venet.

If I change /etc/sysconfig/network (in the container) to add GATEWAYDEV="venet0" as below and then restart the network, I am able to access it and ping out from it.

FORWARD_IPV4=false
#GATEWAY=192.0.2.1
GATEWAYDEV="venet0"
HOSTNAME=localhost
NETWORKING=yes
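
For reference, the network restart inside the container is just the standard init script:

/etc/rc.d/init.d/network restart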


My problem is that after each restart of the container, /etc/sysconfig/network reverts to:
FORWARD_IPV4=false
GATEWAY=192.0.2.1
HOSTNAME=localhost
NETWORKING=yes

Should I just set up a cron job to check and fix it, or is there a more elegant solution?
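
One alternative might be to hook the same fix into the container's own /etc/rc.d/rc.local so it reapplies itself on every boot. An untested sketch (it assumes BlueOnyx rewrites the file before rc.local runs, and that the sed pattern only needs to match the GATEWAY line shown above):

# appended to /etc/rc.d/rc.local inside the container
sed -i -e 's|^GATEWAY=.*|GATEWAYDEV="venet0"|' /etc/sysconfig/network
/etc/rc.d/init.d/network restart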


Ken Marcus
 
Dietmar

Thanks for your reply.


The output of the pveversion -v is:

pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.32-3-pve
proxmox-ve-2.6.32: 1.6-14
pve-kernel-2.6.32-3-pve: 2.6.32-14
qemu-server: 1.1-18
pve-firmware: 1.0-7
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1
ksm-control-daemon: 1.0-4



Ken Marcus
 
Possibly the BlueOnyx system is rewriting it.

I went ahead and just set up a cron job to run the script below to check and fix it.

#!/usr/bin/perl
# Checks /etc/sysconfig/network and, if BlueOnyx has put the GATEWAY=
# line back, comments it out and switches the gateway to the venet0 device.
use strict;
use warnings;

my $tempfile = "/root/tempfile.txt";
my $conffile = "/etc/sysconfig/network";

unlink $tempfile if -e $tempfile;

my $current = `cat $conffile`;

# Only rewrite the file (and restart the network) if the GATEWAY line
# has not already been commented out.
if ($current !~ /^#GATEWAY=192\.0\.2\.1/m) {
    open(my $in,  '<', $conffile) or die "Can't open $conffile: $!\n";
    open(my $out, '>', $tempfile) or die "Can't open $tempfile: $!\n";
    while (my $thisline = <$in>) {
        if ($thisline =~ /^GATEWAY=192\.0\.2\.1/) {
            # Comment out the GATEWAY line and use the device instead.
            print $out "#$thisline";
            print $out "GATEWAYDEV=\"venet0\"\n";
        } else {
            print $out $thisline;
        }
    }
    close($in);
    close($out);
    system("cp $tempfile $conffile");
    system("/etc/rc.d/init.d/network restart");
}
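
To run it every few minutes, an entry in /etc/crontab along these lines should do (the path /root/fix_network.pl is just an example name for the script above):

*/5 * * * * root /usr/bin/perl /root/fix_network.pl >/dev/null 2>&1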



Ken M
 
I have the same problem. After an 'apt-get -u dist-upgrade' to 1.6 the route-venet0 files disappeared, so I have to manually run

route add -net 192.0.2.0 netmask 255.255.255.0 dev venet0
route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.0.2.1

to get networking to work in the vz containers.


pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.18-4-pve
proxmox-ve-2.6.18: 1.6-8
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-4-pve: 2.6.18-8
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-8

I presume that any reboot of these containers will now require manual intervention to recover network access.

Any fix for this? I could always just add those route commands in a cron task.
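
In case it helps, the GATEWAYDEV change discussed above can also be applied from the host without entering each container. An untested sketch (container ID 101 is just a placeholder):

# rewrite the GATEWAY line inside the container and restart its network
vzctl exec 101 "sed -i -e 's|^GATEWAY=.*|GATEWAYDEV=\"venet0\"|' /etc/sysconfig/network"
vzctl exec 101 "/etc/rc.d/init.d/network restart"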