OpenVZ venet network problems after host network changes

Jason Calvert

Hi,
I have been running Proxmox 3.x for 12+ months (mostly) without problems, with both OpenVZ containers using venet IPs and KVM VMs.
Until recently my network config looked like this:
eth0,eth1 -> bond0 -> vmbr0, with IP on the bridge only

I have now separated out the eth1 interface and dedicated it to NFS traffic on a different subnet, so the bridge config is just:
eth0 -> vmbr0, with same IP on the bridge

(also eth2 for GlusterFS replication over a crossover cable, and eth3 for port sniffing; both unchanged)
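
For reference, the old bonded layout in /etc/network/interfaces looked roughly like this (reconstructed from memory, so the bond options may not be exact):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.14
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0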

Since the change, most (but not all) OpenVZ containers have no network connectivity!
If I run 'vzctl set <ctid> --ipadd <ip> --save', it seems to bounce the venet interface and restores connectivity. However, the container then won't shut down.
Stopping and restarting the OpenVZ container returns it to the broken state: the IP is in the config, but it doesn't work until I re-run the --ipadd command.
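
For clarity, this is the exact workaround, using my syslog container (CTID 105, IP 192.168.0.16) as the example:

root@proxmox1:~# vzctl set 105 --ipadd 192.168.0.16 --save

After that the container is reachable again, but only until the next stop/start cycle.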

I even tried a new OpenVZ container (same result), removed and reinstalled one of the Proxmox nodes (same result), and restored a container from backup (same result).

Any ideas please?
Thanks
Jason
 
/etc/network/interfaces:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address 192.168.9.14
        netmask 255.255.255.0

auto eth2
iface eth2 inet static
        address 10.0.0.1
        netmask 255.255.255.0

iface eth3 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.14
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth3
        bridge_stp off
        bridge_fd 0

auto vmbr7
iface vmbr7 inet manual
        bridge_ports eth2.7
        bridge_stp off
        bridge_fd 0

eth0 = Proxmox and VM traffic
eth1 = NFS traffic
eth2 = Replication between nodes for GlusterFS; also a test DMZ (VLAN 7)
eth3 = Port sniffer interface (for a Snort VM)


root@proxmox1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.f81a670059b8       no              eth0
vmbr1           8000.e8393512a543       no              eth3
vmbr7           8000.e8393512a542       no              eth2.7


thanks,
Jason
 
I discovered that after the reinstall of node "proxmox1" the NICs were in the wrong order. I have now corrected this (in /etc/udev/rules.d/70-persistent-net.rules), so the output of brctl is now:

root@proxmox1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.78acc0b0cbca       no              eth0
vmbr1           8000.f81a670059b8       no              eth3
vmbr7           8000.e8393512a543       no              eth2.7
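
For anyone hitting the same thing: that udev rules file pins interface names to MAC addresses with entries like the one below (the MAC shown is illustrative, derived from the vmbr0 bridge ID above). I fixed the NAME= assignments to match the physical NICs and rebooted:

# Example entry; the file is normally auto-generated by udev
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="78:ac:c0:b0:cb:ca", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"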


Within the OpenVZ VM when it is not working:


[root@syslog /]# ip addr
1: lo: <LOOPBACK> mtu 16436 qdisc noop state DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: venet0: <BROADCAST,POINTOPOINT,NOARP> mtu 1500 qdisc noop state DOWN
    link/void

After resetting its IP using 'vzctl set':
[root@syslog /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/void
    inet 127.0.0.1/32 scope host venet0
    inet 192.168.0.16/32 brd 192.168.0.16 scope global venet0:0
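
One more diagnostic note for anyone else: venet networking relies on a per-IP /32 route on the host pointing at venet0, so when a container is in the broken state it is worth checking the node as well. This is what I would expect to see while it is working:

root@proxmox1:~# ip route | grep 192.168.0.16
192.168.0.16 dev venet0  scope link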


Jason
 
Hello,

The container config is below:

root@proxmox1:~# cat /etc/pve/openvz/105.conf
ONBOOT="no"

PHYSPAGES="0:512M"
SWAPPAGES="0:512M"
KMEMSIZE="232M:256M"
DCACHESIZE="116M:128M"
LOCKEDPAGES="256M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="4G:4613734"
DISKINODES="800000:880000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"

# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="1"
HOSTNAME="syslog.gumby.local"
SEARCHDOMAIN="gumby.local"
NAMESERVER="192.168.0.5 192.168.0.3"
IP_ADDRESS="192.168.0.16"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/mnt/pve/NFS-Datastore1/private/105"
OSTEMPLATE="centos-6-standard_6.3-1_i386.tar.gz"


Jason
 
I have determined the cause of this issue.
It was only happening for containers on NFS storage, not local storage, and it started after a recent update of my FreeNAS server.
I don't know exactly what changed, but it appears that update caused an incompatibility between Proxmox and NFS on ZFS!

After the following ZFS parameter changes, the problem was resolved:

zfs set aclmode=passthrough vol0/NFS/VM-Datastore1
zfs set aclinherit=passthrough-x vol0/NFS/VM-Datastore1
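
To confirm the settings took effect:

zfs get aclmode,aclinherit vol0/NFS/VM-Datastore1
NAME                    PROPERTY    VALUE          SOURCE
vol0/NFS/VM-Datastore1  aclmode     passthrough    local
vol0/NFS/VM-Datastore1  aclinherit  passthrough-x  local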

Jason
 
