OpenVZ virtual host with 2 venet interfaces?

fortechitsolutions

Renowned Member
Jun 4, 2008
Hi, I'm curious: is it possible to have 2 venet interfaces on an OpenVZ virtual host?

I've got 2 interfaces on my physical ProxVE host:

eth0 is bridged to vmbr0, which connects to the public internet
eth3 is bridged to vmbr1, which connects to a private NFS gig-ether subnet
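For context, the host side of that setup would look roughly like this in /etc/network/interfaces on the ProxVE box (a sketch only; the addresses here are placeholders, not values from this thread):

# public bridge, carrying eth0
auto vmbr0
iface vmbr0 inet static
        address 129.XXX.213.200
        netmask 255.255.255.0
        gateway 129.XXX.213.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# private NFS gig-ether bridge, carrying eth3
auto vmbr1
iface vmbr1 inet static
        address 192.168.100.2
        netmask 255.255.255.0
        bridge_ports eth3
        bridge_stp off
        bridge_fd 0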

I would like to run a virtual host (a web server) that makes some data accessible which lives, in part, on the private NFS gig-ether subnet. I.e., I need access to the private NFS gig-ether subnet from my OpenVZ virtual host.

Or am I restricted to only a single interface with OpenVZ's venet networking?

Would I then need to (a) use veth based interfaces in my OpenVZ host, or (b) use KVM virtualization instead?

If I can keep this virtual host on OpenVZ that would be nicest.
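(For reference, option (a) would presumably look something like the following; the veth101.0 host-side device name is the OpenVZ vethCTID.N default, and none of this was tested in this thread:)

# on the ProxVE host: give CT 101 a veth pair named eth1 inside the container
vzctl set 101 --netif_add eth1 --save
# once the container is running, attach the host end of the pair to the private bridge
brctl addif vmbr1 veth101.0
# then configure eth1 inside the container as usual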

Thanks!


Tim Chipman
 
Doh, I think this was easier than I had convinced myself it should be.

A quick hint from the OpenVZ wiki, http://wiki.openvz.org/Venet, suggested that multiple venet interfaces could simply be configured by adding a second IP.

So:

- SSH onto the ProxVE virtualization host
- issue the command "vzctl set 101 --ipadd 192.168.100.101 --save"
- take a look at the ProxVE web interface / overview for this virtual host; it now shows both IPs comma-delimited where it previously showed just one
- start up the host, and bingo, we have 2 virtual interfaces up and I can ping where I want to
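(For what it's worth, --ipadd with --save simply records the extra address in the container's OpenVZ config; /etc/vz/conf/101.conf should end up with a space-separated list along these lines, public address masked as in the output below:)

IP_ADDRESS="129.XXX.213.201 192.168.100.101"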

The result, from inside the container:

moa-2:~# vzctl enter 101
entered into CT 101
[root@moaweb /]# ifconfig -a
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:396 (396.0 b)  TX bytes:396 (396.0 b)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:9000  Metric:1
          RX packets:2214 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1212 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2873575 (2.7 MiB)  TX bytes:71251 (69.5 KiB)

venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:129.XXX.213.201  P-t-P:129.XXX.213.201  Bcast:129.XXX.213.201  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:9000  Metric:1

venet0:1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.100.101  P-t-P:192.168.100.101  Bcast:192.168.100.101  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:9000  Metric:1

[root@moaweb /]#


[root@moaweb /]# ping www.slashdot.org
PING www.slashdot.org (216.34.181.48) 56(84) bytes of data.
64 bytes from star.slashdot.org (216.34.181.48): icmp_seq=1 ttl=241 time=46.5 ms
64 bytes from star.slashdot.org (216.34.181.48): icmp_seq=2 ttl=241 time=46.9 ms
64 bytes from star.slashdot.org (216.34.181.48): icmp_seq=3 ttl=241 time=47.0 ms

--- www.slashdot.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 46.569/46.878/47.089/0.223 ms

Now try the private NFS server interface:

[root@moaweb /]# ping 192.168.100.1
PING 192.168.100.1 (192.168.100.1) 56(84) bytes of data.
64 bytes from 192.168.100.1: icmp_seq=1 ttl=63 time=201 ms
64 bytes from 192.168.100.1: icmp_seq=2 ttl=63 time=0.137 ms

--- 192.168.100.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.137/100.855/201.574/100.719 ms
[root@moaweb /]#
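(The next step, the mount attempt itself, would be something like this, with the export path here purely hypothetical:)

[root@moaweb /]# mount -t nfs 192.168.100.1:/export /mnt/nfs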



It seems my only remaining problem is that the NFS server is denying me permission to mount the export. A bit odd, because I can do the mount from the ProxVE physical host, and the 192.168.100.0 subnet is trusted. I don't quite follow, but I'll get there soon, I hope.
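(Two usual suspects with NFS clients inside OpenVZ containers, neither confirmed as the cause here: the container may need NFS support switched on as a feature, and mount requests from a container often originate from an unprivileged port, which a default "secure" export refuses. The checks would go roughly like this, with /export as a placeholder path:)

# on the ProxVE host: let CT 101 act as an NFS client
vzctl set 101 --features nfs:on --save
vzctl restart 101

# on the NFS server: allow mounts from unprivileged source ports,
# e.g. in /etc/exports:
/export  192.168.100.0/24(rw,insecure)
# then reload the export table
exportfs -ra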


--Tim
 
Brief update, for any who are interested...

So, for now, I simply went with a KVM based virtual machine instead. Setting up the interface-to-bridge association was trivial, and the NFS client worked painlessly once I tweaked iptables on the client. As a result, I expect I won't bother with an OpenVZ based fix for this for now.
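(The iptables tweak isn't shown in the thread; for a client that was dropping the private-subnet traffic, a minimal rule would be along these lines, reusing the subnet from above:)

# accept traffic from the trusted NFS subnet ahead of any DROP rules
iptables -I INPUT -s 192.168.100.0/24 -j ACCEPT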

--Tim