venet


thefool808

I'm trying to understand venet. I can't seem to figure out how to control which physical interface on the hardware node a specific venet interface's traffic goes...

Looking at /usr/sbin/vznetaddbr it looks like it always defaults to vmbr0... is this correct? I haven't had any networking problems, but it seems I would probably want to control which containers are on which physical interface to balance the load (I had to do this a lot on vmware...).
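For reference, venet interfaces are not bridged at all - the host routes their traffic itself - so /usr/sbin/vznetaddbr only comes into play for veth interfaces. If you switch a container to veth, the bridge can be chosen per interface; a sketch, assuming vzctl's --netif_add syntax and example names (CT 101, vmbr1):

```shell
# Attach a new veth interface in container 101 to bridge vmbr1
# (fields: ifname,mac,host_ifname,host_mac,bridge - empty fields are auto-generated)
vzctl set 101 --netif_add eth0,,,,vmbr1 --save
```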

Thanks.

Julian
 
Interestingly that means that every openvz VE has access to every network that the PVE is connected to. For example, I have two NICs in the PVE server, one with an IP of 192.168.1.1 and one with an IP of 192.168.2.1. Also, I have a VE with an IP of 192.168.1.2. That VE is able to ping machines on the 192.168.2.0 subnet.

I guess this makes perfect sense, but I found it a little surprising.

Anyway, if I wanted to prevent that, then I would use a veth device on one of the vmbr devices (at the expense of speed and CPU cycles).
 
Oops. That's not good.

After reading the network model document I understood bridges as virtual network switches with no connection to each other unless there is some VM playing the router role between them.

I planned to have this special setup:

- Having multiple physical eth's on my machine.
- Install IPCop or Endian FW as Firewall/Router with 2 red, 1 blue, 1 orange, 1 green interface
- have several OVZ VMs in blue and orange network
- have several OVZ and KVM (Windows) VMs in the green network
- have physical clients in blue, orange and green network

I don't want any VM passing packets to the other networks in any way other than through the IPCop/Endian firewall.

If the routing between different bridges for venet devices (also KVM network devices?) is done through the PVE kernel, offering multiple bridges makes no sense to me.
It's like connecting different network switches that were bought to separate networks by a cable to combine the networks.

Could you please tell me how to disable this kernel routing functionality?
It really only makes sense to have multiple bridges when they cannot talk to each other without 'external' router help.


Thanks,
Mike
 
Please create bridges without routes (no IP/netmask - use 0.0.0.0 as the IP address on the web interface). That way there is no route, which is what you want.
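On the host that corresponds to a bridge stanza without any address in /etc/network/interfaces; a minimal sketch, assuming example names vmbr1/eth1:

```
auto vmbr1
iface vmbr1 inet manual
	bridge_ports eth1
	bridge_stp off
	bridge_fd 0
```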

Maybe we need additional protection, but I am not sure. If you only use bridged networking you can try to turn off routing in /etc/sysctl.conf:

Code:
net.ipv4.conf.default.forwarding=0
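To make that effective without a reboot, something like this should work (run as root; including conf.all is my assumption, to also cover interfaces that already exist):

```shell
# Disable IPv4 forwarding immediately
sysctl -w net.ipv4.conf.default.forwarding=0
sysctl -w net.ipv4.conf.all.forwarding=0
# Verify the current value
sysctl net.ipv4.conf.all.forwarding
```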

It would also be possible to disable routing with iptables, but I have never tried that.
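An untested sketch of the iptables variant would simply change the default policy of the FORWARD chain, so the kernel drops anything it would otherwise route between interfaces:

```shell
# Drop all forwarded (routed) packets by default
iptables -P FORWARD DROP
# Inspect the chain to confirm
iptables -L FORWARD -n
```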

- Dietmar
 
- Install IPCop or Endian FW as Firewall/Router with 2 red, 1 blue, 1 orange, 1 green interface

I would like to play around with that myself, but unfortunately had no time so far.

Please can you post your results here - I am quite curious if it works as expected.

- Dietmar
 
To be clear, this only occurs when using venet networking in an OpenVZ VE. I've tested to make sure this does NOT occur when using a veth (vmbr) device in an OpenVZ VE, or when using a bridged (vmbr) device within a KVM virtual machine.

As I understand it, since the bridge is bound to a specific eth device on the host, by default it will not send traffic on any other eth device.

When people are putting vmbr devices on public facing eth devices, I think a good strategy is to NOT give the vmbr any ip address (as Dietmar points out). This will prevent anybody from ever having access to the server itself on that interface.
 
As I understand it, since the bridge is bound to a specific eth device on the host, by default it will not send traffic on any other eth device.

Any guest on the bridge can send traffic to the host. But usually there is no route back. I am not sure if that is a big problem.

- Dietmar
 
Hello Dietmar,

sorry for the delay, but here now is a result of what I have been working on over the last months:

venet is not very useful for me, because PVE plays a bridge role: I cannot reach the networks I want, and no one can reach me (from the server's point of view).

So the only useful way forward for me is to use virtual network devices; but I'd also like to have multiple of them in OVZ containers - that would be really great.

I just bought an Eicon Diva Server Card BRI-4/8M V1.0 and I am planning to use it for a freePBX installation inside a Debian OVZ container. I have some how-tos, so I'll let you know.

I am happy about 1.0 but worried about updating because of GRUB: I only have a 4 TB GPT partition, and thus it won't work.
To be honest: PVE is meant for really large systems, so why don't you go for GRUB2?

After now months of running our system in production environment I can say:
GREAT WORK !!!
No problems at all.
We had one power failure - longer than the UPS was able to run - and there's one wish:

having a loading priority option, meaning: start OS<NR> only when another OS<NR> is up, tested by a ping or HTTP check or so.

Thanks,
Mike
 
So the only useful way forward for me is to use virtual network devices; but I'd also like to have multiple of them in OVZ containers - that would be really great.

That is already on our todo list.

I am happy about 1.0 but worried about updating because of GRUB: I only have a 4 TB GPT partition, and thus it won't work.
To be honest: PVE is meant for really large systems, so why don't you go for GRUB2?

Because it is pre-alpha - I tested it recently and AFAIK many things do not work currently.

and there's one wish: having a loading priority option, meaning: start OS<NR> only when another OS<NR> is up, tested by a ping or HTTP check or so.

OK, added to our TODO

- Dietmar
 
