routing between vms on same system but different interfaces

nt_support

New Member
Mar 29, 2011
Hoping someone can explain what's happening here:

Have:

Proxmox server (clustered, but that's not relevant here).
The server has two NICs, bridged as vmbr0 (eth0) and vmbr1 (eth1).

VM1 - OpenVZ (Linux) - venet0:0 via vmbr0 - network1

VM2 - KVM (Linux) - eth0 via vmbr1 - network2

There is a firewall between network1 and network2

When I try to ping between the VMs, some traffic goes through the firewall and some appears to go internally, but more importantly VM1 doesn't see the traffic from VM2.

I used tcpdump to monitor, and what I found was that if I check the Proxmox interfaces (eth0, vmbr0) I can see the traffic inbound for VM1, but I can't see it on the VM1 interface (venet0:0). I do see all other traffic inbound for VM1 on the venet0:0 interface, so it's working fine for everything else.

To be specific about what I see:

traffic -> eth0 on PM server (ok) -> vmbr0 on PM server (ok) -> venet0 on VM1 (nothing)

I can (sort of) understand how some traffic goes via the FW and some goes directly between interfaces, but what I can't understand is why VM1 doesn't see the traffic from VM2.

Any pointers or help appreciated.
thanks
Stu
 
Actually I think I've half solved my issue.

Basically the OpenVZ model was routing traffic to network2 straight out of the vmbr1 interface, because venet traffic is routed by the Proxmox host itself.

I want the system to route each network separately, and since the VM on vmbr1 is KVM and bridged, I figure what I need to do is bring up vmbr1 with no IP. Is this possible?
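For what it's worth, an IP-less bridge is possible with Debian-style networking (which Proxmox uses) by declaring the bridge `inet manual`. A minimal sketch, assuming eth1 is the only port on vmbr1:

```
# /etc/network/interfaces -- sketch: bring vmbr1 up with no IP address
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
```

With `inet manual` the bridge still forwards frames for the bridged KVM guest, but the host itself has no address (and so no route) on that segment.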
 
Well, I'm not sure about having no IP on the vmbr, but I solved my issue by assigning the vmbr1 interface a /32 address in network2.
Of course I can't route to it directly, but I manage the host through my vmbr0 interface.
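In /etc/network/interfaces terms, the /32 workaround above might look roughly like this (a sketch; 192.168.2.10 is a stand-in since the actual network2 addressing isn't given):

```
# /etc/network/interfaces -- sketch of the /32 workaround on vmbr1
auto vmbr1
iface vmbr1 inet static
        address 192.168.2.10      # hypothetical network2 address
        netmask 255.255.255.255   # /32: host gets no connected route into network2
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
```

The point of the /32 is that the host's routing table gains no on-link route for network2, so traffic between the VMs can't be short-circuited inside the host and has to go out via the external firewall.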

I now have physical routing between my OpenVZ VM on vmbr0 and my KVM VM on vmbr1 via an external firewall.

cheers