VMs do not reach each other via "internal" bridge

holgerb

Member
Aug 3, 2009
Hi all,

we currently have three IBM x3650 servers running Proxmox 1.5.
All servers are connected to our LAN via 100 MBit. For faster communication between the VMs running on the three servers, we have set up an internal 1 GBit network with an additional switch and created a second bridge connected to that internal network.

A typical network configuration on any of the hosts looks like this:

iface bond0 inet manual
slaves eth1 eth2
bond_miimon 100
bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
address x.x.80.87
netmask 255.255.255.0
gateway x.x.80.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

auto vmbr1
iface vmbr1 inet static
address x.x.199.87
netmask 255.255.255.0
bridge_ports bond0
bridge_stp off
bridge_fd 0

x.x.80.87 is the external interface; x.x.199.87 is the internal interface connected to the GBit LAN.
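To rule out a wiring problem on the host side, it may help to first confirm that the bridge and the bond are actually set up as intended. A minimal check on the Proxmox host could look like this (the VM interface names shown by brctl vary per VM, so treat them as examples):

```shell
# List the bridge and the interfaces enslaved to it; the VMs' tap
# interfaces should appear here alongside bond0.
brctl show vmbr1

# Check the bond status: mode should be active-backup and exactly one
# of eth1/eth2 should be listed as the currently active slave.
cat /proc/net/bonding/bond0
```

If a VM's tap interface is missing from the brctl output, the guest traffic never reaches vmbr1 in the first place, regardless of ARP.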

It is planned that each VM gets a second network interface which is connected to vmbr1.

Problem here:
While we are perfectly able to ping the internal bridge from within a VM, we are not able to ping other VMs connected to the same internal bridge. It seems to have something to do with ARP. The test VMs here were WinXP SP3 based. According to my colleague's tcpdump analysis, the following seems to happen:
Scenario: VM1 and VM2 are XP boxes on the same Proxmox host, connected to the same internal bridge.
The internal network adapter in each VM is configured with a static IP in the same subnet as the bridge (x.x.199.x).
We ping VM2 from VM1 over the internal network interface. VM2 seems to receive the ping request and answers, but the answering packet vanishes into the void :-(
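To narrow down where the reply is lost, one option is to capture on the bridge and on each VM's tap interface simultaneously while the ping runs. A sketch (the tap interface name below is a placeholder; the actual names depend on the VM IDs and are visible in `brctl show vmbr1`):

```shell
# Watch ARP and ICMP crossing the internal bridge while VM1 pings VM2.
tcpdump -n -e -i vmbr1 arp or icmp

# In a second shell, capture on VM2's tap interface (placeholder name)
# to see whether the echo reply actually leaves the guest.
tcpdump -n -e -i tap101i1 arp or icmp
```

Comparing the two captures shows whether the reply dies inside the guest, on the tap device, or on the bridge itself, which also reveals whether the ARP entries on both sides are being populated correctly.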

I haven't crosschecked with a Linux VM though.

Has anyone experienced the same problem?

TIA,
Holger