Lost bridge network connectivity after upgrade from 3.1 to 3.4

justin

New Member
Jun 30, 2015
Hello,

I have three nodes, each with internet access and an internal network between them, which is 10.10.10.0/24. After upgrading node1 from 3.1-24 to 3.4-6, my VM at 10.10.10.103 can still ping the host (10.10.10.1) but not other VMs (10.10.10.21 for example). Before the upgrade, it was working fine.

I suspect a bridge routing problem; after several hours of googling I came here. My knowledge is limited, so any ideas are welcome...

I run Debian Wheezy; here is my interfaces file:
### Hetzner Online AG - installimage
# Loopback device:
auto lo
iface lo inet loopback

# device: eth0
auto eth0
iface eth0 inet static
  address x.x.x.x
  broadcast 144.76.196.223
  netmask 255.255.255.224
  gateway 144.76.196.193
  # default route to access subnet
  up route add -net 144.76.196.192 netmask 255.255.255.224 gw 144.76.196.193 eth0

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
  address 10.10.10.1
  netmask 255.255.255.0
  bridge_ports eth1
  bridge_stp off
  bridge_fd 0
#  bridge_hello 2
#  bridge_maxage 12

auto vmbr1
iface vmbr1 inet static
  address 10.9.8.1
  netmask 255.255.255.0
  bridge_ports none
  bridge_stp off
  bridge_fd 0

Routing table:
Kernel IP routing table
Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
default          static.193.196.  0.0.0.0          UG     0       0    0    eth0
10.9.8.0         *                255.255.255.0    U      0       0    0    vmbr1
10.10.10.0       *                255.255.255.0    U      0       0    0    vmbr0
144.76.196.192   static.193.196.  255.255.255.224  UG     0       0    0    eth0
144.76.196.192   *                255.255.255.224  U      0       0    0    eth0
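
In case it is useful, this is the kind of lookup I can run on the node to see which interface the host itself picks for a VM address (plain iproute2, nothing exotic):

ip route get 10.10.10.21
# with the table above this should resolve to "dev vmbr0  src 10.10.10.1"
ip route get 10.10.10.103
# same check for the VM that is doing the pinging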


pveversion:
proxmox-ve-2.6.32: 3.4-159 (running kernel: 3.2.0-4-amd64)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-40-pve: 2.6.32-159
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
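
If more output is needed, I can also post the bridge state from the node; these are standard bridge-utils commands (the tap interface names in the output depend on the VM IDs, so take them as examples):

brctl show              # eth1 and the VM tap interfaces should appear as ports of vmbr0
brctl showmacs vmbr0    # which MAC addresses the bridge has learned, and behind which port
brctl showstp vmbr0     # per-port state, should be "forwarding" even with STP off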
 
To which bridge did you attach the interfaces of the virtual machines? From your setup, I understand you should attach the virtual interfaces to the vmbr0 bridge.
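
A quick way to confirm that on the node is to list the bridge members while the VM is running (standard bridge-utils; the tap name is only an example, it follows the tapVMIDiN pattern, e.g. tap103i0 for VM 103 / net0):

brctl show
# vmbr0 should list both eth1 and the VM's tap interface (e.g. tap103i0) as ports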
 

Yes, it's on vmbr0;
vmbr1 is almost never used.

In fact, I went through this issue live with someone who has good Linux networking knowledge for about an hour, and it looks like a very strange and complicated problem.
As far as I can understand, outgoing TCP/IP packets are sent to vmbr0 instead of eth0.

We will look into that next weekend and I will keep the list informed. In the worst case I'm ready to reinstall the node, but we want to understand what happened because I have some other nodes to update.
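
Concretely, what we plan to trace next weekend is roughly this (just a sketch with standard tools; the tap interface name is an example for VM 103 / net0):

# ping 10.10.10.21 from inside the VM at 10.10.10.103 and watch where the ICMP frames stop:
tcpdump -ni tap103i0 icmp   # do the packets leave the VM's tap interface?
tcpdump -ni vmbr0 icmp      # does the bridge itself see them?
tcpdump -ni eth1 icmp       # do they go out of the physical bridge port towards the other node?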