What is the difference between host and vm on the same vmbr?

vmanz

Renowned Member
Sep 15, 2010
Hi,

I ran into some strange behavior between my KVM VMs and my OpenIndiana storage box: only my VMs were unable to even ping the storage box. Maybe someone finds it useful to read this post.

Due to a fire alarm I shut down my storage box and removed it from the lab. When I connected it again, I left one NIC disconnected; its bound IP is one I do not actually use in my setup, and everything seemed to work fine. My desktop and the Proxmox host mounted the shares like before, and I went back to work.

When I installed a KVM VM that was also supposed to mount some of the same shares, I ran into trouble: the VM could not ping that IP of the storage box, although the other hosts could.
I set up a new Proxmox host without any mounts to the storage box and fired up new VMs with all kinds of NICs, from virtio to rtl8139, without success.

Then I reconnected the missing NIC, and now all IPs (even the ones bound to the other NICs) answer the VM. Also tested: if I disconnect it again, the storage box can still reach the internet, i.e. there is no problem with the default gateway.

I wrote this post because I thought VMs that use a bridged interface behave like the host, but they obviously do not.

Hope it helps.

Greets to the proxmox-team for their great work,

vmanz
 
Excuse me, but I could not understand the full depth of the problem.
From the Linux kernel's point of view, bridged interfaces are just a routed network with an ARP proxy running, so all packets do obey iptables, ebtables and other forwarding rules. (By the way, haven't you disabled IPv4 forwarding on the host?) Anyway, you can always start tcpdump, Wireshark or any other packet sniffer on your host machine and find out whether packets go missing on send or on receive, and what the picture is with ARP requests. Any other suggestions can only be made after TCP stack debugging.
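To make the sniffing suggestion above concrete, here is a minimal sketch of what such a debugging session could look like. The bridge name `vmbr0` (the Proxmox default) and the storage-box address `192.168.1.50` are assumptions; substitute your own values.

```shell
# Hypothetical values: bridge vmbr0, storage box at 192.168.1.50 -- adjust to
# your environment before running.

# Watch ARP on the bridge: do the VM's "who-has" requests for the storage box
# ever get a "is-at" reply?
tcpdump -ni vmbr0 arp and host 192.168.1.50

# Watch ICMP at the same time, to see whether the VM's echo requests appear
# on the bridge at all, and whether replies come back.
tcpdump -ni vmbr0 icmp and host 192.168.1.50

# Check whether bridged traffic is being passed through iptables, and whether
# IPv4 forwarding is enabled on the host (1 = on).
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward
```

If the ARP requests show up on the bridge but never get a reply, the problem is on the storage-box side (or its NIC/IP binding); if they never appear, look at the host's bridge and filtering rules.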
Bridged interfaces from linux kernel's point of view are just routed network with ARP proxy running. So, all packets do obey iptables, ebtables and other forwarding rules. (By the way, haven't you disable ipv4 forwarding on host?). Any way, you can always start tcpdump, wireshark or any other packet sniffer on your host machine an find out whether packets missing on send or on receive. And what is the picture with ARP requests. Any other suggestions can be only made after TCP stack debugging.