Hey all,
I'm having issues dealing with multiple interfaces on a KVM VM. Each VM has an eth0 which links back to vmbr0, the internet gateway. I'm trying to introduce a private network between some VMs on a second interface, and I've noticed that traffic simply won't pass between them.
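For context, the guest network config looks roughly like this (interface names and addresses below are illustrative placeholders, not my exact values):

```
# /etc/network/interfaces inside the VM (sketch; addresses are placeholders)
auto eth0
iface eth0 inet static
    address 203.0.113.10      # public side, bridged to vmbr0 on the host
    netmask 255.255.255.0
    gateway 203.0.113.1

auto eth1
iface eth1 inet static
    address 10.0.0.10         # private network, bridged to vmbr1 on the host
    netmask 255.255.255.0
```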
If I remove the gateway interface (eth0) and reboot the VM, the private network works fine.
There appears to be an issue when Proxmox KVM instances have more than one interface. When both interfaces are configured, eth0 works fine but the private bridge simply doesn't pass traffic. With two VMs, A and B, when A pings B there is no response, and tcpdump on B doesn't see any of the echo requests.
If I add static ARP entries on A and B, I can then see the echo requests on vmbr1 but still not on host B, although the RX packet counters shown by ifconfig increase in proportion to the ICMP requests.
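For anyone wanting to reproduce what I'm describing, these are the kinds of commands I'm using (the IP, MAC, and interface names here are placeholders for my actual setup):

```
# On VM A: add a static ARP entry for B so the ping doesn't stall on ARP
arp -s 10.0.0.11 52:54:00:aa:bb:cc

# On the Proxmox host: watch the private bridge for the echo requests
tcpdump -ni vmbr1 icmp

# On VM B: watch the private interface directly
tcpdump -ni eth1 icmp

# On VM B: check interface counters before/after pinging from A
ifconfig eth1 | grep 'RX packets'
```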
The routing table is fine; I don't believe this is an IP issue but rather an Ethernet/Linux bridging issue. Any similar experiences and potential fixes would be highly appreciated, as this is a blocker for us at the moment.
Running pve3.1:
root@vm2:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-19-pve: 2.6.32-93
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
root@vm2:~#
Cheers!