VM1 cannot ping VM2 on different VMBR

Mar 28, 2018
Hi,

My problem is that VM1 cannot ping VM2 on a different VMBR.

We've got 3 Proxmox servers running as a cluster. Each server has its own /29 subnet
(e.g. IPs 1.2.3.1/29, 1.2.3.9/29, 1.2.3.17/29) and some failover IPs for
critical VMs (e.g. IPs 2.3.4.1, 2.3.4.2), so that these VMs can migrate between servers.
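
In other words, the layout looks roughly like this (same illustrative addresses as above; the VM ranges are just the remaining host addresses of each /29):

Code:
Server 1:  vmbr0 = 1.2.3.1/29    (VM addresses 1.2.3.2 - 1.2.3.6)
Server 2:  vmbr0 = 1.2.3.9/29    (VM addresses 1.2.3.10 - 1.2.3.14)
Server 3:  vmbr0 = 1.2.3.17/29   (VM addresses 1.2.3.18 - 1.2.3.22)
Failover:  2.3.4.1 -> 192.168.0.162, 2.3.4.2 -> 192.168.0.163 (see the NAT rules below)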

Our current setup is like this:
- VMs on the server subnet (e.g. IP 1.2.3.2) use VMBR0
- VMs that use the failover IPs sit on an internal subnet (192.168.0.0/24, mapped to the failover IPs) on VMBR1; an example guest config for each bridge is sketched below
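
For clarity, the guests are configured roughly like this (simplified, with the example addresses from above; the gateways are the host's bridge addresses):

Code:
# VM on VMBR0 (public /29, gateway is the host's vmbr0 address)
auto eth0
iface eth0 inet static
    address  1.2.3.2
    netmask  255.255.255.248
    gateway  1.2.3.1

# VM on VMBR1 (internal subnet, NATed to a failover IP on the host)
auto eth0
iface eth0 inet static
    address  192.168.0.162
    netmask  255.255.255.0
    gateway  192.168.0.1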

The problem is that a VM inside the server subnet (on VMBR0) cannot reach a VM on a failover IP.
There is not even a response to a ping.


Situation for VMs on VMBR0:
+ they can ping each other
+ they can ping all 3 Proxmox-servers
+ they can ping the outside e.g. google.com
- they cannot ping VMs on VMBR1

Situation for VMs on VMBR1:
+ they can ping each other
+ they can ping all 3 Proxmox-servers
+ they can ping the outside e.g. google.com
- they cannot ping VMs on VMBR0

Situation for the 3 Proxmox servers:
+ they can ping each other
+ they can ping the outside e.g. google.com
+ they CAN ping VMs on VMBR0
+ they CAN ping VMs on VMBR1
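
If it helps with narrowing this down, I can run something like the following on the host while pinging from a VM on VMBR0 to a VM on VMBR1 (the capture points and filters are just my best guess at where to look):

Code:
# Does the ping reach the host via the source bridge?
tcpdump -ni vmbr0 icmp

# Does it ever show up on the destination bridge?
tcpdump -ni vmbr1 icmp

# Is forwarding enabled, and do any iptables rules drop or rewrite the packets?
sysctl net.ipv4.ip_forward
iptables -L FORWARD -v -n
iptables -t nat -L -v -n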

Here's the interfaces file of one of the servers in the cluster:


Code:
# Loopback device:
auto lo
iface lo inet loopback

# 1Gbit (external)
auto eno1
iface eno1 inet static
    address       4.3.2.6            # ServerIP
    netmask       255.255.255.255
    pointopoint   4.3.2.2
    gateway       4.3.2.2

# Failover-IPs
    up ip addr add 2.3.4.1/32 dev eno1
    up ip addr add 2.3.4.2/32 dev eno1

# Failover-IP Routing
    post-up iptables -t nat -A PREROUTING  -d 2.3.4.1          -j DNAT --to-destination   192.168.0.162
    post-up iptables -t nat -A POSTROUTING -s 192.168.0.162    -j SNAT --to-source        2.3.4.1
    post-up iptables -t nat -A PREROUTING  -d 2.3.4.2          -j DNAT --to-destination   192.168.0.163
    post-up iptables -t nat -A POSTROUTING -s 192.168.0.163    -j SNAT --to-source        2.3.4.2


# 1Gbit (internal)
auto eno2
iface eno2 inet manual
    mtu 9000


# 10Gbit (internal)
auto enp5s0
iface enp5s0 inet manual
    mtu 9000


#Ceph Bond
auto bond0
iface bond0 inet static
    bond-slaves enp5s0 eno2
    bond-mode active-backup
    bond-miimon 100
    address 10.10.10.1
    netmask 255.255.255.0
    network 10.10.10.0
    broadcast 10.10.10.255
    mtu 9000


# /29 Subnet 1.2.3.1/29
auto vmbr0
iface vmbr0 inet static
    address       1.2.3.1
    netmask       255.255.255.248
    gateway       4.3.2.6
    bridge_ports  none
    bridge_stp    off
    bridge_fd     0


# /24 Internal Subnet 192.168.0.0/24
auto vmbr1
iface vmbr1 inet static
    address       192.168.0.1
    netmask       255.255.255.0
    bridge_ports  none
    bridge_stp    off
    bridge_fd     0

Any ideas how I can get the VMs to reach each other?
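
From reading around, my guess is that the broad SNAT/DNAT rules or missing forwarding rules between the two bridges might play a role. Would something along these lines be the right direction? (Completely untested; the interface names and addresses are just taken from the config above.)

Code:
# Only SNAT traffic that actually leaves via the external interface
iptables -t nat -A POSTROUTING -s 192.168.0.162 -o eno1 -j SNAT --to-source 2.3.4.1
iptables -t nat -A POSTROUTING -s 192.168.0.163 -o eno1 -j SNAT --to-source 2.3.4.2

# Explicitly allow forwarding between the two bridges
iptables -A FORWARD -i vmbr0 -o vmbr1 -j ACCEPT
iptables -A FORWARD -i vmbr1 -o vmbr0 -j ACCEPT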
 
