VM gateway on cluster

engin

Active Member
Feb 21, 2018
Hi all,

I'm trying to set up VM networking properly. Here is my topology:

3 Proxmox hypervisors, each with:

- eth0 -> public ethernet (190.xxx.12.xxx)
- eth1 -> private LAN

node #1 /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 191.xxx.102.11
    netmask 255.255.255.0
    gateway 191.xxx.102.1
    dns-nameservers 8.8.8.8 8.8.4.4
#Public Network

auto eth1
iface eth1 inet manual
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 10.10.1.1
    netmask 255.255.0.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -o eth0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -o eth0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -j MASQUERADE

#VM Bridge
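
(IP forwarding could also be enabled persistently via sysctl instead of the post-up echo; a minimal sketch:)

Code:
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1

# apply without a reboot:
#   sysctl --system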

node2:
- vmbr0: 10.10.2.1

node3:
- vmbr0: 10.10.3.1
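
(for completeness, node3's vmbr0 stanza would look roughly like node1's, just with its own address:)

Code:
auto vmbr0
iface vmbr0 inet static
    address 10.10.3.1
    netmask 255.255.0.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    # plus the same ip_forward / MASQUERADE post-up rules as node1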

vmbrtest (a test VM on node3):

Code:
root@vmbrtest:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface lo inet6 loopback

# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

auto eth0
iface eth0 inet static
    address 10.10.3.31
    netmask 255.255.0.0
    gateway 10.10.3.1

ping tests:

Code:
root@vmbrtest:~# ping -c 2 10.10.3.1
PING 10.10.3.1 (10.10.3.1) 56(84) bytes of data.
64 bytes from 10.10.3.1: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 10.10.3.1: icmp_seq=2 ttl=64 time=0.034 ms

--- 10.10.3.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 0.034/0.043/0.053/0.011 ms


root@vmbrtest:~# ping -c 2 10.10.1.1
PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=0.728 ms

--- 10.10.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.728/0.915/1.102/0.187 ms
root@vmbrtest:~#


root@vmbrtest:~# ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=0.885 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=1.09 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.885/0.990/1.095/0.105 ms

Routing table in vmbrtest:
Code:
root@vmbrtest:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.3.1       0.0.0.0         UG    0      0        0 eth0
10.10.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth0

Everything works as expected, but here is the catch:

After migrating a VM from node3 to node1, routing keeps working only as long as the old hypervisor (node3) stays up, so I have to update the VM's networking manually
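(i.e. roughly this inside the guest, once it lands on node1:)

Code:
# repoint the default route at the new host's bridge IP (node1 = 10.10.1.1)
ip route replace default via 10.10.1.1 dev eth0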

Here is the outgoing packet flow from the VM:
Code:
vm (on node3) -> node3 -> actual router
becoming:
Code:
vm (on node1) -> node1 -> node3 -> actual router

Can I define a global VM gateway that works across the whole cluster, e.g. 10.10.0.1?

Without this, I don't know how to implement a cluster-wide DHCP server.

thanks..
 
So basically, in every VM, can we use something like this?

Code:
auto eth0
iface eth0 inet static
    address 10.10.x.y
    netmask 255.255.0.0
    gateway 10.10.0.1     <<< cluster-wide routed gateway
or
Code:
gateway ${HYPERVISOR_VMBR0}
 


AFAIU you want the guest OS to automatically detect the current host's IP address in the 10.10.0.0/16 network and assign it as the default router.

Such a feature is not implemented, but you can write a script running on the hosts which frequently updates the IP configuration of the currently running VMs.
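
Such a script could be as simple as this rough sketch (untested; it assumes qemu-guest-agent running inside the guests and a qm version that supports `guest exec`):

Code:
#!/bin/sh
# rough sketch: push this host's vmbr0 address as the default gateway
# into every running VM via the QEMU guest agent
GW=$(ip -4 addr show vmbr0 | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}')

for VMID in $(qm list | awk '$3 == "running" {print $1}'); do
    qm guest exec "$VMID" -- ip route replace default via "$GW" \
        || echo "could not update VM $VMID" >&2
done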
 
I created a secondary interface vmbr1:0 and disabled broadcast messages for it.
It uses a static MAC address on the vmbr1 bridge, and it's working so far.

Code:
auto enp1s0f1
iface enp1s0f1 inet manual
    mtu 9000

auto vmbr1
iface vmbr1 inet static
    # hypervisor #6
    address 10.10.6.1
    netmask 255.255.0.0
    bridge_ports enp1s0f1.2001
    bridge_stp off
    bridge_fd 0

    pre-up echo 1 > /proc/sys/net/ipv4/ip_forward

    post-up   iptables -t nat -A POSTROUTING -o vmbr0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -m comment --comment "vm nat networking" -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -o vmbr0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -m comment --comment "vm nat networking" -j MASQUERADE

auto vmbr1:0
iface vmbr1:0 inet static
    # defined on all proxmox hosts
    address 10.10.0.1
    netmask 255.255.0.0
    pre-up ip -s -s neigh flush all
    post-up iptables -A FORWARD -m pkttype --pkt-type broadcast -i vmbr1:0 -j DROP
    post-up iptables -A INPUT -m pkttype --pkt-type broadcast -i vmbr1:0 -j DROP
    post-down iptables -D FORWARD -m pkttype --pkt-type broadcast -i vmbr1:0 -j DROP
    post-down iptables -D INPUT -m pkttype --pkt-type broadcast -i vmbr1:0 -j DROP

#RPN2 Network

auto enp1s0f1.2002
iface enp1s0f1.2002 inet static
    address 192.168.0.6
    netmask 255.255.255.0

# Corosync network

I don't want to go with custom scripts, because I want my DHCP setup to stay clean, with a single router IP that works on all hypervisors.
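
With 10.10.0.1 present on every node, the DHCP side could then be a dnsmasq config along these lines (illustrative range and lease time, not my actual values):

Code:
# /etc/dnsmasq.d/vm-network.conf
interface=vmbr1
dhcp-range=10.10.100.1,10.10.199.254,255.255.0.0,12h
dhcp-option=option:router,10.10.0.1
dhcp-option=option:dns-server,8.8.8.8,8.8.4.4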
 
