hi all,
i'm trying to set up VM networking properly. here is my topology:
3 Proxmox hypervisors:
- eth0 -> public ethernet (190.xxx.12.xxx)
- eth1 -> private lan
node #1 /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

# Public Network
auto eth0
iface eth0 inet static
    address 191.xxx.102.11
    netmask 255.255.255.0
    gateway 191.xxx.102.1
    dns-nameservers 8.8.8.8 8.8.4.4

auto eth1
iface eth1 inet manual
    mtu 9000

# VM Bridge
auto vmbr0
iface vmbr0 inet static
    address 10.10.1.1
    netmask 255.255.0.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -o eth0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -o eth0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -j MASQUERADE
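(side note: forwarding and the masquerade rule can be double-checked on the node with:)
Code:
# confirm ip forwarding is on
sysctl net.ipv4.ip_forward
# list the POSTROUTING nat rules; the MASQUERADE entry should show up here
iptables -t nat -S POSTROUTING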
node2:
- vmbr0: 10.10.2.1
node3:
- vmbr0: 10.10.3.1
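(node2 and node3 use basically the same interfaces file as node #1, only the vmbr0 address differs; node3's bridge stanza, for example, would be roughly:)
Code:
auto vmbr0
iface vmbr0 inet static
    address 10.10.3.1
    netmask 255.255.0.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -o eth0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -o eth0 -s 10.10.0.0/16 ! -d 10.10.0.0/16 -j MASQUERADE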
vmbrtest (test VM running on node3):
Code:
root@vmbrtest:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface lo inet6 loopback
# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
auto eth0
iface eth0 inet static
    address 10.10.3.31
    netmask 255.255.0.0
    gateway 10.10.3.1
ping tests:
Code:
root@vmbrtest:~# ping -c 2 10.10.3.1
PING 10.10.3.1 (10.10.3.1) 56(84) bytes of data.
64 bytes from 10.10.3.1: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 10.10.3.1: icmp_seq=2 ttl=64 time=0.034 ms
--- 10.10.3.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 0.034/0.043/0.053/0.011 ms
root@vmbrtest:~# ping -c 2 10.10.1.1
PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=0.728 ms
--- 10.10.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.728/0.915/1.102/0.187 ms
root@vmbrtest:~#
root@vmbrtest:~# ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=0.885 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=1.09 ms
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.885/0.990/1.095/0.105 ms
routing in vmbrtest:
Code:
root@vmbrtest:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.3.1       0.0.0.0         UG    0      0        0 eth0
10.10.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth0
everything works as expected, but here is the deal:
after migrating a VM from node3 to node1, routing keeps working only because the VM still sends its traffic through node3; as soon as the old hypervisor goes down, the VM loses its gateway.
so i have to update the VM's networking manually;
here is the outgoing packet flow from the vm:
Code:
vm(on node3) -> node3 -> actual router
becoming:
Code:
vm(on node1) -> node1 -> node3 -> actual router
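(the manual fix today is basically to repoint the migrated VM's default gateway at its new node, e.g. after a move to node1:)
Code:
# inside the migrated VM: switch the default route from node3's vmbr0 to node1's vmbr0
ip route replace default via 10.10.1.1 dev eth0
# and change the gateway line in the VM's /etc/network/interfaces so it survives a reboot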
can i define a global vm gateway that works across the whole cluster, something like 10.10.0.1?
without this, i don't see how to implement a cluster-wide dhcp server.
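(just to illustrate what i mean: with one gateway address like 10.10.0.1 valid everywhere, a cluster-wide dhcp server, dnsmasq for example, could hand every VM the same router option; the range below is made up:)
Code:
# hypothetical dnsmasq snippet, only workable if 10.10.0.1 is reachable from every node
interface=vmbr0
dhcp-range=10.10.100.1,10.10.199.254,255.255.0.0,12h
dhcp-option=option:router,10.10.0.1
dhcp-option=option:dns-server,8.8.8.8,8.8.4.4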
thanks..