Hello,
I am learning Proxmox and would like to try migrating a VM between hosts. I am having some trouble with my networking: when the VM migrates, the client is left trying to route through the original PVE node instead of the new one.
Setup
My setup runs 3 Proxmox instances inside VirtualBox on Ubuntu (this is a setup for learning, not for production). I am using VirtualBox 6.1, as the 7.x branch has a lot of crash issues with nested VMs (this appears to be a known VirtualBox bug).
I have set up my networking following the "Routed Configuration" from the wiki (https://pve.proxmox.com/wiki/Network_Configuration). Partly because the wiki says the default bridged configuration is not supported by most hosting providers, and if I ever set up real servers I want the virtual architecture to be at least somewhat realistic. Also, I had trouble with the default bridged configuration earlier, though it might work now that I'm on the older VirtualBox.
So, here is my networking:
host is 192.168.0.33
test VM is 192.168.0.70
pve1:
Code:
auto lo
iface lo inet loopback

auto enp0s3
iface enp0s3 inet static
        address 192.168.0.35/24
        gateway 192.168.0.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/enp0s3/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.65/27
        bridge-ports none
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
pve2:
Code:
auto lo
iface lo inet loopback

auto enp0s3
iface enp0s3 inet static
        address 192.168.0.36/24
        gateway 192.168.0.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/enp0s3/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.65/27
        bridge-ports none
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
pve3:
Code:
auto lo
iface lo inet loopback

auto enp0s3
iface enp0s3 inet static
        address 192.168.0.37/24
        gateway 192.168.0.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/enp0s3/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.65/27
        bridge-ports none
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
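For reference, here is a quick check of the address block that 192.168.0.65/27 covers; the same /27 is configured on all three vmbr0 bridges, so with proxy_arp enabled every node can answer ARP for the test VM at .70 (plain POSIX shell, just arithmetic):

```shell
# Range covered by 192.168.0.65/27 -- identical on all three nodes,
# so each one claims the same 32 addresses, including the VM at .70.
prefix=27
hosts=$(( 1 << (32 - prefix) ))   # 32 addresses in a /27
base=$(( 65 & ~(hosts - 1) ))     # network base octet: 64
echo "192.168.0.$base/27 covers 192.168.0.$base - 192.168.0.$(( base + hosts - 1 ))"
# -> 192.168.0.64/27 covers 192.168.0.64 - 192.168.0.95
```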
Test case
- run a test VM on pve1: OK
- ping host -> test VM: OK
- migrate the test VM to pve2: OK
- at this point, pinging from host -> test VM dies (though the VM looks fine in its console). From the ping output, the host is still trying to route through pve1 instead of pve2
- from its console, the test VM can ping Google OK after a slight delay
- from its console, the test VM cannot ping the host
- shut down pve1 (the node the VM migrated from)
- now pings from the host try to route through pve3
- shut down pve3 (only pve2 now running)
- ping now works OK, routing through pve2 as expected
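One thing I looked at along the way (a sketch, assuming the host is Linux with iproute2; the interface name is just a placeholder): the host's neighbour cache still holds the old node's MAC for the VM's IP after the migration.

```shell
# Show which MAC the host has cached for the test VM's IP; after the
# migration this still points at pve1 until the entry times out.
ip neigh show 192.168.0.70

# Deleting the stale entry forces a fresh ARP lookup (needs root;
# replace eth0 with the host's actual interface):
# sudo ip neigh del 192.168.0.70 dev eth0
```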
So basically, with three gateways covering the same range, I think they're fighting over routing somehow, or the connecting client (in this case the host) needs to be made aware of the handoff. Is there a better way I should be doing this? My goal is to simulate failover / high availability, so that if one of pve1/2/3 dies, another can take over and clients are routed correctly to the new PVE node.
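For completeness, the default bridged setup I had trouble with earlier would avoid the routed handoff entirely, since the VM would sit on the same L2 segment from every node and keep its own MAC across migration. Roughly what I understand it to look like (a sketch based on the wiki's default bridged example, using enp0s3 as the uplink; addresses are mine, per node):

```
auto lo
iface lo inet loopback

iface enp0s3 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.35/24
        gateway 192.168.0.1
        bridge-ports enp0s3
        bridge-stp off
        bridge-fd 0
```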
Thank you