Hi all,
This is a bit of a long shot, but just in case: I've got a client still running a few legacy Proxmox servers on Proxmox 5.x, which we haven't migrated up to Proxmox 6 quite yet. (Sigh.)
I had to deploy a new host for them yesterday. Their environment is all on OVH_SYS. The new box was deployed by starting from the stock Debian 9 template and then adding the Proxmox-on-Debian install config on top, which I've normally done without any serious drama.
One weird new thing is happening that is making me pull out some hair, and I'm curious whether anyone has seen this before or can comment.
The initial stock config of the Debian host was bridgeless: a single IP address on the public physical interface.
I changed the config to a more Proxmox-friendly setup with two bridges: vmbr0 sits on the physical interface and holds the public IP address.
vmbr1 is bound to a dummy0 interface and carries a private LAN range behind it; it is also linked to a tinc VPN between the cluster nodes, and VMs bridge only to this range/interface. I'm doing NAT on the public interface, so things bound to the private range use the Proxmox physical host as their gateway to reach the internet.
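(For completeness, the NAT piece is nothing exotic; assuming vmbr0 as the egress interface and 192.168.95.0/24 as the private range, it amounts to something like the following sketch; adjust to taste.)

```shell
# Allow the host to forward packets (persist via net.ipv4.ip_forward=1
# in /etc/sysctl.conf):
sysctl -w net.ipv4.ip_forward=1

# Masquerade the private VM range out through the public bridge:
iptables -t nat -A POSTROUTING -s 192.168.95.0/24 -o vmbr0 -j MASQUERADE
```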
What is strange is that when I boot this new Proxmox host, there are unwanted references to the old physical interface in the route table, which I must manually delete before VM routing out to the internet works as desired. I cannot find anything in my /etc/network directory that would put this config there, but it definitely reappears on every reboot. I've put in a ticket with OVH Support to ask nicely, "hey, is there some network customization or script hidden somewhere that manually sets this route info?", but I'm not holding my breath; I expect them to say "ha ha, sorry, that is internal OS config work, not our problem, have a nice day!" (But maybe they will surprise me?!)
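Since the routes must come from somewhere outside /etc/network/interfaces, a brute-force grep for the gateway IP across the usual suspects can narrow down the culprit. One thing worth checking on provider images in particular is whether systemd-networkd is enabled with a *.network file under /etc/systemd/network that still configures eno1 directly (ifupdown knows nothing about those). A rough sketch of the hunt (the `find_route_refs` helper name is mine, just for illustration):

```shell
# find_route_refs PATTERN DIR...: print files under DIR... that mention
# PATTERN (e.g. the gateway IP), so we can see what re-creates the route.
find_route_refs() {
    pattern="$1"; shift
    grep -rl "$pattern" "$@" 2>/dev/null
}

# Example sweep; substitute your real gateway for the placeholder:
#   find_route_refs "1X2.Y5.31.254" /etc/network /etc/systemd /etc/cron.d
#   ls -l /etc/network/if-up.d/              # ifupdown hook scripts
#   ls /etc/cloud/cloud.cfg.d/ 2>/dev/null   # cloud-init, if present
#   systemctl is-enabled systemd-networkd    # provider images sometimes use it
```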
So... here is what I've got in my /etc/network/interfaces file presently.
Code:
root@proxmox:/etc/network# cat interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
iface eno1 inet manual
# vmbr0: Bridging. Make sure to use only MAC adresses that were assigned to you.
# NOTE X AND Y ARE HERE JUST TO OBSCURE MY ACTUAL PUBLIC IP IN FORUM POST.
# THEY REPRESENT A SINGLE DIGIT EACH FOR VALID IP-GW ...
auto vmbr0
iface vmbr0 inet static
address 1X2.Y5.31.93/24
gateway 1X2.Y5.31.254
bridge_ports eno1
bridge_stp off
bridge_fd 0
#public
# Internal Network
auto vmbr1
iface vmbr1 inet static
address 192.168.95.251/24
bridge_ports dummy0
bridge_stp off
bridge_fd 0
#private
root@proxmox:/etc/network#
And when I reboot this dude, here is what's active at first, per the output of 'route':
Code:
root@proxmox:/etc/network# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 1X2.Y5.31.254 0.0.0.0 UG 0 0 0 eno1
default 1X2.Y5.31.254 0.0.0.0 UG 0 0 0 vmbr0
1X2.Y5.31.0 0.0.0.0 255.255.255.0 U 0 0 0 eno1
1X2.Y5.31.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
192.168.95.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr1
224.0.0.0 0.0.0.0 240.0.0.0 U 0 0 0 vmbr1
root@proxmox:/etc/network#
i.e., I've got duplicate entries for the default gateway, one referencing eno1 and one referencing vmbr0,
and likewise duplicate entries for the 1X2.Y5.31.0 destination, again one each for eno1 and vmbr0.
At this point, VMs running here cannot ping out to Google.
If I then manually delete the unwanted route entries like so:
Code:
route delete default gw 1X2.Y5.31.254 eno1
route del -net 1X2.Y5.31.0 gw 0.0.0.0 netmask 255.255.255.0 dev eno1
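(Equivalently, with iproute2, which I'd normally reach for on newer systems, the same two deletions would be:)

```shell
# iproute2 equivalents of the net-tools commands above:
ip route del default via 1X2.Y5.31.254 dev eno1
ip route del 1X2.Y5.31.0/24 dev eno1
```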
Then those unwanted lines are gone, and I am left with:
Code:
root@proxmox:/etc/network# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 1X2.Y5.31.254 0.0.0.0 UG 0 0 0 vmbr0
1X2.Y5.31.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
192.168.95.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr1
224.0.0.0 0.0.0.0 240.0.0.0 U 0 0 0 vmbr1
root@proxmox:/etc/network#
And now, inside my VM guests, ping to Google works just dandy.
If I reboot the physical server, the unwanted route entries re-appear and I am back where I started.
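Until the real source turns up, a stopgap I'm considering is deleting the stray routes from post-up hooks on vmbr0 in /etc/network/interfaces (the `|| true` keeps ifup happy on the off chance the routes aren't there):

```shell
# Fragment for /etc/network/interfaces, added under the existing vmbr0 stanza:
iface vmbr0 inet static
    ...
    post-up ip route del default via 1X2.Y5.31.254 dev eno1 || true
    post-up ip route del 1X2.Y5.31.0/24 dev eno1 || true
```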
So ... gobs of fun.
If anyone has ever banged their head against this and has any thoughts, please let me know.
(BTW, it seems OVH have removed their pre-cooked Proxmox templates? At least on the product line I bought a server from yesterday, that is the case. I'm not sure whether they found Proxmox was interfering with their VMware resale, or they're simply letting people who are comfortable with a DIY install use Proxmox that way rather than via a pre-cooked template.)
Anyhoo.
Many thanks if you read this far.
-Tim