more than one IP -> routing | deleting one of them -> routing

etron770

Well-Known Member
Feb 14, 2018
It looks like Proxmox has a bug in its network configuration.
Reproduction of the error:
1. Add a public IPv4 network to a VM.
2. Add an internal IPv4 network, e.g., 10.8.0.1.

Proxmox then sets the default route to the internal network.

I don't know exactly how it happened.
I may have deleted the public IPv4 address – that should be the final state – and added it again.

In any case, the result was that only an internal 10.8.0.x route was available.

Deleting this IP then meant that there was no IPv4 route at all, even though a public IPv4 address was still available.
Deleting this public IPv4 address and setting it again restored the route.

Desired state:
If there are multiple IP addresses, a default route should be selectable.
When an IP is deleted, the next IP should be entered as the route.

 
Would you like to describe the steps you actually did and what configuration you changed where? As far as I could understand:
  1. You added a network interface to a virtual machine.
  2. You assigned an IP address (somehow) to this network interface.
  3. You added another IP address to the VM.
  4. Now routing in the VM is not as expected?
For the basics: if you add multiple IP addresses, each with its own gateway, you need to provide additional routing configuration, such as which gateway is to be used for which routes.
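With ifupdown, that additional routing configuration can be sketched with post-up hooks, for example via source-based policy routing. This is a minimal sketch, not taken from the setup in this thread: the interface names, addresses, and routing table number (eth0/eth1, 203.0.113.x, 10.8.0.x, table 100) are all placeholders.

```text
# /etc/network/interfaces (sketch; names/addresses are examples)
auto eth0
iface eth0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1          # only ONE interface gets the default gateway

auto eth1
iface eth1 inet static
        address 10.8.0.2/24
        # no "gateway" line here; instead, source-based policy routing:
        post-up ip route add 10.8.0.0/24 dev eth1 table 100
        post-up ip route add default via 10.8.0.1 dev eth1 table 100
        post-up ip rule add from 10.8.0.2/32 table 100
        pre-down ip rule del from 10.8.0.2/32 table 100
```

With rules like these, replies from the second address leave via its own gateway, while the system default route stays on the first interface.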
 
It is not the routing on the root server but the entries from Proxmox in the container.
[server1] ~ # ip route
default via x.y.z.18 dev eth0
10.8.0.254 dev eth1 scope link
x.y.z.18 dev eth0 scope link
[server1] ~ # ip route
default via 10.8.1.1 dev eth111
10.8.1.0/24 dev eth111 proto kernel scope link src 10.8.1.32
x.y.z.18 dev eth0 scope link
[server1] ~ # ip route
x.y.z.18 dev eth0 scope link
[server1] ~ # ip route
default via 10.8.1.1 dev eth111
10.8.1.0/24 dev eth111 proto kernel scope link src 10.8.1.32
x.y.z.18 dev eth0 scope link
[server1] ~ #



# cat /etc/debian_version
12.8
# pveversion
pve-manager/8.3.2/3e76eec21c4a14a7 (running kernel: 6.8.12-5-pve)




I can't quite reproduce it.
In the first example, you can see that the default route goes via the externally visible IP.

Then I deleted it and reinserted it.

Apart from inserting the default route first, I don't see any other way to set it.
But that's more of a workaround.

What I find completely incomprehensible is that I had the following as a route:
[server1] ~ # ip route
default via 10.8.1.1 dev eth111
10.8.1.0/24 dev eth111 proto kernel scope link src 10.8.1.32

but the x.y.z.18 address was still there.
When I deleted 10.8.1.32, there was no route left at all.
Only after deleting and inserting x.y.z.18 did the default route via x.y.z.18 dev eth0 scope link reappear.

Unfortunately, I cannot reproduce this last situation at the moment.

Of course, I could set the default route at the system level to suit my needs.
But the question is how this could be done from the Proxmox GUI.
 
OK, you're talking about LXC, it seems.
If you add multiple network interfaces, you should configure only one with a gateway. This is the one which will be the default route. All other routes have to be configured within the container.
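Inside the container, such an additional route could be added with an ifupdown hook, for example. A sketch only; the interface name and addresses (eth1, 10.8.x.x) are assumptions loosely based on the outputs above:

```text
# container's /etc/network/interfaces (sketch; names/addresses are examples)
auto eth1
iface eth1 inet static
        address 10.8.1.32/24
        # no "gateway" line; add only the specific routes this interface serves:
        post-up ip route add 10.8.0.0/16 via 10.8.1.1 dev eth1
```

The single interface that keeps its gateway line then provides the default route, and the internal networks are reached via the explicit route.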
 
This is a test server and I don't know how to reproduce these entries being created in the interfaces file.

What I did last in Proxmox:
Deleted the interface with 10.8.1.xx.
Deleted the interface with 2222:333:444:555::113 and reinserted it.

The fact is that this is now the second time this has happened.
In Proxmox, there are only eth0 (ID net0) and eth1 (ID net1).
Code:
[server3] ~ # cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet6 static
        address 2222:333:444:555::113/64
        gateway 2222:333:444:555::3

auto eth1
auto eth2
iface eth2 inet6 static
        address 2222:333:444:555::113/64
        gateway 2222:333:444:555::3

auto eth111
iface eth111 inet static
        address 10.8.1.32/24
        gateway 10.8.1.1

iface eth1 inet6 static
        address 2222:333:444:555::112/64
        gateway 2222:333:444:555::3

[server3] ~ #
[server3] ~ # ip -4 route
[server3] ~ # ip -6 route
2222:333:444:555::/64 dev eth0 proto kernel metric 256 pref medium
2222:333:444:555::/64 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
default via 2222:333:444:555::3 dev eth0 metric 1024 pref medium
[server3] ~ #
 
On the Proxmox host:

Code:
auto vmbr111
iface vmbr111 inet static
        address 10.8.1.1
        netmask 255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
 
It cannot be a coincidence that /etc/network/interfaces now shows network connections that no longer exist.

I changed the network structure and reconfigured several network devices via the GUI.
All vservers (Debian 12) have these strange entries that originate from deleted network connections.
 
You're posting details of server1 and server3, and you talk about VMs and then suddenly about containers. I do not understand what you're actually configuring.

To proceed further, would you like to describe your setup in a structured, more detailed way?
  • Which servers are there,
  • what is their network configuration (/etc/network/interfaces),
  • what is the configuration of the VM and/or LXC you are configuring?
 
Since I didn't want to buy any new expensive IPv4 addresses, or might not be able to get any at some point, I did configure...
Regardless of the strange /etc/network/interfaces file, which I simply cleaned up manually without knowing where these entries came from.
I implemented the following. In the meantime, I often changed, deleted, or added network settings with interfaces to ultimately arrive at the following configuration. This is the working status:


System Description – Edge Proxy with IPv4/IPv6 and WireGuard Egress




  1. Architecture Overview



The setup consists of a dual-stack edge server (“Edge Proxy”) and multiple IPv6-only backend systems reachable through it.
An internal WireGuard network provides IPv4 egress capability for IPv6-only clients via the Edge Proxy.


Roles:


  • Edge Proxy: Public reverse proxy, IPv4/IPv6 gateway, WireGuard server, NAT endpoint.
  • Clients: IPv6-only backend using a WireGuard client for IPv4 egress through the Edge Proxy.
  • Additional Backends: IPv6-only systems with direct SSH access via their global IPv6 addresses.



  2. Network and DNS Design



  • The Edge Proxy has public IPv4 and IPv6 addresses.
  • Backends (e.g., Clients) have global IPv6 addresses; IPv4 traffic is tunneled through WireGuard.
  • Public domain A/AAAA DNS records point to the Edge Proxy.
  • Administrative hostnames retain direct AAAA records for IPv6-based SSH access.
  • IPv6 routes are native; IPv4 routes from the backends are forwarded via WireGuard through the Edge Proxy.



  3. Reverse Proxy Layer (TLS Passthrough)



The Edge Proxy runs nginx with the stream module enabled to forward incoming TLS traffic on port 443 based on the SNI hostname to the correct IPv6 backend.
TLS is passed through transparently—no termination occurs at the proxy.


Key characteristics:


  • TLS certificates are managed on the backends.
  • Routing uses an SNI-to-upstream mapping table.
  • Port 80 may optionally be proxied to a single backend handling HTTP→HTTPS redirects.
  • Both IPv4 and IPv6 clients connect through the same Edge Proxy endpoint.
  • Stream logging includes client address, SNI, upstream target, and connection status.
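A configuration matching that description might look roughly like the following nginx stream sketch. The hostnames and backend addresses are placeholders, not values from the original setup:

```nginx
# /etc/nginx/nginx.conf (sketch; hostnames/addresses are examples)
stream {
    # SNI-to-upstream mapping table
    map $ssl_preread_server_name $backend {
        app1.example.com  [2001:db8::10]:443;
        app2.example.com  [2001:db8::20]:443;
        default           [2001:db8::10]:443;
    }

    # client address, SNI, upstream target, and connection status
    log_format sni '$remote_addr [$time_local] sni=$ssl_preread_server_name '
                   'upstream=$backend status=$status';

    server {
        listen 443;
        listen [::]:443;
        ssl_preread on;          # read the SNI without terminating TLS
        proxy_pass $backend;
        access_log /var/log/nginx/stream.log sni;
    }
}
```

Because only the ClientHello is inspected (ssl_preread), the TLS session itself is passed through untouched and the certificates stay on the backends.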



  4. WireGuard Egress Network



An internal IPv4 transit network (e.g., 10.10.10.0/24) connects the Edge Proxy and the backends.


  • Edge Proxy acts as WireGuard server with 10.10.10.1/24.
  • Each backend (e.g., Clients) has a fixed /32 tunnel address (e.g., 10.10.10.2/32).
  • The Edge Proxy performs NAT (MASQUERADE) on the public interface for all 10.10.10.0/24 traffic.
  • Backends route all outbound IPv4 traffic through the tunnel (AllowedIPs = 0.0.0.0/0).
  • IPv6 traffic remains native and is not tunneled.
  • UDP/51820 is open on the Edge Proxy for inbound WireGuard connections.

Result:


  • IPv6-only backends gain full outbound IPv4 access through the Edge Proxy.
  • All backend IPv4 traffic appears externally under the Edge Proxy’s public IPv4 address.
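Put together, the WireGuard side described above could be sketched as follows. The keys, the public interface name eth0, and edge.example.com are placeholders; IPv4 forwarding (net.ipv4.ip_forward=1) must also be enabled on the Edge Proxy.

```ini
# Edge Proxy: /etc/wireguard/wg0.conf (sketch; keys/names are placeholders)
[Interface]
Address = 10.10.10.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
PostUp   = iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE

[Peer]                       ; one peer per backend, tightly limited to its /32
PublicKey = <client-public-key>
AllowedIPs = 10.10.10.2/32

# Backend: /etc/wireguard/wg0.conf (sketch)
[Interface]
Address = 10.10.10.2/32
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = edge.example.com:51820
AllowedIPs = 0.0.0.0/0       ; route all outbound IPv4 through the tunnel
PersistentKeepalive = 25
```

On the backend, AllowedIPs = 0.0.0.0/0 makes wg-quick install the tunnel as the IPv4 default route, while IPv6 stays native.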



  5. Operational and Security Aspects



  • The Edge Proxy is the only publicly exposed node.
  • WireGuard AllowedIPs are tightly limited to one /32 per backend.
  • SSH access to backends is IPv6-only.
  • Intrusion prevention (e.g., Fail2ban) operates on the Edge Proxy.
  • Optional enhancement: enable the PROXY protocol to forward real client IPs to backends.



  6. Functional Summary



  1. Incoming HTTP/HTTPS requests reach the Edge Proxy.
  2. TLS connections are forwarded transparently based on the SNI hostname.
  3. Backends present their own TLS certificates and handle the requests directly.
  4. IPv6 communication is native; backend IPv4 communication is tunneled via WireGuard.
  5. The Edge Proxy provides NAT and forwarding, serving as a unified IPv4 gateway for all backends.
 