I know there are a lot of threads about this, but most of what I read didn't help me.
I have a Proxmox setup with one public IPv4 address and a /64 block for IPv6. In the past I used that IPv6 block for IPv6-only VMs within my cluster. This mostly worked using Tayga and DNS64 etc., but sometimes it didn't, so I decided to switch to a private IPv4 network and use the single public IPv4 address as a gateway directly on the host.
So I extended my existing vmbr0 with an IPv4 configuration on the host:
Code:
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eno1
iface eno1 inet static
    address <main-ipv4>
    netmask 255.255.255.255
    gateway <gateway>

iface eno1 inet6 static
    address <main-ipv6>::1
    netmask 128
    gateway fe80::1
    up sysctl -p

# The virtual bridge interface
auto vmbr0
iface vmbr0 inet static
    address 10.255.242.1
    netmask 24
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up   iptables -t nat -A POSTROUTING -s 10.255.242.0/24 -o eno1 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.255.242.0/24 -o eno1 -j MASQUERADE

iface vmbr0 inet6 static
    address <main-ipv6>::2
    netmask 64
    bridge_ports none
    bridge_stp off
    bridge_fd 0
So I have a private LAN behind vmbr0, and IPv4/IPv6 forwarding is enabled on the host.
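For completeness: forwarding is enabled via sysctl (hence the "up sysctl -p" line above). I assume the relevant keys in /etc/sysctl.conf look roughly like this:
Code:
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
This can be double-checked at runtime with "sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding".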
I've updated a VM to use the vmbr0 IPv4 settings like this:
Code:
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
    address 10.255.242.101
    netmask 255.255.255.0
    gateway 10.255.242.1

iface eth0 inet6 static
    address <main-ipv6>:3::1
    netmask 80
    gateway <main-ipv6>::2
and then rebooted ("a reboot is always a good idea"). After rebooting I can ping from the guest to the host via IPv4, but not the other way around. The guest can also still reach IPv6 addresses on the internet, but no IPv4 addresses. It's strange. The IP routes look good to me:
Code:
host>$ ip r s
default via <gateway> dev eno1 onlink
10.255.242.0/24 dev vmbr0 proto kernel scope link src 10.255.242.1
<gateway> dev eno1 proto kernel scope link src <main-ipv4>
Code:
guest>$ ip r s
default via 10.255.242.1 dev eth0 onlink
10.255.242.0/24 dev eth0 proto kernel scope link src 10.255.242.101
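Next I want to check on the host whether the guest is even reachable on the bridge. These are standard commands I'd try (10.255.242.101 is my guest's address):
Code:
host>$ ping -c 3 10.255.242.101
host>$ ip neigh show dev vmbr0    # is there a neighbour entry for the guest?
host>$ tcpdump -ni vmbr0 icmp     # do echo requests/replies show up on the bridge?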
I'm trying to understand why packets from the guest never reach an external system, or maybe why the host cannot ping the guest; perhaps it's the same problem. As I understand it, vmbr0 is a virtual network without an external link, but the POSTROUTING rule should masquerade any traffic from the guests out via the eno1 link.
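To verify whether the MASQUERADE rule actually matches, I would watch its counters and the traffic on eno1 while pinging an external IPv4 address from the guest (1.1.1.1 is just an example):
Code:
host>$  iptables -t nat -L POSTROUTING -n -v       # do the MASQUERADE packet counters increase?
host>$  tcpdump -ni eno1 'icmp and host 1.1.1.1'   # does the ping leave eno1 with <main-ipv4> as source?
guest>$ ping -c 3 1.1.1.1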
For testing, I turned off the firewall.