How to enable port forwarding for IPv6

kamzata
I'd like to enable port forwarding for IPv6. For IPv4 I simply appended this to my /etc/network/interfaces:
Code:
post-up echo 1 > /proc/sys/net/ipv4/ip_forward

I tried to append this for IPv6:
Code:
post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
but, once rebooted, the system says "connect: Network is unreachable" and I'm no longer able to ping Google (ping6 ipv6.google.com).

What's the correct way to enable it?
 
I guess that a guest cannot connect to the internet via ipv6 anymore?

Please post:
* the `/etc/network/interfaces` (of the PVE-node and of the guest (or whatever network configuration the guest uses))
* the output of `ip link`
* `ip addr`
* `ip route`
 

When I put this
Code:
post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
in the Proxmox host's /etc/network/interfaces file, the host itself lost its IPv6 connection, and if I run
Code:
ping6 ipv6.google.com
it doesn't work anymore and says
Code:
connect: Network is unreachable

/etc/network/interfaces [Proxmox Host]
Code:
auto lo
iface lo inet loopback

auto enp1s0f1
iface enp1s0f1 inet dhcp

auto enp1s0f0
iface enp1s0f0 inet manual

iface enp1s0f0 inet6 static
        address  2001:bc8:3cc6:101::
        netmask  64

iface enp1s0f0 inet6 static
        address  2001:bc8:3cc6:102::
        netmask  64

auto enp1s0f0:0
iface enp1s0f0:0 inet static
        address  62.210.132.20
        netmask  255.255.255.0
        gateway  62.210.132.1

auto enp1s0f0:1
iface enp1s0f0:1 inet static
        address  62.210.132.21
        netmask  255.255.255.0
        gateway  62.210.132.1

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

iface vmbr0 inet6 static
        address  fd12:3456:789a:1::
        netmask  64

auto vmbr1
iface vmbr1 inet static
        address  192.168.2.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

post-up echo 1 > /proc/sys/net/ipv4/ip_forward

/etc/network/interfaces [Container Guest]
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.1

iface eth0 inet6 static
        address fd12:3456:789b:1::
        netmask 64
# --- BEGIN PVE ---
        post-up ip route add fd12:3456:789a:1:: dev eth0
        post-up ip route add default via fd12:3456:789a:1:: dev eth0
        pre-down ip route del default via fd12:3456:789a:1:: dev eth0
        pre-down ip route del fd12:3456:789a:1:: dev eth0
# --- END PVE ---

ip link output:
Code:
root@mysrv01:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0c:c4:7a:83:1a:da brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0c:c4:7a:83:1a:db brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:aa:ff:27:f4:8c brd ff:ff:ff:ff:ff:ff
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:07:b2:d2:56:1b brd ff:ff:ff:ff:ff:ff
7: veth100i0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:aa:ff:27:f4:8c brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth102i0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:28:87:af:b4:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth120i0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:0b:7d:da:71:6d brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: veth200i0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether fe:07:b2:d2:56:1b brd ff:ff:ff:ff:ff:ff link-netnsid 3
15: veth202i0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether fe:5a:88:87:e9:ac brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: veth210i0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether fe:d8:d9:10:ef:35 brd ff:ff:ff:ff:ff:ff link-netnsid 5

ip addr output:
Code:
root@mysrv01:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:83:1a:da brd ff:ff:ff:ff:ff:ff
    inet 62.210.132.20/24 brd 62.210.132.255 scope global enp1s0f0:0
       valid_lft forever preferred_lft forever
    inet 62.210.132.21/24 brd 62.210.132.255 scope global secondary enp1s0f0:1
       valid_lft forever preferred_lft forever
    inet6 2001:bc8:3cc6:102::/64 scope global
       valid_lft forever preferred_lft forever
    inet6 2001:bc8:3cc6:101::/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe83:1ada/64 scope link
       valid_lft forever preferred_lft forever
3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:83:1a:db brd ff:ff:ff:ff:ff:ff
    inet 10.91.154.16/25 brd 10.91.154.127 scope global enp1s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe83:1adb/64 scope link
       valid_lft forever preferred_lft forever
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:aa:ff:27:f4:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fd12:3456:789a:1::/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::b0ca:b0ff:feba:262/64 scope link
       valid_lft forever preferred_lft forever
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:07:b2:d2:56:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::28ec:f1ff:fecd:15ef/64 scope link
       valid_lft forever preferred_lft forever
7: veth100i0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:aa:ff:27:f4:8c brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth102i0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:28:87:af:b4:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth120i0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:0b:7d:da:71:6d brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: veth200i0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether fe:07:b2:d2:56:1b brd ff:ff:ff:ff:ff:ff link-netnsid 3
15: veth202i0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether fe:5a:88:87:e9:ac brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: veth210i0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether fe:d8:d9:10:ef:35 brd ff:ff:ff:ff:ff:ff link-netnsid 5

ip -6 route output:
Code:
root@mysrv01:~# ip -6 route
2001:bc8:3cc6:101::/64 dev enp1s0f0 proto kernel metric 256 pref medium
2001:bc8:3cc6:102::/64 dev enp1s0f0 proto kernel metric 256 pref medium
fd12:3456:789a:1::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev enp1s0f1 proto kernel metric 256 pref medium
fe80::/64 dev enp1s0f0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr1 proto kernel metric 256 pref medium
default via fe80::2c8:8bff:fee2:6c45 dev enp1s0f0 proto ra metric 1024 expires 1653sec hoplimit 64 pref medium

ping6 ipv6.google.com output:
Code:
PING ipv6.google.com(par10s33-in-x0e.1e100.net (2a00:1450:4007:816::200e)) 56 data bytes
64 bytes from par10s33-in-x0e.1e100.net (2a00:1450:4007:816::200e): icmp_seq=1 ttl=58 time=1.32 ms
64 bytes from par10s33-in-x0e.1e100.net (2a00:1450:4007:816::200e): icmp_seq=2 ttl=58 time=1.37 ms
64 bytes from par10s33-in-x0e.1e100.net (2a00:1450:4007:816::200e): icmp_seq=3 ttl=58 time=1.36 ms
 
When I put this
Code:
post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
in the Proxmox host's /etc/network/interfaces file, the host itself lost its IPv6 connection

This is most likely due to ipv6 forwarding disabling SLAAC (IPv6 address autoconfiguration) and accept_ra (router advertisements) - see [0].
Set the accept_ra sysctl to 2 - or configure a static gateway.
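For example, a minimal sketch (untested - per-interface sysctls, adjust the interface name to your uplink):
Code:
iface enp1s0f0 inet6 static
        address  2001:bc8:3cc6:101::
        netmask  64
        # keep accepting router advertisements on the uplink even with forwarding enabled
        pre-up /sbin/sysctl -w net.ipv6.conf.enp1s0f0.accept_ra=2
        post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding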

iface enp1s0f0 inet6 static
address 2001:bc8:3cc6:101::
netmask 64

iface enp1s0f0 inet6 static
address 2001:bc8:3cc6:102::
netmask 64
Why do you need 2 different networks on the same interface? (just curious)
While it seems to work - you have configured 2001:bc8:3cc6:102:: (the ::0 address) as the interface address. In my experience you leave ::0 unconfigured and use some higher numbers in the lower quads - though I'm not sure that's required.

Hope this helps

[0] https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
 
This is most likely due to ipv6 forwarding disabling SLAAC (IPv6 address autoconfiguration) and accept_ra (router advertisements) - see [0].
Set the accept_ra sysctl to 2 - or configure a static gateway.

I've already tried to set it in /etc/network/interfaces:
Code:
post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
post-up echo 2 > /proc/sys/net/ipv6/conf/all/accept_ra

but it leads to the same result:
Code:
connect: Network is unreachable



Why do you need 2 different networks on the same interface? (just curious)
Because I need to split my internal network into 2 for development.

While it seems to work - you have configured 2001:bc8:3cc6:102:: (the ::0 address) as the interface address. In my experience you leave ::0 unconfigured and use some higher numbers in the lower quads - though I'm not sure that's required.
As far as I know, you cannot set interface aliases with IPv6 - they simply don't work.
 
Set a static gateway?
(At least it takes one moving part out of the equation)
 
Ah - try the one you get via RA?
(Start the machine with forwarding disabled, check the output of `ip -6 r` and take the default route entry.)
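e.g. with the default route from your `ip -6 route` output (via fe80::2c8:8bff:fee2:6c45 on enp1s0f0), the static configuration would look roughly like:
Code:
iface enp1s0f0 inet6 static
        address  2001:bc8:3cc6:101::
        netmask  64
        gateway  fe80::2c8:8bff:fee2:6c45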
 
Good tip! I set the gateway and added
Code:
post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
to /etc/network/interfaces.

Now I'm able to ping Google from the Proxmox host but NOT from the container itself (trying to ping results in a name resolution timeout).
I used this ip6tables rule:
Code:
-A POSTROUTING -s fd12:3456:789a:1:: -o enp1s0f0 -j SNAT --to-source 2001:bc8:3cc6:101::

fd12:3456:789a:1:: is the address of vmbr0
enp1s0f0 is the external interface
2001:bc8:3cc6:101:: is the IP of enp1s0f0
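If the whole container subnet is supposed to be translated, I guess the source match also needs the prefix length - something like:
Code:
-A POSTROUTING -s fd12:3456:789a:1::/64 -o enp1s0f0 -j SNAT --to-source 2001:bc8:3cc6:101::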
 
nice - one step further!

Please don't NAT with IPv6 - you should have gotten at least 2x /64 ( 2001:bc8:3cc6:101::/64, 2001:bc8:3cc6:102::/64), that should be more than enough addresses for a few galaxies ...
 

What do you mean? What IPv6 addresses should I use for the containers? So there shouldn't be any need for the vmbr0 interface or the POSTROUTING rule? Should I use some other IPv6 address for the container? Which address? Be patient, it's my first IPv6 setup.
 
Depending on your provider (check with them) - I'm rather certain that you got at least both /64s - meaning all addresses from
2001:bc8:3cc6:101::0 - 2001:bc8:3cc6:102:ffff:ffff:ffff:ffff belong to you and should be usable for your containers.

* Depending on how they set it up - you probably can just assign one of those addresses to each container.

I would try to:
* configure the IPv6 address (e.g. 2001:bc8:3cc6:101::1/64) on the PVE host on vmbr0
* add the ethernet interface (enp1s0f0) to that bridge as a bridge_port
* configure another address from that /64 inside the container

(That way you should not even need to enable IP forwarding, since it all happens on the bridge at layer 2.)
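For the host side that could look roughly like this (untested sketch - the inet6 stanza replaces the current fd12:3456:789a:1:: one on vmbr0, the gateway moves from enp1s0f0 to the bridge, and `bridge-ports none` becomes `bridge-ports enp1s0f0` in the existing vmbr0 stanza):
Code:
iface vmbr0 inet6 static
        address  2001:bc8:3cc6:101::1
        netmask  64
        gateway  fe80::2c8:8bff:fee2:6c45
Inside the container you would then use another address from the same /64 (e.g. 2001:bc8:3cc6:101::100/64).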

Keep in mind that I'm only talking about the IPv6 setup - your IPv4 setup should keep working (since I suppose all of it happens via vmbr1), but you need to test it.

Hope this helps!
 

* add the ethernet interface (enp1s0f0) to that bridge as a bridge_port
If I set a bridge_port on vmbr0 I will lose IPv4 connectivity, won't I?
 
First - make sure you have access to the host even if you lose connectivity! - That's always a prerequisite when changing the network configuration! Either you sit in front of it, or you have some kind of IPMI/iDrac/iLO/KVM/externally reachable serial console which gives you access.
So if anything goes wrong you can repair it.

What should work:
* configure both ipv4 and ipv6 on vmbr0
* put enp1s0f0 as bridge_port in vmbr0
* change your NAT rules for ipv4 to use vmbr0 as the outgoing interface (see the sketch below)
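For the last point - assuming your current ipv4 NAT rule uses `-o enp1s0f0`, it would become something like:
Code:
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o vmbr0 -j MASQUERADE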
 
Unfortunately, it doesn't boot. Removing the bridge_port from vmbr0 makes it boot again. Could we write a how-to in the Proxmox docs, like the one for IPv4, once we get it working?
 
Please post the output of:
* `cat /etc/network/interfaces`
* `ip addr`
* `ip route`

* after adding enp1s0f0 as bridge port
* also (if you have access to the console) try to run `ifdown -a ; sleep 1; ifup -a` (from the console - not over ssh)
 
I reverted them again. Anyway, this is the current situation:

cat /etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto enp1s0f1
iface enp1s0f1 inet dhcp

auto enp1s0f0
iface enp1s0f0 inet manual

iface enp1s0f0 inet6 static
        address  2001:bc8:3cc6:101::
        netmask  64
        gateway  fe80::2c8:8bff:fee2:6c45

iface enp1s0f0 inet6 static
        address  2001:bc8:3cc6:102::
        netmask  64

auto enp1s0f0:0
iface enp1s0f0:0 inet static
        address  62.210.132.20
        netmask  255.255.255.0
        gateway  62.210.132.1

auto enp1s0f0:1
iface enp1s0f0:1 inet static
        address  62.210.132.21
        netmask  255.255.255.0
        gateway  62.210.132.1

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

iface vmbr0 inet6 static
        address  fd12:3456:789a:1::
        netmask  64

auto vmbr1
iface vmbr1 inet static
        address  192.168.2.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0

    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    #post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
    #post-up echo 1 > /proc/sys/net/ipv6/conf/all/accept_ra

ip addr
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:83:1a:da brd ff:ff:ff:ff:ff:ff
    inet 62.210.132.20/24 brd 62.210.132.255 scope global enp1s0f0:0
       valid_lft forever preferred_lft forever
    inet 62.210.132.21/24 brd 62.210.132.255 scope global secondary enp1s0f0:1
       valid_lft forever preferred_lft forever
    inet6 2001:bc8:3cc6:102::/64 scope global
       valid_lft forever preferred_lft forever
    inet6 2001:bc8:3cc6:101::/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe83:1ada/64 scope link
       valid_lft forever preferred_lft forever
3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:83:1a:db brd ff:ff:ff:ff:ff:ff
    inet 10.91.154.16/25 brd 10.91.154.127 scope global enp1s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe83:1adb/64 scope link
       valid_lft forever preferred_lft forever
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:a4:0b:5c:42:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fd12:3456:789a:1::/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::98:5bff:fe5f:fd57/64 scope link
       valid_lft forever preferred_lft forever
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:4b:7e:88:87:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::388d:66ff:fea8:f99e/64 scope link
       valid_lft forever preferred_lft forever
7: veth100i0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:a4:0b:5c:42:6d brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth102i0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:0c:99:4a:8e:9f brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth120i0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:89:c0:70:32:fe brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: veth200i0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether fe:4b:7e:88:87:17 brd ff:ff:ff:ff:ff:ff link-netnsid 3
15: veth202i0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether fe:81:2b:31:71:9c brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: veth210i0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether fe:81:13:88:bb:a3 brd ff:ff:ff:ff:ff:ff link-netnsid 5

ip route
Code:
root@mysrv01:~# ip route
default via 62.210.132.1 dev enp1s0f0 onlink
10.88.0.0/13 via 10.91.154.1 dev enp1s0f1
10.91.154.0/25 dev enp1s0f1 proto kernel scope link src 10.91.154.16
62.210.132.0/24 dev enp1s0f0 proto kernel scope link src 62.210.132.20
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.1
192.168.2.0/24 dev vmbr1 proto kernel scope link src 192.168.2.1
root@mysrv01:~# ip -6 route
2001:bc8:3cc6:101::/64 dev enp1s0f0 proto kernel metric 256 pref medium
2001:bc8:3cc6:102::/64 dev enp1s0f0 proto kernel metric 256 pref medium
fd12:3456:789a:1::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev enp1s0f1 proto kernel metric 256 pref medium
fe80::/64 dev enp1s0f0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr1 proto kernel metric 256 pref medium
default via fe80::2c8:8bff:fee2:6c45 dev enp1s0f0 metric 1024 pref medium

After adding enp1s0f0 as a bridge port on vmbr0 I'm no longer able to boot.
 
* Not booting would be curious - not having access via ssh I could understand.
* I would need the output after you made your changes in order to help - not after reverting them back
* if you add an interface as bridge-port make sure that this interface has no ip configured on it (configure it on the bridge) - also it should not have any aliases (create the aliases on the bridge)
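e.g. with your addresses that would look roughly like this (untested sketch - the second public IP goes onto the bridge instead of the enp1s0f0:1 alias):
Code:
auto enp1s0f0
iface enp1s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address  62.210.132.20
        netmask  255.255.255.0
        gateway  62.210.132.1
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0
        # former enp1s0f0:1 alias, now directly on the bridge
        post-up ip addr add 62.210.132.21/24 dev vmbr0
        pre-down ip addr del 62.210.132.21/24 dev vmbr0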
 
Sorry, you are perfectly right.

This is the edited host interfaces file:
Code:
auto lo
iface lo inet loopback
auto enp1s0f1
iface enp1s0f1 inet dhcp
auto enp1s0f0
iface enp1s0f0 inet manual
auto vmbr0
iface vmbr0 inet static
        address  62.210.132.20
        netmask  255.255.255.0
        gateway  62.210.132.1
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0
iface vmbr0 inet6 static
        address  2001:bc8:3cc6:101::
        netmask  64

    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    pre-up /sbin/sysctl -w net.ipv6.conf.enp1s0f0.accept_ra=2
    pre-up /sbin/sysctl -w net.ipv6.conf.enp1s0f0.forwarding=1
From the host, I can ping Google over both IPv4 and IPv6 - they're both working perfectly. What about the container configuration for IPv4 (what gateway should I set now?) and IPv6 (without using NAT)?
 
Hi,
That's good news regarding the host being reachable via ipv4 and ipv6!

For the guest - I think a dual approach would work best:
For IPv6:
* add an interface to the container, which is connected to vmbr0
* configure an ip from 2001:bc8:3cc6:101::/64
* set the same gateway as you have on the host itself
For IPv4:
* add a second bridge (vmbr1) to the node, without any bridge_ports
* configure an rfc1918 address on it, e.g. 192.168.1.1/24
* add an interface to the container connected to vmbr1
* configure an ip from 192.168.1.0/24 on the container
* set 192.168.1.1 as gateway
* add fitting MASQUERADE rules for ipv4 on the node (see the sketch below)
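A rough sketch of the container's /etc/network/interfaces (assuming its net0 is attached to vmbr0 and net1 to vmbr1 - the gateway is the one from the host, the container addresses are just examples):
Code:
auto eth0
iface eth0 inet6 static
        address 2001:bc8:3cc6:101::100
        netmask 64
        gateway fe80::2c8:8bff:fee2:6c45

auto eth1
iface eth1 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.1
plus a MASQUERADE rule on the node for the ipv4 part, e.g. with `-s 192.168.1.0/24 -o vmbr0`.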

Test ipv6 first - once this works try ipv4

Hope this helps!
 
