[SOLVED] Proxmox Root Server - Networking

r0g

Well-Known Member
May 31, 2016
Hi,

I'm running a dedicated root server from Hetzner and I want to run Proxmox to host virtual machines.

At the moment Proxmox is installed successfully and running. My problem is the network configuration.
I have followed a lot of guides and manuals - for example:
this, this, this and this... Not one of them is working for me, and I have already reinstalled my server countless times.

I'm running:
Debian 8 Minimal
Proxmox 4.4-13

My current network configuration:
Proxmox Node /etc/network/interfaces

Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface lo inet6 loopback

auto eth0
iface eth0 inet static
        address  176.xx.xx.219
        netmask  255.255.255.224
        pointopoint 176.xx.xx.193
        gateway  176.xx.xx.193
        up route add -net 176.xx.xx.192 netmask 255.255.255.224 gw 176.xx.xx.193 dev eth0
# route 176.xx.xx.192/27 via 176.xx.xx.193

iface eth0 inet6 static
        address  2a01:xx:xx:708f::2
        netmask  64
        gateway  fe80::1

auto vmbr0
iface vmbr0 inet static
        address  176.xx.xx.219
        netmask  255.255.255.255
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        pre-up brctl addbr vmbr0
        up ip route add 176.xx.xx.158/32 dev vmbr0

iface vmbr0 inet6 static
        address  2a01:XX:XX:708f::2
        netmask  64

auto vmbr1
iface vmbr1 inet static
        address 10.20.30.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up iptables -t nat -A POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE

The following settings have been made on the node:

Code:
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

I also edited "/etc/sysctl.conf" and "/etc/sysctl.d/99-hetzner.conf" to make this configuration persistent.
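For reference, the persisted lines look roughly like this (a sketch based on the two sysctl commands above; the same lines would also work in /etc/sysctl.conf):

Code:
# /etc/sysctl.d/99-hetzner.conf - enable routing on the node
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1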

Debian VM on Proxmox Node /etc/network/interfaces:

Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet static
        address 176.xx.xx.158
        netmask 255.255.255.255
        pointopoint 176.xx.xx.219
        gateway 176.xx.xx.219
        dns-nameservers 213.133.100.100 213.133.98.98

With this configuration I'm able to ping my VM from the Proxmox node:
Code:
root@PROXMOX ~ # ping 176.xx.xx.158
PING 176.xx.xx.158 (176.xx.xx.158) 56(84) bytes of data.
64 bytes from 176.xx.xx.158: icmp_seq=1 ttl=64 time=0.154 ms
64 bytes from 176.xx.xx.158: icmp_seq=2 ttl=64 time=0.183 ms
64 bytes from 176.xx.xx.158: icmp_seq=3 ttl=64 time=0.205 ms
64 bytes from 176.xx.xx.158: icmp_seq=4 ttl=64 time=0.220 ms
64 bytes from 176.xx.xx.158: icmp_seq=5 ttl=64 time=0.235 ms
64 bytes from 176.xx.xx.158: icmp_seq=6 ttl=64 time=0.198 ms

And I'm able to ping my Proxmox node from the VM:
Code:
root@VM01 ~ # ping 176.xx.xx.219
PING 176.xx.xx.219 (176.xx.xx.219) 56(84) bytes of data.
64 bytes from 176.xx.xx.219: icmp_seq=1 ttl=64 time=0.133 ms
64 bytes from 176.xx.xx.219: icmp_seq=2 ttl=64 time=0.127 ms
64 bytes from 176.xx.xx.219: icmp_seq=3 ttl=64 time=0.143 ms
64 bytes from 176.xx.xx.219: icmp_seq=4 ttl=64 time=0.167 ms
64 bytes from 176.xx.xx.219: icmp_seq=5 ttl=64 time=0.166 ms
64 bytes from 176.xx.xx.219: icmp_seq=6 ttl=64 time=0.144 ms

But I'm not able to ping the internet from the VM:

Code:
root@VM01 ~ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

Can anyone help and tell me what I'm doing wrong?
 
I just noticed that IPv4 routing works when bridging eth0:

Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface lo inet6 loopback

auto eth0
iface eth0 inet static
       address  176.xx.xx.219
       netmask  255.255.255.224
       pointopoint 176.xx.xx.193
       gateway  176.xx.xx.193
       up route add -net 176.xx.xx.192 netmask 255.255.255.224 gw 176.xx.xx.193 dev eth0
# route 176.xx.xx.192/27 via 176.xx.xx.193

iface eth0 inet6 static
       address  2a01:xx:xx:708f::2
       netmask  64
       gateway  fe80::1

auto vmbr0
iface vmbr0 inet static
       address  176.xx.xx.219
       netmask  255.255.255.255
       bridge_ports eth0
       bridge_stp off
       bridge_fd 0
       bridge_maxwait 0
       pre-up brctl addbr vmbr0
       up ip route add 176.xx.xx.158/32 dev vmbr0

iface vmbr0 inet6 static
       address  2a01:XX:XX:708f::2
       netmask  64

auto vmbr1
iface vmbr1 inet static
       address 10.20.30.1
       netmask 255.255.255.0
       bridge_ports none
       bridge_stp off
       bridge_fd 0
       post-up iptables -t nat -A POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE
       post-down iptables -t nat -D POSTROUTING -s '10.20.30.0/24' -o eth0 -j MASQUERADE

With this configuration my VM has access to the internet and I can also reach it from outside.

The problem is that IPv6 is not working with this configuration. Ifconfig gives:
Code:
eth0      Link encap:Ethernet  HWaddr 14:da:e9:ed:e1:00
          inet addr:176.xx.xx.219  Bcast:176.xx.xx.223  Mask:255.255.255.224
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1952 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1729 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:171635 (167.6 KiB)  TX bytes:163106 (159.2 KiB)

Turning off the bridge:

Code:
eth0      Link encap:Ethernet  HWaddr 14:da:e9:ed:e1:00
          inet addr:176.xx.xx.219  Bcast:176.xx.xx.223  Mask:255.255.255.224
          inet6 addr: 2a01:xx:xx:708f::2/64 Scope:Global
          inet6 addr: fe80::16da:e9ff:feed:e100/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1952 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1729 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:171635 (167.6 KiB)  TX bytes:163106 (159.2 KiB)

Does anyone have an idea?
 
This routed configuration is now working for me:

Code:
# Loopback device:
auto lo
iface lo inet loopback

# device: eth0
auto eth0
iface eth0 inet static
  address 176.x.x.219
  netmask 255.255.255.255
  pointopoint 176.x.x.193
  gateway 176.x.x.193

iface eth0 inet6 static
  address 2a01:x:x:708f::2
  netmask 128
  gateway fe80::1
  up sysctl -p

# for single IPs
auto vmbr0
iface vmbr0 inet static
  address 176.x.x.219
  netmask 255.255.255.255
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  up ip route add 176.x.x.158/32 dev vmbr0

iface vmbr0 inet6 static
  address 2a01:x:x:708f::2
  netmask 64

I can ping my node via IPv4/IPv6 while the virtual machine has internet access. The VM configuration looks like this:
Code:
# Loopback device:
auto lo
iface lo inet loopback

# device: eth0
auto eth0
iface eth0 inet static
   address 176.x.x.158
   netmask 255.255.255.255
   pointopoint 176.x.x.219
   gateway 176.x.x.219

iface eth0 inet6 static
   address 2a01:x:x:708f::3
   netmask 64
   gateway 2a01:x:x:708f::2

But IPv6 isn't working on my VM.
Code:
root@host01:~# ping6 ipv6.google.com
PING ipv6.google.com(fra15s28-in-x0e.1e100.net) 56 data bytes

Traceroute gives:
Code:
root@host01:~# traceroute6 ipv6.google.com
traceroute to ipv6.google.com (2a00:1450:4001:80b::200e), 30 hops max, 80 byte packets
 1  MYNODE.co (2a01:x:x:708f::2)  0.173 ms  0.157 ms  0.145 ms
So it seems like IPv6 forwarding isn't working on my node, although forwarding is enabled, as you can see in my first posting. Any further ideas?
 
Did you activate IPv6 forwarding?

sysctl net.ipv6.conf.all.forwarding

This should return 1.
 
Yes, IPv6 forwarding is enabled on the node:

Code:
root@node01~ # sysctl net.ipv6.conf.all.forwarding
net.ipv6.conf.all.forwarding = 1
 
There's a big difference between forwarding in IPv4 and IPv6 when you're on a root server: all of the providers I know of assume that you have one machine.
This means the gateway will use neighbor discovery to find out which machine each IPv6 address you're using comes from. Neighbor discovery, however, uses link-local multicast addresses to find the machine. Link-local packets are not forwarded by routers, so in a routed setup they never reach your guests and the guests can't respond.
What you need is to proxy the NDP requests. If you have a known, limited number of guests you can do this manually with `ip neighbor` (see the ip-neighbor(8) manpage and the `net.ipv6.conf.$iface.proxy_ndp` sysctl); a sketch follows below.
If you don't want to define all neighbors manually and need more flexibility, you can take a look at ndppd (the neighbor discovery protocol proxy daemon).
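As an illustration of the manual variant (the guest address below is just a placeholder - use whatever address your guest actually has):

Code:
# allow the host to answer neighbor solicitations on the uplink interface
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
# answer NDP requests for the guest's address on eth0
ip -6 neigh add proxy 2a01:x:x:708f::3 dev eth0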
 
Thanks for your answer, wbumiller!

I think I have to learn more about IPv6... So I just activated the NDP proxy via:

Code:
sysctl -w net.ipv6.conf.all.proxy_ndp=1
sysctl -w net.ipv6.conf.default.proxy_ndp=1

ip -6 neighbour shows:
Code:
root@node01~ # ip -6 neighbour
fe80::1 dev eth0 lladdr 00:XX:XX:0d:2c:51 router STALE
2a01:xx:xx:708f::4 dev vmbr0 lladdr 56:XX:XX:c5:92:db STALE
fe80::5421:42ff:fec5:92db dev vmbr0 lladdr 56:XX:XX:c5:92:db STALE
So this entry, "2a01:xx:xx:708f::4 dev vmbr0 lladdr 56:XX:XX:c5:92:db STALE", is already the right one for this setup, isn't it? It was generated automatically.

At the moment my VM has the IP 2a01:xx:xx:708f::4.

It's not working - so is there anything left to do?
 
This shows you the seen neighbors, not what you're proxying ;-)
Try to use the `ip -6 neighbor proxy` subcommand; something like this should work:
Code:
# ip neigh add proxy 2a01::your:container dev eth0
 
Nice, thank you! That worked for me!
For the German guys: this article helped too.

Edit: wbumiller, the correct command was
Code:
 ip -6 neighbour show proxy
:)
 
One last thing: How is it possible to make this active after a reboot?
 
The `... show ...` commands only print information, they don't modify it, so that's not what made it work ;-) (but yes, to see the active proxy entries, that was the right command). You can still take a look at ndppd for a more flexible neighbor-proxying solution.
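If you go the ndppd route, a minimal ndppd.conf sketch for a setup like yours could look like this (the /64 prefix here is an assumption - adjust it to your actual subnet):

Code:
# /etc/ndppd.conf - answer NDP requests on eth0 for the whole guest prefix
proxy eth0 {
    rule 2a01:x:x:708f::/64 {
        static
    }
}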

As for running them at boot time: you do that like any other command (e.g., write a systemd .service file for it).
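As a rough sketch (the guest address is again just a placeholder), such a unit could look like this:

Code:
# /etc/systemd/system/ndp-proxy.service - enable with: systemctl enable ndp-proxy
[Unit]
Description=Add static NDP proxy entry for the guest
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/ip -6 neigh add proxy 2a01:x:x:708f::3 dev eth0

[Install]
WantedBy=multi-user.target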

One note about the linked article: there actually is NAT available for IPv6 (at least when using nftables).
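For completeness, a rough sketch of what that could look like with nftables (the ULA source prefix is just an example, and this assumes a reasonably recent kernel):

Code:
# masquerade IPv6 traffic from a private guest subnet out via eth0
nft add table ip6 nat
nft add chain ip6 nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip6 nat postrouting ip6 saddr fd00:10:20:30::/64 oifname eth0 masquerade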
 
