[TUTORIAL] OVH Dedicated Server Network Configuration Explained

grprpmcof

New Member
Aug 24, 2020
Greetings. I've been setting up my OVH dedicated server with failover IPs (really just extra IPs). I tried various recommendations from forums and from OVH, but usually something was missing. Maybe this should go in the networking forum.

The following is a summary of what I did to get Proxmox working with IPv4 and IPv6 on the host and the VMs on an OVH dedicated server.

There is no NAT involved: just a direct public Internet IP address on each VM and on the host.

It's not a complete tutorial, but it should fill in the missing pieces to get this working specifically at OVH on a dedicated server (VPSes, other companies, and other platforms are a little different). This is mostly a quick brain dump while it's fresh in my mind.

Some initial notes and preparation:

- OVH dedicated server IPs may use gateways outside their subnets. This makes things hard on some Linux distributions, since it won't just work or follow any easily found documentation.

- I added a block of IPv4 failover IP addresses so I could set up a virtual MAC on each VM. According to OVH, a public IPv6 will not work without a virtual MAC, and the only place to add a virtual MAC is on an extra IP. So, yeah, time to buy some IPs.

- I initially tried setting up NAT in various scenarios, and it was a miserable experience trying to get it all to work. I decided to simplify: assign an IP to each VM, and later use an internal virtual network for VM-to-VM communication. I'll just set a default firewall on each VM to block everything from the public Internet except what is needed.

- I installed Proxmox from the ISO using IPMI. OVH has a template that may work as well, but I had messed up my partitioning so badly during reinstalling and testing that the OVH installer no longer worked for me.

- I'm using Ubuntu 20.04 on the guest VMs, with netplan network configuration.
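To make the out-of-subnet gateway convention concrete, here is a small shell sketch that derives both gateways from the host's own addresses. The addresses are the placeholders used throughout this post; the conventions (last octet replaced by 254 for IPv4, the tail of the address replaced by ff groups for IPv6) are what worked on my server:

```shell
# Placeholder addresses from this post, not real ones.
ipv4="1.2.3.4"
gw4="${ipv4%.*}.254"                 # drop the last octet, append .254
echo "$gw4"                          # 1.2.3.254

prefix="1111:2222:3333:6666"         # the first four groups of the /64
# The IPv6 gateway sits OUTSIDE the /64: the last two hex digits of the
# prefix become ff, followed by ff for each remaining group.
gw6="${prefix%??}ff:ff:ff:ff:ff"
echo "$gw6"                          # 1111:2222:3333:66ff:ff:ff:ff:ff
```

Because both gateways are outside the configured subnets, the routes to them later need to be marked on-link (or declared in /etc/network/interfaces, which handles it for you).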

Step 1:
Get the host working on IPv4. This part is pretty easy, since DHCP seems to pick it up automatically.

Step 2:
Get the host working on IPv6. Some of this I configured in the GUI, but here's the working file:

/etc/network/interfaces

Code:
auto lo
iface lo inet loopback
iface lo inet6 loopback

iface eno1 inet manual
iface eno1 inet6 manual

iface enp0s20f0u8u3c2 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 1.2.3.4/24 # your public IPv4 address
        gateway 1.2.3.254 # the OVH gateway ends in .254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

iface vmbr0 inet6 static
        address 1111:2222:3333:6666::1/64 # your public IPv6 address and subnet
        gateway 1111:2222:3333:66ff:ff:ff:ff:ff # the OVH gateway lies outside the subnet; ff:ff:ff:ff:ff replaces the tail of the address
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.1.1.1/24 # optional internal network, not routed
        bridge-ports none
        bridge-stp off
        bridge-fd 0

That should work. Test with ping over both IPv4 and IPv6.

Step 3:
Get a VM working.
I'm using Ubuntu 20.04 with netplan-based networking.
Make sure the IPv4 and IPv6 addresses are assigned in the OVH control panel, and assign the virtual MAC from OVH to your VM.
You'll probably need to use the host console to the VM the first time to get the networking to function.

Edit this file as needed to make it work for your setup.
/etc/netplan/00-installer-config.yaml
Code:
# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
    ens18:
      addresses:
      - 3.4.5.6/32 # an OVH public failover IP assigned to your server, with the vMAC set up
      - 1111:2222:3333:6666::2/64 # an OVH IPv6 address from your server's allocation
      nameservers:
        addresses: [ "127.0.0.1" ]
      optional: true
      routes:
      - to: 0.00.0/0
        via: 1.2.3.254 # OVH IPv4 gateway, BASED ON THE GATEWAY OF THE MAIN SERVER IP
        on-link: true
      - to: "::/0"
        via: "1111:2222:3333:66ff:ff:ff:ff:ff" # OVH IPv6 gateway, outside the IPv6 subnet
        on-link: true
    ens19:
      addresses:
      - 10.1.1.2/24 # optional internal network

Apply the netplan config and see if it all works. Your configuration may vary, but the core pieces are there, at least as far as the hard-to-figure-out OVH dedicated subnetting goes.
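Before applying, it's worth sanity-checking that both default routes carry on-link: true, since that is the piece most examples omit. A minimal sketch (placeholder addresses, written to a temp file rather than /etc/netplan; on the real VM you'd run netplan try and then netplan apply instead):

```shell
# Write a stripped-down version of the config above to a temp file and
# verify the two on-link routes are present before touching /etc/netplan.
f=$(mktemp)
cat > "$f" <<'EOF'
network:
  version: 2
  ethernets:
    ens18:
      addresses: [3.4.5.6/32, "1111:2222:3333:6666::2/64"]
      routes:
      - {to: 0.0.0.0/0, via: 1.2.3.254, on-link: true}
      - {to: "::/0", via: "1111:2222:3333:66ff:ff:ff:ff:ff", on-link: true}
EOF
# Without on-link, the kernel rejects the out-of-subnet OVH gateways.
count=$(grep -c 'on-link: true' "$f")
echo "$count"   # 2
rm -f "$f"
```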
 

Dominic

Proxmox Staff Member
Mar 18, 2019
Thank you for writing down your experiences! If you want to, you can mark your post as Tutorial by editing your post and setting a prefix :)
 
Oct 25, 2018
Hello,

For those who do not want to use failover IPs: I recently installed the PVE6 OVH template on a RISE-1.

Here is the /etc/network/interfaces right after OS installation:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
Surprisingly, the public IP address 1.2.3.4 of the server was attached to vmbr0:
Code:
[14:26:26|Wed Oct 13][OVH][root@prox-01] ~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a4:bf:01:2d:ad:68 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a4:bf:01:2d:ad:69 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:bf:01:2d:ad:68 brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/24 brd 1.2.3.255 scope global dynamic vmbr0
       valid_lft 79279sec preferred_lft 79279sec
    inet6 fe80::a6bf:1ff:fe2d:ad68/64 scope link
So if you need to create a private subnet (say 10.0.0.0/24) for your containers/VMs and masquerade their outgoing traffic through your public IP 1.2.3.4, you need to create another Linux bridge, vmbr1, and masquerade it through vmbr0, not eno1:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.0.0.1/24
        gateway 1.2.3.254
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE
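One caveat: the post-up echo only enables forwarding when the bridge comes up, and post-up directives sometimes don't fire. A more persistent alternative is a sysctl drop-in; sketched here against a temp file, with the real target path being an assumption on my part:

```shell
# Persist IP forwarding via sysctl instead of a post-up echo.
# On a real host the file would be something like
# /etc/sysctl.d/99-forwarding.conf (any *.conf under /etc/sysctl.d works).
conf=$(mktemp)
echo 'net.ipv4.ip_forward = 1' > "$conf"
grep -q '^net.ipv4.ip_forward = 1' "$conf" && status=ok
echo "$status"   # ok
# Apply on the host with: sysctl -p "$conf"   (or: sysctl --system)
```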

Hope this can help (took me a while to figure it out :p )
 

einverne

New Member
Oct 9, 2021
I configured Proxmox VE and the VM the same way, but the VM has no Internet connection.
 
Oct 25, 2018
Hi,
can you check whether IP forwarding is enabled (cat /proc/sys/net/ipv4/ip_forward) and masquerading is active (iptables-save | grep -i masquerade)?
Sometimes the post-up directives in /etc/network/interfaces won't do the job.
Also, how did you configure the network at the VM level?
 

einverne

New Member
Oct 9, 2021
From PVE:

Code:
root@pve:~# cat /proc/sys/net/ipv4/ip_forward
1

And:

Code:
root@pve:~# iptables-save | grep -i masquerade
-A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j MASQUERADE

I tried to reboot both PVE and VM, not working.

PVE config:

Code:
auto lo
iface lo inet loopback

auto eno3
iface eno3 inet manual

iface eno4 inet manual

auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports eno3
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE

    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2022 -j DNAT --to 10.0.0.2:22
    post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 2022 -j DNAT --to 10.0.0.2:22
    post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1

The config in the VM (Ubuntu 20.04), in `/etc/netplan/00-installer-config.yaml`:

Code:
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
      - 10.0.0.3/24
      gateway4: 10.0.0.1
      nameservers:
        addresses:
        - 8.8.8.8
        - 8.8.4.4
  version: 2
 
Oct 25, 2018
Hmm, I don't know what's wrong; your config seems OK to me. I never used netplan or subiquity, so maybe I am missing something there.
 

einverne

New Member
Oct 9, 2021
Yes, a classic Debian 11. I configured the network manually using the VNC console in the Proxmox VE GUI.
Can you share your Debian VM network configuration? Let me try it later. I have configured the failover IP successfully, but I'm still figuring out how to configure the NAT network.
 
Oct 25, 2018
Sure:
Code:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens18
iface ens18 inet static
        address failoverip/24
        gateway x.x.x.254
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 1.1.1.1
        dns-nameservers 9.9.9.9

auto ens19
iface ens19 inet static
        address 10.0.0.100/32
# --- BEGIN PVE ---
        post-up ip route add 10.0.0.1 dev ens19
        post-up ip route add 10.0.0.0/24 via 10.0.0.1 dev ens19
        pre-down ip route del 10.0.0.0/24 via 10.0.0.1 dev ens19
        pre-down ip route del 10.0.0.1 dev ens19
# --- END PVE ---

ens18 has a failover IP and uses vmbr0, while ens19 uses vmbr1. Outgoing WAN traffic is not redirected through vmbr1 in that case (the default route uses ens18). As for my containers (which do not have failover IPs), their network configuration is as follows:
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 10.0.0.200/32
# --- BEGIN PVE ---
        post-up ip route add 10.0.0.1 dev eth0
        post-up ip route add default via 10.0.0.1 dev eth0
        pre-down ip route del default via 10.0.0.1 dev eth0
        pre-down ip route del 10.0.0.1 dev eth0
# --- END PVE ---
with container network configuration in PVE GUI as follows:
[Screenshot: container network configuration in the PVE GUI]
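A note on why the container config above adds two routes in that order: with a /32 address, the gateway 10.0.0.1 is not on any local subnet, so a device route to the gateway itself must exist before the default route via it can be installed. The equivalent iproute2 commands, sketched here as strings (eth0 and 10.0.0.1 are taken from the config above):

```shell
# With a /32 address nothing is on-link, so route order matters:
dev=eth0
gw=10.0.0.1
r1="ip route add $gw dev $dev"            # 1) make the gateway reachable
r2="ip route add default via $gw dev $dev" # 2) then route through it
echo "$r1"   # ip route add 10.0.0.1 dev eth0
echo "$r2"   # ip route add default via 10.0.0.1 dev eth0
```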
 

einverne

New Member
Oct 9, 2021
@AlexandreGoethals Thank you very much for your help. Both the config you gave and my config are OK. I restarted my Proxmox VE server and everything works now. I don't know why the ifup command wasn't working, and even systemctl restart networking didn't help. So if anyone hits the same problem, try rebooting your server and see what happens.
 
