Issues with 4 NICs (OVH Scale, High Grade): VMs have no internet

NoSum

I recently got my new OVH Scale range server, which comes with redundant networking (2x public, 2x vRack), and I can't get the virtual machines' network to work, with or without virtual MACs.

Below are my network settings. I am not sure how to set this up correctly to get vmbr0 working for the VMs.

/etc/network/interfaces

Code:
auto lo
iface lo inet loopback


iface enp193s0f0 inet dhcp


iface enp133s0f0 inet manual


iface enp133s0f1 inet manual


iface enp193s0f1 inet manual


iface enp9s0f3u2u2c2 inet manual


auto bond0
iface bond0 inet manual
      bond-slaves enp193s0f0 enp193s0f1
      bond-miimon 100
      bond-mode 802.3ad
      bond-xmit-hash-policy layer2+3


auto vmbr0
iface vmbr0 inet static
        address  51.195.234.XXX
        gateway  51.195.234.254
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

When applying this, it shows the following warning:

ifup -a
warning: enp193s0f0: ignoring ip address. Assigning an IP address is not allowed on enslaved interfaces. enp193s0f0 is enslaved to bond0


ip addr
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp193s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 04:3f:72:b4:6a:70 brd ff:ff:ff:ff:ff:ff
3: enp193s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 04:3f:72:b4:6a:70 brd ff:ff:ff:ff:ff:ff
4: enp133s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:42:a1:6c:42:dc brd ff:ff:ff:ff:ff:ff
5: enp133s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:42:a1:6c:42:dd brd ff:ff:ff:ff:ff:ff
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 04:3f:72:b4:6a:70 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 04:3f:72:b4:6a:70 brd ff:ff:ff:ff:ff:ff
    inet 51.195.234.XXX/32 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::63f:72ff:feb4:6a70/64 scope link
       valid_lft forever preferred_lft forever
10: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 72:e3:07:7f:f5:57 brd ff:ff:ff:ff:ff:ff

According to the OVH website, the following MACs are part of the public network:

public
Public Aggregation
04:3f:72:b4:6a:70, 04:3f:72:b4:6a:71

and the following are the vRack:

vrack
Private Aggregation
0c:42:a1:6c:42:dd, 0c:42:a1:6c:42:dc
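
One way to double-check which local interfaces those MACs belong to is iproute2's brief output (the grep pattern below is simply built from the MACs listed above):

Code:
# map OVH's listed MACs to local interface names
ip -br link | grep -iE '04:3f:72:b4:6a|0c:42:a1:6c:42'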

/etc/default/isc-dhcp-server

Code:
INTERFACESv4="vmbr0"
INTERFACESv6=""

/etc/dhcp/dhcpd.conf

Code:
ddns-update-style none;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;
option rfc3442-classless-static-routes code 121 = array of integer 8;
option ms-classless-static-routes code 249 = array of integer 8;


subnet 0.0.0.0 netmask 0.0.0.0 {
 authoritative;
 default-lease-time 21600000;
 max-lease-time 432000000;
 option routers 51.195.234.254;
 option domain-name-servers 8.8.8.8,4.2.2.1;
 option rfc3442-classless-static-routes 32, 51, 195, 234, 254, 0, 0, 0, 0, 0, 51, 195, 234, 254;
 option ms-classless-static-routes 32, 51, 195, 234, 254, 0, 0, 0, 0, 0, 51, 195, 234, 254;
  #ProxmoxIPv4
  host 1 {hardware ethernet 02:00:00:c9:f7:f6;fixed-address 198.244.139.XXX;option subnet-mask 255.255.255.255;option routers 51.195.234.254;}
 }
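
For reference, the rfc3442 bytes above decode to a /32 on-link route to 51.195.234.254 plus a default route via that same gateway, which is the usual pattern for handing out failover IPs outside the bridge's own subnet. After editing, the config can be syntax-checked and reloaded with the standard isc-dhcp-server tooling (paths as configured above):

Code:
# syntax-check the config before restarting
dhcpd -t -cf /etc/dhcp/dhcpd.conf
# restart the service and follow the log for lease activity
systemctl restart isc-dhcp-server
journalctl -u isc-dhcp-server -f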


Windows ipconfig /all

 
ifup -a
warning: enp193s0f0: ignoring ip address. Assigning an IP address is not allowed on enslaved interfaces. enp193s0f0 is enslaved to bond0

change "iface enp193s0f0 inet dhcp" to "iface enp193s0f0 inet manual"

you can have an ip address on a interface enslaved in a bond
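
Put differently, only that one stanza needs to change; a minimal sketch of the relevant part (everything else stays as posted):

Code:
# the enslaved NIC must not request an address itself
iface enp193s0f0 inet manual

# bond0 and vmbr0 remain exactly as in the original config;
# the IP and gateway live on vmbr0 only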
 
change "iface enp193s0f0 inet dhcp" to "iface enp193s0f0 inet manual"

you can have an ip address on a interface enslaved in a bond
Thank you for the reply, but from what I can see this has not changed anything.

warning: bond0: attribute bond-min-links is set to '0'

That warning shows, but 51.195.234.xxx is still working (same as before).

Some more config files for DHCP have been added above.
 
Are you sure you are using the correct physical interface?

I don't know much about the new OVH bonded option, but previously there was a dedicated interface for the vRack (so maybe 2 interfaces bonded for the public network and 2 other interfaces bonded for the vRack).

You should assign virtual MACs to the public IPs in the OVH panel, then use those MACs for the Proxmox VMs' NICs, and it should work out of the box.
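
As an illustration of that last step (assuming VM ID 106 from the tap106i0 device shown earlier; the MAC below is a placeholder for a vMAC generated in the OVH panel):

Code:
# attach the VM's first NIC to vmbr0 using the OVH-generated virtual MAC
qm set 106 --net0 virtio=02:00:00:xx:xx:xx,bridge=vmbr0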
 
Attempting to use 51.195.234.xxx (the node's public IP) with enp193s0f1 fails; it can only be used with enp193s0f0. Additional IPs do work with enp193s0f1 when I do not set any virtual MAC (not in a VM through vmbr0).

IPs through the vRack are not an option, as the first/last IP of each block is unusable.
 
First things first: you cannot put an IP on a network device that another network device has laid total claim on.

That is, you cannot assign an IP to a physical Ethernet device if a bonding or bridging device has claimed it.

You can only place an IP address on the bridge or the bonding interface.

This is a design limitation shared by NetBSD, Linux, and many other Unixes.
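
A quick way to see that on a running box is iproute2's brief listing; the enslaved NICs and bond0 should show no IPv4 address, only vmbr0 should:

Code:
ip -br addr show
# expected: enp193s0f0 / enp193s0f1 / bond0 carry no inet address,
# vmbr0 carries 51.195.234.XXX/32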
 
First things first: you cannot put an IP on a network device that another network device has laid total claim on.

That is, you cannot assign an IP to a physical Ethernet device if a bonding or bridging device has claimed it.

You can only place an IP address on the bridge or the bonding interface.

This is a design limitation shared by NetBSD, Linux, and many other Unixes.
Thanks for the info. Here is how I normally get it working with OVH:

Code:
auto lo
iface lo inet loopback


iface eth0 inet dhcp


iface eth1 inet dhcp


iface eth2 inet dhcp


iface enp5s0f0 inet manual


iface enp5s0f1 inet manual


iface enp7s0f3u2u2c2 inet manual


auto vmbr0
iface vmbr0 inet static
        address 51.89.172.xxx/24
        gateway 51.89.172.254
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0

But doing that doesn't seem to work (below)

Code:
auto vmbr0
iface vmbr0 inet static
        address  51.195.234.XXX
        gateway  51.195.234.254
        bridge-ports enp193s0f0
        bridge-stp off
        bridge-fd 0

I only went the bonding route because I thought maybe it was required, and the idea of a redundant network is appealing, but at this point I just want to get anything working.
 
I have it working now. It looks like, for the new server ranges, you have to add the following to the vmbr0 section of /etc/network/interfaces:

post-up ip route add 198.244.139.xxx/32 dev vmbr0
post-up echo 1 >/proc/sys/net/ipv4/ip_forward
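
As a sketch, the vmbr0 stanza from the first post would then look roughly like this (the /32 route target is the VM's failover IP; the ip_forward sysctl could also be made persistent via /etc/sysctl.conf instead of the post-up echo):

Code:
auto vmbr0
iface vmbr0 inet static
        address 51.195.234.XXX
        gateway 51.195.234.254
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        # route the VM's failover IP onto the bridge
        post-up ip route add 198.244.139.xxx/32 dev vmbr0
        # enable IPv4 forwarding so the host routes traffic for the VM
        post-up echo 1 >/proc/sys/net/ipv4/ip_forward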
 
I've got the same server setup and the IPs are working great on the VMs, with traffic going across both interfaces. I do seem to have some issue that is keeping me from hitting any kind of decent network performance between my two boxes with this same setup (I get only about 200 Mbps), but performance to/from my old servers that don't have bonded interfaces is great (I have seen as high as 8 Gbps).

Note: I took the address I was getting via DHCP on the public aggregate link and used it statically so I could use that interface in my Proxmox cluster, which makes sense. Below is my config file; hopefully this helps you.

-Paul


Code:
auto lo
iface lo inet loopback


auto ens33f0
iface ens33f0 inet manual


auto ens33f1
iface ens33f1 inet manual


auto ens43f0
iface ens43f0 inet manual


auto ens43f1
iface ens43f1 inet manual


auto bond0
iface bond0 inet manual
    bond-slaves ens43f0 ens43f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-downdelay 200
    bond-lacp-rate 1
#vRack OAL


auto bond1
iface bond1 inet manual
    bond-slaves ens33f0 ens33f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-downdelay 200
    bond-lacp-rate 1
#Public OAL


auto vmbr0
iface vmbr0 inet static
    address 192.168.xx.xx/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
#vRack


auto vmbr1
iface vmbr1 inet static
    address xx.xx.xx.xx/32
    gateway 100.64.0.1
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
#Public
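
Two things worth checking with an LACP setup like this: whether both member links actually negotiated into the aggregate, and whether the throughput test uses several parallel streams, since with layer3+4 hashing a single TCP flow only ever uses one member link:

Code:
# confirm both slaves are up and their LACP aggregator IDs match
cat /proc/net/bonding/bond0
# test with parallel streams so flows can hash across both links
iperf3 -c <other-node-ip> -P 8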
 
I have it working now. It looks like, for the new server ranges, you have to add the following to the vmbr0 section of /etc/network/interfaces:

post-up ip route add 198.244.139.xxx/32 dev vmbr0
post-up echo 1 >/proc/sys/net/ipv4/ip_forward
Hey NoSum, I stumbled upon your solution. Great effort. Do you mind sharing your full config, and also how you configured internet access on the VMs? I'm also setting up Proxmox 7.4 on a Scale server from OVH.

Thanks for the help.
 