Accessing OVH RBD service via secondary NIC doesn't work

Jul 17, 2019
Hi all!!

How can I access an external storage using a secondary NIC instead of using the main pve bridge?

Important data first:

proxmox-pve 5.4-2

External storage is Cloud Disk Array (Ceph cloud via RBD).

The server has 2 NICs:
a) "default" NIC is attached to OVH vRack, has a public IP address and support the gateway for the network traffic. vmbr0 uses this NIC. Real name: bmvr0 on top of enp97s0f1
b) "secondary" NIC is attached to the default network (a.k.a. public network), and I'm trying to route through it the traffic to external storage. Real name: enp97s0f0

My simplest try:

/etc/network/interfaces (fake IPs):

auto lo
iface lo inet loopback

auto enp97s0f1
iface enp97s0f1 inet static

auto enp97s0f0
iface enp97s0f0 inet static
        address 1.1.1.1
        netmask 255.255.255.0

iface enp34s0f3u2u2c2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 2.2.2.2
        netmask 255.255.255.240
        gateway 2.2.2.3
        broadcast 2.2.2.4
        bridge-ports enp97s0f1
        bridge-stp off
        bridge-fd 0


Up to here everything works, of course, but all the traffic runs through vmbr0. Next step: adding routes to the storage through enp97s0f0:

10.10.10.10 dev enp97s0f0 scope link
11.11.11.11 dev enp97s0f0 scope link
12.12.12.12 dev enp97s0f0 scope link
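
To make those routes persistent I plan to turn them into post-up lines on the secondary NIC; a sketch with the same fake addresses, assuming plain ifupdown as shipped with PVE 5.x:

auto enp97s0f0
iface enp97s0f0 inet static
        address 1.1.1.1
        netmask 255.255.255.0
        post-up ip route add 10.10.10.10/32 dev enp97s0f0
        post-up ip route add 11.11.11.11/32 dev enp97s0f0
        post-up ip route add 12.12.12.12/32 dev enp97s0f0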

Everything keeps working; some connections go through enp97s0f0, but the vast majority of the traffic still goes through vmbr0. So if I remove the authorization (on the storage side) for 2.2.2.2 I can no longer access the data, but I can still see the chart with free/used space (and it keeps updating).

Building a bridge (vmbr1) on top of enp97s0f0 and routing the same way gives the same results.
Doing everything the other way around (using the secondary NIC as the default one, and vice versa) also gives the same results.
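
In case it helps, this is roughly how I check which NIC the Ceph traffic actually uses (assuming the default monitor port 6789 and the fake monitor addresses above):

# which interface/source address the kernel picks for a monitor
ip route get 10.10.10.10

# established connections towards the monitors and their local addresses
ss -ntp | grep 6789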

Is there anything I forgot?

Thanks in advance
 
Best ask OVH how this should be configured. But judging from your description, the Ceph traffic (public & cluster) is going through 'enp97s0f1', and you will need a second bridge on 'enp97s0f1' to run the public (non-Ceph) traffic for the VMs.
 
My 2 cents: give up now. OVH CDA is just not up to the task (I tried). It's far too slow to be used by anything, even a lightly loaded test VM.
 
Best ask OVH how this should be configured. But judging from your description, the Ceph traffic (public & cluster) is going through 'enp97s0f1', and you will need a second bridge on 'enp97s0f1' to run the public (non-Ceph) traffic for the VMs.

Thank you Alwin, you put me on the right track. I have linked vmbr0 to the "default" NIC (storage traffic only) and created a new vmbr1 (with no IP address) linked to the secondary NIC; every container/VM NIC is now attached to vmbr1 and everything works like a charm :)
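
Roughly, the relevant part of /etc/network/interfaces now looks like this (same fake IPs as in my first post; this is only a sketch, so adjust the NIC-to-bridge mapping to your own setup):

auto enp97s0f1
iface enp97s0f1 inet manual

auto enp97s0f0
iface enp97s0f0 inet manual

# vmbr0: host IP, gateway and storage traffic
auto vmbr0
iface vmbr0 inet static
        address 2.2.2.2
        netmask 255.255.255.240
        gateway 2.2.2.3
        bridge-ports enp97s0f1
        bridge-stp off
        bridge-fd 0

# vmbr1: no IP on the host, container/VM traffic only
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp97s0f0
        bridge-stp off
        bridge-fd 0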
 
My 2 cents: give up now. OVH CDA is just not up to the task (I tried). It's far too slow to be used by anything, even a lightly loaded test VM.

I'm using servers with 10Gb NICs, located in the same datacenter as the storage, and it seems to work fine so far, but next week I'll run more intensive tests. Thanks for the heads-up :)
 
