Install Proxmox in an OVH vRack

Hello,

I am trying to install Proxmox on a dedicated server at OVH, but I cannot find much documentation, and what I have found might as well be in Chinese to me. Here is what I have: two dedicated servers with Proxmox. Both servers have an external IP address, but I would like the VMs to communicate with each other via private IP addresses. I have an IP address block:

66.70.241.32/27
Network IP : 66.70.241.32
Broadcast : 66.70.241.63
Gateway : 66.70.241.62
Netmask : 255.255.255.224

Here is the information about my network cards.

Code:
root@4:~# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 3c:ec:ef:0d:2d:54 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3c:ec:ef:0d:2d:55 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:ec:ef:0d:2d:54 brd ff:ff:ff:ff:ff:ff
    inet 51.222.82.191/24 brd 51.222.82.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::3eec:efff:fe0d:2d54/64 scope link
       valid_lft forever preferred_lft forever

And here is my network configuration.

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 40.222.82.191
        netmask 255.255.255.0
        gateway 40.222.82.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

iface eno2 inet manual

The problem is, I don't know what changes to make in my configuration.

Thank you very much!
 
Why are the addresses in your network configuration file completely different from those that you posted in the beginning?
Anyway, the network configuration section of our administration guide would be a good start.
 
I don't know if it is still relevant, but I did the same today on an OVH server, and it worked like a charm. Proxmox identifies two network cards on an OVH server with vRack:
  • eno1: the network card attached to the public network
  • eno2: the network card attached to the vRack
Go to the Proxmox web GUI. Under "System" -> "Network", you will find that a "vmbr0" Linux bridge is already fully functional on your server, running on eno1 with the public IP of your machine.

All you have to do to let your VMs use private and public IPs on the vRack is to create a new Linux bridge, with:
  • name: vmbr1
  • IPv4/CIDR: a static IP on a private network of your choice, e.g. 192.168.100.10/24
  • Bridge ports: eno2
Now restart networking, or restart the server. That's all.
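For reference, the resulting entry in /etc/network/interfaces should look roughly like this (a sketch; the address is the example private IP from above, so adjust it to your own network):

Code:
auto vmbr1
iface vmbr1 inet static
        address 192.168.100.10/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0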

When you create a new VM, choose "vmbr1" as its network card. You can then set a public or private IP inside the VM, from any IP pool that is assigned to your vRack.
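On the command line, attaching a VM's NIC to that bridge looks something like this (the VM ID 100 is just an example):

Code:
qm set 100 --net0 virtio,bridge=vmbr1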

Hope it helps
 
This works well.
But how is it possible to put this connection / public IP on a VLAN?
We have a test VM with eth0 (bridge=vmbr1); vmbr1 is connected to the OVH vRack with a 192.168.10..../24 private IP to communicate in the cluster.
It works well with a public IP.
But if I switch to eth0 (bridge=vmbr1, vlan 1111) and set this CentOS 7 VM to use a VLAN on eth0, the server is no longer on the internet; no connection from or to it is possible.

Thanks
 
Thanks. Our setup works, but we would like to put our additional public IPs on a VLAN.
The Proxmox cluster is 192.168.1.0/24 with 6 nodes connected to each other via the vRack. Corosync runs on that network.

Because "the problem" is:
we create a VM with NIC eth0 on vmbr1. normaly the customer get an ip of our public OVH vRACK ip pool. all is fine.
Now the customer have root access to his VM.
He/she can set/change the ip of its eth0 to one of 192.168.1.0/24 and can ping the cluster nodes (and so on).
we like to have an isolated network for our public customer ips.

I hope the explanation is enough.

We set eth0 on vmbr1 to VLAN 111 and configured CentOS 7 (in the VM) to use it; maybe we are wrong with that. We read the docs about VLANs in CentOS 7. The issue is that the VM is not online. Maybe VLANs in the vRack are not possible for public IPs (OVH)?
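For reference, the tag on the VM NIC is set roughly like this (the VM ID 100 is just an example):

Code:
qm set 100 --net0 virtio,bridge=vmbr1,tag=111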
Are there any suggestions to solve the problem? We would like to have an isolated network for our public customer IPs.

thanks so much
 
I think what you want is possible, but you'll have to set up your public IP block on a VLAN on the vRack network card; I'm not sure it can run on the WAN network card.
https://docs.ovh.com/us/en/dedicated/ip-block-vrack/
Thanks. Yes, that is what we do; as described, it works.
BUT:
Proxmox node:
eth0 -> vmbr0 with the unique public IP of the Proxmox node, e.g. 100.100.100.5 (not connected to the vRack; OVH binds the MAC directly to this hardware server).
eth1 -> vmbr1 is the vRack interface, IP 192.168.1.5.
Another public IP block, 50.50.50.0/24, is connected to the vRack at OVH.

The problem is that the Proxmox nodes use the same NIC (the vRack one) for their private cluster network. OVH servers only have one public NIC (eth0) and one connected to the vRack (eth1).
So any guest VM with eth0 -> vmbr1 gets a correct connection with 50.50.50.3, BUT it can "join" the Proxmox network if the user changes its IP from 50.50.50.3 to 192.168.1.3.
So we would like to put this VM interface (eth0 -> vmbr1) on a VLAN, but in our tests the server is not online after the change.

I hope it is clearer now.
 
Thanks, but that's not the case. We can use VLANs, but we cannot use them with public IPs.
 
Is this thread still active? What would I need to do to add a new dedicated server on the same vRack as your existing Proxmox VE one?
 
Hi, resurrecting this thread...

So what's the best way to configure a Proxmox cluster with an OVH vRack on bare metal hosts?

I have 3 bare metal OVH servers, all in the same vRack, and a public IP subnet assigned to the vRack.

So far I've done this:

Code:
... (vmbr0 has the OVH public IP, do not use for VMs or anything else)

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#vRack, VLAN aware (assigned to the 2nd host interface, vRack)

auto vlan666
iface vlan666 inet static
        address 10.66.6.1/24
        vlan-raw-device vmbr1
#cluster traffic only

Then I create VMs with virtual interfaces in separate VLANs attached to the host's vmbr1.

Still missing from that setup is some router/NAT VM to redirect the public IPs to the private VMs inside the VLANs (that's the way to do it, right?).
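I was thinking of something along these lines inside that router VM (just a sketch; 203.0.113.10 stands in for one of the vRack public IPs, 10.66.10.10 for a VM in one of the private VLANs, and eth0 for the router VM's public leg, all made up):

Code:
# enable IP forwarding in the router VM
sysctl -w net.ipv4.ip_forward=1
# forward one public vRack IP (example address) to a private VM
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.66.10.10
# masquerade outbound traffic from the private VLAN via the public leg
iptables -t nat -A POSTROUTING -s 10.66.10.0/24 -o eth0 -j MASQUERADE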

I think the public IPs in the vRack are not in any VLAN?
 
Thank you!

I can confirm, it's working like a charm!

Have a wonderful day :)
 
In 2024 this is still very helpful. OVH's docs are really bad when it comes to vRacks. I couldn't figure out how to do it. It seems overly complicated. Thank you for this.
 
