[SOLVED] Basic question regarding OVH's failover IPs

Remarkable-Guille

Hi

Sorry for the newbie question...

If I set up a Proxmox cluster at OVH with each VM using failover IPs, does that mean I can migrate from/to any node without changing the device assigned to the FO IP in the OVH management panel? Or do I have to update the device in OVH's panel each time I migrate a VM to another node?

Thank You
 
Hello. You can certainly move the FO IP from one host to the other manually using the OVH manager. That's how I've been working for years.

I've read it's possible to automate this using the OVH API and/or a RIPE IP range with a script that sends some ARP packets, but I've never investigated that approach.
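From what I've read, the API part would be a single call that moves the failover IP to the target server. A rough, untested sketch, assuming the python-ovh client and the POST /ip/{ip}/move route (the IP and service name below are placeholders):

Code:
import urllib.parse

import ovh  # python-ovh client, reads the API credentials from ovh.conf

FO_IP = "203.0.113.10/32"             # failover IP block (placeholder)
TARGET = "ns1234567.ip-203-0-113.eu"  # OVH service name of the destination node (placeholder)

client = ovh.Client()
# ask OVH to re-route the failover IP to the destination service
client.post("/ip/" + urllib.parse.quote(FO_IP, safe="") + "/move", to=TARGET)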
 
Thank You,

I'm already using the vRack, but only for the cluster and backups. @spirit's suggestion makes sense and is worth a try.

Another way would be calling the OVH API from Proxmox hookscripts, but I feel this is way beyond my coding skills :(
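(If someone more comfortable with scripting wants to try, my understanding is that the skeleton would be something like the sketch below, completely untested: Proxmox passes the vmid and the phase to the hookscript, and on post-start you would call the OVH API as in the snippet above.)

Code:
#!/usr/bin/env python3
# Untested hookscript sketch; it would be attached with something like:
#   qm set <vmid> --hookscript local:snippets/move-fo-ip.py
import sys

def main() -> int:
    vmid, phase = sys.argv[1], sys.argv[2]
    if phase == "post-start":
        # the guest just started on this node: this is where the OVH API
        # call (/ip/{ip}/move, see above) would re-route its failover IP
        print(f"VM {vmid} started, re-route its failover IP to this node")
    return 0

if __name__ == "__main__":
    sys.exit(main())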

Another mistake I made with OVH was ordering IPs in blocks (/29); it's better to order /32s, because FO IPs can only be moved as a whole range and the price per IP is the same no matter the block size. Hopefully this will help others.
 
Yes, I always order them as /32s for the same reason.

But correct me if I'm wrong, it seems we can only associate IP ranges with the vRack, not individual /32 IPs?
 
Here is something interesting
https://community.ovh.com/t/cluster-proxmox-ip-dediee-a-chaque-vm/11621/8

The interesting paragraphs, translated:
If you want to do automatic failover between several servers/VMs, the most flexible solution remains the vRack.
You have to order a complete RIPE block and add it to the vRack associated with your servers. You set up your machines; the MAC addresses do not need to be configured in this case, the ARP tables pick up the MACs of your VMs directly.
Once the failover IP is configured, be careful to use the last IP of your block as the gateway.
For the switchover to take place, traffic must be generated from the new location. This updates the ARP tables, so your switchover between servers can easily be automated.

To be tested
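If the switchover ever needs a nudge, generating that traffic can be as simple as a ping from inside the guest; a gratuitous ARP announcement should also do it. A rough sketch with scapy, run inside the guest (untested; the IP, MAC and interface name are placeholders):

Code:
from scapy.all import ARP, Ether, sendp

IP = "203.0.113.10"        # failover IP held by this guest (placeholder)
MAC = "02:00:00:aa:bb:cc"  # MAC address of the guest NIC (placeholder)
IFACE = "eth0"             # interface carrying the failover IP (placeholder)

# gratuitous ARP: broadcast "IP is at MAC" so the ARP tables are refreshed
pkt = Ether(dst="ff:ff:ff:ff:ff:ff", src=MAC) / ARP(
    op=2, hwsrc=MAC, psrc=IP, hwdst="ff:ff:ff:ff:ff:ff", pdst=IP
)
sendp(pkt, iface=IFACE, count=3, verbose=False)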
 
Tested and it works very well!

I had to change the configuration of my vmbr1 bridge on the host, and be careful: you can't have two different gateways at the same time on a container.

I will write a blog post about this and reference it here later.

In brief:
- you need an IP range (not an individual /32 IP) linked to the vRack
- you need a bridge configured on the vRack interface on the host
- configure an IP from the range on the CT (not the first one, nor the last two), without a MAC, using that bridge and with the correct gateway (which is the second-to-last IP of the range)
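For the second point, a minimal sketch of what the host side could look like in /etc/network/interfaces (assuming ens4 is the interface connected to the vRack and 172.16.0.0/24 as an example internal range; adjust to your own setup):

Code:
# vmbr1 bridges the vRack-facing interface; the address is this host's
# internal vRack IP (example values only)
auto vmbr1
iface vmbr1 inet static
        address 172.16.0.1/24
        bridge-ports ens4
        bridge-stp off
        bridge-fd 0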
 
Are your servers located in the same datacenter or country? I guess this won't work for servers in different countries and/or regions (RIPE/ARIN)
 
Hello friend, congrats for this... I'm stuck at this step:

- you need a bridge configured on the vRack interface on the host

Do I have to create vmbr1 with the vRack local IP range...? Can someone help me?

Thanks in advance
 
Yes friend, thank you for your answer... but nothing is working well :(

[screenshot]

and for the CT I have this setup...

[screenshot]

X.X.X.170 is the second usable IP of the block... and X.X.X.174 is the gateway of the block

[screenshot]
 
Almost right

vmbr1 should use the internal vRack IP, not one from the range (the local IPs you use to create the cluster... but it seems you don't have one?)
In my case it's a 172.16.x.x IP

[screenshot]

In the OVH manager, link your range(s) to the vRack
[screenshot]

Then assign an IP from the range to the CT, but:
- not the first one (the network address)
- not the last one (the broadcast address)
- not the second-to-last (the gateway)

[screenshot]


In my case I have an x.x.x.128/27 range, so
- x.x.x.128
- x.x.x.159
- x.x.x.158
can't be used for a CT.

The CT should be reachable. Schedule automatic replication (every 15 minutes by default); then migrating a CT from one host to another should only take seconds :)
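With such a range, the container's network line ends up looking roughly like this (a sketch; x.x.x.130 is just an example of a usable address from the block):

Code:
# in /etc/pve/lxc/<vmid>.conf, equivalent to what the GUI writes
net0: name=eth0,bridge=vmbr1,ip=x.x.x.130/27,gw=x.x.x.158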
 
Hello friend, thank you very much for the answer... but something is already wrong. I'm trying with a Public Cloud instance [screenshot]
and after the Proxmox install I have two network interfaces (ens3 & ens4), and yes... it seems I don't have a private IP on ens4 in Proxmox... very strange... Waiting for help... Hmmm, maybe this doesn't work with Proxmox installed on a Public Cloud instance???
 
Well... it seems that a public Additional IP block associated with the vRack does not work on Public Cloud instances :mad:
because public IPs can't be used on the vRack with Public Cloud... any private range is OK, but not a public one.

On dedicated servers it works very well.

I'm thinking of using load balancers instead (another downside: OVH Public Cloud load balancers are reserved for use with Kubernetes).
 
