Migration network with multiple IPs

Mar 21, 2018
Hi,

For various reasons I have two bond interfaces (rough config sketch below):
  • bond0, 4x1G, for the public network, used by PVE with Open vSwitch.
  • bond1, 2x10G, for Ceph. Note: this network is completely isolated on dedicated switches with no uplink.
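For context, the Ceph bond looks roughly like this (an abridged /etc/network/interfaces sketch; the slave NIC names eth4/eth5 and the bond mode are assumptions, not copied from my actual config):

# /etc/network/interfaces (abridged sketch)
auto bond1
iface bond1 inet static
    address 172.30.80.22
    netmask 255.255.255.0
    bond-slaves eth4 eth5
    bond-mode 802.3ad
    bond-miimon 100
    mtu 9000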
By default, migration went over the public network instead of the Ceph one. Having the migration_network option in datacenter.cfg helps a lot. (BTW, being able to configure it via the WebUI would be great, but that's not the point here.)
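For anyone searching later, that is a single line (a minimal sketch of my setup):

# /etc/pve/datacenter.cfg
# send live-migration traffic over the Ceph network instead of the public one
migration_network: 172.30.80.0/24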


For some other reason, some nodes have two IP addresses on the Ceph interface:

/sbin/ip address show to 172.30.80.0/24 up
18: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    inet 172.30.80.22/24 brd 172.30.80.255 scope global bond1
       valid_lft forever preferred_lft forever
    inet 172.30.80.16/32 brd 172.30.80.16 scope global bond1:1
       valid_lft forever preferred_lft forever


This makes the mtunnel configuration fail, since it expects a single IP in the range:

pvecm mtunnel --get_migration_ip --migration_network 172.30.80.0/24
could not get migration ip: multiple IP address configured for network '172.30.80.0/24'


As can be seen in the code: https://git.proxmox.com/?p=pve-cluster.git;a=blob;f=data/PVE/Cluster.pm;#l1054
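From memory, the check looks roughly like this (a paraphrase of that function, not a verbatim copy of the upstream code):

sub get_local_migration_ip {
    my ($migration_network, $noerr) = @_;

    # falls back to the migration network from datacenter.cfg when none is given
    my $cidr = $migration_network;

    if (defined($cidr)) {
        my $ips = PVE::Network::get_local_ip_from_cidr($cidr);

        die "could not get migration ip: no IP address configured on local node for network '$cidr'\n"
            if !$noerr && scalar(@$ips) == 0;

        # the check in question: refuse to pick when more than one address matches
        die "could not get migration ip: multiple IP address configured for network '$cidr'\n"
            if !$noerr && scalar(@$ips) > 1;

        return $ips->[0];
    }

    return undef;
}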

Is there a specific reason for this check, given that ip(1) returns the primary IP address first?

Is it safe to bypass (comment out) this check, since the get_local_migration_ip function seems to be called only for migration purposes? The few tests I have done so far have been fine.

Also, the $noerr parameter does not seem to be used by any caller.
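For illustration, a hypothetical caller taking advantage of it could look like this (not real PVE code, just to show what $noerr would allow):

# hypothetical caller: with $noerr set, the function would return the first
# matching address (or undef) instead of dying
my $ip = PVE::Cluster::get_local_migration_ip($migration_network, 1);
warn "could not determine a migration IP, using the default\n" if !defined($ip);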

Thanks in advance.

PS: I am using PVE 4.4-22/2728f613 Community edition.
 
> Is there a specific reason for this check, given that ip(1) returns the primary IP address first?

Multiple IP addresses in the same subnet on the same node add potential complexity, and that case is simply not implemented.

> Is it safe to bypass (comment out) this check, since the get_local_migration_ip function seems to be called only for migration purposes?

> Also, the $noerr parameter does not seem to be used by any caller.

This has never been tested and is not supported - you can experiment at your own risk.


> PS: I am using PVE 4.4-22/2728f613 Community edition.

A rather old version - but regarding the topic at hand, there is no change in newer versions.
 
