Hi,
For various reasons I have two bond interfaces:
- bond0: 4x1G, for the public network and PVE, using Open vSwitch.
- bond1: 2x10G, for Ceph. Note: this network is completely isolated on dedicated switches with no uplink.
For other reasons, some nodes have two IP addresses on the Ceph interface:
/sbin/ip address show to 172.30.80.0/24 up
18: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
inet 172.30.80.22/24 brd 172.30.80.255 scope global bond1
valid_lft forever preferred_lft forever
inet 172.30.80.16/32 brd 172.30.80.16 scope global bond1:1
valid_lft forever preferred_lft forever
This makes the mtunnel configuration fail, since it expects a single IP in the range:
pvecm mtunnel --get_migration_ip --migration_network 172.30.80.0/24
could not get migration ip: multiple IP address configured for network '172.30.80.0/24'
The check is in the code here: https://git.proxmox.com/?p=pve-cluster.git;a=blob;f=data/PVE/Cluster.pm;#l1054
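To make the question concrete, here is a rough, self-contained sketch of the kind of logic involved, as I understand it. This is illustrative only, not the actual Cluster.pm code, and the helper name get_ips_in_network is mine:

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative only: a simplified rewrite from memory of what the check
# seems to do, not the actual PVE::Cluster code. It collects every local
# address inside the given network and refuses to continue if there is
# more than one.
sub get_ips_in_network {
    my ($cidr) = @_;
    my @ips;
    # Same command as shown above; each matching 'inet' line yields one entry.
    open(my $fh, '-|', '/sbin/ip', 'address', 'show', 'to', $cidr, 'up')
        or die "failed to run ip: $!\n";
    while (my $line = <$fh>) {
        push @ips, $1 if $line =~ m!^\s+inet\s+(\d+\.\d+\.\d+\.\d+)/\d+!;
    }
    close($fh);
    return \@ips;
}

my $network = '172.30.80.0/24';
my $ips = get_ips_in_network($network);

die "no IP address configured for network '$network'\n" if !@$ips;

# This is the check my nodes trip over: bond1 and bond1:1 both match.
die "multiple IP address configured for network '$network'\n" if @$ips > 1;

print "migration ip: $ips->[0]\n";

With the output quoted above, the list ends up as ('172.30.80.22', '172.30.80.16'), hence the error.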
Is there a specific reason for this check, given that ip(1) returns the primary IP address first?
Is it safe to bypass (comment out) this check, since the get_local_migration_ip function seems to be called only for migration purposes? The few tests I have done so far have been fine.
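Concretely, the change I have in mind would behave roughly like this (again only a sketch building on the illustration above, not a tested patch to the real function): keep the error when no address matches, but fall back to the first address when several do, since ip(1) lists the primary address first.

# Sketch of the relaxed behaviour I am considering, reusing the
# get_ips_in_network() helper and $network from the illustration above.
my $ips = get_ips_in_network($network);

die "no IP address configured for network '$network'\n" if !@$ips;

# Instead of dying on multiple matches, warn and take the primary address.
warn "multiple IP addresses in '$network', using $ips->[0]\n" if @$ips > 1;

my $migration_ip = $ips->[0];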
Also, the $noerr parameter does not seem to be used by any caller.
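For what it's worth, my understanding of the usual convention for such a parameter in PVE-style code is roughly the following (a generic illustration, not the actual function signature):

# Generic illustration of the usual $noerr convention: when set, return
# undef instead of dying so the caller can handle the failure itself.
# I could not find any caller passing it for get_local_migration_ip.
sub get_migration_ip_sketch {
    my ($network, $noerr) = @_;

    my $ips = get_ips_in_network($network);

    if (scalar(@$ips) != 1) {
        return undef if $noerr;    # caller asked not to die
        die "could not determine a unique migration IP for '$network'\n";
    }

    return $ips->[0];
}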
Thanks in advance.
PS: I am using PVE 4.4-22/2728f613 Community edition.