How can I set the migration interface?

informant

Renowned Member
Jan 31, 2012
Hi all, I have one internal network interface and one public interface. The public one is for reaching the VMs, and I would like to use the internal one for migrating VMs to other nodes. But when I migrate, the node always uses the public interface, never the internal one. Why, and where can I configure which interface is used for which service? Can I select and configure it? Best regards...
 
I guess that vmbr0 is used for internal communication and migration, so you should set up vmbr1 with your public IP and vmbr0 with your private one.
 
Hi, thanks for answering. But it's not configurable, right? Or is there a workaround?

regards
 
Hi, sure, here are the cat outputs:
Code:
cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 217.*.*.*
        netmask 255.255.254.0
        gateway 217.*.*.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.11.12.63
        netmask 255.255.0.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

Code:
cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
217.*.*.* srv01.mydomain.tld srv01 pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
 
It seems that during cluster initialisation you made the mistake of specifying the public IP for cluster communication.

Before joining nodes to your cluster, you'd have to take the following steps:

nano /etc/hosts
and add the following line:

Code:
10.11.12.63 srv01.mydomain.tld srv01

You then switch the pvelocalhost flag to the network that Proxmox should communicate over.

It should then look like this:

Code:
127.0.0.1 localhost.localdomain localhost
217.*.*.* srv01.mydomain.tld srv01
10.11.12.63 srv01.mydomain.tld srv01 pvelocalhost
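As a sanity check before overwriting the real file, you can stage the edit in a temporary file and verify where the pvelocalhost alias ends up (the names and masked IPs below are the examples from this thread):

```shell
# Stage the candidate hosts file in a temp location first, then inspect
# it before copying it over /etc/hosts.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
127.0.0.1 localhost.localdomain localhost
217.*.*.* srv01.mydomain.tld srv01
10.11.12.63 srv01.mydomain.tld srv01 pvelocalhost
EOF

# pvelocalhost must appear exactly once, and on the private address:
grep -c pvelocalhost "$tmp"             # prints: 1
awk '/pvelocalhost/ {print $1}' "$tmp"  # prints: 10.11.12.63
```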

When you then join nodes to the cluster, you do it like this:

Code:
pvecm add 10.11.12.x

I am not sure whether you can actually change the IPs on a whim now that the cluster is already set up. During cluster setup you already joined the nodes via the public IP, and AFAIK they will keep talking over it unless you change the configs.
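On PVE 4.x you can see which address each node joined with by looking at /etc/pve/corosync.conf. A node entry looks roughly like the sketch below (illustrative excerpt; the exact keys can differ by version):

```
# excerpt from /etc/pve/corosync.conf (illustrative)
nodelist {
  node {
    nodeid: 1
    quorum_votes: 1
    ring0_addr: srv01    # resolved via /etc/hosts at join time
  }
}
```

If ring0_addr is a hostname, whatever that name resolves to in /etc/hosts is what corosync uses.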


If you are unsure which IPs you used when joining nodes to the cluster, issue this command:

Code:
pvecm status
 
Hi, when I add a node with the local IP, the public IP is what gets listed afterwards.

Could I also just change the vmbr0 and vmbr1 IPs, and it should work after the change, right?

regards
 
I believe that once you make the changes to /etc/hosts that q-wulf mentions, you can simply reboot all your nodes and things should work OK.

The last time I changed the network after setting up clustering was many years ago, back in Proxmox 1.X, and that's how I did it.
No idea if anything has changed that would prevent this from working.

I have also always ensured that there is a hosts entry for every node in each node's /etc/hosts.
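For example, each node's /etc/hosts would then carry one line per cluster member, with pvelocalhost only on the node's own line (hostnames and private IPs here are illustrative, patterned on the ones in this thread):

```
127.0.0.1 localhost.localdomain localhost
10.11.12.63 srv01.mydomain.tld srv01 pvelocalhost
10.11.12.64 srv02.mydomain.tld srv02
10.11.12.65 srv03.mydomain.tld srv03
```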
 
Hi, I have changed it, but after a reboot everything was the same: backups run over the internet IP, not over the local IP. What's the next step?

regards
 
I have a hard time understanding parts of your last post, mainly due to the level of English used.

Are you trying to say the following?

I have implemented the change in post #7 of this topic ( https://forum.proxmox.com/threads/how-can-i-set-migrate-interface.25340/#post-127789 )

However, after a reboot it still lists the Internet_IP instead of the Local_IP.
I have created a new node. I then added this node to the cluster, using the Local_IP.
However, the cluster still adds the node using the Internet_IP.

Do I need to change vmbr0 to vmbr1 and vice versa to get it to work?

If that is what you are trying to say, then this would be my reply:


Q1: Where did you implement these changes ? On the Original Node(s) ?
Q2: Did you implement those changes on the new Node(s) you later joined to your Cluster as well ?
Q3: Have you "removed" your initial cluster from your previous node(s) (Q1) and then set it up again, before trying to join a new node?
Q4: Assuming Q1-Q3 do not lead to a successful resolution, can you post the output of "cat /etc/hosts" and "cat /etc/network/interfaces" for every node involved in the cluster?
 
Hi, thanks for the answer.

I changed the hosts entry on the cluster and created a new node with this entry in its hosts file. Then I restarted the new node and the cluster, and then added the new node to the cluster with its local IP. The cluster has 4 other previously added nodes plus this new node. All of them show the internet IP and not the local IP when I run pvecm status.
Do I have to change the hosts entry on all already-added nodes and restart them, or do I have to delete all the nodes and re-add them with the local IP? I don't understand why the local IP was not used, since I joined the new node with it. I have not removed the old existing nodes from the cluster.

cluster hosts is:
Code:
cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
217.11.12.67 pegasus.domain.com pegasus
10.11.12.60 pegasus.domain.com pegasus pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

new node hosts is:
Code:
cat /etc/hosts
127.0.0.1       localhost
217.11.12.61   euronda euronda.domain.com
10.11.12.68     euronda.domain.com euronda pvelocalhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

I have not changed the hosts files on the 4 other nodes yet.

Code:
 pvecm status
Quorum information
------------------
Date:             Tue Jan 26 11:25:31 2016
Quorum provider:  corosync_votequorum
Nodes:            7
Node ID:          0x00000008
Ring ID:          26396
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   8
Highest expected: 8
Total votes:      7
Quorum:           5
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000008          1 217.11.12.61 (local)
0x00000003          1 217.11.12.64
0x00000006          1 217.11.12.65
0x00000004          1 217.11.12.66
0x00000001          1 217.11.12.67
0x00000005          1 217.11.12.68
0x00000002          1 217.11.12.69

network setting are:
Code:
cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 217.11.12.61
        netmask 255.255.254.0
        gateway 217.11.12.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0


auto vmbr1
iface vmbr1 inet static
        address 10.11.12.68
        netmask 255.255.0.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
 
I have not changed the hosts files on the 4 other nodes yet.

That won't work.

You will need to do the following steps, in the order they are written here:
1) Destroy the cluster (i.e. remove it from ALL nodes), either by reinstalling or, better yet, by following the steps in the Proxmox wiki: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Remove_a_cluster_node
2) Apply the steps I mentioned above to EVERY node you want to join to your new cluster.
3) Create a new cluster.


AFAIK you cannot change the cluster IPs without much pain once you have already joined nodes to a cluster. You need to do that beforehand.
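The per-node tear-down from the wiki page above can be sketched as a dry-run plan that only prints the commands, so you can review them before touching a live node (PVE 4.x sequence; verify it against the wiki for your version and back up /etc/pve first):

```shell
# Dry run: echo the tear-down commands for one node instead of
# executing them. Run them by hand only after backing up /etc/pve.
plan_cluster_teardown() {
    echo "systemctl stop pve-cluster"
    echo "systemctl stop corosync"
    echo "pmxcfs -l"                   # restart the cluster FS in local mode
    echo "rm /etc/pve/corosync.conf"
    echo "rm -rf /etc/corosync/*"
    echo "killall pmxcfs"
    echo "systemctl start pve-cluster"
}
plan_cluster_teardown
```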
 
Hi, thanks again. So I have to remove all nodes from the cluster, change the hosts entries, and re-add all nodes to the cluster with their local IPs, right? Or do I also have to delete the cluster and create it anew on the same server?
 
Yes, you need to fully disintegrate / destroy / purge / remove the cluster on ALL nodes.
Then you change your hosts entries and check that you can ping the IPs.

Then you create a NEW Cluster on your nodes.
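After the tear-down, the re-creation boils down to two commands (the cluster name mycluster is made up; 10.11.12.60 is the private address of the first node from this thread):

```
# on the first node:
pvecm create mycluster

# on every other node, pointing at the first node's PRIVATE address:
pvecm add 10.11.12.60
```

Because the joins now go through the 10.x addresses, pvecm status should afterwards list the private IPs in the membership table.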
 
Hi, thanks for the reply.
Is this the only way, or is there another? It's a live system with many people on it.

regards
 