proxmox clustering bound to only one NIC?

holgerb

Member
Aug 3, 2009
Hi there,

I have a question/problem with regard to having more than one NIC in a Proxmox server.

Ok, here is our infrastructure:
We have two 2x quad-core servers connected as a Proxmox cluster. Each server has a 100 MBit connection into our house network (in other words, the "external" interface). In addition, each server has a 4-port 1 GBit NIC, so we decided to use those ports as an "internal" network between the two servers. To get more performance we bundled the GBit ports by creating a bond device on each server. Each bond device has been assigned a static IP address from a network segment completely different from the "external" one.
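For reference, a bond like that can be set up in Debian's /etc/network/interfaces roughly as follows. This is only a sketch: the interface names, the 10.0.0.x address, and the bonding mode are examples, not our actual config.

```
# /etc/network/interfaces (sketch; names, address, and mode are examples)
auto bond0
iface bond0 inet static
        address 10.0.0.1          # example "internal" address
        netmask 255.255.255.0
        slaves eth1 eth2 eth3 eth4
        bond_mode balance-rr      # assumption; use a mode your switch supports
        bond_miimon 100           # link monitoring interval in ms
```

Note that some bonding modes (e.g. 802.3ad) need matching configuration on the switch.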

In order to get proxmox to communicate across the internal (faster) network connection, I removed the node from the master node (it was first bound via the external IP) and re-added it via the internal interface (the other IP).
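From memory, the node shuffle looked roughly like the following. This is a sketch of the PVE 1.x pveca tool; the delete syntax and the CID placeholder are from memory and may need checking, and x.x.199.155 stands in for the master's internal address.

```
# on the node being moved (sketch; check pveca's help for exact syntax)
pveca -d <CID>              # remove the node from the cluster (CID = cluster node id)
pveca -a -h x.x.199.155     # re-add it, pointing at the master's internal IP
```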

This is where the "fun" starts: proxmox reports that the master node, accessed via the internal IP, is not a master:

Code:
proxmox-epr003:~# pveca -a -h x.x.199.155
The authenticity of host 'x.x.199.155 (x.x.199.155)' can't be established.
RSA key fingerprint is bc:6f:4e:02:50:5d:1c:1b:ab:40:68:8f:1a:d7:4a:75.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'x.x.199.155' (RSA) to the list of known hosts.
host 'x.x.199.155' is not cluster master

In the proxmox web GUI I can see the other server as a node and see all the VMs running on it, but it also permanently reports "nosync".

So here is my question:
Is proxmox clustering bound to one NIC? Or does a master server also act as "master" if you access different nodes via more than one NIC?

TIA,
Holger
 
Proxmox uses either eth0 or vmbr0 for cluster communication.

But I do not understand why you need more performance for cluster communication; that is very low traffic.
 
But I do not understand why you need more performance for cluster communication; that is very low traffic.
Hm, but the same network connection is used for migration of VMs, isn't it?

So a 1 GBit network makes that faster than 100 MBit, right?
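Back of the envelope, yes. Ignoring protocol and disk overhead, and assuming a hypothetical 10 GB image (the actual image sizes are not stated in the thread), the raw transfer time differs by an order of magnitude:

```shell
# Rough transfer-time estimate, ignoring protocol/disk overhead.
# 10 GB = 10 * 8 * 1000 Mbit = 80000 Mbit of data to move.
size_gb=10
echo "100 Mbit/s:  $(( size_gb * 8 * 1000 / 100 )) s"    # 800 s, roughly 13 min
echo "1000 Mbit/s: $(( size_gb * 8 * 1000 / 1000 )) s"   # 80 s
```

In practice rsync and disk I/O will keep you well below line rate, so treat these as upper bounds on the savings.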

In other words:
Can I influence the network used for rsyncing the VM image to another proxmox server, other than by connecting the other server to the cluster via the corresponding network adapter?
 
Sure, but do you really want to copy at maximum speed (which leads to very high I/O load on the host, rendering all the other VMs quite unusable)? If you want fast migration you should use shared storage (next release).
 
Ok, but how ? ;)

but do you really want to copy at maximum speed (which leads to very high I/O load on the host, rendering all the other VMs quite unusable)?
Yes, for the moment I would prefer fast migration with performance suffering for the other VMs :D

BTW: This migration feature was one of the main reasons for us to move away from XenServer to proxmox.

If you want fast migration you should use shared storage (next release).
Shared storage is not an option for us, since our "NAS" is connected to three other VM servers in a different room. OK, agreed, we could use an NFS share published on one of our servers.
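A minimal NFS export of that kind might look like this. The path, network range, and options below are made up for illustration and would need adapting:

```
# On the server publishing the share: /etc/exports (example path and network)
/var/lib/vz/images  10.0.0.0/24(rw,no_root_squash,sync,no_subtree_check)

# Then reload the export table and mount it on the other node:
#   exportfs -ra
#   mount -t nfs 10.0.0.1:/var/lib/vz/images /mnt/pve-images
```

Note that no_root_squash is needed if the hypervisor accesses images as root, but it does weaken the security of the export.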
 
Use bond0 on vmbr0 - does that work?
Oops, I think I forgot an important detail:
One of the cluster members is connected via eth0 (our house network), while the other cluster member is connected via eth1 & eth2 running as bond0. I think this is causing the problem. :o
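If the cluster traffic has to ride on vmbr0 anyway, one way to make both nodes symmetric is to bridge the bond into vmbr0. A sketch of the relevant /etc/network/interfaces section follows; the 10.0.0.x address and the bonding mode are examples, not a tested config:

```
auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_mode balance-rr      # assumption; match whatever the bond uses now
        bond_miimon 100

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.2          # example internal address
        netmask 255.255.255.0
        bridge_ports bond0        # the bond, not a raw NIC, feeds the bridge
        bridge_stp off
        bridge_fd 0
```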

Is there a "trick" I can use to get all members of the cluster back "in sync"?

Migration of VMs between all cluster members seems to work fine though.
 