Cluster with two computers having two network cards each

tstav

Jun 21, 2010
I have two computers with Proxmox 1.5.
One network card is bound to vmbr0 and has a public IP.
The other is bound to eth1 and has a private address.
The two computers are connected with a cable on the private IP interfaces.
I created the master with pveca -c.
But when I try to add the slave with pveca -a -h 192.168.168.20, which is the private IP of the master, I get the message "host '192.168.168.20' is not cluster master".
How can I do this so I do not pay the cost of the "synchronization bandwidth" on the public interface?
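For reference, a minimal sketch of the Proxmox VE 1.x commands involved here (the IP is the master's address from this post; pveca -l is a quick way to verify the result):

Code:
# on the master node
pveca -c                      # create the cluster (this node becomes master)
pveca -l                      # list the cluster nodes to verify

# on the node you want to add
pveca -a -h 192.168.168.20    # join, pointing at the master's address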

Tassos
 
Hello,

So we cannot create a cluster using eth1, for example? I want vmbr0 for internet access, but the cluster/management traffic to go over eth1.

Isn't this possible?
 
I had the same problem. I followed all the steps in the wiki http://pve.proxmox.com/wiki/Proxmox_VE_Cluster and the advice from Dietmar.

The eth0 interfaces are connected to the public network and the eth1 interfaces are connected to each other through a crossover cable:

eth0 on network 192.168.1.0
eth1 on network 10.0.7.0


The system ends up configured to synchronize through eth0, so at the end, on both servers, change the IP in these files,

replacing the IP on the 192 network with the corresponding IP on the 10 network (ip-red192/ip-red10 are placeholders):
sed -i s:ip-red192:ip-red10:g

/etc/pve/cluster.cfg
/root/.ssh/authorized_keys
/root/.ssh/known_hosts
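As a concrete illustration (a sketch only; the 192.168.1.201/.202 and 10.0.7.105/.106 addresses are the ones from the configs below, so adjust them to your own setup):

Code:
# run on both servers after the cluster is created
sed -i 's:192.168.1.201:10.0.7.105:g' /etc/pve/cluster.cfg /root/.ssh/authorized_keys /root/.ssh/known_hosts
sed -i 's:192.168.1.202:10.0.7.106:g' /etc/pve/cluster.cfg /root/.ssh/authorized_keys /root/.ssh/known_hosts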


server1
Code:
vm2:~# cat /etc/hosts
127.0.0.1       localhost
192.168.1.201 vm2.iuoglocal vm2 pvelocalhost
10.0.7.105      vm2



cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  10.0.7.105
        netmask  255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.201
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

server2
Code:
vm2:~# cat /etc/hosts
127.0.0.1 localhost
192.168.1.202 vm2.iuglocal vm2 pvelocalhost
10.0.7.105 vm1



cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  10.0.7.106
        netmask  255.255.240.0

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.202
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

 
So we cannot create a cluster using eth1, for example? I want vmbr0 for internet access, but the cluster/management traffic to go over eth1.

No. But why don't you use eth0 for cluster management and eth1/vmbr0 for internet access?
 
Hello,

Maybe I didn't explain it well. I will access Proxmox/the VMs with vmbr0/eth0, but I want cluster communication to use eth1. At the moment I have a switch where the eth1 ports of the servers connect to make backups, so it would be much faster than using eth0 (which carries the internet traffic). The idea is to separate them and do live migrations over the eth1/backend network.

Is it possible?

Thanks
 
Maybe I didn't explain it well. I will access Proxmox/the VMs with vmbr0/eth0, but I want cluster communication to use eth1.

eth0 and eth1 are just names - so why don't you simply replace eth0 with eth1 and eth1 with eth0? It is functionally equivalent - or what is the difference?
 
eth0 and eth1 are just names - so why don't you simply replace eth0 with eth1 and eth1 with eth0? It is functionally equivalent - or what is the difference?

Just edit /etc/udev/rules.d/70-persistent-net.rules to change which name is assigned to which physical interface.
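For illustration, a sketch of what the swapped assignment could look like (the MAC addresses are placeholders; use the ones shown by ifconfig -a on your hardware, and reboot so the new names take effect):

Code:
# /etc/udev/rules.d/70-persistent-net.rules
# NIC that used to be eth0 is now named eth1
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
# NIC that used to be eth1 is now named eth0
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"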
 
This is all new to me. Since the servers have two NICs, how can I force them to use one for cluster communication (on another VLAN) and one to access the VMs/Proxmox?

It's probably easy, but I'm not getting it :S
 
I will have VMs of about 300 GB and may need to do live migrations. At the moment my servers have eth0 for internet access and eth1 to a backend network (another switch, etc.). If I do live migrations using the same NIC the VMs use for internet access, it will take ages, no?

Maybe I'm quite confused here. I just need to ensure that live migrations complete quickly.

Please advise, thanks.
 
Hello,

I configured a SAN with Open-E and then created a volume group. However, Proxmox only detects the LUN if the volume is created as block-IO, not as file-IO, even though Open-E recommends creating it as file-IO. Also, when I create it as block-IO it uses the whole space of the drive and I cannot specify the size of the VM.

What is the best option? Also, does anyone use Open-E and can help me with this?

Thanks
 
Just to be sure I do this the best way, I will explain my config:

I have two servers with this config:
Code:
eth0         (1 Gb)    linked to net 192.168.123.xxx
eth1-vmbr0   (100 Mb)  linked to net 192.168.255.xxx

OpenFiler Server is on net 192.168.123.xxx (1 Gb)
When I try to add a node using the 192.168.123.xxx IP of the master, it says it cannot find the server. (As you said, we must use the vmbr0 address.)

Regarding your post where you ask about 'cluster communication':
Originally Posted by luispt
how can I force them to use one for cluster communication (on another VLAN) and one to access the VMs/Proxmox?



What exactly do you mean by 'cluster communication'?
I was also thinking that the servers will have some kind of 'traffic' between them to synchronize and, of course, to 'migrate' a machine to the other server.

Can you estimate this 'traffic'?

Or is it simply better to have this 'traffic' on the administration network rather than on the shared storage network?

Also, I'm keeping in mind the upcoming 2.0 (HA) release and which network to use for which purpose.

Best Regards

Vicente
 
I had a similar problem too. I solved it using some kind of magic with manual voodoo on the configuration files. But it was a long time ago, so unfortunately I don't remember how I did it. However, I believe it is possible.
 
I solved this by binding vmbr0 to eth1 and creating a separate vmbr1 which is bound to eth0... The only caveat is having to remember to change the interface when creating new virtual machines.
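For anyone following along, a minimal sketch of that layout in /etc/network/interfaces, reusing placeholder addresses from earlier in the thread (gateway and netmasks are examples; the posts above also point pvelocalhost in /etc/hosts at the cluster address):

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

# vmbr0 on the backend NIC (eth1): the address used for cluster
# communication and live migrations
auto vmbr0
iface vmbr0 inet static
        address  10.0.7.105
        netmask  255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

# vmbr1 on the public NIC (eth0): select this bridge when creating VMs
auto vmbr1
iface vmbr1 inet static
        address  192.168.1.201
        netmask  255.255.255.0
        gateway  192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0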