Cluster iSCSI connection

cyberbootje

Member
Nov 29, 2009
Hi,

The Proxmox cluster is working great in combination with a SAN over iSCSI.
Now, the SAN has redundant 4 x 1 Gbit NICs (8 NICs in total).
Every NIC has a separate internal IP address.

Setting up a cluster is fine, except that it automatically copies the cluster config files, which basically tell all the nodes where the disks are stored.
That is OK, but I would like to use one NIC on the SAN per node.

Is it possible to make a cluster and configure each node to connect to a different IP, and thus a different NIC on the SAN?
 
Hi,
two ideas:
1. Use a hostname for the SAN instead of the IP, and define a different IP for that name in /etc/hosts on each pve-node.

2. Use an IP of the SAN which isn't in the same network as the other three nodes, and define a different route to this IP on each pve-node. Like this:
SAN:
1. nic: 10.10.1.100/24
2. nic: 10.10.2.100/24
3. nic: 10.10.3.100/24
4. nic: 10.10.4.100/24

Storage at 10.10.1.100

pve-node1 storage adapter: 10.10.1.20
pve-node2 storage adapter: 10.10.2.20 + "ip route add 10.10.1.100/32 via 10.10.2.100"
pve-node3 storage adapter: 10.10.3.20 + "ip route add 10.10.1.100/32 via 10.10.3.100"
pve-node4 storage adapter: 10.10.4.20 + "ip route add 10.10.1.100/32 via 10.10.4.100"
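Both ideas can be sketched as shell snippets. Note that the hostname `san-storage` is a made-up example name, not from the original setup; the IPs follow the layout above. This is system configuration, so run it as root and adapt it to your own network:

```shell
# Idea 1: every node uses the same SAN hostname in the cluster-wide
# storage config, but /etc/hosts maps it to a different SAN NIC per node.
# ("san-storage" is an illustrative name.)

# on pve-node1:
echo "10.10.1.100  san-storage" >> /etc/hosts
# on pve-node2:
echo "10.10.2.100  san-storage" >> /etc/hosts
# ...and so on for the remaining nodes.

# Idea 2: all nodes keep the same SAN IP (10.10.1.100) in the config,
# but each node reaches it through its own SAN NIC via a host route.
# on pve-node2 (storage adapter 10.10.2.20):
ip route add 10.10.1.100/32 via 10.10.2.100
```

With idea 2, remember that `ip route add` is not persistent across reboots; the route also has to go into the node's network configuration (e.g. /etc/network/interfaces).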

Perhaps it depends on the SAN, but it should work.

Udo
 
Option 1 is the best.

The cluster is now based on one IP, so the best way to change it to a hostname is to edit the config file on the master, I think?
Now, is there a way to change this on a working and active cluster without having trouble with VMs that won't start?
VMs have to be shut down to prevent corrupted data, I think?
 
Hi,
you should stop all VMs, deactivate the volume group (vgchange -a n vgname), perhaps remove the device-mapper entry of the SAN (dmsetup info/remove), and then check whether the changes work. At the end you should perhaps also reboot the nodes (to be sure everything still works after a reboot).
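That sequence might look like the sketch below. The volume group name `san_vg` is a placeholder (list your real one with `vgs`); run this only with all VMs on that storage stopped:

```shell
# 1. Stop all VMs that use the SAN storage (web UI or qm stop <vmid>).

# 2. Deactivate the volume group that lives on the iSCSI LUN.
#    "san_vg" is a placeholder name.
vgchange -a n san_vg

# 3. Inspect device-mapper entries and remove any stale ones
#    belonging to the SAN (remove only the SAN's entries!).
dmsetup info
dmsetup remove <device-name>

# 4. Apply the storage change (hostname or route), then re-activate:
vgchange -a y san_vg

# 5. Finally, reboot each node to confirm everything comes back cleanly.
```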

Udo
 
I don't think I am willing to take these risks.
I will go for a safer solution: making a new, separate cluster and moving the VMs one by one.

Or maybe delete the cluster and test it with one node, to see how it goes without any important VMs on it.
 
