Correct iSCSI Setup on Cluster

Extcee

Hi All,

Just hoping someone can help me fine-tune my Proxmox setup with my iSCSI SAN. I have searched around but can't seem to find much about this.

I have a 3-node cluster set up with the following IPs:

root@proxnode1:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet static
    address 10.10.142.111
    netmask 255.255.255.0
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 10.10.1.51
    netmask 255.255.255.0
    gateway 10.10.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0


root@proxnode2:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet static
    address 10.10.143.112
    netmask 255.255.255.0
    mtu 9000
    #gateway 10.10.142.1

auto vmbr0
iface vmbr0 inet static
    address 10.10.1.52
    netmask 255.255.255.0
    gateway 10.10.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
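
As an aside: with eth1 at mtu 9000 I want to be sure jumbo frames survive end-to-end to the SAN, so a do-not-fragment ping with an 8972-byte payload (plus 28 bytes of IP/ICMP headers = 9000) is a quick check:

root@proxnode1:~# ping -M do -s 8972 10.10.142.101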



Node3 is not really relevant here, as it just sits around to provide quorum.

All nodes are in a cluster and can see the iSCSI storage (on 10.10.142.101):

From /etc/pve/storage.cfg:

iscsi: tgh-md3200-01
    target iqn.1984-05.com.dell:powervault.md3200i.6782bcb00024b8810000000051024745
    portal 10.10.142.101
    content none
    nodes proxnode3,proxnode2,proxnode1

I initially had both iSCSI IPs on the same LAN (10.10.142.111 and 10.10.142.112 respectively), but I was getting slow performance. I think this is because all traffic goes to the IP address listed above under the iSCSI portal. I changed Node2 to address 10.10.143.112 to test iSCSI performance over a separate LAN.

Is there a way I can *safely* add another IP into storage.cfg so it can use multiple IPs?
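
From what I can tell, the portal line is only used for discovery: a sendtargets query against it should list every portal the array advertises, something like this (illustrative, output trimmed; the exact list depends on which controller interfaces are enabled):

root@proxnode1:~# iscsiadm -m discovery -t sendtargets -p 10.10.142.101
10.10.142.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb00024b8810000000051024745
10.10.142.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb00024b8810000000051024745
...

But I'm not sure whether the initiator then logs in on all of them, or whether Proxmox only ever uses the one portal.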

On my Dell SAN I have the following IPs assigned (one per LAN on each controller):
10.10.140.101
10.10.140.102
10.10.141.101
10.10.141.102
10.10.142.101
10.10.142.102
10.10.143.101
10.10.143.102

Ideally, if possible (please correct me if I am wrong!), I would like node1 to use 142.101/102 and node2 to use 143.101/102, but obviously still be able to migrate VMs between the nodes.
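
To illustrate (hypothetical storage IDs), per-node entries in storage.cfg would look like this — though I suspect that with a different storage name on each node, live migration would break, since the VM disks reference the storage name:

iscsi: tgh-md3200-01-n1
    target iqn.1984-05.com.dell:powervault.md3200i.6782bcb00024b8810000000051024745
    portal 10.10.142.101
    content none
    nodes proxnode1

iscsi: tgh-md3200-01-n2
    target iqn.1984-05.com.dell:powervault.md3200i.6782bcb00024b8810000000051024745
    portal 10.10.143.101
    content none
    nodes proxnode2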

I hope this makes sense.

If I have missed any info, please let me know.

Thanks in advance.
 
Hi,

Ideally, each node should have as many NICs as your SAN has on a given controller: four NICs.
Node 1 :
10.10.140.111
10.10.141.111
10.10.142.111
10.10.143.111

Node 2 :
10.10.140.122
10.10.141.122
10.10.142.122
10.10.143.122

And so on.

Then, you need to set up multipathing (see wiki).
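
The device section in /etc/multipath.conf for these arrays looks roughly like this (a sketch based on Dell's recommendations for the MD32xxi family; take the exact values from the wiki or the Dell documentation):

devices {
    device {
        vendor               "DELL"
        product              "MD32xxi"
        hardware_handler     "1 rdac"
        path_checker         rdac
        prio                 rdac
        path_grouping_policy group_by_prio
        failback             immediate
        features             "2 pg_init_retries 50"
        no_path_retry        30
    }
}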

Christophe.
 
Hi Christophe,

I have multipath enabled:
36782bcb00024b88100004b2751edefb6 dm-4 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 10:0:0:61 sdl 8:176 active ghost running
| |- 7:0:0:61 sdj 8:144 active ready running
| `- 5:0:0:61 sdp 8:240 active ghost running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 8:0:0:61 sdn 8:208 active ready running
  |- 9:0:0:61 sdd 8:48 active ghost running
  `- 6:0:0:61 sdg 8:96 active ghost running
36782bcb00024b4c800003b074059617f dm-3 DELL,MD32xxi
size=3.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=2 status=active
| |- 9:0:0:51 sdb 8:16 active ready running
| |- 6:0:0:51 sdc 8:32 active ready running
| `- 8:0:0:51 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 7:0:0:51 sdf 8:80 active ghost running
  |- 5:0:0:51 sdk 8:160 active ghost running
  `- 10:0:0:51 sde 8:64 active ghost running
36782bcb00024b4c80000386f404ffc35 dm-5 DELL,MD32xxi
size=500G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=2 status=active
| |- 9:0:0:62 sdi 8:128 active ready running
| |- 6:0:0:62 sdm 8:192 active ready running
| `- 8:0:0:62 sdr 65:16 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 10:0:0:62 sdq 65:0 active ghost running
  |- 5:0:0:62 sds 65:32 active ghost running
  `- 7:0:0:62 sdo 8:224 active ghost running

However, I only have two NICs in each server: one for "normal" traffic and one for SAN traffic.
I only intend to use the 10.10.142.*/143.* subnets for my Proxmox nodes, as the others will be utilised for other iSCSI-attached devices.
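
If I wanted each node to see both SAN subnets despite the single NIC, I suppose I could add an alias interface; an untested sketch for node1 (the alias shares eth1's MTU):

auto eth1:0
iface eth1:0 inet static
    address 10.10.143.111
    netmask 255.255.255.0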
 
OK.

Last night I changed Node2 to use 10.10.142.112 for its SAN IP.

It can connect to and see the storage array (without having to route anymore), so theoretically I should get the best performance here.

On my Node3 I added a new iSCSI target and got it to use *almost* the same details. This then showed in storage.cfg that it was again using IP 10.10.142.101 as its "portal".

I added another IP in here and the storage device disappeared, so it seems I can't add multiple IPs.

However, I then re-edited this and used a DNS name (which it seemed to resolve properly) and my resource came back online. Now I'm curious whether this is using 10.10.142.101 or 10.10.142.102, as both of them resolve against the same DNS record.
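
Checking the active session should tell me, since iscsiadm reports the portal it actually logged into (illustrative output):

root@proxnode3:~# iscsiadm -m session
tcp: [1] 10.10.142.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb00024b8810000000051024745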

Or is all of this actually pointless, as the secondary controller (10.10.142.102 in this instance) stays dormant until controller 1 dies?
 
Dell MD32xxi SANs are active/passive: a LUN is available on one and only one controller at any given time.

You can manually move a LUN to the other controller in the control panel of your Dell.

Christophe.
 
Christophe,

Thank you. I think I understand it more clearly now.

Appreciate your time.
 
