Proxmox-ve and Dell EqualLogic storage (iSCSI)

xPakrikx

Hi

Do you have any idea how Proxmox VE can be connected to a Dell EqualLogic storage array (PS4210, iSCSI)?
Currently we are using the Group IP for the connection to the iSCSI targets, without any additional configuration like multipath. But I don't think this is a proper setup, because of performance and also redundancy: we are effectively using only one iSCSI target.

For proper multipath do I need to use some special software from Dell (HIT?)? That seems to be supported only on RHEL.

Can I use the eth0 and eth1 IPs with multipath on the Proxmox side?

Many thanks for any ideas.

Topology:

Server01-07 (2x10Gbit LACP bond) <-------> 2x Dell MXL switch stack <--------> Dell EQL PS4210

iSCSI traffic is in a VLAN.

With only one active port on the EQL, performance looks OK, but with both ports active there is some performance degradation.
 
Does your EQL have a dual connection to your switches from each controller?
If it does, is it also an LACP bond? Is that where the Group IP sits?
If it is, then you have redundancy; not the best from a performance perspective, but redundancy.
 
LACP is only on the server-to-switch side. On the switch-to-storage side the array is connected with 4x 10Gbit cables (only two ports are active) and there is no LACP on the storage side; I don't think EQL devices support LACP at all. From my understanding there is one Group IP (something like a virtual IP) for the two active ports on the controllers (ideally one active port on one controller and the second active port on the other controller). According to the Dell documentation, my setup looks OK.

But what I don't know is whether my VMs go down or freeze when one controller goes down or an interface changes on the storage side. I assume there is some downtime while the target is re-initialized. I think multipathd could mitigate that downtime, but I don't use it on the server side because I use LACP.

It looks like the EQL works more like VRRP plus load balancing? So when a connection goes down there is a complete outage for a very short time until re-initialization. But I think I need to test that.

So the real question is how to fix the I/O delays and optimize iSCSI to mitigate downtime.
I found some optimizations in this post, so I will try to test them: https://forum.proxmox.com/threads/s...-problems-with-dell-equallogic-storage.43018/ (thanks to DANILO MONTAGNA)
 
Hello,
The EQL array does not support bonding, LACP, trunking, etc. It relies on MPIO on the host side. The EQL prefers a single IP subnet. Any iSCSI port on the EQL will be used as needed, and periodically an iSCSI session may be rebalanced to another port.
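For reference, if you do run multipathd on the host, a minimal /etc/multipath.conf device section for a PS Series array might look roughly like the sketch below. The vendor/product strings and tuning values here are assumptions on my part; confirm them with "multipath -ll" and Dell's configuration guide for your firmware before using them.

# /etc/multipath.conf -- minimal sketch, not verified on a PS4210
defaults {
    user_friendly_names yes
}
devices {
    device {
        # vendor/product strings are assumptions; check "multipath -ll"
        vendor               "EQLOGIC"
        product              "100E-00"
        path_grouping_policy multibus
        path_selector        "round-robin 0"
        path_checker         tur
        failback             immediate
        no_path_retry        16
    }
}

After editing, restart multipathd and verify the paths with "multipath -ll".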

With Linux, using the open-iSCSI initiator, you can specify egress ports for iSCSI. I have NOT tried this on Proxmox.
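A rough sketch of what the egress-port (iface) binding looks like with open-iscsi; the iface names and the group IP below are placeholders, and again I have not verified this on Proxmox:

# one iSCSI interface definition per NIC (the iface names are arbitrary)
iscsiadm -m iface -I ieth0 --op=new
iscsiadm -m iface -I ieth0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I ieth1 --op=new
iscsiadm -m iface -I ieth1 --op=update -n iface.net_ifacename -v eth1
# discover the EQL group IP through both interfaces, then log in
iscsiadm -m discovery -t sendtargets -p 10.10.10.10 -I ieth0 -I ieth1
iscsiadm -m node --login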

This PDF is for Red Hat. Some file locations will be different on Proxmox, but the principles are the same.

https://downloads.dell.com/solution...es/(3199-CD-L)RHEL-PSseries-Configuration.pdf

I would try this on a NON-production server. There are some other network settings in the PDF as well that are needed to make it work; I am not sure whether those will also affect Proxmox. Backing up all the files first is strongly suggested.

Also, if you are running two switches, there needs to be a LAG between them for all the ports on the EQL and any servers you connect. The array expects that any port on the array can reach any port on any server. Make sure the inter-switch link (ISL) is more than 50% of the total bandwidth of the array(s); for example, with 2x 10Gbit active ports on the array, the ISL should be more than 10Gbit.

Good luck! Please update if you attempt this.

I use EQL with VMware, Citrix XenServer and MS Hyper-V. I have connected Proxmox as well, but there were a lot of manual steps to get it accessible.
I also use QEMU/KVM with Debian/Ubuntu and EQL as well.

Regards,
Don
 
Hello, you might be thinking of a different Dell SAN. The majority of iSCSI SANs use IPs on different IP subnets (e.g. the ME/MD/SC series).

In Linux, if you put two NICs on the same IP subnet, only one will be active. You may have two sessions, but they will use the same physical NIC.

Kernel settings

The sysctl command is an interface for examining and dynamically changing kernel parameters in Linux. The interface mechanism is exported to /proc/sys. The kernel has default settings that can be changed to optimize the system as needed. The user-defined kernel settings are stored in the /etc/sysctl.conf file exclusively for RHEL 6, and can be split into separate files under the /etc/sysctl.d directory for RHEL 7.

1. When there are multiple iSCSI connections to a PS Series array, alter the Linux ARP behavior to prevent ARP resets (RST) from the initiator to the target, which would not allow more than one interface to serve traffic.

Add the following information to the /etc/sysctl.conf file:

net.ipv4.conf.em1.arp_ignore = 1
net.ipv4.conf.em1.arp_announce = 2
net.ipv4.conf.em1.rp_filter = 2
net.ipv4.conf.em2.arp_ignore = 1
net.ipv4.conf.em2.arp_announce = 2
net.ipv4.conf.em2.rp_filter = 2

2. In a RHEL 6 system, add the following to the /etc/sysctl.conf file as well. In a RHEL 7 system, create a file called /etc/sysctl.d/equallogic.conf and add the same content.

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.wmem_default = 262144
net.core.rmem_default = 262144
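Once those files are in place, the settings can be loaded without a reboot; a quick sketch (sysctl --system also reads the drop-in files under /etc/sysctl.d on newer systems):

# load /etc/sysctl.conf (RHEL 6 style)
sysctl -p /etc/sysctl.conf
# or load everything, including /etc/sysctl.d/*.conf (RHEL 7 style)
sysctl --system
# verify that a value took effect
sysctl net.ipv4.conf.em1.arp_ignore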
 
In Linux, if you put two NICs on the same IP subnet, only one will be active. You may have two sessions, but they will use the same physical NIC.

This statement may be correct, but only if the system is not configured properly. While it is much more straightforward to use different subnets, it is fine to put both NICs on the same subnet on the Linux side. I suspect this is the case for Dell as well, but with proprietary systems you never know how badly they mess up the networking.
We have a few customers using multipath via two or more NICs configured on the same subnet. The serverfault answer here provides a proper way to deal with it: https://serverfault.com/questions/524054/simple-multihomed-linux-server-issue
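For anyone who wants the gist without reading the whole answer: the idea is one routing table per NIC plus source-based rules, so replies always leave through the NIC that owns the source address. A minimal sketch, assuming eth0 = 10.10.10.11 and eth1 = 10.10.10.12 on the same /24 (the addresses and table names are made up):

# register two extra routing tables (names are arbitrary)
echo "100 iscsi0" >> /etc/iproute2/rt_tables
echo "101 iscsi1" >> /etc/iproute2/rt_tables
# per-NIC route plus a source-based rule pointing at it
ip route add 10.10.10.0/24 dev eth0 src 10.10.10.11 table iscsi0
ip rule add from 10.10.10.11 table iscsi0
ip route add 10.10.10.0/24 dev eth1 src 10.10.10.12 table iscsi1
ip rule add from 10.10.10.12 table iscsi1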


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello,
Correct, you can do that if you configure the host correctly. Though the serverfault solution was a bit messy, I thought, and that was an old Linux OS. With open-iSCSI support for egress ports you don't need that routing, but you do (as also shown in that link) need the sysctl.conf changes.

In /etc/sysctl.conf:

net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2

I haven't tried it with Proxmox. The integration of iSCSI targets from the EQL required edits at the CLI, which is a little disappointing; everything else I tried worked really well. That was with v7.1. I use VMware/XenServer/Hyper-V at work and XenServer at home.
 
I should clarify: I haven't tried MPIO with Proxmox and EQL iSCSI SANs. I did a single NIC without issue.
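For the single-NIC case, the basic attach on the Proxmox side can be done from the GUI or with pvesm; a sketch with made-up portal and IQN values:

# made-up group IP and target IQN -- substitute your own
pvesm add iscsi eql-ps4210 --portal 10.10.10.10 --target iqn.2001-05.com.equallogic:example-volume
# equivalent entry in /etc/pve/storage.cfg:
# iscsi: eql-ps4210
#         portal 10.10.10.10
#         target iqn.2001-05.com.equallogic:example-volume
#         content none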