Proxmox on IBM with HyperSwap function

joros

New Member
Sep 10, 2025
Hello everyone,

I am planning a migration from VMware to Proxmox VE 9.0.x and would like to hear about the community's experience with a specific high-availability storage configuration.

The proposed infrastructure is based on two IBM FlashSystem 9110 arrays. These are configured in a Hyperswap topology to provide synchronous replication and active-active availability for volumes between two data centers.

Current Environment:

  • Hypervisor: VMware vSphere
  • Storage: Two IBM FS9110 arrays with HyperSwap.
  • Connectivity: 10 Gbps Fibre Channel over a dedicated fiber optic link between the data centers.
  • This setup is currently operational and stable.
Question:
Has anyone successfully deployed Proxmox VE in a similar IBM Hyperswap environment? I am specifically interested in:

  1. Compatibility: Are there any known incompatibilities between Proxmox's storage layer (LVM-Thin, ZFS, etc.) and the Hyperswap configuration?
  2. Best Practices: Any critical configuration suggestions for the Proxmox hosts (multipathing, OS settings) to ensure optimal stability and performance with this storage?
  3. Potential Pitfalls: Were there any unforeseen challenges or issues during implementation?
The goal is to achieve the same level of storage transparency and high availability that we currently have with VMware. Any insights, documentation pointers, or personal experiences would be greatly appreciated.

Thank you in advance for your help.
 
What a coincidence!

I connected our FlashSystem 5200 to Proxmox with multipath this week.
It is a HyperSwap cluster.
It is now set up as a shared LVM.
The performance is really bad, though; I think it just needs tuning.
The option "Disable the TCP delayed acknowledgment feature" must be set, but as far as I can tell that is not possible within Proxmox/iSCSI.
https://www.ibm.com/docs/en/flashsystem-5x00/9.1.0?topic=problem-iscsi-performance-analysis-tuning

I also connected two active clusters with Pure Storage, and those are really fast.
For now I have contacted IBM support for help, but I am afraid Proxmox is not supported yet.
Make sure you are on the latest 9.x version, because I had issues with disk migrations not cleaning up the old disk; that was fixed with 9.0.6 (or a version in between).

I have documented all my steps regarding the multipath setup. I can share this, but I need to filter out company info first; let me know if you are interested.
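For anyone who wants the short version: putting a shared LVM on top of a multipathed LUN generally comes down to something like the sketch below. The device alias, VG name and storage ID are only example names, not necessarily what I used.

Code:
# install and enable multipathing on every node
apt install multipath-tools
systemctl enable --now multipathd

# create the volume group on the multipath device (run on one node only)
pvcreate /dev/mapper/IBM-VOLUME01
vgcreate vg_ibm /dev/mapper/IBM-VOLUME01

# register it as a shared LVM storage; /etc/pve is clustered, so once is enough
pvesm add lvm ibm-hyperswap --vgname vg_ibm --content images,rootdir --shared 1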
 
IBM support is telling me to do the following, but I am hesitant to do it because it affects all TCP communication:
"
Proxmox runs on Debian-based Linux kernel so settings to change tcp_delack_min to 1 should disable delayed ack.
Something like below in /etc/sysctl.conf file
net.ipv4.tcp_delack_min = 1
"
 
Hi Barberos,

Thank you for sharing your real-world experience - this is incredibly valuable for the community!

I'd be very interested to see your documentation after you've had a chance to filter out any confidential information. Your hands-on experience with this setup is exactly what many of us need.

Regarding your performance issues with the FlashSystem 5200 HyperSwap cluster, you've identified a key bottleneck with the TCP delayed acknowledgment setting. This is a known optimization requirement for iSCSI, but as you noted, it can be challenging to configure properly in Proxmox's Linux environment.

Quick question for clarification: you mentioned HyperSwap is working, but which models are actually involved on your side, and is the performance issue affecting all of them or just the 5200?

I'm particularly interested in your experience because I'm currently managing two IBM FS9110 storage systems that are scheduled for replacement with FS7300 units. Your feedback on HyperSwap functionality (even with the performance challenges) is very relevant to my migration planning.

If you get any useful tuning recommendations from IBM support, please do share them. Even though Proxmox isn't officially supported by IBM, their insights on storage-side configuration might help.

Looking forward to seeing your multipath configuration details when you have time to share them.

Best regards,
 
They asked me to change the following file:
/etc/sysctl.conf
and add this setting:
net.ipv4.tcp_delack_min = 1

But this applies to the whole TCP stack, not only iSCSI, so I am not sure yet what I will do.
I have two IBM 5200 units in a HyperSwap.

I used this guide (because I also have Pure Storage): https://support.purestorage.com/bun...opics/t_connecting_proxmox_to_flasharray.html
But the steps are almost the same as for IBM.
The manual edits are below.

My multipath.conf, where I stripped out some stuff that is not relevant for you:

Code:
# ignore local and removable devices so multipath only manages the SAN LUNs
blacklist {
    devnode "^(sr|nvme|loop|fd|hd).*$"
}

defaults {
    polling_interval          10
}

# IBM Spectrum Virtualize based arrays (SVC/Storwize/FlashSystem) report product id "2145"
devices {
    device {
        vendor "IBM"
        product "2145"
        path_grouping_policy "group_by_prio"
        path_selector "service-time 0"
        prio "alua"
        path_checker "tur"
        failback "immediate"
        no_path_retry 5          # or no_path_retry "fail"
        retain_attached_hw_handler "yes"
        fast_io_fail_tmo 5
        rr_min_io 1000
        rr_min_io_rq 1
        rr_weight "uniform"
    }
}


multipaths {
    multipath {
        wwid  360050768128001changemechangeme
        alias IBM-VOLUME01
    }
}
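After editing the config I reload multipathd and check that the paths and ALUA priority groups come up as expected, roughly like this:

Code:
# reload the configuration and re-scan the maps
systemctl reload multipathd
multipath -r

# verify paths, priority groups and the alias
multipath -ll IBM-VOLUME01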
 
Hi Barberos,
Thanks a lot for sharing your experience and the configuration details.
I’m setting up a similar environment but using Fibre Channel instead of iSCSI.
As far as I understand, the net.ipv4.tcp_delack_min tuning shouldn’t apply to FC, but do you know if there is anything specific that needs to be adjusted for FC?
Also, please let us know how your performance tests go after you decide on the tuning; your feedback is really helpful.
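For what it's worth, my plan on the FC side is simply to sanity-check the HBA fabric logins and the ALUA path grouping on each Proxmox host, with something like this:

Code:
# confirm the FC HBAs are logged in to the fabric and at the expected speed
cat /sys/class/fc_host/host*/port_state
cat /sys/class/fc_host/host*/speed

# check that multipath groups the paths by ALUA priority
multipath -ll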
 
I am going to be honest with you here: I am not a big fan of the IBM storage, so I do not want to change something that could impact my Pure Storage.
So I am thinking of a separate cluster for the IBM; it is "only" 50 TB of NVMe.
I am also not sure if the net.ipv4.tcp_delack_min change would impact corosync.

But I will keep you up to speed on what I am doing.
I am currently still testing iSCSI + LVM.
I have also created a Windows machine that hosts an SMB share for my ISO files.
Pure can do NFS or SMB, but it needs to be enabled and impacts performance too much.
I am not sure if the IBM 5200 can do SMB or NFS; I haven't seen the option.

But with a VM sharing SMB on Pure, combined with clustering, I am always online.
Just keep in mind to detach the ISO: when I shut down the VM that hosts the ISO share, any VM that is using an ISO from that share freezes.
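In case it helps anyone, hooking such an SMB share into Proxmox as an ISO store is a single pvesm call; the server address, share name and credentials below are placeholders:

Code:
# add the Windows SMB share as CIFS storage for ISO images (applies cluster-wide)
pvesm add cifs iso-share --server 192.0.2.10 --share isos --content iso --username isouser --password 'secret'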
 
I totally get your point, I’m not a big fan of IBM either.
Another option I’ve been considering is actually replacing the IBM systems, since unfortunately they are still very limited: they only officially support hypervisors such as Red Hat Virtualization (RHEL-based), SUSE, and VMware.
They don’t yet support Ubuntu, which would allow us to deploy OpenStack, and they also don’t support Debian for Proxmox.


Let me know how your tests go; I’m also evaluating a possible replacement, maybe moving those IBM arrays to Pure Storage or even Hitachi.