Slow read performance on iSCSI SAN

Testani

Member
Oct 22, 2022
Hi all,
we are going to build a production 3-node cluster using a 10Gb iSCSI SAN (Lenovo DE series).

We are at the beginning and have configured the first node with two 10Gb network cards and a single LUN on the SAN. I attach the multipath configuration, the active paths, and the performance I obtain. Is anyone in my situation, or can anyone give me some advice? All MTUs are set to 1500; changing to 9000 has no effect. Thank you
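On the MTU question, a quick sanity check is a don't-fragment ping sized for the frame, which confirms whether jumbo frames really pass the whole path (NIC, switch ports, SAN controller). The portal address below is a placeholder:

# 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header); -M do sets the DF bit
ping -M do -s 8972 -c 3 <san-portal-ip>

# same check for the standard 1500 MTU (1472 = 1500 - 20 - 8)
ping -M do -s 1472 -c 3 <san-portal-ip>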

Here is my multipath.conf:


blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "36d039ea000bc680e000000ff66fd6ed2"
}

multipaths {
    multipath {
        wwid "36d039ea000bc680e000000ff66fd6ed2"
        alias mpath0
    }
}

defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}

devices {
    device {
        vendor "LENOVO"
        product "DE_Series"
        product_blacklist "Universal Xport"
        path_grouping_policy "group_by_prio"
        path_checker "rdac"
        features "2 pg_init_retries 50"
        hardware_handler "1 rdac"
        prio "rdac"
        failback immediate
        rr_weight "uniform"
        no_path_retry 30
        retain_attached_hw_handler yes
        detect_prio yes
    }
}
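With this config loaded, both paths should show up active before any benchmarking; two standard checks (nothing assumed here beyond the mpath0 alias above):

# list the assembled map, its path groups and per-path state
multipath -ll

# show the logged-in iSCSI sessions, including the interface each uses
iscsiadm -m session -P 3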
 

Attachments

  • Screenshot 2024-10-18 alle 11.30.16.png (52.3 KB)
  • Screenshot 2024-10-18 alle 11.36.16.png (258.2 KB)
Hi @Testani, performance troubleshooting and optimization is always multifaceted. There usually isn't one knob that "fixes" bad performance.

The first step is always to establish a baseline directly on the hypervisor. Then you move on to the VM.
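As a rough sketch of such a baseline (assuming the mpath0 alias from the config above; block size and queue depth are arbitrary starting points), a read-only fio run against the multipath device takes the VM layer out of the picture:

# sequential read baseline straight against the multipath device
fio --name=seqread --filename=/dev/mapper/mpath0 --readonly \
    --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based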

We've written a number of performance-oriented articles; you should review them and see what may apply to you:
https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage
https://kb.blockbridge.com/technote/proxmox-optimizing-windows-server/part-1.html

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
After several debugging attempts we found the problem in Proxmox's iSCSI initiator. Using the same bridge with a virtualized Windows machine and initiating the iSCSI session from Windows, we get 10Gb of throughput. Connecting a physical Windows machine, the iSCSI reaches 10Gb; connecting an ESXi host, the same. The problem lies in the initiator used by Proxmox. How can I debug it further?
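For anyone following up: Proxmox's initiator is open-iscsi, so one place to start digging is the session defaults in /etc/iscsi/iscsid.conf, which are conservative out of the box. The values below are illustrative guesses, not a confirmed fix for this thread:

# /etc/iscsi/iscsid.conf: session defaults read at login time
# (stock defaults: node.session.cmds_max = 128, node.session.queue_depth = 32)
node.session.cmds_max = 1024
node.session.queue_depth = 128

# or update an already-discovered target record, then re-login to apply
iscsiadm -m node -T <target-iqn> -o update -n node.session.queue_depth -v 128
iscsiadm -m node -T <target-iqn> --logout
iscsiadm -m node -T <target-iqn> --login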
 
