Proxmox iSCSI Performance Issue - Support Report
Executive Summary
For more than a year I've been using this great product and it has worked flawlessly.
Now that I need to add shared storage, I'm seeing a significant performance degradation in the Proxmox iSCSI initiator on a 5-node cluster. The Proxmox host achieves only ~5GbE speeds (498 MiB/s read, 408 MiB/s write) despite a 25GbE connection, while VMs on the same hardware reach 1872-2173 MiB/s, close to the expected 25GbE throughput.
Environment
- Proxmox VE: 9.0.6 (pve-no-subscription repository)
- Cluster: 5 nodes (Node 1: 25GbE, Nodes 2-5: 10GbE)
- Storage: HPE MSA 2060 with 25GbE connection
- iSCSI: open-iscsi 2.1.11-1, multipath 0.11.1-2
Problem Description
The Proxmox host's native iSCSI initiator underperforms the expected 25GbE throughput by ~80%, while guest OS iSCSI initiators on identical hardware come much closer to the expected figures.
Test Results Summary
Proxmox Host (25GbE) - UNDERPERFORMING

Code:
Sequential Read: 498MiB/s (522MB/s) - 17.4% of expected
Sequential Write: 408MiB/s (428MB/s) - 15.3% of expected
Mixed R/W: 229+229MiB/s - 16.6% of expected
Latency: 64-78ms (high)
Windows Server 2025 VM - PARTIAL PERFORMANCE

Code:
Sequential Read: 1451MiB/s (1452MB/s) - 48.4% of expected
Sequential Write: 808MiB/s (808MB/s) - 29.0% of expected
Mixed R/W: 425+427MiB/s - 30.6% of expected
Ubuntu 24.04 LTS VM - GOOD PERFORMANCE

Code:
Sequential Read: 1872MiB/s (1963MB/s) - 65.4% of expected
Sequential Write: 1738MiB/s (1823MB/s) - 65.1% of expected
Mixed R/W: 987+984MiB/s - 71.3% of expected
Latency: 12-19ms (normal)
Debian Trixie VM - EXCELLENT PERFORMANCE

Code:
Sequential Read: 2173MiB/s (2279MB/s) - 76.0% of expected
Sequential Write: 1824MiB/s (1913MB/s) - 68.3% of expected
Mixed R/W: 1096+1090MiB/s - 79.1% of expected
Latency: 11-17ms (normal)
Key Technical Details
Test Configuration
- Test Tool: fio 3.39 with libaio engine
- Block Size: 1MB sequential I/O
- Queue Depth: 32
- Runtime: 60 seconds per test
- Direct I/O: Enabled (bypass cache)
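For reference, a fio invocation matching the parameters above would look roughly like the following; the device path is a placeholder, not necessarily the exact command that was run:

Code:
# 1M sequential read, QD32, direct I/O, 60s (swap --rw=write for the write test)
# /dev/mapper/mpatha is a placeholder for the multipath device under test
fio --name=seqread --filename=/dev/mapper/mpatha \
    --ioengine=libaio --direct=1 --rw=read \
    --bs=1M --iodepth=32 --runtime=60 --time_based \
    --group_reporting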
Network Baseline
- 25GbE Theoretical: 3.125 GB/s
- Expected Practical: 2.8-3.0 GB/s
- Proxmox Actual: 0.52 GB/s (83% performance loss)
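To separate raw network capacity from iSCSI-layer behavior, a plain TCP test between the Proxmox node and another 25GbE endpoint could confirm this baseline; the address below is a placeholder:

Code:
# Raw TCP throughput check over the 25GbE path
iperf3 -s                        # on a second 25GbE host acting as server
iperf3 -c 192.0.2.10 -P 4 -t 30  # from the Proxmox node: 4 parallel streams, 30s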
Root Cause Analysis
Hardware/Network:
CONFIRMED WORKING
- Network infrastructure functional (VMs achieve high performance)
- HPE MSA 2060 storage performing correctly
- 25GbE connection established and stable
Proxmox iSCSI Stack:
IDENTIFIED BOTTLENECK
- Native open-iscsi implementation severely underperforming
- Same open-iscsi version in Debian VM performs excellently
- Issue appears specific to Proxmox integration/configuration
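One way to narrow this down further would be to compare the negotiated session parameters and block-layer queue limits on the host against the well-performing Debian guest, along these lines (the dm device name is a placeholder):

Code:
# Negotiated iSCSI parameters (MaxRecvDataSegmentLength, MaxBurstLength, queue depth)
iscsiadm -m session -P 3
# Path state and active path selector
multipath -ll
# Block-layer limits on the multipath device (dm-0 is a placeholder)
cat /sys/block/dm-0/queue/max_sectors_kb /sys/block/dm-0/queue/nr_requests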
Tested Configurations
- Multiple multipath configurations
- Various iSCSI session parameters
- Different queue depths and block sizes
- Results consistent across all variations
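For illustration, these are the iscsid.conf parameters that typically bound sequential iSCSI throughput; the values shown are examples only, not necessarily what was tested or a recommended configuration:

Code:
# /etc/iscsi/iscsid.conf - parameters that commonly limit sequential iSCSI throughput
# (example values only; sessions must be logged out and back in to apply changes)
node.session.cmds_max = 1024
node.session.queue_depth = 128
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144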
Business Impact
- Severity: HIGH - Direct impact on VM storage performance
- Scope: Cluster-wide shared storage affected
- Production Risk: Potential performance degradation for workloads
Requested Actions
1. Root Cause Investigation: Why does Proxmox iSCSI underperform vs guest OS?
2. Performance Tuning: Optimize Proxmox iSCSI configuration
3. Best Practices: Updated guidance for high-performance iSCSI setups
4. Configuration Review: Analyze current LVM/iSCSI settings
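To support the configuration review, the relevant state could be collected with something like the following:

Code:
# Current storage configuration and session state
cat /etc/pve/storage.cfg   # Proxmox storage definitions (iSCSI + LVM entries)
iscsiadm -m node           # configured targets and portals
iscsiadm -m session        # active iSCSI sessions
multipath -ll              # multipath topology and path selector
pvs && vgs && lvs          # LVM layout on the shared volume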
Environment Details
- Kernel: Linux 6.14.11-1-pve
- NIC: Broadcom BCM57414 NetXtreme-E 25Gb
- Switch: Ubiquiti USW Pro Aggregation
- Config: shared LVM volume on iSCSI, defined at the cluster level
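For completeness, the negotiated link speed and NIC offload state on the 25GbE port can be confirmed as follows; the interface name is a placeholder for the BCM57414 port in use:

Code:
# Link speed, offloads, and error counters on the 25GbE NIC (interface name is a placeholder)
ethtool enp1s0f0np0 | grep -i speed
ethtool -k enp1s0f0np0 | grep -E 'segmentation-offload|receive-offload'
ip -s link show enp1s0f0np0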
Expected Resolution
Performance optimization to achieve 80%+ of the practical 25GbE throughput (~2.4 GB/s), matching the guest OS performance levels.