Performance problem with iscsi+multipath

nounours54

New Member
Mar 28, 2024
Hi.
I'm quite new to Proxmox. So sorry for the newbie question.
I'm trying to build a shared iSCSI partition with multipath inside a Proxmox cluster (4 nodes).
I've read tutorials and configured the open-iscsi and multipath-tools packages.

Finally, multipath seems to be OK:
root@joe:~# multipath -ll

poseidon3-iscsi2 (360014056ff6003ad493cd495ed91b6dd) dm-15 SYNOLOGY,Storage
size=1.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=raw
`-+- policy='round-robin 0' prio=50 status=active
|- 11:0:0:2 sdi 8:128 active ready running
|- 12:0:0:2 sdh 8:112 active ready running
|- 13:0:0:2 sdj 8:144 active ready running
`- 14:0:0:2 sdk 8:160 active ready running
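
In case it helps, here is a minimal /etc/multipath.conf sketch matching the output above (the WWID and alias are taken from that output; the other options are common defaults I adapted from tutorials, not necessarily what you need):

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

multipaths {
    multipath {
        wwid  360014056ff6003ad493cd495ed91b6dd
        alias poseidon3-iscsi2
    }
}
```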

However, when I run a pvcreate /dev/mapper/poseidon3-iscsi2, the command takes around 10 minutes to complete (but finishes without error).
After that, the vgcreate takes around 30 minutes.
I then added a "storage" entry in the configuration with LVM on top of this VG.
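
For context, the sequence I ran looks roughly like this (the VG and storage names below are placeholders, not necessarily the ones you'd choose):

```
# Create the PV and VG on top of the multipath device
pvcreate /dev/mapper/poseidon3-iscsi2
vgcreate vg_poseidon3 /dev/mapper/poseidon3-iscsi2   # vg_poseidon3 is a placeholder name

# Register the VG as a shared LVM storage in Proxmox
pvesm add lvm poseidon3-lvm --vgname vg_poseidon3 --shared 1
```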

When I create a new VM with 1 disk on that storage, there is no real problem: the install is quite smooth and performance is OK.
However, when I clone a VM (on the same host), the performance is awful (<1 MB/s).

When I run multipath -ll again, I get:

root@william:~# multipath -ll

poseidon3-iscsi2 (360014056ff6003ad493cd495ed91b6dd) dm-12 SYNOLOGY,Storage
size=7.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 13:0:0:2 sdi 8:128 active ready running
|- 11:0:0:2 sdh 8:112 failed ready running
|- 14:0:0:2 sdj 8:144 failed ready running
`- 12:0:0:2 sdk 8:160 active i/o pending running

I don't really understand what's happening... any help would be appreciated.

Regards
 
> However, when I run a pvcreate /dev/mapper/poseidon3-iscsi2, the command runs around 10 minutes to complete (but completes without error).
> After that, the vgcreate takes around 30 minutes.

This is not normal.

> poseidon3-iscsi2 (360014056ff6003ad493cd495ed91b6dd) dm-12 SYNOLOGY,Storage
> size=7.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='round-robin 0' prio=50 status=active
> |- 13:0:0:2 sdi 8:128 active ready running
> |- 11:0:0:2 sdh 8:112 failed ready running
> |- 14:0:0:2 sdj 8:144 failed ready running
> `- 12:0:0:2 sdk 8:160 active i/o pending running

Two of the four paths are failed and one is stuck with I/O pending, so something is wrong at the transport layer.
I would recommend reaching out to Synology support and following their troubleshooting procedure. This is a straightforward iSCSI/Linux client setup that they should be able to assist you with.
The troubleshooting will undoubtedly involve analyzing the system log (journalctl), as well as understanding network topology and stability.
Common culprits are MTU, packet loss, etc.
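
As a quick sanity check for MTU, you can send non-fragmentable pings sized to your jumbo frames. A sketch (the portal IP below is a placeholder, substitute your SAN's address):

```shell
# ICMP payload size for a given MTU:
# payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
icmp_payload() {
    echo $(( $1 - 28 ))
}

# For a 9000-byte MTU, a non-fragmentable ping must carry 8972 bytes.
# 192.168.10.10 is a placeholder for your SAN portal IP:
#   ping -M do -c 3 -s "$(icmp_payload 9000)" 192.168.10.10
icmp_payload 9000
```

If the large ping fails while a standard 1472-byte one succeeds, some hop in between is dropping or fragmenting jumbo frames.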

My recommendation is: take 3 paths down and establish a baseline with one known good path.
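
Concretely, that baseline could look like this (the target IQN and portal IPs are placeholders; use `iscsiadm -m session` to find yours):

```
# List active sessions to identify the four paths
iscsiadm -m session

# Log out of three of the four portals, keeping one known good path
# (target IQN and portal IPs below are placeholders):
iscsiadm -m node -T iqn.2000-01.com.synology:poseidon3.target -p 192.168.10.11 --logout
iscsiadm -m node -T iqn.2000-01.com.synology:poseidon3.target -p 192.168.10.12 --logout
iscsiadm -m node -T iqn.2000-01.com.synology:poseidon3.target -p 192.168.10.13 --logout

# Benchmark against the remaining path before logging the others back in
```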

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi, all.
I've managed to get correct iSCSI performance.
You were right: all my problems were MTU related.
It's important to validate path MTU discovery (pmtud) in journalctl: I found a misconfiguration on 1 interface out of a group of 35, which was forcing the MTU down to 1500.
The newly measured MTU is 8885.
I'll see if it stays rock solid ;-)
 
