iSCSI super slow - have I done something wrong?

unsichtbarre

Member
Oct 1, 2024
iSCSI in my lab environment is super slow. It is backed by SSDs and presented via TrueNAS. Have I done something wrong on my end?

THX in ADV,
-JB

Code:
root@pve101:~# iscsiadm -m session
tcp: [1] 10.26.22.20:3260,1 iqn.2009-10.com.vmsources.lab:pveclass (non-flash)
tcp: [2] 10.26.23.20:3260,1 iqn.2009-10.com.vmsources.lab:pveclass (non-flash)
root@pve101:~# multipath -ll
mpatha (36589cfc000000ce719e13a82329c03c3) dm-5 TrueNAS,iSCSI Disk
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 4:0:0:101 sdh 8:112 active ready       running
  `- 3:0:0:101 sdg 8:96  active i/o pending running
root@pve101:~# multipath -ll
mpatha (36589cfc000000ce719e13a82329c03c3) dm-5 TrueNAS,iSCSI Disk
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 4:0:0:101 sdh 8:112 active i/o pending running
  `- 3:0:0:101 sdg 8:96  active i/o pending running
root@pve101:~# multipath -ll
mpatha (36589cfc000000ce719e13a82329c03c3) dm-5 TrueNAS,iSCSI Disk
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 4:0:0:101 sdh 8:112 active ready running
  `- 3:0:0:101 sdg 8:96  active ready running
root@pve101:~# cat /etc/multipath.conf
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
root@pve101:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

iscsi: PVE-SAN-1
        portal 10.26.22.20
        target iqn.2009-10.com.vmsources.lab:pveclass
        content none

lvm: PVE-LVM-101
        vgname PVE-VG-1
        base PVE-SAN-1:0.0.101.scsi-36589cfc000000ce719e13a82329c03c3
        content images,rootdir
        saferemove 0
        shared 1

cifs: ISOs
        path /mnt/pve/ISOs
        server share
        share files
        content iso
        prune-backups keep-all=1
        subdir /ISOs
        username admin101@lab.vmsources.com

rbd: CEPH-POOL-1
        content images,rootdir
        krbd 0
        pool CEPH-POOL-1
root@pve101:~# pvs
  PV                 VG                                        Fmt  Attr PSize    PFree
  /dev/mapper/mpatha PVE-VG-1                                  lvm2 a--   499.99g 435.99g
  /dev/sda3          pve                                       lvm2 a--   <31.50g   8.00m
  /dev/sdc           ceph-9acc2995-3e0f-4013-8eb1-5d7af084d52f lvm2 a--  <300.00g      0
  /dev/sdd           ceph-c9bb9d65-3d28-40c7-b0b2-916c395838ef lvm2 a--  <300.00g      0
  /dev/sde           ceph-b6dcf708-6dbc-4b1a-87b1-06a5f874ee39 lvm2 a--  <300.00g      0
  /dev/sdf           ceph-f4c58000-eafd-47f0-abae-395d935ff218 lvm2 a--  <300.00g      0
root@pve101:~# vgs
  VG                                        #PV #LV #SN Attr   VSize    VFree
  PVE-VG-1                                    1   2   0 wz--n-  499.99g 435.99g
  ceph-9acc2995-3e0f-4013-8eb1-5d7af084d52f   1   1   0 wz--n- <300.00g      0
  ceph-b6dcf708-6dbc-4b1a-87b1-06a5f874ee39   1   1   0 wz--n- <300.00g      0
  ceph-c9bb9d65-3d28-40c7-b0b2-916c395838ef   1   1   0 wz--n- <300.00g      0
  ceph-f4c58000-eafd-47f0-abae-395d935ff218   1   1   0 wz--n- <300.00g      0
  pve                                         1   3   0 wz--n-  <31.50g   8.00m
root@pve101:~#
 
Hi @unsichtbarre,

“Super slow” is a very subjective way to describe performance. A good first step is to establish a baseline with FIO.
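For example, a quick 4k random-read test against the multipath device on the PVE host gives you a number to compare against. The parameters below are only an illustration (adjust block size, queue depth, and runtime to your workload); random reads against /dev/mapper/mpatha are non-destructive, but never run write tests against a device that holds data.

Code:
fio --name=iscsi-baseline --filename=/dev/mapper/mpatha --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting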

Keep in mind that I/O over iSCSI passes through many layers: disk/volume configuration and CPU/memory on TrueNAS, network on both ends (and the switch in between), CPU/memory on PVE, and finally the virtualization layer/VM itself. Any of these could be the bottleneck.

From the output you shared, nothing looks obviously wrong. The best approach is to test each layer methodically: FIO directly on TrueNAS, iPerf for the network, then FIO on the hypervisor and inside a VM. That will help you identify where the slowdown occurs.
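As a rough sketch of the network step, assuming iperf3 is available on both ends (the portal IPs are taken from your session list), run the server on TrueNAS and test each portal subnet separately from the PVE node:

Code:
# on TrueNAS (server side)
iperf3 -s

# on the PVE node, one run per portal network
iperf3 -c 10.26.22.20 -P 4 -t 30
iperf3 -c 10.26.23.20 -P 4 -t 30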

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
The problem turned out to be TrueNAS (i.e., me). I had configured both possible iSCSI target addresses in a single portal. I needed to configure one portal per IP and then reconfigure the iSCSI target with both portals.
(Screenshots of the corrected TrueNAS portal and target configuration attached.)
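For anyone hitting the same thing: after changing the portal layout on TrueNAS, the PVE side may also need a rediscovery and re-login so both sessions come back. Something along these lines (portal IP taken from storage.cfg above):

Code:
iscsiadm -m discovery -t sendtargets -p 10.26.22.20
iscsiadm -m node --login
multipath -ll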