[SOLVED] Using iSCSI LUN *directly*

kiler129

I see that Proxmox supports iSCSI with two different drivers: kernel-mode open-iscsi and user-mode libiscsi2. While this isn't spelled out in the docs, I found out that the kernel one is attached to the host and creates a block device, which is then passed through to the guest. Besides the lower performance, this seems to imply that Proxmox must create a datastore on it. That is not what I want, as I already have LUNs with existing data exposed on the NAS, e.g.:

Code:
# fdisk -l /dev/zvol/ssd-dev/iscsi-iot04dev1
Disk /dev/zvol/ssd-dev/iscsi-iot04dev1: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 16384 bytes
Disklabel type: gpt
Disk identifier: ***

Device                                  Start      End  Sectors  Size Type
/dev/zvol/ssd-dev/iscsi-iot04dev11    2048    67583    65536   32M EFI System
/dev/zvol/ssd-dev/iscsi-iot04dev12   67584   116735    49152   24M Linux filesystem
(...)

I tried adding the iSCSI storage to /etc/pve/storage.cfg using the iscsidirect driver:
Code:
iscsidirect: iscsi-iot04dev
        portal store-XXX-int.AAA.BBB
        target iqn.2000-01.BBB.AAA.store-XXX-int:iscsi

However, I cannot find ANY documentation or examples on how to attach a LUN from such a backend to a VM. I see that QEMU has docs on its iSCSI support, but I would rather not resort to the args: hack. For the record, in case someone finds this thread in the future, it does indeed work, albeit only in OVMF (UEFI) mode:

Code:
# cat /etc/pve/qemu-server/192.conf
args: -drive file=iscsi://store-XXX-int.AAA.BBB/iqn.2000-01.BBB.AAA.store-XXX-int:iscsi/43
bios: ovmf
cpu: host
(...)
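For reference, the iscsi:// URL in that drive line follows QEMU's documented scheme (sketched below); the trailing number is the LUN, 43 in my case:

Code:
iscsi://<portal-host>[:<port>]/<target-iqn>/<lun>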


Using args: hides the disk from the WebUI and also doesn't seem like the correct/Proxmox way of adding it. I tried using the WebUI's "Add: Hard Disk" wizard, but it simply errors out:

[screenshot: error from the "Add: Hard Disk" wizard]


Any help would be appreciated :)
 
Hi,
please try to install the package libiscsi-bin (which contains the iscsi-ls binary).
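For example (assuming the standard Debian/PVE repositories are configured):

Code:
apt update && apt install libiscsi-bin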
 
Hi,
please try to install the package libiscsi-bin (which contains the iscsi-ls binary).
Damn, I was close, thank you Fiona! ;) Indeed, installing that package makes the UI functional, allows selecting the LUN # in the "Disk Image" dropdown, and makes the machine boot.
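For anyone following along, the resulting entry in the VM config looks roughly like this (a sketch; if I read the storage plugin right, iscsidirect volumes are named lun<N>, and the bus/slot depend on what you pick in the wizard):

Code:
# /etc/pve/qemu-server/192.conf (relevant line only)
scsi0: iscsi-iot04dev:lun43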

I think the documentation at https://pve.proxmox.com/wiki/Storage:_iSCSI could be improved with a "how-to", like the one for PCIe passthrough. In addition, the docs about the user-mode libiscsi2 driver should probably mention libiscsi-bin.
I will be happy to do both, but I don't see an option to create an account on the wiki, nor to edit the pve-docs.
 
Damn, I was close, thank you Fiona! ;) Indeed, installing that package makes the UI functional, allows selecting the LUN # in the "Disk Image" dropdown, and makes the machine boot.
Glad to hear :) Please mark the thread as solved by editing the thread/first post and selecting the [SOLVED] prefix. This helps other users to find solutions more quickly.

I think the documentation at https://pve.proxmox.com/wiki/Storage:_iSCSI could be improved with a "how-to", like the one for PCIe passthrough. In addition, the docs about the user-mode libiscsi2 driver should probably mention libiscsi-bin.
I will be happy to do both, but I don't see an option to create an account on the wiki, nor to edit the pve-docs.
This wiki article is generated from the docs; changes to pve-docs are done via our mailing list. If you wish to contribute, see also the developer documentation. But I can just go ahead and send a patch to mention that the package is required (nowadays, it's also libiscsi7 rather than libiscsi2).

EDIT: The patch has been sent. Feel free to send another one to provide a how-to.
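Roughly, the patch workflow looks like this (a sketch; check the developer documentation for the exact requirements, such as the CLA and commit message format):

Code:
git clone git://git.proxmox.com/git/pve-docs.git
cd pve-docs
# edit the relevant .adoc file, commit with a Signed-off-by, then
git send-email --to=pve-devel@lists.proxmox.com -1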
 
Glad to hear :) Please mark the thread as solved by editing the thread/first post and selecting the [SOLVED] prefix. This helps other users to find solutions more quickly.
Done! :)

I will do more of a writeup this week, as the holidays give me some more free time. However, before submitting it I think I found a major problem in direct iSCSI when used from Proxmox, causing a total crash of the VM using it. This may become a "spider-man meme", as it involves Proxmox, QEMU, and TrueNAS. I lack sufficient knowledge to debug it, and the issue is most likely either in QEMU or in the modifications Proxmox made to QEMU: https://gitlab.com/qemu-project/qemu/-/issues/1378 - do you think it should be reported to https://bugzilla.proxmox.com/ ?
 
Done! :)

I will do more of a writeup this week, as the holidays give me some more free time. However, before submitting it I think I found a major problem in direct iSCSI when used from Proxmox, causing a total crash of the VM using it. This may become a "spider-man meme", as it involves Proxmox, QEMU, and TrueNAS. I lack sufficient knowledge to debug it, and the issue is most likely either in QEMU or in the modifications Proxmox made to QEMU: https://gitlab.com/qemu-project/qemu/-/issues/1378 - do you think it should be reported to https://bugzilla.proxmox.com/ ?
I think the only modification to the iSCSI code in QEMU that we do is defaulting to the open-iscsi initiator name, so likely it is also an issue with upstream/stock QEMU. To make sure, you could get a stock QEMU binary and run the VM with that (use qm showcmd <ID> --pretty to get the command line, drop the -id argument and the +pve0 part from the -machine argument, and then you should be able to start it) ;)
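In practice that means something like this (with <ID> being the VM ID):

Code:
qm showcmd <ID> --pretty > /tmp/vm-<ID>.sh
# edit /tmp/vm-<ID>.sh:
#  - remove the "-id <ID>" argument
#  - change "-machine 'type=...+pve0'" to "-machine 'type=...'"
#  - replace /usr/bin/kvm with the path to the stock binary
bash /tmp/vm-<ID>.sh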

What version of pve-qemu-kvm do you have installed?
 
I think the only modification to the iSCSI code in QEMU that we do is defaulting to the open-iscsi initiator name, so likely it is also an issue with upstream/stock QEMU.
Hm, yes, assuming that /etc/iscsi/initiatorname.iscsi exists and is valid, the patch (while slightly fragile) is unlikely to cause any problems, and certainly not issues like the ones in my case.

To make sure, you could get a stock QEMU binary and run the VM with that (...)
I quickly tried that with a binary lifted from bullseye (+ apt-installed a few libs), but unfortunately it's built without libiscsi support, resulting in a big sad qemu-system-x86_64: -drive file=iscsi://storage...../0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap: Unknown protocol 'iscsi'. I can technically dive further and try to build `master` from upstream to be sure, but I have a feeling that with the number of moving parts in QEMU it's not gonna be easy ;)

What version of pve-qemu-kvm do you have installed?
7.1.0-4; I checked the pvetest repo and that's the newest one there too. The PVE git repo contains 7.2.0-1, but I doubt it would fix this. There's a chance that some of the cherry-picks in PVE broke something, but more likely it's memory corruption somewhere upstream, or even more likely in libiscsi. The code handling TASK_SET_FULL isn't that complicated in itself: https://github.com/qemu/qemu/blob/8540a1f69578afb3b37866b1ce5bec46a9f6efbc/block/iscsi.c#L247 - I certainly don't see any obvious out-of-bounds writes there which could cause random corruption down the line.



Edit:

Well, I potentially have good and bad news. Either it's the modifications done by Proxmox, or the new version accidentally changed something which fixed that bug... which probably scares every developer ;) I've been running a VM with the compiled version for a few hours and it hasn't crashed (yet?).
See "edit 2" update below.

I went ahead and compiled QEMU's current master on a Debian 11 box:
Code:
# git rev-parse HEAD
113f00e38737d6616a18a465916ae880a67ff342

# git submodule status --recursive
 b6910bec11614980a21e46fbccc35934b671bd81 dtc (v1.6.1)
 3a9b285a55b91b53b2acda987192274352ecb5be meson (0.61.5)
 90c488d5f4a407342247b9ea869df1c2d9c8e266 roms/QemuMacDrivers (heads/master)
 6b6c16b4b40763507cf1f518096f3c3883c5cf2d roms/SLOF (qemu-slof-20190703-109-g6b6c16b)
 b24306f15daa2ff8510b06702114724b33895d3c roms/edk2 (edk2-stable201903-4029-gb24306f15d)
-b64af41c3276f97f0e181920400ee056b9c88037 roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3
-f4153a09f87cbb9c826d8fc12c74642bb2d879ea roms/edk2/BaseTools/Source/C/BrotliCompress/brotli
-52c587d60be67c337364b830dd3fdc15404a2f04 roms/edk2/CryptoPkg/Library/OpensslLib/openssl
-f4153a09f87cbb9c826d8fc12c74642bb2d879ea roms/edk2/MdeModulePkg/Library/BrotliCustomDecompressLib/brotli
-abfc8ff81df4067f309032467785e06975678f0d roms/edk2/MdeModulePkg/Universal/RegularExpressionDxe/oniguruma
-e9ebfa7e77a6bee77df44e096b100e7131044059 roms/edk2/RedfishPkg/Library/JsonLib/jansson
-1cc9cde3448cdd2e000886a26acf1caac2db7cf1 roms/edk2/UnitTestFrameworkPkg/Library/CmockaLib/cmocka
 4bd064de239dab2426b31c9789a1f4d78087dc63 roms/ipxe (v1.20.1-70-g4bd064de)
 0e0afae6579c1efe9f0d85505b75ffe989554133 roms/openbios (heads/master)
 4489876e933d8ba0d8bc6c64bae71e295d45faac roms/opensbi (v1.1)
 8ca302e86d685fa05b16e2b208888243da319941 roms/qboot (heads/master)
 99d9b4dcf27d7fbcbadab71bdc88ef6531baf6bf roms/qemu-palcode (heads/master)
 3208b098f51a9ef96d0dfa71d5ec3a3eaec88f0a roms/seabios (rel-1.16.1)
 458626c4c6441045c0612f24313c7cf1f95e71c6 roms/seabios-hppa (seabios-hppa-v6)
 cbaee52287e5f32373181cff50a00b6c4ac9015a roms/sgabios (cbaee52)
 24a7eb35966d93455520bc2debdd7954314b638b roms/skiboot (v7.0)
 840658b093976390e9537724f802281c9c8439f5 roms/u-boot (v2021.07)
 60b3916f33e617a815973c5a6df77055b2e3a588 roms/u-boot-sam460ex (heads/master)
 0c37a43527f0ee2b9584e7fb2fdc805e902635ac roms/vbootrom (0c37a43)
 0b28d205572c80b568a1003db2c8f37ca333e4d7 subprojects/libvfio-user (v0.1-657-g0b28d20)
 b64af41c3276f97f0e181920400ee056b9c88037 tests/fp/berkeley-softfloat-3 (heads/master)
 5a59dcec19327396a011a17fd924aed4fec416b3 tests/fp/berkeley-testfloat-3 (heads/master)
 e3eb28cf2e17fbcf7fe7e19505ee432b8ec5bbb5 tests/lcitool/libvirt-ci (e3eb28c)
 d21009b1c9f94b740ea66be8e48a1d8ad8124023 ui/keycodemapdb (d21009b)

# /root/qemu-master/qemu-system-x86_64 --version
QEMU emulator version 7.2.50 (v7.2.0-318-g113f00e387)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers

I used a configure invocation which is a modification of the Proxmox one (https://git.proxmox.com/?p=pve-qemu...f2cab1a0b5fef5c171da6391;hb=refs/heads/master). I attached the exact configure command used, along with its output, to this post. After compiling it and moving it to the Proxmox host (pve-manager/7.3-3/c3928077, running kernel 5.15.74-1-pve) I had to:

Code:
# install a few libs
apt install libcapstone4 libxenmisc4.14 libslirp0 libcacard0 libexecs0 libpmem1 libbrlapi0.8 libdaxctl1 libndctl6 libpcsclite1 libxencall1 libxendevicemodel1 libxenevtchn1 libxenforeignmemory1 libxengnttab1 libxenhypfs1 libxenstore3.0 libxentoolcore1 libxentoollog1 libyajl2 libvdeplug2

# put the binary in /root/qemu-master/qemu-system-x86_64
# link firmware files:
mkdir /root/share/ && ln -s /usr/share/kvm /root/share/kvm-master
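
For a rough idea, the configure invocation boiled down to something like this (a trimmed sketch; the full command and its output are in the attached qemu-master-config.txt):

Code:
./configure --target-list=x86_64-softmmu \
    --enable-kvm \
    --enable-libiscsi \
    --enable-linux-io-uring
make -j"$(nproc)"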

To run the machine I took the command from qm showcmd <ID> --pretty and modified it:
  • removed "-id" from the cmdline
  • removed "+pve0" from "-machine" in the cmdline
  • added "-accel kvm" to the cmdline

The final command turned out to be:
Code:
/root/qemu-master/qemu-system-x86_64 \
  -name 'homeassistant,debug-threads=on' \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/101.pid \
  -daemonize \
  -smbios 'type=1,uuid=<UUID>' \
  -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' \
  -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/rpool/data/vm-101-disk-0' \
  -smp '2,sockets=1,cores=2,maxcpus=2' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc 'unix:/var/run/qemu-server/101.vnc,password=on' \
  -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt \
  -m 2048 \
  -object 'iothread,id=iothread-virtio0' \
  -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
  -device 'vmgenid,guid=<UUID>' \
  -device 'qemu-xhci,p2=15,p3=15,id=xhci,bus=pci.1,addr=0x1b' \
  -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \
  -device 'usb-host,bus=xhci.0,port=1,vendorid=0x1cf1,productid=0x0030,id=usb0' \
  -chardev 'socket,id=serial0,path=/var/run/qemu-server/101.serial0,server=on,wait=off' \
  -device 'isa-serial,chardev=serial0' \
  -device 'VGA,id=vga,bus=pcie.0,addr=0x1' \
  -chardev 'socket,path=/var/run/qemu-server/101.qga,server=on,wait=off,id=qga0' \
  -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
  -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6189a1634d5' \
  -drive 'file=iscsi://storage..../iqn...../0,if=none,id=drive-virtio0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
  -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0' \
  -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=<MAC>,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,host_mtu=1500' \
  -machine 'type=q35' \
  -accel kvm



Edit2:

Welp, it did crash again. So it looks like it's an upstream bug after all.
 

Attachments

  • qemu-master-config.txt (21.5 KB)
I have installed the libiscsi-bin package, but the GUI still doesn't offer me a LUN to choose. :(
Do I perhaps have to restart some daemon?
[screenshot: the wizard with no LUN offered in the "Disk image" dropdown]

Code:
iscsidirect: iscsi-direct
        portal 172.24.123.123
        target iqn.2010-06.com.nutanix:ph-proxmoxiscsikernel-c2f2e7be-fd59-412d-9129-a1d37bc11ad1-tgt0
 
I have installed the libiscsi-bin package, but the GUI still doesn't offer me a LUN to choose. :(
Do I perhaps have to restart some daemon?
Did you confirm that your iSCSI is configured properly? I.e. via the output of "pvesm status", "pvesm list [storage]", iscsiadm discovery, etc. Any errors in the journal?
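For example, something along these lines (storage name and portal taken from your config above):

Code:
pvesm status
pvesm list iscsi-direct
iscsiadm -m discovery -t sendtargets -p 172.24.123.123
journalctl -b | grep -i iscsi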


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Did you confirm that your iSCSI is configured properly? I.e. via the output of "pvesm status", "pvesm list [storage]", iscsiadm discovery, etc. Any errors in the journal?


Yep, for example I can see LUNs using iscsi-ls.
Code:
# iscsi-ls -i iqn.1993-08.org.debian:01:ca3bbfba8185 -s iscsi://172.24.178.14

Target:iqn.2010-06.com.nutanix:ph-proxmoxiscsi-413f2d1c-2361-4579-b27e-4bce51c37c0c-tgt0 Portal:172.24.178.14:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:499G)

Target:iqn.2010-06.com.nutanix:ph-proxmoxiscsikernel-c2f2e7be-fd59-412d-9129-a1d37bc11ad1-tgt0 Portal:172.24.178.14:3260,1
Target:iqn.2010-06.com.nutanix:ph-proxmoxiscsi10gb-876bdf91-944a-4b65-aa67-a9870d206e60-tgt0 Portal:172.24.178.14:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:39G)

Target:iqn.2010-06.com.nutanix:alaniscsi-84fd291b-d047-4515-9fb5-8e97ba8ce8a6-tgt0 Portal:172.24.178.14:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:63G)
 
Yep, for example I can see LUNs using iscsi-ls.
You should:
- open your own thread and provide all the relevant information usually required in such cases:
-- content of storage.cfg
-- output of pvesm status
-- output of pvesm list [storage]
-- output/review of system log messages
- use CODE tags around text

iscsi-ls is not a tool normally installed on PVE, nor is it used by PVE directly, so the fact that it's working does not mean PVE is configured properly.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
