[SOLVED] iSCSI configured at datacenter level but not visible at VM level

davidski

New Member
Jun 15, 2023
I'm new to Proxmox, currently checking out a fresh install of Proxmox VE 8.0.0-8. I've set up an iSCSI connection to a Synology NAS, using LUNs directly and with no node restrictions. The datacenter node reports an active connection to the Synology, but there is no iSCSI option in the hardware edit panel of a VM. I briefly had this working at one point, but I reset the iSCSI target and the connection option has disappeared from the VM.

I've tried recreating the target and LUN on the Synology, deleting and re-adding the storage on the Proxmox side (both with and without using the LUNs directly), and creating a fresh VM. Any pointers on how to diagnose this and get things working would be much appreciated!
 
Thanks for the quick response! I have tried rebooting (just tried that again now, and the iSCSI option still fails to appear in the host's add menu). The rest of the diagnostic output is included below.



Code:
root@vetinari:~# pvesm status
Name                  Type     Status           Total            Used       Available        %
local                  dir     active        98497780        11213296        82234936   11.38%
local-lvm          lvmthin     active      1793077248        43033853      1750043394    2.40%
synology-iscsi       iscsi     active               0               0               0    0.00%

Code:
root@vetinari:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

iscsi: synology-iscsi
        portal detritus.woohouse.world
        target iqn.2000-01.com.synology:detritus.default-target.b1f52a4e7e3
        content none

Code:
root@vetinari:~# journalctl -n 100
Jun 15 19:18:07 vetinari systemd[1]: Stopping user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Jun 15 19:18:07 vetinari systemd[1]: run-user-0.mount: Deactivated successfully.
Jun 15 19:18:07 vetinari systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jun 15 19:18:07 vetinari systemd[1]: Stopped user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Jun 15 19:18:07 vetinari systemd[1]: Removed slice user-0.slice - User Slice of UID 0.
Jun 15 19:18:07 vetinari systemd[1]: user-0.slice: Consumed 1.729s CPU time.
Jun 15 19:22:57 vetinari pvedaemon[951]: <root@pam> successful auth for user 'root@pam'
Jun 15 19:26:01 vetinari pvedaemon[951]: worker exit
Jun 15 19:26:01 vetinari pvedaemon[949]: worker 951 finished
Jun 15 19:26:01 vetinari pvedaemon[949]: starting 1 worker(s)
Jun 15 19:26:01 vetinari pvedaemon[949]: worker 223946 started
Jun 15 19:29:00 vetinari pvedaemon[220761]: <root@pam> starting task UPID:vetinari:00036E70:0087F408:648B9EBC:vncshell::root@pam:
Jun 15 19:29:00 vetinari pvedaemon[224880]: starting termproxy UPID:vetinari:00036E70:0087F408:648B9EBC:vncshell::root@pam:
Jun 15 19:29:00 vetinari pvedaemon[223946]: <root@pam> successful auth for user 'root@pam'
Jun 15 19:29:00 vetinari login[224883]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jun 15 19:29:00 vetinari systemd[1]: Created slice user-0.slice - User Slice of UID 0.
Jun 15 19:29:00 vetinari systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Jun 15 19:29:00 vetinari systemd-logind[602]: New session 30 of user root.
Jun 15 19:29:00 vetinari systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Jun 15 19:29:00 vetinari systemd[1]: Starting user@0.service - User Manager for UID 0...
Jun 15 19:29:00 vetinari (systemd)[224889]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Jun 15 19:29:00 vetinari systemd[224889]: Queued start job for default target default.target.
Jun 15 19:29:00 vetinari systemd[224889]: Created slice app.slice - User Application Slice.
Jun 15 19:29:00 vetinari systemd[224889]: Reached target paths.target - Paths.
Jun 15 19:29:00 vetinari systemd[224889]: Reached target timers.target - Timers.
Jun 15 19:29:00 vetinari systemd[224889]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
Jun 15 19:29:00 vetinari systemd[224889]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (>
Jun 15 19:29:00 vetinari systemd[224889]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (re>
Jun 15 19:29:00 vetinari systemd[224889]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Jun 15 19:29:00 vetinari systemd[224889]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Jun 15 19:29:00 vetinari systemd[224889]: Reached target sockets.target - Sockets.
Jun 15 19:29:00 vetinari systemd[224889]: Reached target basic.target - Basic System.
Jun 15 19:29:00 vetinari systemd[224889]: Reached target default.target - Main User Target.
Jun 15 19:29:00 vetinari systemd[224889]: Startup finished in 96ms.
Jun 15 19:29:00 vetinari systemd[1]: Started user@0.service - User Manager for UID 0.
Jun 15 19:29:00 vetinari systemd[1]: Started session-30.scope - Session 30 of User root.
Jun 15 19:29:00 vetinari login[224905]: ROOT LOGIN  on '/dev/pts/0'
 

Attachments

  • 1686873848144.png
Code:
content none
This means that the last attempt was made with "direct LUN" unchecked. Generally you'd use that mode when you want to build an LVM structure on top of the iSCSI LUN.
If you want to use LUNs directly from your storage, then you must check "direct LUN", so that content=images. Once you do that, you may need to restart the services:

Code:
systemctl try-reload-or-restart pvedaemon pveproxy pvestatd pvescheduler
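For reference, with "direct LUN" checked, the stanza in /etc/pve/storage.cfg should end up looking like the one you posted, but with content set to images (a sketch reusing your portal and target):

Code:
iscsi: synology-iscsi
        portal detritus.woohouse.world
        target iqn.2000-01.com.synology:detritus.default-target.b1f52a4e7e3
        content images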

After that, you can check that the LUNs are visible under "Folder View > Storage > StorageName > VM Disks".
To add a disk to a VM, you would select "Hard Disk" from the menu in your screenshot, then pick your iSCSI storage and the corresponding disk/LUN.
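The same check can also be done from the shell; pvesm list prints the volumes (LUNs) that a storage exports (storage name taken from your storage.cfg above):

Code:
# List the volumes (LUNs) exported by the iSCSI storage;
# an empty list means Proxmox sees no usable LUN.
pvesm list synology-iscsi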


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
This means that the last attempt was made with "direct LUN" unchecked. Generally you'd use that mode when you want to build an LVM structure on top of the iSCSI LUN.

Quite right! I was trying both modes.

After re-checking the "Use LUNs directly" option and restarting the services, there is no change in what the GUI reports. The folder summary for the iSCSI connection shows enabled and available, while the VM Disks pane shows the NAS-hosted LUN. Both existing and new VMs still fail to offer iSCSI as a drop-down option. :(

From the shell, I can see a valid connection to the NAS:

Code:
root@vetinari:~# iscsiadm -m session
tcp: [1] 192.168.50.13:3260,1 iqn.2000-01.com.synology:detritus.default-target.b1f52a4e7e3 (non-flash)
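
A more verbose variant also shows which block devices each session attached, which is useful to confirm the LUN actually reached the kernel:

Code:
# Print full session details, including the attached SCSI devices (sdX)
iscsiadm -m session -P 3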
 
Oh, son of a gun... :oops: I was expecting to see an "iSCSI Device" option in the overall hardware Add drop-down, rather than selecting the storage pool from the Add: Hard Disk dialog. I do see the storage pool and the LUN when going to the Hard Disk dialog.

I did mention I was new to Proxmox, right? :p
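
For anyone else who lands here: the GUI's Hard Disk dialog has a CLI equivalent in qm set. A minimal sketch (the VM ID and volume name below are placeholders, not from my setup):

Code:
# Attach a LUN from the iSCSI storage to VM 100 as a SCSI disk.
# "100" and "<volume>" are placeholders; list real volume names with:
#   pvesm list synology-iscsi
qm set 100 --scsi1 synology-iscsi:<volume>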
 

Attachments

  • 1686925339939.png
