iSCSI questions

hashman

New Member
Jun 11, 2025
I have a couple of Dell R730s with the old BOSS cards (2 x 32GB SD cards) that I've installed Proxmox on. I have several 10Gb NIC ports connected to a Nimble SAN via iSCSI, and I was able to add a volume/LVM disk using the Cluster settings. I also added a volume to one of the hosts as local storage to store ISOs. I then enabled multipath: the shared volume works as expected, but I can no longer mount the local volume to the ISOs folder. I also need to create a local volume that can be used for VMs, as this is where Veeam creates a worker appliance. I have the volume created on the storage array, and the host sees it, but I get a "No disks unused" message when trying to add it.
 
Hi @hashman , welcome to the forum.

Can you please provide command line output to better illustrate your environment?

cat /etc/pve/storage.cfg
pvesm status
lsscsi
lsblk
blkid
mount

Please post the text output wrapped in CODE </> tags. Add comments where needed to identify which storage is shared and which is local.



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Here is the output from the listed commands:

Code:
root@pve01:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

#Local storage
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# iSCSI volume
iscsi: Proxmox-01
        portal 192.168.250.202
        target iqn.2007-11.com.nimblestorage:proxmox-01-v711643a4c477761a.0000001b.b56135be
        content none

#LVM disk on top of iSCSI volume
lvm: Proxmox-01Nimble
        vgname Proxmox-LUN01
        base Proxmox-01:0.0.0.scsi-22b8a3d76f776869f6c9ce900be3561b5
        content images,rootdir
        saferemove 0
        shared 1

#Folder that was previously mounted to sdb
dir: ISOs
        path /mnt/ISOs
        content iso
        prune-backups keep-all=1
        shared 0


root@pve01:~# pvesm status
Name                    Type     Status           Total            Used       Available        %
ISOs                     dir     active         6854092         4149528         2335108   60.54%
Proxmox-01             iscsi     active               0               0               0    0.00%
Proxmox-01Nimble         lvm     active      1073737728       157294592       916443136   14.65%
local                    dir     active         6854092         4149528         2335108   60.54%
local-lvm            lvmthin     active         4956160               0         4956160    0.00%


root@pve01:~# lsscsi
-bash: lsscsi: command not found


root@pve01:~# lsblk
NAME                                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda                                   8:0    0   15G  0 disk
├─sda1                                8:1    0 1007K  0 part
├─sda2                                8:2    0  512M  0 part
└─sda3                                8:3    0 14.5G  0 part
  ├─pve-swap                        252:0    0    1G  0 lvm   [SWAP]
  ├─pve-root                        252:1    0  6.7G  0 lvm   /
  ├─pve-data_tmeta                  252:2    0    1G  0 lvm
  │ └─pve-data                      252:4    0  4.7G  0 lvm
  └─pve-data_tdata                  252:3    0  4.7G  0 lvm
    └─pve-data                      252:4    0  4.7G  0 lvm
#Before multipath was enabled mounted to /mnt/ISOs and configured to host ISOs
sdb                                   8:16   0  500G  0 disk
└─2747f0f3550c21d1e6c9ce900be3561b5 252:5    0  500G  0 mpath
#iSCSI volume to be used as local storage on the host for Veeam helper appliance/VM
sdc                                   8:32   0  250G  0 disk
sdd                                   8:48   0    1T  0 disk
└─22b8a3d76f776869f6c9ce900be3561b5 252:6    0    1T  0 mpath
  ├─Proxmox--LUN01-vm--100--disk--0 252:7    0    4M  0 lvm
  ├─Proxmox--LUN01-vm--100--disk--1 252:8    0  150G  0 lvm
  └─Proxmox--LUN01-vm--100--disk--2 252:9    0    4M  0 lvm
sr0                                  11:0    1 1024M  0 rom


root@pve01:~# blkid
/dev/mapper/pve-root: UUID="25257256-6352-4fdf-a9f8-a8609455085d" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sdd: UUID="QBsmej-IOJ3-gDsQ-Hkqs-gQ9L-zDHi-JdkVOs" TYPE="LVM2_member"
/dev/mapper/Proxmox--LUN01-vm--100--disk--1: PTUUID="38326a9d-ea77-4fc7-8bf0-26ffdd125b96" PTTYPE="gpt"
/dev/sdb: UUID="85f4b81e-6d85-4448-b89c-73540c1ca7e2" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/22b8a3d76f776869f6c9ce900be3561b5: UUID="QBsmej-IOJ3-gDsQ-Hkqs-gQ9L-zDHi-JdkVOs" TYPE="LVM2_member"
/dev/mapper/pve-swap: UUID="13baf5ae-b447-4475-ab9d-91815442d3c8" TYPE="swap"
/dev/sdc: PTUUID="5da64b68-d287-3d4b-827b-97b71583fa25" PTTYPE="gpt"
/dev/sda2: UUID="1CE8-D674" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="45ffc38f-33be-4f7b-81ce-ff16fc5710d1"
/dev/sda3: UUID="l9cUZY-VUpe-ntgz-Z8Wh-CQVS-C6Ed-c98aLR" TYPE="LVM2_member" PARTUUID="7b85876d-7bff-40ba-a2ea-3bf475634fd1"
/dev/sda1: PARTUUID="05b8c8e5-546f-4b0a-ad7e-846a496129d8"
/dev/mapper/2747f0f3550c21d1e6c9ce900be3561b5: UUID="85f4b81e-6d85-4448-b89c-73540c1ca7e2" BLOCK_SIZE="4096" TYPE="ext4"


root@pve01:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=98948508k,nr_inodes=24737127,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=19796480k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15190)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=19796476k,nr_inodes=4949119,mode=700,inode64)

Thanks!
 
If you want a SCSI disk (sd) to be used for local storage, specifically as directory storage for files, then you need to:
- format the disk (you appear to have done this already)
- create an appropriate mount point
- mount the disk, preferably via /etc/fstab so that the mount persists across reboots
- create a PVE directory storage pool, making sure to set the "is_mountpoint" option to true
- designate the appropriate content types for that new storage pool

I don't see the disks you expect to use as local storage mounted on your system.
Since you are using multipath, the fstab entry must reference the mpath device rather than the underlying sd device, and sdc needs to be handled the same way.
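The steps above could look roughly like this for the 500G ISO volume. This is a sketch based on your output: the filesystem UUID comes from your blkid listing (it appears on both /dev/sdb and the mpath device, so mounting by UUID picks up the multipath path automatically). Adjust names to your environment before running anything.

```shell
# Mount point for the ISO volume (may already exist from before)
mkdir -p /mnt/ISOs

# fstab entry by filesystem UUID (from your blkid output), so the mount
# survives device renames; _netdev delays the mount until the network
# (and thus the iSCSI session) is up
echo 'UUID=85f4b81e-6d85-4448-b89c-73540c1ca7e2 /mnt/ISOs ext4 defaults,_netdev 0 2' >> /etc/fstab
systemctl daemon-reload
mount /mnt/ISOs

# Re-create the directory storage with is_mountpoint set, so PVE only
# uses it while the filesystem is actually mounted
pvesm remove ISOs
pvesm add dir ISOs --path /mnt/ISOs --content iso --is_mountpoint yes
```

With is_mountpoint set, PVE marks the storage inactive instead of silently writing into the empty /mnt/ISOs directory on the root filesystem if the iSCSI mount ever fails.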

 