iSCSI with Proxmox, specifically Nimble Storage Array

dan_karabin

New Member
Apr 4, 2024
Has anyone successfully configured a Nimble Storage Array for use with proxmox? We're looking at using our existing Nimbles for storage but haven't really found anything out about how to do this successfully. Any help would be appreciated.
 
Since there are no special integrations, you just need to make iSCSI, NFS, or CIFS available to PVE.
Think of PVE as a client of one of those protocols, so the setup should be pretty standard.
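If you want to script it rather than use the GUI, a minimal sketch of the shell equivalent (the storage ID, portal IP, and target IQN below are placeholders, not values from your array):

pvesm add iscsi nimble-iscsi \
    --portal 192.168.0.10 \
    --target iqn.2007-11.com.nimblestorage:example-target \
    --content none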


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Do you have any documentation on how to do this? I'm pretty new to Proxmox and having plenty of difficulties.
In the best case the configuration can be driven entirely from the PVE GUI. In more complex environments you may need to drop to a shell.
Some helpful links are:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_open_iscsi
https://pve.proxmox.com/wiki/Storage:_iSCSI
https://pve.proxmox.com/wiki/ISCSI_Multipath

Give it a try and if you have a specific question, feel free to come back.
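For reference, a GUI-created iSCSI entry ends up in /etc/pve/storage.cfg looking roughly like this (storage ID and addresses are placeholders):

iscsi: nimble-iscsi
        portal 192.168.0.10
        target iqn.2007-11.com.nimblestorage:example-target
        content none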


 
I have a Nimble running in our new PVE environment, but it is a bit tricky. If you are not used to iSCSI and multipathing on Linux, you have to get your head around it (Red Hat has some great resources), but it is all at the OS (Debian) level. In PVE you simply define the storage that will be available to the cluster; you set up iSCSI and multipathing on each node, and then, at the cluster level, you create your LVM volumes.
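To make the per-node part concrete, here is a rough sketch of what each node needs before the cluster-level LVM step (the portal IP is a placeholder; the packages are the stock Debian ones):

apt install open-iscsi multipath-tools                  # per-node prerequisites
iscsiadm -m discovery -t sendtargets -p 192.168.0.10    # discover the array's targets
iscsiadm -m node --login                                # log in on every discovered portal
multipath -ll                                           # confirm one mpath device with multiple paths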
 
Hi @pthomson , I'd disagree with the "tricky" label. The high-level steps are pretty much identical to ESXi or Hyper-V:
1. Enter iSCSI information for initiator-to-target connectivity
2. Ensure that the system sees multiple paths properly
3. Place a layer on top of the raw LUN (VMFS, CSV, LVM) - see the sketch below
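For step 3 on PVE that could look like this (a sketch only; the multipath device /dev/mapper/mpatha and the names are placeholders):

pvcreate /dev/mapper/mpatha               # initialize the multipath device for LVM
vgcreate vg_nimble /dev/mapper/mpatha     # create a volume group on it
pvesm add lvm nimble-lvm --vgname vg_nimble --shared 1 --content images,rootdir

The --shared 1 flag tells PVE the volume group is reachable from every node in the cluster.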


 
We are running Alletra, the successor to Nimble, and are having trouble configuring iSCSI and multipath. Could you explain how to set it up? We want to create LVM volumes.
 
1. Enter iSCSI information for initiator-to-target connectivity - DONE
2. Ensure that the system sees multiple paths properly - HOW?
3. Place a layer on top of the raw LUN (VMFS, CSV, LVM)

Getting:

iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not login to [iface: default, target: iqn.2007-11.com.nimblestorage:gcnimble-g69551a7ebf6a6c90, portal: 192.168.30.13,3260].
iscsiadm: initiator reported error (15 - session exists)
iscsiadm: Could not login to [iface: default, target: iqn.2007-11.com.nimblestorage:gcnimble-g69551a7ebf6a6c90, portal: 192.168.30.14,3260].
iscsiadm: initiator reported error (15 - session exists)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2007-11.com.nimblestorage:gcnimble-g69551a7ebf6a6c90, portal: 192.168.30.13,3260]
Logging in to [iface: default, target: iqn.2007-11.com.nimblestorage:gcnimble-g69551a7ebf6a6c90, portal: 192.168.30.14,3260]
command '/usr/bin/iscsiadm --mode node --targetname iqn.2007-11.com.nimblestorage:gcnimble-g69551a7ebf6a6c90 --login' failed: exit code 15
 
Hi @dave10x ,
It would be best if you opened your own thread. While the backend storage may be the same, the environmental factors and configuration are likely entirely different from the September thread you bumped.

2. Ensure that the system sees multiple paths properly - HOW?
Using basic Linux command-line tools: lsblk, blkid, lsscsi, etc.

You may want to review this document: https://pve.proxmox.com/wiki/Multipath
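If it helps, a quick sequence for the multiple-paths check (your two portals, .13 and .14, should each account for one session):

iscsiadm -m session    # expect one session per portal
lsscsi                 # the same LUN should appear once per path
multipath -ll          # one mpath device with multiple active paths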

Good luck

P.S. As for the errors you are getting: they are not an indication of a critical system issue; rather, they suggest you've been playing around with the system and have accumulated some "history". A reboot, or at the least an iSCSI database clean-out, may help.
In the new thread you open, you can add more details about your PVE cluster, i.e. storage config, current state, etc.
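For the clean-out, a cautious sketch (this logs out of and forgets all recorded targets on the node, so only run it when nothing else depends on them):

iscsiadm -m node --logoutall=all    # close all iSCSI sessions
iscsiadm -m node -o delete          # delete the cached node records
# then re-discover and re-login, or let PVE re-establish the sessions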

