Trouble attaching SAN to Proxmox

sikeb

New Member
Aug 16, 2024
I am testing Proxmox, and we have a SAN from Pure Storage. I have a 3-host cluster, and each host can see the available storage (both local and SAN), but the SAN storage shows up duplicated for each Fibre Channel connection. I have installed multipath-tools and used some steps found here to edit the conf file, with no change in how the storage is seen.
The SAN storage will not attach to all 3 hosts. Every once in a while I can get it to attach to one host, but never all three.
Is there a Pure Storage plugin like there is for VMware?
How would I attach a SAN? None of the other posts I have found have worked so far.
Any information would be helpful.
 
Hi @sikeb, welcome to the forum.

Proxmox is based on Debian with an Ubuntu-derived kernel. Since your initial connectivity is handled at the Linux OS/kernel level, there is nothing Proxmox-specific in those steps.

Have you reviewed your vendor's documentation on connecting a SAN to Linux via iSCSI?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
We do have some RHEL servers that use the same storage in production. I will give this a look and see what I can do. Are you suggesting that Proxmox has no issue here and it's all on the SAN side?
I just want to make sure I am going in the right direction.
 
I just want to make sure I am going in the right direction.
You have not provided any technical information to draw conclusions from. Again, treat the initial troubleshooting/configuration as a Linux/iSCSI/Pure setup.
Perhaps the output of the following commands would help forum members get a better picture:
- lsscsi (from each node)
- lsblk (from each node)
- cat /etc/pve/storage.cfg
- pvesm status (from each node)
- multipath -ll (from each node)

If you do provide the above information, please wrap it in CODE tags (the </> icon in the editor toolbar).

Before doing anything with multipath, you need to ensure that each node sees the expected LUN. If multiple paths are in use, you should see the LUN as many times as you have paths.
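For example, a quick way to check this (device names below are just placeholders) is to compare SCSI WWIDs; sd devices that report identical WWIDs are the same LUN reached via different paths:

Code:
# show block devices with their WWIDs
lsblk -o NAME,SIZE,VENDOR,MODEL,WWN
# or query one device directly (replace sdb with your device)
/lib/udev/scsi_id -g -u -d /dev/sdb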


 
Don't know if I am doing this right:
lsscsi: command not found
multipath -ll: nothing happens
Node 1
lsblk

Code:
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 447.1G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0     1G  0 part /boot/efi
└─sda3                         8:3    0 446.1G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.3G  0 lvm 
  │ └─pve-data-tpool         252:4    0 319.6G  0 lvm 
  │   ├─pve-data             252:5    0 319.6G  1 lvm 
  │   ├─pve-vm--103--disk--2 252:6    0     4M  0 lvm 
  │   ├─pve-vm--103--disk--0 252:7    0     4M  0 lvm 
  │   ├─pve-vm--103--disk--1 252:8    0   100G  0 lvm 
  │   └─pve-vz               252:9    0     7T  0 lvm 
  └─pve-data_tdata           252:3    0 319.6G  0 lvm 
    └─pve-data-tpool         252:4    0 319.6G  0 lvm 
      ├─pve-data             252:5    0 319.6G  1 lvm 
      ├─pve-vm--103--disk--2 252:6    0     4M  0 lvm 
      ├─pve-vm--103--disk--0 252:7    0     4M  0 lvm 
      ├─pve-vm--103--disk--1 252:8    0   100G  0 lvm 
      └─pve-vz               252:9    0     7T  0 lvm 
sdb                            8:16   0     4T  0 disk
sdc                            8:32   0     8T  0 disk
sdd                            8:48   0     4T  0 disk
sde                            8:64   0     8T  0 disk
sdf                            8:80   0     4T  0 disk
sdg                            8:96   0     8T  0 disk
sdh                            8:112  0     4T  0 disk
sdi                            8:128  0     8T  0 disk
sdj                            8:144  1   7.4G  0 disk
├─sdj1                         8:145  1   100M  0 part
├─sdj5                         8:149  1   500M  0 part
├─sdj6                         8:150  1   500M  0 part
└─sdj7                         8:151  1   6.3G  0 part
sdk                            8:160  1    64K  1 disk
sdl                            8:176  1   320K  0 disk
sdm                            8:192  0     4T  0 disk
sdn                            8:208  0     8T  0 disk
sdo                            8:224  0     4T  0 disk
sdp                            8:240  0     8T  0 disk
sdq                           65:0    0     4T  0 disk
sdr                           65:16   0     8T  0 disk
sds                           65:32   0     4T  0 disk
sdt                           65:48   0     8T  0 disk

Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: local
        path /var/lib/vz
        content images,snippets,backup,rootdir,iso,vztmpl
        prune-backups keep-all=1

dir: DIR01
        path /dev/sdt
        content vztmpl,iso,rootdir,backup,images,snippets
        prune-backups keep-all=1
        shared 1

dir: DIR-024
        path /var/dev/sdg
        content images,snippets,rootdir,backup,iso,vztmpl
        prune-backups keep-all=1
        shared 1

dir: DIR-23
        path /mnt/pve/DIR-23
        content iso,vztmpl,snippets,images,backup,rootdir
        is_mountpoint 1
        nodes hst-5

lvm: Storage-01
        vgname Storage-01
        content images,rootdir
        nodes hst-6
        shared 0

Code:
mkdir /dev/sdt: File exists at /usr/share/perl5/PVE/Storage/Plugin.pm line 1753.
Name              Type     Status           Total            Used       Available        %
DIR-024            dir     active        98497780        48888000        44560232   49.63%
DIR-23             dir   disabled               0               0               0      N/A
DIR01              dir   inactive               0               0               0    0.00%
Storage-01         lvm   disabled               0               0               0      N/A
local              dir     active        98497780        48888000        44560232   49.63%
local-lvm      lvmthin     active       335134720       104863653       230271066   31.29%
 
Node 2
lsscsi: command not found
lsblk

Code:
root@hst-5:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0 447.1G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0     1G  0 part /boot/efi
└─sda3               8:3    0 446.1G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta 252:2    0   3.3G  0 lvm 
  │ └─pve-data     252:4    0 319.6G  0 lvm 
  └─pve-data_tdata 252:3    0 319.6G  0 lvm 
    └─pve-data     252:4    0 319.6G  0 lvm 
sdb                  8:16   0     4T  0 disk
sdc                  8:32   0     8T  0 disk
sdd                  8:48   0     4T  0 disk
sde                  8:64   0     8T  0 disk
sdf                  8:80   0     4T  0 disk
sdg                  8:96   0     8T  0 disk
└─sdg1               8:97   0     8T  0 part /mnt/pve/DIR-23
sdh                  8:112  0     4T  0 disk
sdi                  8:128  0     8T  0 disk
sdj                  8:144  0     4T  0 disk
sdk                  8:160  0     8T  0 disk
sdl                  8:176  0     4T  0 disk
sdm                  8:192  0     8T  0 disk
sdn                  8:208  0     4T  0 disk
sdo                  8:224  0     8T  0 disk
sdp                  8:240  0     4T  0 disk
sdq                 65:0    0     8T  0 disk

Code:
cat /etc/pve/storage.cfg
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: local
        path /var/lib/vz
        content images,snippets,backup,rootdir,iso,vztmpl
        prune-backups keep-all=1

dir: DIR01
        path /dev/sdt
        content vztmpl,iso,rootdir,backup,images,snippets
        prune-backups keep-all=1
        shared 1

dir: DIR-024
        path /var/dev/sdg
        content images,snippets,rootdir,backup,iso,vztmpl
        prune-backups keep-all=1
        shared 1

dir: DIR-23
        path /mnt/pve/DIR-23
        content iso,vztmpl,snippets,images,backup,rootdir
        is_mountpoint 1
        nodes hst-5

lvm: Storage-01
        vgname Storage-01
        content images,rootdir
        nodes hst-6
        shared 0
Code:
root@hst-5:~# pvesm status
Name              Type     Status           Total            Used       Available        %
DIR-024            dir     active        98497780         6138808        87309424    6.23%
DIR-23             dir     active      8521063312              56      8091550196    0.00%
DIR01              dir     active       263968524               0       263968524    0.00%
Storage-01         lvm   disabled               0               0               0      N/A
local              dir     active        98497780         6138808        87309424    6.23%
local-lvm      lvmthin     active       335134720               0       335134720    0.00%

multipath -ll: nothing
 
multipath -ll: nothing


Code:
lsscsi
[0:0:0:0]    disk    ATA      MK000480GWSSC    HPG2  /dev/sda
[4:0:0:253]  disk    PURE     FlashArray       8888  /dev/sdb
[4:0:0:254]  disk    PURE     FlashArray       8888  /dev/sdc
[4:0:1:253]  disk    PURE     FlashArray       8888  /dev/sdd
[4:0:1:254]  disk    PURE     FlashArray       8888  /dev/sde
[4:0:2:253]  disk    PURE     FlashArray       8888  /dev/sdf
[4:0:2:254]  disk    PURE     FlashArray       8888  /dev/sdg
[4:0:3:253]  disk    PURE     FlashArray       8888  /dev/sdh
[4:0:3:254]  disk    PURE     FlashArray       8888  /dev/sdi
[5:0:0:0]    disk    HPE      USB RAID LUN     0202  /dev/sdj
[5:0:0:1]    disk    HPE      Log File LUN     0202  /dev/sdk
[5:0:0:2]    disk    HPE      SPI Flash LUN    0202  /dev/sdl
[6:0:0:253]  disk    PURE     FlashArray       8888  /dev/sdm
[6:0:0:254]  disk    PURE     FlashArray       8888  /dev/sdn
[6:0:1:253]  disk    PURE     FlashArray       8888  /dev/sdo
[6:0:1:254]  disk    PURE     FlashArray       8888  /dev/sdp
[6:0:2:253]  disk    PURE     FlashArray       8888  /dev/sdq
[6:0:2:254]  disk    PURE     FlashArray       8888  /dev/sdr
[6:0:3:253]  disk    PURE     FlashArray       8888  /dev/sds
[6:0:3:254]  disk    PURE     FlashArray       8888  /dev/sdt


Code:
lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 447.1G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0     1G  0 part /boot/efi
└─sda3                         8:3    0 446.1G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.3G  0 lvm 
  │ └─pve-data-tpool         252:4    0 319.6G  0 lvm 
  │   ├─pve-data             252:5    0 319.6G  1 lvm 
  │   ├─pve-vm--103--disk--2 252:6    0     4M  0 lvm 
  │   ├─pve-vm--103--disk--0 252:7    0     4M  0 lvm 
  │   ├─pve-vm--103--disk--1 252:8    0   100G  0 lvm 
  │   └─pve-vz               252:9    0     7T  0 lvm 
  └─pve-data_tdata           252:3    0 319.6G  0 lvm 
    └─pve-data-tpool         252:4    0 319.6G  0 lvm 
      ├─pve-data             252:5    0 319.6G  1 lvm 
      ├─pve-vm--103--disk--2 252:6    0     4M  0 lvm 
      ├─pve-vm--103--disk--0 252:7    0     4M  0 lvm 
      ├─pve-vm--103--disk--1 252:8    0   100G  0 lvm 
      └─pve-vz               252:9    0     7T  0 lvm 
sdb                            8:16   0     4T  0 disk
sdc                            8:32   0     8T  0 disk
sdd                            8:48   0     4T  0 disk
sde                            8:64   0     8T  0 disk
sdf                            8:80   0     4T  0 disk
sdg                            8:96   0     8T  0 disk
sdh                            8:112  0     4T  0 disk
sdi                            8:128  0     8T  0 disk
sdj                            8:144  1   7.4G  0 disk
├─sdj1                         8:145  1   100M  0 part
├─sdj5                         8:149  1   500M  0 part
├─sdj6                         8:150  1   500M  0 part
└─sdj7                         8:151  1   6.3G  0 part
sdk                            8:160  1    64K  1 disk
sdl                            8:176  1   320K  0 disk
sdm                            8:192  0     4T  0 disk
sdn                            8:208  0     8T  0 disk
sdo                            8:224  0     4T  0 disk
sdp                            8:240  0     8T  0 disk
sdq                           65:0    0     4T  0 disk
sdr                           65:16   0     8T  0 disk
sds                           65:32   0     4T  0 disk
sdt                           65:48   0     8T  0 disk


Code:
 cat /etc/pve/storage.cfg
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: local
        path /var/lib/vz
        content images,snippets,backup,rootdir,iso,vztmpl
        prune-backups keep-all=1

dir: DIR01
        path /dev/sdt
        content vztmpl,iso,rootdir,backup,images,snippets
        prune-backups keep-all=1
        shared 1

dir: DIR-024
        path /var/dev/sdg
        content images,snippets,rootdir,backup,iso,vztmpl
        prune-backups keep-all=1
        shared 1

dir: DIR-23
        path /mnt/pve/DIR-23
        content iso,vztmpl,snippets,images,backup,rootdir
        is_mountpoint 1
        nodes hst-5

lvm: Storage-01
        vgname Storage-01
        content images,rootdir
        nodes hst-6
        shared 0




Code:
 pvesm status
mkdir /dev/sdt: File exists at /usr/share/perl5/PVE/Storage/Plugin.pm line 1753.
Name              Type     Status           Total            Used       Available        %
DIR-024            dir     active        98497780        48887352        44560880   49.63%
DIR-23             dir   disabled               0               0               0      N/A
DIR01              dir   inactive               0               0               0    0.00%
Storage-01         lvm   disabled               0               0               0      N/A
local              dir     active        98497780        48887352        44560880   49.63%
local-lvm      lvmthin     active       335134720       104863653       230271066   31.29%
 
You can install "lsscsi" with "apt install lsscsi".
If multipath shows nothing, then you do not have it configured properly.

You also don't appear to have any iSCSI storage pools configured in PVE. How did you configure the iSCSI connectivity?
You are pointing some storage pools directly at raw devices in your configuration. Although the status shows active, this is very unorthodox... I'd never set it up like that.
You seem to have placed a filesystem directly on raw devices that may have a multipath pair (I see one of them is a single-path HP).

My advice is to remove all storage pools except "local" and "local-lvm", wipe your disks, and go back to step one.
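As a rough sketch of that cleanup (the storage IDs come from your storage.cfg above; the device name is a placeholder, so double-check before wiping anything):

Code:
# remove the extra storage pool definitions
pvesm remove DIR01
pvesm remove DIR-024
pvesm remove DIR-23
pvesm remove Storage-01
# wipe leftover filesystem/LVM signatures from a SAN LUN (destroys data on that device!)
wipefs -a /dev/sdg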

Other outputs that may be helpful are:
iscsiadm -m node
iscsiadm -m session
mount
cat /etc/fstab


 
I deleted them, no problem; they were just there as we played with different configurations. Everything is wiped except local and local-lvm, and I ran apt install lsscsi in every shell.
While messing with it, I am unsure how I got stuff onto the raw devices. I now want to make sure I am following best practices for attaching this storage. Any other recommendations are very much appreciated.
 
It is FC (Fibre Channel).
Okay. I added the extras anyway. Will that be a detriment? Any videos or other posts that you would recommend that could help me get this storage attached?
Also, are you saying that it is normal to see the disk listed once for every Fibre Channel connection? We cannot have it see just the one disk, one time?
 
Okay. I added the extras anyway. Will that be a detriment? Any videos or other posts that you would recommend that could help me get this storage attached?
FC is actually somewhat simpler than iSCSI: the LUNs "just show up" if you did the job right on the SAN side.

The best resource for multipath configuration with Pure over FC is the Pure support website. You can also search YouTube for "proxmox multipath"; while most guides use iSCSI (because one does not need a fancy array for it), the concepts are the same.

Also, are you saying that it is normal to see the disk listed once for every Fibre Channel connection?
That's how it has worked for the last 30+ years. Absolutely normal and expected.
We cannot have it see just the one disk, one time?
Why would you want that? That leaves you with a single point of failure.

Once multipath is configured, you are going to see a /dev/mapper/mpathX device. That's what you will use going forward, not the underlying /dev/sdX devices.
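As a rough illustration only (check Pure's support site for their currently recommended settings, and substitute your own WWIDs, e.g. from the scsi_id command shown earlier), a minimal /etc/multipath.conf could look something like this:

Code:
defaults {
    user_friendly_names yes
    find_multipaths     no
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    # WWIDs of your Pure LUNs
    wwid "3624a93709855039a90fc46ec000120e2"
}
devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_grouping_policy group_by_prio
        path_selector        "service-time 0"
        prio                 alua
        failback             immediate
    }
}

After editing, reload with systemctl restart multipathd (or multipath -r) and check multipath -ll again.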

Good luck

https://pve.proxmox.com/wiki/ISCSI_Multipath


 
/dev/mapper/mpathX is not showing up... maybe multipath isn't configured correctly? Not sure what I am missing there.
Also, I can still see the storage. Following the documentation, I used pvcreate and lvcreate to try to add it to storage, and it "adds", but it shows 0 bytes and the other nodes show status unknown. Usually one of the three nodes shows status unknown...
 
/dev/mapper/mpathX is not showing up... maybe multipath isn't configured correctly? Not sure what I am missing there.
Also, I can still see the storage. Following the documentation, I used pvcreate and lvcreate to try to add it to storage, and it "adds", but it shows 0 bytes and the other nodes show status unknown. Usually one of the three nodes shows status unknown...
Since you don't see any mpath devices, you should look at your multipath configuration.
The storage may still be blacklisted. You can whitelist the devices in multipath with multipath -a /dev/sdX (replacing sdX with the actual device name).
With multipath -v3 you trigger a rescan, and then multipath -l should show your devices correctly.

If you then have your multipath devices on all nodes, you can create an LVM PV on one node with pvcreate /dev/mapper/mpatha.
Then create a VG with vgcreate LUN001 /dev/mapper/mpatha; LUN001 is your name for this device.
Finally, create an LVM storage in the GUI and select your VG (LUN001 in this example). A sketch of the whole sequence follows.
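Put together, a rough sketch of the whole sequence (mpatha and LUN001 are example names; the last line is the CLI equivalent of adding the LVM storage in the GUI):

Code:
# whitelist the device and reload multipath
multipath -a /dev/sdb
multipath -r
multipath -ll                        # should now list an mpathX device

# create the LVM physical volume and volume group on ONE node only
pvcreate /dev/mapper/mpatha
vgcreate LUN001 /dev/mapper/mpatha

# add it as shared LVM storage in PVE (or do the same via the GUI)
pvesm add lvm LUN001 --vgname LUN001 --shared 1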
 
I go to a host and open the shell.
I type in multipath -a /dev/sdX
I get:
Code:
73437.709072 | '/dev/sdX.' is not a valid argument

With multipath -v3 I get:
Code:
73510.165253 | set open fds limit to 1048576/1048576
73510.165311 | loading /lib/multipath/libchecktur.so checker
73510.165402 | checker tur: message table size = 3
73510.165411 | loading /lib/multipath/libprioconst.so prioritizer
73510.165491 | _init_foreign: foreign library "nvme" is not enabled
73510.178243 | sdj: size = 15564799
73510.178354 | sdj: vendor = HPE
73510.178373 | sdj: product = USB RAID LUN
73510.178390 | sdj: rev = 0202
73510.178837 | sdj: h:b:t:l = 5:0:0:0
73510.178989 | sdj: tgt_node_name = 2-4:1.0
73510.178994 | sdj: skip USB device 2-4:1.0
73510.179347 | sdk: size = 128
73510.179452 | sdk: vendor = HPE
73510.179469 | sdk: product = Log File LUN
73510.179486 | sdk: rev = 0202
73510.179939 | sdk: h:b:t:l = 5:0:0:1
73510.180081 | sdk: tgt_node_name = 2-4:1.0
73510.180085 | sdk: skip USB device 2-4:1.0
73510.180180 | sdl: size = 640
73510.180286 | sdl: vendor = HPE
73510.180303 | sdl: product = SPI Flash LUN
73510.180320 | sdl: rev = 0202
73510.180769 | sdl: h:b:t:l = 5:0:0:2
73510.180902 | sdl: tgt_node_name = 2-4:1.0
73510.180906 | sdl: skip USB device 2-4:1.0
73510.180990 | sda: size = 937703088
73510.181088 | sda: vendor = ATA
73510.181104 | sda: product = MK000480GWSSC
73510.181121 | sda: rev = HPG0
73510.181540 | sda: h:b:t:l = 0:0:0:0
73510.181714 | sda: tgt_node_name = ata-1.00
73510.181723 | sda: uid_attribute = ID_SERIAL (setting: multipath.conf defaults/devices section)
73510.181727 | sda: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
73510.181878 | sda: 58369 cyl, 255 heads, 63 sectors/track, start at 0
73510.181883 | sda: vpd_vendor_id = 0 "undef" (setting: multipath internal)
73510.181898 | sda: serial = S4DGNE0M203598
73510.181903 | sda: detect_checker = yes (setting: multipath internal)
73510.181965 | loading /lib/multipath/libcheckreadsector0.so checker
73510.182070 | checker readsector0: message table size = 0
73510.182075 | sda: path_checker = readsector0 (setting: multipath.conf defaults/devices section)
73510.182080 | sda: checker timeout = 30 s (setting: kernel sysfs)
73510.182252 | sda: readsector0 state = up
73510.182258 | sda: uid = MK000480GWSSC_S4DGNE0M203598 (udev)
73510.182268 | sda: wwid MK000480GWSSC_S4DGNE0M203598 blacklisted
73510.182559 | sdb: size = 8589934592
73510.182661 | sdb: vendor = PURE
73510.182677 | sdb: product = FlashArray
73510.182694 | sdb: rev = 8888
73510.183148 | sdb: h:b:t:l = 4:0:0:253
73510.183447 | sdb: tgt_node_name = 0x524a937cc3a16e00
73510.183452 | sdb: uid_attribute = ID_SERIAL (setting: multipath.conf defaults/devices section)
73510.183456 | sdb: recheck_wwid = 1 (setting: multipath.conf defaults/devices section)
73510.183902 | sdb: 0 cyl, 64 heads, 32 sectors/track, start at 0
73510.183908 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
73510.183922 | sdb: serial = 9855039A90FC46EC000120E2
73510.183927 | sdb: detect_checker = yes (setting: multipath internal)
73510.184464 | sdb: path_checker = tur (setting: storage device autodetected)
73510.184483 | sdb: checker timeout = 30 s (setting: kernel sysfs)
73510.184651 | sdb: tur state = up
73510.184656 | sdb: uid = 3624a93709855039a90fc46ec000120e2 (udev)
73510.184708 | sdb: wwid 3624a93709855039a90fc46ec000120e2 blacklisted


I think that I see the WWID whitelisted? What am I doing wrong? What else would I need to post?
 
It won't let me post the full output because of the reply restrictions. Please let me know if that is helpful in any meaningful way.
 
So I ran that command on each host; only the second one said "wwid 'wwid#' added". The rest said nothing.
I am still unable to attach any SAN storage to the clustered hosts. What am I doing wrong?
 
Hi @sikeb, it's been a while since your last post, and a lot could have changed. You will need to post all of the information about the current state of your systems again to get a meaningful suggestion.

I'd highly recommend that you open a case with Pure: "how to attach my Pure SAN to a Debian host". You paid a lot of money for that product and the corresponding support.


 
