Is multipath enabled by default?

Scott Zupek

Hello,

I am running into an odd issue where my Synology LUN is showing "?". I added a second host and created a cluster today; the new host connected fine, but the old one still shows the same "?". I have rebooted both hosts and am trying to figure out why H1 (host 1) keeps showing the LUN as unavailable/offline even though all the VMs are currently running and backing up from the LUN.

2024-07-27_15-53.png

Someone recommended THIS (setting up multipath), but I find it difficult to believe that PVE wouldn't have multipath enabled by default. The Synology is already configured for multipath.

Here are results from my troubleshooting:

Code:
root@C14G-H1-PVE:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso
        maxfiles 1
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvm: local-Datastore1
        vgname local-Datastore1
        content rootdir,images
        nodes C14G-H1-PVE
        shared 0

iscsi: SynNAS_Raid5
        portal 10.127.0.150
        target iqn.2000-01.com.synology:SYNSAN1-c14g.Target-1.cb16691f22
        content none

nfs: MAIN-FNS
        export /volume1/MAIN-NFS
        path /mnt/pve/MAIN-FNS
        server 10.127.0.150
        content iso,snippets,rootdir,backup,images,vztmpl
        maxfiles 7

Code:
root@C14G-H1-PVE:~# pvesm scan iscsi 10.127.0.150
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 10.127.0.150,3260,1 iqn.2000-01.com.synology:SYNSAN1-c14g.Target-1.cb16691f22]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] fe80::211:32ff:fe6a:19b5,3260,1 iqn.2000-01.com.synology:SYNSAN1-c14g.Target-1.cb16691f22]
iqn.2000-01.com.synology:SYNSAN1-c14g.Target-1.cb16691f22 10.127.0.150:3260,[fe80::211:32ff:fe6a:19b5]:3260

Code:
root@C14G-H1-PVE:~#  iscsiadm -m session -P 1 | grep 'iSCSI.*State' 
iscsiadm: No active sessions.


If I run the same command on H2 (no VMs currently running on it), I get:
Code:
iscsiadm -m session -P 1 | grep 'iSCSI.*State'

Code:
root@C14G-H2-PVE:~# pvesm scan iscsi 10.127.0.150
iqn.2000-01.com.synology:SYNSAN1-c14g.Target-1.cb16691f22 10.127.0.150:3260,[fe80::211:32ff:fe6a:19b5]:3260

The storage.cfg file is IDENTICAL on both hosts (as expected, since /etc/pve is shared cluster-wide).

And here is the multipath support check on the Synology LUN:
2024-07-27_18-30.png
 
Is the multipath package installed on your system?
https://superuser.com/questions/175...package-has-been-installed-via-package-manage

It's not by default, hence the installation step in the wiki.

Why is it surprising that an optional technology, one that the majority doesn't use, is not installed/enabled by default?

On a side note, it's not enabled by default on ESXi or Hyper-V either. One has to go through a number of steps to get multipath running.
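For reference, a quick way to check on a PVE node (a minimal sketch; multipath-tools is the standard Debian package name, and PVE is Debian-based):
Code:
# is the package installed? (dpkg -s exits non-zero if not)
dpkg -s multipath-tools >/dev/null 2>&1 && echo installed || echo "not installed"

# install and start it (assumes the standard Debian/PVE repositories)
apt update && apt install -y multipath-tools
systemctl enable --now multipathd

# list multipath maps; empty output means nothing is configured yet
multipath -ll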

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Additionally, to address your point about "?": that's a health check status result, produced by pvestatd. The question mark means there is some sort of trouble with the storage from the point of view of pvestatd. It is independent of whether multipath is enabled or not.

You ran "pvesm scan", which shows a somewhat unusual result. You should also run "pvesm status"; it may hold another clue.

Best,


 
Just realized: your "?" is possibly a result of your storage advertising iSCSI on IPs that are not reachable by PVE, i.e. internal ones. This is a recent change in PVE's behavior. There is a Bugzilla entry somewhere, as well as a few discussions on the forum; the search function should yield some results.
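You can see every portal the target advertises (including unreachable ones) with a plain discovery; in your scan output, the link-local fe80:: address is exactly that kind of extra portal:
Code:
# list all portals the storage announces for its targets
iscsiadm -m discovery -t sendtargets -p 10.127.0.150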



 
That's a good point. I do come from the ESXi side (pre-Realtek nightmare), so I was under the expectation that the package would be installed but just not enabled. I believe the issue may be tied to my previous IP scheme, before I moved to a managed network. When I search for port 3260 (iSCSI) traffic in the firewall, I see the host in question here (H1) trying to connect to the OLD IP address of the Synology. I have looked high and low for a config file containing this IP, and I ran the --cache option to flush it out, but that didn't do anything for iSCSI. I am wondering if there is a .conf file I missed somewhere...
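From what I understand, open-iscsi persists discovered portals in its node database rather than in a single .conf file, so if that's right, something like this should show (and clear) the stale entry. OLD_IP here is just a placeholder for the Synology's previous address:
Code:
# open-iscsi keeps persistent records here, one directory per target/portal
ls -R /etc/iscsi/nodes/ /etc/iscsi/send_targets/
# list all recorded node entries with their portals
iscsiadm -m node
# delete the stale record for the old portal
iscsiadm -m node -p OLD_IP -o delete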

I will go through the documentation/wiki and configure iSCSI as required; I just wanted to make sure beforehand that it wasn't installed but simply not enabled. Thank you!

2024-07-27_18-03.png
 
Also, for more detail: I can ping the Synology and reach it via SSH from the CLI shell, but for some reason the storage shows as NOT ACTIVE, even though it's working.
2024-07-28_00-09.png
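If I understand it right, ping and SSH only prove ICMP and TCP 22 are reachable, so I should probably also test the iSCSI port directly (assuming nc is installed):
Code:
# does the portal answer on the iSCSI port at all?
nc -zv 10.127.0.150 3260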
 
Just to add my two cents: if you want to stop Proxmox from using those IPs, you could run the following commands (replace 1.1.1.1 with the IP(s) that might be unreachable):
Code:
iscsiadm -m node -p 1.1.1.1 -o update -n node.startup -v manual
iscsiadm -m node -p 1.1.1.1 --logout
Personally, I have set up multipath with Proxmox before, because my ESXi hosts had it and I wanted to replicate things as close to 1:1 as possible. I did it via the CLI (instead of directly from the GUI) so I could set up a single volume group on it, instead of having a LUN created per disk. That wouldn't be so bad if it weren't for our storage-based backup solution lacking an "all LUNs" or "all LUNs in target" option...
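A rough sketch of that CLI approach, assuming the multipath device shows up as /dev/mapper/mpatha, with SynLUN-vg and SynLUN-mpath as example names:
Code:
# one PV and one VG on top of the multipath device
pvcreate /dev/mapper/mpatha
vgcreate SynLUN-vg /dev/mapper/mpatha
# register the VG in PVE as a shared LVM storage
pvesm add lvm SynLUN-mpath --vgname SynLUN-vg --shared 1 --content images,rootdir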
 
