Multipath iSCSI problems with 8.1

Yes, I know that iSCSI is legacy tech, and I avoid it where I can, but a lot of my customers, especially those coming from VMware, still bring iSCSI clusters with them.
Although NVMe/TCP is gaining steam, I wouldn't put iSCSI into the legacy column. With proper implementation, it can still provide a lot of oomph to the user. iSCSI is an industry-standard (there is a built-in initiator in every OS) way to access block storage.

@Lephisto, do you or the customers you are helping have appropriate PVE subscriptions? Have you raised a case through the support portal? The forum works great for many things, but if you have a critical issue, direct contact is always the best approach.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I have community subscriptions, but this shouldn't make a difference. Changes in the last Proxmox releases have just broken multipath support; I don't see why I should need a more expensive subscription.

iSCSI may still be in widespread use, but Proxmox at least seems to be treating it as somewhat legacy. The problematic code sits in two Perl modules, but so far there has been no statement from Proxmox.
 
Changes in the last Proxmox releases have just broken Multipath support, I don't see why I should need a more expensive subscription.
I sympathize with your plight. The breaking change should have been configurable IMHO.
However, Multipath continues to work just fine for our customers who use iSCSI.

On the other hand, the change exposed a flaw in badly designed storage systems. They should not be leaking internal IPs to clients. That is just bad architecture.

You have a few options: a) wait for the PVE developers' reply/decision, b) roll back the change (you can find pre-change versions on GitHub or in older ISOs), or c) implement the filter yourself.

I'd say your most immediate path to resolution is to roll back to the pre-change versions.
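
If you go the rollback route, here is a minimal sketch of the idea, assuming the change lives in libpve-storage-perl and that a suitable pre-change version (the version string below is only a placeholder) is still available from your configured repositories:

Code:
# list the versions of the storage library available from your repos
apt list -a libpve-storage-perl

# downgrade to a pre-change version (placeholder version string - pick one from
# the list above that predates the portal discovery change)
apt install libpve-storage-perl=8.0.2

# keep apt from upgrading it again until a proper fix is released
apt-mark hold libpve-storage-perl

# restart the services that load the storage plugins
systemctl restart pvedaemon pveproxy pvestatd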

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yeah, but Proxmox is not exactly designed for rolling things back. I already thought about just replacing the Perl modules manually, but that's still not exactly nice :D
 
already thought about just replacing the perl modules manually, but still this is not exactly nice
That is exactly what I meant. Not the whole system.

Track down the change on the dev mailing list or via the read-only GitHub mirror and "un-patch" the relevant files. Obviously this would not be a supported/recommended path. Testing that it works and is not overwritten by the next update will be completely on you.
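
For illustration, a rough sketch of that approach, assuming the relevant code sits in the iSCSI plugin shipped by libpve-storage-perl (which file to edit and which commit to revert is up to you to verify against the git history):

Code:
# find where the iSCSI storage plugin is installed
dpkg -L libpve-storage-perl | grep -i iscsi
# typically something like /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm

# keep a copy before touching anything
cp /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm /root/ISCSIPlugin.pm.orig

# revert the change you identified in the git history, then restart the
# daemons that load the storage plugins
systemctl restart pvedaemon pveproxy pvestatd

# prevent the next update from silently overwriting the local change
apt-mark hold libpve-storage-perl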


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I am very disappointed with this issue.

Not all interfaces on my storage solution have IPs on networks that are routable from my Proxmox server. That was not a problem for Windows, Linux or even Proxmox 7.x clients, but after upgrading to Proxmox 8.x all interfaces of my iSCSI storage solution are discovered and multipath is forced, retrying impossible connections over and over again and degrading disk access.

I propose a temporary workaround, not a solution (be careful in production environments!), but maybe it will help you; the full command sequence is sketched right after the steps.

  1. Stop the VMs that are using iSCSI drives.
  2. Run iscsiadm -m node -o delete -T your_IQN --portal yourIP:yourPORT to delete the auto-discovered portals. Don't delete the portal you used to create the iSCSI storage! (You can see the iSCSI node records for your target under the /etc/iscsi/nodes/<your_IQN> folder.)
  3. Restart the iSCSI service: service iscsid restart
  4. Start the VMs that use iSCSI drives again.
The performance graphs under the Summary section and iSCSI access to the drives should work again.
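
As mentioned, a sketch of the whole sequence with placeholder IQN and portal values (only delete the node records for portals your host cannot reach):

Code:
# show all node records the initiator knows about
iscsiadm -m node

# delete the node record of an unreachable, auto-discovered portal
# (IQN and IP:port are placeholders - keep the portal your storage was defined with!)
iscsiadm -m node -o delete -T iqn.2000-01.com.example:target0 --portal 192.0.2.10:3260

# restart the iSCSI daemon and check the remaining sessions
service iscsid restart
iscsiadm -m session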
 
Yeah, I also work around it with iscsiadm -m node -o delete [..], but it's really, really annoying.

I know that it's basically the iSCSI vendor's faulty implementation, but previously Proxmox behaved in a way that made it easy to work around. It would really be good to get feedback from a Proxmox dev.
 
Hi, I have the same problem, I can't add a new iSCSI volume with Proxmox 8.1

Code:
iscsiadm -m node --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals

did they solve it?
 
Hi,
Hi, I have the same problem, I can't add a new iSCSI volume with Proxmox 8.1

Code:
iscsiadm -m node --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals

did they solve it?
can you show the output of iscsiadm -m node and iscsiadm -m session? Can the host reach all the portals listed in iscsiadm -m node?

As far as I can tell and as pointed out by others in this thread, the issues reported here are most likely due to the iSCSI target advertising portals that the host cannot reach, and pvestatd (via the iSCSI storage plugin) constantly retrying to log into the unavailable portal(s). As discussed before, the preferred and most robust solution is to configure the target such that it only advertises portals that the host can actually reach.
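
For example, a quick way to compare what the target advertises with what the host can actually reach (the portal addresses below are placeholders, and note that sendtargets discovery also refreshes the node records):

Code:
# ask the target which portals it advertises, using one portal you know is reachable
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# test plain TCP reachability of each advertised portal
nc -vz -w 2 192.0.2.10 3260
nc -vz -w 2 10.0.0.10 3260   # an unreachable internal portal would time out here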

That said, the timeout for these unsuccessful logins is currently (as of libpve-storage-perl 8.2.5) at ~120 seconds, which is much higher than it should be, and is probably part of the reason why nodes go grey. A patch to decrease this timeout to ~15 seconds was recently applied [1]. It is not yet available in a new package version, but I'll post here when it is.
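
Once the new package version is available, you can check whether it is installed with, for example:

Code:
pveversion -v | grep libpve-storage-perl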

We're also currently working on a better solution, but I cannot give an ETA on a patch.

[1] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=e16c816f97a865a709bbd2d8c01e7b1551b0cc97
 
Hi,

can you show the output of iscsiadm -m node and iscsiadm -m session? Can the host reach all the portals listed in iscsiadm -m node?

As far as I can tell and as pointed out by others in this thread, the issues reported here are most likely due to the iSCSI target advertising portals that the host cannot reach, and pvestatd (via the iSCSI storage plugin) constantly retrying to log into the unavailable portal(s). As discussed before, the preferred and most robust solution is to configure the target such that it only advertises portals that the host can actually reach.

That said, the timeout for these unsuccessful logins is currently (as of libpve-storage-perl 8.2.5) at ~120 seconds, which is much higher than it should be, and is probably part of the reason why nodes go grey. A patch to decrease this timeout to ~15 seconds was recently applied [1]. It is not yet available in a new package version, but I'll post here when it is.

We're also currently working on a better solution, but I cannot give an ETA on a patch.

[1] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=e16c816f97a865a709bbd2d8c01e7b1551b0cc97
Hi, I have no connectivity problems between my hosts and the SAN; my network is 10Gb Ethernet.

Code:
[root@srv-02 ~]# iscsiadm -m node
10.10.9.10:3260,1 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-a
10.10.10.10:3260,2 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-b
10.10.9.1:3260,1 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a
10.10.10.1:3260,2 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b

Code:
[root@srv-02 ~]# iscsiadm -m session
tcp: [1] 10.10.10.10:3260,2 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-b (non-flash)
tcp: [2] 10.10.9.10:3260,1 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-a (non-flash)
tcp: [3] 10.10.10.1:3260,2 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b (non-flash)
tcp: [4] 10.10.9.1:3260,1 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a (non-flash)

I have not yet updated to the latest version of Proxmox. I have access to the enterprise repo, but will updating help with this?

I tried modifying the timeouts in the iSCSI configuration (node.session.initial_login_retry_max = 0), but it did not help.
Right now I can't connect my new storage via iSCSI :(
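
For reference, a rough sketch of what I changed, using one of the node records from the output above, in case someone spots a mistake (as far as I understand, values in /etc/iscsi/iscsid.conf only apply to newly created node records, so existing records have to be updated explicitly):

Code:
# default for newly created node records
grep initial_login_retry_max /etc/iscsi/iscsid.conf

# update an already existing node record
iscsiadm -m node -T iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a \
    -p 10.10.9.1:3260 -o update \
    -n node.session.initial_login_retry_max -v 0

service iscsid restart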
 
If I list the disks, I don't see the correct size for sdh and sdi:

Code:
[root@srv-02 ~]# lsscsi -s
[0:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sda    120GB
[1:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sdb    120GB
[3:0:0:0]    disk    ATA      CT240BX500SSD1   052   /dev/sdc    240GB
[6:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdd   1.28TB
[6:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdf   6.15TB
[7:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sde   1.28TB
[7:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdg   6.15TB
[8:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdh    131kB
[9:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdi    131kB
 
Hi, I have no connectivity problems between my hosts and the SAN; my network is 10Gb Ethernet.

Code:
[root@srv-02 ~]# iscsiadm -m node
10.10.9.10:3260,1 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-a
10.10.10.10:3260,2 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-b
10.10.9.1:3260,1 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a
10.10.10.1:3260,2 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b

Code:
[root@srv-02 ~]# iscsiadm -m session
tcp: [1] 10.10.10.10:3260,2 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-b (non-flash)
tcp: [2] 10.10.9.10:3260,1 iqn.2021-08.com.storage2.domain.com:tgt-iscsi-a (non-flash)
tcp: [3] 10.10.10.1:3260,2 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b (non-flash)
tcp: [4] 10.10.9.1:3260,1 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a (non-flash)
Thanks. As there is one session to each discovered portal, this seems to be a different issue than the one discussed in this thread. Let's move the troubleshooting to the thread you opened [1]; I posted an answer there.

[1] https://forum.proxmox.com/threads/add-new-volume-with-iscsi.157006/
 
