Shared iSCSI with different path per PVE node

AngryAnt

Member
Mar 13, 2021
I have been running an iSCSI setup successfully for a while, but since updating to PVE8 the experience has been less than stable, and I am no longer certain that my functional setup is also a supported one. I would appreciate any insight into what might be off here.

Storage is a four-port Synology box with 14 iSCSI targets, each exposing a single LUN on three of the ports, with each port holding a static IP on a different subnet.

- Three PVE nodes sit in a cluster, each with at least two Ethernet ports: one for general traffic (of which management is one VLAN) and one for iSCSI traffic, with a static IP on one of the three subnets (one node per subnet).
- The hosts file of each node has an entry for "storage" which maps to the IP of the Synology port on the same subnet as the node's storage port.
- Cluster storage has entries for each iSCSI target, one example being:
Code:
iscsi: NAS-Media-iSCSI
    portal storage
    target iqn.2000-01.com.synology:NAS.Media.<ID>
    content none

lvm: NAS-Media
    vgname NAS-Media
    base NAS-Media-iSCSI:0.0.1.scsi-<ID>
    content images
    shared 1
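For reference, the per-node hosts mapping and a quick way to verify the resulting session look roughly like this (the IP is illustrative, not my actual addressing):
Code:
# /etc/hosts on a node whose storage port sits on the first subnet (example IP)
10.10.1.2    storage

# verify the logged-in session and PVE storage status
iscsiadm -m session
pvesm status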
As alluded to initially, each node successfully mounts the iSCSI targets and uses the LVM volume group on the LUN for VM disks. However, the syslog on each node also reports errors about being unable to connect to the two other subnets the targets are available on. Is this an actual problem I can fix? If not, can I at least quiet it down? Or have I completely stepped in it with this setup?

Ex:
Code:
<time> <node> iscsid[2094]: Connection-1:0 to [target: iqn.2000-01.com.synology:NAS.Media.<ID>, portal: <IP on subnet this node has no access to>,3260] through [iface: default] is shutdown.
 
OK so it turns out the silence was less "that looks fine to me" and more "that's a weird corner case - no clue".

1: Is this an actual problem?
Yes. On each evaluation, pvestatd tries to connect to all known iSCSI nodes - the discovered ones as well as the ones you explicitly specified. I have found no way to change this behaviour: switching the scan setting in /etc/iscsi/iscsid.conf from automatic to manual has no effect, and entries removed via iscsiadm seem to be re-created by the next pvestatd run (or some other PVE service).
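For anyone wanting to try anyway, the node records in question can be listed and deleted like this (the target IQN and portal are placeholders) - though as noted, in my case they reappeared on the next run:
Code:
# list cached node records (target/portal pairs iscsid knows about)
iscsiadm -m node

# delete the record for a portal this node cannot reach
iscsiadm -m node -o delete -T iqn.2000-01.com.synology:NAS.Media.<ID> -p <unreachable IP>:3260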

Unfortunately the connection attempts invoked by pvestatd do not just produce log spam - they happen sequentially, which means that for each unreachable iSCSI node record the system waits for a connection timeout before moving on. Given enough iSCSI node records, these timeouts add up until pvestatd itself times out. At minimum this shows the PVE node as disconnected in the UI, making functions like migration and VM creation unavailable, but it also seems to cause general instability. My guess is that PVE8 either increased some iSCSI timeout, decreased the pvestatd one, or made the system more sensitive to it in general.
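If you suspect you are hitting the same thing, the timeouts should be visible by watching pvestatd directly while the iSCSI connection errors appear (plain systemd tooling, nothing PVE-specific assumed):
Code:
journalctl -u pvestatd -f
systemctl status pvestatd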

2: Can I fix it?
Yes. Don't use a segmented network structure like that. It was a legacy decision on my part from before the nodes were clustered; I figured having more explicit control over per-node bandwidth might still be useful, so I kept it. Unfortunately the PVE storage manager was clearly not implemented with this use case in mind - it assumes all iSCSI nodes are equally reachable.

To fix the issue at hand, I moved the storage interfaces on both ends onto the same subnet and pruned the cached iSCSI node entries with the old IPs via iscsiadm - roughly as sketched below.
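Concretely, the per-node change was along these lines (interface name, subnet, and addresses are examples, not my actual values):
Code:
# /etc/network/interfaces - storage NIC moved onto the shared subnet
auto eno2
iface eno2 inet static
    address 10.10.10.11/24

# apply the network change (ifupdown2 is the default on recent PVE)
ifreload -a

# prune stale node records pointing at the old per-subnet portals
iscsiadm -m node -o delete -p <old portal IP>:3260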

Immediately the logs cleared up, and with pvestatd running unimpeded, service stability returned to normal. I intend to set up multipath for all iSCSI targets on all PVE nodes (starting point sketched below) and hopefully will end up with even better bandwidth utilization than before.
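For the multipath part, my starting point is the stock multipath-tools setup - a minimal sketch, not a tested config for this particular Synology model:
Code:
apt install multipath-tools

# /etc/multipath.conf - minimal defaults
defaults {
    user_friendly_names yes
    find_multipaths yes
}

# after logging the targets in on all portals, verify the paths
multipath -ll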
 