TrueNAS Storage Plugin

I think I see a few issues.

1. You have the MTU on ens3f0.11 set to 9216. Usually 9216 is what you set the MTU to on the switch, so that header information can fit on top of a 9000 MTU packet. Is it supposed to be like that? An MTU mismatch between hosts will give you issues.

You can try pinging the different interfaces and hosts with the packet size set manually as well. For an MTU of 9000 it's usually safe to ping with a payload size of 8972 (9000 minus the 20-byte IP header and 8-byte ICMP header).

Example:

Bash:
# Send 100 pings with high packet size
ping -c 100 -s 8972 10.15.14.172
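
If you want the ping to actually exercise the MTU (rather than let the kernel fragment the packet), you can also set the do-not-fragment flag. This is plain ping, nothing plugin-specific:

Bash:
# -M do forbids fragmentation, so an oversized packet fails instead of being split
ping -M do -c 4 -s 8972 10.15.14.172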

2. Make sure you have the multipath service configured:
https://pve.proxmox.com/wiki/ISCSI_Multipath
Bash:
systemctl enable multipathd
systemctl start multipathd
systemctl status multipathd

Basically, make sure iscsid.conf is configured per the wiki and then restart multipath-tools.service.
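
For example, after editing the config files you can restart the service and sanity-check the maps (standard multipath-tools commands, nothing plugin-specific):

Bash:
# Restart multipath after changing /etc/multipath.conf or /etc/iscsi/iscsid.conf
systemctl restart multipath-tools.service
# List the multipath devices and the state of each path
multipath -ll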

3. Make sure the Proxmox nodes can see and log in to the portals. The plugin is supposed to do this automatically, but maybe there's an issue.

Bash:
iscsiadm -m discovery -t sendtargets -p 172.16.80.1:3260
iscsiadm -m discovery -t sendtargets -p 172.16.81.1:3260

4. Make sure your portals are set up on TrueNAS; if either of the commands above fails, that might be the issue.
Under Shares > iSCSI > Portals, make sure your portal ID has both interfaces listening.

If they are, attempt to log in manually:

Bash:
iscsiadm -m node -T YOURTRUENASBASEIQN:YOURTARGET --login

Example:

Bash:
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:proxmox --login
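
You can also list the node records that discovery created and confirm which sessions are established afterwards (standard open-iscsi commands):

Bash:
# Show every discovered node record (portal + target IQN)
iscsiadm -m node
# Show the sessions currently logged in (expect one per portal)
iscsiadm -m session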

---

If that all still isn't working, you can enable debug mode by editing your storage.cfg and adding "debug 2" to the config.

ex:

INI:
    use_multipath 1
    portals 10.20.30.20:3260,10.20.31.20:3260
    debug 2
    force_delete_on_inuse 1
    content images

This will dump a LOT of info into the journalctl logs. If you let it run for about 10 minutes with debug 2 on and then run the diagnostics bundler from the diagnostics menu in the alpha branch's install.sh, that would give me a pretty good idea of what's going on. You can PM me the bundle, as there's some sensitive information in there, though I try to have the installer redact that information.
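
To actually watch that output while you reproduce the issue, plain journalctl is enough (no plugin-specific log tag assumed):

Bash:
# Follow the journal live while debug 2 is enabled
journalctl -f
# Or review just the last 10 minutes once you're done
journalctl --since "10 minutes ago"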

Let me know what you find out.
It appears that my Proxmox boxes are targeting the "mgmt" interface.
Code:
root@dlk0entpve801:~# ping -c 100 -s 8972 172.16.80.1
PING 172.16.80.1 (172.16.80.1) 8972(9000) bytes of data.
8980 bytes from 172.16.80.1: icmp_seq=1 ttl=64 time=0.284 ms
8980 bytes from 172.16.80.1: icmp_seq=2 ttl=64 time=0.347 ms
8980 bytes from 172.16.80.1: icmp_seq=3 ttl=64 time=0.235 ms
8980 bytes from 172.16.80.1: icmp_seq=4 ttl=64 time=0.298 ms
8980 bytes from 172.16.80.1: icmp_seq=5 ttl=64 time=0.212 ms
^C
--- 172.16.80.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4102ms
rtt min/avg/max/mdev = 0.212/0.275/0.347/0.047 ms
root@dlk0entpve801:~# ping -c 100 -s 8972 172.16.81.1
PING 172.16.81.1 (172.16.81.1) 8972(9000) bytes of data.
8980 bytes from 172.16.81.1: icmp_seq=1 ttl=64 time=0.262 ms
8980 bytes from 172.16.81.1: icmp_seq=2 ttl=64 time=0.158 ms
8980 bytes from 172.16.81.1: icmp_seq=3 ttl=64 time=0.218 ms
^C
--- 172.16.81.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2044ms
rtt min/avg/max/mdev = 0.158/0.212/0.262/0.042 ms
root@dlk0entpve801:~# systemctl status multipathd
● multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/lib/systemd/system/multipathd.service; enabled; preset: e>
Active: active (running) since Tue 2026-01-13 21:42:33 EST; 1 day 12h ago
TriggeredBy: ● multipathd.socket
Process: 523216 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, stat>
Main PID: 523267 (multipathd)
Status: "up"
Tasks: 7
Memory: 36.1M
CPU: 59.629s
CGroup: /system.slice/multipathd.service
└─523267 /sbin/multipathd -d -s

Jan 14 11:13:33 dlk0entpve801 multipathd[523267]: reconfigure all (operator)
Jan 14 11:13:33 dlk0entpve801 multipathd[523267]: reconfigure: setting up paths>
Jan 14 11:13:34 dlk0entpve801 multipathd[523267]: reconfigure all (operator)
Jan 14 11:13:35 dlk0entpve801 multipathd[523267]: reconfigure: setting up paths>
Jan 14 11:17:29 dlk0entpve801 multipathd[523267]: reconfigure all (operator)
Jan 14 11:17:29 dlk0entpve801 multipathd[523267]: reconfigure: setting up paths>
Jan 14 11:17:29 dlk0entpve801 multipathd[523267]: reconfigure all (operator)
Jan 14 11:17:29 dlk0entpve801 multipathd[523267]: reconfigure all (operator)
Jan 14 11:17:29 dlk0entpve801 multipathd[523267]: reconfigure all (operator)
Jan 14 11:17:30 dlk0entpve801 multipathd[523267]: reconfigure: setting up paths>
root@dlk0entpve801:~# iscsiadm -m discovery -t sendtargets -p 172.16.80.1:3260
172.16.80.1:3260,1 iqn.2005-10.org.freenas.ctl:vm
10.20.35.12:3260,1 iqn.2005-10.org.freenas.ctl:vm
172.16.81.1:3260,1 iqn.2005-10.org.freenas.ctl:vm
root@dlk0entpve801:~# iscsiadm -m discovery -t sendtargets -p 172.16.81.1:3260
172.16.81.1:3260,1 iqn.2005-10.org.freenas.ctl:vm
10.20.35.12:3260,1 iqn.2005-10.org.freenas.ctl:vm
172.16.80.1:3260,1 iqn.2005-10.org.freenas.ctl:vm
root@dlk0entpve801:~# iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:dev-stor --login
iscsiadm: No records found
root@dlk0entpve801:~# iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vm --login
Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:vm, portal: 10.20.35.12,3260]
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
Login to [iface: default, target: iqn.2005-10.org.freenas.ctl:vm, portal: 10.20.35.12,3260] successful.
iscsiadm: Could not log into all portals
 
@warlocksyno
After several hours of AI research, here is our conclusion:
The TrueNAS Python middleware is simply refusing to generate the correct PORTAL_GROUP syntax required for IP isolation.

Since the middleware keeps overwriting your manual fixes, we have to use the "Post-Init" back door to force the kernel into compliance after the middleware finishes its broken routine.


Part 1: The "Immediate" Solution (Post-Init Persistence)

This bypasses the UI/Middleware bug by injecting the correct configuration directly into the kernel after the service starts.

  1. Navigate to: System Settings -> Advanced -> Init/Shutdown Scripts.
  2. Add a new script:
    • Type: Command
    • Description: Force iSCSI IP Isolation (Override Middleware Bug)
    • Command:
      Bash:
      # Wait for the middleware to finish starting SCST
      sleep 10

      # Force the kernel to bind the Target to the specific IPs only
      echo "add_target_attribute iqn.2005-10.org.freenas.ctl:vm allowed_portal 172.16.80.1" > /sys/kernel/scst_tgt/targets/iscsi/mgmt
      echo "add_target_attribute iqn.2005-10.org.freenas.ctl:vm allowed_portal 172.16.81.1" > /sys/kernel/scst_tgt/targets/iscsi/mgmt
    • When: Post-Init
  3. Save and Reboot (or run those commands manually now).
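
As a quick sanity check after the script runs, you can try reading the attribute back out of sysfs. This is a sketch under the assumption that SCST exposes allowed_portal as a file under the target's directory, which may not hold on this build:

Bash:
# Hypothetical read-back of the allowed_portal attribute (sysfs layout assumed)
ls /sys/kernel/scst_tgt/targets/iscsi/iqn.2005-10.org.freenas.ctl:vm/
cat /sys/kernel/scst_tgt/targets/iscsi/iqn.2005-10.org.freenas.ctl:vm/allowed_portal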

Part 2: Technical Report for TrueNAS Support

Subject: SCST Template Generation Bug: Failure to create PORTAL_GROUP in scst.conf (SCALE 25.04)

Environment:

  • OS: TrueNAS SCALE 25.04 (Fangtooth)
  • Networking: iSCSI on tagged VLAN interfaces (vlan11, vlan21) with Jumbo Frames (MTU 9216).
  • Storage Target: SCST driver.
The Problem: The middlewared template generator is failing to produce a valid PORTAL_GROUP section in /etc/scst.conf despite the Portal (ID 1) being configured in the database with specific IP addresses (172.16.80.1, 172.16.81.1).

Observations:

  1. Middleware Failure: The generated /etc/scst.conf lacks the PORTAL_GROUP header. Instead, it places the IPs as metadata comments in the INITIATOR lines (e.g., INITIATOR iqn...#172.16.80.1).
  2. Kernel State: Because the PORTAL_GROUP is missing, the iscsi-scst driver defaults to a wildcard listener (0.0.0.0:3260), exposing the storage network to the Management and Public subnets.
  3. Validation Loop: Manually adding PORTAL_GROUP to scst.conf causes a service crash on restart, indicating that the middleware is likely not initializing the driver-level portals required for the target-level group to bind.
  4. Interface State: midclt call interface.query confirms the IPs are present and active on the VLAN interfaces. MTU is 9216. No bridge interface is present to conflict with the bind.
Expected Behavior: The middleware should generate a configuration that includes:

Plaintext:

TARGET iqn... {
    PORTAL_GROUP default {
        allowed_portal 172.16.80.1
        allowed_portal 172.16.81.1
        ...
    }
}
Steps to Reproduce:

  1. Create a Portal with specific IPs on VLAN interfaces.
  2. Associate the Portal with a Target and an Initiator Group.
  3. Observe that ss -tulpn shows 0.0.0.0:3260 instead of the specified IPs.

Summary of the "Tail Chasing"

We confirmed that the TrueNAS Database (via midclt) is correct, and the Linux Kernel (via ip addr) is correct. The "Black Box" is the Python script that translates the Database into the scst.conf file. It is currently unable to handle specific portal bindings in this build, likely due to a regression in how it handles VLAN-backed portals.
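
If you want to see exactly what the database holds for the portal and target, midclt can query the iSCSI namespaces directly (assuming the iscsi.portal.query / iscsi.target.query methods are available in this release):

Bash:
# Dump the portal and target definitions straight from middlewared
midclt call iscsi.portal.query
midclt call iscsi.target.query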
 
Even after all of that:

Code:
truenas_admin@dlk0entsto801[~]$ ss -tulpn | grep 3260
tcp LISTEN 0 256 0.0.0.0:3260 0.0.0.0:*
tcp LISTEN 0 256 [::]:3260 [::]:*

This is the "Smoking Gun" of the TrueNAS iSCSI stack. With your background, you’ll recognize what’s happening here: SCST is not acting like a standard user-space daemon.

Because SCST is a kernel-level target, 0.0.0.0:3260 isn't necessarily a "socket" in the way Nginx or SSH uses one. In the current TrueNAS SCALE implementation, the iscsi-scst module loads a global listener into the kernel stack. Even when you tell a specific Target to only allow certain IPs, the driver itself continues to advertise port 3260 on all interfaces.

The allowed_portal attribute we injected earlier acts as a Layer 7 filter, not a Layer 3 bind. It tells the target: "If an initiator talks to me on the management IP, ignore the login request," but it doesn't stop the kernel from listening on that IP.

The Final "IT Pro" Verdict
On TrueNAS SCALE 25.04, true IP-level binding (restricting the listener itself) is currently broken or unsupported in the middleware.

If your security requirement is hard isolation (where the management IP must not even respond on 3260), you cannot rely on the iSCSI configuration alone. You have to move to the OS level:

1. The "Brute Force" Firewall Fix
Since we can't stop the kernel from listening, we block the traffic before it hits the SCST module. TrueNAS SCALE uses nftables.

Bash:
# Block iSCSI on the Management Interface (Assuming it's enp1s0 or similar)
# Replace 'mgmt_interface' with your actual management NIC name
sudo nft add rule inet fw-filter input iifname "mgmt_interface" tcp dport 3260 counter drop
2. Verify with a Port Scan
From your workstation or a Proxmox node on the management network:

Bash:
nmap -p 3260 10.20.35.12
If it shows Filtered or Closed, you have achieved your goal, even if the TrueNAS ss output still shows 0.0.0.0.
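
You can also confirm the nft rule from step 1 actually landed in the live ruleset (plain nft, nothing TrueNAS-specific beyond the table used above):

Bash:
# Show any rules referencing port 3260
sudo nft list ruleset | grep -B2 -A2 3260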

Summary for your Support Ticket
You should add this specific technical detail to the report I drafted for you:

"The iscsi-scst driver in SCALE 25.04 appears to be hard-coded to a global wildcard bind. Target-level allowed_portal attributes function only as initiator filters and do not restrict the kernel's listening sockets. True 'Portal Group' bind isolation is non-functional in the current middleware template."

One last thing you can try
If you want to see if it's even possible to drop the 0.0.0.0 listener manually:

Bash:
echo "del_target_portal 0.0.0.0 3260" | sudo tee /sys/kernel/scst_tgt/targets/iscsi/mgmt
If that returns Invalid argument or Not found, then the wildcard is compiled into the driver/module load options by iXsystems, and no amount of configuration will remove it.

The above "Post-Init" step works. It's ugly and a hack, but.
 
Hmm, that is actually default Linux behavior. It's something I ran into when configuring a multipath Proxmox cluster in a Pure Storage environment. VMware doesn't care if you have multiple interfaces on the same subnet and will only reply on the interface being addressed; Linux, by default, responds on any interface that has a route. So in your case it shouldn't be able to...

But I will investigate what's going on.

I do see that right here in your console output:
Bash:
root@dlk0entpve801:~# iscsiadm -m discovery -t sendtargets -p 172.16.80.1:3260
172.16.80.1:3260,1 iqn.2005-10.org.freenas.ctl:vm
10.20.35.12:3260,1 iqn.2005-10.org.freenas.ctl:vm
172.16.81.1:3260,1 iqn.2005-10.org.freenas.ctl:vm

---
edit:
---
My TrueNAS is only responding on the interfaces bound to a portal:
Code:
root@pve-m920x-1:~# iscsiadm -m discovery -t sendtargets -p 10.20.31.20:3260
10.20.31.20:3260,1 iqn.2005-10.org.freenas.ctl:proxmox
10.20.30.20:3260,1 iqn.2005-10.org.freenas.ctl:proxmox
10.20.31.20:3260,1 iqn.2005-10.org.freenas.ctl:iqn.2026-01.org.freenas.ctl:a8sd1
10.20.30.20:3260,1 iqn.2005-10.org.freenas.ctl:iqn.2026-01.org.freenas.ctl:a8sd1
10.20.31.20:3260,1 iqn.2005-10.org.freenas.ctl:iqn.2026-01.org.freenas.ctl:test-iscsi
10.20.30.20:3260,1 iqn.2005-10.org.freenas.ctl:iqn.2026-01.org.freenas.ctl:test-iscsi

Can you double-check that your iSCSI service does not have any other Portal IDs that include your management interface?
[screenshot: the Shares > iSCSI > Portals list in the TrueNAS UI]

If you have 0.0.0.0 in there too it will also respond on all available interfaces.


Another janky way of fixing it would be to just disallow the other networks on the iSCSI target in TrueNAS:
Something like this:
[screenshot: iSCSI target network settings with only the storage networks allowed]
You should be able to just have your high-speed networks on there.

Then tell iSCSI on Proxmox to log out of all sessions and log back in. After that, only the interfaces allowed to communicate will be logged in.
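
On the Proxmox side that would look roughly like this, logging out of everything and back in so only the allowed portals reconnect:

Bash:
# Log out of all iSCSI sessions, then log back in to every node record
iscsiadm -m node --logoutall=all
iscsiadm -m node --loginall=all
# Confirm which portals re-established sessions
iscsiadm -m session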
 