open-iscsi.service failed to start after update to Proxmox 9.0.10 (kernel 6.14.11-3-pve)

AndreasS — New Member — Sep 18, 2025 — Aachen, Germany
Hi all,

I get "open-iscsi.service failed to start" when booting one of my nodes after upgrading it via the GUI to Proxmox 9.0.10 (kernel 6.14.11-3-pve, coming from 6.14.11-2-pve).

Has anyone seen the same? I just wanted to check before upgrading any of the other nodes and trying to track down the error.

Thanks,
Andreas
 
Hi,
the service runs fine for me. Please share the full system logs/journal from around the time of the failure as well as the output of pveversion -v.
 
Hi Fiona,

the logs are reasonably small but contain internal information. How can I share them here without compromising things like IP addresses and server names?
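For the logs below I masked the sensitive parts before pasting. A small sketch of how this can be done, assuming `supsiprox02` is the node's hostname (adjust for your own) and the IP masking scheme is just an example:

```shell
# sanitize: blank the last two octets of IPv4 addresses and
# replace the node hostname with a neutral placeholder.
sanitize() {
  sed -E 's/([0-9]{1,3}\.[0-9]{1,3})\.[0-9]{1,3}\.[0-9]{1,3}/\1.xxx.yyy/g' \
    | sed 's/supsiprox02/NODE/g'
}

# Typical use: write a shareable copy of the relevant journal entries.
# journalctl -b -u open-iscsi.service -u iscsid.service | sanitize > iscsi-sanitized.log

# Demo on a single log line:
echo 'Oct 08 supsiprox02 portal: 10.1.19.192,3260' | sanitize
# -> Oct 08 NODE portal: 10.1.xxx.yyy,3260
```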

# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.11-3-pve)
pve-manager: 9.0.10 (running version: 9.0.10/deb1ca707ec72a89)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-3-pve-signed: 6.14.11-3
proxmox-kernel-6.14: 6.14.11-3
proxmox-kernel-6.14.11-2-pve-signed: 6.14.11-2
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.11
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.8
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-1
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.15-1
proxmox-backup-file-restore: 4.0.15-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.2
proxmox-widget-toolkit: 5.0.6
pve-cluster: 9.0.6
pve-container: 6.0.13
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.17-1
pve-ha-manager: 5.0.4
pve-i18n: 3.6.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.22
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve2
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1

Oct 08 16:47:53 supsiprox02 systemd[1]: Failed to start open-iscsi.service - Login to default iSCSI targets.
Oct 08 16:55:46 supsiprox02 systemd[1]: Starting open-iscsi.service - Login to default iSCSI targets...
Oct 08 16:55:49 supsiprox02 systemd[1]: Finished open-iscsi.service - Login to default iSCSI targets.
Oct 08 16:56:55 supsiprox02 systemd[1]: Stopping open-iscsi.service - Login to default iSCSI targets...
Oct 08 16:56:56 supsiprox02 systemd[1]: Stopped open-iscsi.service - Login to default iSCSI targets.
Oct 08 16:56:56 supsiprox02 systemd[1]: Stopping iscsid.service - iSCSI initiator daemon (iscsid)...
Oct 08 16:56:56 supsiprox02 systemd[1]: Stopped iscsid.service - iSCSI initiator daemon (iscsid).
Oct 08 16:57:00 supsiprox02 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 08 17:00:03 supsiprox02 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 08 17:00:05 supsiprox02 systemd[1]: Starting open-iscsi.service - Login to default iSCSI targets...
Oct 08 17:00:05 supsiprox02 systemd[1]: Starting iscsid.service - iSCSI initiator daemon (iscsid)...
Oct 08 17:00:05 supsiprox02 iscsid[2004]: iSCSI logger with pid=2010 started!
Oct 08 17:00:05 supsiprox02 systemd[1]: Started iscsid.service - iSCSI initiator daemon (iscsid).
Oct 08 17:00:05 supsiprox02 kernel: Loading iSCSI transport class v2.0-870.
Oct 08 17:00:06 supsiprox02 iscsid[2010]: iSCSI daemon with pid=2011 started!
Oct 08 17:00:06 supsiprox02 kernel: scsi host15: iSCSI Initiator over TCP/IP
Oct 08 17:00:06 supsiprox02 kernel: scsi host16: iSCSI Initiator over TCP/IP
Oct 08 17:00:06 supsiprox02 kernel: scsi host17: iSCSI Initiator over TCP/IP
Oct 08 17:00:06 supsiprox02 kernel: scsi host18: iSCSI Initiator over TCP/IP
Oct 08 17:00:06 supsiprox02 kernel: scsi host19: iSCSI Initiator over TCP/IP
Oct 08 17:00:06 supsiprox02 kernel: scsi host20: iSCSI Initiator over TCP/IP
Oct 08 17:00:06 supsiprox02 kernel: scsi host21: iSCSI Initiator over TCP/IP
Oct 08 17:00:06 supsiprox02 kernel: scsi host22: iSCSI Initiator over TCP/IP
Oct 08 17:00:12 supsiprox02 systemd[1]: Failed to start open-iscsi.service - Login to default iSCSI targets.
Oct 08 17:18:26 supsiprox02 systemd[1]: Stopping iscsid.service - iSCSI initiator daemon (iscsid)...
Oct 08 17:18:26 supsiprox02 systemd[1]: Stopped iscsid.service - iSCSI initiator daemon (iscsid).
Oct 08 17:18:30 supsiprox02 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 08 17:21:35 supsiprox02 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 08 17:21:37 supsiprox02 systemd[1]: Starting open-iscsi.service - Login to default iSCSI targets...
Oct 08 17:21:37 supsiprox02 systemd[1]: Starting iscsid.service - iSCSI initiator daemon (iscsid)...
Oct 08 17:21:37 supsiprox02 iscsid[2008]: iSCSI logger with pid=2015 started!
Oct 08 17:21:37 supsiprox02 systemd[1]: Started iscsid.service - iSCSI initiator daemon (iscsid).
Oct 08 17:21:37 supsiprox02 kernel: Loading iSCSI transport class v2.0-870.
Oct 08 17:21:38 supsiprox02 iscsid[2015]: iSCSI daemon with pid=2016 started!
Oct 08 17:21:38 supsiprox02 kernel: scsi host15: iSCSI Initiator over TCP/IP
Oct 08 17:21:38 supsiprox02 kernel: scsi host16: iSCSI Initiator over TCP/IP
Oct 08 17:21:38 supsiprox02 kernel: scsi host17: iSCSI Initiator over TCP/IP
Oct 08 17:21:38 supsiprox02 kernel: scsi host18: iSCSI Initiator over TCP/IP
Oct 08 17:21:38 supsiprox02 kernel: scsi host19: iSCSI Initiator over TCP/IP
Oct 08 17:21:38 supsiprox02 kernel: scsi host20: iSCSI Initiator over TCP/IP
Oct 08 17:21:38 supsiprox02 kernel: scsi host21: iSCSI Initiator over TCP/IP
Oct 08 17:21:38 supsiprox02 kernel: scsi host22: iSCSI Initiator over TCP/IP
Oct 08 17:21:44 supsiprox02 systemd[1]: Failed to start open-iscsi.service - Login to default iSCSI targets.

root@prox02:~# systemctl status open-iscsi.service
× open-iscsi.service - Login to default iSCSI targets
Loaded: loaded (/usr/lib/systemd/system/open-iscsi.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Thu 2025-10-09 11:30:32 CEST; 10s ago
Duration: 38.234s
Invocation: a0ea12f1c5744f5db3ed32a6aa9f6400
Docs: man:iscsiadm(8)
man:iscsid(8)
Process: 353802 ExecStart=/usr/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=8)
Main PID: 353802 (code=exited, status=8)
Mem peak: 1.8M
CPU: 11ms

Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.19.192,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.19.193,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.19.190,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.198,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.195,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.196,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.197,3260] successful.
Oct 09 11:30:32 prox02 systemd[1]: open-iscsi.service: Main process exited, code=exited, status=8/n/a
Oct 09 11:30:32 prox02 systemd[1]: open-iscsi.service: Failed with result 'exit-code'.
Oct 09 11:30:32 prox02 systemd[1]: Failed to start open-iscsi.service - Login to default iSCSI targets.


root@prox02:~# journalctl -xeu open-iscsi.service
A start job for unit open-iscsi.service has begun execution.

The job identifier is 426843.
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.20.202,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.20.201,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.20.203,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.20.200,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.20.206,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.20.208,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.20.207,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.20.205,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not log into all portals
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.19.191,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.19.192,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.19.193,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa, portal: 10.xxx.19.190,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.198,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.195,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.196,3260] successful.
Oct 09 11:30:32 prox02 iscsiadm[353802]: Login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.19.197,3260] successful.
Oct 09 11:30:32 prox02 systemd[1]: open-iscsi.service: Main process exited, code=exited, status=8/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit open-iscsi.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 8.
Oct 09 11:30:32 prox02 systemd[1]: open-iscsi.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit open-iscsi.service has entered the 'failed' state with result 'exit-code'.
Oct 09 11:30:32 prox02 systemd[1]: Failed to start open-iscsi.service - Login to default iSCSI targets.
░░ Subject: A start job for unit open-iscsi.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit open-iscsi.service has finished with a failure.
░░
░░ The job identifier is 426843 and the job result is failed.
 
The service failed with exit status 8, which according to the manual (man 8 iscsiadm) is:
Code:
ISCSI_ERR_TRANS_TIMEOUT - connection timer expired while trying to connect.

This can also be seen in the logs you posted, e.g.
Code:
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not login to [iface: default, target: iqn.1234-12.com.dell:01.array.bbbbbbbbbbbb, portal: 10.xxx.20.205,3260].
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: initiator reported error (8 - connection timed out)
Oct 09 11:30:32 prox02 iscsiadm[353802]: iscsiadm: Could not log into all portals
 
I know. There are 8 ports on our storage: 4 of them can be connected to (the 10.xxx.19.yyy range), while the other 4 (the 10.xxx.20.yyy range) are reserved for intra-storage replication. Nevertheless, the appliance announces all 8 corresponding IP addresses as portals instead of just 4. This might be a questionable implementation on the Dell storage side, but Proxmox will never be able to reach those 4 intra-storage ports.

Plus: on all my other nodes, which have the exact same configuration, the problem does not occur. They log the same timeout messages for the .20. IPs, but the service does not fail to start.

iSCSI itself is working on the affected node: I can run VMs on it and they see their storage. I just don't want to go into production with these kinds of errors.

Is there a way to avoid auto-discovery of all iSCSI targets, and instead pin the 4 portals we actually use while blacklisting the 4 unused ones?
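From reading man 8 iscsiadm, what I have in mind would presumably be setting the node records for the unreachable portals to manual startup, something like the following (target/portal masked as above; I have not verified whether PVE's periodic discovery would revert this):
Code:
# list the discovered node records (portal/target pairs)
iscsiadm -m node

# disable automatic login for one unreachable replication portal
iscsiadm -m node -T iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa \
  -p 10.xxx.20.200:3260 --op update -n node.startup -v manual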

Best regards,
Andreas
 
Hey @bbgeek17,

this is indeed a good starting point. Do you know if I can safely replace the portal IP in /etc/pve/storage.cfg with another one via the CLI and just reboot to see if it works? Or do I have to do this in a different place?

The cluster is not in production yet, but I don't want to end up reinstalling or endlessly fixing things if the above fails.

Cheers,
Andreas
 
Do you know if I can safely replace the portal IP in /etc/pve/storage.cfg with another one via the CLI and just reboot to see if it works? Or do I have to do this in a different place?
You can modify /etc/pve/storage.cfg at any time - it will not affect already established iSCSI sessions (or any other storage that's already active). New sessions will take the updated configuration. On reboot, all sessions will be adjusted.
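For reference, an iscsi entry in /etc/pve/storage.cfg typically looks something like this (storage ID, portal, and target here are placeholder values based on the masked logs in this thread):
Code:
iscsi: dell-array
        portal 10.xxx.19.190
        target iqn.1234-12.com.dell:01.array.aaaaaaaaaaaa
        content none
Changing the portal line is done in this one place; PVE derives its discovery and logins from it.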

That said, I’m not sure what changing it will accomplish in your case. You mentioned that other nodes with seemingly identical configurations are not showing the same error or issue. I would focus on identifying the differences between the working and non-working nodes first. Once you’ve done that, match the system behavior to one of the four variants described in the article, and then decide whether you want to configure iSCSI accordingly.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
The difference between the nodes is that the failing node got the update to the latest PVE kernel and the others haven't yet.

Apart from that, I wanted to test whether the advertising from the Dell storage end works properly if I use the management IP of the controllers rather than the iSCSI port IPs as the auto-discovery portal; this works on a Windows cluster connected to the same storage.

If all else fails, I will open a case with Dell so they can explain why all ports are being advertised even though some are explicitly marked as unusable for host-target connections.
 
Apart from that, I wanted to test whether the advertising from the Dell storage end works properly if I use the management IP of the controllers rather than the iSCSI port IPs as the auto-discovery portal; this works on a Windows cluster connected to the same storage.
PVE runs the iscsiadm discovery, similar to what is mentioned here: https://kb.blockbridge.com/technote...nderstand-multipath-reporting-for-your-device

You can manually run the discovery against the mgmt IP. However, if you want to plug it into storage.cfg - go for it!
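A manual sendtargets discovery against the management IP would look roughly like this (MGMT_IP is a placeholder for your controller's management address):
Code:
iscsiadm -m discovery -t sendtargets -p MGMT_IP:3260
The output lists every portal/target pair the array advertises, which should quickly show whether the management IP advertises only the 4 reachable host-facing portals or all 8.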


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox