iSCSI Reconnecting every 10 seconds to FreeNAS solution

In FreeNAS, in the iSCSI configuration under iSNS, remove the entry. That will solve the issue. You will have to restart the service, which will cause an outage during the restart, so be sure to schedule your downtime.
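If you prefer the shell to the web UI, a restart along these lines should do it (a sketch, assuming the FreeNAS iSCSI target is the stock FreeBSD ctld daemon, which is what the log messages in this thread point at):

Code:
# Restart the iSCSI target from the FreeNAS shell. This drops all active
# initiator sessions, so run it inside your maintenance window.
service ctld restart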
 
FWIW, my symptoms were very similar, though not identical. I have 3 PVE hosts, each with an iSCSI connection to a virtualized instance of TrueNAS. I kept seeing the following messages:

Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.9: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.10: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.11: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.9: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.10: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.11: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.9: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.10: read: connection lost
Nov 11 09:12:28 truenas 1 2020-12-09T22:12:28.797576+01:00 MYDOMAIN.tld ctld 63152 - - 86.75.30.11: read: connection lost

Ignore the timestamps, domain name, and IP addresses (bonus points if you pick up on the silliness) -- I just copied/pasted to replicate what it looked like.

I personally don't receive the "...child process [blah] terminated with exit status 1..." message that some of the other members here are getting, just the "...read: connection lost..." messages.

Anyhow, I was able to solve this using the following syntax:

Code:
#
# proxmox filters
#
filter f_cut_ctld01 { program("ctld") and message("86.75.30.9: read: connection lost"); };
filter f_cut_ctld02 { program("ctld") and message("86.75.30.10: read: connection lost"); };
filter f_cut_ctld03 { program("ctld") and message("86.75.30.11: read: connection lost"); };

# These log statements have no destination, so anything they match goes
# nowhere; flags(final) keeps later log statements from seeing it too.
log { source(src); filter(f_cut_ctld01); flags(final); };
log { source(src); filter(f_cut_ctld02); flags(final); };
log { source(src); filter(f_cut_ctld03); flags(final); };

Also, not sure whether it matters, but I placed this code into the syslog-ng.conf file directly between these two blocks:

"message filters"

and

"*.err;kern.warning;auth.notice;mail.crit /dev/console"

Then run service syslog-ng reload, and Bob's your uncle.
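If you want to sanity-check it, watch the log after the reload; the ctld noise should stop showing up (assuming the messages land in /var/log/messages, which may differ on your build):

Code:
# Watch for any surviving ctld messages after the reload.
# /var/log/messages is an assumption -- adjust if your logs go elsewhere.
tail -f /var/log/messages | grep ctld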

Hope this helps someone out there -- cheers!
 
Did you remove the iSNS setting from freenas? Doing anything with the syslog server is just going to make it not log the problem.
 
If you are referring to Sharing > Block Shares (iSCSI) > Target Global Configuration > Global Configuration > ISNS Servers, I have never had anything defined there.
 
Hi, newbie here. I am trying to mute the messages using the filters suggested in this post, but I don't seem to have an /etc/local path and can't find a syslog-ng.conf file anywhere else. Did the name/service change in a later TrueNAS release? I am barely a week into messing with this, so I'm very fresh, but I do have the read: connection lost messages on the console.
@nowoe already touched on this: it's not a bug; it's expected, if unfortunate, behavior. Fortunately, there's an easy fix on the FreeNAS side that doesn't involve crippling either FreeNAS or Proxmox. Taken from an ixsystems thread (can't link since I'm a new user):

I added the following at the end of the filter section of /etc/local/syslog-ng.conf on FreeNAS:

Code:
#
# Proxmox Filters
#
filter f_cut_ctld01 {
        program("ctld") and
        message("192.168.1.2: read: connection lost");
};

filter f_cut_ctld02 {
        program("ctld") and
        message("child process") and
        message("terminated with exit status 1");
};

log { source(src); filter(f_cut_ctld01); flags(final); };
log { source(src); filter(f_cut_ctld02); flags(final); };

Save, exit, and run
Code:
service syslog-ng reload
Anything matching those parameters will be blocked. Note that I don't know what will happen if Proxmox has a legitimate failure and drops connections; you may lose those log messages. But that may be worth the sanity of having clean logs.
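If dropping them outright feels risky, a middle ground is to divert the matches to their own file instead of discarding them; a sketch (the destination name and path are mine, pick whatever you like):

Code:
# Alternative: quarantine the noise in a separate file rather than
# dropping it, so a genuine failure is still traceable after the fact.
destination d_ctld_noise { file("/var/log/ctld-noise.log"); };
log { source(src); filter(f_cut_ctld01); destination(d_ctld_noise); flags(final); };
log { source(src); filter(f_cut_ctld02); destination(d_ctld_noise); flags(final); };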
 
Hi,

Just wanted to share my solution for this. I've submitted it to the pve-devel list. With a 5-node Proxmox cluster this constant traffic was quite noisy, even after filtering it on the other end. I'm attaching a copy of the file I'm using on 8.2.4/8.3.0 with no issues; feel free to use it at your own discretion.


https://lists.proxmox.com/pipermail/pve-devel/2025-June/071749.html


Bash:
### Implementation

# scp over or create the attached text file - /root is used here as an example

# Validate no prior issues with pvestatd / pvedaemon
journalctl -xeu pvestatd
journalctl -xeu pvedaemon

# Make a backup
cp -a /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm /root/ISCSIPlugin.pm.bak

# Validate your backup - diff should return no difference
ls -la /root/ISCSIPlugin.pm.bak
diff /root/ISCSIPlugin.pm.bak /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm

# Replace with new version
cp -a /root/ISCSIPlugin.pm.txt /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm

# Restart PVE services and confirm they come back up without errors
systemctl restart pvedaemon
systemctl status pvedaemon

systemctl restart pvestatd
systemctl status pvestatd

journalctl -xeu pvestatd
journalctl -xeu pvedaemon

### Roll back if any errors encountered

# Restore backup file
cp -a /root/ISCSIPlugin.pm.bak /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm

# Restart PVE services and confirm they come back up without errors
systemctl restart pvedaemon
systemctl status pvedaemon

systemctl restart pvestatd
systemctl status pvestatd

journalctl -xeu pvestatd
journalctl -xeu pvedaemon
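Worth flagging (my assumption, not part of the patch itself): ISCSIPlugin.pm is shipped by a Proxmox package, so a routine apt upgrade can overwrite the patched copy. You can check which package owns it and re-run the diff after upgrades:

Bash:
# Assumption: the file is package-managed, so upgrades may revert the patch.
dpkg -S /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm

# Re-check after any upgrade; a non-empty diff means the patch was reverted.
diff /root/ISCSIPlugin.pm.txt /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm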
 
