Cluster with Synology shared storage

Egert143

Hello

I made a cluster with Synology shared storage over iSCSI.

The problem I am facing is that each node in the cluster is logging the following syslog messages:

Mar 10 00:24:06 Node01 iscsid[1584]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 10 00:24:09 Node01 iscsid[1584]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)

It seems the iSCSI connection is tried over IPv6, but I only use IPv4. In Synology I can't find a way to disable IPv6 since it's in HA mode. Can I disable IPv6 in Proxmox, or is there a setting so it won't use IPv6 for iSCSI connections?
 
Can I disable IPv6 in Proxmox
This is Debian. On some of my systems I was forced to disable IPv6 and I did it this way:
Code:
~# cat /etc/sysctl.d/00-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

This disables the complete IPv6 network stack on this host, so be sure this is what you want! Ref: man sysctl.conf
 
I added the conf as described and rebooted the server.
When the server starts I see the following lines:
Mar 11 12:52:57 Prox03 iscsiadm[1563]: Logging in to [iface: default, target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: 10.150.231.30,3260]
Mar 11 12:52:57 Prox03 iscsiadm[1563]: Logging in to [iface: default, target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: fe80::9209:d0ff:fe3d:adc3,3260]
Mar 11 12:52:57 Prox03 iscsiadm[1563]: Logging in to [iface: default, target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: 10.150.231.22,3260]
Mar 11 12:52:57 Prox03 iscsiadm[1563]: Logging in to [iface: default, target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: fe80::9209:d0ff:fe3d:adc4,3260]
Mar 11 12:52:57 Prox03 iscsiadm[1563]: Login to [iface: default, target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: 10.150.231.30,3260] successful.
Mar 11 12:52:57 Prox03 iscsiadm[1563]: Login to [iface: default, target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: 10.150.231.22,3260] successful.

and then it goes back to:

Mar 11 12:55:29 Prox03 iscsid[1586]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 12:55:29 Prox03 iscsid[1586]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)

Any more ideas? :)
 
Any more ideas?
No. But did you activate the settings as advised in the man page? Something like sysctl -p /etc/sysctl.d/00-disable-ipv6.conf should work. (Also a reboot will work, but that is... overkill.)

In any case, verify (and post) the output of the usual ip address show; it should NOT contain any inet6 fe80... lines anymore!
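
A quick check (a minimal sketch) that should print nothing once the IPv6 stack is fully disabled:
Code:
~# ip address show | grep inet6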
 
root@Prox03:~# ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enp75s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 7c:c2:55:23:e9:2e brd ff:ff:ff:ff:ff:ff
inet 10.150.231.21/29 scope global enp75s0f0
valid_lft forever preferred_lft forever
3: enp75s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 7c:c2:55:23:e9:2f brd ff:ff:ff:ff:ff:ff
inet 10.150.231.29/29 scope global enp75s0f1
valid_lft forever preferred_lft forever
4: enp152s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:ee brd ff:ff:ff:ff:ff:ff
5: enp152s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:ee brd ff:ff:ff:ff:ff:ff permaddr 3c:ec:ef:fc:1e:ef
6: enxbe3af2b6059f: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether be:3a:f2:b6:05:9f brd ff:ff:ff:ff:ff:ff
7: enp152s0f2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:f0 brd ff:ff:ff:ff:ff:ff
inet 10.150.232.36/28 scope global enp152s0f2
valid_lft forever preferred_lft forever
8: enp152s0f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:f1 brd ff:ff:ff:ff:ff:ff
inet 10.150.232.52/28 scope global enp152s0f3
valid_lft forever preferred_lft forever
9: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:ee brd ff:ff:ff:ff:ff:ff
10: bond0.12@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v12 state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:ee brd ff:ff:ff:ff:ff:ff
11: vmbr0v12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:ee brd ff:ff:ff:ff:ff:ff
inet 10.150.231.4/29 scope global vmbr0v12
valid_lft forever preferred_lft forever
12: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3c:ec:ef:fc:1e:ee brd ff:ff:ff:ff:ff:ff


No more IPv6 in this output after I added the previous code.


But the errors are still present:

Mar 11 14:01:26 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
Mar 11 14:01:29 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 14:01:29 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
Mar 11 14:01:32 Prox03 pvestatd[2025]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333 --login' failed: exit code 15
Mar 11 14:01:32 Prox03 iscsid[1580]: Connection-1:0 to [target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: fe80::9209:d0ff:fe3d:adc3,3260] through [iface: default] is shutdown.
Mar 11 14:01:32 Prox03 iscsid[1580]: Connection-1:0 to [target: iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333, portal: fe80::9209:d0ff:fe3d:adc4,3260] through [iface: default] is shutdown.
Mar 11 14:01:32 Prox03 pvestatd[2025]: status update time (244.251 seconds)
Mar 11 14:01:33 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 14:01:33 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
Mar 11 14:01:38 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 14:01:38 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
Mar 11 14:01:41 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 14:01:41 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
Mar 11 14:01:44 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 14:01:44 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
Mar 11 14:01:47 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 14:01:47 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
Mar 11 14:01:50 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc3:3260 (-1,22)
Mar 11 14:01:50 Prox03 iscsid[1580]: connection-1:0 cannot make a connection to fe80::9209:d0ff:fe3d:adc4:3260 (-1,22)
 
I once dabbled a bit with iSCSI (but not actively); possibly you need to clear the target database, as it may be remembering the IPv6 entries.

What does this show you:
Code:
ls -R /var/lib/iscsi/send_targets/
 
Does this help:

ls -R /etc/iscsi/send_targets/
/etc/iscsi/send_targets/:
10.150.231.22,3260 10.150.231.30,3260

/etc/iscsi/send_targets/10.150.231.22,3260:
iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,10.150.231.22,3260,1,default
iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,10.150.231.30,3260,1,default
iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,fe80::9209:d0ff:fe3d:ada9,3260,1,default
iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,fe80::9209:d0ff:fe3d:adaa,3260,1,default
st_config

/etc/iscsi/send_targets/10.150.231.30,3260:
st_config
 
@Egert143 you have likely found the primary culprit of your issue already in the other thread you posted:
https://bugzilla.proxmox.com/show_bug.cgi?id=5173

If it is indeed related to the new multi-portal discovery, then your immediate option is to prevent your storage array from advertising IPv6 targets, or to properly configure PVE to support IPv6. The best place to find out how to do that is the Synology forum/support.

Keep in mind that fe80:: is a link-local address (i.e. an IPv6 address automatically assigned in the absence of any others). https://networkengineering.stackexchange.com/questions/24749/what-is-link-local-addressing

It'd be interesting to capture a network trace during iSCSI discovery and confirm that Synology sends fe80:: addresses as part of the portal listings; then you could add that to the bug above.
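
A rough sketch of such a capture (enp75s0f0 is just taken from your ip address show output above; use whichever NIC carries the iSCSI traffic):
Code:
tcpdump -ni enp75s0f0 -s 0 -w iscsi-discovery.pcap port 3260
Re-running the discovery while this capture runs should show whether the SendTargets response lists the fe80:: portals.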


 
Just saw your reply while posting. Thanks.

iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,fe80::9209:d0ff:fe3d:ada9,3260,1,default
iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,fe80::9209:d0ff:fe3d:adaa,3260,1,default
Try deleting these two. (Back them up before!)
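
One possible way to do that (an untested sketch; the paths follow open-iscsi's on-disk layout, and the long record names are copied from your ls output above):
Code:
~# cd /etc/iscsi/send_targets/10.150.231.22,3260
~# tar czf /root/send_targets-backup.tgz .
~# rm -r iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,fe80::9209:d0ff:fe3d:ada9,3260,1,default
~# rm -r iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333,fe80::9209:d0ff:fe3d:adaa,3260,1,default
There may be matching records under /etc/iscsi/nodes/ that need the same treatment.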

Also see this.

Sorry I can't help you more.
 
See this and I quote:
I finally was able to figure out the problem. Even though I had IPv6 turned off on my box, even after I removed all targets, I noticed that when I re-discovered, it was still pulling both the IPv4 and IPv6 connection from the Synology Diskstation NAS.
I checked settings in the Synology and lo and behold it had its IPv6 turned on. I statically assigned IPv4, but didn't even look at IPv6.
Removed everything on my box again and re-discovered. STILL had both IPv4 and IPv6. After digging deeper into the Synology NAS, there is a setting in the ISCSI Manager -->Target-->Network Bindings that is checked for 'All Network Interfaces'. Not sure why it would still announce it's IPv6 since I disabled it in the network settings, but I specified that it only announce on its IPv4 interface.
One more time, I re-discovered and FINALLY, it was only seeing IPv4 target, which has seemed to have solved my problem.
 
So it is most likely a Synology problem and I need to disable IPv6 there. The problem is that the current version doesn't have "ISCSI Manager". And to make matters more fun, after I made the two Synology boxes into an HA cluster, the IPv6 settings were hidden even from the simple network interfaces, which got replaced by the HA virtual cluster interface. I made a post on the Synology forum as well; so far no reply.
 
I would try 2 things:

1. In the Synology (NAS), find the control panel and disable IPv6 in the network settings. (I don't own one, nor do I know what model/software you have, but I'm sure there must be such a setting.)

2. If possible, disable IPv6 on your router/switch.
 
If you're on DSM 7.x on your Synology units, what you want is now called "SAN Manager". However, I was going to suggest using NFS. It might be easier to set up. You also don't need to disable IPv6 for that. I'm not sure if iSCSI gives you any advantage over NFS, but I can tell you that it consumes more storage than NFS will.

I have used both iSCSI and NFS when I was running vSphere.

EDIT: with Proxmox, for shared storage, I'm using NFS from my Synology units.
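
For reference, such an NFS share ends up as a simple entry in /etc/pve/storage.cfg (a sketch; the storage ID, server address, and export path here are made up):
Code:
nfs: syno-nfs
        server 10.150.231.22
        export /volume1/proxmox
        content images,iso
qcow2 disks on it are then thin-provisioned.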
 
iSCSI and NFS are inherently very different; NFS is a file-system share while iSCSI is a network block device.

This compares to a disk drive, where the device itself has two characteristics:

1. It's a block device
2. It has (or can have) a file-system written on it

iSCSI and NFS provide network (remote) availability to these characteristics, respectively.

I don't know what your network-storage needs are. But choose accordingly.
 
I'm not sure if iSCSI gives you any advantage over NFS, but I can tell you that it consumes more storage than NFS will.

I have used both iSCSI and NFS when I was running vSphere.

EDIT: with Proxmox, for shared storage
This is true with a number of qualifiers: Synology iSCSI used as shared storage with Proxmox will be used as non-thin storage, whereas NFS (in any implementation) can be used as thin storage when used with qcow.

I don't know if there are any particular limitations of Synology iSCSI implementations when used with VMFS, but thin provisioning should be possible there, as it is done at the VMFS level rather than the block level.

It is possible to have iSCSI and NVMe/TCP block devices as shared storage with Proxmox, including thin provisioning/snapshots/clones, with a Proxmox-aware storage solution.
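
For comparison, the usual way to consume a Synology iSCSI LUN in /etc/pve/storage.cfg is LVM layered on top of it (a sketch; the storage IDs and VG name are made up, and the volume group is assumed to already exist on the LUN):
Code:
iscsi: syno-iscsi
        portal 10.150.231.22
        target iqn.2000-01.com.synology:SynoCluster.default-target.7b76b890333
        content none

lvm: syno-lvm
        vgname vg_syno
        shared 1
        content images,rootdir
The lvm entry is thick: every disk allocates its full size on the LUN up front, which is the non-thin behavior described above.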



 
I'll try to break the Synology HA cluster and disable IPv6, then remake the HA and see if Syno enables IPv6 again or not. If it does, then I'll explore the NFS route. :)
 
