iSCSI and shared LVM

arengifo

New Member
Dec 19, 2014
Hello guys:

I'm running Proxmox VE 4.2 with a Seagate NAS that supports iSCSI. I've set up a 2-node cluster and assigned an iSCSI LUN from my NAS to Proxmox like this:

Storage -> Add -> iSCSI (options: ID, Portal, Target; "Use LUNs directly" unchecked)

Then I configured an LVM volume group like this:

Storage -> Add -> LVM (options: ID, Base Storage, Volume Group; "Shared" checked).
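For reference, the equivalent definitions in /etc/pve/storage.cfg look roughly like this (just a sketch: the IDs, portal address, target IQN, and base volume name below are placeholders, not my real values):

```
iscsi: mynas-iscsi
        portal 192.168.1.50
        target iqn.2002-10.com.seagate:nas-example
        content none

lvm: mylvm01
        vgname myvg01
        base mynas-iscsi:0.0.0.scsi-lun0
        shared
        content images
```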

Now I can see my iSCSI LUNs and the VG "myvg01" from both nodes, so I created some KVM guests on node1, which are currently working without issues. However, when I create a KVM guest on node2 (trying to install Zentyal from an ISO), I start to get a lot of errors like these:

[ 3192.527082] blk_update_request: I/O error, dev sdh, sector 1579352
[ 3192.845052] sd 12:0:0:0: [sdh] tag#5 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 3192.845062] sd 12:0:0:0: [sdh] tag#5 Sense Key : Not Ready [current]
[ 3192.845065] sd 12:0:0:0: [sdh] tag#5 Add. Sense: Logical unit communication failure
[ 3192.845068] sd 12:0:0:0: [sdh] tag#5 CDB: Write(10) 2a 00 00 18 89 20 00 3f 10 00
[ 3192.845070] blk_update_request: I/O error, dev sdh, sector 1607968
[ 3194.020710] sd 12:0:0:0: [sdh] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 3194.020723] sd 12:0:0:0: [sdh] tag#0 Sense Key : Not Ready [current]
[ 3194.020727] sd 12:0:0:0: [sdh] tag#0 Add. Sense: Logical unit communication failure
[ 3194.020730] sd 12:0:0:0: [sdh] tag#0 CDB: Write(10) 2a 00 00 1a 38 60 00 2b 10 00
[ 3194.020733] blk_update_request: I/O error, dev sdh, sector 1718368
[ 3194.100166] sd 12:0:0:0: [sdh] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 3194.100175] sd 12:0:0:0: [sdh] tag#14 Sense Key : Not Ready [current]
[ 3194.100179] sd 12:0:0:0: [sdh] tag#14 Add. Sense: Logical unit communication failure
[ 3194.100182] sd 12:0:0:0: [sdh] tag#14 CDB: Write(10) 2a 00 00 19 f9 58 00 3f 08 00
[ 3194.100185] blk_update_request: I/O error, dev sdh, sector 1702232
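In case it helps, I tallied how many of these I/O errors each block device gets by filtering the dmesg output. A minimal sketch, using a saved excerpt like the lines above (on a live node I'd pipe `dmesg` in directly instead of the here-document):

```shell
# Hypothetical excerpt of the dmesg output above; on a live node, replace
# the here-document with: dmesg > /tmp/dmesg_sample.txt
cat > /tmp/dmesg_sample.txt <<'EOF'
[ 3192.527082] blk_update_request: I/O error, dev sdh, sector 1579352
[ 3192.845070] blk_update_request: I/O error, dev sdh, sector 1607968
[ 3194.020733] blk_update_request: I/O error, dev sdh, sector 1718368
EOF

# Count I/O errors per device; the device name is the 7th whitespace-separated
# field ("sdh,"), so strip its trailing comma before counting.
awk '/blk_update_request: I\/O error/ { d = $7; sub(/,$/, "", d); errs[d]++ }
     END { for (dev in errs) print dev, errs[dev] }' /tmp/dmesg_sample.txt
# prints: sdh 3
```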

I thought this could be happening because I'm using the same VG "myvg01" from both nodes at the same time (with different LVs on each node, of course). Since I wasn't sure, I assigned a dedicated iSCSI LUN to a new VG "myvg02" to be used only on node2, but the problem persists (same kernel errors).

I noticed that my Seagate NAS has an iSCSI option called "concurrent sessions", which was disabled. I enabled it, but I still have the same problems.
I have even unchecked the "Shared" option for my LVM storage (myvg02), but the problem persists.

Additionally, when I look at /var/log/messages on my Seagate NAS (I have SSH access to it), I see error messages like these:

May 26 07:58:32 Seagate-D2 kernel: [142106.848796] iSCSI Login negotiation failed.
May 26 07:58:32 Seagate-D2 kernel: [142106.860097] rx_data returned 0, expecting 48.
May 26 07:58:32 Seagate-D2 kernel: [142106.860109] iSCSI Login negotiation failed.
May 26 07:58:32 Seagate-D2 kernel: [142106.860644] rx_data returned 0, expecting 48.
May 26 07:58:32 Seagate-D2 kernel: [142106.860652] iSCSI Login negotiation failed.


I haven't enabled CHAP or any other authentication option on my Seagate NAS.

I wonder if someone could tell me what I might be doing wrong.

Thanks in advance
 
I can add the following info:

- This problem seems to occur only with Linux guests: I've tried CentOS 7 and Zentyal, and both generate errors during the installation process.
- When I install Windows XP as a guest, I don't see any of these errors.

I wonder if this might be related to a bug like this one:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=805252

Maybe Linux guests issue many writes at once and trigger the errors, while Windows XP doesn't, so it works smoothly.