[SOLVED] Peculiar Behavior for ZFS-over-ISCSI on FreeNAS (VM creates but can't be started because LUN can't be reached?)

wits-zach

New Member
Nov 19, 2019
All,

  • I re-did a lot of the networking on my cluster and appear to have successfully changed everything over to ZFS over iSCSI, using LACP with Linux bonded interfaces for better throughput and redundancy.
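For reference, an LACP bond of the kind described above looks roughly like this in `/etc/network/interfaces` on the Proxmox side. This is a sketch only: the interface names, bridge name, and address are made-up placeholders, and the switch ports must be configured as a matching 802.3ad LAG:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet static
    address 192.168.8.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```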

  • I am connecting to a FreeNAS cluster using TheGrandWazoo's ZFS-over-iSCSI plugin: https://github.com/TheGrandWazoo/freenas-proxmox

  • When I changed the networking over, one VM was already on the FreeNAS device. That VM still starts with no issue:

    root@proxmox1:~# qm start 100
    Rescanning session [sid: 1, target: iqn.target-1.com.freenas.ctl:training1, portal: 192.168.8.224,3260]
    Rescanning session [sid: 1, target: iqn.target-1.com.freenas.ctl:training1, portal: 192.168.8.224,3260]
    root@proxmox1:~#


  • I am getting great throughput on the LACP-bonded 10 GbE NICs:

    [Attachment 1585693885012.png: throughput graph]

  • Through the Proxmox GUI, I can create a new VM on the FreeNAS device with no issues (the extents and virtual disks all appear in FreeNAS):

    [Attachment 1585694096669.png: VM disks shown in FreeNAS]

  • The problem arises when trying to start that VM: it cannot connect to the LUN (in this case LUN 2, as highlighted):

    Rescanning session [sid: 1, target: iqn.target-1.com.freenas.ctl:training1, portal: 192.168.8.224,3260]
    Rescanning session [sid: 1, target: iqn.target-1.com.freenas.ctl:training1, portal: 192.168.8.224,3260]
    kvm: -drive file=iscsi://192.168.8.224/iqn.target-1.com.freenas.ctl:training1/2,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on: iSCSI: Failed to connect to LUN : SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:LOGICAL_UNIT_NOT_SUPPORTED(0x2500)

    TASK ERROR: start failed: QEMU exited with code 1


  • The LUN of the pre-existing VM that still starts with no problem is 1. A separate VM on LUN 0 exhibits the same failure as above.
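The device path in the kvm error above is QEMU's iSCSI URL scheme, `iscsi://<portal>/<target IQN>/<LUN>`. A quick sanity check is to rebuild the URL for the failing LUN and probe the target with `iscsi-ls` (from libiscsi); if the target does not list that LUN, the problem is on the FreeNAS side rather than in Proxmox. The values below are the ones from this thread:

```shell
# Rebuild the URL QEMU is trying to open (values from the error message above).
PORTAL=192.168.8.224
TARGET=iqn.target-1.com.freenas.ctl:training1
LUN=2
URL="iscsi://${PORTAL}/${TARGET}/${LUN}"
echo "${URL}"

# Hardware-dependent probe, shown for reference only:
#   iscsi-ls -s "iscsi://${PORTAL}/${TARGET}"
```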
Anyone have an idea what might cause this?
 

wits-zach

New Member
Nov 19, 2019
I was able to figure it out. See below if you run into similar issues:

  • The iSCSI service on my FreeNAS-11.2-U8 box initially had trouble restarting after I reconfigured the network, because the iSCSI portal and the associated pools were still tied to the prior network configuration. After updating the iSCSI settings and the pool to the new networks, the service came up.

  • The ZFS pool on my FreeNAS-11.2-U8 needed to be scrubbed after the networks had been updated. This was accomplished (in FreeNAS) by selecting Storage > Pools > [gear icon by my pool] > Scrub Pool. (NOTE: Make sure you understand what a scrub does before you run it; while repairing the pool it rewrites any data it finds damaged.)
    • At this point, the extents and raw disk files for Proxmox VMs 107 and 108 were still present on the FreeNAS
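The same scrub can also be started from the FreeNAS shell instead of the GUI. These commands act on a live pool, so they are shown for reference only; the pool name `tank` is a placeholder:

```
zpool scrub tank     # start a scrub of the pool backing the iSCSI extents
zpool status tank    # progress shows as 'scrub in progress';
                     # wait for 'scrub repaired' before restarting VMs
```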

  • I then rebooted FreeNAS to make sure everything started clean (this step was probably unnecessary, but I did it anyway).

  • When FreeNAS came back up, I attempted to restart VMs 107 and 108 in Proxmox, but got a new error message:
    • TASK ERROR: Could not find lu_name for zvol vm-107-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 118.
  • Because the error message indicated that the logical-unit name could not be found, I investigated on FreeNAS and discovered that the extents and raw disk files for VMs 107 and 108 were now gone. (In my case this was fine; they were test machines I had been trying to delete anyway. If they are disks you care about, however, beware...)
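One way to cross-check this from the FreeNAS shell is to list the zvols and the LUNs the CTL target layer is actually exporting; if the backing zvol is missing here, rescanning from the Proxmox side will never find it. (`vm-107-disk-0` is the zvol named in the error; the pool name will be your own.)

```
zfs list -t volume    # backing zvols, e.g. <pool>/vm-107-disk-0
ctladm devlist -v     # LUNs currently exported by the FreeBSD CAM target layer
```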

  • I attempted to delete the VMs in Proxmox, but got the same lu_name error.

  • Realizing that the disks were missing, I simply detached them in the Proxmox GUI to update each VM's configuration file. The detachment was successful.

  • After that, I was able to delete the VMs from Proxmox without incident.
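The detach-and-delete sequence can also be done from the Proxmox CLI; `107` and `scsi0` below match this thread, but check `qm config` for the real disk key before running anything destructive:

```
qm config 107               # confirm which key (e.g. scsi0) references the dead disk
qm set 107 --delete scsi0   # detach the disk reference from the VM config
qm destroy 107              # then remove the VM itself
```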

  • I can now create, start, stop, and delete VMs on my LACP FreeNAS setup.
I would also like to take a second to extend kudos to Proxmox for having good error messages that actually tell you what the problem is. It gives you just enough thread to start pulling. Thank you!
 
