What open source solutions are available to use "ZFS over iSCSI with Proxmox"?

udo

Famous Member
Apr 22, 2009
Ahrensburg; Germany
All features I need work OK. But so far, I run only 1-2 VMs simultaneously. I need to do some more tests running about 50-100 VMs. If that works, I'll say it is production ready on my side and start to migrate from the old XenServer cluster.
Hi raku,
do you have an example of the FreeNAS storage config (and the PVE part)?
I assume it's simple, but I can't quite put the pieces together.

Udo
 

raku

Member
Apr 16, 2016
I've tested Udo's LIO patches from pve-devel and I can say they work OK, but ZFS on Linux (Ubuntu 18.04 LTS) with ZVOLs over iSCSI totally sucks.
I've got huge performance issues with ZFS on Linux: HDD benchmarks inside a VM resulted in about 120 MB/s sequential and random reads/writes.
The same tests on a VM backed by FreeNAS ZFS over iSCSI gave about 400-600 MB/s.
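For context, a benchmark along these lines can be reproduced inside the guest with fio (fio and the device path are my assumption; the post does not say which tool was used):

```shell
# Sequential read test against the VM's virtual disk (adjust /dev/vda to
# match the guest's disk). --readonly keeps the disk under test untouched.
fio --name=seqread --filename=/dev/vda --direct=1 --rw=read \
    --bs=1M --runtime=30 --time_based --readonly
# For the random-I/O case, swap --rw=read for --rw=randread and --bs=1M for --bs=4k.
```

Direct I/O (`--direct=1`) matters here: it bypasses the guest page cache, so the numbers reflect the iSCSI path rather than cached reads.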

@udo: here's my /etc/pve/storage.cfg:
Code:
zfs: vmpool
   blocksize 4k
   iscsiprovider freenas
   pool tank/pve
   portal 10.0.254.101
   target iqn.2005-10.org.freenas.ctl:pve
   content images
   freenas_password secretpassword
   freenas_use_ssl 1
   freenas_user root
   nowritecache 0
   sparse 0
Right now I've got about 30 running VMs. They use about 1.8 TB on virtual disks (zvols over iSCSI) and about 2 TB via NFS shares.

All you need to do on FreeNAS is create a zpool (mine is named tank):
Code:
root@storage-1:~ # zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

   NAME                                            STATE     READ WRITE CKSUM
   tank                                            ONLINE       0     0     0
     raidz2-0                                      ONLINE       0     0     0
       gptid/48fc7429-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/49b89abd-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/4a83885f-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/4b39f431-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/4bf4df38-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
     raidz2-1                                      ONLINE       0     0     0
       gptid/4cdbea4e-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/4d91155f-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/4e4898a1-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/4f10b39b-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/4fcbb23c-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
     raidz2-2                                      ONLINE       0     0     0
       gptid/50a25872-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/515ff18e-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/5229a176-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/52e8cf38-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
       gptid/53a7a7c8-8a6d-11e8-9167-0cc47ad8d0c4  ONLINE       0     0     0
   spares
     gptid/54835e0b-8a6d-11e8-9167-0cc47ad8d0c4    AVAIL
On this pool I've got a dataset named pve:
Code:
zfs list tank/pve
NAME       USED  AVAIL  REFER  MOUNTPOINT
tank/pve  1.76T  23.6T   156K  /mnt/tank/pve
The last thing - you need to enable the iSCSI daemon and create a portal and target.
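Once the portal and target exist, they can be sanity-checked from the Proxmox node with open-iscsi's discovery mode (a sketch on my part, not part of raku's procedure; the portal IP is the one from the storage.cfg above):

```shell
# Ask the FreeNAS portal which targets it exports; the IQN from
# storage.cfg (iqn.2005-10.org.freenas.ctl:pve) should appear in the output.
iscsiadm -m discovery -t sendtargets -p 10.0.254.101
```

If discovery returns nothing, check that the iSCSI service is running on FreeNAS and that the portal is listening on the expected interface before debugging the Proxmox side.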
 

udotirol

New Member
Mar 9, 2018
Jumping into this thread a little bit late, I'm the "real" Udo Rader who developed the patch for LIO :)

@raku:
My company has deployed patched versions of Proxmox to a small number of our development servers, and so far we have not seen the performance degradation you see.

It could be a number of issues, but given the dramatic difference, I think this is indeed either cache or network related.

What does

Code:
targetcli ls backstores/block
say when you run it on the target?
 

raku

Member
Apr 16, 2016
@udotirol: I've tested LIO-based storage on Ubuntu 18.04 LTS and FreeNAS-based storage on the same hardware: Intel E5-2620 v4, 64 GB RAM, 18 x 4 TB SAS + 2 x 250 GB SSD, 2 x 10 Gbit Ethernet. Maybe it was a matter of better system tuning - I don't know. It was my first time playing with targetcli. Performance over NFS was OK (~700 MB/s as far as I remember). I bet it was something with zvols shared via targetcli. I found a bunch of similar reports on the net regarding poor performance of zvols with ZFS on Linux.

Unfortunately I cannot test LIO any more. My FreeNAS storage has already gone into production use, as we had a total failure of our old storage and had to recover about 5 TB of XenServer VMs and data and migrate them to Proxmox KVM VMs on the fly :)


EDIT:
@udotirol: I forgot about one thing - your patches worked great, and I had no problems setting up and running Proxmox with targetcli.
 

Catwoolfii

Member
Nov 6, 2016
Russia
Try adding "option writecache off" to the ctld config (/etc/ctl.conf), and you will see the same poor performance as on Ubuntu.
The write cache is enabled by default, and it is not considered reliable.
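For reference, in FreeBSD's ctld the write cache is controlled per LUN; a minimal /etc/ctl.conf fragment might look like this (target and zvol names are illustrative, borrowed from raku's config above):

```
# /etc/ctl.conf (fragment) - names are illustrative
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 10.0.254.101
}

target iqn.2005-10.org.freenas.ctl:pve {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/pve/vm-100-disk-0
                option writecache off   # disable the default-on write cache
        }
}
```

With the cache off, writes are only acknowledged once committed, which is safer but explains the large throughput gap Catwoolfii describes.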
 

TomTomGo

Member
Mar 30, 2012
France
raku said:
(post quoted above - storage.cfg example and FreeNAS setup)
Hi Raku,

First of all, thank you for the great work you've done on the FreeNAS ZFS storage plugin!
Yesterday I tried to patch my dev servers with the latest patches. FreeNAS ZFS over iSCSI shows up fine in the iSCSI providers list after applying the procedure described in the README.md, but I'm not able to use my FreeNAS storage after adding a new storage. Here is what I've done:

1 - Applied all the instructions of the README.md on both servers (2-node cluster)
2 - Added an iSCSI target on the FreeNAS as you described in this post
3 - Activated the iSCSI daemon on FreeNAS
4 - Added a new storage in the Proxmox Datacenter

/etc/pve/storage.cfg
Code:
zfs: pve-zfs
   blocksize 4k
   iscsiprovider freenas
   pool Datastore/pve
   portal 192.168.100.30
   target iqn.2005-10.org.freenas.ctl:pve
   content images
   freenas_password ******
   freenas_use_ssl 0
   freenas_user root
   nowritecache 0
   sparse 0
zpool status on FreeNAS
Code:
[root@freenas ~]# zpool status
  pool: Datastore
 state: ONLINE
  scan: none requested
config:

       NAME                                            STATE     READ WRITE CKSUM
       Datastore                                       ONLINE       0     0     0
         raidz1-0                                      ONLINE       0     0     0
           gptid/cfbe865a-e73d-11e8-88ef-4165926e0d67  ONLINE       0     0     0
           gptid/d03740b9-e73d-11e8-88ef-4165926e0d67  ONLINE       0     0     0
           gptid/d0b967a7-e73d-11e8-88ef-4165926e0d67  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

       NAME            STATE     READ WRITE CKSUM
       freenas-boot    ONLINE       0     0     0
         mirror-0      ONLINE       0     0     0
           ada0p2      ONLINE       0     0     0
           ada1p2      ONLINE       0     0     0

errors: No known data errors
[root@freenas ~]#
zfs list on FreeNAS
Code:
[root@freenas ~]# zfs list Datastore/pve
NAME            USED  AVAIL  REFER  MOUNTPOINT
Datastore/pve   117K  1.93T   117K  /mnt/Datastore/pve
[root@freenas ~]#
Looking at the Proxmox server logs, it seems that it tries to use the SSH stuff:

Code:
Nov 14 19:00:01 pve-zfs-1 systemd[1]: Started Proxmox VE replication runner.
Nov 14 19:00:07 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:17 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:27 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:37 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:47 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:57 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
As I understand from the GitHub page, your plugin uses the FreeNAS APIs only - or did I misunderstand?
Or maybe I'm missing something in my configuration ...

PVE version:

Code:
root@pve-zfs-1:~# pveversion --verbose
proxmox-ve: 5.2-2 (running kernel: 4.15.18-8-pve)
pve-manager: 5.2-10 (running version: 5.2-10/6f892b40)
pve-kernel-4.15: 5.2-11
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-41
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-30
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-3
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-20
pve-cluster: 5.0-30
pve-container: 2.0-29
pve-docs: 5.2-9
pve-firewall: 3.0-14
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-38
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.11-pve2~bpo1
root@pve-zfs-1:~#
Thanks if you can help me!

Regards,

Thomas
 

raku

Member
Apr 16, 2016
First of all: I'm not the creator of these patches :). I only contributed a few diffs.
Second of all: ZFS over iSCSI uses the FreeNAS API to manage LUNs and ZFS zvols (create/destroy), but it also uses SSH to get ZFS info. So you need to configure an SSH connection between the Proxmox cluster and FreeNAS as described in https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI (create SSH keys, etc.)
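The key layout the wiki expects can be sketched like this (the portal IP is taken from TomTomGo's config above; these commands are my summary of the wiki procedure, not something from this thread):

```shell
# On the Proxmox node: the plugin looks for a key named
# /etc/pve/priv/zfs/<portal-ip>_id_rsa, as seen in the failing ssh command above.
mkdir -p /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.100.30_id_rsa -N ''

# Install the public key on the FreeNAS box, then verify that
# non-interactive (BatchMode) login works:
ssh-copy-id -i /etc/pve/priv/zfs/192.168.100.30_id_rsa.pub root@192.168.100.30
ssh -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs list
```

Once the last command returns the ZFS listing without a password prompt, pvestatd's `zfs get` calls should stop failing with exit code 255.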
 

TomTomGo

Member
Mar 30, 2012
France
First of all: I'm not the creator of these patches :). I only contributed a few diffs.
Second of all: ZFS over iSCSI uses the FreeNAS API to manage LUNs and ZFS zvols (create/destroy), but it also uses SSH to get ZFS info. So you need to configure an SSH connection between the Proxmox cluster and FreeNAS as described in https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI (create SSH keys, etc.)
Okay, so congratulations on your few contributions ;)
BTW, it works fine after configuring the SSH side - thanks for the clarification!
 

TomTomGo

Member
Mar 30, 2012
France
raku said:
(quoted above)
BTW, are there any plans to include these patches in an upcoming Proxmox release?
 

sirsean12

New Member
Jul 17, 2017
Hello everyone,
Thanks in advance for the help. I installed the FreeNAS iSCSI plugin from TheGrandWazoo, set up the SSH key, and I am able to connect. I am also able to create virtual machines. However, when I try to restore a backup I receive the following task error.


Code:
restore vma archive: lzop -d -c /mnt/pve/NFS-Share/dump/vzdump-qemu-103-2018_12_02-12_05_14.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp12336.fifo - /var/tmp/vzdumptmp12336
CFG: size: 451 name: qemu-server.conf
DEV: dev_id=1 size: 107374182400 devname: drive-virtio0
CTIME: Sun Dec 2 12:05:25 2018
Use of uninitialized value $target_id in numeric eq (==) at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 518.
no lock found trying to remove 'create' lock
TASK ERROR: command 'set -o pipefail && lzop -d -c /mnt/pve/NFS-Share/dump/vzdump-qemu-103-2018_12_02-12_05_14.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp12336.fifo - /var/tmp/vzdumptmp12336' failed: error with cfs lock 'storage-ZFS-ISCSI': Unable to find the target id for iqn.2005-10.org.freenas.ctl:RaidZ/vm-103-disk-0 at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 149.
 

mihanson

New Member
Nov 1, 2018
I'm having trouble utilizing any storage on my FreeNAS 11.1 box. I am unable to create or migrate/move any VM to the new ZFS-over-iSCSI disk I created. This is what I have done:
  1. Set up a 250 GiB zvol in the FreeNAS GUI
  2. Set up the Proxmox-FreeNAS SSH key
  3. Set up the ZFS-over-iSCSI storage in the Proxmox GUI
  4. Attempted to create/migrate/move a VM from Proxmox's local-zfs storage, which results in the following:
Code:
create full clone of drive scsi0 (local-zfs:vm-107-disk-0)
cannot create 'DataDump/vm-storage/vm-107-disk-0': parent is not a filesystem
TASK ERROR: storage migration failed: error with cfs lock 'storage-freenas-storage': command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.10.2_id_rsa root@192.168.10.2 zfs create -b 4k -V 33554432k DataDump/vm-storage/vm-107-disk-0' failed: exit code 1
The VM I am trying to move is OFFLINE.
This is the relevant storage config:
Code:
zfs: freenas-storage
    blocksize 4k
    iscsiprovider freenas
    pool DataDump/vm-storage
    portal 192.168.10.2
    target iqn.2017-12.com.lahansons:vm-storage
    content images
    freenas_password *************
    freenas_use_ssl 0
    freenas_user root
    nowritecache 0
    sparse 0
On FreeNAS:
Code:
root@freenas:~ # zpool status
  pool: DataDump
 state: ONLINE
  scan: scrub repaired 0 in 0 days 09:19:22 with 0 errors on Mon Nov 26 07:19:24 2018
config:

    NAME                                                STATE     READ WRITE CKSUM
    DataDump                                            ONLINE       0     0     0
      mirror-0                                          ONLINE       0     0     0
        gptid/99c49b27-d718-11e7-8cea-d05099c28ac7.eli  ONLINE       0     0     0
        gptid/9a828d58-d718-11e7-8cea-d05099c28ac7.eli  ONLINE       0     0     0
      mirror-1                                          ONLINE       0     0     0
        gptid/b39ecbcb-d718-11e7-8cea-d05099c28ac7.eli  ONLINE       0     0     0
        gptid/b44e6bb3-d718-11e7-8cea-d05099c28ac7.eli  ONLINE       0     0     0
      mirror-2                                          ONLINE       0     0     0
        gptid/d4d5827a-d718-11e7-8cea-d05099c28ac7.eli  ONLINE       0     0     0
        gptid/d581e7ec-d718-11e7-8cea-d05099c28ac7.eli  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:39 with 0 errors on Mon Dec 10 03:46:39 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da1p2   ONLINE       0     0     0
        da0p2   ONLINE       0     0     0

errors: No known data errors
Code:
root@freenas:~ # zfs list DataDump/vm-storage
NAME                  USED  AVAIL  REFER  MOUNTPOINT
DataDump/vm-storage   254G  6.03T    56K  -
I am running the latest Proxmox (5.3-5)
Code:
root@pve:~# pveversion -v
proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
pve-kernel-4.15: 5.2-12
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-3-pve: 4.15.18-22
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-33
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-5
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-31
pve-container: 2.0-31
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-16
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-43
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1
Can anyone see what may be wrong?

Mike
 

raku

Member
Apr 16, 2016
Try creating a new VM directly on the FreeNAS storage and watch /var/log/syslog to see what happens.
 

mihanson

New Member
Nov 1, 2018
Try creating a new VM directly on the FreeNAS storage and watch /var/log/syslog to see what happens.
Hi raku. Thanks for your response. Here's what I get in syslog when creating a new VM on the freenas-storage:
Code:
Dec 11 09:22:02 pve systemd[1]: Started Proxmox VE replication runner.
Dec 11 09:22:08 pve pvedaemon[2185]: <root@pam> starting task UPID:pve:00006AC4:0049BDCB:5C0FF240:qmcreate:109:root@pam:
Dec 11 09:22:09 pve pvedaemon[27332]: VM 109 creating disks failed
Dec 11 09:22:09 pve pvedaemon[27332]: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.10.2_id_rsa root@192.168.10.2 zfs create -b 4k -V 33554432k DataDump/vm-storage/vm-109-disk-0' failed: exit code 1
Dec 11 09:22:09 pve pvedaemon[2185]: <root@pam> end task UPID:pve:00006AC4:0049BDCB:5C0FF240:qmcreate:109:root@pam: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.10.2_id_rsa root@192.168.10.2 zfs create -b 4k -V 33554432k DataDump/vm-storage/vm-109-disk-0' failed: exit code 1
I feel like the problem is that my zvol, located on FreeNAS at /mnt/DataDump/vm-storage, is a block device while Proxmox is expecting a filesystem. Do I need to format and mount the zvol as a block device on Proxmox before I try to create/move/clone/migrate, etc.?
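A likely reading of that error (my interpretation, consistent with the workaround found later in this post): zvols cannot have child datasets, so if DataDump/vm-storage is the 250 GiB zvol from step 1, the plugin's `zfs create -V` under it must fail with "parent is not a filesystem". The `pool` option should name a plain filesystem dataset, e.g.:

```shell
# On FreeNAS - dataset names taken from the posts above. The destroy step is
# shown only to illustrate the idea: it is DESTRUCTIVE, do not run it against
# a zvol that holds data.
zfs destroy DataDump/vm-storage        # drop the 250 GiB zvol...
zfs create DataDump/vm-storage         # ...and recreate the name as a filesystem dataset
# The plugin's own command now has a valid parent and succeeds:
zfs create -b 4k -V 33554432k DataDump/vm-storage/vm-107-disk-0
```

In other words, the plugin creates its own zvols per disk; you give it a dataset to create them in, not a pre-made zvol.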

To try this another way, I reconfigured the ZFS-over-iSCSI storage as follows:
Code:
zfs: freenas-storage
        blocksize 4k
        iscsiprovider freenas
        pool DataDump
        portal 192.168.10.2
        target iqn.2017-12.com.lahansons.com:vm-storage
        content images
        freenas_password **********
        freenas_use_ssl 0
        freenas_user root
        nowritecache 0
        sparse 0
and then tried to create a new VM again.

Code:
Dec 11 09:42:03 pve systemd[1]: Started Proxmox VE replication runner.
Dec 11 09:42:37 pve pvedaemon[2186]: <root@pam> starting task UPID:pve:0000521D:004B9DE0:5C0FF70D:qmcreate:109:root@pam:
Dec 11 09:42:38 pve pvedaemon[21021]: FreeNAS::lun_command : create_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 09:42:39 pve pvedaemon[21021]: [ERROR]FreeNAS::API::freenas_api_call : Response code: 500
Dec 11 09:42:39 pve pvedaemon[21021]: [ERROR]FreeNAS::API::freenas_api_call : Response content: Can't connect to 192.168.10.2:80#012#012Connection refused at /usr/share/perl5/LWP/Protocol/http.pm line 47.
Dec 11 09:42:39 pve pvedaemon[21021]: VM 109 creating disks failed
Dec 11 09:42:39 pve pvedaemon[21021]: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': Unable to connect to the FreeNAS API service at '192.168.10.2' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 249.
Dec 11 09:42:40 pve pvedaemon[2186]: <root@pam> end task UPID:pve:0000521D:004B9DE0:5C0FF70D:qmcreate:109:root@pam: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': Unable to connect to the FreeNAS API service at '192.168.10.2' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 249.
So, it's a different error (not sure if it's a "better" error), and I can see Proxmox created a zvol in my FreeNAS pool (DataDump). It may be relevant to note that my FreeNAS has a different IP address for its GUI vs. data access.

OK, it seems that the FreeNAS GUI and data access IP addresses are expected to be the same. I changed my FreeNAS to listen on all addresses (System > General > Web GUI ipv4 > 0.0.0.0), and I can now create a new VM on the FreeNAS storage with the following storage.cfg:
Code:
zfs: freenas-storage
        blocksize 4k
        iscsiprovider freenas
        pool DataDump
        portal 192.168.10.2
        target iqn.2017-12.com.lahansons
        content images
        freenas_password ***********
        freenas_use_ssl 0
        freenas_user root
        nowritecache 0
        sparse 0
Code:
Dec 11 10:17:45 pve pvedaemon[2187]: <root@pam> starting task UPID:pve:000028D0:004ED53E:5C0FFF49:qmcreate:109:root@pam:
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::lun_command : create_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target_to_extent() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::freenas_get_first_available_lunid : return 0
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target_to_extent() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_extent : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::freenas_list_lu : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::list_lu(/dev/zvol/DataDump/vm-109-disk-0):name : lun not found
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::create_lu(lun_path=/dev/zvol/DataDump/vm-109-disk-0, lun_id=0) : blocksize convert 4k = 4096
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:48 pve pvedaemon[10448]: FreeNAS::API::create_extent(lun_path=/dev/zvol/DataDump/vm-109-disk-0, lun_bs=4096) : sucessfull
Dec 11 10:17:48 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:49 pve pvedaemon[10448]: FreeNAS::API::create_target_to_extent(target_id=5, extent_id=6, lun_id=0) : sucessfull
Dec 11 10:17:49 pve pvedaemon[10448]: FreeNAS::create_lu(lun_path=/dev/zvol/DataDump/vm-109-disk-0, lun_id=0) : sucessfull
Dec 11 10:17:49 pve pvedaemon[10448]: FreeNAS::lun_command : add_view()
Dec 11 10:17:49 pve pvedaemon[2187]: <root@pam> end task UPID:pve:000028D0:004ED53E:5C0FFF49:qmcreate:109:root@pam: OK
This still does not use the zvol I created in FreeNAS (vm-storage); instead, Proxmox creates its own zvol in the root of my FreeNAS pool (/mnt/DataDump/). Unfortunately, the VM will not start. :(
Code:
Dec 11 10:39:52 pve pvedaemon[3208]: start VM 109: UPID:pve:00000C88:0050DBB2:5C100478:qmstart:109:root@pam:
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::lun_command : list_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::API::freenas_list_lu : sucessfull
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::list_lu(/dev/zvol/DataDump/vm-109-disk-0):name : lun not found
Dec 11 10:39:53 pve pvedaemon[3208]: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.
Dec 11 10:39:53 pve pvedaemon[2186]: <root@pam> end task UPID:pve:00000C88:0050DBB2:5C100478:qmstart:109:root@pam: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.
Can't destroy the new VM either...
Code:
Dec 11 10:46:08 pve pvedaemon[2186]: <root@pam> starting task UPID:pve:00002D38:00516E6F:5C1005F0:qmdestroy:109:root@pam:
Dec 11 10:46:08 pve pvedaemon[11576]: destroy VM 109: UPID:pve:00002D38:00516E6F:5C1005F0:qmdestroy:109:root@pam:
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::lun_command : list_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::freenas_list_lu : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::list_lu(/dev/zvol/DataDump/vm-109-disk-0):name : lun not found
Dec 11 10:46:08 pve pvedaemon[11576]: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.
Dec 11 10:46:08 pve pvedaemon[2186]: <root@pam> end task UPID:pve:00002D38:00516E6F:5C1005F0:qmdestroy:109:root@pam: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.
 

