Hi,
just to avoid misunderstanding: Udo Rader isn't me.
Udo
Oh, sorry. I connected the wrong dots ...
Hi raku, all features I need work OK. But so far I've run only 1-2 VMs simultaneously. I need to do some more tests running about 50-100 VMs. If that works, I'll consider it production-ready for my site and start migrating from the old XenServer cluster.
targetcli ls backstores/block
@udotirol: Tell me, are there plans to incorporate your patch into a PVE release? And how soon can that happen?
I can't tell you, unfortunately. AFAIK my patch has been merged into master, but I don't know when the next official build will be published from master.
I've tested Udo's LIO patches from pve-devel and I can say they work OK, but ZFS on Linux (Ubuntu 18.04 LTS) with ZVOLs over iSCSI totally sucks performance-wise.
I've got huge performance issues with ZFS on Linux: HDD benchmarks inside a VM gave only about 120 MB/s for both sequential and random reads/writes.
The same tests in a VM on FreeNAS ZFS over iSCSI gave about 400-600 MB/s.
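For anyone who wants to reproduce numbers like these, a rough fio equivalent run inside the guest could look like the sketch below; the device path, block sizes and runtimes are placeholders, not my exact test (and the random read/write job is destructive, so point it at a spare scratch disk):
Code:
# sequential read against a scratch disk inside the VM
fio --name=seqread --filename=/dev/sdb --rw=read --bs=1M --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based
# mixed 4k random read/write (overwrites the disk!)
fio --name=randrw --filename=/dev/sdb --rw=randrw --bs=4k --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based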
@udo: here's my /etc/pve/storage.cfg:
Code:
zfs: vmpool
blocksize 4k
iscsiprovider freenas
pool tank/pve
portal 10.0.254.101
target iqn.2005-10.org.freenas.ctl:pve
content images
freenas_password secretpassword
freenas_use_ssl 1
freenas_user root
nowritecache 0
sparse 0
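As a quick sanity check that the node can actually see this storage (assuming the storage ID vmpool from the config above):
Code:
pvesm status
pvesm list vmpool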
Right now I've got about 30 running VMs. They use about 1.8 TB on virtual disks (zvols over iSCSI) and about 2 TB via NFS shares.
All you need to do on FreeNAS is create a zpool (mine is named tank):
Code:
# zfs list tank/pve
NAME USED AVAIL REFER MOUNTPOINT
tank/pve 1.76T 23.6T 156K /mnt/tank/pve
root@storage-1:~ # zpool status tank
pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/48fc7429-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/49b89abd-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/4a83885f-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/4b39f431-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/4bf4df38-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
raidz2-1 ONLINE 0 0 0
gptid/4cdbea4e-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/4d91155f-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/4e4898a1-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/4f10b39b-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/4fcbb23c-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
raidz2-2 ONLINE 0 0 0
gptid/50a25872-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/515ff18e-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/5229a176-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/52e8cf38-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
gptid/53a7a7c8-8a6d-11e8-9167-0cc47ad8d0c4 ONLINE 0 0 0
spares
gptid/54835e0b-8a6d-11e8-9167-0cc47ad8d0c4 AVAIL
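For completeness, a pool with a layout like the one above could be created roughly like this; the da* device names are placeholders, and on FreeNAS you would normally build the pool through the GUI instead:
Code:
# three 5-disk raidz2 vdevs plus one hot spare (example device names)
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 \
  raidz2 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 \
  spare da15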
On this pool I've got a dataset named pve:
Code:
zfs list tank/pve
NAME USED AVAIL REFER MOUNTPOINT
tank/pve 1.76T 23.6T 156K /mnt/tank/pve
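The dataset itself is just a plain filesystem; the plugin later creates its zvols (vm-<id>-disk-<n>) underneath it:
Code:
# plain filesystem dataset -- child zvols like tank/pve/vm-100-disk-0 get created below it
zfs create tank/pve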
The last thing: you need to enable the iSCSI service on FreeNAS and create a portal and a target.
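Once the portal and target exist, it's worth verifying from a Proxmox node that the target is reachable; assuming the portal and target from the storage.cfg above:
Code:
# discover targets exported by the FreeNAS portal (open-iscsi)
iscsiadm -m discovery -t sendtargets -p 10.0.254.101
# expected output looks like: 10.0.254.101:3260,1 iqn.2005-10.org.freenas.ctl:pve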
zfs: pve-zfs
blocksize 4k
iscsiprovider freenas
pool Datastore/pve
portal 192.168.100.30
target iqn.2005-10.org.freenas.ctl:pve
content images
freenas_password ******
freenas_use_ssl 0
freenas_user root
nowritecache 0
sparse 0
[root@freenas ~]# zpool status
pool: Datastore
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
Datastore ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
gptid/cfbe865a-e73d-11e8-88ef-4165926e0d67 ONLINE 0 0 0
gptid/d03740b9-e73d-11e8-88ef-4165926e0d67 ONLINE 0 0 0
gptid/d0b967a7-e73d-11e8-88ef-4165926e0d67 ONLINE 0 0 0
errors: No known data errors
pool: freenas-boot
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p2 ONLINE 0 0 0
ada1p2 ONLINE 0 0 0
errors: No known data errors
[root@freenas ~]#
[root@freenas ~]# zfs list Datastore/pve
NAME USED AVAIL REFER MOUNTPOINT
Datastore/pve 117K 1.93T 117K /mnt/Datastore/pve
[root@freenas ~]#
Nov 14 19:00:01 pve-zfs-1 systemd[1]: Started Proxmox VE replication runner.
Nov 14 19:00:07 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:17 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:27 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:37 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:47 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
Nov 14 19:00:57 pve-zfs-1 pvestatd[2303]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve' failed: exit code 255
root@pve-zfs-1:~# pveversion --verbose
proxmox-ve: 5.2-2 (running kernel: 4.15.18-8-pve)
pve-manager: 5.2-10 (running version: 5.2-10/6f892b40)
pve-kernel-4.15: 5.2-11
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-41
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-30
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-3
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-20
pve-cluster: 5.0-30
pve-container: 2.0-29
pve-docs: 5.2-9
pve-firewall: 3.0-14
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-38
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.11-pve2~bpo1
root@pve-zfs-1:~#
First of all: I'm not the creator of these patches; I only contributed a few diffs.
Second of all: ZFS over iSCSI uses the FreeNAS API to manage LUNs and ZFS zvols (create/destroy), but it also uses SSH to get ZFS info. So you need to configure an SSH connection between the Proxmox cluster and FreeNAS as described in https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI (create SSH keys, etc.).
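A minimal sketch of that key setup, assuming the portal address 192.168.100.30 and dataset Datastore/pve from the log above (the wiki describes the same steps):
Code:
# on the PVE node: create a key pair named after the portal IP (leave the passphrase empty)
mkdir -p /etc/pve/priv/zfs
ssh-keygen -N '' -f /etc/pve/priv/zfs/192.168.100.30_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/192.168.100.30_id_rsa.pub root@192.168.100.30
# log in once interactively to accept the host key
ssh -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30
# then verify that the exact command pvestatd runs works non-interactively
ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.100.30_id_rsa root@192.168.100.30 zfs get -o value -Hp available,used Datastore/pve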
BTW, are there any plans to have these patches in a next Proxmox release?
You need to ask the current maintainer of freenas-proxmox at https://github.com/TheGrandWazoo/freenas-proxmox/
create full clone of drive scsi0 (local-zfs:vm-107-disk-0)
cannot create 'DataDump/vm-storage/vm-107-disk-0': parent is not a filesystem
TASK ERROR: storage migration failed: error with cfs lock 'storage-freenas-storage': command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.10.2_id_rsa root@192.168.10.2 zfs create -b 4k -V 33554432k DataDump/vm-storage/vm-107-disk-0' failed: exit code 1
zfs: freenas-storage
blocksize 4k
iscsiprovider freenas
pool DataDump/vm-storage
portal 192.168.10.2
target iqn.2017-12.com.lahansons:vm-storage
content images
freenas_password *************
freenas_use_ssl 0
freenas_user root
nowritecache 0
sparse 0
root@freenas:~ # zpool status
pool: DataDump
state: ONLINE
scan: scrub repaired 0 in 0 days 09:19:22 with 0 errors on Mon Nov 26 07:19:24 2018
config:
NAME STATE READ WRITE CKSUM
DataDump ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gptid/99c49b27-d718-11e7-8cea-d05099c28ac7.eli ONLINE 0 0 0
gptid/9a828d58-d718-11e7-8cea-d05099c28ac7.eli ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
gptid/b39ecbcb-d718-11e7-8cea-d05099c28ac7.eli ONLINE 0 0 0
gptid/b44e6bb3-d718-11e7-8cea-d05099c28ac7.eli ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
gptid/d4d5827a-d718-11e7-8cea-d05099c28ac7.eli ONLINE 0 0 0
gptid/d581e7ec-d718-11e7-8cea-d05099c28ac7.eli ONLINE 0 0 0
errors: No known data errors
pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0 days 00:01:39 with 0 errors on Mon Dec 10 03:46:39 2018
config:
NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
da1p2 ONLINE 0 0 0
da0p2 ONLINE 0 0 0
errors: No known data errors
root@freenas:~ # zfs list DataDump/vm-storage
NAME USED AVAIL REFER MOUNTPOINT
DataDump/vm-storage 254G 6.03T 56K -
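Side note: the "-" in the MOUNTPOINT column above suggests DataDump/vm-storage is itself a zvol rather than a filesystem dataset, which would explain the "parent is not a filesystem" error when the plugin tries to create a child zvol under it. A hypothetical way to confirm, not part of the original post:
Code:
# "filesystem" means child zvols can be created below it; "volume" means they cannot
zfs get -H -o value type DataDump/vm-storage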
root@pve:~# pveversion -v
proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
pve-kernel-4.15: 5.2-12
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-3-pve: 4.15.18-22
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-33
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-5
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-31
pve-container: 2.0-31
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-16
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-43
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1
try to create a new VM directly on the FreeNAS storage and look at /var/log/syslog to see what happens
Hi raku, thanks for your response. Here's what I get in syslog when creating a new VM on the freenas-storage:
Dec 11 09:22:02 pve systemd[1]: Started Proxmox VE replication runner.
Dec 11 09:22:08 pve pvedaemon[2185]: <root@pam> starting task UPID:pve:00006AC4:0049BDCB:5C0FF240:qmcreate:109:root@pam:
Dec 11 09:22:09 pve pvedaemon[27332]: VM 109 creating disks failed
Dec 11 09:22:09 pve pvedaemon[27332]: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.10.2_id_rsa root@192.168.10.2 zfs create -b 4k -V 33554432k DataDump/vm-storage/vm-109-disk-0' failed: exit code 1
Dec 11 09:22:09 pve pvedaemon[2185]: <root@pam> end task UPID:pve:00006AC4:0049BDCB:5C0FF240:qmcreate:109:root@pam: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.10.2_id_rsa root@192.168.10.2 zfs create -b 4k -V 33554432k DataDump/vm-storage/vm-109-disk-0' failed: exit code 1
zfs: freenas-storage
blocksize 4k
iscsiprovider freenas
pool DataDump
portal 192.168.10.2
target iqn.2017-12.com.lahansons.com:vm-storage
content images
freenas_password **********
freenas_use_ssl 0
freenas_user root
nowritecache 0
sparse 0
Dec 11 09:42:03 pve systemd[1]: Started Proxmox VE replication runner.
Dec 11 09:42:37 pve pvedaemon[2186]: <root@pam> starting task UPID:pve:0000521D:004B9DE0:5C0FF70D:qmcreate:109:root@pam:
Dec 11 09:42:38 pve pvedaemon[21021]: FreeNAS::lun_command : create_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 09:42:39 pve pvedaemon[21021]: [ERROR]FreeNAS::API::freenas_api_call : Response code: 500
Dec 11 09:42:39 pve pvedaemon[21021]: [ERROR]FreeNAS::API::freenas_api_call : Response content: Can't connect to 192.168.10.2:80#012#012Connection refused at /usr/share/perl5/LWP/Protocol/http.pm line 47.
Dec 11 09:42:39 pve pvedaemon[21021]: VM 109 creating disks failed
Dec 11 09:42:39 pve pvedaemon[21021]: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': Unable to connect to the FreeNAS API service at '192.168.10.2' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 249.
Dec 11 09:42:40 pve pvedaemon[2186]: <root@pam> end task UPID:pve:0000521D:004B9DE0:5C0FF70D:qmcreate:109:root@pam: unable to create VM 109 - error with cfs lock 'storage-freenas-storage': Unable to connect to the FreeNAS API service at '192.168.10.2' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 249.
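The "Connection refused" on port 80 here means nothing on the FreeNAS box is answering plain HTTP; with freenas_use_ssl 0 the plugin calls http://192.168.10.2:80. A quick hypothetical check from the PVE node (not part of the original post):
Code:
# see whether the FreeNAS web/API service answers on plain HTTP at all
curl -I http://192.168.10.2/
# if the FreeNAS GUI is HTTPS-only, either enable HTTP there or set freenas_use_ssl 1 in storage.cfg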
zfs: freenas-storage
blocksize 4k
iscsiprovider freenas
pool DataDump
portal 192.168.10.2
target iqn.2017-12.com.lahansons
content images
freenas_password ***********
freenas_use_ssl 0
freenas_user root
nowritecache 0
sparse 0
Dec 11 10:17:45 pve pvedaemon[2187]: <root@pam> starting task UPID:pve:000028D0:004ED53E:5C0FFF49:qmcreate:109:root@pam:
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::lun_command : create_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target_to_extent() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::freenas_get_first_available_lunid : return 0
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_target_to_extent() : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_extent : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::freenas_list_lu : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::list_lu(/dev/zvol/DataDump/vm-109-disk-0):name : lun not found
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:46 pve pvedaemon[10448]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::create_lu(lun_path=/dev/zvol/DataDump/vm-109-disk-0, lun_id=0) : blocksize convert 4k = 4096
Dec 11 10:17:47 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:48 pve pvedaemon[10448]: FreeNAS::API::create_extent(lun_path=/dev/zvol/DataDump/vm-109-disk-0, lun_bs=4096) : sucessfull
Dec 11 10:17:48 pve pvedaemon[10448]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:17:49 pve pvedaemon[10448]: FreeNAS::API::create_target_to_extent(target_id=5, extent_id=6, lun_id=0) : sucessfull
Dec 11 10:17:49 pve pvedaemon[10448]: FreeNAS::create_lu(lun_path=/dev/zvol/DataDump/vm-109-disk-0, lun_id=0) : sucessfull
Dec 11 10:17:49 pve pvedaemon[10448]: FreeNAS::lun_command : add_view()
Dec 11 10:17:49 pve pvedaemon[2187]: <root@pam> end task UPID:pve:000028D0:004ED53E:5C0FFF49:qmcreate:109:root@pam: OK
Dec 11 10:39:52 pve pvedaemon[3208]: start VM 109: UPID:pve:00000C88:0050DBB2:5C100478:qmstart:109:root@pam:
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::lun_command : list_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:39:52 pve pvedaemon[3208]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::API::freenas_list_lu : sucessfull
Dec 11 10:39:53 pve pvedaemon[3208]: FreeNAS::list_lu(/dev/zvol/DataDump/vm-109-disk-0):name : lun not found
Dec 11 10:39:53 pve pvedaemon[3208]: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.
Dec 11 10:39:53 pve pvedaemon[2186]: <root@pam> end task UPID:pve:00000C88:0050DBB2:5C100478:qmstart:109:root@pam: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.
Dec 11 10:46:08 pve pvedaemon[2186]: <root@pam> starting task UPID:pve:00002D38:00516E6F:5C1005F0:qmdestroy:109:root@pam:
Dec 11 10:46:08 pve pvedaemon[11576]: destroy VM 109: UPID:pve:00002D38:00516E6F:5C1005F0:qmdestroy:109:root@pam:
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::lun_command : list_lu(/dev/zvol/DataDump/vm-109-disk-0)
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::get_globalconfig : target_basename=iqn.2017-12.com.lahansons
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::api_call : setup : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::get_target() : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::API::freenas_list_lu : sucessfull
Dec 11 10:46:08 pve pvedaemon[11576]: FreeNAS::list_lu(/dev/zvol/DataDump/vm-109-disk-0):name : lun not found
Dec 11 10:46:08 pve pvedaemon[11576]: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.
Dec 11 10:46:08 pve pvedaemon[2186]: <root@pam> end task UPID:pve:00002D38:00516E6F:5C1005F0:qmdestroy:109:root@pam: Could not find lu_name for zvol vm-109-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 115.