Hi!
I think I would benefit from the zfs-over-iscsi storage option because of its snapshot capability, and despite the storage being a single point of failure. It also somehow feels safer to me that this solution does not expose ordinary /dev/sdb, /dev/sdc etc. device nodes on the client, i.e. the PVE node. (As I understand it, the iSCSI client in the zfs-over-iscsi case is based on libiscsi and built into KVM/QEMU, rather than the traditional open-iscsi initiator provided by the operating system.)
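For illustration: with the built-in initiator, QEMU addresses the LUN through an iscsi:// URL rather than a local block device. A minimal sketch, using the portal and target from my storage.cfg below and an example LUN number:
Code:
# sketch only: qemu's libiscsi initiator takes the LUN as a URL, no /dev/sdX on the host
qemu-system-x86_64 -m 1024 \
    -drive file=iscsi://192.168.10.190/iqn.2003-01.org.linux-iscsi.tgt-zfs-over-iscsi.x8664:sn.88367da57773/0,if=virtio,format=raw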
Practically everything works (I use PVE 8.3.2): running virtual machines, backing up and restoring with PBS, cloning.
I am only worried about the messages, in various flavors, about 'lba status' that I see on the PVE node, for example during a PBS backup (many messages per second):
Code:
2024-12-21T00:32:54.789339+02:00 pve-sdn-01 QEMU[6547]: kvm: iSCSI GET_LBA_STATUS failed at lba 7585792: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
2024-12-21T00:32:54.825026+02:00 pve-sdn-01 QEMU[6547]: kvm: iSCSI GET_LBA_STATUS failed at lba 7593984: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
2024-12-21T00:32:54.860824+02:00 pve-sdn-01 QEMU[6547]: kvm: iSCSI GET_LBA_STATUS failed at lba 7602176: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
2024-12-21T00:32:54.896620+02:00 pve-sdn-01 QEMU[6547]: kvm: iSCSI GET_LBA_STATUS failed at lba 7610368: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
2024-12-21T00:32:54.968178+02:00 pve-sdn-01 QEMU[6547]: kvm: iSCSI GET_LBA_STATUS failed at lba 7618560: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
2024-12-21T00:32:55.009013+02:00 pve-sdn-01 QEMU[6547]: kvm: iSCSI GET_LBA_STATUS failed at lba 7626752: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
2024-12-21T00:32:55.050297+02:00 pve-sdn-01 QEMU[6547]: kvm: iSCSI GET_LBA_STATUS failed at lba 7634944: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
To understand whether this behavior comes from the iSCSI client or the server, I made the following test.
1. I set up a SeaBIOS/i440fx virtual machine using zfs-over-iscsi; everything works, but I see those messages. (A UEFI/q35 virtual machine with an EFI disk seems to produce its own particular lba status messages, which may be a different story.)
2. I stopped the virtual machine and disconnected the zfs-over-iscsi storage in the PVE web GUI.
3. I connected the same iSCSI server to PVE as an iSCSI-type storage resource and selected 'Use LUNs directly'. As I understand it, this starts the open-iscsi machinery in the background (roughly the iscsiadm sequence sketched after this list), and PVE then has /dev/sdb, /dev/sdc etc. devices corresponding to the iSCSI server's LUNs.
4. I assigned the same block resource to the same virtual machine, but this time through the other iSCSI path (i.e. open-iscsi-based).
5. I started the virtual machine, did a PBS backup, etc. (except cloning; I understand cloning is not possible with storage used that way, but I do not need cloning in practice anyway).
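For reference, as far as I understand, the 'Use LUNs directly' path boils down to roughly this open-iscsi sequence under the hood (a sketch, with values from my setup):
Code:
# discover targets on the portal, then log in; the LUNs then show up as /dev/sdX
iscsiadm -m discovery -t sendtargets -p 192.168.10.190
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.tgt-zfs-over-iscsi.x8664:sn.88367da57773 -p 192.168.10.190 --login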
Used this way, I do not get any messages about lba status failing. So I would say the iSCSI target itself is capable and something is misbehaving on the client side.
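One can also issue GET LBA STATUS by hand over the open-iscsi path with sg3_utils, which I think is a reasonable way to double-check what the target answers (a sketch; /dev/sdb stands for whatever device node the LUN got):
Code:
# assumes the sg3_utils package; /dev/sdb is the iSCSI LUN from the open-iscsi login
sg_get_lba_status --lba=0 /dev/sdb
# the logical block provisioning VPD page shows what the LU claims to support
sg_vpd --page=lbpv /dev/sdb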
The diff of the two VM configurations looks like this:
Code:
root@pve-sdn-01:~# diff /root/11061-zfs-over-iscsi.conf /root/11061-iscsi-use-luns-directly.conf
..
< virtio0: si-zfs-over-iscsi:vm-11061-disk-0,iothread=1,size=4G
---
> virtio0: si-iscsi-use-luns-directy:0.0.1.scsi-360014053727330e6ea24b98937a187fe,iothread=1,size=4G
And the corresponding storage definitions from /etc/pve/storage.cfg:
Code:
zfs: si-zfs-over-iscsi
        disable
        blocksize 16k
        iscsiprovider LIO
        pool tank
        portal 192.168.10.190
        target iqn.2003-01.org.linux-iscsi.tgt-zfs-over-iscsi.x8664:sn.88367da57773
        content images
        lio_tpg tpg1
        nowritecache 1
        sparse 0

iscsi: si-iscsi-use-luns-directy
        portal 192.168.10.190
        target iqn.2003-01.org.linux-iscsi.tgt-zfs-over-iscsi.x8664:sn.88367da57773
        content images
As the iSCSI server I use plain targetcli (the targetcli-fb package) running as a virtual server on PVE under Ubuntu 24.04. I actually run all of this as PVE virtual machines, i.e. nested virtualization, but I have also tried it on physical hardware and see the same phenomenon.
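In case it matters, the target side was created with plain targetcli commands, roughly like this (a sketch from memory, names as in my config; the zfs-over-iscsi plugin then manages the zvol backstores and LUNs by itself):
Code:
# rough setup on the Ubuntu 24.04 target VM (targetcli-fb)
targetcli /iscsi create iqn.2003-01.org.linux-iscsi.tgt-zfs-over-iscsi.x8664:sn.88367da57773
# example of one zvol-backed LUN; in practice the PVE plugin creates these itself
targetcli /backstores/block create name=vm-11061-disk-0 dev=/dev/zvol/tank/vm-11061-disk-0
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.tgt-zfs-over-iscsi.x8664:sn.88367da57773/tpg1/luns create /backstores/block/vm-11061-disk-0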
I also see similar entries in this forum from the past, but I could not figure out exactly how they were resolved, so I decided to write up my experience. I would be very thankful if you could guide me further from this point. (E.g., is it safe to just ignore those lba status messages, or is it currently better to avoid zfs-over-iscsi?)
Best regards,
Imre