ZFS over iSCSI w/TrueNAS Scale -- difficult troubleshooting

ETS_FTW

Thanks in advance. :)

TrueNAS-SCALE-23.10.1.1

Proxmox VE 8.1.4, two-node cluster, in a home/SMB environment.

Main network is 1 GbE; the storage network is 10 GbE.

I attempted to set up the storage following several guides, this one by Surfrock66 in particular. All the configuration steps (SSH, etc.) were followed.
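For reference, the resulting zfs-over-iscsi definition in /etc/pve/storage.cfg ends up looking roughly like the sketch below. Treat it as an illustration rather than my exact config: the storage ID, pool, portal IP, target IQN and TPG name are placeholders, and the iscsiprovider line depends on which guide or plugin is followed (LIO is shown here only as an example).

zfs: truenas-iscsi
        pool tank/proxmox
        portal 10.10.10.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        iscsiprovider LIO
        lio_tpg tpg1
        content images
        blocksize 16k
        sparse 1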

It's working... sort of? After deleting and recreating the storage 100 times, with multiple reboots of both systems, I can migrate the disk of a PVE VM from local storage to the TrueNAS SCALE target, and then migrate it back from the target to local. BUT: the push from local to the target gives me disconnects during the process:

qemu-img: iSCSI: NOP timeout. Reconnecting...
qemu-img: iSCSI CheckCondition: SENSE KEY:UNIT_ATTENTION(6) ASCQ:BUS_RESET(0x2900)

It DOES reconnect on its own and complete.
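In case it turns out to be network-related, one basic check on the 10 GbE path is making sure the MTU really matches end to end. A quick sketch (the NIC name and portal IP are placeholders):

# ip -br link show dev enp5s0
# ping -M do -s 8972 -c 5 10.10.10.10

The 8972-byte payload plus 28 bytes of headers makes a 9000-byte frame; if the storage network runs the standard 1500 MTU, 1472 is the size to test with instead. I'm not claiming this is the cause of the NOP timeouts, it's just an easy thing to rule out first.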

Migrating from the target back to local is much faster, but I did see this at the top of the log:

qemu-img: iSCSI GET_LBA_STATUS failed at lba 0: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)

Again, it completes, with no interruption in the process other than that one message at the beginning.
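From what I've read, GET LBA STATUS is what qemu-img uses to ask the target which blocks are actually allocated so it can keep the copy sparse; if the target rejects the command, qemu just copies everything, which would match the "one error, then it completes fine" behaviour. With the libiscsi utilities installed on the PVE node, the LUN's provisioning support can be queried directly; a sketch, with portal, IQN and LUN number as placeholders:

# iscsi-inq -e 1 -c 0xb2 iscsi://10.10.10.10/iqn.2005-10.org.freenas.ctl:proxmox/0

Page 0xb2 is the Logical Block Provisioning VPD page, which should show whether the target advertises thin-provisioning features at all.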

I'm not new to PVE, but I am new to shared storage, and I know just enough Linux to be dangerous XD. What more info can I provide to get some community assistance?
 
I might have a somewhat similar setup to yours. I have two problems that don't seem to affect anything, both of which I would nevertheless like to see addressed:

Under heavy load, I get this:

Apr 28 02:00:19 hv11 QEMU[3161]: kvm: iSCSI GET_LBA_STATUS failed at lba 2695168: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
Apr 28 02:00:19 hv11 QEMU[3161]: kvm: iSCSI GET_LBA_STATUS failed at lba 2703360: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
Apr 28 02:00:19 hv11 QEMU[3161]: kvm: iSCSI GET_LBA_STATUS failed at lba 2711552: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)

This is prominent during migrations and backups. As mentioned, nothing breaks, but it's hitting the log every split second and that's not cool.
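To put a rough number on it, the journal can be filtered by the QEMU syslog tag and counted over a backup window; a sketch (the time range is a placeholder):

# journalctl -t QEMU --since "02:00" --until "02:30" | grep -c GET_LBA_STATUS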

The second problem, most likely unrelated, is this:
# qm importdisk 9404 debian-12-generic-amd64-20240415-1718.qcow2 nas02

importing disk 'debian-12-generic-amd64-20240415-1718.qcow2' to VM 9404 ...
Warning: volblocksize (4096) is less than the default minimum block size (16384).
To reduce wasted space a volblocksize of 16384 is recommended.

On a second, otherwise identical setup, I get this instead:

Warning: volblocksize (4096) is less than the default minimum block size (8192).
To reduce wasted space a volblocksize of 8192 is recommended.
new volume ID is 'nas01:vm-149-cloudinit'
Warning: volblocksize (4096) is less than the default minimum block size (8192).
To reduce wasted space a volblocksize of 8192 is recommended.
new volume ID is 'nas01:vm-149-disk-0'

I am confused as to why the one NAS warns about 8192 while the other warns about 16384.
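My guess (and it's only a guess) is that the two NAS boxes run different OpenZFS releases, and the default volblocksize changed from 8k to 16k between them, so the "default minimum" in the warning differs. Easy enough to check on each NAS; the dataset path below is a placeholder:

# zfs version
# zfs get volblocksize tank/vm-149-disk-0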

I posted about this passthrough problem but I didn't understand the reply:
https://forum.proxmox.com/threads/w...t-minimum-block-size-8192.135053/#post-597080

All these problems are harmless, but reducing the log warnings would give me a lot more peace of mind.
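Failing a real fix, the messages can at least be dropped before they land in the log. On a node that has rsyslog installed (an assumption on my part; I don't think journald alone can filter on message content), something like this in /etc/rsyslog.d/10-drop-lba-status.conf should do it:

:msg, contains, "GET_LBA_STATUS failed" stop

followed by a restart of rsyslog (# systemctl restart rsyslog). It only hides the symptom, of course.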

I did notice this in the PVE 8.1 changelog:

  • When editing ZFS storages, display 16k as the blocksize placeholder to reflect the current ZFS defaults.

Maybe this will help in future troubleshooting.
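Along the same lines, I believe the blocksize can be set explicitly on the Proxmox storage definitions instead of relying on the default, which should at least stop newly created disks from tripping the volblocksize warning; a sketch using the storage IDs from above:

# pvesm set nas01 --blocksize 8k
# pvesm set nas02 --blocksize 16k

(Or 16k on both, if matching the newer ZFS default is preferable; existing zvols keep whatever volblocksize they were created with.)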

