While moving storage from local-lvm to TrueNAS ZFS over iSCSI: Warning: volblocksize (4096) is less than the default minimum block size (8192).

eugenevdm

Active Member
Dec 13, 2020
Hi! When I move a VM's storage from `local-lvm` to a TrueNAS Core-based ZFS volume over iSCSI, I get the following warning:

Code:
Task viewer: VM 102 - Move disk

create full clone of drive scsi0 (local-lvm:vm-102-disk-0)
Warning: volblocksize (4096) is less than the default minimum block size (8192).
To reduce wasted space a volblocksize of 8192 is recommended.

In order to get ZFS over iSCSI working, I used the plugin https://github.com/TheGrandWazoo/freenas-proxmox

I have no idea how to approach this issue. As far as I know I have chosen every single default possible.

Could someone please give me some tips on what the next steps of the troubleshooting process are?
 
Hi,
on the host with ZFS, what does `zfs get volblocksize <name of ZFS>` say? Please also share your storage configuration, /etc/pve/storage.cfg.
 
ZFS host block size configuration for disk I'm moving:

Bash:
zfs get volblocksize pool/vm-102-disk-0
NAME                PROPERTY      VALUE     SOURCE
pool/vm-102-disk-0  volblocksize  4K        -

PVE Storage configuration:

Bash:
cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,backup,iso

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

lvm: new-lvm-volume
    vgname pve
    content images,rootdir
    shared 0

lvmthin: new-lvm-thin-volume
    thinpool data
    vgname pve
    content rootdir,images

zfs: iscsi3
    blocksize 4k
    iscsiprovider freenas
    pool pool
    portal 192.168.1.25
    target iqn.2005-10.org.freenas.ctl:target
    content images
    freenas_apiv4_host 192.168.1.25
    freenas_password secret
    freenas_use_ssl 0
    freenas_user root
    nowritecache 0
    sparse 1
 
After battling to understand this problem and all the terminology in my near-perfect Proxmox VE environment, I have finally made substantial progress in solving the issue for my environment.

Environment:

- Proxmox VE
- ZFS over iSCSI to TrueNAS Scale

When you set up ZFS over iSCSI with Proxmox VE, the default volblocksize is 4k.

When I started using TrueNAS Core, which relies on OpenZFS, the default volblocksize there was 8k.

Somewhere along the line TrueNAS Core was updated, and therefore, as per https://github.com/openzfs/zfs/pull/12406, the ZFS default became 16k.
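
To confirm which default your NAS is actually applying, you can check the OpenZFS version on the TrueNAS host and create a throwaway zvol without specifying a volblocksize. The pool name `pool` below is just the one from my configuration above, so substitute your own:

Bash:
# Show the OpenZFS version; the 16k default came in with the PR linked above
zfs version

# Create a small sparse test zvol without setting volblocksize,
# then see which default it picked up
zfs create -s -V 1G pool/blocksize-test
zfs get volblocksize pool/blocksize-test

# Clean up the test zvol
zfs destroy pool/blocksize-test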

So that's why, out of the blue, I got new messages, namely:

Bash:
Warning: volblocksize (8192) is less than the default minimum block size (16384).
To reduce wasted space a volblocksize of 16384 is recommended.

I was really confused. After much reading, and after discovering a scary script (with nice explanations) that can retrospectively change this value, I had another thought:

What if I just create a new ZFS over iSCSI storage pointing at the exact same server, but using 16k instead? And then what if I just move the data from NASX to NASX-16k?

So far, so good. Will keep you posted.
 
Sorry to revive this thread. I'm suffering from the same issue, same setup with ZFS over iSCSI and TrueNAS (SCALE). After reading both the GitHub thread and the script: does this apply only to TrueNAS Core, or can I use it with SCALE? It's also not clear from what the OP says where to execute the script, on PVE or on TrueNAS?

Thanks for any feedback.
 
I solved this the hardcore way. I created another iSCSI storage using 16384, pointing to the same disk as the original 8192 volume. I then live migrated my VMs one by one (8192 disk to the same disk, but at 16384) and after many hours I had everything at the right size. I was then able to delete the 8192 volume and was left with only the 16k volume.
 
I solved this the hardcore way. I created another iSCSI storage using 16384, pointing to the same disk as the original 8192 volume. I then live migrated my VMs one by one (8192 disk to the same disk, but at 16384) and after many hours I had everything at the right size. I was then able to delete the 8192 volume and was left with only the 16k volume.
Thanks for your reply, let me try to digest that. So what you did was:

- In the Proxmox GUI, did you create a new volume with a 16384 block size, or did you do it from TrueNAS?
- Live migrated, how did you do that?

I have a two-node PVE cluster with TNS as storage; if you could just share a more detailed approach it would be much appreciated.

Regards.
 
I am assuming you're using the Grand Wazoo plugin. At some point you added a new volume and accepted the default of 8192. If you study the screenshot below, you'll see that the default there was 4K. This is clearly the wrong default if you want to avoid those warnings.

[Screenshot: ZFS over iSCSI storage dialog with the ZFS Block Size defaulting to 4k]

So what I did was add an additional ZFS over iSCSI link, replicating the incorrect one in every field EXCEPT the ZFS Block Size, which I changed to 16k to avoid the warnings.

So then I ended up with duplicate iSCSI links, all pointing to the same disk.
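
In /etc/pve/storage.cfg terms, the duplicate entry is just a copy of the old one with a new storage ID and blocksize. The values below are modelled on my earlier iscsi3 entry, so treat them as placeholders for your own setup:

Code:
zfs: nas01-16k
    blocksize 16k
    iscsiprovider freenas
    pool pool
    portal 192.168.1.25
    target iqn.2005-10.org.freenas.ctl:target
    content images
    freenas_apiv4_host 192.168.1.25
    freenas_password secret
    freenas_use_ssl 0
    freenas_user root
    nowritecache 0
    sparse 1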

Then I went into each VM, clicked Hardware, clicked the hard drive, and then reassigned the disk via Move disk:

[Screenshot: the VM's Hardware tab, moving the disk to the nas01-16k storage via Move disk]

You can see it in this screenshot. Before, the storage for a lot of VMs was just called nas01 (its block size was 8192, and I had those warnings). I called the new one nas01-16k to avoid confusion. Then I moved all the disks to the new 16k iSCSI storage on the same underlying disk. It took a while, but it's disk-to-same-disk, so not too hectic.
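
If you prefer the CLI, the same move can be done with qm move_disk on the PVE node; the VM ID, disk name and storage name here are just examples, so adjust them to your own VMs:

Bash:
# Move the disk of VM 102 from the old 8k storage to the new 16k one
qm move_disk 102 scsi0 nas01-16k

# Or delete the old copy instead of keeping it around as an "unused disk"
qm move_disk 102 scsi0 nas01-16k --delete 1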

PS. I don't really know what you gain by moving from 8192 to 16k, but for me it was all about not having any warnings.
 
I am assuming you're using the Grand Wazoo plugin. At some point you added a new volume and accepted the default of 8192. If you study the screenshot below, you'll see that the default there was 4K. This is clearly the wrong default if you want to avoid those warnings.

[Screenshot: ZFS over iSCSI storage dialog with the ZFS Block Size defaulting to 4k]
You're right, I'm on the Grand Wazoo plugin indeed, and accepted the defaults in my own ignorance of that upcoming issue.
So what I did was add an additional ZFS over iSCSI link, replicating the incorrect one in every field EXCEPT the ZFS Block Size, which I changed to 16k to avoid the warnings.

So then I ended up with duplicate iSCSI links, all pointing to the same disk.

Then I went into each VM, clicked Hardware, clicked the hard drive, and then reassigned the disk via Move disk:

[Screenshot: the VM's Hardware tab, moving the disk to the nas01-16k storage via Move disk]

You can see it in this screenshot. Before, the storage for a lot of VMs was just called nas01 (its block size was 8192, and I had those warnings). I called the new one nas01-16k to avoid confusion. Then I moved all the disks to the new 16k iSCSI storage on the same underlying disk. It took a while, but it's disk-to-same-disk, so not too hectic.
I did that from the CLI; I copied this section:

Code:
zfs: tns-zfs-iscsi
        blocksize 16k
        iscsiprovider freenas
        pool store-fast1
        portal 100.64.100.5
        target iqn.2005-10.org.freenas.ctl:pve-vm
        content images
        freenas_apiv4_host 100.64.100.5:81
        freenas_password yikes
        freenas_use_ssl 0
        freenas_user admin
        nodes pve,pve02
        nowritecache 0
        sparse 1

And I'm moving images. I have quite a few, so it'll be time-consuming.
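
With quite a few VMs, a small loop on the PVE node saves a lot of clicking. The VM IDs and disk name below are only placeholders, assuming tns-zfs-iscsi is the 16k entry shown above:

Bash:
# Move scsi0 of each listed VM to the new 16k-blocksize storage, one at a time
for vmid in 100 101 102; do
    qm move_disk "$vmid" scsi0 tns-zfs-iscsi --delete 1
done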

PS. I don't really know what you gain by moving from 8192 to 16k, but for me it was all about not having any warnings.
Well, I'm also not really aware of any gain, apart from getting rid of the warning and, IMHO, making snapshots and cloning faster, which is what brought me to your post.

Thanks for taking the time to write and explain in such great detail; so few people around these communities are willing to do so nowadays.

Appreciate you!