Switch from thin provision to thick provision

UK SPEED

New Member
Dec 23, 2023
I have a Proxmox host with two NVMe drives set up with ZFS and thin provisioning enabled on them.

Under this Proxmox PVE there are some VMs and containers. I would now like to switch to thick provisioning because I want faster response times for the DB.

Can I just untick the thin provision option to switch to thick provisioning? Will that affect both the VMs and containers already on my ZFS storage, or do I need to recreate the VMs and containers on the same storage after unticking thin provision?


 
With ZFS it will affect new virtual disks but not existing virtual disks. I don't expect any performance difference, as you can simply change a setting on your existing zvols to switch them from thin/sparse to full. Please look up the setting in any ZFS documentation for more details.
 
@leesteken

"as you can simply change a setting on your existing zvols to change them from thin/sparse to full. Please look up the setting in any ZFS documentation for more details."

Can you help me with that with some links and docs, please?
 
Also, will a backup and restore of the VM help to change it from thin to thick on the same drive?
Yes, recreating a virtual disk (as in restoring from backup) will do. But it's much easier to simply change the refreservation on the current zvol: https://linux.die.net/man/8/zfs
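For example, a rough sketch (the zvol path rpool/data/vm-100-disk-0 is a placeholder; use whatever zfs list shows on your system, and refreservation=auto needs OpenZFS 0.8 or newer):

# find the zvol that backs the VM disk
zfs list -t volume

# a thin/sparse zvol shows refreservation=none
zfs get volsize,refreservation rpool/data/vm-100-disk-0

# make it thick: reserve space for the full volume size
zfs set refreservation=auto rpool/data/vm-100-disk-0

On older OpenZFS versions without "auto", you can set refreservation to the reported volsize by hand.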
@leesteken

"as you can simply change a setting on your existing zvols to change them from thin/sparse to full. Please look up the setting in any ZFS documentation for more details."

Can you help me with that with some links and docs, please?
Sure, I'll search the internet for you. It's called refreservation and here is a link: https://linux.die.net/man/8/zfs
 
Maybe create a not-thin VM disk and compare the ZFS settings between a thin VM disk and a not-thin VM disk, to make sure I got it right.
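For instance, assuming vm-100-disk-0 is an existing thin disk and vm-999-disk-0 is a freshly created one with "Thin provision" unticked (both names are made up):

zfs get volsize,refreservation rpool/data/vm-100-disk-0 rpool/data/vm-999-disk-0
# the thin disk should show refreservation=none,
# the thick one a refreservation close to its volsize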
 
For better DB performance you might also want to set the recordsize and volblocksize to the block size your DB uses when reading/writing records. If you want this to persist across restores, I would recommend creating another dataset and adding it as a new storage of type ZFS. Then set the recordsize of that dataset via "zfs set recordsize=..." and the volblocksize by setting a proper value in the "Block size" field of that ZFS storage.
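A sketch of those steps, assuming hypothetical names rpool/db16k for the dataset and zfs-db16k for the storage (the pvesm options follow the zfspool storage type in the PVE storage documentation):

# dedicated dataset for the DB disks
zfs create rpool/db16k
zfs set recordsize=16K rpool/db16k

# add it as a new ZFS storage; --blocksize is what the "Block size" GUI field sets
pvesm add zfspool zfs-db16k --pool rpool/db16k --blocksize 16k --content images,rootdir

New virtual disks created on this storage then get a 16K volblocksize; existing disks keep their old value until they are recreated.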
And if you are only using consumer/prosumer/NAS-grade SSDs, consider buying proper enterprise SSDs with PLP (power-loss protection) so sync write performance won't suffer. Without PLP, your writes to the DB will be way slower.
 

Dunuin

I have two Micron 9400 NVMe drives in a mirror, and I have a big MariaDB database. What are the best values for recordsize and volblocksize?
 
It should be 16K volblocksize + 16K recordsize for MariaDB. Old PVE installations defaulted to an 8K volblocksize + 128K recordsize, and new installations default to 16K volblocksize + 128K recordsize. So you might want to check the zvol containing your DB via zfs get volblocksize,recordsize.
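To verify, something along these lines (the zvol path rpool/data/vm-100-disk-0 is a placeholder for wherever your DB disk actually lives):

# volblocksize is fixed when the zvol is created, so inspect the existing DB disk
zfs get volblocksize,recordsize rpool/data/vm-100-disk-0

# inside the VM, confirm InnoDB's page size (16K by default):
# mysql -e "SHOW VARIABLES LIKE 'innodb_page_size';"

Since volblocksize can't be changed after creation, a disk with the wrong value has to be recreated, e.g. via backup and restore onto a storage with the right "Block size".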

You might also want to have a look at the "performance tuning" page of the ZFS documentation: https://openzfs.readthedocs.io/en/latest/performance-tuning.html#innodb-mysql
 
NAME                       PROPERTY      VALUE  SOURCE
rpool                      volblocksize  -      -
rpool                      recordsize    128K   default
rpool/ROOT                 volblocksize  -      -
rpool/ROOT                 recordsize    128K   default
rpool/ROOT/pve-1           volblocksize  -      -
rpool/ROOT/pve-1           recordsize    128K   default
rpool/data                 volblocksize  -      -
rpool/data                 recordsize    128K   default
rpool/var-lib-vz           volblocksize  -      -
rpool/var-lib-vz           recordsize    128K   default
zfs2024                    volblocksize  -      -
zfs2024                    recordsize    128K   default
zfs2024/subvol-100-disk-0  volblocksize  -      -
zfs2024/subvol-100-disk-0  recordsize    128K   default
zfs2024/subvol-102-disk-0  volblocksize  -      -
zfs2024/subvol-102-disk-0  recordsize    128K   default
zfs2024/subvol-103-disk-0  volblocksize  -      -
zfs2024/subvol-103-disk-0  recordsize    128K   default
zfs2024/subvol-104-disk-0  volblocksize  -      -
zfs2024/subvol-104-disk-0  recordsize    128K   default
zfs2024/subvol-105-disk-0  volblocksize  -      -
zfs2024/subvol-105-disk-0  recordsize    128K   default
zfs2024/subvol-107-disk-0  volblocksize  -      -
zfs2024/subvol-107-disk-0  recordsize    128K   default
zfs2024/subvol-108-disk-0  volblocksize  -      -
zfs2024/subvol-108-disk-0  recordsize    128K   default
root@asus:~#
 
