Switch from thin provision to thick provision

UK SPEED

I have a Proxmox host with 2 NVMe drives using ZFS, with thin provisioning enabled on them.

Under this Proxmox PVE there are some VMs and containers. Now I would like to switch to thick provisioning because I want faster response times for the DB.

Can I just untick the thin provision option to get thick provisioning, and will that affect both the VMs and containers under my ZFS storage, or do I have to recreate the VMs and containers on the same storage after unticking thin provision?


 
With ZFS it will affect new virtual disks but not existing virtual disks. I don't expect any performance difference, as you can simply change a setting on your existing zvols to switch them from thin/sparse to full. Please look up the setting in the ZFS documentation for more details.
 
@leesteken said:
"as you can simply change a setting on your existing zvols to switch them from thin/sparse to full. Please look up the setting in the ZFS documentation for more details."
Can you help me with that with some links and docs, please?
 
Also, will backup and restore of the VM help to change thin to thick on the same drive?
 
UK SPEED said:
Also, will backup and restore of the VM help to change thin to thick on the same drive?
Yes, recreating a virtual disk (as in restoring from backup) will do. But it's much easier to simply change the refreservation on the current zvol: https://linux.die.net/man/8/zfs
UK SPEED said:
Can you help me with that with some links and docs, please?
Sure, I'll search the internet for you. It's called refreservation and here is a link: https://linux.die.net/man/8/zfs
 
Maybe create a non-thin VM disk and compare the ZFS settings between a thin VM disk and a non-thin one, to make sure I got it right.
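Roughly, it should look like this (the zvol name below is just an example, check your actual names with zfs list -t volume, and verify against the zfs(8) man page):

# Show the current provisioning-related properties of the zvol
zfs get volsize,refreservation,usedbyrefreservation rpool/data/vm-100-disk-0

# Thick-provision the existing zvol by reserving its full volsize
# (refreservation=auto does this on current OpenZFS releases)
zfs set refreservation=auto rpool/data/vm-100-disk-0

# Or set the reservation explicitly to the zvol's volsize, e.g. 32G
zfs set refreservation=32G rpool/data/vm-100-disk-0

Setting refreservation back to none would make the zvol thin/sparse again.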
 
For better DB performance you might also want to set the recordsize and volblocksize to the block size your DB uses to read/write records. If you want this to persist across restores, I would recommend creating another dataset and adding it as a new storage of type ZFS. Then set the recordsize of that dataset via "zfs set recordsize=..." and the volblocksize by setting a proper value in the "Block size" field of that ZFS storage.
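As a rough sketch (the dataset name "rpool/dbstore" and storage ID "dbstore" are just examples, and 16K is the value typically suggested for InnoDB/MariaDB):

# Create a dedicated dataset for DB disks and set its recordsize
zfs create rpool/dbstore
zfs set recordsize=16K rpool/dbstore

# Add it as a new ZFS storage in PVE; --blocksize is what the
# "Block size" field in the GUI sets (used as volblocksize for new zvols)
pvesm add zfspool dbstore --pool rpool/dbstore --blocksize 16k --content images,rootdir

New virtual disks created on that storage would then pick up these values; existing disks would need to be moved or recreated there.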
And consider buying proper enterprise SSDs with PLP (power-loss protection) in case you are only using consumer/prosumer/NAS-grade SSDs. Without PLP, sync writes to the DB will be way slower.
 

@Dunuin

I have two Micron 9400 NVMe drives in a ZFS mirror, and a big MariaDB database. What are the best values for recordsize and volblocksize?
 
Should be 16K volblocksize + 16K recordsize for MariaDB. Old PVE installations defaulted to 8K volblocksize + 128K recordsize and new installations to 16K volblocksize + 128K recordsize. So you might want to check the zvol containing your DB via zfs get volblocksize,recordsize.

You might also want to have a look at the "performance tuning" page of the ZFS documentation: https://openzfs.readthedocs.io/en/latest/performance-tuning.html#innodb-mysql
 
NAME PROPERTY VALUE SOURCE
rpool volblocksize - -
rpool recordsize 128K default
rpool/ROOT volblocksize - -
rpool/ROOT recordsize 128K default
rpool/ROOT/pve-1 volblocksize - -
rpool/ROOT/pve-1 recordsize 128K default
rpool/data volblocksize - -
rpool/data recordsize 128K default
rpool/var-lib-vz volblocksize - -
rpool/var-lib-vz recordsize 128K default
zfs2024 volblocksize - -
zfs2024 recordsize 128K default
zfs2024/subvol-100-disk-0 volblocksize - -
zfs2024/subvol-100-disk-0 recordsize 128K default
zfs2024/subvol-102-disk-0 volblocksize - -
zfs2024/subvol-102-disk-0 recordsize 128K default
zfs2024/subvol-103-disk-0 volblocksize - -
zfs2024/subvol-103-disk-0 recordsize 128K default
zfs2024/subvol-104-disk-0 volblocksize - -
zfs2024/subvol-104-disk-0 recordsize 128K default
zfs2024/subvol-105-disk-0 volblocksize - -
zfs2024/subvol-105-disk-0 recordsize 128K default
zfs2024/subvol-107-disk-0 volblocksize - -
zfs2024/subvol-107-disk-0 recordsize 128K default
zfs2024/subvol-108-disk-0 volblocksize - -
zfs2024/subvol-108-disk-0 recordsize 128K default
root@asus:~#
 
So you only have LXCs using datasets, and no VMs with zvols. See the link above on how to optimize datasets for MySQL.
 
Yes, the MariaDB is in a container now. Is that alright, or do I need to do some tuning for the container too?
 
LXCs do use datasets, so there is a lot to optimize and the stuff mentioned in the link will be valid too:

InnoDB (MySQL)

Make separate datasets for InnoDB’s data files and log files. Set recordsize=16K on InnoDB’s data files to avoid expensive partial record writes and leave recordsize=128K on the log files. Set primarycache=metadata on both to prefer InnoDB’s caching. Set logbias=throughput on the data to stop ZIL from writing twice.

Set skip-innodb_doublewrite in my.cnf to prevent innodb from writing twice. The double writes are a data integrity feature meant to protect against corruption from partially-written records, but those are not possible on ZFS. It should be noted that Percona’s blog had advocated using an ext4 configuration where double writes were turned off for a performance gain, but later recanted it because it caused data corruption. Following a well timed power failure, an in place filesystem such as ext4 can have half of a 8KB record be old while the other half would be new. This would be the corruption that caused Percona to recant its advice. However, ZFS’ copy on write design would cause it to return the old correct data following a power failure (no matter what the timing is). That prevents the corruption that the double write feature is intended to prevent from ever happening. The double write feature is therefore unnecessary on ZFS and can be safely turned off for better performance.

On Linux, the driver’s AIO implementation is a compatibility shim that just barely passes the POSIX standard. InnoDB performance suffers when using its default AIO codepath. Set innodb_use_native_aio=0 and innodb_use_atomic_writes=0 in my.cnf to disable AIO. Both of these settings must be disabled to disable AIO.
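Translated to your setup, a rough sketch (dataset names, the VMID 103, and the mount paths are just examples, assuming default ZFS mountpoints; adjust to your pool, container, and MariaDB datadir/log locations):

# On the PVE host: separate datasets for InnoDB data files and log files
zfs create -o recordsize=16K -o primarycache=metadata -o logbias=throughput zfs2024/mariadb-data
zfs create -o recordsize=128K -o primarycache=metadata zfs2024/mariadb-log

# Bind-mount them into the container (example VMID 103)
pct set 103 -mp0 /zfs2024/mariadb-data,mp=/var/lib/mysql
pct set 103 -mp1 /zfs2024/mariadb-log,mp=/var/lib/mysql-log

# Inside the container, in my.cnf ([mysqld] section), per the quoted guidance:
#   skip-innodb_doublewrite
#   innodb_use_native_aio = 0
#   innodb_use_atomic_writes = 0
#   innodb_log_group_home_dir = /var/lib/mysql-log

Stop MariaDB and copy the existing data over before switching the mounts, and test with your own workload before relying on it.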
 
