Excessive writes to NVMe on ZFS

Abs0lutZero

Hi Guys

I'm running Proxmox VE 6.4-13 and recently installed a Corsair MP600 1TB NVMe using a PCIe riser card.

The NVMe is set up with ZFS (single disk, compression on, ashift=12).
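For reference, the pool setup is roughly equivalent to the following commands (pool and device names here are placeholders, not my actual ones):

    # illustrative only -- substitute your own pool name and disk ID
    zpool create -o ashift=12 tank /dev/disk/by-id/<nvme-disk-id>
    zfs set compression=on tank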

I am seeing a concerning amount of writes and I do not know why. I am not running any serious workloads, just UniFi, Nginx Proxy, Heimdall, Untangle, Zabbix, and an OpenVPN server.

I did set each VM to use Write Back cache for improved performance. I have a reliable UPS solution.

I have set my ZFS ARC size to 24 GB (total memory is 78 GB, DDR3 ECC).
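I capped it the usual way, via a module option (the value is in bytes):

    # /etc/modprobe.d/zfs.conf -- 24 GiB = 24 * 1024^3 bytes
    options zfs zfs_arc_max=25769803776
    # apply with "update-initramfs -u" and a reboot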

Is there a way for me to reduce the writes to the NVMe while retaining performance? And why does the percentage used seem wrong? The MP600 1TB is rated for 1800 TBW, so 2% of that should be 36 TB, yet I have reached 2% at only 12 TB.
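The wear figures in the attached screenshot come from SMART; the same values can be read with something like:

    # "Data Units Written" counts units of 1000 x 512 bytes (NVMe spec)
    smartctl -a /dev/nvme0 | grep -E 'Percentage Used|Data Units Written'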
 

Attachment: SSD Wear.png
Does the filesystem inside your VM use a 4k block size? Did you set the Block Size of the storage to 4k before you created the virtual disk? Did you add args: -global scsi-hd.physical_block_size=4k to your VM configuration and use VirtIO SCSI drives (as mentioned in this feature request)? I believe doing this can reduce the write amplification (but it cannot be prevented completely on any system).
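To illustrate (device names, storage names, and VM IDs below are examples, not taken from your setup):

    # inside a Linux guest, for an ext4 filesystem:
    tune2fs -l /dev/sda1 | grep 'Block size'

    # on the host: Datacenter -> Storage -> <your ZFS storage> -> Block Size: 4k
    # (this only affects newly created virtual disks)

    # /etc/pve/qemu-server/<vmid>.conf
    args: -global scsi-hd.physical_block_size=4k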

EDIT: ZFS is a copy-on-write filesystem; not only does the data need to be written, but metadata and checksums need to be updated as well, sometimes all the way up to the root. For some writes the ZIL is written too. So an amplification of 2x to 3x does not sound too bad, IMHO.
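If you want to put a rough number on the amplification, compare what the pool writes with what the drive itself reports over the same interval:

    # writes as seen by ZFS, sampled every 10 seconds:
    zpool iostat -v 10
    # writes as seen by the drive (read twice, diff the values):
    smartctl -a /dev/nvme0 | grep 'Data Units Written'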
 
I have done none of the steps you mentioned. I would need to check whether the filesystem on the VMs is using a 4k block size.

I have always used VirtIO Block when setting up virtual machines. Is this considered a bad choice when using ZFS?
 
These "improvements" are hardly documented anywhere, but I do think they will help. I think VirtIO SCSI is better than VirtIO Block because it has very similar performance and supports more features (like setting the virtual driver sector size). If you need a separate devices per drive, you can use VirtIO SCSI Single.
 
For those who are curious: I resolved the problem by switching back to LVM-Thin.

This is not really a fix for the underlying problem, but I see better performance and far less wear, so I'll stick with LVM + ext4 for now.
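For completeness, the storage I switched to is just a plain LVM-thin entry; it looks roughly like this (volume group and pool names are examples):

    # /etc/pve/storage.cfg -- illustrative entry
    lvmthin: local-nvme
        thinpool data
        vgname nvme
        content images,rootdir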
 
