pveperf and ext4

mir

Hi all,

I have found the reason for the bad fsync performance using ext4 on an SSD with Proxmox. It is caused by the mount option discard, which has the effect that a trim is performed for every commit. This extra trim action dramatically slows down fsync performance.

Using mount option discard: fsync -> 284.14
Without mount option discard: fsync -> 1428.61
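
To verify which mounts actually carry the discard option, and to benchmark again after dropping it from /etc/fstab, something like the following should do. This is only a rough sketch: /var/lib/vz is just the example path from this thread, and a reboot works just as well as the remount.
Code:
# show which filesystems are currently mounted with discard
grep discard /proc/mounts

# after removing 'discard' from /etc/fstab, remount and verify
mount -o remount /var/lib/vz
grep '/var/lib/vz' /proc/mounts

# re-run the benchmark to compare fsync numbers
pveperf /var/lib/vz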

Since trim is crucial to maintaining performance, use this instead:
Code:
1) drop the mount option discard
2) do the following instead:
     - create the file '/etc/cron.daily/fstrim'
     - containing this (assuming SSD partitions on / and /var/lib/vz):
       #!/bin/sh

       PATH=/bin:/sbin:/usr/bin:/usr/sbin

       ionice -n7 fstrim -v /
       ionice -n7 fstrim -v /var/lib/vz
3) chmod a+x /etc/cron.daily/fstrim
Enjoy your new speedy SSD:)
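
To make sure cron will actually pick the script up, a quick sanity check could look like this (just a sketch):
Code:
# cron.daily scripts must be executable and must not have a file extension
ls -l /etc/cron.daily/fstrim

# list the scripts run-parts would execute for cron.daily
run-parts --test /etc/cron.daily

# run it once by hand to see the fstrim -v output
/etc/cron.daily/fstrim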
 
I agree that trim should be done on a regular (daily or weekly) basis, not forced via a mount option. Just one comment on your trim script: ionice requires the CFQ scheduler, while Proxmox currently uses DEADLINE. While fstrim is running, performance will therefore be horrible, so it's advisable to run it during low-IO periods (don't forget that backups are high IO as well).
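
For reference, the active scheduler can be checked and switched at runtime roughly like this (sda is just an example device name; the change does not persist across reboots):
Code:
# the scheduler shown in brackets is the active one, e.g. noop deadline [cfq]
cat /sys/block/sda/queue/scheduler

# switch to cfq at runtime so ionice priorities take effect
echo cfq > /sys/block/sda/queue/scheduler

# or back to deadline
echo deadline > /sys/block/sda/queue/scheduler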

I also wonder whether your terrible discard fsync numbers were produced with CFQ. It is worth pointing out that CFQ has a couple of tunable parameters, outlined at this link:
http://doc.opensuse.org/products/dr...a.tuning.io.html#sec.tuning.io.schedulers.cfq

/sys/block/<device>/queue/iosched/slice_idle
When a task has no more I/O to submit in its time slice, the I/O scheduler waits for a while before scheduling the next thread to improve locality of I/O. For media where locality does not play a big role (SSDs, SANs with lots of disks) setting /sys/block/<device>/queue/iosched/slice_idle to 0 can improve the throughput considerably.

/sys/block/<device>/queue/iosched/quantum
This option limits the maximum number of requests that are processed by the device at once. The default value is 4. For storage with several disks, this setting can unnecessarily limit the parallel processing of requests, so increasing the value can improve performance, although it may also increase the latency of some I/O because more requests get buffered inside the storage. When changing this value, you can also consider tuning /sys/block/<device>/queue/iosched/slice_async_rq (the default value is 2), which limits the maximum number of asynchronous requests (usually write requests) that are submitted in one time slice.

/sys/block/<device>/queue/iosched/low_latency
For workloads where the latency of I/O is crucial, setting /sys/block/<device>/queue/iosched/low_latency to 1 can help.

So if someone still uses the CFQ scheduler on an SSD, it might be worth testing these parameters (and comparing them to deadline).
For an SSD, I would set slice_idle to 0, quantum to 32 and low_latency to 1. I'm not sure about slice_async_rq, but would experiment with 8 and 16.
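
If someone wants to try those values, a sketch of the runtime settings could look like this (sda is again just an example device; the iosched directory only exists while CFQ is the active scheduler, and the values do not survive a reboot):
Code:
echo 0  > /sys/block/sda/queue/iosched/slice_idle
echo 32 > /sys/block/sda/queue/iosched/quantum
echo 1  > /sys/block/sda/queue/iosched/low_latency

# experimental: raise the async requests per time slice (try 8 or 16)
echo 8  > /sys/block/sda/queue/iosched/slice_async_rq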
 
I'm doing a test with a Samsung 840 EVO, 250 GB, ext4 formatted, Proxmox 3.1 from pvetest with kernels 2.6.x and 3.10, NO discard. At the moment the SSD is on a SATA2 channel (3 Gb/s), but I don't think that is the problem. I've also tried with and without "noatime". With both kernels I get:
Code:
CPU BOGOMIPS:      24744.60
REGEX/SECOND:      2414655
HD SIZE:           39.37 GB (/dev/mapper/pve-root)
BUFFERED READS:    216.88 MB/sec
AVERAGE SEEK TIME: 0.05 ms
FSYNCS/SECOND:     215.93
my fstab is:
Code:
/dev/pve/root / ext4 errors=remount-ro,noatime 0 1
/dev/pve/data /var/lib/vz ext4 defaults,noatime 0 1
UUID=45cfb83d-b3ad-4606-974e-a27985612fb4 /boot ext4 defaults,noatime 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
Any ideas? BTW, the "Code:" tag does not seem to work for me.
 
