Low disk system performance in Proxmox 7.4 - 8

Roman337

New Member
Oct 24, 2022
Good day, we have encountered an issue with low file-system performance on new servers running Proxmox 8.2. The performance drops are especially noticeable in Firebird 4 databases on virtual machines running Debian 12. Initially we suspected the ZFS file-system configuration, but testing showed that ZFS is not the cause; it only makes the drops more evident. We benchmarked with the HQBird performance test, and on Windows virtual machines with CrystalDiskMark 8.0.5.
Here are the HQBird test results on Firebird 4:
Part 1:
Proxmox 7.2 kernel 5.15.30-3 on zfs file system
Score value for Inserts: 21033 rec/sec
Score value for Updates: 18957 rec/sec
Score value for Deletes: 37428 rec/sec
Proxmox 7.2 kernel 5.15.30-3 on ext4 file system
Score value for Inserts: 20422 rec/sec
Score value for Updates: 20103 rec/sec
Score value for Deletes: 41658 rec/sec
Part 2:
Proxmox 7.4 kernel 5.15.158-2 on zfs file system
Score value for Inserts: 9587 rec/sec
Score value for Updates: 9651 rec/sec
Score value for Deletes: 12855 rec/sec
Proxmox 7.4 kernel 5.15.158-2 on ext4 file system
Score value for Inserts: 12120 rec/sec
Score value for Updates: 14280 rec/sec
Score value for Deletes: 35446 rec/sec
Part 3:
Proxmox 8 kernel 6.8.12-2-pve on zfs file system
Score value for Inserts: 6062 rec/sec
Score value for Updates: 3919 rec/sec
Score value for Deletes: 5505 rec/sec
Proxmox 8 kernel 6.8.12-2-pve on ext4 file system
Score value for Inserts: 8276 rec/sec
Score value for Updates: 8070 rec/sec
Score value for Deletes: 24030 rec/sec

Here are the CrystalDiskMark 8.0.5 test results from Windows:
Part 1:
Proxmox 7.2 kernel 5.15.30-3 on zfs file system
[Read]
SEQ 1MiB (Q= 8, T= 1): 9199.204 MB/s [ 8773.0 IOPS] < 911.04 us>
SEQ 1MiB (Q= 1, T= 1): 2833.754 MB/s [ 2702.5 IOPS] < 369.73 us>
RND 4KiB (Q= 32, T= 1): 582.288 MB/s [ 142160.2 IOPS] < 219.44 us>
RND 4KiB (Q= 1, T= 1): 178.315 MB/s [ 43533.9 IOPS] < 22.77 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 1373.574 MB/s [ 1309.9 IOPS] < 4334.17 us>
SEQ 1MiB (Q= 1, T= 1): 1490.037 MB/s [ 1421.0 IOPS] < 702.57 us>
RND 4KiB (Q= 32, T= 1): 545.708 MB/s [ 133229.5 IOPS] < 218.03 us>
RND 4KiB (Q= 1, T= 1): 149.076 MB/s [ 36395.5 IOPS] < 27.27 us>
Proxmox 7.2 kernel 5.15.30-3 on ext4 file system
[Read]
SEQ 1MiB (Q= 8, T= 1): 1540.941 MB/s [ 1469.6 IOPS] < 5439.38 us>
SEQ 1MiB (Q= 1, T= 1): 1490.720 MB/s [ 1421.7 IOPS] < 703.08 us>
RND 4KiB (Q= 32, T= 1): 643.311 MB/s [ 157058.3 IOPS] < 203.15 us>
RND 4KiB (Q= 1, T= 1): 30.480 MB/s [ 7441.4 IOPS] < 134.18 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 689.138 MB/s [ 657.2 IOPS] < 12138.04 us>
SEQ 1MiB (Q= 1, T= 1): 689.846 MB/s [ 657.9 IOPS] < 1519.14 us>
RND 4KiB (Q= 32, T= 1): 516.041 MB/s [ 125986.6 IOPS] < 248.20 us>
RND 4KiB (Q= 1, T= 1): 118.585 MB/s [ 28951.4 IOPS] < 34.35 us>
Part 2:
Proxmox 7.4 kernel 5.15.158-2 on zfs file system
[Read]
SEQ 1MiB (Q= 8, T= 1): 7792.773 MB/s [ 7431.8 IOPS] < 1075.63 us>
SEQ 1MiB (Q= 1, T= 1): 1798.773 MB/s [ 1715.4 IOPS] < 582.26 us>
RND 4KiB (Q= 32, T= 1): 445.574 MB/s [ 108782.7 IOPS] < 290.50 us>
RND 4KiB (Q= 1, T= 1): 97.736 MB/s [ 23861.3 IOPS] < 41.65 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 1589.959 MB/s [ 1516.3 IOPS] < 2984.29 us>
SEQ 1MiB (Q= 1, T= 1): 2370.763 MB/s [ 2260.9 IOPS] < 440.83 us>
RND 4KiB (Q= 32, T= 1): 421.211 MB/s [ 102834.7 IOPS] < 296.58 us>
RND 4KiB (Q= 1, T= 1): 85.951 MB/s [ 20984.1 IOPS] < 47.44 us>
Proxmox 7.4 kernel 5.15.158-2 on ext4 file system
[Read]
SEQ 1MiB (Q= 8, T= 1): 1606.013 MB/s [ 1531.6 IOPS] < 5218.33 us>
SEQ 1MiB (Q= 1, T= 1): 1223.629 MB/s [ 1166.9 IOPS] < 854.88 us>
RND 4KiB (Q= 32, T= 1): 511.212 MB/s [ 124807.6 IOPS] < 253.10 us>
RND 4KiB (Q= 1, T= 1): 25.482 MB/s [ 6221.2 IOPS] < 160.48 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 688.721 MB/s [ 656.8 IOPS] < 12142.96 us>
SEQ 1MiB (Q= 1, T= 1): 629.790 MB/s [ 600.6 IOPS] < 1663.09 us>
RND 4KiB (Q= 32, T= 1): 528.988 MB/s [ 129147.5 IOPS] < 247.25 us>
RND 4KiB (Q= 1, T= 1): 83.385 MB/s [ 20357.7 IOPS] < 48.92 us>
Part 3:
Proxmox 8 kernel 6.8.12-2-pve on zfs file system
[Read]
SEQ 1MiB (Q= 8, T= 1): 7758.558 MB/s [ 7399.1 IOPS] < 1080.24 us>
SEQ 1MiB (Q= 1, T= 1): 1451.402 MB/s [ 1384.2 IOPS] < 721.85 us>
RND 4KiB (Q= 32, T= 1): 482.175 MB/s [ 117718.5 IOPS] < 271.51 us>
RND 4KiB (Q= 1, T= 1): 19.826 MB/s [ 4840.3 IOPS] < 206.35 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 2022.025 MB/s [ 1928.4 IOPS] < 3548.71 us>
SEQ 1MiB (Q= 1, T= 1): 1772.300 MB/s [ 1690.2 IOPS] < 590.33 us>
RND 4KiB (Q= 32, T= 1): 435.747 MB/s [ 106383.5 IOPS] < 294.67 us>
RND 4KiB (Q= 1, T= 1): 19.425 MB/s [ 4742.4 IOPS] < 210.28 us>
Proxmox 8 kernel 6.8.12-2-pve on ext4 file system
[Read]
SEQ 1MiB (Q= 8, T= 1): 1466.780 MB/s [ 1398.8 IOPS] < 5714.22 us>
SEQ 1MiB (Q= 1, T= 1): 1212.238 MB/s [ 1156.1 IOPS] < 863.60 us>
RND 4KiB (Q= 32, T= 1): 492.676 MB/s [ 120282.2 IOPS] < 265.72 us>
RND 4KiB (Q= 1, T= 1): 26.766 MB/s [ 6534.7 IOPS] < 152.81 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 683.913 MB/s [ 652.2 IOPS] < 12233.77 us>
SEQ 1MiB (Q= 1, T= 1): 634.573 MB/s [ 605.2 IOPS] < 1651.01 us>
RND 4KiB (Q= 32, T= 1): 544.376 MB/s [ 132904.3 IOPS] < 240.43 us>
RND 4KiB (Q= 1, T= 1): 75.249 MB/s [ 18371.3 IOPS] < 54.22 us>

As the tests show, performance degrades with each newer version. If Proxmox 7.4 is rolled back to kernel 5.13.85-1 or earlier, the issue disappears; the problem appeared with the transition from kernel 5.15.85-1 to kernel 5.15.102-1. The issue is also more noticeable on newer Intel processors: these tests were run on a dual-socket system with Intel(R) Xeon(R) Gold 6444Y CPUs. Our tests confirmed that there is no CPU or RAM performance degradation.

The virtual machines and disk subsystems used for testing were identical across all versions. Proxmox 7.2 was installed first, everything was configured, and the virtual machines were created; nothing was changed afterwards except updating the kernels and Proxmox versions. Has anyone encountered something similar? How can this issue be resolved?
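For anyone who wants to reproduce the comparison without HQBird or CrystalDiskMark, a guest-side fio run that mimics the RND 4KiB cases (the ones that regressed most) is a quick way to A/B-test kernels. This is only a sketch: the file path, size, and runtimes are placeholders, not the exact workload used above.

```shell
# Random 4 KiB reads at queue depth 1 -- the case with the biggest regression.
# /tmp/fiotest and the 4G size are placeholders; adjust for your guest.
fio --name=rnd4k-q1 --filename=/tmp/fiotest --size=4G \
    --rw=randread --bs=4k --iodepth=1 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting

# Same test at queue depth 32 for comparison with the Q=32 rows above.
fio --name=rnd4k-q32 --filename=/tmp/fiotest --size=4G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting
```

Running the same two commands in the same guest under each host kernel makes the regression directly comparable in IOPS and latency.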
 
Well, you can pin the old kernel and update the rest to keep your performance where you expect it to be... even our 8.2.2 PVE runs fine with a very old kernel:
Linux 5.13.19-6-pve (Tue, 29 Mar 2022 15:59:50 +0200)

People may cry now that this is unsafe or whatever... but while waiting for a real solution you can keep it on par and have the other components safe and up to date...
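On current PVE versions, pinning can be done with proxmox-boot-tool (the kernel version below is just an example; pick one shown by `kernel list`):

```shell
# List the kernels installed on this host.
proxmox-boot-tool kernel list

# Pin a specific older kernel so it stays the boot default across
# upgrades. 5.15.85-1-pve here is an example version, not a recommendation.
proxmox-boot-tool kernel pin 5.15.85-1-pve

# Revert to booting the newest installed kernel later.
proxmox-boot-tool kernel unpin
```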
 
We at Thomas-Krenn.AG have tested the different cache options, and Write Back generally performs very well.

See the PDF (only in German, sorry; it is too big to attach here):

https://files.thomas-krenn.com/index.php/s/NGrcnkPG8j2FxC9
The link is only valid until 25.10.2024.
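The cache mode is set per virtual disk; with the qm CLI it looks roughly like this (VM ID 100 and the storage/volume names are placeholders and must match your existing disk definition):

```shell
# Show the current definition of the disk (VM ID 100 is a placeholder).
qm config 100 | grep scsi0

# Re-set the disk with cache=writeback. The storage:volume part must
# match the existing definition printed by the command above.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback
```

Note that writeback trades safety for speed: unflushed data can be lost on a host crash, which matters for databases.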
We don't use caching; under the hood this is a database, and with a cache there is a risk of data loss. But even enabling it for testing did not affect performance at all. In any case there is a clear performance degradation, and my goal is not to tune the current low performance upwards but to restore it to its previous level.
 
