Proxmox IO performance - my tests

bejbi (Oct 16, 2019)
I have a very strange feeling about my Proxmox disk performance.

I did some tests.

I have an old Dell D620 server (dual Xeon, 8 cores / 32 threads in total, 96 GB RAM, H710 RAID controller, 2x 250 GB Samsung Evo 860 SSDs in hardware RAID-1).

I installed a default CentOS 7.7 with ext4 on this server.
Then I ran a test using sysbench (https://github.com/akopytov/sysbench).

My command was:
sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
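
(Note: newer sysbench releases require the file set to be created with a prepare step first, and rename --max-time to --time; assuming sysbench 1.0+ syntax, the equivalent full sequence would be roughly:)

# create the 2 GB file set first (required before 'run')
sysbench fileio --file-total-size=2G prepare

# random read/write for 300 seconds, then report
sysbench fileio --file-total-size=2G --file-test-mode=rndrw --time=300 run

# remove the test files afterwards
sysbench fileio --file-total-size=2G cleanup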

After 5 minutes I got these results:

Threads fairness:
events (avg/stddev): 13,000,000 (around 13 million)



Then I reinstalled the system with Proxmox 5 and ran the same test:

The results are ALWAYS around 6,000,000 (6 million).

I tried both ext4 and ZFS (the results were the same).


Why does Proxmox have 2x WORSE I/O performance?
My tests on Proxmox were not run in a KVM guest but directly on the bare Proxmox system. Whether Proxmox or CentOS, I think the results should be almost the same?


How can this performance difference be explained?
 
Are you comparing bare metal with a virtual environment?
Or are you comparing CentOS with libvirt against Proxmox VE?

And what is the goal?
Do you really need the performance?
 

The goal was this: when I ran the same sysbench test in a KVM guest on Proxmox VE, the performance was around 3 million events.

So:
CentOS, pure system: 12 million
Proxmox VE, pure system: 6 million
KVM on Proxmox VE (CentOS, Debian): 3 million

So I tried to understand where the performance degradation starts. When I compare the pure Linux systems (not a virtual guest, but the base system), I can already see the difference between default CentOS and Proxmox.
Proxmox has half the IO performance.

I am trying to understand why. Proxmox is also Linux, based on Debian. Why does the Proxmox base system not run at the maximum possible IO performance?
 
I can't tell you what the exact difference is, because:
1.) I don't know the CentOS configuration in depth.
2.) I don't know the CentOS Linux kernel and which patches are used.
3.) I don't know which Intel CPU you use.

But here is a list of potentially problematic performance settings (check commands below).
1.) Does CentOS update your CPU microcode automatically?
With a new kernel that uses CPU HW vulnerability mitigations, an old microcode is a performance killer.
2.) If the kernel has CPU HW vulnerability mitigation patches, are they enabled?
These patches can reduce IO performance by up to 30%.
3.) As Ditmar says, check the mount options, like barriers.
Without write barriers you get faster storage, but consistency is not 100% guaranteed.
4.) A different version of sysbench?
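
A quick way to compare points 1-4 on both installations (standard Linux commands, nothing Proxmox-specific; the sysfs path assumes a kernel with the mitigation reporting included, which both the CentOS 7 and Proxmox kernels have):

# active CPU vulnerability mitigations (points 1 and 2)
grep . /sys/devices/system/cpu/vulnerabilities/*

# microcode revision the CPU is running (point 1)
grep microcode /proc/cpuinfo | sort -u

# mount options of the filesystem under test (point 3)
findmnt -no OPTIONS /

# sysbench version (point 4)
sysbench --version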
 
CentOS was out of the box, a default installation.
The server has an old E5 processor (vulnerable), but I think CentOS also has patches against the HW vulnerabilities.

Mount options: I don't think that CentOS mounts the filesystem in a risky way :)

But OK, I will now try to set up a pure Debian 9. Will that be more accurate to compare against Proxmox 5?
 

If it's a kernel thing, then no:
- CentOS uses the RHEL kernel (3.10)
- Debian 10 uses the Debian kernel (4.19)
- Proxmox VE uses the Ubuntu LTS kernel (5.0)
 

A higher kernel line should be better, I think? Because it has e.g. better SSD support?

I did the test on Proxmox 5 (it has a 4.x kernel). But a 4.x kernel should still be better than 3.x.
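
For the record, confirming which kernel each system actually runs (and whether mitigations were switched off on its command line) is straightforward:

# kernel version of the running system
uname -r

# kernel boot parameters; look for anything like 'mitigations=off' or 'nopti'
cat /proc/cmdline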