Hypervisor Showdown: Performance of Leading Virtualization Solutions

I believe that the default virtual CPU does not accelerate SSL. No informed user would run a VM with SSL workloads that way, but they wanted "to keep everything default".

EDIT: And the default storage is probably ZFS, which is (very) slow unless you make sure you have the right drives and match the block sizes with the VM.
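For anyone who wants to check the block-size point on their own setup, here is a minimal sketch (the zvol name and VM ID 100 are hypothetical examples; run it on the PVE host):

#!/usr/bin/env python3
"""Minimal sketch: read the volblocksize of the zvol backing a VM disk."""
import subprocess

dataset = "rpool/data/vm-100-disk-0"  # hypothetical zvol name; adjust to your pool
out = subprocess.run(
    ["zfs", "get", "-H", "-o", "value", "volblocksize", dataset],
    capture_output=True, text=True, check=True,
)
# If this does not roughly match the guest filesystem's block size,
# expect read-modify-write amplification and poor benchmark numbers.
print(f"{dataset} volblocksize = {out.stdout.strip()}")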
 
I concur ... most probably the AES-NI extension is missing, which IIRC is only enabled by default with the newest PVE 8 default CPU type (x86-64-v2-AES).
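To confirm from inside a guest whether the virtual CPU exposes AES-NI at all, a minimal sketch (plain Linux, nothing Proxmox-specific):

#!/usr/bin/env python3
"""Check whether the (virtual) CPU advertises the AES-NI instruction set."""

with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            print("AES-NI exposed:", "aes" in flags)
            break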
 
They have a Discord channel at discord.gg/storagereview, and I opened a thread there ( https://discord.com/channels/1098346557738328207/1251183684334260336 ).

The first answer was:

"I am aware the hypervisors can be optimized to perform better but the comparison was supposed to compare them straight out of the box with no knobs turned as someone completely unfamiliar with a new platform would"

That is a valid point, but it's an unfortunate testing scenario.

The response/discussion is positive, though.

The question is whether Proxmox VM defaults are suboptimal and should perhaps be changed to provide a better out-of-the-box experience/performance.

But that needs to be well thought out, because NOT having host CPU passthrough as the default has a reason, and I think that reason is VM live-migration capability/compatibility.
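To illustrate the trade-off, a sketch of the two choices via the qm CLI (VM ID 100 is a hypothetical example; the VM needs a restart to pick up the change):

#!/usr/bin/env python3
"""Sketch: switch a VM's CPU type on the PVE host via the qm CLI."""
import subprocess

VMID = "100"  # hypothetical VM ID

# "host" passes all host CPU features through: fastest, but live migration
# between nodes with different CPUs may break.
subprocess.run(["qm", "set", VMID, "--cpu", "host"], check=True)

# Migration-friendly alternative that still includes AES-NI (the PVE 8 default):
# subprocess.run(["qm", "set", VMID, "--cpu", "x86-64-v2-AES"], check=True)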

At least as far as I can tell, optimizing defaults for better performance is not high on the Proxmox priority list (e.g. https://bugzilla.proxmox.com/show_bug.cgi?id=4805 ).
 
This is where becoming a "virtualization expert" can pay dividends. Being able to optimize a VM environment for your $DAYJOB or a client is well worth the learning curve.
 
So they compared the "default" settings, which are generally geared towards maximum compatibility, not towards what a sane sysadmin would use. Unfortunate.
 
Update:

https://www.storagereview.com/revie...rformance-of-leading-virtualization-solutions

"
Following the publication of our initial findings, concerns were raised regarding our baseline Proxmox testing, specifically the absence of host CPU passthrough features. While our original methodology aimed to compare stock performance without additional tuning, we acknowledge the significance of these features for Proxmox. To address this, we will be conducting additional tests with Proxmox, enabling these features to provide a more comprehensive comparison. Additionally, we will be incorporating smaller VM tests for each hypervisor to evaluate performance without cross-NUMA hits. However, to maintain consistency in our comparison to bare metal, we will continue to utilize our large-VM comparison for that portion of the test. Stay tuned for the updated results in our forthcoming revision.


In this we do want to be brutally clear that we wanted to “compare stock performance without additional tuning,” as would likely be the case for most people migrating many VMs away from an all-VMware or Hyper-V environment.


We are in the process of collecting a new full round of tests right now, as we couldn't run a select group only. The platform leveraged in this project originally saw significant changes after moving to a direct liquid-cooling build, so we are running all of the tests again with the new hardware.
"
 
It's impossible that openssl is so slow if they used the default x86-64-v2-AES model.

Or it's a benchmark of PVE 7 with the old kvm64 model, but I don't see why they wouldn't test PVE 8 in that case... (as PVE 7 is EOL next month).
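One classic gotcha worth ruling out here: openssl speed without -evp benchmarks OpenSSL's plain software implementation and never touches AES-NI, while the -evp path uses it when the CPU flag is exposed. A quick guest-side sanity check (a sketch):

#!/usr/bin/env python3
"""Sketch: compare OpenSSL's software AES path against the EVP path."""
import subprocess

for cmd in (
    ["openssl", "speed", "aes-256-cbc"],          # software implementation only
    ["openssl", "speed", "-evp", "aes-256-cbc"],  # uses AES-NI when available
):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)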
 

"They" is a single student [1] who has no other publications. There's no methodology to replicate, even if you get the specific [2] test, you don't know which one he ran and how many times for the average.

Statements like "158.50% and 106.18% of bare metal performance" ... "suggesting these hypervisors can utilize these accelerators without manual configuration and tuning" ... wait, what? So they did not run the same benchmark on the actual "bare metal" themselves?

Anyone can publish an article like that. If I benchmarked something at 5% of expected performance, I would probably reach out for comment before publishing, but well, he did not, so that's all you need to know. ;)

[1] https://in.linkedin.com/in/divyansh-jain-b98490188
[2] https://openbenchmarking.org/test/pts/openssl
 
Yeah, but storagereview.com has quite some outreach...
and things spread on the web because people copy & paste...

So it's better to get the source of the information revised instead of only stating it's wrong from another place.
 

This thread just gave them a backlink for the SEO too, I suppose. It's a very low-quality test to begin with, objective unknown. I'd like to believe that people who choose their hypervisor based on an article that compares KVM with KVM and finds an 80% discrepancy ... deserve to get what they choose.
 
