Java VM Performance Issues on Proxmox vs ESXi

udhay

New Member
Jan 17, 2025
Hello,
I recently migrated from VMware ESXi to Proxmox for our hosting environment and have observed significant performance issues with a Java-based VM.

To troubleshoot, I created a new VM from scratch on both ESXi and Proxmox to compare results. Unfortunately, the VM on Proxmox lags far behind the one on ESXi.

Observed issue:
  • The VM runs Tomcat, which executes ~1000 queries at startup.
  • On ESXi, startup takes about 3 minutes.
  • On Proxmox, the same workload takes about 11 minutes.
Hardware (identical in both cases):
  • Cisco UCS B200 M5
  • CPU: 48 x Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz (2 Sockets)
What I tried so far:
  • Setting kernel parameter: mitigations=off gave a slight improvement but still far behind ESXi.
  • VM CPU types: tested with host, Skylake-Server, etc.
  • Storage settings: tested with iothread=on/off, cache=none, and write-through.
  • Added a VirtIO RNG device backed by /dev/urandom.
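For reference, the settings above can be applied via the `qm` CLI. This is a sketch only: the VMID (100), storage name (`mystorage`), and disk name are hypothetical placeholders, not my actual configuration.

```shell
# Hypothetical VMID 100 and storage/disk names; adjust to your environment.
qm set 100 --cpu host                      # pass through host CPU features
qm set 100 --scsihw virtio-scsi-single     # VirtIO SCSI single controller
qm set 100 --scsi0 mystorage:100/vm-100-disk-0.qcow2,iothread=1,cache=none
qm set 100 --rng0 source=/dev/urandom      # VirtIO RNG to avoid entropy stalls
```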
Question:
  • Has anyone experienced similar performance gaps between ESXi and Proxmox for Java workloads?
  • Are there recommended Proxmox configurations (CPU type, storage cache, disk settings, etc.) for improving Java/Tomcat performance on UCS hardware?

Any advice or shared experiences would be very helpful.


Thanks in advance!
 
People have been disappointed by the performance difference before, as past threads about ESXi migrations will show. ZFS RAIDz1 (on consumer drives) is not at all like hardware RAID5 (with BBU), and VMs that use non-VirtIO disks are also known to be slow. Things often improve when following best practices and installing the VirtIO drivers. Lots of threads on this forum about that.
 
Thanks for the response. I understand the issues you mentioned with ZFS RAIDz1 versus hardware RAID5 and also the importance of using VirtIO drivers.

In my case, though, the situation is a bit different:
  • I did not import VMs from ESXi - all VMs were created fresh on Proxmox.
  • The storage is NFS-mounted, not ZFS.
  • All VMs are configured with VirtIO drivers, and disks are set up as VirtIO SCSI (single controller).
Given this setup, I'm still seeing a noticeable performance gap compared to ESXi. Could the NFS backend be the main limiting factor here? I used the same NFS storage for ESXi as well, so should I instead be looking at further Proxmox tuning?
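One way to narrow this down would be to benchmark the disk from inside the VM on both hypervisors and compare the numbers directly. A sketch of such a test using fio (the test file path and sizes are arbitrary examples, assuming fio is installed in the guest):

```shell
# Random 4k read test against the VM's disk; run inside the guest on
# both the Proxmox and the ESXi VM and compare IOPS/latency.
fio --name=randread --filename=/tmp/fio-testfile \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 \
    --size=1G --runtime=60 --time_based

# Clean up the test file afterwards.
rm -f /tmp/fio-testfile
```

If the fio results differ as dramatically as the Tomcat startup times, the storage path (NFS via QEMU) is the likely culprit; if they are similar, the bottleneck is probably elsewhere (CPU, memory, or the Java workload itself).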
 
I just mentioned those things as they are common performance issues. I don't know much about NFS performance for virtual disks, sorry. QEMU does use a different file-based storage format than ESXi (and I would expect local block-based storage to be faster).
 