Ubuntu 20 VM much slower than in ESXi

GuyInCorner

Jun 21, 2024
Given that Broadcom is essentially abandoning small businesses, I decided to explore Proxmox for our growing data center.

As a start, I decided to do a simple test using a Mac Mini that I already have ESXi installed on, loading Proxmox onto another drive so the machine can boot either hypervisor.
Both ESXi and Proxmox are running off their own Samsung 870 EVO SATA SSD.

I imported a VM from ESXi into Proxmox and ran the same realistic load test on an Ubuntu 20.04 server.
This test is rather database intensive. The results are as follows:
ESXi: 16 seconds
Proxmox: 112 seconds


Configuration: [screenshot attached: ProxmoxTestVM.png]
I've tried various caching and controller options with no measurable change.

The ESXi configuration has the same amount of resources assigned to it.

Any thoughts on why this would be so slow? As it stands, this isn't usable.
 
Why are you not using VirtIO SCSI Single instead of LSI 53C895A? Why are you not using VirtIO instead of e1000?
EDIT: Also enable IO Thread for the virtual disk. Those settings are the Proxmox defaults (and should work for a Linux VM out of the box) and are usually faster.

EDIT2: They are rhetorical questions.
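
For anyone landing here later, switching those settings from the CLI looks roughly like this. This is only a sketch: VM ID 100, the vmbr0 bridge, and the local-lvm volume name are placeholders, so check qm config <vmid> for the real values first.

Bash:
# Use the VirtIO SCSI single controller:
qm set 100 --scsihw virtio-scsi-single
# Re-attach the disk with IO Thread enabled (keep the existing volume name
# shown by "qm config 100"):
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
# Switch the NIC model from e1000 to VirtIO; add the existing macaddr=... from
# "qm config 100" if the guest's network setup depends on the MAC:
qm set 100 --net0 virtio,bridge=vmbr0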
 
Besides what @leesteken said: did you set up ZFS on the EVO drives? If yes, you'll run into performance and durability problems very quickly.
 
If you're doing cpu intensive stuff then it can be worthwhile using cpu affinity as well, to pin the VM cores to host cores.
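
As a rough sketch (this needs PVE 7.3 or newer; VM ID 100 and the core list are just placeholders):

Bash:
# Pin VM 100's vCPUs to host cores 0-3:
qm set 100 --affinity 0-3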
 
@leesteken
Those are the defaults after the import.

I did try VirtIO SCSI and IO Thread, with no measurable performance change.
I didn't try a network adapter type other than E1000, as the test doesn't use much networking.
Edit: Just tried it with VirtIO instead of e1000 - no difference. :(

@cwt
I am not using ZFS but rather ext4.

@justinclift
There is only one i7 CPU in this test hardware, so CPU affinity won't make a difference. The test is single-threaded.
 
ESXi 6.7 RTM (mid 2018) doesn't have the CPU mitigations enabled.
You need to set mitigations=off as a kernel option on the PVE host to get similar CPU performance.
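
On a proxmox-boot-tool managed install that usually means something like the sketch below; on a plain GRUB install you instead edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run update-grub (see further down the thread):

Bash:
# Append " mitigations=off" to the single line in /etc/kernel/cmdline, then
# sync the change to the ESP(s):
proxmox-boot-tool refresh
# Reboot the host for the new command line to take effect.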
 
So it looks like mitigations can only be set system-wide, not on a per-VM basis like in some other hypervisors, is that correct?
(slightly concerning)

I'm trying to test this but can't get the mitigations=off setting to take. The command:
Code:
proxmox-boot-tool refresh
produces this output:
Code:
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..

No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

So, nothing happens and my changes aren't applied.

Any thoughts how to turn off mitigations for this VM?
 
This installation is using systemd for boot (not using ZFS). I've edited the '/etc/kernel/cmdline' file, but since `proxmox-boot-tool refresh` is erroring out, the change isn't taking.

Bummer. Not looking hopeful for Proxmox here. I'll keep kicking this around a bit, but at this point the performance is a show-stopper and I'll likely have to stick with ESXi.
:(
 
but since `proxmox-boot-tool refresh` is erroring out, the change isn't taking.
Hmmm. What happens if you run?

Bash:
# proxmox-boot-tool clean

In theory that's supposed to update (and generate if needed) the /etc/kernel/proxmox-boot-uuids file.

If that doesn't work, would you be ok to run lsblk -o tran,name,type,size,vendor,model,label,rota,log-sec,phy-sec and paste the output here (in a monospaced font, so the columns line up)? That'll help people understand your storage layout, which will help spot wtf is going wrong. :)
 
This installation is using systemd for boot (not using ZFS).
This is not standard; Proxmox always uses the GRUB bootloader except for ZFS on UEFI with Secure Boot disabled.

I can't find it in the docs, but it seems ESXi doesn't have all mitigations enabled, even on recent versions.
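
If in doubt, this is a quick way to see what the host actually booted with (both commands are standard on a PVE host):

Bash:
# Lists the ESPs managed by proxmox-boot-tool and whether they use grub or
# systemd-boot; also reports a legacy BIOS boot:
proxmox-boot-tool status
# Did the host boot via UEFI at all?
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"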
 
Everything I looked at pointed to this installation using systemd-boot rather than GRUB, but when I followed the GRUB instructions from here:
https://pve.proxmox.com/wiki/Host_Bootloader#sysboot_edit_kernel_cmdline

lscpu now indicates mitigations are off.
Code:
Vulnerabilities:         
  Gather data sampling:   Not affected
  Itlb multihit:          KVM: Vulnerable
  L1tf:                   Mitigation; PTE Inversion; VMX vulnerable
  Mds:                    Vulnerable; SMT vulnerable
  Meltdown:               Vulnerable
  Mmio stale data:        Unknown: No mitigations
  Reg file data sampling: Not affected
  Retbleed:               Not affected
  Spec rstack overflow:   Not affected
  Spec store bypass:      Vulnerable
  Spectre v1:             Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
  Spectre v2:             Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
  Srbds:                  Vulnerable: No microcode
  Tsx async abort:        Not affected

Performance has improved. The test now takes 74 seconds, but that's still 4.6 times longer than running on ESXi 7.0.3.
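
For anyone following along, the GRUB route from that wiki page boils down to roughly this (the existing contents of GRUB_CMDLINE_LINUX_DEFAULT may differ on your host):

Bash:
# In /etc/default/grub, add mitigations=off to the default options, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
# Then regenerate the GRUB config and reboot the host:
update-grub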
 
Hmmm. What happens if you run proxmox-boot-tool clean? ... would you be ok to run lsblk ... and paste the output here?
proxmox-boot-tool clean
produces no output (probably good)

lsblk -o tran,name,type,size,vendor,model,label,rota,log-sec,phy-sec
produces:

Bash:
root@pve1:~# lsblk -o tran,name,type,size,vendor,model,label,rota,log-sec,phy-sec
TRAN   NAME                                        TYPE   SIZE VENDOR   MODEL                     LABEL ROTA LOG-SEC PHY-SEC
sata   sda                                         disk 465.8G ATA      Samsung SSD 870 EVO 500GB          0     512     512
       ├─sda1                                      part  1007K                                             0     512     512
       ├─sda2                                      part     1G                                             0     512     512
       └─sda3                                      part 464.8G                                             0     512     512
         ├─pve-swap                                lvm      8G                                             0     512     512
         ├─pve-root                                lvm     96G                                             0     512     512
         ├─pve-data_tmeta                          lvm    3.4G                                             0     512     512
         │ └─pve-data-tpool                        lvm  337.9G                                             0     512     512
         │   ├─pve-data                            lvm  337.9G                                             0     512     512
         │   ├─pve-vm--100--disk--0                lvm    150G                                             0     512     512
         │   ├─pve-vm--101--disk--0                lvm    150G                                             0     512     512
         │   └─pve-vm--100--state--kernal--changes lvm    8.5G                                             0     512     512
         └─pve-data_tdata                          lvm  337.9G                                             0     512     512
           └─pve-data-tpool                        lvm  337.9G                                             0     512     512
             ├─pve-data                            lvm  337.9G                                             0     512     512
             ├─pve-vm--100--disk--0                lvm    150G                                             0     512     512
             ├─pve-vm--101--disk--0                lvm    150G                                             0     512     512
             └─pve-vm--100--state--kernal--changes lvm    8.5G                                             0     512     512
sata   sdb                                         disk 931.5G ATA      Samsung SSD 870 EVO 1TB            0     512     512
       ├─sdb1                                      part   100M                                             0     512     512
       ├─sdb5                                      part     4G                                             0     512     512
       ├─sdb6                                      part     4G                                             0     512     512
       ├─sdb7                                      part 119.9G                                             0     512     512
       └─sdb8                                      part 803.5G                                             0     512     512
 
Can you run a small 7-Zip single-thread benchmark within your Ubuntu VM, on ESXi and then on PVE?
7zz b -mmt1
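
For what it's worth, 7zz may not be packaged on Ubuntu 20.04, in which case the static build from 7-zip.org works; p7zip-full's 7z b -mmt1 gives a comparable single-thread figure if that's easier:

Bash:
# Single-threaded 7-Zip benchmark inside the guest; compare the MIPS numbers
# between the ESXi and PVE runs of the same VM:
7zz b -mmt1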
 
Cool. The disk layout for sda looks like it has the standard 1007K first partition, and 1G second partition, with everything else coming after it.

Here's the same command on my desktop system (also running Proxmox, but using ZFS for the "everything else" bit):

Bash:
# lsblk -o tran,name,type,size,vendor,model,label,rota,log-sec,phy-sec
TRAN   NAME        TYPE   SIZE VENDOR   MODEL        LABEL ROTA LOG-SEC PHY-SEC
sas    sda         disk 372.6G SanDisk  LT0400MO              0     512     512
       ├─sda1      part  1007K                                0     512     512
       ├─sda2      part     1G                                0     512     512
       └─sda3      part 371.6G                       rpool    0     512     512
sas    sdb         disk 372.6G SanDisk  LT0400MO              0     512     512
       ├─sdb1      part  1007K                                0     512     512
       ├─sdb2      part     1G                                0     512     512
       └─sdb3      part 371.6G                       rpool    0     512     512
nvme   nvme0n1     disk 931.5G          CT1000P5SSD8          0     512     512
nvme   ├─nvme0n1p1 part 931.5G                       pool1    0     512     512
nvme   └─nvme0n1p9 part     8M                                0     512     512
nvme   nvme1n1     disk 931.5G          CT1000P5SSD8          0     512     512
nvme   ├─nvme1n1p1 part 931.5G                       pool1    0     512     512
nvme   └─nvme1n1p9 part     8M                                0     512     512

When I run proxmox-boot-tool status, it lists the second partition of both SAS drives as being available for booting:

Bash:
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
B532-9A3D is configured with: grub (versions: 6.5.11-8-pve, 6.5.13-5-pve)
B571-DD14 is configured with: grub (versions: 6.5.11-8-pve, 6.5.13-5-pve)

(That's because both drives were set up in a mirror, so if either fails the system won't have boot problems.)

The B532-9A3D and B571-DD14 ids match the partition names under /dev/disk/by-uuid/:

Bash:
# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root  10 Jun 23 22:49 B532-9A3D -> ../../sda2
lrwxrwxrwx 1 root root  10 Jun 23 22:49 B571-DD14 -> ../../sdb2

Hmmm. I kind of wonder what'd happen if you reinitialised your boot partition (sda2) using the instructions here?

https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_change_failed_dev
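
If you go that route, the relevant steps reduce to something like this sketch. Double-check the device first: format erases the partition, it may ask for --force if the partition is already formatted, and /dev/sda2 here is taken from your lsblk output above.

Bash:
# Re-create and re-initialise the ESP on the Proxmox boot disk, then sync kernels:
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool refresh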

 
