Very slow VM on Dell R620

Ricky85

New Member
Apr 4, 2025
Hi all!

My VMs are very slow and, after a lot of attempts with no solution, I have to ask you.

I'm testing with only one VM, running Linux Mint 21.6.

These are my VM settings:

Code:
Memory: 8.00GiB/16.00GiB
Processors: 12 (2 sockets, 6 cores) (host, flags=+spec-ctrl;+pdpe1gb) (numa=1)
Bios: SeaBIOS
Display: Default
Machine: i440fx
SCSI Controller: VirtIO SCSI Single
Hard Disk (virtio0): local-zfs:vm-111-disk-0, cache=writeback, discard=on, iothread=1, size=50GB
Network Device (net0): virtio=BC:xx:xx:xx:xx:xx, bridge=vmbr0, firewall=1

OS Type: Linux 6.x - 2.6 Kernel
Use tablet for pointer: Yes
ACPI support: Yes
KVM hardware virtualization: Yes
Freeze CPU at startup: no
QEMU Guest Agent: Enabled
Protection: No
Spice Enhancements: none
VM State storage: Automatic
AMD SEV: disabled

My server is a DELL PowerEdge R620 with:
Code:
CPU: 2x E5-2680
RAM: 2x 16GB ECC Dual Rank 1600 MHz
Controller: PERC H310 mini

8x original DELL 09W5WV 1TB 6Gb/s 7.2K 64MB

I'm not using hardware RAID (the H310 is in HBA mode); I'm using ZFS RAIDZ1 (RAID 5-like).

CPU virtualization is enabled in the BIOS.
The QEMU Guest Agent is installed and active in the VM.

Any ideas?

Thanks in advance!
 
My VMs are very slow
You need to find a metric which quantifies your experience. "Slow" means different things for different people. There are multiple benchmark tools out there...

Specifically for storage benchmarks look for "fio". This can be used on the node and also inside a guest.
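For example, a random-read test like the following gives comparable numbers on the node and inside the guest (the test file path and size are just placeholders; on a ZFS dataset you may need to drop --direct=1, since older ZFS versions do not support O_DIRECT):

```shell
# 4k random reads for 30 seconds against a 1 GiB test file,
# bypassing the page cache (--direct=1)
fio --name=randread-test --filename=/tmp/fio-testfile \
    --rw=randread --bs=4k --size=1G \
    --ioengine=libaio --direct=1 \
    --runtime=30 --time_based

# Remove the test file afterwards
rm /tmp/fio-testfile
```

Watch the reported IOPS and latency, not just the bandwidth.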

I'm not using hardware RAID (the H310 is in HBA mode); I'm using ZFS RAIDZ1 (RAID 5-like).
HBA mode is fine.

RaidZ1 is bad. Very bad, actually for multiple reasons. It gives you the IOPS of a single disk. And nowadays that means "it is slow" for nearly each and every use case.

Rebuild your system and use ZFS mirrors only = four mirrors with two drives each. These four vdevs are (automatically) striped, giving you four times the IOPS (and four times the read bandwidth, while write bandwidth stays relatively low).
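With eight disks, such a striped-mirror pool could be created like this (pool name and device IDs are placeholders; use the real /dev/disk/by-id paths of your drives):

```shell
# Four mirrored vdevs; ZFS stripes across them automatically (RAID 10 style)
zpool create tank \
    mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
    mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
    mirror /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
    mirror /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8
```

Note that this destroys the data on those disks, so back up first. (The Proxmox installer can also build this layout for you if you select "RAID 10" for ZFS.)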

If you have any chance, add a "Special Device" using another two small (~32 GB is enough), high-quality SSD/NVMe drives (with PLP), mirrored of course. Try hard to do this. (Do not try to utilize an SLOG or a cache.) This will lift the felt performance by another factor of three to ten for the very most (but not all) use cases.
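Adding such a special vdev to an existing pool could look like this (pool name and device paths are placeholders; the special vdev holds metadata, and losing it means losing the pool, hence the mirror):

```shell
# Mirrored special vdev for pool metadata
zpool add tank special mirror \
    /dev/disk/by-id/nvme-SSD1 /dev/disk/by-id/nvme-SSD2

# Optional: also store small blocks (here <= 64k) on the special vdev
zfs set special_small_blocks=64K tank
```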

Just my two or three €¢...
 
I rebuilt my system with a ZFS mirror (RAID 10) but nothing changed. With fio I got slightly better disk speeds, but the VMs are still very slow. To give you a metric: just installing Linux Mint takes about 30 minutes.

Maybe the problem is elsewhere?
 
1/ Check in the BIOS that the "High Performance" power profile is set.

2/ Don't expect too much from these old CPUs; disabling mitigations can help:
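On a Debian-based Proxmox install that is typically done via the kernel command line (a sketch only; be aware of the security trade-off before disabling mitigations):

```shell
# Edit /etc/default/grub and add mitigations=off, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

update-grub    # or: proxmox-boot-tool refresh (on ZFS/UEFI installs)
reboot
```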
 
Done with the High Performance power profile.
Mitigations disabled.
NUMA disabled.

Nothing changed.

I don't think it's a CPU problem; they are very old, but CPU usage never goes above 30% with 4 VMs active.
 
I would suggest what UdoB posted. It sounds like you re-did the ZFS pool just as a plain mirror, but you should try breaking it into four striped mirror vdevs, which will give great performance.

Also what type of hard drives are you using? Consumer or enterprise?
 
Your drives are made to sit behind a RAID controller with cache acceleration, and their embedded caches are disabled.
Turn on Write Cache Enable (WCE) on each HDD:
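For SAS drives this can be done with sdparm (the device name is a placeholder; for ATA drives, hdparm -W1 is the equivalent):

```shell
# Check the current WCE setting
sdparm --get=WCE /dev/sda

# Enable the write cache; --save persists it across power cycles
sdparm --set=WCE --save /dev/sda
```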
 
Yes, I created the ZFS mirror (RAID 10) pool during the Proxmox installation. What does "vdevs striped" mean? I think that's beyond my knowledge.
They are enterprise hard drives (DELL 09W5WV).

The picture shows my current ZFS situation.


Hi @Ricky85, it could be that a firmware update is required, assuming you are already running the latest Proxmox VE version. You should also update the motherboard, HBA, and all other firmware to the latest versions. Have you checked whether any updates are available? The hardware is already 12 years old.
All firmware is updated to the latest version.
 

Attachments

  • Senza titolo.png (137.4 KB)
What does "vdevs striped" mean?
In your picture you can see your four mirrors. Each of these is one virtual device = "vdev". They are all equally connected to the higher-level "rpool". The ZFS term for this construct is "striped" :-)

You can easily compare it with a RAID 10.
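For reference, a striped-mirror pool like yours shows up in `zpool status` roughly like this (names abbreviated):

```
rpool
  mirror-0
    disk-1
    disk-2
  mirror-1
    disk-3
    disk-4
  mirror-2
    disk-5
    disk-6
  mirror-3
    disk-7
    disk-8
```

Each `mirror-N` is one vdev; ZFS spreads (stripes) writes across all of them.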