SSD performance way different in PVE shell vs Debian VM

stiphout
Apr 22, 2022
Hi!

I read this page: https://pve.proxmox.com/wiki/Benchmarking_Storage

And then I logged into my PVE host's shell and ran the 4k sequential read test from that page:

Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/sda

I got a pretty decent result:

Code:
read: IOPS=16.8k, BW=65.8MiB/s (68.0MB/s)(3948MiB/60001msec)

Then I logged into a Debian 11 VM and ran the same command against the same physical disk, except that it now goes through the LVM-thin storage provided by PVE. I got this result:

Code:
read: IOPS=5192, BW=20.3MiB/s (21.3MB/s)(1217MiB/60001msec)

Now, I EXPECT some performance loss from the overhead of going through the LVM-thin storage, but a 69% loss (1 - 5192/16800 ≈ 0.69) seems excessive.

FYI, here's my Debian 11 VM hardware page:

(attachment hw.png: screenshot of the VM's hardware settings)

Anyone have any suggestions?

Thanks!!

-Dutch
 
I just did some tests after reading your post. My SSD (an Intel S3710) runs at around half the speed I got under ESXi.
 
Enable the vdisk write cache in the VM's hardware options, then report the result.
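
For example, via the CLI from the PVE host (VM ID 100 and the volume name are placeholders; check yours with qm config):

Code:
# hypothetical VM ID and volume name; the full volume spec must be restated
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback

The same setting is available in the GUI via the disk's Edit dialog.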

With cache=writeback, everything else is great, but 4K Q32T1 is still about half (no improvement over the default "no cache"): around 100-120 MB/s, which is roughly half of what the S3700 is capable of under ESXi.

With cache=unsafe, the results are the same as with writeback.

Next I'll pass a physical disk through and test it there.
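
For reference, a sketch of whole-disk passthrough on the PVE host, assuming a VM ID of 100 (the by-id name below is a placeholder; pick the real one from the listing):

Code:
# find a stable device path for the SSD
ls -l /dev/disk/by-id/
# attach the whole physical disk to the VM as an extra SCSI disk
qm set 100 --scsi1 /dev/disk/by-id/ata-INTEL_SSDSC2BA400G4_PLACEHOLDER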
 
What are your ESXi settings? What is your VM config (qm config)? What is your host CPU?
Reasons for being faster or slower include enabled/disabled CPU flags (Spectre and Meltdown mitigations), cache mode, storage type, hardware, and several other things.
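
For example, on the host (VM ID 100 is a placeholder):

Code:
# dump the VM configuration
qm config 100
# host CPU model and flags
lscpu
# Spectre/Meltdown mitigation status on the host
grep . /sys/devices/system/cpu/vulnerabilities/*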

Also, how did you benchmark your disk?
 
For maximum performance, please make sure that you use the "VirtIO SCSI single" controller and that "IO thread" is checked on the disk.
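
Via the CLI this would be something like (VM ID 100 and the volume name are placeholders):

Code:
# switch the SCSI controller type
qm set 100 --scsihw virtio-scsi-single
# enable IO thread on the existing disk (volume spec must be restated)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1

The VM needs a full shutdown and start for these changes to take effect.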
 
(quoting stiphout's original post above)
I suggest setting the processor type to "host" and the hard disk cache to "none". I also use the VirtIO SCSI single controller with discard and IO thread set to "on". Also set the Linux IO scheduler inside the guest to "none"/"noop".
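
For example (VM ID 100 and the guest device name sda are placeholders):

Code:
# on the PVE host: set the CPU type
qm set 100 --cpu host
# inside the Debian guest, as root: check and set the IO scheduler
cat /sys/block/sda/queue/scheduler
echo none > /sys/block/sda/queue/scheduler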
 
Even with a single disk, IO threads can give a speedup, because QEMU then has to do less work in its main thread. In a quick test on a single-disk VM with the fio command mentioned earlier, I get 50% more IOPS and bandwidth with iothread enabled than without.
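
To reproduce the comparison, toggle iothread on the disk and rerun the fio command inside the guest after each full restart (VM ID 100 and the volume name are placeholders):

Code:
# baseline without IO thread
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=0
qm shutdown 100 && qm start 100
# ... run fio in the guest, then enable IO thread and repeat
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
qm shutdown 100 && qm start 100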
 