How to make the best use of an SSD drive in VMs?

shalak

Member
May 9, 2021
On my Proxmox machine I have two SSD drives natively connected. The first disk is for the Proxmox OS; the second one I want to use in VMs.

Basically, I want to use half of this second SSD in one VM and the other half in another, preferably with auto-adjusting sizes (in case one VM needs more than half; I cannot predict which will need more at this point).

In Proxmox, I've set the second SSD up as LVM-thin storage.

In the VM, I added a new hard disk using SCSI Bus/Device and pointed it to the LVM-Thin storage.

Unfortunately, I'm getting really bad bottlenecks. `sysbench --test=fileio` executed in the VM reported:

Code:
Throughput:
    read, MiB/s:                  10.77
    written, MiB/s:               7.18
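For reference, a sketch of how a benchmark like this could be reproduced (sysbench 1.x syntax; the working-set size below is an assumption, not stated in the post):

```shell
# Hypothetical reproduction of the fileio benchmark; the 2G file set
# size is an assumption. Newer sysbench versions use positional test
# names instead of the deprecated --test= flag.
sysbench fileio --file-total-size=2G prepare
sysbench fileio --file-total-size=2G --file-test-mode=rndrw run
sysbench fileio --file-total-size=2G cleanup
```

The `prepare`/`run`/`cleanup` phases must use the same `--file-total-size`, and the test should be run in a directory on the disk being measured.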

While on the proxmox host (on the first SSD), I'm seeing:

Code:
Throughput:
    read, MiB/s:                  71.33
    written, MiB/s:               47.56

That's pretty much an order of magnitude difference. Toggling the "SSD emulation" doesn't help.

What should I change in my workflow to get better results?
 
I should add that those drives are connected to the server via an HPE Smart Array P420i controller, which has a battery-backed cache. I wonder how safe it would be to use the "writeback" cache mode.
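If the battery-backed cache makes writeback acceptable for your risk tolerance, the cache mode can be set per virtual disk from the host shell; a minimal sketch, where the VMID, storage name, and disk name are placeholders rather than values from this thread:

```shell
# Hypothetical: enable writeback caching on one VM disk.
# "100" and "fast:vm-100-disk-0" are placeholder names.
qm set 100 --scsi0 fast:vm-100-disk-0,cache=writeback
```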
 
I'm not sure; I use the GUI with the following settings:
1674439209811.png

("fast" is the lvm-thin storage)
You are using an emulated LSI controller. Try "virtio SCSI single" which should be faster.

The first one is an ADATA SU630, the second one (the one that is bottlenecking) is a Seagate IronWolf 125. I would expect it to have better performance than the ADATA.
Then your benchmarks are not comparable. You should benchmark the same SSD model with and without virtualization. Maybe that IronWolf is slow too when running that benchmark directly on the host, without virtualization.
 
You are using an emulated LSI controller. Try "virtio SCSI single" which should be faster.
I cannot seem to change the SCSI Controller value from the GUI, and for Bus/Device I can only see:
1674471176456.png
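The controller type can also be changed from the host shell; a sketch assuming VMID 100 and disk scsi0 on the "fast" storage (all placeholders):

```shell
# Hypothetical: switch the VM to the VirtIO SCSI single controller and
# re-attach the disk with an IO thread; VMID and disk name are assumed.
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 fast:vm-100-disk-0,iothread=1,ssd=1
```

With `virtio-scsi-single`, each disk gets its own controller instance, which is what makes the `iothread` option effective.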
Then your benchmarks are not comparable. You should benchmark the same SSD model with and without virtualization. Maybe that IronWolf is slow too when running that benchmark directly on the host, without virtualization.
You're right. I wiped the disk, set up a Directory storage using ext4, and repeated the test:
Code:
Throughput:
    read, MiB/s:                  111.64
    written, MiB/s:               74.43
This confirms that the bottleneck is in the virtualization layer, not the drive itself.
 
Thanks, that helped a bit:

With IO Thread disabled:
Code:
Throughput:
    read, MiB/s:                  26.73
    written, MiB/s:               17.82

With IO Thread enabled:
Code:
Throughput:
    read, MiB/s:                  34.65
    written, MiB/s:               23.10

Is this the best I can aim for? Does VM overhead leave us with a third of the original performance?
 
It really depends on the workload. For example, 1M sequential async writes have way less overhead than 4K random sync writes, especially with nested filesystems, where metadata multiplies. For more specific benchmarks you could use fio.
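As a sketch, the two workload extremes mentioned above could be compared with fio like this (test names and the 1G size are assumptions):

```shell
# Hypothetical fio runs: large sequential async writes vs. small random
# sync writes. Run each in a directory on the disk under test.
fio --name=seq1m  --rw=write     --bs=1M --size=1G --ioengine=libaio --iodepth=16 --direct=1
fio --name=rand4k --rw=randwrite --bs=4K --size=1G --fsync=1 --direct=1
```

Running the same pair of jobs on the host and inside the VM shows how the virtualization overhead varies with block size and sync behavior.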
 
I noticed that LVM-thin works way worse than Directory storage + qcow2 files for VM disks. What filesystem should I use for the Directory in such an approach: xfs or ext4? It will only contain the qcow2 files...
 
LVM-thin is still very lightweight. Stuff like ZFS will be way worse, easily several times the overhead of LVM-thin. Not sure whether ext4 or xfs will be better; best you benchmark it yourself.
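One hedged way to run that comparison yourself (the device path and mount point are placeholders; repeat the whole sequence with the other filesystem):

```shell
# Hypothetical ext4-vs-xfs test for a qcow2 directory store.
# /dev/sdX1 and /mnt/test are placeholders. WARNING: mkfs destroys data.
mkfs.ext4 /dev/sdX1          # second pass: mkfs.xfs -f /dev/sdX1
mount /dev/sdX1 /mnt/test
fio --name=dirstore --directory=/mnt/test --rw=randrw --bs=4K --size=1G --direct=1
umount /mnt/test
```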
 