Win 10 in Proxmox - Performance

I suspect the qcow2-backed disk was trying to expand/grow the virtual disk as you were filling it up.
I remember Hyper-V dynamically expanding disks causing issues when growing, most noticeable on databases being written out.
 
The only difference I figured out so far:
(screenshot attachment)

Just as additional information: installing O365 took multiple hours, with 5 MB/s HDD write speed (on the bad VM).
Transferring the test file from the "good" VM to a NAS works at more than 120 MB/s ... what?!
 
The difference is that you are using a raw image in the "new" VM and a qcow2 image in the "kaka" VM. That's the difference in performance.
You are right!! I did not see this fact!
I am sorry.


Can I change this in the config, without killing the VM?
 
You can run qemu-img convert -f qcow2 -O raw /path/to/image.qcow2 /path/to/image.raw, at least in theory. Take a backup before!

EDIT: You may have to fix the path in the config after that, but I'm not sure.
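To sketch the whole procedure in one place: the commands below are an untested outline only. The VM ID (100) and the storage paths are placeholders for this example; they depend on your storage configuration, so adapt them before running anything.

```shell
# Sketch only: VM ID 100 and all paths below are placeholders.
# 1. Shut the VM down and take a backup first.
qm shutdown 100
vzdump 100 --mode stop --storage local

# 2. Convert the qcow2 image to raw.
qemu-img convert -f qcow2 -O raw \
    /var/lib/vz/images/100/vm-100-disk-0.qcow2 \
    /var/lib/vz/images/100/vm-100-disk-0.raw

# 3. Point the VM config at the new file: in /etc/pve/qemu-server/100.conf,
#    change "vm-100-disk-0.qcow2" to "vm-100-disk-0.raw". Boot and verify
#    the VM works before deleting the old qcow2 image.
qm start 100
```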
 
And as Dominic suggested, I would choose "SCSI" instead of "VirtIO Block" as the vdisk bus/device type. As far as I know "VirtIO Block" is kind of deprecated.

EDIT: And your CPU type is still "Default (kvm64)"? If you don't plan a cluster with different physical CPUs you should try "host".
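Both suggestions can also be applied from the CLI. The following is a sketch, assuming VM ID 100 and a disk on a storage named "local"; both are placeholders. Note that Windows usually needs the VirtIO SCSI driver installed before it can boot from a SCSI disk, so test carefully.

```shell
# Sketch, assuming VM ID 100: use the VirtIO SCSI controller.
qm set 100 --scsihw virtio-scsi-pci

# Reattach the existing disk as scsi0 instead of virtio0
# (detaching moves it to "unused0"; the volume ID is a placeholder):
qm set 100 --delete virtio0
qm set 100 --scsi0 local:100/vm-100-disk-0.raw

# Pass the host CPU type through (only sensible if you don't plan a
# cluster with different physical CPUs):
qm set 100 --cpu host

# Adjust the boot order if the VM previously booted from virtio0:
qm set 100 --boot order=scsi0
```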
 
Thanks for the advice.
I am not ignoring your proposal.
Will follow up after the HDD is converted to raw.
 
Update: after converting the HDD:


(screenshot attachment)
The result is much better!!

I was testing 2 cases:
Case 1: copied 10 GB from NAS to local HDD => constant 105MB/s
Edit: After some more testing: not reproducible (tested with multiple different VMs)
Case 2:
copied 10 GB from HDD to HDD (picture above)

Curious question: Why is the local copy performance fluctuating between 30 MB/s and 200 MB/s?
 
(screenshot attachment)
NAS faster than folder-to-folder (my assumption was F2F would be much faster).
Different results in different VMs (even though the config is the same).
Hmmm... interesting.
 
sherminator is right. You can test that scenario easily on your own PC: copy a file folder to folder on the same HDD and you will see speeds crippling, because the drive head has to seek back and forth between the read and the write position. On an SSD that's not such a big impact, but still noticeable. It's just the worst case for HDDs.
 
Changing format on CLI is fine, of course. If you prefer GUI: The move disk button lets you change the output format if the target storage is of type directory.
(screenshot: the Move disk dialog)

SSD emulation might be especially interesting together with TRIM/Discard. In the background, SSD emulation sets the rotation_rate parameter. So if you have it enabled and type ps -aux you should be able to grep for vmid or rotation_rate
Code:
ps -aux | grep rotation_rate

There are some conditions for when exactly the parameter is set. If you're a little into programming, then I think it should be possible to get an idea of them by looking at the if/elsif lines surrounding the 2 occurrences of rotation_rate in the code, without understanding everything: https://git.proxmox.com/?p=qemu-ser...5e5e9a26b783aa997669b514d03a064;hb=HEAD#l1456
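A quicker check than hunting through the process list: qm showcmd prints the full QEMU command line that Proxmox generates for a VM, and you can grep that for the parameter. The VM ID 100 below is a placeholder for this example.

```shell
# Sketch, assuming VM ID 100: print the generated QEMU command line
# and check whether rotation_rate (set by SSD emulation) appears in it.
qm showcmd 100 | grep -o 'rotation_rate=[0-9]*'
```

If SSD emulation is active for a disk, you should see rotation_rate=1 in the output; no output means the parameter was not set.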