Hi,
I've been doing some pre-production testing on my home server and ran into what looks like a bottleneck in storage performance, most notably on my Optane drive.
When I install a Windows VM with the latest VirtIO drivers, the performance is rather disappointing.
I've tried switching from VirtIO to VirtIO SCSI and experimented with numerous other options.
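To give an idea of the kind of settings I've been playing with, a disk entry in the VM config looks roughly like this (the storage/disk names and size here are just placeholders, not my exact setup):

scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,cache=none,iothread=1,size=100G

The plain VirtIO variant uses a virtio0 line instead of the scsi0 one.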
Using LVM or ZFS as the backing storage doesn't give different results either.
Finally, I tried the same tests with a ramdisk on the host, which gave results similar to the Optane disk.
Creating a ramdisk inside the Windows VM, on the other hand, gave far better low-queue-depth performance than the one backed by the host.
It seems like there's a bottleneck somewhere in the virtualization layer.
Switching to the second system did yield different results from the first, though on that server the performance also stayed the same across different settings.
Has anybody had the same experience?
Here are the setups for the two systems I've been testing on:
System 1:
Supermicro A2SDi-H-TF
Intel Atom C3758
4x 16GB 2400MHz Samsung DDR4 reg
Intel Optane 900P 480GB
Intel Optane M10 16GB
4x Intel S3510 480GB
8x Seagate ST8000NM055 8TB
PVE 5.3-8 (fully updated)
System 2:
ASRock EP2C602-4L/D16
2x Intel Xeon 2670
8x 8GB Kingston 1600MHz DDR3 reg
8x 4GB Kingston 1600MHz DDR3 reg
1x Crucial MX300 1050GB
4x Seagate ST8000DM004
(Intel Optane 900P 480GB when testing)
PVE 5.3-8 (fully updated)
I've attached a CrystalDiskMark test which I ran on the first system.
C: Optane, VirtIO
F: host ramdisk, VirtIO
H: ramdisk inside the VM (SoftPerfect)