60 IOPS in Windows VM with VirtIO SCSI, is this normal?

95GMT400

Hello everyone, I'd like to start by saying that I'm new-ish to Proxmox and especially new to server hardware. I recently got my hands on an HPE ProLiant ML350 Gen9 tower server without storage. The specs of the server are two Intel Xeon E5-2620 v4 CPUs, 32 GB of RAM (16 GB per CPU), and a Smart Array P440ar controller. Before I got this server I was using 2 desktop towers to run 5 VMs. I recently transferred all the VMs to the new server and wanted to add a new Windows VM to encode DVDs to put on my Plex server. (Yes, I know this is technically against the rules, but the server is for my LAN only, so I'm essentially just making my DVDs easier to watch.)

The VMs are as follows:
100: pfSense Router
101: PiHole DNS Filter (Headless Debian)
102: Lancache for Game downloads (Headless Debian)
103: Plex media server (Debian)
104: Zabbix Monitoring (Headless Debian)
105: Windows 10 21H2

The Windows VM is experiencing horrible disk write performance; it took me over an hour to get through the Windows install alone. I have the latest stable VirtIO SCSI and Balloon drivers installed. Here is a CrystalDiskMark real-world performance test from inside the Windows VM:
[screenshot: CrystalDiskMark results]
From top to bottom, the tests are:
Sequential 1MB Queues=1 Threads=1 (MB/s)
Random 4KB Queues=1 Threads=1 (MB/s)
Random 4KB Queues=1 Threads=1 (IOPS)
Random 4KB Queues=1 Threads=1 (µs)

Here is the config of the Windows VM (105):
Code:
root@pve:~# cat /etc/pve/qemu-server/105.conf
boot: order=scsi0;net0;sata1
cores: 5
hotplug: disk,network,usb
machine: pc-i440fx-6.2
memory: 8192
meta: creation-qemu=6.2.0,ctime=1652318673
name: Encode
net0: e1000=52:C3:0F:45:3E:A8,bridge=vmbr0,firewall=1
numa: 1
ostype: win10
sata1: local:iso/virtio-win-0.1.217.iso,media=cdrom,size=519096K
scsi0: local-lvm:vm-105-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=01aa6ff7-5603-4950-8216-c883febd7193
sockets: 2
vmgenid: 9c3c1f16-34a0-4ed1-8343-820bcf2c4bf9

I have seven 2.5-inch 7200 RPM 500 GB drives that I installed in the server, and configured them as follows:
Cache is enabled on the P440ar, but it is read-only, no write cache.
The P440ar is connected to CPU0 through PCIe.
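For reference, the controller and cache configuration can be double-checked from the PVE shell with HPE's ssacli utility. This is only a sketch; ssacli is not part of Proxmox and has to be installed from HPE's repositories:
Code:
root@pve:~# ssacli ctrl all show config detail

The detailed output should include the cache module, the read/write cache ratio, and whether write cache is disabled (for example because the backup battery/capacitor is missing).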

PVE Boot Drive (RAID 1):
The array is not actually degraded; I am just using 3D-printed caddies, so the controller can't verify that the drives are HP genuine.
[screenshot: RAID 1 boot array]
Cache Storage for VM 102:
[screenshot: cache storage array]
RAID 5 to store my encoded video files for VM 103:
[screenshot: RAID 5 array]

Is there something I've done that is glaringly wrong? Or do two 7200 RPM drives in RAID 1 just have performance that bad? I know RAID 1 doesn't give a write increase, but with a dedicated RAID controller I wouldn't think it would trash the performance that badly.
 
I have a ZFS mirror of two laptop drives and pveperf gives me around 70 fsyncs/sec. I think your laptop drives are showing normal performance.
Thank you for your insight. When I run pveperf here is my output:
Code:
root@pve:~# pveperf
CPU BOGOMIPS:      134160.48
REGEX/SECOND:      2313782
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    113.14 MB/sec
AVERAGE SEEK TIME: 19.89 ms
FSYNCS/SECOND:     25.79
DNS EXT:           92.29 ms
DNS INT:           66.22 ms (localdomain)
I ran it 3 more times to get more data and ended up with:
Code:
root@pve:~# pveperf
CPU BOGOMIPS:      134160.48
REGEX/SECOND:      2138847
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    122.79 MB/sec
AVERAGE SEEK TIME: 17.27 ms
FSYNCS/SECOND:     16.55
DNS EXT:           89.81 ms
DNS INT:           66.88 ms (localdomain)
root@pve:~# pveperf
CPU BOGOMIPS:      134160.48
REGEX/SECOND:      2324339
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    179.64 MB/sec
AVERAGE SEEK TIME: 24.25 ms
FSYNCS/SECOND:     13.95
DNS EXT:           90.97 ms
DNS INT:           67.59 ms (localdomain)
root@pve:~# pveperf
CPU BOGOMIPS:      134160.48
REGEX/SECOND:      2225452
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    224.05 MB/sec
AVERAGE SEEK TIME: 19.99 ms
FSYNCS/SECOND:     17.96
DNS EXT:           90.29 ms
DNS INT:           65.08 ms (localdomain)

What are FSYNCS? What are they measuring?
 
What are FSYNCS? What are they measuring?
It measures the maximum number of synchronous writes (per second): (small unbuffered) write actions where the CPU waits until the drives (claim to) have actually written the data to disk. Note that Proxmox itself is also writing to your disks for logging and graphs during those tests.
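If you want to reproduce this outside of pveperf, a crude sketch is to force every write to be synchronous with dd (the test file path below is just an example; remove it afterwards):
Code:
root@pve:~# dd if=/dev/zero of=/root/synctest bs=4k count=1000 oflag=dsync
root@pve:~# rm /root/synctest

With oflag=dsync each 4 KiB block has to reach the disk before the next write is issued, so the throughput dd reports, divided by the block size, is roughly the number of synchronous writes per second. On plain 7200 RPM drives that should land in the same ballpark as your FSYNCS/SECOND numbers.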
 
It measures the maximum number of synchronous writes (per second): (small unbuffered) write actions where the CPU waits until the drives (claim to) have actually written the data to disk. Note that Proxmox itself is also writing to your disks for logging and graphs during those tests.
Okay, good to know. After you said that, the solution hit me: my monitoring/logging VM. I was having issues with my ISP, so I have multiple ICMP pings that go out every second, and it saves that data. I shut down the VM, and now look at my FSYNCS/sec:
Code:
root@pve:~# pveperf
CPU BOGOMIPS:      134160.48
REGEX/SECOND:      2463466
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    184.67 MB/sec
AVERAGE SEEK TIME: 12.48 ms
FSYNCS/SECOND:     51.14
DNS EXT:           86.50 ms
DNS INT:           67.81 ms (localdomain)
root@pve:~# pveperf
CPU BOGOMIPS:      134160.48
REGEX/SECOND:      2660894
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    357.80 MB/sec
AVERAGE SEEK TIME: 11.56 ms
FSYNCS/SECOND:     38.78
DNS EXT:           89.08 ms
DNS INT:           67.87 ms (localdomain)
root@pve:~# pveperf
CPU BOGOMIPS:      134160.48
REGEX/SECOND:      2774830
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    508.19 MB/sec
AVERAGE SEEK TIME: 11.56 ms
FSYNCS/SECOND:     49.08
DNS EXT:           87.13 ms
DNS INT:           72.63 ms (localdomain)

It's not amazing, but it's way better than before. For now I think I will turn the ping frequency down or put an IOPS limit on that VM's disk.
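If I go the limit route, my understanding is that the cap can be set per disk with qm set. A rough sketch, assuming the Zabbix VM's disk is scsi0 and the volume is local-lvm:vm-104-disk-0 (I would check the actual volume name in the VM's config first):
Code:
root@pve:~# qm set 104 --scsi0 local-lvm:vm-104-disk-0,iops_rd=100,iops_wr=100

I believe the same read/write limits can also be set from the GUI when editing the disk.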
However, I think I will be looking for SSDs as soon as possible. Would consumer 2.5-inch SATA 6 Gb/s 500 GB or 1 TB SSDs be a bad idea for any reason?
 
Would consumer 2.5-inch SATA 6 Gb/s 500 GB or 1 TB SSDs be a bad idea for any reason?
They can wear out quickly from all the Proxmox logging. But you can run Proxmox fine from your current drives (and store ISOs and templates there) and put the VMs on SSDs. Note that a single machine does not wear out an SSD quickly, but you are running multiple (virtual) machines on them at the same time. I suggest searching this forum and the internet for reports on good and bad SSDs for virtualization.
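If you do go with consumer SSDs, wear is easy to keep an eye on with smartmontools. A sketch; the device name is just an example and the package has to be installed first:
Code:
root@pve:~# apt install smartmontools
root@pve:~# smartctl -a /dev/sda

Look at the endurance attributes the particular drive reports (for example total LBAs written or a percentage-used value) and compare them against the drive's rated TBW.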
 
M.2 NVMe in a PCIe -> M.2 adapter would be a better option.
Thank you for your input. Currently I'm looking at putting Proxmox on a hard drive connected directly to the motherboard and using the RAID 1 array only to store the VM disks for now. I don't know if I want to actually buy PCIe storage for this, as it's just my router, ad blocker, and media server. I know 2.5-inch SSDs have come down in price, and I would set them up the same way: Proxmox on a hard drive, with RAID 1 SSDs for VM disks, just to prevent Proxmox logging from wearing my drives unnecessarily, as leesteken said.
They can wear out quickly from all the Proxmox logging
 
SSDs usually wear from writes; how many writes you expect your VMs to do will decide which SSDs you should get.

Samsung makes some consumer SSDs from EVO -> PRO ($$ -> $$$). As others have suggested, keep Proxmox running on the RAID 1 HDDs, add two SSDs in RAID 1, and migrate the VM disks to these.
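Moving the disks afterwards is a one-liner once the SSD array has been added as a storage. A sketch, assuming the new storage is called ssd-raid1 (the name is made up here):
Code:
root@pve:~# qm move_disk 105 scsi0 ssd-raid1 --delete 1

The --delete 1 option removes the old copy on the HDD array once the move has finished; the same action is also available in the GUI when the disk is selected under Hardware.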
 
SSDs usually wear from writes; how many writes you expect your VMs to do will decide which SSDs you should get.

Samsung makes some consumer SSDs from EVO -> PRO ($$ -> $$$). As others have suggested, keep Proxmox running on the RAID 1 HDDs, add two SSDs in RAID 1, and migrate the VM disks to these.
Okay, I will look into Samsung's SSD offerings.
Thank you very much!
 
