Quick and Dirty IO Performance Testing

Jan 19, 2016
I manage quite a few servers; most are for SMB clients, and some are coming up on server refreshes.

I have been using Proxmox since v3.4, we run a Proxmox Ceph cluster within our own network, and I am reasonably familiar with the product.

My main focus is migrating the SMB clients from bare-metal MS Windows Server installs to a Proxmox hypervisor with MS Windows Server running as a VM.

These would be very simple machines, but I want to ensure that when we migrate the clients they do not see any performance degradation. IO performance is the biggest concern, as we have previously seen considerably lower IO performance inside a VM than on the hypervisor itself.

As a result I have grabbed some hardware for testing, with the goal of finding the best configuration for performance and reliability.

In the production servers we are currently using Kingston DC500M and Seagate Nytro 1551 drives, but for the purposes of this test I am using some older Intel 320 SSDs. I do not consider these consumer drives, as they have several features aimed at datacenter use: capacitors to flush the SRAM write buffer on power loss, user data not stored in the DRAM cache, etc.

The system specs are below:

Supermicro X11SLH-F
Xeon E3-1246 v3
32GB DDR3 ECC
Onboard Intel C226 with 6 x 6 Gbps SATA3 ports
1 x 300GB Intel 320 SSD (Proxmox installed to this drive: 31GB ext4 root partition, 8GB swap, remainder is the default lvm-thin)
5 x 120GB Intel 320 SSD (test drives for LVM-FAT, LVM-Thin, Dir-ext4, ZFS single drive, and ZFS raidz1)

The Intel 320 SSDs come in at around 2100 fsyncs/second as measured by pveperf.
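For reference, that number comes straight from pveperf pointed at a mount on one of the test drives; the mount point below is just a placeholder:

pveperf /mnt/test320

The FSYNCS/SECOND line in its output is the figure quoted above.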

We installed a base, no-frills Debian buster VM to the LVM-thin pool on the 300GB Intel 320.
For the first round of performance testing I am comparing fio results run against the /dev/sdX device from within Proxmox against results from inside the VM: I use the built-in GUI tools to set up LVM-FAT, LVM-Thin, and Dir-ext4 storage, attach a 40GB VM disk to the Debian VM, and then run the exact same fio commands. Each test was run 3 times.
We are using the default VirtIO SCSI controller in the VM, with discard and SSD emulation turned on.
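For reference, the relevant disk entry in the VM config ends up looking roughly like the lines below; the VM ID, storage name, and disk number are placeholders for whichever storage type is under test:

scsihw: virtio-scsi-pci
scsi1: teststore:vm-100-disk-0,discard=on,ssd=1,size=40G

The same thing can be set from the CLI with something like: qm set 100 --scsi1 teststore:vm-100-disk-0,discard=on,ssd=1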

The fio commands we are using are as follows (a small wrapper script sketch follows the list):

1) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
2) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
3) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=8 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
4) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=64 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
5) fio --ioengine=libaio --direct=1 --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=256 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
6) fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=1M --numjobs=1 --iodepth=1 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
7) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
8) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
9) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=8 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
10) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=64 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
11) fio --ioengine=libaio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=256 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
12) fio --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=1M --numjobs=1 --iodepth=1 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
13) fio --ioengine=libaio --direct=1 --sync=1 --randrepeat=1 --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 --runtime=60 --time_based --buffered=0 --name XXX --filename=/dev/sdX
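
Purely as a sketch of how these can be driven, a loop along the following lines runs the twelve single-pattern tests plus the mixed test three times each against a target and keeps the logs; the device path, job names, and output directory are placeholders:

#!/bin/bash
# Sketch only: run the 13 fio tests above 3 times each against DEV and save the output.
# DEV and OUTDIR are placeholders.
DEV=/dev/sdX
OUTDIR=./fio-results
mkdir -p "$OUTDIR"
for run in 1 2 3; do
  i=0
  for args in "read 4k 1" "randread 4k 1" "randread 4k 8" "randread 4k 64" "randread 4k 256" "read 1M 1" \
              "write 4k 1" "randwrite 4k 1" "randwrite 4k 8" "randwrite 4k 64" "randwrite 4k 256" "write 1M 1"; do
    i=$((i+1))
    set -- $args   # split into rw, bs, iodepth
    fio --ioengine=libaio --direct=1 --sync=1 --rw=$1 --bs=$2 --numjobs=1 --iodepth=$3 \
        --runtime=60 --time_based --buffered=0 --name=test$i --filename="$DEV" \
        > "$OUTDIR/test${i}_run${run}.log"
  done
  # test 13: the 75/25 mixed random read/write job
  fio --ioengine=libaio --direct=1 --sync=1 --randrepeat=1 --rw=randrw --rwmixread=75 --bs=4k \
      --iodepth=64 --runtime=60 --time_based --buffered=0 --name=test13 --filename="$DEV" \
      > "$OUTDIR/test13_run${run}.log"
done

Adding --output-format=json to each fio call and pulling read.iops / write.iops out of the JSON makes averaging the three runs less tedious.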

There are tons of variations for running fio, but the above has typically served me well for performance comparisons.
So far I have run testing with EXT4 directory storage with QCOW2, LVM-Thin, LVM-FAT, VM disk passthrough, and single-disk ZFS.

EXT4 with QCOW2 and LVM-Thin were largely for comparison/curiosity; the LVM-FAT and VM disk passthrough results are below.
ZFS is my preferred local storage option, but there is a lot more going on with ZFS, so I did not want to further complicate the testing just yet.

Performance metrics for fio run against the device directly from within Proxmox are below, ordered the same as the fio commands above.

P1) Read IOPs: 17.3k Read BW: 67.6 MiB/s
P2) Read IOPs: 4.9k Read BW: 19.0 MiB/s
P3) Read IOPs: 19.2k Read BW: 74.9 MiB/s
P4) Read IOPs: 48.8k Read BW: 179 MiB/s
P5) Read IOPs: 45.9k Read BW: 179 MiB/s
P6) Read IOPs: 258 Read BW: 259 MiB/s

P7) Write IOPs: 7.9k Write BW: 30.8 MiB/s
P8) Write IOPs: 6.1k Write BW: 23.6 MiB/s
P9) Write IOPs: 12.0k Write BW: 50.7 MiB/s
P10) Write IOPs: 15.4k Write BW: 60.1 MiB/s
P11) Write IOPs: 14.1k Write BW: 54.0 MiB/s
P12) Write IOPs: 133 Write BW: 133 MiB/s

P13) Read IOPs: 21.9k Read BW: 85.5 MiB/s Write IOPs: 7.3k Write BW: 28.5 MiB/s

Results for LVM-FAT from within the VM, to compare against the above, are listed below:

LVM-FAT 1) Read IOPs: 10.3k Read BW: 40.4 MiB/s
LVM-FAT 2) Read IOPs: 3.8k Read BW: 14.7 MiB/s
LVM-FAT 3) Read IOPs: 20.5k Read BW: 80.1 MiB/s
LVM-FAT 4) Read IOPs: 38.9k Read BW: 151 MiB/s
LVM-FAT 5) Read IOPs: 43.3k Read BW: 169 MiB/s
LVM-FAT 6) Read IOPs: 243 Read BW: 244 MiB/s

LVM-FAT 7) Write IOPs: 4.7k Write BW: 18.5 MiB/s
LVM-FAT 8) Write IOPs: 4.2k Write BW: 16.5 MiB/s
LVM-FAT 9) Write IOPs: 9.9k Write BW: 39.0 MiB/s
LVM-FAT 10) Write IOPs: 14.6k Write BW: 57.2 MiB/s
LVM-FAT 11) Write IOPs: 15.6k Write BW: 60.8 MiB/s
LVM-FAT 12) Write IOPs: 129 Write BW: 130 MiB/s

LVM-FAT 13) Read IOPs: 14.6k Read BW: 56.9 MiB/s Write IOPs: 4.87k Write BW: 19 MiB/s

Results for VM Disk Passthrough are listed below:

Passthrough 1) Read IOPs: 10.4k Read BW: 40.7 MiB/s
Passthrough 2) Read IOPs: 3.8k Read BW: 15.0 MiB/s
Passthrough 3) Read IOPs: 17.8k Read BW: 69.7 MiB/s
Passthrough 4) Read IOPs: 39.5k Read BW: 154 MiB/s
Passthrough 5) Read IOPs: 39.6k Read BW: 155 MiB/s
Passthrough 6) Read IOPs: 244 Read BW: 245 MiB/s

Passthrough 7) Write IOPs: 4.8k Write BW: 18.7 MiB/s
Passthrough 8) Write IOPs: 4.3k Write BW: 16.8 MiB/s
Passthrough 9) Write IOPs: 10.2k Write BW: 39.8 MiB/s
Passthrough 10) Write IOPs: 14.9k Write BW: 58.1 MiB/s
Passthrough 11) Write IOPs: 15.8k Write BW: 61.5 MiB/s
Passthrough 12) Write IOPs: 129 Write BW: 130 MiB/s

Passthrough 13) Read IOPs: 18.3k Read BW: 71.6 MiB/s Write IOPs: 6.123k Write BW: 23.9 MiB/s

While I can cherry-pick tests that come within ~5% of the performance of running fio directly within Proxmox, there is still a considerable loss of performance (upwards of 30-50%) when not submitting large numbers of concurrent IO requests from within the VM. For example, in test 1 the VM on LVM-FAT reaches 10.3k read IOPs versus 17.3k on the host, roughly a 40% drop.

I am using 'default' or 'out of the box' configurations as much as I can; if anyone has recommendations for my configuration that would let me achieve greater overall performance from within the VM, I am very interested.
And if anyone sees problems with the tests or the hardware setup, just let me know.
For my next tests I would like to compare Windows VM fio performance to that of the Debian VM, and to go through ZFS mirror/raidz1 tests. I also have a couple of new Kingston DC500Ms that I may run a few tests on, to see whether the performance losses are greater on faster devices.
 