VM read/write optimization

GeorgiL

Greetings,

I'm trying to analyze and optimize the read and write performance of a VM that acts as a NAS and has control over some of the storage. The setup is as follows:

Virtual Environment 5.3-12

VM config (some parts removed):

Code:
balloon: 0
boot: dcn
bootdisk: scsi0
cores: 2
cpu: host,flags=+pcid
hookscript:
ide2: none,media=cdrom
memory: 10240
name:
net0: virtio= ,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-0,cache=directsync,size=156G
scsihw: virtio-scsi-pci
smbios1:
sockets: 1
startup: order=1,up=90,down=420
vmgenid:

On the hardware side, there is one hard drive behind a RAID controller with a BBU and a write-back caching policy.


I did some tests with the VM cache setting on Write Through and DirectSync. I'm after reliability first and speed second, so I limited myself to those two caching modes.

For writes I tested with:
Code:
dd bs=512 count=5120000 if=/dev/zero of=write_test.img
and
dd bs=512 iflag=nocache oflag=nocache count=5120000 if=/dev/zero of=write_test.img
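
A variant with conv=fdatasync should make dd flush the data before reporting a rate, which may give steadier write numbers (just a sketch based on the commands above, not something I've run here):
Code:
dd bs=512 count=5120000 if=/dev/zero of=write_test.img conv=fdatasync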

For reads I used
Code:
dd bs=512 if=write_test.img of=/dev/null
and
dd bs=512 iflag=nocache oflag=nocache if=write_test.img of=/dev/null
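
To get cold-cache read numbers without rebooting the VM, the guest page cache can presumably be dropped first (a sketch, run as root inside the VM; with Write Through the host page cache would also need to be dropped on the host side):
Code:
sync; echo 3 > /proc/sys/vm/drop_caches
dd bs=512 if=write_test.img of=/dev/null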

When performing writes inside the VM, DirectSync seems to perform a little better, although both DirectSync and Write Through prove to be mostly inconsistent, even when writing to the same file (maybe CoW?). Speed can vary greatly, between 39 MB/s and 100 MB/s for DirectSync and 34 MB/s to 47 MB/s for Write Through. On the read side, DirectSync and Write Through perform the same: the first read after a VM reboot, or reads with iflag/oflag=nocache, end up at a consistent 18-20 MB/s. Once the file has been cached in RAM the read numbers are through the roof and no longer interesting.

When I perform the same tests on the host, writes are about 20-30 MB/s better, but reads are several times better, easily reaching 65-85 MB/s. I'm trying to figure out why reads inside the VM are so much worse and how to improve them. DirectSync seems to be doing a bit better than Write Through here as well.

My thinking is that the host page cache - the read cache the host builds in Write Through mode - is not needed, since the VM builds a cache of its own anyway, caching the same data. Instead of wasting double the amount of RAM to cache the same files once on the host and once in the VM, I can use DirectSync and have those files cached only in the VM's RAM, leaving the host's RAM free for caching related to other LXCs and activities. Correct me if I'm wrong. Advice is welcome!
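
For reference, switching the cache mode between test runs can be done from the host with qm, reusing the disk line from the config above (a sketch; the VM ID and disk spec are just taken from that config):
Code:
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writethrough,size=156G
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=directsync,size=156G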


Thank you
 
You've got the right idea by benchmarking and selecting the right option for your setup, as most of the time there isn't a "one size fits all" configuration for VM workloads.

However, two things you should consider:
  • Update your system - 5.3 is rather old, and newer versions contain better optimizations in the kernel and QEMU
  • Use 'fio' or a comparable tool for disk benchmarking - 'dd' is never a reliable benchmark, especially when /dev/null is involved; kernels tend to do all sorts of weird optimizations and tricks that distort your view of the system. 'fio' also reports IOPS, which can be just as important as throughput, depending on your use case (see the sketch below)
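
For example, a sequential write and a random read run might look something like this (a sketch; file name, size, block sizes and iodepth are placeholders to adapt to your workload):
Code:
fio --name=seqwrite --filename=fio_test.img --size=4G --bs=1M --rw=write --direct=1 --ioengine=libaio --iodepth=16
fio --name=randread --filename=fio_test.img --size=4G --bs=4k --rw=randread --direct=1 --ioengine=libaio --iodepth=16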
 
