Proxmox Performance issue on SATA Storage

ngurjar

Feb 25, 2015
Hi,
I have purchased Proxmox VE 3.4 and installed it on one host with two SATA drives.
I have configured the two SATA drives as two separate directory storages, formatted with the ext4 filesystem.
I have set up a number of KVM-based VMs on both storages.
Inside the VMs that are on storage2 I am getting poor write performance, around 10 MB/s at most. Because of this, %wa inside the VMs gets higher and the load average increases accordingly.

Please note that when I tested the write speed of both storages on the Proxmox host itself, I got equal write speeds.
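(For reference, the kind of host-side test I mean is a plain sequential write, something like the following - the target paths are only examples:

  dd if=/dev/zero of=/storage1/ddtest bs=1M count=1024 conv=fdatasync
  dd if=/dev/zero of=/storage2/ddtest bs=1M count=1024 conv=fdatasync

Both directories give roughly the same result with this kind of test.)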

Any pointers to troubleshoot this issue?


Regards
Neelesh Gurjar
 
You share one SATA disk between a number of VMs and are surprised by bad write performance? Have you done the math?

One SATA disk under a hypervisor will at best give roughly 70-80 MB/s of sustained writes. If writes are distributed evenly among the VMs using this disk, it is a simple calculation to find out what you can expect. The numbers get worse with random writes. To get decent performance for your setup you will have to add a hardware RAID controller with dedicated RAM.
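As a rough example of that calculation (assuming, say, ten guests writing to the disk at the same time): 80 MB/s / 10 VMs = 8 MB/s per VM, which is right around the ~10 MB/s you are seeing.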
 
Thanks mir.
The same configuration with the same number of VMs worked well with SolusVM; I never faced a performance issue there.

Using a RAID controller would exceed my budget, actually. This is mainly for an internal project, so I need to improve performance with the same configuration.

Currently I am using qcow2 images. I have created templates and create VMs as linked clones from those templates. The host has a lot of RAM available.
Can I use that RAM to improve storage performance? Something like writing to RAM first and flushing to the drive later.
Or can I use one SSD drive for caching purposes?
Is there any other tweak I can make to the KVM or filesystem configuration?

Please help
Neelesh Gurjar
 
Thanks Manu,
I was planning to use the writeback cache, but I read that it is not safe and can even cause filesystem corruption.
Do you have any experience using it?
Also, the same setup was working with SolusVM, which was using the Xen hypervisor. That surprised me.

Regards
Neelesh Gurjar
 
Yes. There is always a tradeoff between performance and data safety.
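If you do want to experiment with it, the cache mode is set per disk. From the CLI it would look roughly like this (the VM ID, storage name and disk file are only placeholders - check your real values with 'qm config <vmid>'):

  qm set 101 --ide0 storage2:101/vm-101-disk-1.qcow2,cache=writeback

cache=none is the safe default; writeback gains speed by acknowledging writes from host RAM, so a power loss can cost you data.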

Are you using VirtIO drivers in your guests?

As mir said, standard I/O from a SATA hard drive is around 80 MB/s, topping out at about 100 MB/s.
 
Yes, I agree that a normal SATA drive gives about 80 MB/s. I am using Seagate Enterprise drives with 128 MB of cache, so I may get around 90-100 MB/s.
I would accept an average throughput of 25-30 MB/s inside the VMs.
On one storage drive I am running 10 VMs; however, from my observation, not all VMs are doing disk reads and writes at the same time.

No, I am not using VirtIO drivers; I am using IDE.

Regards
Neelesh Gurjar
 
Can I convert the current IDE disks to VirtIO, or do I need to configure a new disk?
Also, does VirtIO have any tradeoffs? Any idea?

Regards
Neelesh Gurjar
 
If you remove the disk from the VM, the disk will appear as Unused. Then you click on the disk again and select the correct bus interface, VirtIO; the disk will be attached again on the VirtIO bus.
Don't forget: if your VM doesn't have VirtIO drivers yet, it is better to add a second disk on the VirtIO bus and install the drivers FIRST, and only change the bus interface of the primary disk afterwards.
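If you prefer the command line, the same thing can be done with qm once the disk shows up as unused (the VM ID and volume name below are only examples - check yours with 'qm config <vmid>'):

  qm set 101 --virtio0 storage2:101/vm-101-disk-1.qcow2
  qm set 101 --bootdisk virtio0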
 
Please also try monitoring your I/O with iotop on the hypervisor itself and see for yourself how much I/O you are producing. You will also notice that ext4 uses only one writing thread, which is actually a big performance problem. Use LVM to get the most throughput and the lowest latency out of your system. To test the "real" thing, run a benchmark like fio on your hypervisor and in one VM (with only that one VM running during the test) to get a real comparison of the additional penalty of the KVM/virtio software stack.
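A random-write fio run on the hypervisor could look roughly like this (file name, size and runtime are just examples - point it at the storage directory you want to test and delete the test file afterwards):

  fio --name=randwrite --filename=/var/lib/vz/fio.test --size=2G --bs=4k --rw=randwrite --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting

Repeat the same job inside a single running VM to see the virtualization overhead.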

You said that you used Xen, but what type of hypervisor setup? Are these all Linux machines, and was paravirtualized Xen (so without QEMU) used?

I also do not understand what you all mean by "SATA can achieve 80 MB/s":
For measuring I/O performance it is crucial to know what kind of I/O you are producing. A single sequential stream of large I/O requests gives really good throughput of about 150 MB/s on an off-the-shelf SATA drive, yet many parallel random I/O streams yield only a few KB/s each. In mixed workloads the performance is also bad because of the heavy movement of the hard disk head on a single disk.
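To put rough numbers on it (assuming a 7200 rpm disk that manages about 100 random I/O operations per second at 4 KB each): 100 IOPS x 4 KB ≈ 400 KB/s, compared to ~150 MB/s for a single sequential stream on the same disk.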

For the best performance everywhere you will have to use virtio for everything and LVM-backed storage (besides a lot of hard disks). Also check all the guides in the wiki and here in the forum for optimizing different kinds of operating systems. Look at things like storage alignment (if you are using 4K-sector hard disks), background optimization tasks like defrag in Windows, access-time updates of the filesystem, disabling barriers in ext4 (or using XFS or ZFS, etc.). There is really a bunch of things that can lead to more performance. Hardware upgrades ALWAYS make things better, so please reconsider buying a decent RAID controller. When I had to meet a small budget in the past, I normally turned to used parts and bought e.g. an HP P400 or P410 with BBU and 512 MB of cache, and attached at least 4 SATA disks in RAID10. You can get such a system with ca. 4x500 GB disks on eBay for about 100 EUR.
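As one concrete example, disabling access-time updates and write barriers on an ext4 directory storage is a single fstab change (device and mount point are only examples, and barrier=0 is only reasonably safe with battery-backed cache or a UPS):

  /dev/sdb1  /storage2  ext4  defaults,noatime,barrier=0  0  2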

I personally do not know your previous virtualization software, but maybe it has some built-in optimization tweaks like the ones described above for tuning performance (normally applied by installing guest additions).
 
All Linux kernels >= 2.6.25 have VirtIO drivers, which means all distributions released in the last four years have them.
You can check it by grepping for 'virtio' in /boot/config-<your-kernel-version>.
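For example (the kernel version is whatever uname -r reports; the exact output differs per distribution):

  grep -i virtio /boot/config-$(uname -r)

If the support is there you will see lines such as CONFIG_VIRTIO_BLK=m and CONFIG_VIRTIO_PCI=m (=y or =m, depending on how the kernel was built).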
 
small footnote to thread, to add to existing advice,

- for serious disk I/O inside your VMs you absolutely must configure VirtIO to get better I/O performance for your VMs. Period. Not really up for discussion... IDE "works" in the sense that it is easy and functional, but it is definitely not optimal.
- VirtIO has been supported in modern Linux guests for quite a few years. (I don't think you mentioned in your thread what OS your guest VMs are running?)
- note that VirtIO is also fine on typical Windows guests (especially nice in Vista/Win7/Server 2008 and more recent - I still find some issues with 2003/XP and some VirtIO driver configs, but hopefully most people are not doing new deployments of 2003 at this stage in its looming EOL cycle).
- the basic drill to move from IDE to VirtIO disk is easy, i.e. (a CLI sketch follows this list):
(a) schedule brief downtime for the VM
(b) attach a tiny new VirtIO disk to the VM
(c) power the VM off and on
(d) attach the VirtIO driver ISO, available from the KVM site, as per the link in the Proxmox wiki support docs (if running a Windows guest)
(e) install the VirtIO drivers if required in your guest OS (Windows)
(f) make sure the new VirtIO disk is visible and shows up in Device Manager as a "Red Hat VirtIO SCSI controller"
(g) power off, then detach and delete this temporary tiny disk which allowed you to force the install of the VirtIO drivers
(h) detach your REAL OS disk and then re-attach it to the VM, but instead of the IDE bus select the VirtIO bus
(i) boot your VM; it should now boot using the VirtIO bus and drivers for the boot volume, and magically disk performance inside this VM becomes much nicer.
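(For steps (b) and (g) the CLI equivalent would be something like the following - the VM ID and storage name are only examples:

  qm set 101 --virtio1 local:1       # adds a tiny 1 GB temp disk on the VirtIO bus
  qm set 101 --delete virtio1        # detaches it again once the drivers are installed

then delete the leftover unused volume from the Hardware tab.)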

End of the day though: I have a few test boxes for misc test work which are on bare SATA disk, and the performance of these will never be great - as others have said, a SATA disk is not great once you have a lot of VMs grinding I/O against the physical device. Possible workarounds on modern versions of Proxmox include:
-- deploy with a ZFS "software RAID" pool and have many spindles: no hardware RAID controller, but multiple SATA disks in your ZFS storage pool for Proxmox VE VM storage. That way you get the normal speedup attributed to "many spindles make better performance" in a RAID config, i.e. you avoid spending money on a hardware RAID card, but you can't avoid spending money on multiple disks and a chassis that accommodates enough drives. Setup work is a bit more in this configuration, but performance is better too. Clearly, 8 x 500 GB drives in a ZFS pool will yield better I/O performance than a single 4 TB SATA disk, even if the total disk space is not so different :)
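Just as an illustration (disk names are examples; in practice you would use /dev/disk/by-id paths), a striped-mirror pool similar to RAID10 from four disks would be created with something like:

  zpool create tank mirror sda sdb mirror sdc sdd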

-- note that Ceph is now supported as a storage type, and it too can give you a multi-spindle, bare-disk, no-hardware-RAID storage pool (fault tolerance comes from multiple nodes / multiple disks / suitable levels of redundancy). But I am guessing that for a test box (a single host) it will be simpler to set up a ZFS-based storage pool with multiple SATA disks rather than getting into Ceph.

-- or you can always do an 'unsupported' Proxmox install, i.e. set up minimal Debian on bare metal first, on top of a software RAID config that uses multiple spindles (e.g. RAID10 volumes spanning 8-12 physical SATA disks), and then add the Proxmox VE install on top afterwards. But ZFS would be the 'supported' way of doing this kind of setup :)
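Again only a sketch (device names are examples): an 8-disk RAID10 array under Debian would be created with something like

  mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]

and then formatted and mounted as usual before installing Proxmox VE on top.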

Hope this meandering post is of some help maybe.

Tim
 
Thanks very much fortechitsolutions, LnxBil, and Manu for such detailed information. I really appreciate it.
A few of my VMs are running CentOS 6.6 and the others are Windows Server 2008.
To implement VirtIO in the current scenario, I used the steps below:
- I removed the disk from the VM.
- Double-clicked on "Unused Disk".
- Changed it to VirtIO and attached it.
- But then when I started the Linux VM, it did not detect the hard drive and gave the error "No bootable disk found". The Windows VM booted and showed the "Windows Starting up" screen, then got a blue screen and restarted.

So in the case of Windows I need to install VirtIO drivers. How should I proceed with Linux, since it does not even detect the boot drive?

Also, I am using qcow2 images for the VM disks, because if I use LVM I cannot do linked clones, and we want to use linked clones as they launch VMs faster.

One more question: when we do a linked clone, does the base image get hit with any read/write activity?

Regards
Neelesh Gurjar
 
You also need to change the boot options for the VM. This can be done from the VM's 'Options' tab: click on the row named 'Boot order' and select the VirtIO disk as Boot device 1.
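From the CLI the same change would be (the VM ID is an example):

  qm set 101 --bootdisk virtio0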
 
Hi Neelesh, just a few follow-up comments:
(1) your VM guest image may not have VirtIO driver support present for the boot device. Possibly test first by just adding a new VirtIO disk to your VM while it can still boot with the old config, and make sure the OS is able to see the new disk (VirtIO, /dev/vda) before moving ahead. You may need to take steps to rebuild the initrd used at boot time to ensure the required virtio support is present; I am not entirely sure (a rebuild sketch follows this list).
(2) a linked clone is COW (copy-on-write) based, I believe, so the general model is that writes go to new storage, while reads will still hit the 'source' image from which your linked clone was created. So there will be some I/O activity against the source image, just read-only. (I think!?)
(3) qcow2 affects only which features the KVM/VM layer can provide; your guest OS is never aware of what method you use to expose a block device to it. That is outside its scope of visibility :) in theory. It only sees what kind of virtual controller is used (i.e. VirtIO-attached, IDE-attached, etc.). I myself haven't used linked clones much, so I can't really comment on whether or how LVM might be involved or interfere.
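Re point (1): on a CentOS 6 guest the initramfs can usually be rebuilt with the virtio modules included, roughly like this (the module list here is an assumption, and stock CentOS 6 kernels often already ship them, in which case this step is not needed):

  dracut --force --add-drivers "virtio_blk virtio_pci" /boot/initramfs-$(uname -r).img $(uname -r)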

Otherwise I think the other comments in this thread since your question also sound good and are worth reviewing.

Hope you get things to a state you are happy with soon.


Tim
 
Thanks all again.
I think mir has a point. I did not change my boot sequence after adding the VirtIO disk. I will check and let you know.
Whatever I do, I will try to document and post it here so that others get the idea.
Regards
Neelesh Gurjar
 
Hi,
VirtIO works fine. I checked on the Linux machines; I had missed changing the boot option, and that was causing the issue.

Thanks all again.

Regards
Neelesh Gurjar
 
