Hi all,
I have been playing with Proxmox VE for a few weeks now, but I can't seem to solve a storage performance issue I'm seeing inside my VMs.
I have a single Proxmox VE node and a FreeBSD-based NAS. The drives in the NAS are set up as a ZFS RAID 10 pool and exported over NFS.
If I run a simple dd write test from the Proxmox VE node directly to the NFS mount point, I get 80-88 MB/s. The same test from within a VM runs at roughly half that, maxing out around 40 MB/s.
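For what it's worth, the test is roughly this (the paths, block size and sync flag here are just placeholders, not necessarily the exact values I used):

# on the Proxmox node, writing straight to the NFS mount
dd if=/dev/zero of=/mnt/pve/nas-nfs/ddtest.img bs=1M count=4096 conv=fdatasync

# inside the VM, writing to the virtio disk that lives on that same NFS storage
dd if=/dev/zero of=/root/ddtest.img bs=1M count=4096 conv=fdatasync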
I have atime disabled, I have tried the deadline I/O scheduler, and I have tried tuning the NAS. Nothing has made a significant difference.
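To be clear, these are roughly the knobs I mean (the device name and ZFS dataset name are only examples):

# deadline I/O scheduler inside the guest, on the virtio disk
echo deadline > /sys/block/vda/queue/scheduler

# atime turned off on the ZFS dataset behind the NFS export, on the NAS
zfs set atime=off tank/vmstore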
Since a single VM was only getting about 50% of the host's throughput, I ran the same test on two separate VMs at the same time. That reached a combined throughput of 47 MB/s, roughly 58% of what the host gets.
I am using virtio for the virtual disks, with the raw disk format.
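In case it helps, the disk line in the VM config (/etc/pve/qemu-server/<vmid>.conf) looks something like this, with the storage name, VMID and size as placeholders:

virtio0: nas-nfs:101/vm-101-disk-1.raw,size=32G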
So, can anyone suggest why the VM performance is so poor?
Thanks.