Hi all!
I've got an HP DL380 G8 running 2x 300 GB 15k SAS in RAID 1 and 4x 300 GB 15k SAS in RAID 10. The controller has 1 GB of cache, write cache is enabled, and a BBU is present. Controller status is OK.
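In case it helps, here's roughly how I verified the controller, cache, and battery status with ssacli (assuming HP's Smart Storage Administrator CLI is installed; the slot number may differ on other boxes):

ssacli ctrl all show status
ssacli ctrl slot=0 show | grep -i -e cache -e battery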
I've just set it up (yes, the RAID rebuild finished long ago) and moved my VMs over from my other server. LXC containers are fine; 1 GB of write cache is amazing.
But I've noticed bad performance in KVM VMs. At first it was just a feeling: apt-get upgrades were way slower, and Docker image unpacking took far too long. So I spun up a test VM with GRML live and a 40 GB image on my RAID 10 array. The disk is attached via VirtIO SCSI with writeback cache enabled (which should be safe with a BBU, right?)
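For reference, the relevant disk setting looks like this (VMID 100 and the storage/volume names are just placeholders, not my actual config):

qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback
qm config 100 | grep scsi0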
I did a simple dd test:
First, on the PVE host (RAID 1 array):
root@neto:~# dd if=/dev/zero of=test.img bs=512 count=10000 oflag=direct
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.553076 s, 9.3 MB/s
Then in the KVM VM (RAID 10 array):
root@grml:~# dd if=/dev/zero of=/mnt/test.img bs=512 count=10000 oflag=direct
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 2.22463 s, 2.3 MB/s
As you can see, KVM is a LOT slower, even though it's on the faster RAID array (10 vs. 1). Both arrays were idle; no other VMs or services were running.
I'm aware that dd isn't the best benchmark, but real-world performance is also way too low.
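If numbers from a proper benchmark are more useful, I can run something like this with fio on both the host and in the guest (the file path here is just an example):

fio --name=writetest --filename=/mnt/fio-test --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --size=1G --runtime=30 --time_based --group_reporting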
Any ideas?
Thanks!
Greetings,
erfus