proxmox 5 :: kvm raw disk over zfs zvol :: poor performance

derbot

Member
Oct 5, 2017
Hi,

I'm evaluating Proxmox 5 on a test server, and storage performance inside the guest seems very poor compared to that of the host. The root cause seems to be zvol performance.
In the past week I've read various threads started by people with similar problems, but none of their solutions worked for me.
I can go the LVM way, but async replication and live migration with local storage look like good features.
Maybe you can help.

Write performance: 10% of that of the host
Read performance: 30% of that of the host

Details:

Host:
HP DL380 G6, 48 GB RAM ECC, P410i raid controller
HDDs: 4 x 146GB SAS @ 10k rpm
rpool is raidz1-0 with 4 disks.
os: Proxmox 5

Guest:
os: debian9
ram: 4G
virtualization: kvm
disk: scsi or virtio, cache: no cache / write back (tried multiple options, the performance is not much different; a rough sketch of the VM config follows after this list)
disk size: 32 GB / 120 GB (multiple tests, doesn't seem to matter)
fs: ext4, 4k block size
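For reference, the disk-related part of the VM config looks roughly like this (VMID 100, the local-zfs storage name and the cache setting are just placeholders for whatever is selected in the GUI):

root@dnm:~# qm config 100
scsi0: local-zfs:vm-100-disk-1,cache=writeback,size=32G
scsihw: virtio-scsi-pci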

Tests:
A. Host/write:

root@dnm:/rpool/ROOT/pve-1# dd if=/dev/urandom of=sample.txt bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 575.276 s, 182 MB/s

B. Host/read:

root@dnm:/rpool/ROOT/pve-1# dd if=sample.txt of=/dev/zero bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 446.276 s, 235 MB/s

C. Guest/write

root@debian:~# dd if=/dev/urandom of=sample.txt bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 5275.91 s, 19.9 MB/s

D. Guest/read

root@debian:~# dd if=sample.txt of=/dev/zero bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 1455.35 s, 72.0 MB/s

Tried with various bs values and file sizes.
Tried with zvol volblocksize=4k (to match the ext4 block size); the relevant zfs commands are sketched below.
Added log and cache SSDs (cheaper Intel ones, I know). Performance seems worse.
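For reference, this is roughly how volblocksize can be checked and set (the dataset names below are only examples; volblocksize can only be set when the zvol is created, so testing a different value means creating a new zvol):

root@dnm:~# zfs get volblocksize rpool/data/vm-100-disk-1
root@dnm:~# zfs create -V 32G -o volblocksize=4k rpool/data/vm-100-disk-2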


Thanks,
Bogdan.
 
That is not a valid storage benchmark in any sense. Use fio and test different workloads to get a more realistic picture, and be aware of how the cache on the host (ARC), inside the VM, and in your case potentially also on the RAID controller (which should be in IT/HBA mode for ZFS!) can drastically skew results.
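For example, a 4k random-write run with direct I/O (to take the guest page cache out of the picture; the target path and sizes below are just placeholders) could look like this, executed both inside the guest and on the host for comparison:

fio --name=randwrite --filename=/root/fiotest --size=4G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting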
 
