Hi all,
I'm running ZFS on Proxmox 4.1, and for some reason I/O inside a KVM guest is much slower than on the host. The results are fairly consistent.
Host:
Code:
root@host:/# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.451532 s, 2.4 GB/s
root@host:/# dd if=/dev/zero of=test bs=64k count=16k oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.456314 s, 2.4 GB/s
Guest:
Code:
[root@kvm ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.26477 s, 474 MB/s
[root@kvm ~]# dd if=/dev/zero of=test bs=64k count=16k oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.0685 s, 107 MB/s
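One caveat about my own numbers: dd from /dev/zero on a ZFS dataset with compression enabled can report inflated throughput, since runs of zeros compress to almost nothing. A sketch of the same test with incompressible data (smaller size for brevity; /dev/urandom as the source is my assumption, not something I ran above):

```shell
# Same 64k-block write test, but with incompressible data so ZFS
# compression cannot shortcut the writes.
dd if=/dev/urandom of=/tmp/ddtest bs=64k count=256 conv=fdatasync 2>/dev/null
stat -c %s /tmp/ddtest   # 256 * 64k = 16777216 bytes written
rm -f /tmp/ddtest
```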
The storage configuration on the host is the following:
Code:
root@host:/# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        maxfiles 0
        content images,rootdir,iso,vztmpl

zfspool: storage
        pool rpool/zfsdisks
        content rootdir,images
        sparse
The guest runs on a raw disk image (the only disk option available) with cache set to none. It has 2 CPU cores and 2 GB RAM. The host node runs six HDDs in RAID 10 (three mirrored vdevs), plus one SSD each for cache and log, as you can see below. Should you be wondering why the dsync throughput is so high, it's because sync is set to disabled.
Code:
root@host:/# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
        logs
          sdh       ONLINE       0     0     0
        cache
          sdg       ONLINE       0     0     0
root@host:/# pveperf | egrep '(CPU|SECOND)'
CPU BOGOMIPS: 166416.32
REGEX/SECOND: 1593360
FSYNCS/SECOND: 19405.40
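For reference, the sync=disabled setting I mentioned can be confirmed on the VM dataset like this (a sketch using the pool and dataset names from my config above):

```shell
# With sync=disabled, O_DSYNC/fsync writes are acknowledged immediately
# instead of going to the SLOG, which is why the host dsync numbers are high.
zfs get sync rpool/zfsdisks
```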
I also created another guest (on local storage, not the ZFS plugin) with qcow2 and writeback cache, and got similar if not worse results.
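In case the exact disk attachment matters, this is roughly how I would dump the guest's disk and cache settings for comparison (VMID 100 is a placeholder, not my actual VM):

```shell
# Show how the guest disk is attached (bus type, cache mode, storage);
# 100 is a hypothetical VMID.
qm config 100 | grep -E 'virtio|scsi|ide|sata'
```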
Any help or advice would be appreciated.
Thanks!