Slow disk speed on Guest VM

proteus

Member
Feb 24, 2021
I am new to Proxmox VE and I tried to find an answer for this problem with no luck.
The problem is that the speed on the host is good but in the guest VM it is very bad. I tried all the cache types and two different ZFS pools. The system specs for this test are below, but I also tried on another Supermicro server with an AMD Epyc Rome CPU and different HDDs, with the same results.


Test: Block 1M
fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=1M --iodepth=32 --size=5G --readwrite=randrw --rwmixread=50 --numjobs=8 --time_based --runtime=120

### Quick Results
## Host
Run status group 0 (all jobs):
READ: bw=162MiB/s (170MB/s), 19.3MiB/s-20.8MiB/s (20.3MB/s-21.8MB/s), io=19.1GiB (20.5GB), run=120017-120819msec
WRITE: bw=164MiB/s (172MB/s), 19.7MiB/s-21.3MiB/s (20.7MB/s-22.4MB/s), io=19.3GiB (20.8GB), run=120017-120819msec


## Ubuntu 20.04 Guest VM / SCSI Controller: VirtIO SCSI / Default (No Cache) / qemu-guest-agent installed
Run status group 0 (all jobs):
READ: bw=44.9MiB/s (47.1MB/s), 5062KiB/s-6190KiB/s (5184kB/s-6339kB/s), io=5496MiB (5763MB), run=120093-122414msec
WRITE: bw=46.0MiB/s (49.3MB/s), 5811KiB/s-6265KiB/s (5950kB/s-6416kB/s), io=5752MiB (6031MB), run=120093-122414msec



### This system is just for testing, not production ###
DL380p G8 / P420i - HBA Mode
1 x HDD SATA 1TB Enterprise - PVE Boot + 1 x USB Stick for GRUB
2 x HDD SAS 600GB 15K RPM - ZFS-Mirror-01 (ashift=9)
2 x HDD SAS 600GB 15K RPM - ZFS-Mirror-02 (ashift=9)
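For reference, I created the mirrors roughly like this (the device names here are placeholders, not the real ones):

zpool create -o ashift=9 ZFS-Mirror-01 mirror /dev/sdb /dev/sdc
zpool create -o ashift=9 ZFS-Mirror-02 mirror /dev/sdd /dev/sde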

*Both the host and the guest VM are fully updated.

proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
Hi, here are the results with the SATA and IDE controllers:

## SATA

Run status group 0 (all jobs):
READ: bw=29.0MiB/s (30.4MB/s), 3339KiB/s-4435KiB/s (3419kB/s-4542kB/s), io=3692MiB (3871MB), run=124194-127245msec
WRITE: bw=30.2MiB/s (31.7MB/s), 3331KiB/s-4420KiB/s (3411kB/s-4526kB/s), io=3844MiB (4031MB), run=124194-127245msec


## IDE
Run status group 0 (all jobs):
READ: bw=30.2MiB/s (31.7MB/s), 3513KiB/s-4219KiB/s (3597kB/s-4320kB/s), io=3844MiB (4031MB), run=122728-127186msec
WRITE: bw=31.5MiB/s (33.1MB/s), 3947KiB/s-4377KiB/s (4041kB/s-4482kB/s), io=4012MiB (4207MB), run=122728-127186msec

As for another controller, what should I choose?
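If VirtIO SCSI with an iothread is worth trying, I assume it can also be set from the CLI with something like this (VM ID 100 and the disk name are placeholders):

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 ZFS-Mirror-01:vm-100-disk-0,cache=none,iothread=1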

 
I used 1M blocks because I thought it would be easier to get good speeds than with 4K blocks. When I copy a big file over the LAN I get the speeds from the first test, 25-30MB/s. I even installed WS 2019 with the latest VirtIO drivers and I get the same speeds.


fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=4K --iodepth=32 --size=5G --readwrite=randrw --rwmixread=50 --numjobs=8 --time_based --runtime=120

## Host
Run status group 0 (all jobs):
READ: bw=3933KiB/s (4027kB/s), 481KiB/s-501KiB/s (492kB/s-513kB/s), io=461MiB (484MB), run=120018-120114msec
WRITE: bw=3929KiB/s (4023kB/s), 481KiB/s-497KiB/s (493kB/s-508kB/s), io=461MiB (483MB), run=120018-120114msec


## Guest VM
Run status group 0 (all jobs):
READ: bw=2347KiB/s (2403kB/s), 288KiB/s-296KiB/s (295kB/s-304kB/s), io=276MiB (289MB), run=120053-120305msec
WRITE: bw=2344KiB/s (2401kB/s), 285KiB/s-297KiB/s (292kB/s-304kB/s), io=275MiB (289MB), run=120053-120305msec


Like I said, I am new to Proxmox, so maybe these speeds are OK?
 
Before posting I also tried with the default ashift=12; with the same test and 1M block size the speed was about 10MB/s lower.
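In case anyone wants to verify their own pools, I believe ashift can be read back as a pool property on ZFS 2.0:

zpool get ashift ZFS-Mirror-01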
 
I really wanted this to work, but I am tired of testing and searching the internet for a fix... back to Windows, Hyper-V and RAID...
 
@proteus said:
> The problem is that the speed on the host is good but in the guest VM it is very bad. [...] Test: Block 1M
> fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=1M --iodepth=32 --size=5G --readwrite=randrw --rwmixread=50 --numjobs=8 --time_based --runtime=120

Hi @proteus

Your test is not OK, for several reasons:

- you test the host with fio on a dataset where the cache is enabled by default (so up to 1/2 of RAM is usable) and compare the result with a VM test where the cache is limited by the RAM allocated to that VM (give your VM [1/2 of the server's RAM - 1 GB] and see the difference)
- the default block size for a dataset is 128k, while the default for a VM is 512 B, so it is apples against oranges
- on top of that, the default Ubuntu install (and others) uses LVM, so there is another layer with its own block size
- the libaio engine in fio is problematic on ZFS, so your results will not be what you think they are
- use end_fsync with fio, so you can be sure all the test data is written to disk and not still sitting in cache (see the example below)
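For example, something along these lines (posixaio here is just one alternative to libaio; adjust to taste):

fio --randrepeat=1 --ioengine=posixaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=1M --iodepth=32 --size=5G --readwrite=randrw --rwmixread=50 --numjobs=8 --time_based --runtime=120 --end_fsync=1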
@proteus said:
> When I am copying a big file over the LAN I get the speeds from the first test, 25-30MB/s. I even installed WS 2019 with latest virtio drivers and I get the same speeds.

- I would also test the network speed; maybe the problem is there. Maybe your switch uses jumbo frames and your server does not?
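A quick way to check is to compare MTUs and send a non-fragmenting jumbo ping (the NIC name and target IP below are placeholders):

ip link show eno1 | grep mtu
ping -M do -s 8972 192.168.1.1   # 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000 MTU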

Good luck / Bafta
 
Hi, you are right. I gave the VM 10GB of RAM and ran the same fio test with 1M blocks:

Run status group 0 (all jobs):
READ: bw=55.1MiB/s (57.8MB/s), 6532KiB/s-7652KiB/s (6689kB/s-7835kB/s), io=6925MiB (7261MB), run=124110-125581msec
WRITE: bw=57.9MiB/s (60.7MB/s), 6648KiB/s-7717KiB/s (6807kB/s-7902kB/s), io=7267MiB (7620MB), run=124110-125581msec

Slightly better, but this confirms that my tests are wrong and the memory cache is getting in my way.
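To keep the guest's page cache out of the picture between runs, I assume I can also drop it inside the VM first:

sync; echo 3 > /proc/sys/vm/drop_caches   # flush dirty data, then drop page cache, dentries and inodes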


Tried with this on the VM
fio --randrepeat=1 --ioengine=posixaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=1M --iodepth=32 --size=5G --readwrite=randrw --rwmixread=50 --numjobs=8 --time_based --runtime=120 --end_fsync=1

Run status group 0 (all jobs):
READ: bw=35.4MiB/s (37.1MB/s), 4076KiB/s-4847KiB/s (4174kB/s-4964kB/s), io=7433MiB (7794MB), run=209755-209823msec
WRITE: bw=37.1MiB/s (38.9MB/s), 4437KiB/s-5140KiB/s (4544kB/s-5264kB/s), io=7783MiB (8161MB), run=209755-209823msec



The network is fine; it copies at 1Gb/s until RAM is at 50% and then drops to 25-30MB/s.
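That pattern looks like the guest's page cache absorbing the copy until the dirty-page limit is reached, after which writes are throttled to real disk speed. If so, lowering the dirty thresholds inside the guest should make the sustained rate visible right away (the values here are only an example):

sysctl -w vm.dirty_background_ratio=5   # start background writeback earlier
sysctl -w vm.dirty_ratio=10             # block writers once dirty pages reach 10% of RAM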
 
