Unresponsive VM during high disk usage

Analius

Hi,

I've set up a Debian VM with the following config:

Code:
agent: 1
balloon: 2048
boot: order=scsi0;net0
cores: 8
memory: 16384
name: Docker
net0: virtio=16:D0:54:56:5D:02,bridge=vmbr0
net1: virtio=02:75:60:E0:2E:17,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
scsi0: Spool:vm-100-disk-0,cache=writeback,discard=on,format=raw,size=32G
scsi1: Spool:vm-100-disk-1,cache=writeback,discard=on,format=raw,size=164G
scsihw: virtio-scsi-pci
smbios1: uuid=50e5a719-3deb-4425-b23a-85c3d63a7a97
sockets: 1
startup: order=2
vmgenid: 19d2ce34-393c-4d2b-bee9-6c43330ffc02

with the backing pool, spool, laid out as follows:
Code:
# zpool status spool
  pool: spool
 state: ONLINE
  scan: scrub repaired 0B in 00:14:41 with 0 errors on Mon Jul 19 02:14:42 2021
config:

        NAME                                         STATE     READ WRITE CKSUM
        spool                                        ONLINE       0     0     0
          nvme-eui.0000000001000000e4d25c7b31e05201  ONLINE       0     0     0
          nvme-eui.0000000001000000e4d25c80eacf5201  ONLINE       0     0     0

errors: No known data errors
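
Since there's no mirror or raidz vdev in that output, the two NVMe devices are simply striped (effectively RAID0). To confirm the layout and watch what each device is actually doing during a transfer, something like this works (pool name as above, 2-second refresh interval):
Code:
# zpool list -v spool
# zpool iostat -v spool 2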



Whenever I do any sustained reads or writes to either of the two disks, the VM becomes basically unresponsive. Every other service running in the VM locks up and does not respond until the reads/writes have finished. I also see the guest's CPU usage climb to around 50% during these I/O operations.
On top of that, read and write speeds on those disks are much lower than I would expect from two Intel NVMe SSDs in a striped (RAID0) pool. A file transfer over SFTP won't go faster than 25 MB/s in either direction, while moving the exact same file to the same ZFS pool, but directly on the Proxmox host, maxes out at gigabit line speed.
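
For anyone wanting to reproduce this: a raw disk benchmark inside the guest takes the network out of the equation entirely. A minimal fio sketch (assuming fio is installed in the VM; the test file path and sizes are arbitrary):
Code:
# fio --name=seqwrite --filename=/root/fio.test --rw=write --bs=1M --size=4G --ioengine=libaio --direct=1 --numjobs=1
# fio --name=seqread --filename=/root/fio.test --rw=read --bs=1M --size=4G --ioengine=libaio --direct=1 --numjobs=1

If those numbers are also stuck around 25 MB/s, the bottleneck is on the storage/virtualization side rather than in SFTP.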

At first I suspected the NICs, but an Ookla speed test also maxed out at gigabit, so the network itself is fine and the bottleneck has to be on the disk side. Does anyone have an idea what might be going on here?
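
The speed test only exercises the WAN path, though. To be thorough about ruling out the virtual NIC, raw guest-to-host throughput can be measured with iperf3 (assuming it's installed on both ends; <host-ip> stands in for the host's actual address): start `iperf3 -s` on the Proxmox host, then in the VM:
Code:
# iperf3 -c <host-ip> -t 10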
 
I've now managed to increase performance drastically by changing the CPU type from the default kvm64 to "host". This raised my transfer speeds from the aforementioned 25 MB/s to ~80 MB/s.
Besides the obvious throughput increase, this also resolved the unresponsiveness: during the same transfer, the VM kept handling other network requests.
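
In case it helps anyone else: the change is a single setting, which can also be done from the CLI (assuming VMID 100 as in the config above; the VM needs a full stop/start, not just a reboot from inside the guest, for the new CPU type to take effect):
Code:
# qm set 100 --cpu host
# qm shutdown 100 && qm start 100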

As this fixes the main issue, the unresponsiveness, I'm happy for now, but I'm clearly still not reaching the limit of gigabit, so if anyone knows anything else that might help, please do share!
 
