Hi,
I've set up a Debian VM with the following config:
Code:
agent: 1
balloon: 2048
boot: order=scsi0;net0
cores: 8
memory: 16384
name: Docker
net0: virtio=16:D0:54:56:5D:02,bridge=vmbr0
net1: virtio=02:75:60:E0:2E:17,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
scsi0: Spool:vm-100-disk-0,cache=writeback,discard=on,format=raw,size=32G
scsi1: Spool:vm-100-disk-1,cache=writeback,discard=on,format=raw,size=164G
scsihw: virtio-scsi-pci
smbios1: uuid=50e5a719-3deb-4425-b23a-85c3d63a7a97
sockets: 1
startup: order=2
vmgenid: 19d2ce34-393c-4d2b-bee9-6c43330ffc02
with spool laid out as follows:
Code:
# zpool status spool
  pool: spool
 state: ONLINE
  scan: scrub repaired 0B in 00:14:41 with 0 errors on Mon Jul 19 02:14:42 2021
config:

        NAME                                         STATE     READ WRITE CKSUM
        spool                                        ONLINE       0     0     0
          nvme-eui.0000000001000000e4d25c7b31e05201  ONLINE       0     0     0
          nvme-eui.0000000001000000e4d25c80eacf5201  ONLINE       0     0     0

errors: No known data errors
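For completeness, I can also post the pool and zvol properties if they're relevant; this is how I'd pull them (the zvol name below is just my guess at what Proxmox created for disk 1, so treat it as a placeholder):
Code:
# pool/dataset properties that tend to matter for VM disk performance
zpool get ashift spool
zfs get recordsize,compression,sync spool
zfs get volblocksize,sync spool/vm-100-disk-1   # zvol name guessed from the VM config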
Whenever I do sustained reads or writes to either of the two virtual disks, the VM becomes basically unresponsive. Every other service running on the VM locks up and does not respond until the reads/writes are finished. I also see the guest's CPU usage climb to around 50% during these I/O operations.
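For what it's worth, this is roughly what I run inside the guest during one of these transfers to see whether that 50% CPU is real work or just I/O wait (plain sysstat/procps tools, nothing exotic):
Code:
# run inside the VM while a sustained transfer is going on
iostat -x 1    # per-device utilisation, queue depth and await (needs the sysstat package)
vmstat 1       # the "wa" column shows CPU time stuck waiting on I/O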
On top of that, read and write speeds on those disks are much lower than I would expect, given that they are two Intel NVMe SSDs striped in RAID 0. If I transfer a file over SFTP to the VM, it won't go faster than 25 MB/s in either direction, whereas if I move the exact same file to the same ZFS pool but directly to the Proxmox host, I max out at gigabit line speed.
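To take SFTP and the network out of the picture entirely, I can also run the same sequential test inside the VM and directly on the host and compare the numbers (a rough sketch; the target path and size are just examples, point it at storage backed by spool):
Code:
# sequential write of a 4G file, flushed at the end so the page cache doesn't hide the real speed
fio --name=seqwrite --filename=/mnt/test/fio.tmp --rw=write --bs=1M --size=4G \
    --ioengine=libaio --numjobs=1 --end_fsync=1 --group_reporting
# sequential read of the same file (drop caches first, or the read comes straight from RAM)
echo 3 > /proc/sys/vm/drop_caches
fio --name=seqread --filename=/mnt/test/fio.tmp --rw=read --bs=1M --size=4G \
    --ioengine=libaio --numjobs=1 --group_reporting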
At first I suspected the NICs, but an Ookla speed test also maxes out at gigabit, so it has to be the disks. Does anyone have an idea what might be going on here?
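One more data point I can collect if it helps: iperf3 between the host and the guest, to confirm the virtio network path itself isn't capping things (the IP below is a placeholder for the host's address):
Code:
# on the Proxmox host
iperf3 -s
# inside the VM, pointing at the host's IP
iperf3 -c 192.168.1.10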