I have a couple of Windows 10 VMs, and all of a sudden the disk Active time inside the guest pegs at 100% while very few disk transfers are actually happening. I have tried different virtio settings and nothing changes. A forum search turned up someone else with the same issue but no resolution. Any ideas? Let me know if you need more information.
vm config
Code:
root@pveprod:~# cat /etc/pve/qemu-server/108.conf
agent: 1
audio0: device=ich9-intel-hda,driver=spice
balloon: 2048
bios: ovmf
boot: order=ide2;virtio0
cores: 8
efidisk0: vm_images:108/vm-108-disk-0.qcow2,size=128K
hostpci0: 0000:05:11.5,pcie=1
ide2: none,media=cdrom
machine: pc-q35-6.1
memory: 16000
name: greg-idat-t41-w10
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=95f5b43e-72e0-4d4e-a079-69e06fc71661
sockets: 1
startup: order=7,up=25
vga: qxl
virtio0: vm_images:108/vm-108-disk-1.qcow2,aio=threads,backup=0,cache=none,iothread=1,size=512G
vmgenid: 1e7e55f7-dd38-4685-9d4a-0950a6d3bea8
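For reference, this is the kind of change I've been making when trying different virtio settings (a sketch only; `aio=io_uring` and `cache=writeback` are shown as one example combination, not a known fix):

```shell
# Swap the virtio disk's AIO backend and cache mode on VM 108
# (example combination only -- keep the other options from the config above)
qm set 108 --virtio0 vm_images:108/vm-108-disk-1.qcow2,aio=io_uring,cache=writeback,iothread=1,backup=0,size=512G

# A full stop/start (not a guest reboot) is needed for new disk options to take effect
qm shutdown 108 && qm start 108
```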
system info
I have a ZFS mirror pool, and it shows almost no activity while the guest's disk is pegged:
Code:
avg-cpu: %user %nice %system %iowait %steal %idle
8.59 0.00 1.51 9.45 0.00 80.46
Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
sda 2.67 0.33 0.00 0.00 32.75 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.09 9.73
sdb 4.67 0.58 0.00 0.00 36.64 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.17 18.53
sde 3.33 0.42 0.00 0.00 14.10 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 4.93
sdi 3.00 0.38 0.00 0.00 12.56 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.04 4.53
sdj 3.33 0.39 0.00 0.00 37.60 119.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 13.73
sdk 2.67 0.33 0.00 0.00 11.88 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.03 3.87
root@pveprod:~# zpool iostat zones 3
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
zones 11.4T 10.4T 32 159 4.60M 29.3M
zones 11.4T 10.4T 0 91 0 11.5M
zones 11.4T 10.4T 0 110 0 13.8M
zones 11.4T 10.4T 0 75 0 9.48M
zones 11.4T 10.4T 0 119 0 15.0M
zones 11.4T 10.4T 2 81 384K 10.1M
zones 11.4T 10.4T 0 52 0 6.57M
zones 11.4T 10.4T 0 114 0 14.3M
zones 11.4T 10.4T 0 79 0 9.89M
zones 11.4T 10.4T 0 127 0 16.0M
zones 11.4T 10.4T 0 125 0 15.7M
zones 11.4T 10.4T 0 123 42.6K 15.5M
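In case latency rather than throughput is the problem, the pool can also be sampled with the `zpool iostat` latency options (a diagnostic sketch; `-l` adds per-request wait columns and `-w` prints latency histograms):

```shell
# Per-vdev throughput plus average disk/queue wait columns, sampled every 3 s
zpool iostat -lv zones 3

# Latency histograms -- useful for spotting one slow disk dragging the mirror
zpool iostat -w zones 3
```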
Code:
root@pveprod:~# zpool status zones
pool: zones
state: ONLINE
scan: scrub repaired 0B in 1 days 05:59:48 with 0 errors on Mon Feb 14 06:23:55 2022
remove: Removal of vdev 2 copied 982G in 4h51m, completed on Tue Apr 27 09:38:46 2021
8.89M memory used for removed device mappings
config:
NAME STATE READ WRITE CKSUM
zones ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ata-ST8000VN0022-2EL112_ZA1EL30R ONLINE 0 0 0
ata-ST8000VN0022-2EL112_ZA1EJPDN ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
ata-ST8000VN004-2M2101_WSD1D5NP ONLINE 0 0 0
ata-ST8000AS0002-1NA17Z_Z840EMVD ONLINE 0 0 0
mirror-5 ONLINE 0 0 0
ata-ST8000AS0002-1NA17Z_Z840CZMZ ONLINE 0 0 0
ata-ST8000AS0002-1NA17Z_Z840EVQJ ONLINE 0 0 0
logs
mirror-8 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_mSATA_250GB_S248NX0H402948X-part3 ONLINE 0 0 0
ata-Samsung_SSD_840_EVO_250GB_mSATA_S1KPNSAF601284M-part1 ONLINE 0 0 0
cache
ata-Samsung_SSD_850_EVO_mSATA_250GB_S248NX0H402948X-part4 ONLINE 0 0 0
ata-Samsung_SSD_840_EVO_250GB_mSATA_S1KPNSAF601284M-part2 ONLINE 0 0 0
errors: No known data errors
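And in case a single failing drive is suspected, the SMART health and the usual trouble counters of the pool members can be checked (a sketch using smartmontools; the device letters are taken from the iostat output above):

```shell
# Health verdict plus reallocated/pending/CRC counters for each pool disk
for d in /dev/sd{a,b,e,i,j,k}; do
  echo "== $d =="
  smartctl -H "$d"
  smartctl -A "$d" | grep -Ei 'reallocated|pending|offline_uncorrect|udma_crc'
done
```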
pastebin link to lshw for pve