We have recently run up against an issue where large (>4 GB) copies cause lockups within a Windows guest. These can be copies within the same folder or across disks. The original issue was seen on a RAIDZ1 pool (6x Samsung 870 EVO SSDs), but it has been replicated on a ZFS RAID-1 (mirror) pool with Micron 7450 drives. The issue is not so much the speed (a drop-off is expected once the cache is exhausted), just the lock-up behaviour when throughput hits 0 bytes/s.
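For what it's worth, this is roughly how we've been watching the stall while reproducing it (a sketch only; "tank" is a placeholder for the actual pool name):

Code:
# Per-vdev throughput and latency at 1s intervals
# (-y skips the initial since-boot summary)
zpool iostat -vly tank 1
# In a second shell, watch dirty data against the write-throttle ceiling
watch -n1 'cat /sys/module/zfs/parameters/zfs_dirty_data_max; tail -n 3 /proc/spl/kstat/zfs/tank/txgs'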
The VM is running the recommended v1.271 VirtIO drivers and an up-to-date Windows Server 2022. The disks have been tested with both VirtIO SCSI and SATA emulation. VM conf below:
Code:
agent: 1
bios: ovmf
boot: order=sata0
cores: 24
cpu: x86-64-v4,flags=+aes
efidisk0: local-zfs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
machine: pc-q35-10.0+pve1
memory: 8192
meta: creation-qemu=10.0.2,ctime=1758721753
numa: 0
ostype: win11
sata0: local-zfs:vm-100-disk-1,size=200G
sata4: local-zfs:vm-100-disk-3,size=1500G
scsihw: virtio-scsi-pci
smbios1: uuid=d611609a-1d86-461e-a1d7-a56dbbb865b4
sockets: 1
tpmstate0: local-zfs:vm-100-disk-2,size=4M,version=v2.0
unused0: local-zfs:vm-100-disk-5
unused2: local-zfs:vm-100-disk-4
vmgenid: b2156e6c-0295-44ae-8162-e6cee0b8e59c
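For reference, this is roughly the next disk-attachment variant we could test on the 1.5T data disk (a sketch only, not applied; assumes VMID 100):

Code:
# Switch to the single-queue controller so iothread=1 is honoured
qm set 100 --scsihw virtio-scsi-single
# Detach the data disk from SATA (it becomes an unusedN volume)
qm set 100 --delete sata4
# Re-attach it via VirtIO SCSI with a dedicated IO thread and
# kernel-thread AIO instead of the io_uring default
qm set 100 --scsi1 local-zfs:vm-100-disk-3,iothread=1,aio=threads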
Attached is a txt file with the output of /proc/spl/kstat/zfs/arcstats and arc_summary. Happy to try tuning options if there are some good first ports of call.
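For context, these are the knobs we'd naively reach for first (the value shown is only an example, not a setting we've applied):

Code:
# Current ARC ceiling and dirty-data ceiling (0 = built-in default)
cat /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_dirty_data_max
# Example: persist a 16 GiB ARC cap across reboots (Proxmox/Debian)
echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf
update-initramfs -u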