Windows 10 guest stutters during SMB file copy

marcosscriven

I have a strange Windows 10 guest issue, and I don’t have any native Windows system to try it on for comparison.

When I copy a 20GB file over SMB, Windows 10 stutters - not just the mouse pointer, but animations in a running browser (for instance).

I tried copying the file locally, while also running iPerf, and it was totally fine.

I notice that while the SMB copy is running, one of the Windows cores is pegged at 100%, but all the others are fine.

I’m using the VirtIO NIC interface. The SSD has an IO thread. The VM itself has 8GB of memory and 16 cores.

This is hard to Google, as all the results are about SMB performance itself, not Windows performance during an SMB copy.
 
Quick update to this - while running the SMB copy, I see "Hardware Interrupts and DPCs" go from 0 to 8% (total) CPU, which seems to account for the CPU core being pegged.

Wondering if that's something to do with QEMU or a native Windows 10 problem.
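
For anyone wanting to watch the same counters without extra tools, something along these lines at a Windows command prompt should show the interrupt/DPC load (the counter names assume an English-language Windows install):

Code:
rem Sample total interrupt and DPC time once a second for 30 seconds
typeperf "\Processor(_Total)\% Interrupt Time" "\Processor(_Total)\% DPC Time" -si 1 -sc 30
rem Or per core, to see which vCPU is taking the hit
typeperf "\Processor(*)\% DPC Time" -si 1 -sc 30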
 
I’m using the VirtIO NIC interface. The SSD has an IO thread. The VM itself has 8GB of memory and 16 cores.
Please show the whole VM configuration file (to check for VirtIO SCSI Single and other things). And how much memory and how many threads does your Proxmox hardware have?

EDIT: Is the other Windows also a VM (and running on the same Proxmox host)? If so, what is its configuration file?
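
(In case it's useful, the whole config can be dumped on the host with something like the following; 110 is only used here because that's the VM ID in the config posted further down.)

Code:
# Print the VM's current configuration
qm config 110
# Or read the raw file directly
cat /etc/pve/qemu-server/110.conf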
 
Is the Windows 10 to SMB share copy using the same network that you use to interface with the Windows 10 guest?
While the Windows 10 to SMB copy is going on, how is the general snappiness of the PVE GUI? Do other VMs stutter or lose snappiness during this period?
 
Please show the whole VM configuration file (to check for VirtIO SCSI Single and other things). And how much memory and how many threads does your Proxmox hardware have?

EDIT: Is the other Windows also a VM (and running on the same Proxmox host)? If so, what is its configuration file?
Sorry - I ought to know better!

Here's the guest config:
Code:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;net0
cores: 16
cpu: host
efidisk0: local-lvm:vm-110-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:01:00,pcie=1,rombar=0,x-vga=1
hostpci1: 0000:15:00.0,pcie=1,rombar=0
machine: pc-q35-8.1
memory: 8192
meta: creation-qemu=8.1.5,ctime=1711703824
name: w10-test
net0: virtio=BC:24:11:F9:0E:16,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsi0: local-lvm:vm-110-disk-1,cache=writeback,discard=on,iothread=1,size=128G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=1b1756c3-3b1b-413e-8b6a-89ebec967cb5
sockets: 1
vmgenid: b2df351e-02e5-403f-96e9-77e6d9690655

The host is a Ryzen 9 7950X3D (16C, 32T) with 64GB of memory.

I've tried two SMB servers, both Linux based, and both on completely separate physical hardware. One is native Linux connected at 2.5Gbit; the other is a Proxmox LXC connected at 10Gbit. I see the same behaviour in the Windows client (in the Windows Proxmox guest) with either server.
 
Is the Windows 10 to SMB share copy using the same network that you use to interface with the Windows 10 guest?
While the Windows 10 to SMB copy is going on, how is the general snappiness of the PVE GUI? Do other VMs stutter or lose snappiness during this period?
Sorry I wasn't clear on the SMB setup.

The SMB client is in a Windows 10 VM guest, where the host has a 10gbit connection.

I've tried two different SMB servers, both on different physical hardware from the Windows guest where I'm seeing this. iperf from the Proxmox host to both SMB servers shows max performance (2.5Gbit on one, and 10Gbit on the other).
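
The bandwidth check was along these lines (the server address below is just a placeholder, with iperf3 -s already running on each SMB server):

Code:
# From the Proxmox host towards each SMB server, 30-second run
iperf3 -c <smb-server-ip> -t 30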

The Proxmox GUI is fine, as are any command line activities in the Proxmox host, while the copy is happening.

I have since seen, within the Windows guest, that "Interrupts and DPCs" are high during the copy. It's not reproducible with a local copy, or iperf3, or both together; only when running an SMB copy.
 
Looks like you separated the para-virtual threads for the drive and the network correctly. However, you are giving the VM all of your CPU cores (the sibling threads don't count as full cores). If you want VMs to be low-latency and responsive, don't give them all or most of any resource, to prevent maxing anything out. Can you test with cores: 12 and cores: 8 (so Proxmox has more hardware free to do the virtual networking and routing)?
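
Something like this should do it, assuming VM ID 110 from the config above; the new core count only takes effect after a full stop/start of the VM:

Code:
# Reduce the vCPU count so the host keeps cores free for the virtio/bridge work
qm set 110 --cores 8
# Apply it with a full stop/start (a reboot inside the guest is not enough)
qm shutdown 110 && qm start 110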
 
I just tried reducing from 16 to 8 cores - and I see the same behaviour. To be clear, this isn't some very slight latency issue - the whole Windows system becomes jerky and unresponsive at random points during the transfer.

Whether it's the hypervisor or Windows I don't know, but it certainly seems to be down to the "Hardware Interrupts and DPCs" in Windows (which would make sense in terms of the symptoms).
 
Having discovered that DPCs were involved helped a bit with Googling. I found a Windows program called LatencyMon.

It reports all green until I run the SMB transfer, and then says there are problems with the "ndis.sys" driver. At 2.5gbit it certainly shouldn't be having problems.

(Attached: two LatencyMon screenshots.)
 
I just tried the E1000 emulation. Although I still see high DPCs, it's not as high as with the VirtIO driver, and while there's a little jerkiness, it's much more responsive.
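
For anyone wanting to try the same swap, it was roughly this on the host (VM ID and MAC taken from the config earlier in the thread):

Code:
# Switch net0 from the paravirtual VirtIO NIC to the emulated Intel E1000
qm set 110 --net0 e1000=BC:24:11:F9:0E:16,bridge=vmbr0,firewall=1
# And to switch back to VirtIO later
qm set 110 --net0 virtio=BC:24:11:F9:0E:16,bridge=vmbr0,firewall=1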
 
Isn't the SSD limiting the write speed?
Try with vDisk cache set to None (PVE default).
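
A minimal sketch of that change from the host shell, reusing the scsi0 line from the config above without the cache=writeback option (no cache option means the PVE default of none):

Code:
# Rewrite scsi0 without cache=writeback so it falls back to cache=none
qm set 110 --scsi0 local-lvm:vm-110-disk-1,discard=on,iothread=1,ssd=1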
 
I just tried the E1000 emulation. Although I still see high DPCs, it's not as high as with the VirtIO driver, and while there's a little jerkiness, it's much more responsive.
Hi, can you post the output of lscpu on the host?
 
Hi, can you post the output of lscpu on the host?
Thanks for your help. Here's the output:

Code:
 lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         48 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  32
  On-line CPU(s) list:   0-31
Vendor ID:               AuthenticAMD
  BIOS Vendor ID:        Advanced Micro Devices, Inc.
  Model name:            AMD Ryzen 9 7950X3D 16-Core Processor
    BIOS Model name:     AMD Ryzen 9 7950X3D 16-Core Processor           Unknown CPU @ 4.2GHz
    BIOS CPU family:     107
    CPU family:          25
    Model:               97
    Thread(s) per core:  2
    Core(s) per socket:  16
    Socket(s):           1
    Stepping:            2
    CPU(s) scaling MHz:  15%
    CPU max MHz:         5759.0000
    CPU min MHz:         400.0000
    BogoMIPS:            8399.59
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization features:
  Virtualization:        AMD-V
Caches (sum of all):
  L1d:                   512 KiB (16 instances)
  L1i:                   512 KiB (16 instances)
  L2:                    16 MiB (16 instances)
  L3:                    128 MiB (2 instances)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-31
Vulnerabilities:
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec rstack overflow:  Mitigation; Safe RET
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected
 
