I am experiencing performance issues with SMB file sharing from a Windows VM guest on Proxmox.
I have both a newly configured VM, as well as a VM that was migrated from ESXi.
On the ESXi machine I easily got wire speed, i.e. 112 MB/s, which is normal for a 1 Gbps interface.
I do not get near that performance on my new Proxmox server, which is ultimately more powerful.
The problem is isolated to SMB read speed (downloading from the VM to a regular PC). The tests involve large file transfers (26 files, 72 GB total).
Hardware: the VM disk I'm reading from and writing to sits on a RAIDZ pool of 4 spinning disks; the Proxmox server is a 16-core EPYC Rome with 256 GB RAM and a 10 Gbps interface.
The client machine accessing the server has a 2.5 GbE interface, so I expect performance somewhere in the 200-250 MB/s range.
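As a back-of-the-envelope check on that expectation (assuming a standard 1500-byte MTU and plain TCP without header options):

```python
# Sanity check: theoretical throughput ceiling of a 2.5 GbE link.
LINK_BPS = 2.5e9  # 2.5 Gbps line rate

# Per 1500-byte MTU frame: 1460 B of TCP payload vs 1538 B on the wire
# (1500 IP + 14 Ethernet header + 4 FCS + 8 preamble + 12 inter-frame gap).
PAYLOAD = 1460
ON_WIRE = 1538

raw_mb_s = LINK_BPS / 8 / 1e6            # line rate before any overhead
goodput_mb_s = raw_mb_s * PAYLOAD / ON_WIRE

print(f"raw line rate: {raw_mb_s:.1f} MB/s")
print(f"TCP goodput ceiling: {goodput_mb_s:.0f} MB/s")
```

That works out to roughly 297 MB/s of usable TCP goodput on the 2.5 GbE side, so 200-250 MB/s for a real file transfer is a reasonable target, not an optimistic one.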
Tests done:
- ethernet level tests:
- iperf tests: wire speed when I use more than 1 parallel thread
- OpenSpeedTest: around 2000 Mbps
- So the ethernet drivers (server and client) are OK
- local file read: speeds vary from 200-1000MB/sec.
- so the disk access is fast enough for read
- FTP server on the VM guest:
- read: 200-220MB/sec both for sequential (one file at a time) or parallel (5 streams) access
- write to the VM: slower; initially fast, then levels out at 70-100 MB/s (both sequential and parallel). Not optimal, but workable.
- SMB performance
- read: levels out at around 20MB/sec
- write: starts around wire speed, then levels out to a 70-100 MB/s average, just like FTP
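For completeness, these are roughly the commands behind the network and SMB tests above (`vm-host`, the share name, and the destination path are placeholders, and the flags are what I'd typically use, not guaranteed optimal):

```shell
# On the Windows VM: start an iperf3 server with "iperf3 -s".

# From the client: 4 parallel streams (a single stream often won't fill the pipe).
iperf3 -c vm-host -P 4 -t 30

# SMB read test from the Windows client: pull the large files with
# multithreaded robocopy (/MT:8), suppressing per-directory log noise.
robocopy \\vm-host\share D:\dest /E /MT:8 /NDL /NJH
```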
What bothers me is the slow read speed with SMB. This was much better with the Windows VM on ESXi: I'm getting 20 MB/s where I was hoping for >200 MB/s. Given the local file and FTP read performance, that's not an unrealistic expectation. The problem really is specific to SMB on Windows on Proxmox, as all the other tests are fine.
Any thoughts?
Windows guest or Proxmox host tunings I should be looking into?