Good Morning/Evening,
The reason for this post is that I need my Windows 11 VMs to run Hyper-V so that WSL2 (and therefore Docker Desktop) works inside them. I found out that WSL2 uses the 9P protocol to communicate outside of the VM it creates, and that this is considered very slow, but I want to break down why it is slow and what can be done about it. I want Windows 11 with WSL2 to work properly on Proxmox machines, and that is why I am putting in the effort to do a deep dive into the file I/O path. It seems the only layers where this can realistically be made faster are the VirtIO drivers and the Proxmox kernel, as the other layers are outside of Proxmox's and Red Hat's control.
Below you will see all of the layers a single file operation has to pass through in order to be read from or written to the disk and back. As a gist, from the top of the stack down: WSL2 Linux filesystem > 9P protocol > Windows kernel > Windows NTFS > VirtIO drivers > Proxmox KVM > Proxmox EXT4 > RAID controller driver > hardware RAID controller > physical disks. Technically 10 layers, unless I'm wrong here.
1. WSL2 Linux environment (Ubuntu)
2. The 9P protocol
3. The Windows host OS (Windows 11)
4. Windows NTFS filesystem
5. VirtIO storage driver
6. KVM hypervisor (Proxmox OS)
7. Proxmox EXT4 filesystem
8. The RAID controller driver
9. Hardware RAID controller
10. Physical disks
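A quick way to feel the cost of the top layer is to time many small file operations on a 9P-mounted path versus the native ext4 filesystem inside WSL2. This is a minimal sketch, not a rigorous benchmark; the file count, file size, and the /mnt/c path mentioned in the comment are assumptions based on the default WSL2 mount:

```python
import os
import tempfile
import time

def small_file_benchmark(directory, count=200):
    """Create, write, read, and delete `count` tiny files; return elapsed seconds."""
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, f"bench_{i}.txt")
        with open(path, "w") as f:
            f.write("x" * 64)
        with open(path) as f:
            f.read()
        os.remove(path)
    return time.perf_counter() - start

# To compare the two worlds inside WSL2, run it against both kinds of path, e.g.:
#   small_file_benchmark(os.path.expanduser("~/bench"))   # native ext4, no 9P
#   small_file_benchmark("/mnt/c/Temp/bench")             # crosses the 9P boundary
elapsed = small_file_benchmark(tempfile.mkdtemp())
print(f"{elapsed:.3f} seconds for 200 small-file round trips")
```

Workloads like `npm install` or `git status` behave much like this loop, which is why they are the operations that hurt the most on /mnt/c.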
1. WSL2 Linux environment (Ubuntu)
- Layer: Ubuntu's EXT4 filesystem.
- Function: The write() system call is issued from a Linux application (e.g., npm, git, or a C program).
- Overhead: The request is processed by the Linux kernel running inside the WSL2 VM. If the target file is in the /mnt/c/ path, the kernel recognizes that this is not a local file and forwards the request to the next layer.
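From the application's point of view the call is identical in both cases; only the path determines whether the Linux kernel keeps the request on local ext4 or hands it to the 9P client. A minimal sketch (the /mnt/c path in the comment assumes the default WSL2 drive mount):

```python
import os

def write_file(path, data):
    """Issue the same open/write/fsync syscalls regardless of the backing filesystem."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # push the request all the way down the stack
    finally:
        os.close(fd)

# Inside the WSL2 VM this stays on the local ext4 filesystem:
write_file("/tmp/local.txt", b"hello")
# The identical call on a Windows path would instead be forwarded to the 9P client:
# write_file("/mnt/c/Temp/remote.txt", b"hello")
```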
2. The 9P protocol
- Layer: The 9P server running inside the Windows host and the 9P client running in the WSL2 VM.
- Function: The Linux kernel's file I/O request is converted into a 9P network request. This communication happens over a virtual network connection between the WSL2 VM and the Windows host.
- Overhead: This is the primary source of the "9P is slow" problem. The request must be serialized, sent over a virtual network, and deserialized on the other side. This process adds significant latency, especially for many small, independent operations.
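You can see the 9P client at work from inside WSL2: the Windows drives appear in the mount table with filesystem type `9p`. A small sketch that parses /proc/mounts (present on any Linux system) to spot them:

```python
import os

def list_mounts(path="/proc/mounts"):
    """Return (mount_point, filesystem_type) pairs parsed from a mounts file."""
    mounts = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3:
                mounts.append((fields[1], fields[2]))
    return mounts

if os.path.exists("/proc/mounts"):
    for mount_point, fs_type in list_mounts():
        if fs_type == "9p":
            print(f"{mount_point} is served over 9P")
```

On a stock WSL2 install this typically flags /mnt/c (and any other mounted Windows drives), confirming that every access under those paths crosses the virtual-network boundary described above.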
3. The Windows host OS (Windows 11)
- Layer: Windows kernel.
- Function: The 9P server running in Windows translates the 9P request into a standard Windows file I/O request.
- Overhead: Windows must now process a file write request, complete with its own complex file system caching, locking, and security policies (like Windows Defender).
4. Windows NTFS filesystem
- Layer: NTFS driver and filesystem.
- Function: The Windows kernel directs the file write to the NTFS partition.
- Overhead: NTFS has its own journaling, metadata updates, and caching mechanisms that add processing time before the request can be sent to the disk.
5. VirtIO storage driver
- Layer: VirtIO driver within the Windows 11 VM.
- Function: This driver, which you installed for best performance, translates the Windows file I/O request into a format understood by the KVM hypervisor on the Proxmox host.
- Overhead: While highly optimized for virtualized environments, this is still a software-based translation layer between the guest OS (Windows) and the host hypervisor (Proxmox).
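For reference, this is roughly what the storage-related part of a Proxmox guest config (/etc/pve/qemu-server/&lt;vmid&gt;.conf) looks like when VirtIO SCSI is in use. The VM ID, storage name, and disk sizes are placeholders; `cpu: host` is included because nested virtualization (which Hyper-V/WSL2 inside the guest depends on) generally requires passing the host CPU type through:

```
cpu: host
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,cache=none,discard=on,iothread=1
```

The `iothread=1` and `cache=none` options are common tuning choices for this layer, not requirements; whether they help depends on the workload and the storage underneath.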
6. KVM hypervisor (Proxmox OS)
- Layer: Proxmox's KVM hypervisor and Linux kernel.
- Function: KVM intercepts the VirtIO-formatted request and performs the necessary resource allocation and I/O scheduling on the host.
- Overhead: The hypervisor's job is to manage all virtual machines and allocate system resources, which adds its own layer of complexity and potential delay.
7. Proxmox EXT4 filesystem
- Layer: The EXT4 filesystem on the Proxmox host.
- Function: The hypervisor directs the I/O request to the local EXT4 partition that contains the virtual disk (a raw or qcow2 image, or similar) for the Windows VM.
- Overhead: Proxmox's own disk caching and journaling on the EXT4 filesystem must process the request.
8. The RAID controller driver
- Layer: Driver for your RAID controller.
- Function: The host OS directs the request to the correct driver for the physical RAID card.
- Overhead: The driver performs its own resource management, caching (if configured), and error checking before sending the request to the hardware.
9. Hardware RAID controller
- Layer: The physical RAID controller hardware.
- Function: The controller processes the request and writes the data to the correct physical disk or disks, handling striping, parity, or mirroring as defined by your RAID configuration.
- Overhead: The controller's own firmware and cache (DRAM) add a final layer of processing before the data is actually written to a disk.
10. Physical disks
- Layer: The individual physical disks.
- Function: The data is finally committed to the magnetic platters or flash memory.
- Overhead: At this point, the speed is limited by the physical disk's seek time and write speed.
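Every layer above contributes latency to a single synchronous write, so timing fsync round trips gives a rough end-to-end number for the whole stack without needing to instrument each layer. A minimal sketch; the round count and write size are arbitrary:

```python
import os
import statistics
import tempfile
import time

def fsync_latency(path, rounds=50):
    """Return the median per-fsync latency in milliseconds for a file at `path`."""
    samples = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        for _ in range(rounds):
            os.write(fd, b"x" * 4096)
            start = time.perf_counter()
            os.fsync(fd)  # blocks until the write has been pushed down the stack
            samples.append((time.perf_counter() - start) * 1000)
    finally:
        os.close(fd)
    return statistics.median(samples)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    print(f"median fsync latency: {fsync_latency(tmp.name):.2f} ms")
```

Running this on the Proxmox host, inside the Windows guest's WSL2 ext4 home, and on a /mnt/c path would show roughly how much each boundary adds, though caching at the various layers means the numbers are indicative rather than exact.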
The performance bottleneck you're experiencing is a combination of many layers, but the most significant point of failure is the 9P protocol when accessing Windows files from WSL2. The other layers (Proxmox, KVM, VirtIO) are designed to be high-performance, but the guest-to-host file access bridge (9P) is inherently inefficient for the common I/O patterns of Linux development. As a result, operations that require thousands of small file interactions are dramatically slower than if you were working directly on the native EXT4 filesystem inside the WSL2 virtual hard disk.