Virtiofs reads are faster in the VM guest than on the host

proxmox-fan

Renowned Member
Oct 22, 2017
TL;DR: Using dd, I'm getting ~550 MB/s reads in a VM guest over virtiofs, but only ~150 MB/s running the same dd on the host.

Here's an odd one: I have a Windows OS template, developed in a PVE VM, that will be V2P'd onto a physical box for testing. I gracefully shut down the guest Windows OS, took a snapshot, then updated the VM config with a virtiofs passthrough and an Ubuntu 24.04 live ISO in the DVD drive.

I booted into the live ISO, installed virtiofsd and Clonezilla, and captured a full disk image to the mounted virtiofs path without any issues.

# mount -t virtiofs <passthrough-name> /mnt/virtiofs

Out of curiosity, I ran dd against the captured image file to see how well virtiofs performed with reads, and observed ~550 MB/s:

# dd if=sda3.ntfs-ptcl-img.zst.aa of=/dev/null status=progress
3874079232 bytes (3.9 GB, 3.6 GiB) copied, 7.00021 s, 553 MB/s
^C

To rule out guest-side filesystem caching in memory, I completely unmounted and remounted the virtiofs passthrough and got the same results.
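For reference, a stricter version of that reset would also drop the guest page cache explicitly (a sketch; <passthrough-name> stands in for the real virtiofs tag):

Bash:
# in-guest reset: unmount, flush and drop the page cache, remount
umount /mnt/virtiofs
sync && echo 3 > /proc/sys/vm/drop_caches
mount -t virtiofs <passthrough-name> /mnt/virtiofs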

I performed the same read on the host, and it tops out at ~150 MB/s:
# dd if=sda3.ntfs-ptcl-img.zst.aa of=/dev/null status=progress
701627392 bytes (702 MB, 669 MiB) copied, 4.53475 s, 155 MB/s
^C
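Two things worth controlling for on the host side: dd's default 512-byte block size can itself cap throughput, and an immediate repeat read of the same file will be served largely from the ZFS ARC. A sketch (the path is assumed; adjust to wherever the image sits under the rpool/enc/data mountpoint):

Bash:
# larger block size removes dd's per-syscall overhead as a variable
dd if=/rpool/enc/data/images/sda3.ntfs-ptcl-img.zst.aa of=/dev/null bs=1M status=progress
# run it again immediately: a much faster second pass means the ARC is serving it
dd if=/rpool/enc/data/images/sda3.ntfs-ptcl-img.zst.aa of=/dev/null bs=1M status=progress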

Any idea why the VM guest has faster reads?

Host: Proxmox 8.4.6
Storage: 2TB ADATA SX8200 NVMe SSD in a single-disk ZFS pool layered over a LUKS-encrypted block device
The stored Clonezilla image files live in a folder within the ZFS dataset rpool/enc/data

Bash:
# lsblk -fT -o NAME,FSTYPE,FSVER,LABEL,FSAVAIL,FSUSE%,SIZE,TYPE,MOUNTPOINTS /dev/nvme0n1
NAME            FSTYPE      FSVER    LABEL FSAVAIL FSUSE%  SIZE TYPE  MOUNTPOINTS
nvme0n1                                                    1.9T disk 
├─nvme0n1p1                                               1007K part 
├─nvme0n1p2     vfat        FAT32           510.7M     0%  512M part  /boot/efi
├─nvme0n1p3     LVM2_member LVM2 001                        31G part 
│ └─pve-root    ext4        1.0              15.9G    42%   31G lvm   /
└─nvme0n1p4     crypto_LUKS 2                              1.8T part 
  └─nvme0n1-enc zfs_member  5000     rpool                 1.8T crypt

# zpool list rpool
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.83T   759G  1.09T        -         -    15%    40%  1.00x    ONLINE  -

# zpool status -v rpool
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:10:28 with 0 errors on Sun Oct 12 00:34:29 2025
config:

        NAME                                                                STATE     READ WRITE CKSUM
        rpool                                                               ONLINE       0     0     0
          dm-uuid-CRYPT-LUKS2-b30f5771759d4ca6972a5da56f7eef01-nvme0n1-enc  ONLINE       0     0     0

errors: No known data errors

# zfs list -o space,sync,xattr,atime,relatime,refquota rpool/enc/data
NAME            AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  SYNC      XATTR  ATIME  RELATIME  REFQUOTA
rpool/enc/data  1.03T   111G        0B   98.8G             0B      12.2G  standard  sa     off    on            none
 
It sounds like the difference in read speed might be tied to how the Proxmox host and the VM access your storage. One possibility is that the VM is skipping certain layers (encryption overhead, ZFS metadata handling) because the host has already done the heavy lifting: when you read the file directly on the host, ZFS, LUKS decryption, and disk I/O all come into play, and any of them could be the bottleneck.

ZFS in particular is powerful but can be resource-intensive, especially if things like checksumming or compression are enabled. If the VM is using something like VirtIO to access the disk, it might not be hitting the same overhead because the data is effectively being passed through from the host, already processed.

Another factor could be how caching works. Proxmox might be caching data for the VM in memory, which makes reads appear faster from the VM’s perspective. On the host, direct reads bypass certain caches, showing the raw performance of the system.

To dig deeper, you could try running your test with caching disabled (e.g., dd with iflag=direct) on both the host and the VM to compare raw read speeds. Additionally, checking CPU usage during the read might reveal whether encryption or ZFS is contributing to the slowdown on the host.
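Concretely, that might look like the following (a sketch; note dd's spelling is iflag=direct, not direct=1, and OpenZFS releases before 2.3 may quietly fall back to buffered I/O when a file is opened with O_DIRECT):

Bash:
# uncached read test; run the same command on the host and in the guest
dd if=sda3.ntfs-ptcl-img.zst.aa of=/dev/null bs=1M iflag=direct status=progress

# in a second shell, watch where the CPU time goes while the read runs
# (look for dd, virtiofsd, ZFS z_rd_* kthreads, and dm-crypt's kcryptd workers)
top -d 1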
 
Since for VirtIO FS to work, the filesystem already needs to be mounted on the host, I doubt that passthrough is relevant here. My guess is caching. @OP: Are you sure the data wasn't accessed before the test in the VM?
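One way to test the caching theory on the host (assuming the standard OpenZFS tooling is installed): watch the ARC hit rates while repeating the read in the guest. If hits spike during the ~550 MB/s runs, virtiofsd is being fed from the host's ARC rather than from the disk.

Bash:
# live ARC hit/miss rates, one sample per second (ships with zfsutils-linux)
arcstat 1

# or sample the raw counters before and after a guest-side read
awk '$1 ~ /^(hits|misses)$/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats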