I want to like VirtIOFS, but...

I am not informed enough to recommend this, and I wouldn't use it on data I'm not ready to lose. I am still using cache=auto, which I set up half a year ago with hook scripts. Compared to always, auto loses about 21% performance with OP's fio command.
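
In case it helps anyone, here is a rough sketch of the kind of setup I mean - the socket name, share directory and memory size are placeholders (not my actual config), and it assumes the Rust virtiofsd that ships with current Proxmox/Debian:

  # start virtiofsd for the VM before it boots (e.g. from a pre-start hookscript)
  /usr/libexec/virtiofsd \
      --socket-path=/run/virtiofsd/vm100-share.sock \
      --shared-dir=/tank/share \
      --cache=auto                  # or --cache=always

  # then wire it into the VM with extra QEMU args, e.g. in /etc/pve/qemu-server/100.conf:
  # args: -chardev socket,id=virtfs0,path=/run/virtiofsd/vm100-share.sock
  #       -device vhost-user-fs-pci,chardev=virtfs0,tag=share
  #       -object memory-backend-memfd,id=mem,size=4G,share=on -numa node,memdev=mem
  #       (the memory backend size has to match the VM's RAM)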

If NFS has no issues for you, then I don't see any reason to migrate, as you won't get better performance with virtiofs.
Thanks for this. I was just about to post something similar.

I've never messed with the default (no cache) for my VirtIO SCSI disks, so I'm not really clear on what switching to cache=always for VirtIO FS actually means. What is that actually doing at the filesystem level? Does it actually change whether ZFS is working in async/sync mode?
 
Does it actually change whether ZFS is working in async/sync mode?
I did a bit more research; it seems the cache policy mode is about metadata and pathname lookups, not data. As such I assume it only affects reads of metadata and paths. I'm not sure how that would relate to sync/async, which I thought was about writes, but I am still new to ZFS on my TrueNAS server and just have CephRBD and LVM on my Proxmox.

  • cache=always: Metadata, data, and pathname lookup are cached in the guest and never expire.
  • cache=auto: Metadata and pathname lookup cache expires after a configured amount of time (default is 1 second).
  • cache=none: Forbids the FUSE client from caching to achieve the best coherency at the cost of performance.
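
On the sync/async part of the question: as far as I understand, these cache modes are a virtiofsd/FUSE setting on the guest side and don't touch ZFS's own sync behaviour on the host - that is still governed by the dataset's sync property, which you can check and set separately (dataset name below is just a placeholder):

  # on the host (or TrueNAS box) backing the share
  zfs get sync tank/share            # standard | always | disabled
  zfs set sync=standard tank/share   # default: honour fsync/O_SYNC, everything else async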

Looking at https://github.com/kata-containers/runtime/issues/2748, it seems to depend on what is writing to the file system (host vs guest) - it looks like if the host changes the metadata or paths then virtioFS gets 'funky'. So always is only safe where it is guaranteed that only the guest will write to the virtioFS share, and that is the only way the paths and metadata on the storage backing it can change.

So for my CephFS-backed virtioFS it would seem 'always' could be an issue in case another node changes the metadata or paths in the CephFS. The only scenario where I could see this being a problem is the following, as I interpret it based on the github link above:

  1. swarm container FOO is running on vm-docker01 on pve1 - everything works fine with always
  2. swarm container FOO moves from vm-docker01 on pve1 to vm-docker02 on pve2 - it makes changes to the metadata or paths there; things are still fine because the virtioFS instance on pve1 has never seen those entries
  3. swarm container FOO moves back from vm-docker02 on pve2 to vm-docker01 on pve1 - now, because the virtioFS instance on pve1 hasn't seen the guest write those metadata/path changes, it will serve the VM the wrong (stale) metadata from its never-expiring cache
I don't know how the processes in the container would respond at that point... I won't bother testing; I'll just leave it on auto based on this!
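
If anyone does want to see the effect, something like this hypothetical sketch should show it (paths and tag are placeholders; it assumes the share is exported with cache=always and the host mounts the same CephFS at /mnt/pve/cephfs/share):

  # in the guest, mount the share and touch some metadata
  mount -t virtiofs share /mnt/share
  stat /mnt/share/somefile              # guest caches this dentry/inode metadata

  # on the host (or another CephFS client), change it behind virtiofsd's back
  mv /mnt/pve/cephfs/share/somefile /mnt/pve/cephfs/share/renamed

  # back in the guest: with cache=always the old entry can keep resolving from the
  # stale cache; with cache=auto it expires after ~1s and the guest sees the change
  stat /mnt/share/somefile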
 
I think one major benefit of virtiofs over NFS is that you don't have to worry about doing your writes sync (and using a SLOG for security)?
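
If you want to compare that yourself, a hypothetical fio invocation like the one below (not the command OP used) exercises sync writes, which is exactly where an NFS export without a SLOG tends to hurt - run it in the guest once against the NFS mount and once against the virtiofs mount:

  # sync-write test: every write is followed by fsync; point --directory at the mount under test
  fio --name=syncwrite --directory=/mnt/share --rw=randwrite \
      --bs=4k --size=1G --numjobs=1 --iodepth=1 --fsync=1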