I want to like VirtIOFS, but...

I found one other interesting effect of setting cache to none: none of my Docker containers in the VM would start...

scenario:
  • cephFS data store on 3 Proxmox nodes
  • dockerVM on each node (never roams)
  • virtioFS to pass the cephFS mount point through to the dockerVM
  • bind mounts defined as volumes of type local (see the sketch after this list)
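For reference, a minimal sketch of that volume type using the Docker SDK for Python; the volume name and the /mnt/cephfs/adguard path are assumptions, so substitute wherever the virtioFS share lands inside the dockerVM:

```python
import docker  # pip install docker

client = docker.from_env()

# A "local" volume that is just a bind mount onto the virtioFS share,
# equivalent to:
#   docker volume create -d local -o type=none -o o=bind \
#       -o device=/mnt/cephfs/adguard adguard-data
vol = client.volumes.create(
    name="adguard-data",                  # hypothetical volume name
    driver="local",
    driver_opts={
        "type": "none",                   # nothing to mount, pure bind
        "o": "bind",
        "device": "/mnt/cephfs/adguard",  # assumed virtioFS mount path in the VM
    },
)
print(f"created volume {vol.name}")
```

The point is that the container's data directory ultimately sits on cephFS via virtioFS, so the cache mode of the share directly affects how the container's database files behave.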
When the virtioFS cache is set to auto (the default), everything works.

When it was set to none (with all the other virtioFS checkboxes unchecked), no container would start on the node where I made the change. I checked the mount was there. I checked I could create a text file with nano, and I could. However, things like my AdGuard container would fail, saying they couldn't open one of the database files. I tried rebooting the node in case it was an odd Ceph issue.
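One plausible explanation, assuming the failing containers keep their state in a database engine that memory-maps its files (SQLite does this, for example): without DAX, a virtiofs mount with cache=none is known to reject shared writable mmap, while ordinary buffered writes like nano's still work. A quick check along those lines, with a made-up path on the share:

```python
import mmap
import os

path = "/mnt/cephfs/mmap-test.bin"  # assumed location on the virtioFS share

# Plain buffered write: the same kind of I/O nano does, and it worked.
with open(path, "wb") as f:
    f.write(b"\0" * 4096)

# Shared writable mapping: the kind of access database engines often need.
# On a cache=none virtiofs mount without DAX this is expected to fail
# (typically OSError: [Errno 19] No such device).
try:
    with open(path, "r+b") as f:
        mm = mmap.mmap(f.fileno(), 4096, mmap.MAP_SHARED)
        mm[:4] = b"test"
        mm.close()
    print("MAP_SHARED mmap works on this mount")
except OSError as e:
    print(f"MAP_SHARED mmap failed: {e!r}")
finally:
    os.unlink(path)
```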

I reverted the setting back to auto and it all started working again.

I wonder if none must be used with directio for apps that do sync opens/writes?

(I haven't yet re-enabled none to prove it was the setting.)
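For anyone wanting to test the direct I/O angle without touching the containers, a throwaway sketch along these lines could be run inside the dockerVM. O_DIRECT needs block-aligned buffers, so an anonymous mmap serves as page-aligned scratch space; the path is again an assumption:

```python
import mmap
import os

path = "/mnt/cephfs/directio-test.bin"  # assumed location on the share

buf = mmap.mmap(-1, 4096)  # anonymous mapping -> page-aligned buffer
buf[:] = b"x" * 4096

# O_DIRECT requires the buffer address, file offset and length to be
# aligned to the logical block size; 4096 covers the common cases.
try:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    try:
        n = os.write(fd, buf)
        print(f"O_DIRECT write succeeded ({n} bytes)")
    finally:
        os.close(fd)
        os.unlink(path)
except OSError as e:
    print(f"O_DIRECT open/write failed: {e!r}")
finally:
    buf.close()
```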
 
Maybe good to know for my test.

My virtioFS shares two mergerfs pools consisting of NVMe and SATA SSDs (a tiered storage setup).
My hot pools (where writes land) consist of a writeback-enabled LVM pool (the cache virtioFS dir) and tmpfs (the tmp virtioFS dir). All the colder tiers use ZFS (mainly for metadata caching, as mergerfs has to scan all branches).

Not really sure how the caching interacts with mergerfs, for that matter, which might explain my weird results.
The server has been running smoothly with cache=none for the past few days, though.
 