Feature Request: Add 9p_virtio Filesystem Passthrough to share Host files with Guest.

Jules-

Renowned Member
Jun 7, 2016
6
1
68
43
The 9p_virtio approach (http://www.linux-kvm.org/page/9p_virtio) is a very useful one, especially in clustered environments where you need a shared webroot (read-only) across several KVM VMs, e.g. for sharing config files.

That shouldn't be too hard to implement, since it's already implemented in QEMU, isn't it?

If you cannot or won't add this feature, I would appreciate any hints on how to create a workaround myself.

Thanks in advance!
 
Please post feature requests on our bugzilla (https://bugzilla.proxmox.com/).
In the meantime you can add the QEMU parameters via `args:` in the VM's configuration; here's an example:
Code:
args: -fsdev local,id=My9pShare,path=/shared/data,security_model=none,readonly -device virtio-9p-pci,fsdev=My9pShare,mount_tag=my9p,bus=pci.0,addr=0x1a
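For reference, inside a Linux guest (assuming the 9p and 9pnet_virtio modules are available; the mount point is just an example) the share can then be mounted via its mount_tag:
Code:
mount -t 9p -o trans=virtio,version=9p2000.L,ro my9p /mnt/9p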
 

Hi Wolfgang,

thanks a lot, that did the trick.
After benchmarking the read speed of the 9p share I found it's not fast enough for my use case:
streaming a 1 GB file from SSD gives me a read speed of ~174 MB/s, so I'm going to try the other way, mounting
the partition read-only with virtio, like: virtio1: /dev/mapper/pve-webshare,size=17824M, which gives me ~550 MB/s throughput, nearly equal to the host's disk read performance.
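For anyone wanting to reproduce such numbers, a rough read benchmark could look like this (a minimal sketch; /mnt/webshare/testfile is a placeholder for a ~1 GB test file, and dropping the page cache requires root):
Code:
# drop cached pages first so we measure the device, not RAM
echo 3 > /proc/sys/vm/drop_caches
# dd prints the achieved throughput when it finishes
dd if=/mnt/webshare/testfile of=/dev/null bs=1M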

The only caveat seems to be that if I write new data from the host mount, it doesn't show up in the guest instantly,
only after I unmount and remount the read-only share on the guest side. Is this because fsync happens on the host machine but not on the guest system?
Is there a workaround for that?
 
You mean you pass the same device you use on the host through and mount it read-only inside the guest? This is extremely dangerous and can lead to data corruption and loss.
Even a read-only mount can perform writes to the disk (e.g. with ext4 a read-only mount still replays the journal). And nobody's forcing the guest OS to play nice.
From the mount(8) manpage:
Code:
  -r, --read-only
         Mount the filesystem read-only. A synonym is -o ro.

         Note that, depending on the filesystem type, state and kernel
         behavior, ***the system may still write to the device***. For
         example, ext3 and ext4 will replay the journal if the filesystem
         is dirty.

If you want shared mounts, use a network or cluster file system.
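A minimal sketch of the network filesystem route via NFS, assuming the host exports the directory (the subnet, paths, and options are placeholders):
Code:
# on the host: /etc/exports
/shared/data 192.168.1.0/24(ro,no_subtree_check)
# then reload the exports: exportfs -ra

# in the guest
mount -t nfs <host-ip>:/shared/data /mnt/webroot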
 

I'm using ext2, which is not a journaling filesystem, so it's safe in that regard.
I do have a GlusterFS filesystem which I use for all the dynamic content like uploads, but I want to serve the core components of my web apps from fast local storage distributed with the virtual machines within the web cluster.
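Since GlusterFS came up: mounting such a volume inside a guest typically looks like this (server and volume names are placeholders; requires the GlusterFS FUSE client):
Code:
mount -t glusterfs gluster1:/webdata /mnt/uploads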
 
Maybe containers with bind mounts are better suited for this purpose. It doesn't matter what filesystem you're using: it is very possible that while the guest loads metadata from the device, the host writes some other metadata, causing the guest to become inconsistent and throw all kinds of unwanted errors; or the guest wants to load some file A and instead gets a mix of files B and C, causing your web service to fail miserably or even expose private data over the network. Such a setup is simply wrong.
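For illustration, a read-only bind mount for an LXC container could be configured like this (VMID 101 and the paths are made-up examples; a sketch, not a tested recipe):
Code:
# /etc/pve/lxc/101.conf
mp0: /shared/data,mp=/var/www/shared,ro=1
# or from the CLI:
pct set 101 -mp0 /shared/data,mp=/var/www/shared,ro=1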
 

I agree with you that it's suboptimal, but it's the only way to get the highest performance out of this.
In my use case it might work because of the deployment chain I use: symlinking current to a timestamped release directory. I deploy code to the host machine's (r/w) mount into release directories named by timestamp, and once deployment has finished I flush the buffers (blockdev --flushbufs /dev/vdb) on the client (read-only share) side; I just found out about that command. So the file A/B mix-up should never happen, because nothing in the existing directory structure or files gets changed during a deployment; new release directories only get added.
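A sketch of that deployment chain (the device name /dev/vdb is from this thread; the paths are made-up examples, not a tested recipe):
Code:
# on the host, into the r/w mount
RELEASE=/srv/webshare/releases/$(date +%Y%m%d%H%M%S)
cp -a build/ "$RELEASE"
ln -sfn "$RELEASE" /srv/webshare/current   # flip the 'current' symlink
sync

# afterwards, in the guest with the read-only share
blockdev --flushbufs /dev/vdb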

The container with a bind mount also sounds like an approach worth trying.

The other one would be to find out why 9p_virtio is so slow.
 
