virtfs / virtio 9p plans to incorporate?

hvisage

Good day,

I continually bump my head against needing to access installation files I've copied to the host from inside the guests, and 9p virtfs would have been a good option to solve the problem...

However, it seems that it's not compiled in, at least not in pve-4.4:
kvm: -virtfs fsdriver,id=hvt,path=/MWpool/Moneyworks,security_model=none,readonly,mount_tag=hvt: 'virtio-9p-pci' is not a valid device model name
Any plans to include this in 5.0?
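(For reference, a quick way to check whether the QEMU build on a node knows the device at all - just a sketch, the exact binary/wrapper name may differ between versions:)

Code:
# list the device models the installed QEMU knows about and look for 9p
kvm -device help 2>&1 | grep -i 9p
# or call the binary directly
qemu-system-x86_64 -device help 2>&1 | grep -i 9p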
 
Sorry in advance for dragging up an old thread - any idea if this was ever brought into the Proxmox kernel?
 
Code:
# pveversion
pve-manager/5.0-32/2560e073 (running kernel: 4.10.17-3-pve)

Add this line (edited for your own paths) to /etc/pve/qemu-server/xxx.conf:

args: -fsdev local,security_model=passthrough,id=fsdev0,path=/media/share/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x4

Inside the VM, add to /etc/fstab:

9pshare /media/share 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev 0 0

You may get an error at VM startup; if so, just change the bus=pci.0,addr=0x4 address.
If the speed is low, add msize=262144,posixacl,cache=loose to the fstab options.
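Before putting it in fstab, it can help to test the mount by hand inside the guest first; roughly like this, assuming the mount_tag 9pshare from the args line above and that the guest kernel ships the 9p modules:

Code:
# load the 9p transport modules (a no-op if they are built in)
modprobe 9pnet_virtio
modprobe 9p
# one-off mount using the mount_tag from the args line
mkdir -p /media/share
mount -t 9p -o trans=virtio,version=9p2000.L 9pshare /media/share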
 
Wow! Thank you so much! You just saved me a bunch of messing around with it. I just tested this and it works like a charm. Just curious, have you tested it much? Does it handle significant volumes of data without issue?
 
Good to know - it doesn't sound like larger directories will be an issue. Thanks again for the help!
 
Would this work for Windows too, do you think?

It would save me a bit of trouble (mainly running the Samba network server) if I could just pass the host's /mnt/ceph through to the Windows guest...
 
It would be nice if that got proper support; it would also solve some of my issues and let me move my file server fully over to Proxmox.

If it supports ACLs (setfacl), that would be great. I use ACLs a lot on my file server.
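From what I understand (not verified here), with security_model=passthrough plus the posixacl mount option mentioned above, ACLs set in the guest should land on the host filesystem. The kind of thing I'd want to work, with a placeholder user and directory, is:

Code:
# mount with posixacl so ACLs are passed through (tag and path as in the example above)
mount -t 9p -o trans=virtio,version=9p2000.L,posixacl 9pshare /media/share
# set and read back an ACL ('someuser' and 'some-dir' are just placeholders)
setfacl -m u:someuser:rwX /media/share/some-dir
getfacl /media/share/some-dir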
 

Re-bump.

Spent two days rebuilding boxes to get all this supported and mounting. Everything is *mounting* fine, but the directory appears empty on all my Alpine Linux and K3OS (also Alpine Linux based I believe) VMs! I do the exact same thing on an Ubuntu VM and I can see the contents of the 9P share just fine.

I have only found one other mention of this anywhere, and they seemed to believe at that time it was caused by the hypervisor and guest not being the same architecture (32 vs 64 bit), but now all these years later, everything is obviously all 64 bit yet I am seeing this.

Has anyone else ever run into this before? Any ideas how to fix it? I imagine converting all the VMs to Ubuntu would do the trick, but low memory footprint is super important.

More info, for what it's worth: I've got CephFS enabled, and that is the directory I am trying to 9P-share to my guests. My guests are Docker/K8s nodes, and I'd like to configure my containers that have storage to use a 'local' bind to the 9P share, which is backed by the CephFS underneath and therefore replicated to all the other nodes. Then the container is free to move around. I'm currently doing this with NFS, but I'd love to eliminate NFS from the picture. Any help or tips would be greatly appreciated!
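Roughly what I mean by a 'local' bind, with made-up container/image names, is just a plain bind mount from the 9p-mounted path into the container:

Code:
# the 9p share is mounted at /mnt/pve/cephfs in the guest (see below);
# the container simply bind-mounts a subdirectory of it
docker run -d --name some-app -v /mnt/pve/cephfs/container_volumes/some-app:/data some-image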

VM config -

Code:
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/mnt/pve/cephfs/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x4

mount configuration is the same in the working and non-working boxes -

Code:
9pshare /mnt/pve/cephfs 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev,msize=262144 0 0
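For anyone hitting the same thing: my first guesses (nothing authoritative) would be that the fstab mount silently failed at boot and the empty directory is just the bare mountpoint, or that the guest kernel is missing the 9p modules. Quick checks inside the guest:

Code:
# is the share actually mounted, or is this just the empty mountpoint?
mount | grep 9p
# does this kernel have the 9p modules at all?
modprobe 9pnet_virtio && modprobe 9p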

Edit: Looks like I can get it working on FCOS (I could not, for the life of me, when I was trying two days ago) -

Code:
[core@9P-test ~]$ sudo mount -t 9p -o trans=virtio,version=9p2000.L,nobootwait,rw,_netdev,msize=262144 9pshare /mnt
[core@9P-test ~]$ ls /mnt
container_volumes  dump  manager-test  template
[core@9P-test ~]$

So I guess I'll just migrate my Swarm and K8S cluster to that (or back to that in the case of Swarm).

Edit: Thanks for reminding me about this, luison (hope that's the right way to tag someone). After all that work, Swarm or K8s (can't remember which) wouldn't work with the 'local' mounts on multiple nodes, so in the end I just put a loopback IP (the same loopback IP) on all my Proxmox nodes and exported CephFS via NFS from that address. It's a separate interface bridged to the guests, and the traffic stays completely local to the node, but it exists on every node, so wherever the container ends up it can reach NFS at that address. That ends up being the local loopback (so very fast), backed by the CephFS that the underlying node is taking care of replicating and keeping in sync between all the nodes. Maybe not the best or cleanest way to do it, but it works, and works well.
 
Sorry for the 13-month-later reply, Hyacin, but I found this last bit of information interesting, and I don't immediately see how you share the loopback IP into many containers.
Obviously I can create a 127.x.x.x/32 IP on 'lo', or I might prefer to create a 'dummy' interface, as I already do to hold VPN connections,
but how does that single IP become available in the containers?

P.S. I am very used to Linux native containers and KVM VMs, but a bit rusty on LXC/LXD, so maybe that is the obvious bit I am not seeing.

Thanks in advance for any insight!
:)
 
Hey sorry for the delay!

Let me see ... I create a loopback bridge with no physical interface on each Proxmox box -

/etc/network/interfaces
Code:
auto vmbr1
iface vmbr1 inet manual
        address 10.50.250.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
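A sketch of bringing the new bridge up without a reboot (ifreload comes from ifupdown2, which newer PVE installs use by default; otherwise ifup vmbr1 or a reboot does it):

Code:
ifreload -a          # or: ifup vmbr1
ip addr show vmbr1   # should show the 10.50.250.1/24 address from the stanza above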

Then on the VM I create a network interface bound to that bridge -

(Screenshot: the VM's network device attached to vmbr1 in the Proxmox GUI.)

Then on the VM I give that interface an address in the same subnet -

/etc/netplan/00-installer-config.yaml in this case
Code:
network:
  ethernets:
<snip>
    ens20:
      addresses:
      - 10.50.250.254/24
      nameservers: {}
  version: 2
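Then apply it and sanity-check from the guest (nothing Proxmox-specific, just the obvious steps):

Code:
sudo netplan apply
ping -c 3 10.50.250.1   # the bridge address on the Proxmox host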

Then I export via NFS on the Proxmox box -

/etc/exports
Code:
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/mnt/pve/cephfs 10.50.250.0/24(rw,sync,no_wdelay,crossmnt,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
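This of course assumes the NFS server bits are installed on the Proxmox node; roughly (standard Debian commands):

Code:
apt install nfs-kernel-server   # if not already installed
exportfs -ra                    # re-read /etc/exports
exportfs -v                     # confirm the export is active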

And mount on the client! (I even left the old commented-out 9p line in there, as that is what this thread is about.) :)

/etc/fstab
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/0b548bd8-<snip>-80de7958c82f / ext4 defaults 0 0
/swapfile       none    swap    sw              0       0
<snip>
10.50.250.1:/mnt/pve/cephfs             /mnt/nfs/cephfs                 nfs     defaults        0       0
<snip>

# Breaks HA
#9pshare /mnt/pve/cephfs 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev 0 0
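To pick that up and verify on the client, roughly:

Code:
sudo mount -a
df -h /mnt/nfs/cephfs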
 
Sorry to dig up an old thread, but what is the status of this in Proxmox 7? I have a couple of virtual disks mounted over CIFS at the moment and I'm not too impressed with the performance, so I'm wondering if this is the way to go.
 
It works (using args: ... in the VM configuration file, with all the necessary QEMU parameters added manually), but performance is limited and user/permission management is minimal.
 
