virtfs / virtio 9p plans to incorporate?

hvisage

Active Member
May 21, 2013
Good day,

I keep running into cases where I need to access, from inside the guests, installation files I've copied to the hosts, and 9p virtfs would have been a good option to solve the problem...

However, it seems that it's not compiled in, at least not in pve-4.4:
kvm: -virtfs fsdriver,id=hvt,path=/MWpool/Moneyworks,security_model=none,readonly,mount_tag=hvt: 'virtio-9p-pci' is not a valid device model name
Any plans to include this in 5.0?
 

Nitsua

New Member
Sep 18, 2017
Sorry in advance for dragging up an old thread - any idea if this was ever brought into the Proxmox kernel?
 

Nemesiz

Well-Known Member
Jan 16, 2009
Lithuania
Code:
# pveversion
pve-manager/5.0-32/2560e073 (running kernel: 4.10.17-3-pve)

Add this line (edit the path and tag for your setup) to /etc/pve/qemu-server/xxx.conf:

args: -fsdev local,security_model=passthrough,id=fsdev0,path=/media/share/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x4

Inside the VM, add to /etc/fstab:

9pshare /media/share 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev 0 0

You may get an error at VM startup; if so, change the bus=pci.0,addr=0x4 address.
If throughput is low, add msize=262144,posixacl,cache=loose to the fstab options.
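A quick way to sanity-check the share before committing it to fstab is to mount it by hand inside the guest. This is only a sketch; it assumes the same mount_tag (9pshare) and mount point (/media/share) as the example above:

```shell
# Inside the guest: load the virtio 9p transport (modprobe pulls in
# its 9p/9pnet dependencies automatically).
modprobe 9pnet_virtio

# Create the mount point and mount the tag exported by the host.
mkdir -p /media/share
mount -t 9p -o trans=virtio,version=9p2000.L 9pshare /media/share

# If this lists the host directory's contents, the fstab entry should work too.
ls /media/share
```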
 

Nitsua

New Member
Sep 18, 2017
Wow! Thank you so much! You just saved me a bunch of messing around. I just tested this and it works like a charm. Just curious, have you tested it much? Does it handle significant volumes of data without issue?
 

Nemesiz

Well-Known Member
Jan 16, 2009
Lithuania
I've only been running it for a week, sharing a 2 TB partition with files of various sizes. No problems so far.
 

Nitsua

New Member
Sep 18, 2017
Good to know; it doesn't sound like larger directories will be an issue. Thanks again for the help!
 

AlexLup

Member
Mar 19, 2018
Do you think this would also work for Windows guests?

It would save me a bit of trouble (mainly running a Samba server) if I could just pass the host's /mnt/ceph through to the Windows guest...
 

Jonny

New Member
Oct 22, 2019
It would be nice if this got proper support; it would also solve some of my issues and let me move my file server fully to Proxmox.

If it supports ACLs (setfacl), that would be great. I use ACLs a lot on my file server.
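For what it's worth, the 9p client can carry POSIX ACLs when the share is mounted with the posixacl option (the same option mentioned earlier in this thread). A rough way to verify it inside the guest; the user name backup is made up for illustration:

```shell
# Mount the share with ACL support enabled (tag/path as in the earlier example).
mount -t 9p -o trans=virtio,version=9p2000.L,posixacl 9pshare /media/share

# Set an ACL on a test file, then read it back.
touch /media/share/acltest
setfacl -m u:backup:r-- /media/share/acltest   # grant hypothetical user 'backup' read access
getfacl /media/share/acltest                   # the entry should survive the 9p round trip
```

Note that the host-side security_model also matters: passthrough stores ownership and ACLs directly on the host filesystem, so the backing filesystem has to support them too.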
 

Hyacin

New Member
May 6, 2020

Re-bump.

Spent two days rebuilding boxes to get all this supported and mounting. Everything is *mounting* fine, but the directory appears empty on all my Alpine Linux and K3OS (also Alpine Linux based I believe) VMs! I do the exact same thing on an Ubuntu VM and I can see the contents of the 9P share just fine.

I have only found one other mention of this anywhere, and they seemed to believe at the time it was caused by the hypervisor and guest not being the same architecture (32- vs 64-bit), but all these years later everything is obviously 64-bit and I am still seeing it.

Has anyone else ever run into this before? Any ideas how to fix it? I imagine converting all the VMs to Ubuntu would do the trick, but low memory footprint is super important.

More info, for what it's worth: I've got CephFS enabled, and that is the directory I am trying to share over 9p to my guests. My guests are Docker/k8s nodes; I'd like to configure my containers that have storage to use a 'local' bind to the 9p share, which comes from the CephFS below it, which replicates to all the other nodes! Then the container is free to move around. I'm currently doing this with NFS, but I'd love to eliminate NFS from the picture. Any help or tips would be greatly appreciated!

VM config -

Code:
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/mnt/pve/cephfs/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x4

mount configuration is the same in the working and non-working boxes -

Code:
9pshare /mnt/pve/cephfs 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev,msize=262144 0 0

Edit: Looks like I can get it working on FCOS (I could not, for the life of me, when I was trying two days ago) -

Code:
[core@9P-test ~]$ sudo mount -t 9p -o trans=virtio,version=9p2000.L,nobootwait,rw,_netdev,msize=262144 9pshare /mnt
[core@9P-test ~]$ ls /mnt
container_volumes  dump  manager-test  template
[core@9P-test ~]$

So I guess I'll just migrate my Swarm and K8S cluster to that (or back to that in the case of Swarm).

Edit: Thanks for reminding me about this, luison (hope that's the right way to tag someone). After all that work, Swarm or K8s (can't remember which) wouldn't work with the 'local' mounts on multiple nodes, so in the end I just put the same loopback IP on all my Proxmox nodes and exported CephFS via NFS from that address. It's a separate interface bound to the guests, and the traffic stays completely local to the node, but because it exists on every node, wherever the container ends up it can reach NFS at that address, which resolves to the local loopback (so very fast), backed by the CephFS that the underlying nodes keep replicated and in sync. Maybe not the best or cleanest way to do it, but it works, and works well.
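That arrangement can be sketched roughly like this. The dummy interface name (nfs0) and the 10.255.255.1 address are illustrative, not from the post; run the same steps on every Proxmox node:

```shell
# Put the same node-local address on a dummy interface on every node.
ip link add nfs0 type dummy
ip addr add 10.255.255.1/32 dev nfs0
ip link set nfs0 up

# /etc/exports on every node - export the locally mounted CephFS:
#   /mnt/pve/cephfs 10.255.255.1/32(rw,no_root_squash,sync)
exportfs -ra

# Guests then mount from the address that is always node-local,
# e.g. in the guest's /etc/fstab:
#   10.255.255.1:/mnt/pve/cephfs /mnt/cephfs nfs defaults,_netdev 0 0
```

Whichever node a container lands on, the NFS traffic never leaves that node; Ceph handles replication underneath.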
 
