Code:
# pveversion
pve-manager/5.0-32/2560e073 (running kernel: 4.10.17-3-pve)
Add this line (edited for your paths) to /etc/pve/qemu-server/xxx.conf:
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/media/share/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x4
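The guest kernel needs 9p support before any of this will mount; on a minimal image it may be worth checking that up front (assuming 9p is built as modules rather than into the kernel):
Code:
# load the 9p filesystem and virtio transport modules
modprobe -a 9p 9pnet_virtio
# the device should show up as a "Virtio filesystem" PCI device (needs pciutils)
lspci | grep -i virtio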
Inside the VM, add this to /etc/fstab:
9pshare /media/share 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev 0 0
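It can help to create the mount point and test the mount by hand before relying on fstab (same tag and path as above):
Code:
mkdir -p /media/share
mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 9pshare /media/share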
You may get an error at VM startup; if so, just change the bus=pci.0,addr=0x4 address to a free slot.
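For example, if 0x4 clashes with another device, you could move it to a different slot (0x5 here is just an example):
Code:
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/media/share/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x5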
If you get low speed, add these options to the fstab entry:
msize=262144,posixacl,cache=loose
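With those added, the full fstab entry would look something like:
Code:
9pshare /media/share 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev,msize=262144,posixacl,cache=loose 0 0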
Re-bump.
Spent two days rebuilding boxes to get all this supported and mounting. Everything is *mounting* fine, but the directory appears empty on all my Alpine Linux and K3OS (also Alpine Linux based, I believe) VMs! I do the exact same thing on an Ubuntu VM and I can see the contents of the 9P share just fine.
I have only found one other mention of this anywhere, and they seemed to believe at the time it was caused by the hypervisor and guest not being the same architecture (32- vs 64-bit), but all these years later everything is obviously 64-bit, yet I am still seeing this.
Has anyone else ever run into this before? Any ideas how to fix it? I imagine converting all the VMs to Ubuntu would do the trick, but a low memory footprint is super important.
More info for what it's worth: I've got CephFS enabled, and that is the directory I am trying to 9P-share to my guests. My guests are Docker/K8s nodes, and I'd like to configure my containers that have storage to use a 'local' bind to the 9P share (see the rough example after the configs below), which comes from the CephFS underneath, which replicates to all the other nodes! Then the container is free to move around. I'm currently doing this with NFS, but I'd love to eliminate NFS from the picture. Any help or tips would be greatly appreciated!
VM config -
Code:
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/mnt/pve/cephfs/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x4
The mount configuration is the same on the working and non-working boxes -
Code:
9pshare /mnt/pve/cephfs 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev,msize=262144 0 0
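For what it's worth, the 'local' bind I'm after is basically just the container bind-mounting a directory under the guest's 9P mount point, something like this (container and image names are only placeholders):
Code:
docker run -d --name some-app \
  -v /mnt/pve/cephfs/container_volumes/some-app:/data \
  some-image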
Edit: Looks like I can get it working on FCOS (I could not, for the life of me, get it to work when I tried two days ago) -
Code:
[core@9P-test ~]$ sudo mount -t 9p -o trans=virtio,version=9p2000.L,nobootwait,rw,_netdev,msize=262144 9pshare /mnt
[core@9P-test ~]$ ls /mnt
container_volumes dump manager-test template
[core@9P-test ~]$
So I guess I'll just migrate my Swarm and K8S cluster to that (or back to that in the case of Swarm).
Edit: Thanks for reminding me about this, @luison (hope that's the right way to tag someone). After all that work, Swarm or K8S (can't remember which) wouldn't work with the 'local' mounts on multiple nodes, so in the end I just put a loopback IP (the same loopback IP) on all my Proxmox nodes and exported CephFS via NFS from that address. It's a separate interface bound to the guests, and the network stays completely local to the node, but it exists on every node, so wherever the container ends up, it can reach NFS at that address, which ends up being the local loopback (so very fast), which is the CephFS that the underlying node is taking care of replicating and keeping in sync between all the nodes. Maybe not the best or cleanest way to do it, but it works, and works well.
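Roughly, the per-node piece looks like this (the interface name, address and export options are just illustrative, not my exact values):
Code:
# on every Proxmox node: the same node-local address on a guest-facing bridge
ip addr add 10.255.255.1/24 dev vmbr9
# export the CephFS mount over NFS on that local-only network
echo '/mnt/pve/cephfs 10.255.255.0/24(rw,no_subtree_check)' >> /etc/exports
exportfs -ra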