Making mp's migratable (virtiofs on PVE-managed paths only)

flash7777

New Member
Apr 16, 2025
Dear Proxmox Developer Team,

Thank you very much for implementing Virtiofs in Proxmox VE 8.4 – a fantastic feature! However, I regret that mountpoints (e.g., via Virtiofs or mp) are not migratable, even though the underlying local folders of these "directories" reside on PVE-cluster-wide managed mounts (like NFS storage backends).

Have I overlooked something, or are there fundamental reasons against this, such as on the OS side within the QEMU VM (Linux/Windows)?

I’d love to contribute as a developer to help complete this small puzzle piece, provided it’s not entirely impossible. Could you kindly let me know where I can best get involved?

Thank you for your amazing work and support!
 
Another deep dive, and after some minor modifications to Proxmox I ended up at a missing QEMU capability: VHOST_USER_PROTOCOL_F_LOG_SHMFD. More specifically, what is missing is the driver-side implementation in the interplay between the PCI device, the driver, and the (actually quite performant) DMA-like shared memory mappings from the host into the guest. There is nothing to log in this shared-memory scenario, is there? So the whole migration interface of QEMU, of which VHOST_USER_PROTOCOL_F_LOG_SHMFD is only one part, is obviously still a research topic. No reason not to pursue it, but clearly this "solution" isn't feasible here. Sorry for jumping the gun. It was probably disappointing enough to discover this long before I did and to intercept QEMU's refusal by surfacing the error messages to the user. Good job!
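For anyone following along, here is a small Python sketch (an illustration only, not QEMU code) of what that capability check boils down to: the vhost-user backend (virtiofsd) reports a protocol-features bitmask, and QEMU's vhost-user live-migration path needs the VHOST_USER_PROTOCOL_F_LOG_SHMFD bit in that mask before it can set up the shared-memory dirty log.

# Illustration only (not QEMU source): the migration pre-check reduces to one bit
# in the protocol-features mask that a vhost-user backend such as virtiofsd
# returns for VHOST_USER_GET_PROTOCOL_FEATURES.

# Bit positions as defined in the vhost-user protocol specification.
VHOST_USER_PROTOCOL_F_MQ = 0          # multiple virtqueue support
VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1   # dirty log via a shared-memory fd
VHOST_USER_PROTOCOL_F_RARP = 2
VHOST_USER_PROTOCOL_F_REPLY_ACK = 3

def backend_supports_dirty_log(protocol_features: int) -> bool:
    """True if the backend advertises the shared-memory dirty log that
    QEMU's vhost-user live-migration path relies on."""
    return bool(protocol_features & (1 << VHOST_USER_PROTOCOL_F_LOG_SHMFD))

# Example: a backend advertising only MQ and REPLY_ACK fails the check,
# so QEMU refuses the live migration and surfaces an error instead.
mask = (1 << VHOST_USER_PROTOCOL_F_MQ) | (1 << VHOST_USER_PROTOCOL_F_REPLY_ACK)
print(backend_supports_dirty_log(mask))  # False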
 
Thank you very much for implementing Virtiofs in Proxmox VE 8.4 – a fantastic feature! However, I regret that mountpoints (e.g., via Virtiofs or mp) are not migratable, even though the underlying local folders of these "directories" reside on PVE-cluster-wide managed mounts (like NFS storage backends).
If the storage is already an nfs mount, why do you need virtiofs at all? just mount the nfs mounts inside the guest.
 
If the storage is already an nfs mount, why do you need virtiofs at all? just mount the nfs mounts inside the guest.
That depends on the architectural design decisions.

1) Securing your storage

A storage network between Proxmox and the remote storage backend should not be accessible to the guest OS inside a VM. Do you trust the storage backend? Do you trust the guest OS? (Both in terms of intrusion protection.)

Any guest machine could end up in the hands of a third party (a bad actor). Securing the guest machine does not have to be the responsibility of the virtualization infrastructure provider (you) at all, but securing your storage backend is often the Achilles heel. Don't you also separate the Proxmox management network from the service networks? If so, why? Passive separation vs. management overhead.

Virtiofs isolates the storage in the same way as iSCSI volumes or volumes on NFS do, but with the charm of sharing ... and sharing is caring ;)


2) Access control, and by whom

Every virtiofs share is ready for immediate use; it is simply provided by the host. That, too, depends on your architecture.
 
A storage network between Proxmox and the remote storage backend should not be accessible to the guest OS inside a VM. Do you trust the storage backend? Do you trust the guest OS? (Both in terms of intrusion protection.)
I don't actually understand your point; maybe I misunderstood.
even though the underlying local folders of these "directories" reside on PVE-cluster-wide managed mounts (like NFS storage backends).
Any guest machine could end up in the hands of a third party (a bad actor). Securing the guest machine does not have to be the responsibility of the virtualization infrastructure provider (you) at all, but securing your storage backend is often the Achilles heel. Don't you also separate the Proxmox management network from the service networks? If so, why? Passive separation vs. management overhead.
Nothing in virtiofs deals with this. Perhaps, again, I don't understand what you're trying to accomplish.

2) Access control, and by whom
NFS has access control...
 
I don't actually understand your point; maybe I misunderstood.


Nothing in virtiofs deals with this. Perhaps, again, I don't understand what you're trying to accomplish.


NFS has access control...

Sure, almost everything has access controls. By that logic we could put every machine directly on the internet, for instance. Would you?

Virtiofs reduces the attack surface by moving access, and access control, to a sharable storage universe (for instance, an NFS share on a storage network) behind the isolation boundary of the virtual machine. That's a big win for anyone who wants that.
 
Once the dir mapping is available for containers (as "managed bind mounts"), enabling migration should be easy there. For live migration of VMs there are still some missing pieces AFAIK; once those become available and are proven to work, we will enable it as well, of course. Offline migration should already work if you set up the dir mapping correctly.
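To illustrate the last point, here is a minimal Python sketch (hypothetical data layout, not the PVE API) of the precondition that offline migration relies on: the directory mapping has to resolve to an existing path on the target node as well.

# Hypothetical illustration, not PVE code: offline migration of a guest with a
# virtiofs mount point only works out if the cluster-wide directory mapping also
# resolves to an existing path on the target node.

from pathlib import Path

# Made-up representation: mapping id -> {node name: backing path on that node}
dir_mappings = {
    "shared-data": {
        "pve1": "/mnt/pve/nfs-share/data",
        "pve2": "/mnt/pve/nfs-share/data",
    },
}

def mapping_ok_on(mapping_id: str, target_node: str) -> bool:
    """Check that the mapping is defined for the target node and the path exists.
    In a real cluster this would have to be checked on the target node itself."""
    path = dir_mappings.get(mapping_id, {}).get(target_node)
    return path is not None and Path(path).is_dir()

print(mapping_ok_on("shared-data", "pve2"))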
 
To be honest, live migration is more of a luxury problem at the moment, compared to all the work that has been done so far. Offline migration is possible, and that is most important in order to cover a failure of the node the VM is running on.

And after looking into virtiofsd ... it looks quite complex. There isn't any dirty logging yet, and any migration has to take care of the descriptors and contents of three components (the character device, the PCI device, and the memory ballooning). Phew, that's a lot. And as I've noticed, it's still work in progress to make it work properly in Win11.

Patience pays off ;)
 
But one which is easy to break as long as you don't use NFSv4 + Kerberos. And NFS/Kerberos brings its own problems.
Wait; are you looking to use virtioFS as a replacement for a network file system?

I mean, OK, but that's a very tough hill to climb; NFS and SMB have had decades to deal with various bugs, limitations, issues, etc.

And if you're not, well, NFS and SMB exist, warts and all. Virtiofs simply tries to facilitate bind mounts for VMs, which I argue don't have a point beyond a single-host homelab, and consequently the ability to live migrate them is not much of a priority.
 
Wait; are you looking to use virtioFS as a replacement for a network file system?

Nope, I don't. I even use NFS in my homelab, but I wouldn't use it for important stuff in environments with many devices, especially if people are allowed to bring their own device, since anybody with root on their endpoint can circumvent classic NFS access control.
 
I even use NFS in my homelab, but I wouldn't use it for important stuff in environments with many devices, especially if people are allowed to bring their own device, since anybody with root on their endpoint can circumvent classic NFS access control.
Out of curiosity, can you elaborate?
1. What does the number of devices have to do with your access control?
2. Similarly, what does it have to do with people bringing their own devices?
3. A user having root on their end has no consequence on the server side. I'm not understanding what you mean.

Consider a simple export directive like this:

/my/share 192.168.100.25(rw,sync,root_squash)
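# rw for exactly one client IP; sync commits writes before replying; root_squash maps the client's root (uid 0) to the anonymous user (typically nobody)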

What would be your concerns? IP spoofing/hijacking? If that's within your security scope, it should be handled at your DHCP/network layer, and once it is, what's the concern?
 
3. A user having root on their end has no consequence on the server side. I'm not understanding what you mean.
Not really. Traditional NFS trusts the client to provide correct information about things like the user ID (UID).

One obvious attack @Johannes S's slides point out is someone with root on their device creating an account with the same UID as another user. Then they mount the share and su to that user, where they can peruse or alter the victim's files. There are ways to leverage this into root access on the server; think, for example, about what you can do by altering the admin's .bashrc if he has sudo.
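To make the "trusts the client" part concrete, here is a small Python sketch (it only illustrates the wire format, nothing Proxmox-specific): with AUTH_SYS/AUTH_UNIX, the credential attached to each NFS request is an XDR struct that the client fills in itself, so the uid field is whatever the client chooses to claim.

# Illustration of the AUTH_SYS / AUTH_UNIX credential body (RFC 5531):
# stamp, machinename, uid, gid, auxiliary gids: all filled in by the client.
# The server has no way to verify the uid/gid, which is why root on a client
# can present any UID it likes.

import struct
import time

def auth_sys_credential(uid: int, gid: int, machinename: str = "client", gids=()) -> bytes:
    """Pack an AUTH_SYS credential body as XDR (big-endian, strings padded to 4 bytes)."""
    name = machinename.encode()
    pad = (4 - len(name) % 4) % 4
    body = struct.pack(">I", int(time.time()) & 0xFFFFFFFF)        # stamp
    body += struct.pack(">I", len(name)) + name + b"\x00" * pad    # machinename
    body += struct.pack(">II", uid, gid)                           # uid, gid
    body += struct.pack(">I", len(gids))                           # gid count
    body += b"".join(struct.pack(">I", g) for g in gids)           # aux gids
    return body

# The client decides what goes into the uid field, e.g. another user's 1000.
print(auth_sys_credential(uid=1000, gid=1000).hex())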

ETA: NFSv4 with Kerberos mostly mitigates this kind of thing. On multi-user machines you can still do some shenanigans with cached credentials but at least those expire and limit the time window.
 
Not really. Traditional NFS trusts the client to provide correct information about things like the user ID (UID).
easily remedied by limiting access to trusted guests at the network level.

Look, ANY method of sharing has tradeoffs between security and usability; it's the task of the sysadmin/netsec/opsec staff to ensure that whatever solution is deployed meets or exceeds your organization's security policy, and NO policy is completely secure, because such a policy would deny service to the intended audience. Your example is HIGHLY speculative, and would require MULTIPLE layers of security that a normal org has in place to be violated before it can be enacted: starting from the physical, to network security, to layer 3 access, to layer 2 access, AND THEN to do that UID obfuscation (which STILL doesn't give you root access on the host).

The POINT: NFS works just fine in an environment designed for its use. And I'm assuming the audience here ranges from the homelabber category (for whom all of this is academic anyway) to the professional category, who understand there are multiple layers to cover for any solution.
 
easily remedied by limiting access to trusted guests at the network level.
I was responding to the specific statement you made:

3. A user having root on their end has no consequence on the server side. I'm not understanding what you mean.
That is simply not true for "traditional" NFS without Kerberos. Getting root on the server is a "hacking 101" exercise, not something that is "HIGHLY speculative". It is a well-known attack!

So I agree with you that if you want to use traditional NFS you have to limit access to a hopefully small group of "trusted" hosts that have "trusted" users. This is perfectly fine for having an NFS share that stores VM images and is only accessible by the hypervisors in a cluster (for example). If you want to have shared home directories or other things that are shared with a wider audience of general users, even if those are in VMs, then you really should implement Kerberos.
 
Getting root on the server is a "hacking 101" exercise, not something that is "HIGHLY speculative". It is a well-known attack!
Let me just verify that I understood what you claim: having an NFS share open is a vector for an attacker to gain root access to the underlying server, and it's a well-known attack?

Humor me, as I am clearly not in the know; since it's a well-known attack and presumably not a secret, could you share it with me? I clearly am harboring blatant insecurities.
 
One obvious attack @Johannes S's slides point out is someone with root on their device creating an account with the same UID as another user.
I saw this German talk last year, the slides are in English though: https://talks.mrmcd.net/2024/talk/7RR38D/
I should have addressed this to begin with. This has always been true, so a central multi-user repository in 2025 would use NFSv4 with Kerberos, because it exists. That is an entirely different matter than "Getting root on the server is a 'hacking 101' exercise, not something that is 'HIGHLY speculative'. It is a well-known attack!"
 
Of course. If you use Kerberos+NFSv4 then you have mitigated that particular class of attack.

You seemed to be saying that such attacks can't work even with traditional non-Kerberized NFS if other unspecified things are in place. You very much cannot assume that. In such a case it is "hacking 101" to get root via messing with an admin's files. A "traditional" NFS setup without Kerberos should not be used for that kind of sharing, regardless of whatever ACLs or firewalls or VLANs or whatever you have established. That was the point I was trying to make.

There are other scenarios, like sharing disk images among cluster members with host access limited to admins, that can be safe. But you have to be careful. Adding any untrusted hosts or users, even inadvertently, creates a potential security problem. Export to specific hosts, not whole subnets (unless the shares are read-only). Don't allow any untrusted users on those hosts. Put the VMs on a different subnet/VLAN than the share. And so on.
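As a small aid for the "specific hosts, not whole subnets" rule, here is a hedged Python sketch (it assumes a plain /etc/exports layout and nothing else) that flags read-write exports granted to subnets or wildcards:

# Sketch only: scan exports(5)-style text and report rw exports whose client
# spec is a subnet or wildcard rather than a specific host.

import re

def risky_exports(exports_text: str):
    """Yield (path, client, options) for rw exports to subnet/wildcard clients."""
    for raw in exports_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line:
            continue
        path, *clients = line.split()
        for client in clients:
            match = re.match(r"(?P<host>[^(]*)(?:\((?P<opts>[^)]*)\))?", client)
            host = match.group("host")
            opts = (match.group("opts") or "").split(",")
            if ("/" in host or "*" in host) and "rw" in opts:
                yield path, host, ",".join(opts)

example = "/my/share 192.168.100.25(rw,sync,root_squash)\n/lab 10.0.0.0/24(rw,sync)"
for entry in risky_exports(example):
    print(entry)   # only the /lab subnet export is reported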

So I think we actually agree on this.
 