Setting Up PVE Shared Storage for VM and LXC via NFS: How Many NFS Users?

Sep 1, 2022
Hello,

I've got 3 Proxmox nodes, and I'd like them to share networked storage via NFS.

I've seen guides that do this with a single NFS user that each node uses to access the share, and with each node having its own user.

I tend to favor the latter approach; intuitively it just seems better for the nodes not to share a single user account on the storage server, and it might even make the storage server's logs easier to read if there's a problem.

Am I missing something? Is there some reason to prefer all the nodes sharing a single NFS user on the storage server?

(I'm aware of the need to manage the user and group IDs on the storage server and the nodes to keep them in sync.)
 
During a migration, no data is transferred to shared storage. If the permissions don't match, that could be a problem.
 
I've seen guides that do this with a single NFS user that each node uses to access the share, and with each node having its own user.
I've never seen such a system ... sources?

It also does not make much sense to me. In order to have working shared storage in PVE, you need one configured shared storage that is enabled on all nodes, and there will only be one entry in /etc/pve/storage.cfg. This implies that you'll have to use the same user. The only option I can think of for individual users would be to configure a directory as "shared", configure the mountpoint behind that directory manually on each node, and then use individual users.

Security between nodes and storage is normally handled by a dedicated storage network - not only for security, but also for performance and redundancy.
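For concreteness, the single cluster-wide entry in /etc/pve/storage.cfg for an NFS share looks roughly like this (the storage id, server address, and paths below are made-up examples):

```
nfs: tank-nfs
        server 192.168.1.50
        export /mnt/tank/pve
        path /mnt/pve/tank-nfs
        content images,rootdir,iso
        options vers=4
```

Because this one entry is shared across the whole cluster, every node mounts the same export the same way - there is no per-node user anywhere in that configuration.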
 
I've never seen such a system ... sources?
The OP probably means for non-PVE use cases. Yes, of course in a secure environment the NFS permissions are important, i.e. where you have multiple developers, applications, etc. However, in the PVE case there is one application/user (the cluster). All hosts must be able to access the same data.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I've seen guides that do this with a single NFS user that each node uses to access the share, and with each node having its own user.
You're overthinking it.

PVE accesses its storage as root. If you wanted to limit access PER NODE, you could run into serious permission issues by employing root squash and forcing individual users - and you really need to ask yourself WHY you want this. A cluster is effectively a collection of resources that are meant to be used INTERCHANGEABLY; if you don't want it used that way, why are you clustered to begin with?
 
Thanks everyone. :)

I should have specified that I don't have a cluster set up yet. At this point, I want to set up network storage for a single node, and then create a cluster later. That's what motivated this question. I should have been clearer, but I was trying to post before I forgot to post. Again. Sorry for the confusion.

So, with the bolded goal in mind, what should I do now? It sounds like I just need a single user (that is, that I need to behave as though I have a cluster of 1 node that I will expand later), and when I create the cluster, I'll just keep using that user? Is that correct?
 
So, with the bolded goal in mind, what should I do now? It sounds like I just need a single user (that is, that I need to behave as though I have a cluster of 1 node that I will expand later), and when I create the cluster, I'll just keep using that user? Is that correct?
Proxmox needs access to the NAS to store anything there. Proxmox uses the "root" account (ID 0) to access NFS; that is hardcoded.

You can :
a) set up your NAS to squash ID 0 to something more acceptable to you, and properly set up access so that PVE is able to execute any operation it needs. PVE will let you know (through warning messages and logs) when it can't do something.
b) allow root access to NFS from the PVE hosts.
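As a sketch, the two options might look like this in a Linux-style /etc/exports (TrueNAS builds the equivalent from its UI, e.g. via the "Maproot User" setting; the path, subnet, and ids below are made-up examples):

```
# (b) let root from the PVE hosts act as root on the share
/mnt/tank/pve  10.0.10.0/24(rw,sync,no_subtree_check,no_root_squash)

# (a) squash ID 0 to a dedicated local user/group instead
/mnt/tank/pve  10.0.10.0/24(rw,sync,no_subtree_check,root_squash,anonuid=1001,anongid=1001)
```

With option (a), the directory on the server must be owned (and fully writable) by that anonuid/anongid, or PVE operations will start failing with permission errors.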

As long as your future nodes have identical access to NFS as your first, yet-to-be-connected node, you will have no issues.

When you create a cluster, a special filesystem is created and operated by the cluster. This filesystem is where the PVE cluster stores configuration that is shared across all cluster nodes. The storage access is part of this shared configuration. You will not need to do anything on the second node - it will automatically read the existing config provided by the first node.
Depending on your NFS setup you may need to update your NAS to allow the new IP to connect to NFS (if you restricted IP access in your export configuration).

Good luck

 
Proxmox needs access to the NAS to store anything there. Proxmox uses the "root" account (ID 0) to access NFS; that is hardcoded.

What? What. Why?

I just watched someone do a network storage tutorial on YouTube for a different hypervisor and that was certainly not required.

What.

I'm sorry. Not enough caffeine in my system yet for that kind of shock. ;)

You can :
a) set up your NAS to squash ID 0 to something more acceptable to you, and properly set up access so that PVE is able to execute any operation it needs. PVE will let you know (through warning messages and logs) when it can't do something.

This sounds like what I want to do. I just have no idea how to do it yet. I'm going to have to find a Proxmox-specific NFS share tutorial for TrueNAS, I guess. I've never had to squash root to something else with NFS before. In fact I've deliberately avoided that with other NFS shares for general use.

As long as your future nodes have identical access to NFS as your first, yet-to-be-connected node, you will have no issues.

I'm guessing/hoping you mean that the NFS user on each node has the same group and user ID in TrueNAS? It was my understanding that I'd need to set it up that way.

When you create a cluster, a special filesystem is created and operated by the cluster. This filesystem is where the PVE cluster stores configuration that is shared across all cluster nodes. The storage access is part of this shared configuration. You will not need to do anything on the second node - it will automatically read the existing config provided by the first node.
Depending on your NFS setup you may need to update your NAS to allow the new IP to connect to NFS (if you restricted IP access in your export configuration).

This makes it sound like it might be better to wait and set up shared storage after I've set up a cluster? Is that accurate?

I'm a bit lost on where it sets up the cluster FS. I need to go watch more cluster tutorials, apparently. :)

Good luck

Thanks for this. It's been a big help.
 
This makes it sound like it might be better to wait and set up shared storage after I've set up a cluster? Is that accurate?
No; your NFS share can be set up at any time, including now. Your initial assertion - that you can/should treat the storage the same whether you have a cluster of one node or many - is correct.

What? What. Why?
I think you may be misreading what @bbgeek17 is saying. It doesn't need access to the NAS itself, it needs access to the share you're exposing. This is normal. As for the root access - this isn't as big a deal as it might seem, especially if you limit the share access to a specific location (which you should). Do not ever open system locations to NFS, even with root squash ;)

"Squashing" in NFS parlance simply means that a *nix user with a specific id is rejected and remapped. Let's say you want all users to access a particular NFS resource as the same user (anonymous, for all intents) regardless of their original user id on the guest OS - your export line would look like:

/nfsroot/myshare iprange(all_squash,anonuid=UID,anongid=GID)

where iprange is the range containing your guests, UID is the local user you want all FS traffic to appear as, GID the group.

This is most commonly done for a source root user. You can tell the NFS server that you want to "squash" the root user and instead use a predetermined UID, which you specify in your exports on the server -

/nfsroot/myshare iprange(root_squash,anonuid=UID,anongid=GID)

As long as all cluster members end up with the SAME uid/gid on the share, and the permissions on the server are set so that user is the owner of the resource, it will work fine.
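One quick way to sanity-check that mapping from a node is to look at the numeric owner of the mounted share. This is only a sketch - the mountpoint and the expected 1001:1001 ids below are made-up example values:

```shell
# Sketch: confirm the mounted share is owned by the uid/gid
# the server squashes to (example values, adjust to your setup).
expected="1001:1001"
mountpoint="/mnt/pve/myshare"

# stat -c '%u:%g' prints the numeric owner uid and gid
actual="$(stat -c '%u:%g' "$mountpoint" 2>/dev/null || echo unmounted)"

if [ "$actual" = "$expected" ]; then
    echo "uid/gid match: $actual"
else
    echo "mismatch or not mounted: $actual"
fi
```

Any file that root on a node creates through the mount should then also show up as that same uid/gid in `ls -ln` on the server.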
 
What? What. Why?
I just watched someone do a network storage tutorial on YouTube for a different hypervisor and that was certainly not required.
What. I'm sorry. Not enough caffeine in my system yet for that kind of shock
Hard to compare with a random YouTube video of an unnamed hypervisor.

Many things in Proxmox are tied to root. You should think of Proxmox as an appliance rather than a random add-on app. It was built that way historically for efficiency and simplicity. There is not enough reason or demand to change something that's been in place for years. If you can articulate a requirement that cannot be achieved by other means, where an NFS access step-down to a non-root user is required, and you think such a reason is widely applicable, you should file a Request for Feature Enhancement here: https://bugzilla.proxmox.com.

This sounds like what I want to do. I just have no idea how to do it yet. I'm going to have to find a Proxmox-specific NFS share tutorial for TrueNAS
NFS is NFS is NFS. There is nothing Proxmox-specific about it. Proxmox does not implement its own NFS client; it uses the underlying OS's native packaging (Debian).
I'm guessing/hoping you mean that the NFS user on each node has the same group and user ID in TrueNAS? It was my understanding that I'd need to set it up that way.
There are no other users in a default Proxmox installation than root. The data on NFS (ISOs, disk images, snippets, etc.) is all written by the root system account. If you set up additional users/pools, the restrictions apply at a higher level - in PVE, any disk access is proxied to underlying system applications.
It doesn't matter what TrueNAS has; the primary purpose of NFS within Proxmox is to share data. If you don't want data shared, then perhaps setting up multiple NFS datastores, one specific to each node, is what you want.
I'm a bit lost on where it sets up the cluster FS. I need to go watch more cluster tutorials, apparently.
https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)

Can I ask whether this is for a home lab or a production business environment? If it's the former - just do it and experience things, then come back and ask questions. If it's the latter, perhaps you can benefit from one of the classes that Proxmox Partners run.

Keep an open mind.

 
