I've been beating my head against this for a while.
Goal: route all file access over a 10G network and all other traffic over 1G, and ensure that VMs and containers are likewise constrained to the 10G network for file access. Current sticking point: sharing an NFS mount made on the Proxmox host with VMs.
Setup -
Proxmox installed as a single host, set up so that more nodes can be added easily later as needs change. 4 network interfaces: 2× 1G and 2× 10G.
NAS - Asus Lockerstor 10, also with 4 network interfaces: 2× 1G and 2× 10G.
3 NFS shares - Media, vmshared, and vmdata. Media is a RAID array for movies, music, TV shows, etc.
On both Proxmox and the NAS, the 1G interfaces are set up in an LACP bond to the switch, at 192.168.101.40 and 192.168.101.100 respectively.
On both the Proxmox host and the NAS, one 10G interface has a static IP: 192.168.102.3 and 192.168.102.1 respectively. An Ethernet cable directly connects these two interfaces, no switch. I will obtain a 10G switch later, once I decide to add more nodes.
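For anyone who wants specifics, the Proxmox side looks something like this in /etc/network/interfaces (interface names and the gateway address are examples, adjust for your own hardware):

```
# /etc/network/interfaces (Proxmox host) - sketch; NIC names and gateway are examples
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

# 1G bridge for VM/management traffic
auto vmbr0
iface vmbr0 inet static
    address 192.168.101.40/24
    gateway 192.168.101.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# Direct 10G link to the NAS; host-only, so no bridge on it yet
auto enp3s0f0
iface enp3s0f0 inet static
    address 192.168.102.3/24
```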
I then mounted the 3 NFS shares using the Proxmox GUI at the Datacenter level, so that they can be shared among more nodes later. All containers and VMs reside on the vmdata NFS share, which is a 2TB NVMe drive. Accessing containers and VMs is super fast, just as fast as if they were on local storage, because the network speed is nearly double the NVMe drive's access speed.
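The GUI writes these into /etc/pve/storage.cfg; mine looks roughly like this (export paths are illustrative, the key point is that `server` points at the 10G address):

```
# /etc/pve/storage.cfg - sketch; export paths are examples
nfs: vmdata
    server 192.168.102.1
    export /volume1/vmdata
    path /mnt/pve/vmdata
    content images,rootdir

# Media and vmshared are defined the same way, just with their own export paths
```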
I've found tutorials for how to do a bind mount of an NFS share that's mounted on the Proxmox host into a container, but I have not seen anything similar for VMs. If I mount the share inside the VM from 192.168.101.100, that data will go out the slower 1G interfaces. I already have a direct link between the NAS and the Proxmox host. HOW do I share the NFS mount with a VM that knows nothing of the 192.168.102.x network?
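For containers, the bind-mount approach from those tutorials boils down to one line in the container's config (CT ID 101 and the in-container path are examples), which is what I'd like an equivalent of for VMs:

```
# /etc/pve/lxc/101.conf - bind the host's NFS mount into the container
# equivalent CLI: pct set 101 -mp0 /mnt/pve/Media,mp=/mnt/media
mp0: /mnt/pve/Media,mp=/mnt/media
```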
There's got to be a way to expose that NFS mount to VMs, and let Proxmox handle the connection to the NFS share itself.