Stupid question - Proxmox VE host shared folder to be accessed between VMs

alpha754293

My apologies for my stupid question:
Here is what I am trying to do: I am looking to install Proxmox VE on a system, set up a folder on the host that has my files, and have my subsequent VMs be able to access that shared folder WITHOUT needing to go through the network interface.

Is there a way to do that?

My understanding is that for a Windows VM to access a shared folder hosted on the host, I would need to go over the network via SMB to get to the files (vs. being able to access them locally).

Local access would be a lot faster for me than going through SMB, as I am planning on setting up an NVMe SSD RAID array.

If I go through SMB via the NIC, then I would only be able to access the data on said shared folder at 1 Gbps vs. the 32 Gbps (or more, roughly) of a PCIe 3.0 x4 NVMe SSD.

If you can please point me in the right direction as to how to set something like this up, that would be greatly appreciated.

Thank you.
 
Search for virtio-fs and 9p, but I don't think you'll get better performance than the virtual network.
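As a rough, untested sketch of what the 9p route looks like (the path, VMID, and mount tag below are made-up placeholders):

    # On the Proxmox host: export a host directory into VM 100 as a 9p share
    # ("/srv/share" and the mount tag "hostshare" are placeholders)
    qm set 100 --args "-virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped-xattr"

    # Inside a Linux guest: mount the 9p export over virtio
    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare

A Linux guest can mount the tag directly like this; as far as I know, Windows guests don't have a native 9p client.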
For NVMe SSDs, will the virtual network be able to run at close to the native PCIe 3.0 x4 interface speeds? (i.e. if, say, one of the NVMe SSDs is able to get a sequential read of almost 3 GB/s (24 Gbps), would the virtual network be able to get close to that?)

Or would it still only be able to perform at something that's more akin to a 1 Gbps NIC?

Thank you.
 
The best way to answer questions about performance is to test it : )

But no, the traffic won't be limited by the NIC
 
The best way to answer questions about performance is to test it : )

But no, the traffic won't be limited by the NIC
Part of the problem is that this is a proposed solution that would drive hardware purchases (if it works the way that I am thinking/hoping it will), so it becomes a bit of a Catch-22: I can't really test it until I've bought the hardware, and I don't want to buy the hardware until I know it works.

I might test it on my local system: I will create a 64 GB RAM drive (to mimic/emulate the NVMe SSD), then see if I can share the RAM drive across multiple VMs and see what kind of performance I can get out of that on my test HP Z420 system.
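For the RAM drive itself, the plan is just a tmpfs mount on the host, along these lines (the size and mount point are placeholders):

    # On the Proxmox host (or the Z420 test box): create a 64 GB RAM-backed directory
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=64G tmpfs /mnt/ramdisk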
 
Can't you test the network speed using e.g. iperf? No need to get any hardware.
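Something like this between two guests would already tell you what the virtual network can do (untested example; the IP address is a placeholder):

    # In guest A: run the iperf3 server
    iperf3 -s

    # In guest B: run the client against guest A's address, with 4 parallel streams
    iperf3 -c 192.168.1.50 -P 4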
 
Can't you test the network speed using e.g. iperf? No need to get any hardware.
For the NVMe test, I don't have the NVMe RAID hardware yet (e.g. a Broadcom/Avago/LSI NVMe MegaRAID HW RAID HBA).

For the RAM drive (to emulate said NVMe RAID/SSD), yes, I would be able to use iperf to test that.

It's my "poor man's NVMe SSD": I can create it directly on the Proxmox host itself, and then try to "share" it so that all of the VMs would be able to see it via virtio-9p. (I haven't seen any documentation yet on how to get virtio-9p working on Windows guests, but I'll google that some more. There's documentation for "vanilla" virtio, but nothing on virtio-9p specifically that I've found yet; most of the documentation appears to be for Linux guests rather than Windows guests.)

And then once I get that going, I would be able to test/benchmark the native read/write speeds of said "poor man's NVMe SSD" (my RAM drive) and compare what the host is able to achieve vs. what the VM guests are able to achieve via virtio-9p.

That's the plan for tonight.
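For the benchmark itself, I'll probably use fio so the host and guest numbers are comparable; roughly something like this (the file path and size are placeholders, and I haven't settled on the exact options yet):

    # Sequential read test against the shared RAM drive; run the same command
    # on the host and inside a guest and compare the reported bandwidth
    fio --name=seqread --filename=/mnt/ramdisk/testfile --rw=read --bs=1M \
        --size=8G --ioengine=psync --direct=0 --numjobs=1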
 
For the RAM drive (to emulate said NVMe RAID/SSD), yes, I would be able to use iperf to test that.
iperf already reads/writes to/from RAM. So a RAM disk isn't really needed.

For NVMe SSDs, will the virtual network be able to run at close to the native PCIe 3.0 x4 interface speeds? (i.e. if, say, one of the NVMe SSDs is able to get a sequential read of almost 3 GB/s (24 Gbps), would the virtual network be able to get close to that?)

Or would it still only be able to perform at something that's more akin to a 1 Gbps NIC?
Virtual NICs are usually limited by how fast your CPU can handle the packets, so with a fast CPU you can get into the 20 Gbit range. But don't forget that SMB also has some overhead on top of the raw network performance.
 
iperf already reads/writes to/from RAM. So a RAM disk isn't really needed.
My apologies - let me clarify:

iperf can measure the network performance (presumably of virtio-9p?).

What iperf won't be able to measure is whether a shared folder hosted on the host would be accessible ACROSS VMs.

That's where the RAM drive comes into play.

iperf might be able to tell me how fast each of the VMs talk to each other.

But iperf won't be able to tell me if I can create a centralised storage pool/repository, and then have my 10 or so VMs read/write to it simultaneously.

Sorry for the confusion. Hope that helps to clarify things.

(I am trying to consolidate my 5 NAS servers down to 1, but still have the new server do all the things that the 5 separate NAS servers used to do; it would all just be virtualised instead of running on 5 physically separate NAS servers.

Some of my NAS servers are so old that they don't have the PCIe expansion slots needed to add a 10 GbE NIC, for example. Therefore, to go that route, I would need to spend about $1400 minimum to upgrade JUST that ONE NAS server so that it will support 10 GbE, and I would also need a 24-port 10 GbE switch (I'm using a 48-port gigabit switch right now, of which about half of the ports are populated), which would cost at least $2000.

By consolidating, I can skip the supporting networking infrastructure altogether and run everything at host speeds, which, with an NVMe RAID array, would be significantly faster than even 10 GbE. Yes, there is capex involved, but if it works the way that I am thinking/hoping it will, it would have significant performance benefits and I would be saving money: I would NOT have to buy a 24-port 10 GbE switch, and the consolidation would also cut my total power consumption from ~1.2 kW down to about 600 W, which, based on the power savings alone, will have a TARR of about 30 months (2.5 years).)

So there are multiple reasons/motivations for wanting to do something like this.
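To make the "shared across all of the VMs" part concrete, the rough picture I have in mind is the same host directory exported to every VM and mounted inside each Linux guest at boot. Completely untested on my end; the VMIDs, paths, and mount tag are made-up placeholders:

    # On the Proxmox host: attach the same directory to several VMs
    for vmid in 101 102 103; do
        qm set "$vmid" --args "-virtfs local,path=/mnt/ramdisk,mount_tag=hostshare,security_model=mapped-xattr"
    done

    # Inside each Linux guest: mount the shared tag at boot
    echo 'hostshare /mnt/hostshare 9p trans=virtio,version=9p2000.L,_netdev 0 0' >> /etc/fstab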

Virtual NICs are usually limited by how fast your CPU can handle the packets, so with a fast CPU you can get into the 20 Gbit range. But don't forget that SMB also has some overhead on top of the raw network performance.
I thought that when you create the VM, you have to tell it what type of NIC you want the VM to emulate, no? (e.g. an RTL8139 (100 Mbps) NIC or an e1000 (Intel gigabit) NIC)
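i.e. I thought the per-VM choice looked roughly like this (sketch from memory; the VMID and bridge name are placeholders):

    # Emulated Intel gigabit NIC
    qm set 100 --net0 e1000,bridge=vmbr0

    # Paravirtualised VirtIO NIC (no fixed emulated link speed)
    qm set 100 --net0 virtio,bridge=vmbr0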


re: CPU
It would be AWESOME if the paravirtualised network driver supported RDMA, because then, by bypassing the entire network stack, you'd be able to get it to run as fast as it CAN run, since it wouldn't be limited/restricted by anything else at that point.

re: SMB
Of course, if you AREN'T running SMB between the host and the VMs (i.e. you can share the folders directly and the guests can talk to the share directly), then the SMB overhead won't be an issue.
 
