Hello. I've managed to really confuse myself about my network storage options and could use some advice. I work solo from home, so my Proxmox setup is a mix of real production workloads and homelab stuff, with a 10 Gbps network backbone. I'm not trying to optimize everything like I'm running a datacenter, but I also don't want to leave any obvious, easy-to-maintain performance on the table.

Background
- I have a TrueNAS server with 4x 10 Gbps ports (2 per NIC). One NIC is a Mellanox card, and the other is an Aquantia card.
- I currently only use the Mellanox card, in an LACP bond, with devices (including my Proxmox node) connecting to the NAS over a dedicated storage VLAN. So, that's two VLANs:
- Proxmox Management VLAN (only PVE nodes live here).
- Storage VLAN (all 10 Gbps data traffic that I want to guarantee high performance for lives here, MTU 9000). A rough sketch of how the bond + Storage VLAN looks on a node is below, after this list.
- The Aquantia ports are currently unused for data transfer (I'm using them for management at 1 Gbps).
- I want to add a new Proxmox node, but I don't have access to large NVMe storage for it right now. So, I'm looking at my network storage options for VM virtual disk storage.
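For reference, here's roughly how I picture the bond + Storage VLAN on a PVE node. This is a sketch only: the interface names, VLAN ID (20), and addresses below are placeholders rather than my real values.

```
# /etc/network/interfaces (sketch -- NIC names, VLAN ID, and addresses are placeholders)
auto enp65s0f0
iface enp65s0f0 inet manual
        mtu 9000

auto enp65s0f1
iface enp65s0f1 inet manual
        mtu 9000

# LACP (802.3ad) bond across both 10 Gbps ports
auto bond0
iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000

# VLAN-aware bridge on top of the bond so guests can tag into any VLAN
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000

# Host interface on the Storage VLAN (iSCSI traffic), jumbo frames end to end
auto vmbr0.20
iface vmbr0.20 inet static
        address 10.0.20.11/24
        mtu 9000

# (Management lives on its own VLAN / the 1 Gbps ports and isn't shown here.)
```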
Initial Plan: I was going to keep this simple with the PVE "ZFS over iSCSI" storage type over my Storage VLAN, now that TrueNAS is supported as a target via a plugin (or rather, two alternative plugins). But then I realized that an LACP bond would, most of the time, limit me to a 10 Gbps connection to my virtual disk storage, since LACP hashes each flow onto a single link and won't spread one iSCSI session across both. (A rough sketch of the storage.cfg entry I have in mind is below, after this list.)
- I think for most services that would be sufficient.
- But I also run a Windows VM for work and various Linux VMs that I access over remote desktop for things like ham radio, cloud gaming, etc. On my current PVE node, I'm running an NVMe mirror and get about 3.5 GB/s reads (and probably ~1.75 GB/s writes), so a 10 Gbps link (roughly 1.25 GB/s) would be a massive hit on both reads and writes compared to local storage. That's very concerning to me.
- I've never done a thing with multipathing before. I don't know how much effort it is to set up and maintain and I'm just assuming that TrueNAS' implementation will work well enough.
- Assuming it's not ridiculously overcomplicated for a SOHO environment, do you think it's actually worth it to get, theoretically, 20 Gbps to my shared VM storage?
- Or, for the kind of use I've described, is 10 Gbps enough?
- And, I suppose, from the point of view of just getting something up and running so I can start storing VM disks on the network: is there any reason I can't start by accessing my iSCSI storage over a single LACP-bonded vmbr, and then move to multipath iSCSI later? (My rough understanding of what the multipath side would involve is sketched below.)
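For reference, this is roughly the kind of /etc/pve/storage.cfg entry I had in mind for the initial plan. It's a sketch only: the storage name, pool, portal address, IQN, and blocksize are placeholders, and the iscsiprovider line (plus any API/credential options) will depend on which of the two TrueNAS plugins I end up using.

```
# /etc/pve/storage.cfg (sketch -- names, addresses, and IQN are placeholders;
# whichever TrueNAS plugin is installed supplies its own provider and extra options)
zfs: truenas-vmdisks
        pool tank/proxmox
        portal 10.0.20.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        iscsiprovider LIO
        content images
        sparse 1
        blocksize 16k
```

The idea being that PVE creates a zvol per virtual disk in that pool on the NAS and the VM reaches it over iSCSI on the Storage VLAN.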
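And for the "move to multipath later" part, my understanding is that the generic open-iscsi + multipath-tools flow on a node looks something like the sketch below. The addresses are placeholders, and it assumes the two TrueNAS ports each get their own IP (ideally on separate subnets) instead of sitting in the LACP bond; it's also the generic iSCSI path, not necessarily how the ZFS over iSCSI plugins manage their sessions.

```
# Sketch only -- portal IPs are placeholders; assumes one portal per 10 Gbps path
apt install open-iscsi multipath-tools

# Discover the same target through both portals, then log in to both
iscsiadm -m discovery -t sendtargets -p 10.0.20.10
iscsiadm -m discovery -t sendtargets -p 10.0.21.10
iscsiadm -m node --login

# Minimal multipath config, then restart the daemon
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
EOF
systemctl restart multipathd

# Both paths should now appear under a single /dev/mapper device
multipath -ll
```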
Yes, Ceph would have been better.
- I wasn't planning to do a multi-node cluster at first, and back when I started tinkering with Proxmox (PVE 7), Ceph was described as an experimental, not-for-production technology, and even a lot of the popular tutorials at the time recommended against it for most people. I feel like that's changed in the last 4-5 years or so, and I might have made different decisions if I were building a cluster from scratch in 2026, but here we are.
- If I wanted to switch to Ceph today, with current pricing on 1-2 TB NVMe drives, I'd end up spending anywhere from $500-$1000.
- I might revisit switching everything to Ceph in a year or two, after I actually have a built-out workload of running VMs and LXCs, a better idea of my actual needs, and a budget for buying more NVMe, but it's not an option right now.