Proxmox network storage performance

PmUserZFS

Well-Known Member
Feb 2, 2018
I'm building a Proxmox cluster for a lab.

To gain HA we need shared storage.

I don't have enough disks etc. for Ceph, so I'm going for shared storage from TrueNAS running ZFS with a dedicated L2ARC (P4500) and a SLOG/ZIL (P1600X).
For disks I have 6x 4TB HGST SATA drives in mirrors.
Data is shared over the network with either NFS or iSCSI.

Is there any way to have a local cache where the VM is running? Or how can I increase performance?

3x Proxmox hosts, each with:
32 vCPUs (16-core E5-2698 CPUs)
256GB DDR4
8x 300GB SAS HDDs, of which 3 are a mirror for the Proxmox boot; the rest are currently unused


TrueNAS:
8-core Intel Silver 6408, I think it was.
RAM 64 to 128GB, we will see.
6x 4TB HGST SATA HDDs
SLOG: Intel P1600X 58GB
L2ARC: Intel P3700 400GB NVMe


2x 10GbE NICs on all hosts.

I have two PCIe slots on each Proxmox host (via risers, since these are 1U HP DL360 Gen9 servers); there I could add a PCIe-to-M.2 adapter for an SSD cache or just local storage.
 
Is there any way to have a local cache where the VM is running?
Yes, just set the VM's disk cache mode to something other than the default "none".
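
For example, on the CLI (the VM ID, storage name, and disk volume are placeholders; "writeback" is just one possible mode, pick one that matches your data-safety needs):

Code:
# re-specify the existing disk of VM 100 with a different cache mode
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback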

Or how can I increase performance?
L2ARC benefits are most of the time not really noticeable. Better to use a special device SSD for the metadata. Also use more disks for more performance. I suspect you have RAIDZ? If so, don't run VMs off it; it's just too slow. Striped mirrors (RAID10) are faster.
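
A rough sketch of that pool layout (pool and device names are made up; note the special vdev must be mirrored as well, because losing it loses the whole pool):

Code:
# three striped mirrors (RAID10) plus a mirrored special device for metadata
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf \
    special mirror /dev/nvme0n1 /dev/nvme1n1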

Data is shared over the network with either NFS or iSCSI.
IMHO, the best experience in this setup is ZFS-over-iSCSI, yet I do not know the current state of this with TrueNAS.
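
For reference, a ZFS-over-iSCSI entry in /etc/pve/storage.cfg looks roughly like this (pool, portal, and target are placeholders; stock Proxmox only ships the comstar, istgt, iet, and LIO providers, so verify which one, if any, works with your TrueNAS version):

Code:
zfs: truenas-zfs
    pool tank/vms
    portal 192.168.10.10
    target iqn.2005-10.org.freenas.ctl:proxmox
    iscsiprovider LIO
    content images
    sparse 1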
 
Yes, just set the VM's disk cache mode to something other than the default "none".


L2ARC benefits are most of the time not really noticeable. Better to use a special device SSD for the metadata. Also use more disks for more performance. I suspect you have RAIDZ? If so, don't run VMs off it; it's just too slow. Striped mirrors (RAID10) are faster.


IMHO, the best experience in this setup is ZFS-over-iSCSI, yet I do not know the current state of this with TrueNAS.
As I wrote in my post, the disks will be mirrored; still spinning disks, though.

L2ARC will be served from a P3700 400GB NVMe, but still over the network with NFS.

I'm now thinking of a local cache on each Proxmox host, a cheap M.2 read cache? A Samsung 980 or so? Can it be used somehow?

Else I just install a 1TB 980 in one or two hosts and run the VMs from there without disk HA. I could sync the zpool via zfs send for backup/semi storage HA.
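
A minimal sketch of that sync (the pool/dataset, snapshot names, and the second host are made up):

Code:
# take a new snapshot and replicate it incrementally to a second host
zfs snapshot tank/vms@rep2
zfs send -i tank/vms@rep1 tank/vms@rep2 | ssh pve2 zfs recv -F tank/vms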

Yes, iSCSI has theoretically lower latencies; how difficult is it to set up properly, I wonder? I need to do more research here.

I have also added more info about the setup in the OP.
 
A Samsung 980 or so?
Please don't cripple your system by using consumer SSDs; use enterprise SSDs in otherwise enterprise hardware.

Yes, iSCSI has theoretically lower latencies; how difficult is it to set up properly, I wonder?
That's not the main point here. The main point is to have WORKING HA storage. With ZFS-over-iSCSI you already have working HA storage with snapshot support, which otherwise only QCOW2-on-NFS/CIFS and Ceph provide.
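
Snapshots then work through the normal tooling, e.g. (VM ID and snapshot name are placeholders):

Code:
qm snapshot 100 before-update
qm rollback 100 before-update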
 
With ZFS-over-iSCSI you already have working HA storage
But... please excuse me: a cluster with HA is implemented to let VMs survive the death of one server in the cluster.

OP has three PVE nodes and one single TrueNAS.

ZFS-over-iSCSI sourced from that single TrueNAS (or any other single iSCSI target) introduces a single point of failure --> a failing TrueNAS kills the whole cluster.

I have no experience at this level, but you would need a second/redundant TrueNAS and dual/redundant networking, including switches, to compensate for this.


Oh, did I mention I am a ZFS (and replication) fanboy...? We are lucky to have so many choices :)
 
But... please excuse me: a cluster with HA is implemented to let VMs survive the death of one server in the cluster.

OP has three PVE nodes and one single TrueNAS.

ZFS-over-iSCSI sourced from that single TrueNAS (or any other single iSCSI target) introduces a single point of failure --> a failing TrueNAS kills the whole cluster.

I have no experience at this level, but you would need a second/redundant TrueNAS and dual/redundant networking, including switches, to compensate for this.


Oh, did I mention I am a ZFS (and replication) fanboy...? We are lucky to have so many choices :)
Yes, TrueNAS is a SPOF, but that's what I've got for the lab.

Currently I'm leaning towards LVM over iSCSI, with write-through cache on local SSDs as a read cache.
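
A sketch of what that could look like in /etc/pve/storage.cfg (portal, target, VG name, and the base LUN ID are placeholders; one caveat from above: raw LVM on iSCSI has no snapshot support):

Code:
iscsi: truenas-lun
    portal 192.168.10.10
    target iqn.2005-10.org.freenas.ctl:proxmox
    content none

lvm: truenas-lvm
    vgname vg_truenas
    base truenas-lun:0.0.0.scsi-<lun-id>
    shared 1
    content images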
 
But... please excuse me: a cluster with HA is implemented to let VMs survive the death of one server in the cluster.

OP has three PVE nodes and one single TrueNAS.

ZFS-over-iSCSI sourced from that single TrueNAS (or any other single iSCSI target) introduces a single point of failure --> a failing TrueNAS kills the whole cluster.

I have no experience at this level, but you would need a second/redundant TrueNAS and dual/redundant networking, including switches, to compensate for this.

Oh, did I mention I am a ZFS (and replication) fanboy...? We are lucky to have so many choices :)
You're totally right; my answer was tailored with the OP's hardware in mind.
 