Max Performance in file transfers

md61267

New Member
Jun 13, 2020
Dear forum users,
I need a Proxmox server with maximum data transfer rates because of the software that is being used.
One of the programmers told me that they prefer NVMe PCIe cards for that.
My question is whether I am losing too much speed when using ZFS. So is it better to have more RAM (planned are 128 GB) for the server, which needs 32 GB, or to have 2 or 4 fast NVMe cards attached through PCIe 3.0 lanes?
Does anyone have experience with that?
The other idea would be to pass through the NVMe card to the guest, but lose the ZFS features, or to use more RAM (256 GB) together with "normal" SSDs, which transfer 6 Gb/s at most instead of the 48 Gb/s possible with enough lanes.

Any hints are welcome.
md61267
 
My question is whether I am losing too much speed when using ZFS.
While ZFS does have an overhead, it is usually rather low, especially when you're using VMs, since their disks go through the ZFS volume layer (zvols), not the filesystem layer.
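As a quick illustration (a sketch assuming a default Proxmox installation with VM disks on ZFS-backed storage; the zvol name below is hypothetical), you can see that VM disks live as block-device datasets rather than files:

```
# List block-device datasets (zvols); Proxmox creates one per VM disk
# on ZFS-backed storage:
zfs list -t volume

# Inspect the block size of one such zvol (hypothetical name):
zfs get volblocksize rpool/data/vm-100-disk-0
```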

So is it better to have more RAM (planned are 128 GB) for the server, which needs 32 GB, or to have 2 or 4 fast NVMe cards attached through PCIe 3.0 lanes?
Preferably both? RAM is still vastly more performant than PCIe storage, so caching always helps, especially for ZFS. A good rule of thumb is to allocate about half of your system memory to ZFS caching (the ARC).
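For reference, a minimal sketch of capping the ARC on a Proxmox host, assuming 128 GB of RAM and thus a 64 GiB limit (adjust the byte value to your setup):

```
# /etc/modprobe.d/zfs.conf
# Limit the ZFS ARC to 64 GiB (64 * 1024^3 = 68719476736 bytes),
# i.e. roughly half of 128 GB of system memory.
options zfs zfs_arc_max=68719476736
```

After editing the file, run update-initramfs -u and reboot so the module option takes effect.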

The other idea would be to pass through the NVMe card to the guest, but lose the ZFS features
Passthrough will certainly give you the highest performance, but applications that need *that* much raw bandwidth are rare, and you'll probably be fine with ZFS or LVM disks for your VMs. Passthrough also means losing out on certain features like snapshots, easy backups and live migration.
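If you do try passthrough, here is a rough sketch using Proxmox's qm tool (the PCI address 0000:03:00.0 and VMID 100 are placeholders; look yours up with lspci, and note that IOMMU/VT-d must be enabled on the host):

```
# Find the NVMe controller's PCI address:
lspci -nn | grep -i nvme

# Pass the device through to VM 100 as a PCIe device
# (pcie=1 requires the q35 machine type):
qm set 100 -hostpci0 0000:03:00.0,pcie=1
```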

or to use more RAM (256 GB) together with "normal" SSDs, which transfer 6 Gb/s at most instead of the 48 Gb/s possible with enough lanes.
I'm not sure how much more RAM will help with regular SSDs. It might help a bit with reads thanks to caching, but ultimately it will not offset the performance loss of using non-NVMe storage.

Also note that while many SATA SSDs are indeed capable of maxing out the 6 Gb/s transfer limit, the same is much rarer for PCIe SSDs: little hardware can actually saturate a PCIe 3.0 link, and even then usually only under ideal "benchmarking" conditions. Some of the performance gains also show up in IOPS rather than raw bandwidth, as PCIe offers lower latency and a more direct connection to the CPU.
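If you want to verify this on your own hardware, here is a minimal fio sketch that measures both sequential bandwidth and random-read IOPS (the test file path is a placeholder; O_DIRECT has historically not worked on ZFS datasets, so run this against a raw device or a non-ZFS mount):

```
# Sequential read bandwidth, 1 MiB blocks:
fio --name=bw --filename=/mnt/test/fio.dat --size=4G --rw=read \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1

# Random read IOPS, 4 KiB blocks:
fio --name=iops --filename=/mnt/test/fio.dat --size=4G --rw=randread \
    --bs=4k --iodepth=64 --ioengine=libaio --direct=1
```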
 
