Remote storage

Taser
New Member · Oct 18, 2019
Hey, I'm currently running 6 hosts, ~80 VMs, 2 Proxmox Backup Servers, and 10 NFS servers.
Everything is running wonderfully; however, it feels like NFS isn't the right way to go here, as it's not block-based.

We run backups at night and don't use snapshots.

I considered using ZFS over iSCSI; however, since we run an HP-only environment (almost everything is based on HP DL380 G8s with P420(i) RAID cards),
we don't have access to an HBA/"IT mode" controller to make ZFS work on them.


Is there any decent alternative, or is NFS "okay"?
 
You said that everything is running "wonderfully" - why try to fix what isn't broken? I've seen many large shops run their virtualization on NFS/NAS, and they were quite happy. Granted, those were commercial NAS products. Are there better ways out there? Certainly. But you have invested in specific hardware with certain limitations, and it seems to mostly(?) work for you?


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I would like to optimize for speed: right now our 40Gbit InfiniBand network isn't getting saturated, and neither are the disks.
From my understanding, block storage should be a lot faster.
 
The first step would be to understand why your disks and links are not saturated to your liking. Is it because the NFS servers are slow, or perhaps your VMs are not IO hungry? Again, I am going off "everything is running wonderfully".

The critical performance metric between virtualization and storage is latency. Both the server and the client can introduce delays into the communication. Is the client submitting I/Os to storage fast enough, or is it spending most of its time processing while the storage sits idle? Or is the client waiting for extended periods for storage to reply? There is an art to finding the performance bottleneck.
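
As a quick illustration of the client-side view, a tool like ioping shows the per-request latency the host actually sees. A minimal sketch, assuming your NFS export is mounted under /mnt/pve/nfs-store (the path is a placeholder, adjust it to your actual storage ID):

# Per-request latency against the NFS-backed directory
ioping -c 20 /mnt/pve/nfs-store

# Compare against local storage to separate network latency from disk latency
ioping -c 20 /var/lib/vz

If the NFS numbers are an order of magnitude above the local ones, the time is being spent in the network/server path rather than on the disks themselves.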

As someone who is associated with a block SDS product, I am certainly biased towards block being faster than NFS. If the quality of the code is equal, there is simply less overhead with iSCSI than with NFS. However, both are software written by people, and some NFS implementations may be better than some block implementations.

My suggestion would be to start with netperf and fio to establish a baseline of your environment's capabilities.
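
For example, something along these lines could serve as a starting point; the hostname nfs-server-01 and the mount path are placeholders for your environment:

# Network round-trip latency to the NFS server
# (TCP_RR reports transactions/sec, the inverse of request-response latency)
netperf -H nfs-server-01 -t TCP_RR

# Raw network throughput over the 40Gbit link
netperf -H nfs-server-01 -t TCP_STREAM

# 4k random-read IOPS and latency against the NFS mount, bypassing the page cache
fio --name=randread --directory=/mnt/pve/nfs-store --rw=randread --bs=4k \
    --size=2G --runtime=60 --time_based --direct=1 --ioengine=libaio \
    --iodepth=32 --group_reporting

Comparing the fio latency percentiles against the raw netperf round-trip time tells you how much overhead the storage stack adds on top of the network itself.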

 
