Starting with Proxmox: storage setup and performance questions for small servers

rlopez

New Member
Apr 18, 2024
Hello all,
I am testing Proxmox VE with the idea of using it for all my new customer servers from now on. These are generally small servers running 1-5 VMs, with 32GB-64GB of RAM in most cases. Given the type of customer, budgets are limited.

Until now I was using the free version of VMware ESXi, but that is no longer available. With VMware it was mandatory to use hardware RAID, and I used entry-level LSI RAID HBAs with good results for this type of server. One of my usual configurations consists of:
- 2 x 240GB SATA SSD in RAID1 for the system
- 2 x 960GB SATA SSD in RAID1 for VMs

Now, with Proxmox, I am trying to use ZFS (without a HW RAID card) to achieve a similar configuration, but I am seeing a big performance loss. The SATA disks are connected directly to the mainboard, in the same layout: 2 x 240GB SATA in ZFS RAID1 (mirror) for the Proxmox system, and 2 x 240GB SATA in ZFS RAID1 for the VMs.
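For reference, the VM pool was created roughly like this (the pool name "vmdata" and the disk IDs are placeholders, not my exact devices):

  # create a two-disk ZFS mirror (RAID1) for VM storage
  zpool create -o ashift=12 vmdata mirror /dev/disk/by-id/ata-SSD_1 /dev/disk/by-id/ata-SSD_2

  # register it in Proxmox as storage for VM disks
  pvesm add zfspool vmdata --pool vmdata --content images,rootdir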

I am testing with a Windows Server VM (the usual OS my customers need), and I can see a degradation in disk read/write performance compared to the same configuration under VMware.

I have read a lot about the different ZFS settings (ARC, L2ARC, log device, and so on), but I can't find any definitive recommendation on how to build a simple but reliable setup for this scenario.
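One of the few concrete knobs I did find is capping the ARC so it doesn't compete with VM RAM. If I understood correctly, it would be something like this (8 GiB is just an example value for a 32GB host):

  # /etc/modprobe.d/zfs.conf - limit the ZFS ARC to 8 GiB
  options zfs zfs_arc_max=8589934592

  # apply (needed when the root filesystem is on ZFS), then reboot
  update-initramfs -u -k all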

In principle I would like to use ZFS for all the great features it has, but maybe I should use LVM with ext4 instead?
Should I use an HBA card in non-RAID (IT) mode to improve performance with ZFS?
Should I think about changing my storage scheme to something else like RAID10? If so, can I have a single RAID10 pool for both the Proxmox system and the VMs? Is that recommended? (By RAID10 I mean striped mirrors, as in the sketch below.)
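Just to be clear about what I mean by RAID10 under ZFS, something along these lines (device paths are placeholders):

  # ZFS "RAID10": one pool striped across two mirror vdevs
  zpool create -o ashift=12 tank \
      mirror /dev/disk/by-id/ata-SSD_1 /dev/disk/by-id/ata-SSD_2 \
      mirror /dev/disk/by-id/ata-SSD_3 /dev/disk/by-id/ata-SSD_4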

Any ideas would be appreciated.

Thank you so much.
 
rlopez said: "I am testing with a Windows Server VM (the usual OS my customers need), and I can see a degradation in disk read/write performance compared to the same configuration under VMware."
Hi @rlopez, it seems like you only have one data point: PVE+ZFS+no/hba+Windows
Try other combinations to find out whether it's a hardware or software problem. For example:
PVE+ext4+no/hba+Windows
PVE+ext4+h/w raid+Windows
etc.
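Whatever combination you test, keep the benchmark itself constant so the numbers are comparable. A sketch with fio on the host (the target path and job parameters are just an example; adjust to your setup):

  # 4k random writes with direct I/O against the storage under test
  fio --name=randwrite --filename=/vmdata/fio.test \
      --ioengine=libaio --rw=randwrite --bs=4k --size=4G \
      --iodepth=16 --numjobs=1 --direct=1 --runtime=60 --time_based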

There are many tunable options in ZFS, but it's not clear yet whether this is a ZFS issue or a VM configuration issue. Make sure to use VirtIO controllers and drivers in your VM.
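For example (vmid 100 is a placeholder, and the VirtIO drivers must already be installed inside the Windows guest before you switch the boot disk over):

  # check which controller/bus the VM's disks currently use
  qm config 100 | grep -E 'scsihw|scsi|sata|ide'

  # switch the VM to the paravirtualized SCSI controller
  qm set 100 --scsihw virtio-scsi-pci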

If ESXi with hardware raid worked reliably for you, why not start with profiling an apples-to-apples configuration with Proxmox?
My advice is to avoid introducing too many changes at the same time.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Are you using enterprise-grade SSDs with PLP (power-loss protection, and a bigger cache)? ZFS is different from conventional hardware RAID controllers, which come with their own cache. If your SSDs are consumer or "pro" consumer drives, their cache will fill up quickly and performance will drop.
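You can at least identify the exact drive model from the host and then check the vendor datasheet for PLP (smartctl does not always report PLP directly; /dev/sda is a placeholder):

  apt install smartmontools                # if not already installed
  smartctl -i /dev/sda                     # vendor, model, firmware
  smartctl -a /dev/sda | grep -i power     # some drives expose power-loss attributes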
 
