What is the best configuration for this hardware?

zdaniu

New Member
Aug 23, 2021
Hello everyone!

I've been searching the net for a "golden mean" for my equipment. The more I read, the dumber I feel ;)

Configuration of my (homelab) Fujitsu Primergy TX120 S3p:
CPU: Intel Xeon E3-1240 V2 @ 3.4GHz [4/8]
RAM: 4 x 8GB
SSD: 2 x 256GB, 2 x 512GB
RAID controller: standard Fujitsu D2507-D11 (LSI SAS1064E)

I have a problem choosing the right configuration for the Proxmox installation (homelab).
The result I would like to achieve is 2 x RAID1: one mirror from the 256GB drives and one from the 512GB drives.

During a test installation (RAID1 built from the 512GB disks) using the RAID controller and an ext4 file system, an attempt to clone a 60GB VM ended in 3-4 hours of waiting (iowait...), effectively blocking Proxmox.
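In case it's useful, the iowait can be watched while the clone runs with something like this (sysstat is not installed by default):

apt install sysstat
iostat -x 2
# a %util near 100 together with high r_await/w_await on the RAID volume
# means the array simply can't keep up with the clone's writes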

So... I'm looking for advice on how best to configure this.
Keep the controller? Or kick it out and use ZFS?
I don't know, I feel lost...
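For what it's worth, if I went the ZFS route, the layout I have in mind would look roughly like this (the disk paths and storage name are just placeholders; the boot mirror itself would be created by the Proxmox installer):

# OS + system pool: the two 256GB SSDs, selected as "ZFS (RAID1)" in the installer
# second mirror from the two 512GB SSDs, created afterwards:
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-SSD512_A /dev/disk/by-id/ata-SSD512_B
pvesm add zfspool tank-vm --pool tank --content images,rootdir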

Thanks to everyone for your willingness to help! :cool:
 
What SSD models have you got? 3-4 hours is way too slow. If I look at the LSI SAS1064E datasheet, it only uses a PCIe 1.0 x8 link, so a maximum of 2 GB/s for all four ports combined. So with all four drives attached to that RAID controller you probably wouldn't get the full SSD performance for every possible workload.
ZFS normally should be slower because, as a copy-on-write filesystem, it creates a lot of overhead and write amplification. Because of the write amplification and the performance hit, it is also not recommended to use consumer-grade SSDs with it. You could test it, maybe it's fine for your workload, but I would monitor the SMART attributes. And ZFS would need 4-8GB of RAM for caching.
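If you do test it, checking the wear on consumer SSDs looks roughly like this (the attribute names and IDs differ between vendors, so treat them as examples):

apt install smartmontools
smartctl -a /dev/sda | grep -i -e wear -e written -e percent
# on many consumer SSDs the interesting attributes are things like
# 177 Wear_Leveling_Count, 231 SSD_Life_Left or 241 Total_LBAs_Written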

Using mdadm with LVM-thin would also be an option. But it's not officially supported, so if you want to use it you should install Debian 11 and install PVE 7 on top of it.
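Very roughly, that route would look something like this (device names are placeholders, and the mirror holding the root filesystem would already be created in the Debian installer):

# mirror the two 512GB SSDs with mdadm
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# put an LVM thin pool on top and register it as PVE storage
pvcreate /dev/md1
vgcreate vmdata /dev/md1
lvcreate -l 95%FREE --thinpool data vmdata   # leave a bit of room for pool metadata
pvesm add lvmthin vmstore --vgname vmdata --thinpool data --content images,rootdir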
 
What SSD models have you got?
These are budget SSDs: Apacer AS350 (max read/write: 560/540MB/s)

3-4 hours is way too slow. If I look at the LSI SAS1064E datasheet, it only uses a PCIe 1.0 x8 link, so a maximum of 2 GB/s for all four ports combined. So with all four drives attached to that RAID controller you probably wouldn't get the full SSD performance for every possible workload.
With the above disks, the controller should not be the bottleneck - the total bandwidth of all four drives (4 x ~550 MB/s ≈ 2.2 GB/s) is roughly the controller's 2 GB/s limit anyway.
ZFS normally should be slower because, as a copy-on-write filesystem, it creates a lot of overhead and write amplification. Because of the write amplification and the performance hit, it is also not recommended to use consumer-grade SSDs with it. You could test it, maybe it's fine for your workload, but I would monitor the SMART attributes. And ZFS would need 4-8GB of RAM for caching.
8GB of RAM is not a problem. It's a homelab for rather small virtual machines, so the remaining 24GB are enough.
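If I go the ZFS route I would probably cap the ARC explicitly so it stays inside that budget, something like this (8 GiB shown as an example):

# limit the ZFS ARC to 8 GiB (value in bytes)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u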
Using mdadm with LVM-thin would also be an option. But it's not officially supported, so if you want to use it you should install Debian 11 and install PVE 7 on top of it.
Well ... I will have hours of "fun" testing the performance of multiple combinations ;)
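One option for getting the raw numbers (before timing whole clone jobs) would be a simple fio run like this; the test path is just an example:

apt install fio
# sync 4k random writes - the workload that usually hurts consumer SSDs the most
fio --name=synctest --filename=/tank/fio-testfile --size=4G --bs=4k \
    --rw=randwrite --ioengine=libaio --iodepth=1 --numjobs=1 \
    --fsync=1 --runtime=60 --time_based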
 
Not just hours. I've been testing ZFS combinations for 2 weeks now... ^^
Of course you're right, I should have written "weeks" ;}
In any case, cloning a virtual machine should still take less time than reinstalling and reconfiguring it from scratch.
 
