Slow performance

alessio.71

Hello, I need help with an old install of Proxmox 5.4-14.
Configuration:
HP DL380 G8, 2 x 8-core Xeon E5-2670
64 GB RAM
HP Smart Array 800 RAID controller
Disks:
1 x RAID1 (2 x 300 GB SAS) for Proxmox
4 x RAID0 (4 x 1 TB prosumer HP SD700 SSDs), forming a ZFS RAIDZ1 pool
Network: HP 570, 2 x 10 Gb fiber, connected to an Aruba 1930 via SFP+

With large files, the transfer rate from PC to Samba share is about 100 MB/s (Windows copy).
With small files the performance is unacceptable: not constant, and it may vary from 10 to 40 MB/s (Windows copy).

The zpool is ok

I'm going to review the server.
Reading the support forum, the problem is in the storage area.
I also know about the problem of booting from USB when the controller is set to HBA mode.
What do you advise for better performance?

Attached is a test from a Windows 10 virtual machine.
 

Attachment: Istantanea_2021-07-26_13-44-22.png
1.) It's not good to use ZFS with a HW RAID controller. See here why: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
2.) With CrystalDiskMark you are just benchmarking your RAM.
3.) Consumer/prosumer SSDs are terribly slow depending on the workload, because without powerloss protection they can't cache/optimize sync writes (see here for some official benchmarks where a SATA enterprise SSD is 36x faster than a consumer SATA SSD).
4.) You should show us the config of your VMs so we can see which virtual storage controller, storage protocol and cache type are used. VirtIO SCSI + SCSI + cache=none should be best for Windows on ZFS (see the sketch after this list).
5.) That Proxmox is very old. You should at least upgrade to 6.4 if you don't want the very new 7.0.
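Not from the original post, but as a rough sketch of what point 4 describes: a VM config using VirtIO SCSI with cache=none would look roughly like the lines below (the VM ID, storage name, disk size and resource values are made up).

Code:
# /etc/pve/qemu-server/100.conf -- illustrative values only
ostype: win10
cores: 4
memory: 8192
scsihw: virtio-scsi-pci
scsi0: local-zfs:vm-100-disk-0,cache=none,discard=on,size=64G

You can dump the actual settings of a VM with "qm config <vmid>" on the host.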
 
Hello Dunuin,
  1. I know about the problem, and the solution is HBA mode (with USB boot)
  2. Why? It's a virtual machine stored on the zpool, let me know more
  3. I know
  4. Yes, it is VirtIO on both Linux and Windows; cache=writethrough gave me a little more performance (but that was only a test)
  5. Yes, it's time to upgrade
Many thanks
 
Hello Dunuin,
  1. I know about the problem, and the solution is HBA mode (with USB boot)
If you are using 4 x RAID0 so that Proxmox sees 4 individual disks, that is still wrong... ZFS needs direct access to the disks without any abstraction layer in between, like a RAID controller. HBA mode would be fine.
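As a hedged aside (these commands are not from the original reply, and the pool name "tank" is just an example): once the controller is in HBA mode, you can verify that ZFS really sees the raw disks.

Code:
# show the disks with model and transport as the kernel sees them
lsblk -o NAME,SIZE,MODEL,TRAN
# the pool should be built directly on those disks (ideally via /dev/disk/by-id paths)
zpool status tank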
  2. Why? It's a virtual machine stored on the zpool, let me know more
Because CrystalDiskMark only does async writes. These get cached in RAM first by Windows, then cached in the Linux page cache in the host's RAM if you selected a virtio caching mode like "writeback" instead of "none", then cached in RAM by ZFS's ARC on the host, then cached in the internal RAM of the SSD, then cached on the NAND in SLC mode if there is free space, and only after that does it get written to the actual NAND. So it gets cached 4-5 times in between, and RAM is always magnitudes faster than your SSD, so you are only benchmarking the RAM.
If you want to see real performance you need to disable all caches, write TBs of data so the caches can't absorb it, or use sync writes, which can't be cached.
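A minimal sketch of such a sync-write test with fio (not from the post; the file path, size and runtime are arbitrary):

Code:
# 4K random sync writes at queue depth 1 -- worst case for SSDs without powerloss protection
fio --name=sync4k --filename=/tank/fio-test.bin --rw=randwrite --bs=4k \
    --size=4G --ioengine=psync --iodepth=1 --numjobs=1 --fsync=1 \
    --runtime=60 --time_based --group_reporting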
 
What do you think about the new configuration:

4 x Samsung 960 GB PM883 for ZFS
1 x Smart Array P440 in HBA mode?
 
I would run them as a striped mirror (raid10 equivalent) for best performance.
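For illustration only (the device paths are placeholders; use your real /dev/disk/by-id names), a striped mirror of the four SSDs could be created roughly like this:

Code:
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/ata-SAMSUNG_PM883_1 /dev/disk/by-id/ata-SAMSUNG_PM883_2 \
    mirror /dev/disk/by-id/ata-SAMSUNG_PM883_3 /dev/disk/by-id/ata-SAMSUNG_PM883_4

The same layout can also be created from the Proxmox GUI under the node's Disks -> ZFS panel.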
 
Thanks everybody.
So I've installed the P440 in HBA mode,
installed 4 x Samsung 960 GB PM883 as a ZFS striped mirror (raid10),
then installed a 10G fiber card, with one port connected to the QNAP 10G and the other to an Aruba 1930,
and did a completely fresh install of Proxmox.

Restore of data now runs at over 900 MB/s and backup time is reduced by 50%;
see the attached image for the improvements.

Thank You All
 

Attachment: 5.PNG
As you can see in my benchmark thread, I tested with CrystalDiskMark, but I tried to show a more representative situation for sync workloads by also testing with ZFS caches disabled, and I also provided host performance data, which removes one of the caching layers.

I don't think the benchmark is useless on VMs, as it still gives an idea of which configuration is faster than another, but unless you are confident your workload is asynchronous writes only, you should also test with something that flushes the data.
 
Yes, if you use CDM without disabling ARC caching on the host and virtio caching, you are basically only benchmarking your RAM and not your drives, especially if you only run 5 x 1 GB of IO instead of 5 x 100 GB or something in that range that would overload the buffers and actually torture the SSDs.
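A hedged sketch of how the ARC could be taken out of the picture for such a test (the dataset name is an example; remember to revert afterwards):

Code:
# cache only metadata, not file/volume data, for the dataset backing the VM disks
zfs set primarycache=metadata tank/vmdata
# ... run the benchmark with the VM disk set to cache=none ...
zfs set primarycache=all tank/vmdata    # revert to the default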
 
That's true as well; I did short bursts of writes, not sustained long ones.

My workloads are mostly on spindles, and I also know that when I am using SSDs at home it is only going to be bursty I/O, so that did affect the tests I was doing, although I do plan more tests with different tools, especially when I do the spindle testing.

Essentially I just wanted to know the most suitable configuration for the OS drive of my Windows guest, and that guest is only being used for testing custom Windows ISOs. I will also install some Linux and BSD guests, but again they will be bursty only, as I will just be testing remote upgrade procedures and custom scripts. So all short bursts of I/O workloads.
 
Yes, I know what you said about benchmarks.
The goal of the test was a simple before/after comparison; it wasn't meant to be a real benchmark.
The real benchmark was the satisfied customer working with large SolidWorks files: the workflow is now almost fluid.
 
