Poor pveperf performance

decibel83

Hi.
I think my pveperf results are very poor:

Code:
pve1:/var/lib/vz/template/iso# pveperf 
CPU BOGOMIPS:      31920.53
REGEX/SECOND:      666950
HD SIZE:           19.69 GB (/dev/pve/root)
BUFFERED READS:    194.80 MB/sec
AVERAGE SEEK TIME: 3.45 ms
FSYNCS/SECOND:     170.77
DNS EXT:           153.14 ms
DNS INT:           177.43 ms (localdomain)

The server is a Dell PowerEdge T410 with two Xeon E5504 2.0 GHz CPUs, 8 GB of DDR3 RAM and two 600 GB SAS hard disks in RAID1 on a Dell PERC 6/i RAID controller.

What is your opinion about my pveperf results?

Thank you very much!
Bye.
 
What is your opinion about my pveperf results?

Your BUFFERED READS and FSYNCS/SECOND are low. Is the write-back cache enabled? I'm no expert on the 6/i, but it looks like it is just a rebranded LSI 1068E controller. What does lspci report?
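
If you want to check, something like this should show the controller model and the current cache policy; a rough sketch, assuming LSI's MegaCli utility is installed (the binary may be called MegaCli, MegaCli64 or megacli depending on the package):

Code:
# Identify the RAID controller
lspci | grep -i raid

# Show the cache policy of all logical drives (WriteBack vs. WriteThrough)
MegaCli -LDGetProp -Cache -LAll -aALL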

I think the consensus is that the 6/i is a "fake raid" card and that you would be well advised to get a real RAID controller.

Cheers
 
From my experience that is really low.

I am running two 6/i's and have never had performance that low. From what I have seen, most people believe it is a fake RAID card, which it is not. I have done dozens of tests and found that with my Proxmox installation the best setup is a BBU with write-back mode enabled. You will see a dramatic increase.

I am using WD Caviar Blacks and here is my performance:
No BBU in write-through: 300-400 fsyncs/sec
With BBU in write-back: 5000-6000 fsyncs/sec

Definitely look at using a BBU (which does not come standard with the card) with write-back mode.
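
For reference, on a PERC 6/i the switch can usually be made from the OS; a minimal sketch, again assuming the MegaCli utility (the adapter and logical-drive selectors below are the usual defaults, adjust to your setup):

Code:
# Check that the battery is present and charged
MegaCli -AdpBbuCmd -GetBbuStatus -aALL

# Enable write-back on all logical drives of adapter 0
MegaCli -LDSetProp WB -LAll -a0

# Optional: fall back to write-through automatically if the BBU fails
MegaCli -LDSetProp NoCachedBadBBU -LAll -a0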

P.S. The 6/i is an LSI MegaRAID SAS 1078, at least mine is.

Code:
lspci -v
01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)
Subsystem: Dell PERC 6/i Adapter RAID Controller
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at dfd80000 (64-bit, non-prefetchable) [size=256K]
I/O ports at ec00
Memory at dfdc0000 (64-bit, non-prefetchable) [size=256K]
Expansion ROM at dfd00000 [disabled] [size=32K]
Capabilities: [b0] Express Endpoint, MSI 00
Capabilities: [c4] Message Signalled Interrupts: Mask- 64bit+ Queue=0/2 Enable-
Capabilities: [d4] MSI-X: Enable- Mask- TabSize=4
Capabilities: [e0] Power Management version 2
Capabilities: [ec] Vital Product Data <?>
Capabilities: [100] Power Budgeting <?>
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
 
From my experience that is really low.
I am using WD Caviar Blacks and here is my performance:
No BBU in write-through: 300-400 fsyncs/sec
With BBU in write-back: 5000-6000 fsyncs/sec

I changed the controller: now I have a PERC H700 with 512 MB of onboard cache memory and a BBU, and the array is set to write-back.

Now I have about 1500 FSYNCS/SECOND.

What do you think about this value?
 
I changed the controller: now I have a PERC H700 with 512 MB of onboard cache memory and a BBU, and the array is set to write-back.

Now I have about 1500 FSYNCS/SECOND.

What do you think about this value?
Hi,
I think you can work with this configuration, but good values are higher.
Keep in mind that with two disks in RAID1 you only get the speed of a single disk (plus the controller's caching), so don't expect too much even with a fast RAID controller.
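
One quick way to see whether the buffered-read number is simply bounded by a single disk is to read straight from the block device; a small sketch, where /dev/sda is only a placeholder for your array device:

Code:
# Raw sequential read from the array (device name is an assumption, adjust it)
hdparm -tT /dev/sda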

I had four SAS drives in RAID10 on an Areca controller and got the following values:
Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      27293.32
REGEX/SECOND:      1096839
HD SIZE:           543.34 GB (/dev/mapper/pve-data)
BUFFERED READS:    470.67 MB/sec
AVERAGE SEEK TIME: 5.45 ms
FSYNCS/SECOND:     4639.97
DNS EXT:           71.76 ms
DNS INT:           0.53 ms

Udo
 
And what about RAID5 with three hard disks (one spare)? How much slower is it than RAID10? Is RAID10 better than RAID5?
 
RAID5 is slower than RAID10 for write operations, because every write also has to update parity (a read-modify-write cycle), while RAID10 only has to mirror the write.
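
As a back-of-the-envelope comparison (the figure of ~150 random IOPS per 15k SAS disk is an assumption, not a measured value):

Code:
# Rough random-write estimate: RAID10 costs 2 I/Os per write, RAID5 costs 4
DISK_IOPS=150; DISKS=4
echo "RAID10: $(( DISK_IOPS * DISKS / 2 )) write IOPS"
echo "RAID5:  $(( DISK_IOPS * DISKS / 4 )) write IOPS"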