[SOLVED] Performance issues with new Lenovo Server (Hardware Raid)

Gerhard Wegl


System 1:
Hardware: Lenovo ThinkSystem ST250
Machine Type: 7Y45
Raid Controller: Avago RAID 530-8i without cache
Disks: 4x 960GB 6Gbps SATA 2.5" SSD Intel SSDSC2KB960G8L in RAID 5

#######################################################################################################
Hello Folks!

We have a serious performance problem with our new server.

We noticed the issue after installing a Windows Server 2019 VM and starting a specific program.
This program runs some database queries. Everything is installed on the same VM. (Please don't ask why :) )

On our old system this query takes about 20 seconds.
On the new Lenovo system, the exact same query takes up to 2 minutes.

The really strange thing: we also installed VMware ESXi on this server to compare it against Proxmox.

I've done some benchmark testing within the VM and took screenshots with the ATTO tool.
The benchmark results look very much the same, but the so-called "Schwuppdizität"
[German for the perceived speed at which a computer or program operates] :) of this query is very poor.

But with ESXi on the new server, the query takes about 25 seconds.

I don't get why the old server with 2x SATA SSDs in RAID 1 is faster
than our new one with 5x SAS 12G enterprise SSDs in RAID 5.

Could someone please enlighten me, or guide me in the right direction as to what we could have done wrong to get such poor performance?


Specs
#######################################################################################################
Proxmox 7.0-11

System New

Hardware: Lenovo ThinkSystem SR650
Machine Type: 7X06
Raid Controller: AVAGO RAID 930-8i 2GB Flash PCIe 12Gb
Disks: 5x 1.92TB 12Gbps SAS 2.5" SSD Samsung MZILT1T9HBJRV3 in RAID 5
lspci | grep RAID
ae:00.0 RAID bus controller: Broadcom / LSI MegaRAID Tri-Mode SAS3508 (rev 01)

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 7T 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 7T 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 100G 0 lvm /
└─pve-data 253:2 0 6.9T 0 lvm /var/lib/local_1
root@pve111a:~# hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 14768 MB in 2.00 seconds = 7392.68 MB/sec
Timing buffered disk reads: 4334 MB in 3.00 seconds = 1444.50 MB/sec

smartctl -a /dev/sg1
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.11.22-4-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: Lenovo
Product: RAID 930-8i-2GB
Revision: 5.17
Compliance: SPC-3
User Capacity: 7,675,999,944,704 bytes [7.67 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
Rotation Rate: Solid State Device
Logical Unit id: 0x600062b20711e580290140042509073a
Serial number: 003a0709250440012980e51107b26200
Device type: disk
Local Time is: Fri May 13 11:23:41 2022 CEST
SMART support is: Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature: 0 C
Drive Trip Temperature: 0 C

Error Counter logging not supported


#########################################################################################################

System Old

Hypervisor: Proxmox 5.1-41
RAID Controller: Adaptec ASR8405

lspci | grep RAID
04:00.0 RAID bus controller: Adaptec Series 8 12G SAS/PCIe 3 (rev 01)

root@pve:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 894G 0 disk
├─sda1 8:1 1 1M 0 part
├─sda2 8:2 1 256M 0 part
└─sda3 8:3 1 893.8G 0 part
├─pve-swap 253:1 0 20G 0 lvm [SWAP]
├─pve-root 253:2 0 30G 0 lvm /
└─pve-data 253:3 0 843.8G 0 lvm /var/lib/local_1
sdb 8:16 1 894G 0 disk
└─sdb1 8:17 1 894G 0 part
└─ssd2-data2 253:0 0 894G 0 lvm /var/lib/local_2
root@pve:~# hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 18566 MB in 2.00 seconds = 9292.87 MB/sec
Timing buffered disk reads: 1498 MB in 3.00 seconds = 499.25 MB/sec
root@pve:~# hdparm -tT /dev/sdb

/dev/sdb:
Timing cached reads: 18328 MB in 2.00 seconds = 9172.78 MB/sec
Timing buffered disk reads: 2696 MB in 3.00 seconds = 898.20 MB/sec

smartctl -a /dev/sg1
=== START OF INFORMATION SECTION ===
Vendor: ASR8405
Product: SSD2
Revision: V1.0
User Capacity: 959,914,704,896 bytes [959 GB]
Logical block size: 512 bytes
Rotation Rate: 22065 rpm
Logical Unit id: 0x2eba5bc900d00000
Serial number: C95BBA2E
Device type: disk
Local Time is: Fri May 13 10:57:10 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Disabled or Not Supported

smartctl -a /dev/sg2
=== START OF INFORMATION SECTION ===
Model Family: Samsung based SSDs
Device Model: SAMSUNG MZ7LM960HCHP-0E003
Serial Number: S2NLNXAGB07175M
LU WWN Device Id: 5 002538 c400b8827
Firmware Version: GXT3003Q
User Capacity: 960,197,124,096 bytes [960 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Fri May 13 10:57:17 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

smartctl -a /dev/sg3
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.13-2-pve] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Samsung based SSDs
Device Model: SAMSUNG MZ7KM960HAHP-00005
Serial Number: S2HTNX0J402539
LU WWN Device Id: 5 002538 c4059a46e
Firmware Version: GXM1103Q
User Capacity: 960,197,124,096 bytes [960 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Fri May 13 11:04:25 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

smartctl -a /dev/sg4
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.13-2-pve] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: SEAGATE
Product: XS960SE70024
Revision: 0204
Compliance: SPC-5
User Capacity: 960,197,124,096 bytes [960 GB]
Logical block size: 512 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Logical Unit id: 0x5000c500a18f5c6b
Serial number: HLJ06FY90000822150Z3
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Fri May 13 11:05:53 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled

smartctl -a /dev/sg5
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.13-2-pve] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: SEAGATE
Product: XS960SE70024
Revision: 0204
Compliance: SPC-5
User Capacity: 960,197,124,096 bytes [960 GB]
Logical block size: 512 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Logical Unit id: 0x5000c500a18f5c1b
Serial number: HLJ06GYC0000822150Z3
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Fri May 13 11:06:53 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled

##################################################################################

VM:
Server 2019 Standard
HDD with virtio driver
Guest Agent is installed

root@pve:/etc/pve/qemu-server# cat 444.conf
#machine%3A pc-i440fx-6.0
agent: 1
boot: c
bootdisk: virtio0
cores: 6
ide0: none,media=cdrom
ide2: none,media=cdrom
memory: 4096
name: AdminWAST2
net0: virtio=C6:39:AB:7C:6E:31,bridge=vmbr0,firewall=1
net1: e1000=7A:C8:5C:2B:AC:4D,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
scsi1: local_1:444/vm-444-disk-1.qcow2,size=5G
scsihw: virtio-scsi-pci
smbios1: uuid=5cac3ae4-b889-413c-befb-f919526f650c
sockets: 1
vga: std
virtio0: local_2:444/vm-444-disk-0.qcow2,cache=writeback,size=60G
virtio1: local_1:444/vm-444-disk-2.qcow2,size=5G
 

Attachments

  • C1.jpg
  • C2.jpg
  • ESXi7SR650SAS12GWin2019_1.jpg
  • ESXi7SR650SAS12GWin2019_2IO.jpg
  • Proxmox7SR650SAS12GWin2019_1.jpg
  • Proxmox7SR650SAS12GWin2019_2IO.jpg
Yes, RAID 5 in hardware.

Filesystem is XFS.

New Server
# pveperf /var/lib/local_1
CPU BOGOMIPS: 67200.00
REGEX/SECOND: 2210890
HD SIZE: 7038.34 GB (/dev/mapper/pve-data)
BUFFERED READS: 2555.40 MB/sec
AVERAGE SEEK TIME: 0.08 ms
FSYNCS/SECOND: 4818.20
DNS EXT: 23.70 ms
DNS INT: 40.31 ms

Old server with hardware RAID 1 setup
# pveperf /var/lib/local_2
CPU BOGOMIPS: 57468.60
REGEX/SECOND: 2021612
HD SIZE: 893.55 GB (/dev/mapper/ssd2-data2)
BUFFERED READS: 898.81 MB/sec
AVERAGE SEEK TIME: 0.15 ms
FSYNCS/SECOND: 11155.05
DNS EXT: 531.03 ms
DNS INT: 501.88 ms
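
As a cross-check of the pveperf fsync numbers, a sync-write test with fio along these lines should show the same gap (a sketch; fio must be installed and the path adjusted to the array you want to test):

# 4k synchronous writes with an fsync after every write,
# roughly the access pattern pveperf's fsync test stresses
fio --name=fsynctest --filename=/var/lib/local_1/fio.tmp \
    --size=1G --bs=4k --rw=write --ioengine=sync --fsync=1 \
    --runtime=30 --time_based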
 
I doubt you can get the best performance without cache. IMHO the HW controller disables the write cache to ensure integrity.
Test one disk without RAID.
+ compare with the cache of the VM's vdisk set to writethrough
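
For reference, switching the cache mode of an existing VM disk could look like this (a sketch based on the 444.conf posted above; the volume spec must match your actual config, and the change only takes effect after the VM is restarted):

# set the virtio0 disk of VM 444 to writethrough
qm set 444 --virtio0 local_2:444/vm-444-disk-0.qcow2,cache=writethrough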
 
We already checked in the hardware RAID that there is no cache activated.

We already tested with writethrough on the RAID 5, but the difference was within measurement accuracy, not worth mentioning.

I don't get it. All benchmarks tell me the new server should be faster, but in real-life testing, workloads are 4 times slower than on the old server.

Maybe it's a driver problem with the hardware RAID controller; I suspect this because we had problems installing the latest Proxmox VE.
 
I don't get it. All benchmarks tell me the new server should be faster, but in real-life testing, workloads are 4 times slower than on the old server.
pveperf clearly shows that your new array achieves a much lower fsync rate.
Check the disk cache settings. Enterprise SSDs have built-in capacitors (power loss protection), so you can leave the drive cache on.
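
To see whether the drive write cache is actually enabled, something like the following should work (a sketch; /dev/sda and the storcli controller/drive IDs are examples, and storcli must be installed for the MegaRAID-based 930-8i):

# WCE = 1 means the device's write cache is enabled
sdparm --get WCE /dev/sda
# query the physical drives behind the controller
storcli /c0 /eall /sall show all | grep -i 'write cache'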
 
What type of database?
What are the power settings on the host?
What are the CPUs in the old and in the new server?
 
Hi

Thanks for all of your responses.

About the CPU question:

Old Server
root@pve:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 2394.526
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4789.05
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb intel_ppin tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts


New Server:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4208 CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2100.000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 8 MiB
L3 cache: 11 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp pku ospke avx512_vnni md_clear flush_l1d arch_capabilities

To the database question:

It is DBISAM.

About the RAID 5 vs RAID 10 decision: simply to have more disk space.

From the output of lscpu it looks like the mitigations are turned on.

From all your input, I think our problem is a mixture of mitigation and cache issues.

I've also attached a screenshot equivalent to our SAS SSD configuration. (The SAS type is in production at the moment.)

Could the RAID 5 config really be the origin of the bad FSYNCS/SECOND rate?



Thanks in advance
 

Attachments

  • Raid Config ST250.JPG
I am not sure what you are expecting, but your new CPU isn't what I would call a "burner"...

Try disabling the C1E power state in the BIOS and set it to "full throttle". Configure Linux to always run "High Performance" and disable all power savings (see the sketch below).
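
On the Linux side, forcing the performance governor could look roughly like this (a sketch; assumes the linux-cpupower package is installed):

# show the current frequency policy and driver
cpupower frequency-info
# switch all cores to the performance governor
cpupower frequency-set -g performance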
And get RID of that RAID 5... RAID 10 and a spare should help. The CPU on your RAID controller cannot handle the SSD speed in parity mode well...
 
Definitely try RAID 10. Also, about the CPU: that's the problem with some of the new CPUs. Their single-thread rating is sometimes not that much better. Sure, a little better than these aging servers, but not a big boost. They are built more for multi-core and PCIe lanes, but if your app is single-threaded then performance is very low. A good alternative, which I am looking at for our use, is the Xeon E-2300 series CPUs like the E-2378G or E-2388G. These are mostly 1U's and I still like 2U servers, but I may try one.
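
A quick way to compare raw single-thread speed between the old and the new host (a sketch; assumes sysbench is installed, higher events/sec is better):

# single-threaded CPU benchmark for 10 seconds
sysbench cpu --threads=1 --time=10 run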
 
Could the RAID 5 config really be the origin of the bad FSYNCS/SECOND rate?
IOPS performance with RAID 5 doesn't scale with the number of disks, but with RAID 10 it does. So a 4-disk RAID 10 should give you around double the IOPS performance compared to a 4-disk RAID 5 (see the worked numbers below).
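
As a rough illustration of the write penalties (the 10000 sync-write IOPS per SSD is a made-up example figure):

RAID 10: penalty 2 (each write hits a mirror pair) -> 4 x 10000 / 2 = ~20000 write IOPS
RAID 5: penalty 4 (read data, read parity, write data, write parity) -> 4 x 10000 / 4 = ~10000 write IOPS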
 
OK, so today I created a RAID 10 with 4 disks.
I installed the oldest VE image, 6.4.1, from the website (I think that's an image without mitigations),
and also deactivated all C-states in the BIOS.

Now I'm getting roughly 8000 FSYNCS/SECOND
(all on the SR650).

Even with all the tweaks I've done, keep in mind the old server gets FSYNCS/SECOND: 11155.05 ...

Please correct me if I'm wrong, but the "use case" from our customer is a simple DB query. Shouldn't this only affect read performance?

I know that I can't expect that much of a performance gain from a newer system in single-threaded applications, but shouldn't it be at least at the same level, and not worse?

A few more questions:
1) VE 6.4.1 did not show the security vulnerabilities with lscpu; in which version did Proxmox start to implement the patches?
Our servers are on-premises and not reachable from the internet, so my concerns about this are really low.
I found too many different how-tos for disabling it. Which are the correct steps to deactivate? Our bootloader is GRUB.

2) Even if RAID 10 is the better choice for our server, do we have a chance to gain more speed with a better hardware RAID controller?

Thank you all for your time and patience!
 
Well, it's getting faster...

If you want to try with mitigations off, you have to add this to
/etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

then run # update-grub
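
After the reboot, a quick sanity check that the flag took effect (both paths exist on any recent kernel):

cat /proc/cmdline
grep . /sys/devices/system/cpu/vulnerabilities/*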

I thought the big speed difference affected only older CPUs though?
 
I'm curious about the same tests with mitigations=off.

IMHO:
Your previous server has a HW RAID controller with a dedicated SLC cache, so pveperf / fsync is served from the cache.
In most cases, the HW controller disables the drives' embedded "write cache" (SSD & HDD) when a RAID is configured, to ensure integrity by default. You must use the HW cache to boost performance (see the sketch below).
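
If the 930-8i's 2GB flash-backed cache is not being used for the virtual drive, checking and enabling write-back could look roughly like this (a sketch; /c0 /v0 are example IDs, and write-back should only be enabled with a healthy CacheVault/BBU module):

# show the current cache policy of the virtual drive
storcli /c0 /v0 show all | grep -i cache
# switch the virtual drive to write-back
storcli /c0 /v0 set wrcache=wb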
 
Hi

So I've done a lot of testing so far.

And for our specific database query problem, I've come to the conclusion that our CPU is the bottleneck, as itNGO and entilza mentioned.

I tried various Proxmox versions, with RAID 0, 1, 5, and 10, all with the same outcome for this database query (it always takes about 25-30 seconds).
The only difference I saw was when I enabled the C1E power state in the BIOS: the query took 5 seconds longer, and my fsync rates dropped.
Also, with mitigations=off there was no difference.

We also had a Lenovo ST250, so I looked at the specs of its CPU, which are much higher in single-thread performance. So I gave it a try, and boom: double the speed.

We are now in contact with the software developer to check if they could make their program more "parallel".

Thanks for all your support; now I know what I can try in the future.
 
