I need assistance with an issue I've been facing on my HP ProLiant ML350 Gen10 server running Proxmox. Specifically, I'm interested in improving the "Timing cached reads" figure that hdparm reports on this server.
Server Specs:
Server Model: HP ProLiant ML350 Gen10
CPU: Intel Xeon Bronze 3106
Storage: HP MO000800JWTBR-MSA-LF - HP 800GB SAS 12G MU LFF SSD for MSA Storage
RAID Controller: HPE Smart Array P408i-a SR Gen10 Controller
Memory: HP DDR4 SmartMemory 16GB
Performance Test Results:
I ran hdparm -tT to measure "Timing cached reads" and "Timing buffered disk reads" (the exact command is shown after the results). Below are the results from my server and from a DigitalOcean instance for comparison:
DigitalOcean:
Code:
/dev/vda:
Timing cached reads: 37718 MB in 2.00 seconds = 18892.70 MB/sec
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3154 MB in 3.00 seconds = 1051.20 MB/sec
My HP Gen10 ML350 Server:
Code:
/dev/sde:
Timing cached reads: 10298 MB in 1.99 seconds = 5184.00 MB/sec
Timing buffered disk reads: 4804 MB in 3.00 seconds = 1598.70 MB/sec
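For reference, the command I ran on each machine was of this form (executed as root; the device names match the outputs above):
Code:
hdparm -tT /dev/vda   # DigitalOcean instance
hdparm -tT /dev/sde   # HP ML350 Gen10 (Proxmox host)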
As you can see, there's a significant difference in "Timing cached reads" between the two machines: the DigitalOcean instance reports roughly 3.6x higher cached-read throughput (about 18,900 MB/s vs. about 5,200 MB/s), even though my server is actually faster on buffered disk reads.
My Questions:
What hardware and software changes can I make to improve the "Timing cached reads" performance on my server?
Are there specific RAID controller or SSD settings I should consider optimizing for better cache performance? (A sketch of what I plan to check is below, after this list.)
Could Proxmox configurations play a role in this performance difference?
Are there any known Proxmox or Linux kernel optimizations that can enhance cached reads performance?
What other diagnostic tools or strategies would you recommend to pinpoint the performance bottleneck?
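On the RAID controller question, my assumption is that the relevant cache settings can be inspected with HPE's ssacli tool. This is a minimal sketch of what I was planning to check; the slot number is a guess for my system and may differ:
Code:
ssacli ctrl all show status            # list controllers and their cache/battery status
ssacli ctrl slot=0 show detail         # cache ratio, drive write cache, no-battery write cache (slot=0 is an assumption)
ssacli ctrl slot=0 ld all show detail  # per-logical-drive caching status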
I'd greatly appreciate your help and guidance on this. Please share your experiences, suggestions, or any other information that could help improve cached-read performance on this HP ProLiant ML350 Gen10 server.
Thank you for your time and support.