Slow Proxmox 2.1 Performance on Dell H310 Raid Card

I used the default Proxmox 3.1 install, ext3.
Mounted the other disks using the regular mount command: mount /dev/sdX /var/lib/vz
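
In other words, something like the sketch below, with /dev/sdX as a placeholder for the actual device and assuming ext3:

Code:
# one-off mount onto the Proxmox storage directory
mount /dev/sdX /var/lib/vz

# to survive reboots, add a matching line to /etc/fstab, e.g.:
# /dev/sdX   /var/lib/vz   ext3   defaults   0   2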

Sent from my Nexus 5 using Tapatalk
 
I reinstalled the entire server to test the hardware, breaking the existing RAID and using the latest version of Proxmox in case the older version was the issue.

Sent from my Nexus 5 using Tapatalk
 
Single Disk /dev/sdb

Code:
root@prox31:~# fio /usr/share/doc/fio/examples/iometer-file-access-server
iometer: (g=0): rw=randrw, bs=512-64K/512-64K, ioengine=libaio, iodepth=64
2.0.8
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [723K/177K /s] [637 /164  iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=18075
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3273.5MB, bw=1517.9KB/s, iops=495 , runt=2208471msec
    slat (usec): min=2 , max=207925 , avg=33.47, stdev=1886.38
    clat (usec): min=46 , max=1414.8K, avg=98863.80, stdev=71330.05
     lat (usec): min=625 , max=1414.8K, avg=98897.51, stdev=71319.16
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   20], 10.00th=[   28], 20.00th=[   42],
     | 30.00th=[   55], 40.00th=[   68], 50.00th=[   82], 60.00th=[   98],
     | 70.00th=[  120], 80.00th=[  147], 90.00th=[  192], 95.00th=[  237],
     | 99.00th=[  338], 99.50th=[  383], 99.90th=[  498], 99.95th=[  553],
     | 99.99th=[  685]
    bw (KB/s)  : min=   77, max= 6937, per=100.00%, avg=1521.78, stdev=1071.32
  write: io=842256KB, bw=390528 B/s, iops=124 , runt=2208471msec
    slat (usec): min=2 , max=1309.9K, avg=92.70, stdev=4129.57
    clat (msec): min=4 , max=1946 , avg=120.90, stdev=96.99
     lat (msec): min=4 , max=1946 , avg=120.99, stdev=97.05
    clat percentiles (msec):
     |  1.00th=[   14],  5.00th=[   24], 10.00th=[   34], 20.00th=[   49],
     | 30.00th=[   63], 40.00th=[   77], 50.00th=[   94], 60.00th=[  116],
     | 70.00th=[  143], 80.00th=[  180], 90.00th=[  241], 95.00th=[  306],
     | 99.00th=[  465], 99.50th=[  545], 99.90th=[  717], 99.95th=[  816],
     | 99.99th=[ 1582]
    bw (KB/s)  : min=   44, max= 2339, per=100.00%, avg=382.22, stdev=285.39
    lat (usec) : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
    lat (usec) : 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.45%, 20=4.18%, 50=21.21%
    lat (msec) : 100=33.47%, 250=35.64%, 500=4.81%, 750=0.20%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.53%, sys=0.93%, ctx=1341622, majf=0, minf=21
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=1094095/w=273872/d=0, short=r=0/w=0/d=0


Run status group 0 (all jobs):
   READ: io=3273.5MB, aggrb=1517KB/s, minb=1517KB/s, maxb=1517KB/s, mint=2208471msec, maxt=2208471msec
  WRITE: io=842256KB, aggrb=381KB/s, minb=381KB/s, maxb=381KB/s, mint=2208471msec, maxt=2208471msec


Disk stats (read/write):
  sdb: ios=1093387/275801, merge=2173/694, ticks=106597574/33457846, in_queue=140058484, util=100.00%
root@prox31:~#

Raid 10 - 10 Disks
/dev/sdc

Code:
root@prox31:~# fio /usr/share/doc/fio/examples/iometer-file-access-server
iometer: (g=0): rw=randrw, bs=512-64K/512-64K, ioengine=libaio, iodepth=64
2.0.8
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [2928K/711K /s] [2579 /638  iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=17929
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3274.1MB, bw=6016.4KB/s, iops=1962 , runt=557400msec
    slat (usec): min=1 , max=92613 , avg= 6.38, stdev=271.50
    clat (usec): min=70 , max=479683 , avg=25699.84, stdev=15972.58
     lat (usec): min=313 , max=479688 , avg=25706.41, stdev=15972.03
    clat percentiles (msec):
     |  1.00th=[    5],  5.00th=[    8], 10.00th=[   10], 20.00th=[   13],
     | 30.00th=[   16], 40.00th=[   19], 50.00th=[   22], 60.00th=[   26],
     | 70.00th=[   31], 80.00th=[   37], 90.00th=[   47], 95.00th=[   57],
     | 99.00th=[   80], 99.50th=[   90], 99.90th=[  114], 99.95th=[  125],
     | 99.99th=[  165]
    bw (KB/s)  : min= 1287, max=22473, per=100.00%, avg=6021.87, stdev=4198.65
  write: io=840807KB, bw=1508.5KB/s, iops=489 , runt=557400msec
    slat (usec): min=2 , max=91186 , avg=12.60, stdev=495.70
    clat (msec): min=1 , max=533 , avg=27.67, stdev=17.17
     lat (msec): min=1 , max=533 , avg=27.68, stdev=17.17
    clat percentiles (msec):
     |  1.00th=[    8],  5.00th=[   10], 10.00th=[   12], 20.00th=[   16],
     | 30.00th=[   18], 40.00th=[   21], 50.00th=[   24], 60.00th=[   27],
     | 70.00th=[   32], 80.00th=[   38], 90.00th=[   49], 95.00th=[   61],
     | 99.00th=[   87], 99.50th=[   97], 99.90th=[  129], 99.95th=[  163],
     | 99.99th=[  281]
    bw (KB/s)  : min=  369, max= 6269, per=100.00%, avg=1509.84, stdev=1088.10
    lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.35%, 10=9.42%, 20=32.98%, 50=49.14%
    lat (msec) : 100=7.81%, 250=0.27%, 500=0.01%, 750=0.01%
  cpu          : usr=1.14%, sys=1.77%, ctx=1217740, majf=0, minf=21
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=1093851/w=272875/d=0, short=r=0/w=0/d=0


Run status group 0 (all jobs):
   READ: io=3274.1MB, aggrb=6016KB/s, minb=6016KB/s, maxb=6016KB/s, mint=557400msec, maxt=557400msec
  WRITE: io=840806KB, aggrb=1508KB/s, minb=1508KB/s, maxb=1508KB/s, mint=557400msec, maxt=557400msec


Disk stats (read/write):
  sdc: ios=1091779/272858, merge=1494/226, ticks=27950616/7520834, in_queue=35471699, util=100.00%
root@prox31:~#
 
Did you configure spare disks in your RAID10 setup? I would say the write performance of RAID10 with 10 disks should be about 5 times that of a single disk (writes are spread across 5 mirror pairs), which in your case means around 620 IOPS, and you get 489.
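
As a rough back-of-the-envelope check, assuming the random writes spread evenly over the 5 mirror pairs and taking your single-disk fio result as the baseline:

Code:
# single-disk random-write result from the fio run above: ~124 IOPS
# RAID10 with 10 disks = 5 mirror pairs; each write lands on one pair
echo $((124 * 5))    # expected ~620 IOPS, versus the 489 measured on the array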
 
No, this is 10 active disks, no spares. I have two disks out of the RAID so I can do the install on a separate disk and test with an untaxed single disk.

Sent from my Nexus 5 using Tapatalk
 
Updated the disk firmware and played with these settings...
Same RAID 10, 10 disks

root@prox31:/home# megacli -LDSetProp disDskCache -LAll -aAll


Code:
Set Disk Cache Policy to Disabled on Adapter 0, VD 0 (target id: 0) success


Exit Code: 0x00
root@prox31:/home# pveperf /var/lib/vz
CPU BOGOMIPS:      63997.60
REGEX/SECOND:      1252758
HD SIZE:           1372.50 GB (/dev/sdb)
BUFFERED READS:    943.93 MB/sec
AVERAGE SEEK TIME: 6.36 ms
FSYNCS/SECOND:     89.66
DNS EXT:           232.97 ms
DNS INT:           0.80 ms (smbtrinidad.local)

root@prox31:/home# megacli -LDSetProp enDskCache -LAll -aAll


Code:
Set Disk Cache Policy to Enabled on Adapter 0, VD 0 (target id: 0) success


Exit Code: 0x00
root@prox31:/home# pveperf /var/lib/vz
CPU BOGOMIPS:      63997.60
REGEX/SECOND:      1260630
HD SIZE:           1372.50 GB (/dev/sdb)
BUFFERED READS:    943.29 MB/sec
AVERAGE SEEK TIME: 6.59 ms
FSYNCS/SECOND:     1595.28
DNS EXT:           875.42 ms
DNS INT:           0.73 ms (smbtrinidad.local)
 
So now the server seems very quick and stable, although we're probably looking to upgrade the RAID cards soon. For now I've set a cron job to re-enable the disk cache on every reboot (sketch below); I realized that even though megacli kept reporting it as enabled, it wasn't. Since we've taken one node offline and upgraded it, I will be doing the same for the second node, because the firmware updates gave me a 30% bump in performance. Thanks for all your assistance, everyone!
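
For reference, a minimal sketch of that cron entry, assuming megacli lives in /usr/sbin (the -LDGetProp syntax may differ slightly between MegaCli versions):

Code:
# /etc/cron.d/enable-dskcache -- re-apply the disk cache setting at every boot
@reboot root /usr/sbin/megacli -LDSetProp enDskCache -LAll -aAll

# verify it actually took effect, rather than trusting the set command's output:
# /usr/sbin/megacli -LDGetProp -DskCache -LAll -aAll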

Sent from my Nexus 5 using Tapatalk
 
Oh thheo, it's still set to write-through and no read-ahead... Will the normal disk cache still cause issues like this?

Sent from my Nexus 5 using Tapatalk
 
On PERC controllers, read cache and write cache are different settings/policies. In my case a BBU was 200 USD, and working with write-back caching makes a huge difference.
You could buy an H710 with 1GB of cache.
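
For completeness, a sketch of how those controller-cache policies can be checked and changed with megacli, assuming a controller that actually has onboard cache and a healthy BBU (the H310 has neither, so write-back may be refused or unsafe there):

Code:
# show the current cache policy (WriteThrough/WriteBack, ReadAhead, Direct/Cached)
megacli -LDGetProp -Cache -LAll -aAll

# enable read-ahead and write-back caching (only sensible with a working BBU)
megacli -LDSetProp RA -LAll -aAll
megacli -LDSetProp WB -LAll -aAll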