Slow Proxmox 2.1 Performance on Dell H310 RAID Card

Romjo

Hey guys, I've been reading different threads on here related to this and haven't found a definitive answer, at least not one I can try without taking the server out of production. The main issue is performance. I've moved other VMs off the server and now host only 3 Windows boxes and 1 Linux firewall. With everything shut down except the Linux FW, pveperf on /var/lib/vz gave slightly better results: 599 MB/s buffered reads and 40 fsyncs/second.
Reading through here, I can see that the fsyncs/second figure is terrible. We're getting I/O speed issues from the file server; right now we've turned off everything that touches the disk except the file shares themselves. Any help is appreciated.
Oh, it's also clustered with another Proxmox 2.1 box.

Basic System Configuration:
Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
96 GB RAM
12 x 320 GB SAS in RAID 10
Dell PERC H310 card




# pveperf /var/lib/vz (with 4 VMs running; this stays pretty much the same since the VMs aren't that intensive, see note above)
CPU BOGOMIPS: 63990.69
REGEX/SECOND: 886784
HD SIZE: 1443.74 GB (/dev/mapper/pve-data)
BUFFERED READS: 152.03 MB/sec
AVERAGE SEEK TIME: 9.31 ms
FSYNCS/SECOND: 26.84
DNS EXT: 160.37 ms
DNS INT: 1.34 ms


# pveversion -v
pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-39
pve-firmware: 1.0-15
libpve-common-perl: 1.0-27
libpve-access-control: 1.0-21
libpve-storage-perl: 2.0-18
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1


I would like to figure out whether it's a Proxmox issue or something else. Running the same test on the other Proxmox box gives similar stats, but right now it's heavily loaded with VMs, so it's an unfair comparison. The two systems are identical.

# pveperf /var/lib/vz
CPU BOGOMIPS: 63991.31
REGEX/SECOND: 748466
HD SIZE: 1443.74 GB (/dev/mapper/pve-data)
BUFFERED READS: 215.94 MB/sec
AVERAGE SEEK TIME: 7.95 ms
FSYNCS/SECOND: 37.51
DNS EXT: 158.01 ms
DNS INT: 1.23 ms
 
Box2 Versions:

# pveversion -v
pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-39
pve-firmware: 1.0-15
libpve-common-perl: 1.0-27
libpve-access-control: 1.0-21
libpve-storage-perl: 2.0-18
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1
 
02:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2008 [Falcon] (rev 03)
Kernel driver in use: megaraid_sas

# cat /proc/mounts
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
none /dev devtmpfs rw,relatime,size=49451728k,nr_inodes=12362932,mode=755 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,barrier=0,data=ordered 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0
/dev/sda1 /boot ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
none /sys/kernel/config configfs rw,relatime 0 0
beancounter /proc/vz/beancounter cgroup rw,relatime,blkio,name=beancounter 0 0
container /proc/vz/container cgroup rw,relatime,freezer,devices,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,relatime,cpuacct,cpu,cpuset,name=fairsched 0 0
 
I would test I/O performance with "fio".
Just apt-get install fio
and then run: fio /usr/share/doc/fio/examples/iometer-file-access-server
First edit that file and, in the [global] section, add "directory = /var/lib/vz" if you want to test the performance of /dev/mapper/pve-data.
Do this first on the host alone with no VMs running, then again with just one VM started (preferably one that doesn't do any I/O activity) and repeat the same fio iometer test.
This way you can see the performance penalty of a VM compared to the bare host. If you keep, say, 98% of the host's performance you are OK; if not, look at drivers and KVM emulation (CPU, disk type: virtio/SATA, etc.).
You didn't mention all the details about the VMs: their configuration, drivers, what kind of RAID setup, and which disks are used in the RAID.
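For illustration, the top of the edited example job file could look something like this; the directory line is the only addition, the rest mirrors fio's stock iometer-file-access-server example, so exact values may differ between fio versions:

Code:
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
# added line: run the test on the filesystem backed by /dev/mapper/pve-data
directory=/var/lib/vz

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
iodepth=64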
 
Sorry about that.
Primary VM with the issue is VM211:

VM211 File Server
boot: cdn
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 4096
name: <Removed>
net0: e1000=<Removed>,bridge=vmbr0
onboot: 1
ostype: wxp
sockets: 1
virtio0: local:211/vm-211-disk-1.vmdk
virtio1: local:211/vm-211-disk-2.vmdk
virtio2: local:211/vm-211-disk-3.vmdk

VM218 - MS SQL Server
boot: cdn
bootdisk: virtio0
cores: 4
ide0: local:218/vm-218-disk-4.vmdk
ide1: local:218/vm-218-disk-3.vmdk
ide2: none,media=cdrom
memory: 4096
name: <Removed>
net0: virtio=<Removed>,bridge=vmbr0
onboot: 1
ostype: win7
sockets: 1
virtio0: local:218/vm-218-disk-5.vmdk


VM202 - ISA Firewall
boot: cdn
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 1024
name: <Removed>
net0: virtio=<Removed>,bridge=vmbr0
net1: virtio=<Removed>,bridge=vmbr2
net2: rtl8139=<Removed>,bridge=vmbr1
onboot: 1
ostype: wxp
sockets: 1
virtio0: local:202/vm-202-disk-1.vmdk



VM107 - Linux FW
bootdisk: ide0
cores: 2
ide0: local:107/vm-107-disk-2.vmdk
memory: 1024
name: <Removed>
net0: e1000=<Removed>,bridge=vmbr1
net1: e1000=<Removed>,bridge=vmbr2
net2: e1000=<Removed>,bridge=vmbr0
onboot: 1
ostype: l26
sockets: 1

I'll run a benchmark tonight using fio; however, I'll have to run it with the Linux FW turned on (minimal, if any, disk I/O). I'll post those benchmarks here tonight.

Thanks thheo
 
PVEPerf with just the 1 Linux FW running.

Code:
# pveperf /var/lib/vz
CPU BOGOMIPS:      63990.69
REGEX/SECOND:      1037632
HD SIZE:           1443.74 GB (/dev/mapper/pve-data)
BUFFERED READS:    496.27 MB/sec
AVERAGE SEEK TIME: 7.37 ms
FSYNCS/SECOND:     45.67
DNS EXT:           4011.20 ms
DNS INT:           1.25 ms


With just the 1 VM Running - fio bench

Code:
# fio /usr/share/doc/fio/examples/iometer-file-access-server
iometer: (g=0): rw=randrw, bs=512-64K/512-64K, ioengine=libaio, iodepth=64
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [9971K/2572K /s] [1870/485 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=754510
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3273MB, bw=14135KB/s, iops=1855, runt=237145msec
    slat (usec): min=4, max=76742, avg=18.52, stdev=115.91
    clat (usec): min=358, max=410837, avg=26557.06, stdev=14950.20
    bw (KB/s) : min= 6800, max=22327, per=100.14%, avg=14153.65, stdev=3167.59
  write: io=842578KB, bw=3553KB/s, iops=463, runt=237145msec
    slat (usec): min=4, max=94358, avg=26.10, stdev=619.87
    clat (msec): min=2, max=451, avg=31.65, stdev=17.23
    bw (KB/s) : min=  807, max= 5959, per=100.15%, avg=3558.46, stdev=882.11
  cpu          : usr=1.76%, sys=4.46%, ctx=487459, majf=0, minf=1828
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued r/w: total=440107/109804, short=0/0
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.02%, 4=0.34%, 10=7.01%, 20=25.26%, 50=61.86%
     lat (msec): 100=5.15%, 250=0.32%, 500=0.02%


Run status group 0 (all jobs):
   READ: io=3273MB, aggrb=14134KB/s, minb=14474KB/s, maxb=14474KB/s, mint=237145msec, maxt=237145msec
  WRITE: io=842578KB, aggrb=3553KB/s, minb=3638KB/s, maxb=3638KB/s, mint=237145msec, maxt=237145msec


Disk stats (read/write):
  dm-2: ios=445454/112435, merge=0/0, ticks=11786425/3647413, in_queue=15435078, util=100.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=-nan%

Fio with all 4 VMs running

Code:
iometer: (groupid=0, jobs=1): err= 0: pid=704601
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3275MB, bw=13148KB/s, iops=1717, runt=255060msec
    slat (usec): min=5, max=3316, avg=20.90, stdev=13.28
    clat (usec): min=442, max=1344K, avg=28698.53, stdev=20633.68
    bw (KB/s) : min=  995, max=21875, per=100.34%, avg=13191.87, stdev=3177.44
  write: io=841236KB, bw=3298KB/s, iops=431, runt=255060msec
    slat (usec): min=7, max=806, avg=22.10, stdev= 9.95
    clat (msec): min=2, max=1275, avg=33.98, stdev=23.11
    bw (KB/s) : min=    0, max= 6075, per=98.14%, avg=3236.78, stdev=985.57
  cpu          : usr=1.63%, sys=4.78%, ctx=486757, majf=0, minf=2085
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued r/w: total=438047/109990, short=0/0
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.03%, 4=0.33%, 10=6.53%, 20=23.23%, 50=61.55%
     lat (msec): 100=7.74%, 250=0.54%, 500=0.01%, 750=0.02%, 1000=0.01%
     lat (msec): 2000=0.01%


Run status group 0 (all jobs):
   READ: io=3275MB, aggrb=13147KB/s, minb=13463KB/s, maxb=13463KB/s, mint=255060msec, maxt=255060msec
  WRITE: io=841235KB, aggrb=3298KB/s, minb=3377KB/s, maxb=3377KB/s, mint=255060msec, maxt=255060msec


Disk stats (read/write):
  dm-2: ios=443256/126809, merge=0/0, ticks=12718534/5813505, in_queue=18534242, util=100.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=-nan%

I used VMDK because we sometimes move VMs cross-platform, especially to ESXi, so we try to keep them in that format. Conversion time is a pain.
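(For what it's worth, when a conversion is unavoidable, qemu-img on the Proxmox host handles it; this is just a sketch using VM 211's first disk as a placeholder and assuming the default "local" storage path:)

Code:
# VMware image -> raw (placeholder paths under the default local storage)
qemu-img convert -f vmdk -O raw \
    /var/lib/vz/images/211/vm-211-disk-1.vmdk \
    /var/lib/vz/images/211/vm-211-disk-1.raw

# and back to vmdk when a VM has to move to ESXi
qemu-img convert -f raw -O vmdk \
    /var/lib/vz/images/211/vm-211-disk-1.raw \
    /var/lib/vz/images/211/vm-211-disk-1.vmdk

After converting, the disk line in the VM config would need to point at the new file (e.g. virtio0: local:211/vm-211-disk-1.raw) before booting.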
 
Can you tell me exactly which disk drives you are using? Also, did you activate write-back caching in your RAID setup?
Do you have a BBU installed on the PERC?
 
megaclisas-status output below.
Sorry, they're 300 GB, not 320 GB.

Code:
~# megaclisas-status
-- Controller informations --
-- ID | Model
c0 | PERC H310 Mini


-- Arrays informations --
-- ID | Type | Size | Status | InProgress
c0u0 | RAID10 | 1633G | Optimal | None


-- Disks informations
-- ID | Model | Status
c0u0p0 | WD WD3001BKHG D1S4WXS1E32PTTZS | Online, Spun Up
c0u0p1 | WD WD3001BKHG D1S4WXS1E32PTTXY | Online, Spun Up
c0u0p2 | WD WD3001BKHG D1S4WXN1E32MULTS | Online, Spun Up
c0u0p3 | WD WD3001BKHG D1S4WXN1E32LMKMV | Online, Spun Up
c0u0p4 | WD WD3001BKHG D1S4WX51C62D7241 | Online, Spun Up
c0u0p5 | WD WD3001BKHG D1S4WXN1E32MULKF | Online, Spun Up
c0u0p0 | WD WD3001BKHG D1S4WX51C62D7311 | Online, Spun Up
c0u0p1 | WD WD3001BKHG D1S4WX51C62D7215 | Online, Spun Up
c0u0p2 | WD WD3001BKHG D1S4WXN1E32KSLUV | Online, Spun Up
c0u0p3 | WD WD3001BKHG D1S4WX51C62D7177 | Online, Spun Up
c0u0p4 | WD WD3001BKHG D1S4WX51C62D8835 | Online, Spun Up
c0u0p5 | WD WD3001BKHG D1S4WXN1E32MSJKS | Online, Spun Up

megacli -AdpAllInfo -aALL

Code:
megacli -AdpAllInfo -aALL


                                     
Adapter #0


==============================================================================
                    Versions
                ================
Product Name    : PERC H310 Mini
Serial No       : 27801X1
FW Package Build: 20.10.1-0084


                    Mfg. Data
                ================
Mfg. Date       : 07/14/12
Rework Date     : 07/14/12
Revision No     : A01
Battery FRU     : N/A


                Image Versions in Flash:
                ================
BIOS Version       : 4.29.00_4.12.05.00_0x05110000
Preboot CLI Version: 03.02-015:#%00008
Ctrl-R Version     : 3.00-0020
NVDATA Version     : 3.09.03-0033
FW Version         : 2.120.14-1504
Boot Block Version : 2.02.00.00-0001


                Pending Images in Flash
                ================
None


                PCI Info
                ================
Controller Id: 0000
Vendor Id       : 1000
Device Id       : 0073
SubVendorId     : 1028
SubDeviceId     : 1f51


Host Interface  : PCIE


ChipRevision    : B2


Number of Frontend Port: 0 
Device Interface  : PCIE


Number of Backend Port: 8 
Port  :  Address
0        500056b37789abff 
1        0000000000000000 
2        0000000000000000 
3        0000000000000000 
4        0000000000000000 
5        0000000000000000 
6        0000000000000000 
7        0000000000000000 


                HW Configuration
                ================
SAS Address      : 5d4ae520b35b1400
BBU              : Absent
Alarm            : Absent
NVRAM            : Present
Serial Debugger  : Present
Memory           : Absent
Flash            : Present
Memory Size      : 0MB
TPM              : Absent
On board Expander: Absent
Upgrade Key      : Absent
Temperature sensor for ROC    : Present
Temperature sensor for controller    : Present


ROC temperature : 46  degree Celcius
Controller temperature : 46  degree Celcius


                Settings
                ================
Current Time                     : 11:59:35 1/8, 2014
Predictive Fail Poll Interval    : 300sec
Interrupt Throttle Active Count  : 16
Interrupt Throttle Completion    : 50us
Rebuild Rate                     : 30%
PR Rate                          : 30%
BGI Rate                         : 30%
Check Consistency Rate           : 30%
Reconstruction Rate              : 30%
Cache Flush Interval             : 4s
Max Drives to Spinup at One Time : 4
Delay Among Spinup Groups        : 12s
Physical Drive Coercion Mode     : 128MB
Cluster Mode                     : Disabled
Alarm                            : Disabled
Auto Rebuild                     : Enabled
Battery Warning                  : Disabled
Ecc Bucket Size                  : 15
Ecc Bucket Leak Rate             : 1440 Minutes
Restore HotSpare on Insertion    : Disabled
Expose Enclosure Devices         : Disabled
Maintain PD Fail History         : Disabled
Host Request Reordering          : Enabled
Auto Detect BackPlane Enabled    : SGPIO/i2c SEP
Load Balance Mode                : Auto
Use FDE Only                     : Yes
Security Key Assigned            : No
Security Key Failed              : No
Security Key Not Backedup        : No
Default LD PowerSave Policy      : Controller Defined
Maximum number of direct attached drives to spin up in 1 min : 20 
Auto Enhanced Import             : No
Any Offline VD Cache Preserved   : No
Allow Boot with Preserved Cache  : No
Disable Online Controller Reset  : No
PFK in NVRAM                     : No
Use disk activity for locate     : No
POST delay : 90 seconds


                Capabilities
                ================
RAID Level Supported             : RAID0, RAID1, RAID5, RAID00, RAID10, RAID50, PRL 11, PRL 11 with spanning, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span
Supported Drives                 : SAS, SATA


Allowed Mixing:


Mix in Enclosure Allowed


                Status
                ================
ECC Bucket Count                 : 0


                Limitations
                ================
Max Arms Per VD          : 16 
Max Spans Per VD         : 8 
Max Arrays               : 16 
Max Number of VDs        : 16 
Max Parallel Commands    : 31 
Max SGE Count            : 60 
Max Data Transfer Size   : 8192 sectors 
Max Strips PerIO         : 20 
Max LD per array         : 16 
Min Strip Size           : 64 KB
Max Strip Size           : 64 KB
Max Configurable CacheCade Size: 0 GB
Current Size of CacheCade      : 0 GB
Current Size of FW Cache       : 0 MB


                Device Present
                ================
Virtual Drives    : 1 
  Degraded        : 0 
  Offline         : 0 
Physical Devices  : 14 
  Disks           : 12 
  Critical Disks  : 0 
  Failed Disks    : 0 


                Supported Adapter Operations
                ================
Rebuild Rate                    : Yes
CC Rate                         : Yes
BGI Rate                        : Yes
Reconstruct Rate                : Yes
Patrol Read Rate                : Yes
Alarm Control                   : Yes
Cluster Support                 : No
BBU                             : No
Spanning                        : Yes
Dedicated Hot Spare             : Yes
Revertible Hot Spares           : Yes
Foreign Config Import           : Yes
Self Diagnostic                 : Yes
Allow Mixed Redundancy on Array : No
Global Hot Spares               : Yes
Deny SCSI Passthrough           : No
Deny SMP Passthrough            : No
Deny STP Passthrough            : No
Support Security                : No
Snapshot Enabled                : No
Support the OCE without adding drives : Yes
Support PFK                     : No
Support PI                      : No
Support Boot Time PFK Change    : No
Disable Online PFK Change       : No
Support Shield State            : No
Block SSD Write Disk Cache Change: No


                Supported VD Operations
                ================
Read Policy          : No
Write Policy         : No
IO Policy            : No
Access Policy        : Yes
Disk Cache Policy    : Yes
Reconstruction       : Yes
Deny Locate          : No
Deny CC              : No
Allow Ctrl Encryption: No
Enable LDBBM         : Yes
Support Breakmirror  : Yes
Power Savings        : Yes


                Supported PD Operations
                ================
Force Online                            : Yes
Force Offline                           : Yes
Force Rebuild                           : Yes
Deny Force Failed                       : No
Deny Force Good/Bad                     : No
Deny Missing Replace                    : No
Deny Clear                              : Yes
Deny Locate                             : No
Support Temperature                     : Yes
Disable Copyback                        : No
Enable JBOD                             : Yes
Enable Copyback on SMART                : No
Enable Copyback to SSD on SMART Error   : No
Enable SSD Patrol Read                  : No
PR Correct Unconfigured Areas           : Yes
Enable Spin Down of UnConfigured Drives : No
Disable Spin Down of hot spares         : Yes
Spin Down time                          : 30 
T10 Power State                         : Yes
                Error Counters
                ================
Memory Correctable Errors   : 0 
Memory Uncorrectable Errors : 0 


                Cluster Information
                ================
Cluster Permitted     : No
Cluster Active        : No


                Default Settings
                ================
Phy Polarity                     : 0 
Phy PolaritySplit                : 0 
Background Rate                  : 30 
Strip Size                       : 64kB
Flush Time                       : 4 seconds
Write Policy                     : WT
Read Policy                      : None
Cache When BBU Bad               : Disabled
Cached IO                        : No
SMART Mode                       : Mode 6
Alarm Disable                    : No
Coercion Mode                    : 128MB
ZCR Config                       : Unknown
Dirty LED Shows Drive Activity   : No
BIOS Continue on Error           : No
Spin Down Mode                   : None
Allowed Device Type              : SAS/SATA Mix
Allow Mix in Enclosure           : Yes
Allow HDD SAS/SATA Mix in VD     : No
Allow SSD SAS/SATA Mix in VD     : No
Allow HDD/SSD Mix in VD          : No
Allow SATA in Cluster            : No
Max Chained Enclosures           : 4 
Disable Ctrl-R                   : No
Enable Web BIOS                  : No
Direct PD Mapping                : Yes
BIOS Enumerate VDs               : Yes
Restore Hot Spare on Insertion   : No
Expose Enclosure Devices         : No
Maintain PD Fail History         : No
Disable Puncturing               : No
Zero Based Enclosure Enumeration : Yes
PreBoot CLI Enabled              : No
LED Show Drive Activity          : Yes
Cluster Disable                  : Yes
SAS Disable                      : No
Auto Detect BackPlane Enable     : SGPIO/i2c SEP
Use FDE Only                     : Yes
Enable Led Header                : No
Delay during POST                : 0 
EnableCrashDump                  : No
Disable Online Controller Reset  : No
EnableLDBBM                      : Yes
Un-Certified Hard Disk Drives    : Allow
Treat Single span R1E as R10     : Yes
Max LD per array                 : 16
Power Saving option              : Don't spin down unconfigured drives
Don't spin down Hot spares
Don't Auto spin down Configured Drives
Power settings apply to all drives - individual PD/LD power settings cannot be set
Max power savings option is  not allowed for LDs. Only T10 power conditions are to be used.
Cached writes are not used for spun down VDs
Can schedule disable power savings at controller level
Default spin down time in minutes: 30 
Enable JBOD                      : Yes
TTY Log In Flash                 : Yes
Auto Enhanced Import             : No
BreakMirror RAID Support         : Yes
Disable Join Mirror              : Yes
Enable Shield State              : No
Time taken to detect CME         : 60s


Exit Code: 0x00


megacli -AdpBbuCmd -GetBbuStatus -aALL

Code:
# megacli -AdpBbuCmd -GetBbuStatus -aALL

Adapter 0: Get BBU Status Failed.


Exit Code: 0x01

megacli -LDInfo -Lall -Aall | grep -i 'Current Cache Policy'

Code:
# megacli -LDInfo -Lall -Aall | grep -i 'Current Cache Policy'
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU

 
12 SAS 10k rpm drives in a single RAID10 configuration should achieve about 840 write IOPS (roughly 140 IOPS per drive), yet you only get 463 in one of your tests.
Can you test with nothing running except fio? I get better performance with 4 drives (15k rpm) in RAID5.
Of course, without write caching you'll get hit by all the burst write I/O that cannot be cached, but with fio you should still see the theoretical baseline performance of your setup, and to me it seems poor.
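The back-of-the-envelope arithmetic behind that estimate, assuming roughly 140 random IOPS per 10k SAS spindle and the usual RAID10 write penalty of 2:

Code:
12 drives x ~140 IOPS       = ~1680 raw IOPS
RAID10 write penalty of 2   : ~1680 / 2 = ~840 write IOPS
(reads can hit all 12 spindles, so up to ~1680 read IOPS)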
 
Okay, so I've run fio with nothing else running; I even did a reboot and made sure no VMs were started.

Code:
# fio /usr/share/doc/fio/examples/iometer-file-access-server
iometer: (g=0): rw=randrw, bs=512-64K/512-64K, ioengine=libaio, iodepth=64
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [10299K/2571K /s] [1959/485 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=2528
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3278MB, bw=13978KB/s, iops=1826, runt=240178msec
    slat (usec): min=8, max=67599, avg=28.52, stdev=437.24
    clat (usec): min=347, max=406968, avg=26945.23, stdev=17760.45
    bw (KB/s) : min=  638, max=22028, per=100.18%, avg=14002.23, stdev=3574.84
  write: io=837451KB, bw=3487KB/s, iops=456, runt=240178msec
    slat (usec): min=8, max=82965, avg=39.22, stdev=879.68
    clat (msec): min=2, max=411, avg=32.15, stdev=19.87
    bw (KB/s) : min=  143, max= 6229, per=100.24%, avg=3494.37, stdev=978.32
  cpu          : usr=1.61%, sys=3.33%, ctx=486907, majf=0, minf=2037
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued r/w: total=438621/109526, short=0/0
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.03%, 4=0.35%, 10=6.98%, 20=25.20%, 50=61.69%
     lat (msec): 100=5.16%, 250=0.50%, 500=0.09%


Run status group 0 (all jobs):
   READ: io=3278MB, aggrb=13977KB/s, minb=14313KB/s, maxb=14313KB/s, mint=240178msec, maxt=240178msec
  WRITE: io=837450KB, aggrb=3486KB/s, minb=3570KB/s, maxb=3570KB/s, mint=240178msec, maxt=240178msec


Disk stats (read/write):
  dm-2: ios=444703/110964, merge=0/0, ticks=11551431/3469922, in_queue=15022315, util=99.99%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=-nan%

# fio /usr/share/doc/fio/examples/iometer-file-access-server
iometer: (g=0): rw=randrw, bs=512-64K/512-64K, ioengine=libaio, iodepth=64
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [10024K/2343K /s] [1814/440 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=3006
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3275MB, bw=13814KB/s, iops=1803, runt=242741msec
    slat (usec): min=8, max=91899, avg=14.62, stdev=223.95
    clat (usec): min=274, max=271561, avg=27238.02, stdev=15129.18
    bw (KB/s) : min= 6126, max=24264, per=100.13%, avg=13832.62, stdev=3542.72
  write: io=841456KB, bw=3466KB/s, iops=450, runt=242741msec
    slat (usec): min=8, max=92795, avg=24.85, stdev=804.11
    clat (msec): min=2, max=312, avg=32.90, stdev=18.28
    bw (KB/s) : min= 1494, max= 6642, per=100.22%, avg=3473.76, stdev=973.39
  cpu          : usr=1.52%, sys=3.12%, ctx=490739, majf=0, minf=1828
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued r/w: total=437853/109371, short=0/0
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.02%, 4=0.33%, 10=6.85%, 20=24.64%, 50=61.23%
     lat (msec): 100=6.43%, 250=0.48%, 500=0.01%


Run status group 0 (all jobs):
   READ: io=3275MB, aggrb=13814KB/s, minb=14145KB/s, maxb=14145KB/s, mint=242741msec, maxt=242741msec
  WRITE: io=841456KB, aggrb=3466KB/s, minb=3549KB/s, maxb=3549KB/s, mint=242741msec, maxt=242741msec


Disk stats (read/write):
  dm-2: ios=442970/110805, merge=0/0, ticks=11997351/3630364, in_queue=15628640, util=100.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=-nan%
 
I'd say the performance is just a bit over half of what it could be. Are you sure there is no additional I/O going on while fio runs?
The only thing missing in your PERC config would be some read-ahead policy for reads, but even so the write performance is only 456 IOPS.
Could you test one disk separately, i.e. run fio against a single drive?
And could you activate the write policy "write back without BBU", just to test with fio and see whether it changes anything?

LE: Are you sure you have a RAID10 setup? Can you get the full config of the VD?
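For reference, the megacli commands for those steps would look roughly like the following. This is a sketch from memory (check megacli's own help for your build), and since the H310 reports 0 MB of onboard cache it may simply refuse the write-back setting:

Code:
# full virtual-drive configuration (RAID level, stripe size, cache policy)
megacli -LDInfo -Lall -aAll

# force write-back even without a BBU -- for testing only, revert afterwards
megacli -LDSetProp -ForcedWB -Immediate -Lall -aAll

# enable adaptive read-ahead on the VD
megacli -LDSetProp ADRA -Lall -aAll

# revert to write-through once the fio test is done
megacli -LDSetProp WT -Lall -aAll

# testing a single drive would mean freeing one from the array and creating a
# one-disk RAID0 VD, e.g.: megacli -CfgLdAdd -r0 [<enclosure>:<slot>] -a0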
 
Thinking more about your results, this cannot be a RAID10 setup; you should be getting pretty much the same IOPS for reads and writes. This looks more like a RAID5 or RAID6 setup.
 
Thinking more about your results, this cannot be a RAID10 setup; you should be getting pretty much the same IOPS for reads and writes. This looks more like a RAID5 or RAID6 setup.

Yeah, I know it does; that's why I'm worried about the performance! Ha. I'm not in the office now, but I can pull the data again when I get there.

From a few posts up:

~# megaclisas-status
-- Controller informations --
-- ID | Model
c0 | PERC H310 Mini


-- Arrays informations --
-- ID | Type | Size | Status | InProgress
c0u0 | RAID10 | 1633G | Optimal | None


-- Disks informations
-- ID | Model | Status
c0u0p0 | WD WD3001BKHG D1S4WXS1E32PTTZS | Online, Spun Up
c0u0p1 | WD WD3001BKHG D1S4WXS1E32PTTXY | Online, Spun Up
c0u0p2 | WD WD3001BKHG D1S4WXN1E32MULTS | Online, Spun Up
c0u0p3 | WD WD3001BKHG D1S4WXN1E32LMKMV | Online, Spun Up
c0u0p4 | WD WD3001BKHG D1S4WX51C62D7241 | Online, Spun Up
c0u0p5 | WD WD3001BKHG D1S4WXN1E32MULKF | Online, Spun Up
c0u0p0 | WD WD3001BKHG D1S4WX51C62D7311 | Online, Spun Up
c0u0p1 | WD WD3001BKHG D1S4WX51C62D7215 | Online, Spun Up
c0u0p2 | WD WD3001BKHG D1S4WXN1E32KSLUV | Online, Spun Up
c0u0p3 | WD WD3001BKHG D1S4WX51C62D7177 | Online, Spun Up
c0u0p4 | WD WD3001BKHG D1S4WX51C62D8835 | Online, Spun Up
c0u0p5 | WD WD3001BKHG D1S4WXN1E32MSJKS | Online, Spun Up


I can't do an fio run on a single disk without breaking the RAID setup, and for that I'll have to move the VMs off. I've been working on getting another server to host them temporarily; I'll let you know the progress. I'm also going to look through the installed apps and remove whatever I can beyond the core system.
 
Would time have any effect on this performance (detrimentally so)? The time was off between the nodes by 3 minutes.

My test so far has been W2k8 install times: with the RAW format it was pretty quick, compared to VMDK, which was taking around 2 hours.
I haven't re-tested the Server 2008 install on VMDK yet.
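(Side note on the 3-minute skew: a quick way to check and correct clock drift between the nodes, assuming the standard Debian ntp/ntpdate packages:)

Code:
# on each node: install ntp and look at the peers / current offset
apt-get install ntp ntpdate
ntpq -p

# one-off manual sync if a node is badly out (stop ntpd around it)
/etc/init.d/ntp stop
ntpdate pool.ntp.org
/etc/init.d/ntp start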
 
The time can't have anything to do with IOPS performance. As mir said, VMDK gives you a performance penalty, but I would still be worried about the overall write IOPS numbers.
 
I wasn't sure if there was some sort of polling that would affect the disks. Anyhow, I'm bringing the server down today, so I'll be able to update the firmware and break the RAID.

 
Disk 0 - same disk as the Proxmox install.
Code:
Disk 0
CPU BOGOMIPS:      63999.52
REGEX/SECOND:      1247555
HD SIZE:           68.66 GB (/dev/mapper/pve-root)
BUFFERED READS:    153.33 MB/sec
AVERAGE SEEK TIME: 6.72 ms
FSYNCS/SECOND:     47.75
DNS EXT:           146.81 ms
DNS INT:           0.82 ms

Disk 1 - single drive, different disk from the Proxmox install.
Code:
CPU BOGOMIPS:      63999.52
REGEX/SECOND:      1303049
HD SIZE:           275.01 GB (/dev/sdb)
BUFFERED READS:    188.18 MB/sec
AVERAGE SEEK TIME: 7.01 ms
FSYNCS/SECOND:     46.40
DNS EXT:           281.44 ms
DNS INT:           0.74 ms

RAID 0 - 10 Drives


Code:
CPU BOGOMIPS:      63999.52
REGEX/SECOND:      1294524
HD SIZE:           2744.99 GB (/dev/sdc)
BUFFERED READS:    1081.80 MB/sec
AVERAGE SEEK TIME: 7.45 ms
FSYNCS/SECOND:     53.22
DNS EXT:           230.36 ms
DNS INT:           0.71 ms