Hello,
I have a small test setup of three Proxmox servers, each using RAID10 on local storage.
Server1: 2.66GHz quad-core Xeon 3330, 8GB RAM, 4x 1TB SATA, AHCI
Server2: 1.66GHz Atom, 1GB RAM, 4x 2TB SATA, AHCI
Server3: 3.2GHz quad-core AMD Phenom II, 12GB RAM, 4x 500GB SATA, AHCI
All three servers have their RAID10 array built with mdadm:
mdadm --create /dev/md0 -l 10 -c 1024 -p f2 -n 4 /dev/sd[abcd]1
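To rule out a mis-built array, the layout and chunk size can be double-checked with the standard tools (for this array they should report a far=2 layout and 1024K chunks):

cat /proc/mdstat
mdadm --detail /dev/md0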
After the initial resync completes, I benchmark the bare /dev/md0 device (no filesystem) with hdparm -t:

/dev/md0: Timing buffered disk reads: 254 MB in 3.00 seconds = 84.54 MB/sec
With a filesystem on top (ext3 or ext4) it's still under 100MB/sec. A single disk manages about 120MB/sec, so the RAID10 array is actually slower than one drive!
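For the filesystem tests I use a plain sequential dd write/read, roughly like this (the /mnt/md0 mount point and the sizes are just examples; dropping caches avoids measuring RAM instead of the disks):

dd if=/dev/zero of=/mnt/md0/testfile bs=1M count=4096 conv=fdatasync
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/md0/testfile of=/dev/null bs=1M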
Increasing the readahead gives a better hdparm score, but it doesn't really improve real workloads.
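This is the sort of thing I've tried (the 8192 value, i.e. 4MB of readahead, is just an example; blockdev counts in 512-byte sectors):

blockdev --getra /dev/md0
blockdev --setra 8192 /dev/md0
hdparm -t /dev/md0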
What am I doing wrong? I know you prefer hardware RAID, but I've also seen people report good performance with Linux software RAID.