I have an HP Gen8 Microserver which I have been running for a few years. I've never benchmarked it; it's never been fast and I never expected it to be. But recently I thought I would give it an update from the old version of CentOS it was running to Proxmox 5, and take the opportunity to upgrade the hardware as well.
It currently consists of:
Intel Xeon E3-1260L
16GB ECC RAM
IBM M1015 HBA
4x 3TB WD Red (RAID10)
While migrating data to the new setup, I have been suffering from awful disk speeds. I appreciate the WD Reds are not the fastest disks in the world, but even so this seems bad.
I'm dd'ing a previous VM's raw disk image to a new Proxmox VM disk, and it's painfully slow: about 20MB/s, and it's the disks themselves that seem to be the bottleneck.
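The copy is essentially just the following (paths here are illustrative rather than my exact ones, and I varied bs between runs):
Code:
dd if=/mnt/old-pool/vm-100-disk-1.raw of=/var/lib/vz/images/100/vm-100-disk-1.raw bs=1M status=progress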
- I've tried a variety of dd block sizes from 4k to 64M, with no significant improvement.
- Rsyncing/cp'ing the file to the local filesystem is the same speed.
- I have been using the IBM M1015 card, but have also tested the onboard software RAID, which performs approximately the same.
- I've tried ZFS RAID 10 via JBOD passthrough (tested via both the M1015 and the onboard fake-RAID HBA), and also XFS and EXT4 filesystems on M1015 HBA RAID 10 (rough pool layout below).
- Changed the disk scheduler to noop/deadline without any real difference (commands below).
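For reference, the ZFS layout and scheduler changes were along these lines (device names and pool name are just examples):
Code:
# ZFS test: two mirror vdevs (RAID10-style), disks passed through as JBOD
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# scheduler change, repeated for each member disk (tried noop and deadline)
echo deadline > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler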
I've shown the results of iostat and sar below. The tps value is approx 80, which I guess is around the max IOPS for the WD Reds. As the disks seem to be writing at approx 10MB/s each, this would mean each transaction is probably 128KB(?). This is where my knowledge gets flaky and I don't know if that's normal, bad or irrelevant. Am I missing something obvious? Surely I'm not being unrealistic to expect my RAID10 array to be faster than 20MB/s for sustained disk activity?
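My rough working for that figure, in case I've got it wrong:
Code:
per-disk write rate / per-disk tps ~= average request size
10.26 MB/s / 83 tps ~= 0.124 MB ~= 124 KB
sar seems to agree: avgrq-sz ~= 235 sectors x 512 bytes ~= 120 KB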
Thank you for any help.
Steve
Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00    0.38   12.53    0.00   86.84

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              83.00         0.00        10.26          0         10
sdb              84.00         0.00        10.38          0         10
sdc              84.00         0.00        10.38          0         10
sdd              84.00         0.00        10.38          0         10
zd0               0.00         0.00         0.00          0          0
zd16              0.00         0.00         0.00          0          0
zd32              0.00         0.00         0.00          0          0
sde               0.00         0.00         0.00          0          0
dm-0              0.00         0.00         0.00          0          0

10:15:54 PM        DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
10:15:55 PM        sda     83.00      0.00  19552.00    235.57      1.06     13.01     12.00     99.60
10:15:55 PM        sdb     85.00      0.00  19728.00    232.09      1.06     12.89     11.76    100.00
10:15:55 PM        sdc     84.00      0.00  19544.00    232.67      1.04     12.57     11.76     98.80
10:15:55 PM        sdd     83.00      0.00  18272.00    220.14      1.04     12.92     11.90     98.80
10:15:55 PM        zd0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:15:55 PM       zd16      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:15:55 PM       zd32      6.00      0.00     48.00      8.00      0.98     20.67    161.33     96.80
10:15:55 PM        sde      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
10:15:55 PM MaxtorLUKS      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00