I've been getting slow performance from my drives, especially during backups.
Maybe that's just what to expect from 8 TB drives, but I thought I'd look into it and see if I can improve things at all.
This is what I came up with:
Code:
root@pmhost:~# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 56058 MB in 2.00 seconds = 28091.49 MB/sec
Timing buffered disk reads: 748 MB in 3.01 seconds = 248.70 MB/sec
root@pmhost:~# hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 58344 MB in 2.00 seconds = 29239.03 MB/sec
Timing buffered disk reads: 714 MB in 3.00 seconds = 237.62 MB/sec
root@pmhost:~# hdparm -tT /dev/sdc
/dev/sdc:
Timing cached reads: 57226 MB in 2.00 seconds = 28676.96 MB/sec
Timing buffered disk reads: 746 MB in 3.00 seconds = 248.49 MB/sec
root@pmhost:~# hdparm -tT /dev/sdd
/dev/sdd:
Timing cached reads: 56568 MB in 2.00 seconds = 28347.41 MB/sec
Timing buffered disk reads: 728 MB in 3.01 seconds = 242.06 MB/sec
root@pmhost:~# hdparm -tT /dev/sde
/dev/sde:
Timing cached reads: 59282 MB in 2.00 seconds = 29709.93 MB/sec
Timing buffered disk reads: 1526 MB in 3.00 seconds = 508.11 MB/sec
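# (note: sde reads roughly twice as fast as the others here - it turns out to be the SSD, per the model check below)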
root@pmhost:~# hdparm -I /dev/sda | grep -i speed
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sdb | grep -i speed
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sdc | grep -i speed
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sdd | grep -i speed
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sde | grep -i speed
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# dmesg | grep -i sata | grep 'link up'
[ 2.063947] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 2.071931] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 2.071953] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 2.071977] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 2.072001] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
root@pmhost:~# dmesg | grep -i ahci
[ 1.591359] ahci 0000:0c:00.0: version 3.0
[ 1.591662] ahci 0000:0c:00.0: AHCI 0001.0301 32 slots 4 ports 6 Gbps 0xf impl RAID mode
[ 1.591665] ahci 0000:0c:00.0: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part
[ 1.592038] scsi host0: ahci
[ 1.592135] scsi host1: ahci
[ 1.592210] scsi host2: ahci
[ 1.592288] scsi host3: ahci
[ 1.592464] ahci 0000:0d:00.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl RAID mode
[ 1.592466] ahci 0000:0d:00.0: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part
[ 1.592599] scsi host4: ahci
root@pmhost:~# hdparm /dev/sd[abcde]
/dev/sda:
multcount = 0 (off)
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 972801/255/63, sectors = 15628053168, start = 0
/dev/sdb:
multcount = 0 (off)
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 972801/255/63, sectors = 15628053168, start = 0
/dev/sdc:
multcount = 0 (off)
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 972801/255/63, sectors = 15628053168, start = 0
/dev/sdd:
multcount = 0 (off)
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 972801/255/63, sectors = 15628053168, start = 0
/dev/sde:
multcount = 1 (on)
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 121601/255/63, sectors = 1953525168, start = 0
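# (note: multcount is off on all four Seagates but on for the SSD, sde)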
root@pmhost:~# hdparm -I /dev/sd[abcde] | grep "Model Number"
Model Number: ST8000NE001-2M7101
Model Number: ST8000NE001-2M7101
Model Number: ST8000NE001-2M7101
Model Number: ST8000NE001-2M7101
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 04 51 e0 01 21 04 00 00 80 2f 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Model Number: Samsung SSD 840 EVO 1TB
root@pmhost:~# hdparm -I /dev/sd[abcd] | grep "Model Number"
Model Number: ST8000NE001-2M7101
Model Number: ST8000NE001-2M7101
Model Number: ST8000NE001-2M7101
Model Number: ST8000NE001-2M7101
root@pmhost:~# hdparm -I /dev/sde | grep "Model Number"
Model Number: Samsung SSD 840 EVO 1TB
root@pmhost:~# hdparm -m16 /dev/sda
/dev/sda:
setting multcount to 16
Use of -m is VERY DANGEROUS.
Only the old IDE drivers work correctly with -m with kernels up to at least 2.6.29.
libata drives may fail and get hung if you set this flag.
Please supply the --yes-i-know-what-i-am-doing flag if you really want this.
Program aborted.
So, is setting multcount with -m16 actually dangerous for these drives, or is that warning overly cautious?
Google results strongly recommend this tweak, but I can't find so much as a note of caution about this parameter anywhere else.
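For reference, here's what I think I'd run if the warning really is overly cautious (I haven't tried it yet). The --yes-i-know-what-i-am-doing flag is the one hdparm itself asks for above; I'm assuming -m0 with the same flag would switch the setting back off if reads start misbehaving:
Code:
# check what the drive itself reports as its supported multiple-sector setting
hdparm -I /dev/sda | grep -i multiple
# force the change with the override flag hdparm asks for
hdparm -m16 --yes-i-know-what-i-am-doing /dev/sda
# re-run the benchmark to see if it actually moves the numbers
hdparm -tT /dev/sda
# revert to off if anything misbehaves (assuming -m0 needs the same flag)
hdparm -m0 --yes-i-know-what-i-am-doing /dev/sda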