hdparm -m16 sata dangerous?

Kodey

Member
Oct 26, 2021
I've been getting slow performance with my drives, especially during backups.
Maybe that's just expected for large 8 TB drives, but I thought I'd look into it and see if I can improve things at all.
This is what I came up with:
Code:
root@pmhost:~# hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   56058 MB in  2.00 seconds = 28091.49 MB/sec
 Timing buffered disk reads: 748 MB in  3.01 seconds = 248.70 MB/sec

root@pmhost:~# hdparm -tT /dev/sdb
/dev/sdb:
 Timing cached reads:   58344 MB in  2.00 seconds = 29239.03 MB/sec
 Timing buffered disk reads: 714 MB in  3.00 seconds = 237.62 MB/sec

root@pmhost:~# hdparm -tT /dev/sdc
/dev/sdc:
 Timing cached reads:   57226 MB in  2.00 seconds = 28676.96 MB/sec
 Timing buffered disk reads: 746 MB in  3.00 seconds = 248.49 MB/sec

root@pmhost:~# hdparm -tT /dev/sdd
/dev/sdd:
 Timing cached reads:   56568 MB in  2.00 seconds = 28347.41 MB/sec
 Timing buffered disk reads: 728 MB in  3.01 seconds = 242.06 MB/sec

root@pmhost:~# hdparm -tT /dev/sde
/dev/sde:
 Timing cached reads:   59282 MB in  2.00 seconds = 29709.93 MB/sec
 Timing buffered disk reads: 1526 MB in  3.00 seconds = 508.11 MB/sec



root@pmhost:~# hdparm -I /dev/sda | grep -i speed
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sdb | grep -i speed
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sdc | grep -i speed
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sdd | grep -i speed
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Gen3 signaling speed (6.0Gb/s)
root@pmhost:~# hdparm -I /dev/sde | grep -i speed
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Gen3 signaling speed (6.0Gb/s)


root@pmhost:~# dmesg | grep -i sata | grep 'link up'
[    2.063947] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.071931] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.071953] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.071977] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.072001] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)


root@pmhost:~# dmesg | grep -i ahci
[    1.591359] ahci 0000:0c:00.0: version 3.0
[    1.591662] ahci 0000:0c:00.0: AHCI 0001.0301 32 slots 4 ports 6 Gbps 0xf impl RAID mode
[    1.591665] ahci 0000:0c:00.0: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part
[    1.592038] scsi host0: ahci
[    1.592135] scsi host1: ahci
[    1.592210] scsi host2: ahci
[    1.592288] scsi host3: ahci
[    1.592464] ahci 0000:0d:00.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl RAID mode
[    1.592466] ahci 0000:0d:00.0: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum part
[    1.592599] scsi host4: ahci


root@pmhost:~# hdparm /dev/sd[abcde]

/dev/sda:
 multcount     =  0 (off)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 972801/255/63, sectors = 15628053168, start = 0

/dev/sdb:
 multcount     =  0 (off)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 972801/255/63, sectors = 15628053168, start = 0

/dev/sdc:
 multcount     =  0 (off)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 972801/255/63, sectors = 15628053168, start = 0

/dev/sdd:
 multcount     =  0 (off)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 972801/255/63, sectors = 15628053168, start = 0

/dev/sde:
 multcount     =  1 (on)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 121601/255/63, sectors = 1953525168, start = 0

root@pmhost:~# hdparm -I /dev/sd[abcde] | grep "Model Number"
        Model Number:       ST8000NE001-2M7101
        Model Number:       ST8000NE001-2M7101
        Model Number:       ST8000NE001-2M7101
        Model Number:       ST8000NE001-2M7101
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 e0 01 21 04 00 00 80 2f 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        Model Number:       Samsung SSD 840 EVO 1TB
root@pmhost:~# hdparm -I /dev/sd[abcd] | grep "Model Number"
        Model Number:       ST8000NE001-2M7101
        Model Number:       ST8000NE001-2M7101
        Model Number:       ST8000NE001-2M7101
        Model Number:       ST8000NE001-2M7101
root@pmhost:~# hdparm -I /dev/sde | grep "Model Number"
        Model Number:       Samsung SSD 840 EVO 1TB


root@pmhost:~# hdparm -m16 /dev/sda

/dev/sda:
 setting multcount to 16
Use of -m is VERY DANGEROUS.
Only the old IDE drivers work correctly with -m with kernels up to at least 2.6.29.
libata drives may fail and get hung if you set this flag.
Please supply the --yes-i-know-what-i-am-doing flag if you really want this.
Program aborted.

So, is setting multcount with -m16 actually dangerous for these drives, or is that warning over-cautious?
Google results highly recommend this tweak, but I can't find anything elsewhere about even needing to be cautious with this parameter.
 
It sounds very unlikely that it will physically damage the drives. And I assume that you don't care about the data on the drives during testing. Maybe a forum on hard drives will have more people familiar with hdparm tuning.
 
I do care about the data, these are a zpool for my Proxmox 7.2. That's why I asked here before going ahead.
Which forum do you recommend @leesteken?
Code:
root@pmhost:~# hdparm -i /dev/sd[abcde] | grep MultSect
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
 BuffType=unknown, BuffSize=unknown, MaxMultSect=1, MultSect=1
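Pulling MaxMultSect out of that output can be scripted, so the value passed to -m never exceeds what the drive advertises. A minimal sketch; the sample line is copied from the output above, and on the live system you would pipe from hdparm -i /dev/sda instead:

```shell
# Sketch: extract the drive-advertised MaxMultSect and treat it as the
# upper bound for -m. The sample line is copied from the hdparm -i
# output above; live, replace it with:  hdparm -i /dev/sda
line='BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off'
max=$(printf '%s\n' "$line" | grep -oE 'MaxMultSect=[0-9]+' | cut -d= -f2)
echo "safe upper bound: -m${max}"
```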
 
According to the documentation, the setting can corrupt data if the drive falsely advertises support for it. I think you need someone who knows whether this works with your specific hard drives.
 
The Seagate manual and Seagate support say this will work.
When I try to update the setting, it looks like it succeeds, but nothing actually changes:
Code:
root@pmhost:~# hdparm -m16 --yes-i-know-what-i-am-doing /dev/sda

/dev/sda:
 setting multcount to 16
 multcount     =  0 (off)
root@pmhost:~# hdparm /dev/sda

/dev/sda:
 multcount     =  0 (off)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 972801/255/63, sectors = 15628053168, start = 0

Is that what I should expect? What have I done wrong?
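One way to cross-check would be to read the drive's own IDENTIFY data with hdparm -I rather than trusting the summary view, since hdparm -I includes an "R/W multiple sector transfer: Max = ... Current = ..." line. A minimal sketch; the Current value in the sample line below is an assumption, not output from this system, so run hdparm -I /dev/sda live to see the real value:

```shell
# Sketch: parse the IDENTIFY "Current" multi-sector count. The sample
# line mimics hdparm -I output; the value 16 here is an assumption --
# on the live system, pipe from:  hdparm -I /dev/sda
ident='R/W multiple sector transfer: Max = 16 Current = 16'
cur=$(printf '%s\n' "$ident" | grep -oE 'Current = [0-9?]+' | awk '{print $3}')
echo "IDENTIFY reports current multi-sector count: ${cur}"
```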
 
On boot it already seems to be set by default, and maybe the state is just being reported incorrectly to hdparm:

Code:
May 20 16:30:01 pmhost kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 20 16:30:01 pmhost kernel: ata1.00: ATA-11: ST8000NE001-2M7101, EN01, max UDMA/133
May 20 16:30:01 pmhost kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 20 16:30:01 pmhost kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 20 16:30:01 pmhost kernel: ata4.00: ATA-11: ST8000NE001-2M7101, EN01, max UDMA/133
May 20 16:30:01 pmhost kernel: ata3.00: ATA-11: ST8000NE001-2M7101, EN01, max UDMA/133
May 20 16:30:01 pmhost kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 20 16:30:01 pmhost kernel: ata2.00: ATA-11: ST8000NE001-2M7101, EN01, max UDMA/133
May 20 16:30:01 pmhost kernel: ata1.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 20 16:30:01 pmhost kernel: ata1.00: Features: NCQ-sndrcv
May 20 16:30:01 pmhost kernel: ata4.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 20 16:30:01 pmhost kernel: ata4.00: Features: NCQ-sndrcv
May 20 16:30:01 pmhost kernel: ata3.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 20 16:30:01 pmhost kernel: ata3.00: Features: NCQ-sndrcv
May 20 16:30:01 pmhost kernel: ata2.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 20 16:30:01 pmhost kernel: ata2.00: Features: NCQ-sndrcv
May 20 16:30:01 pmhost kernel: ata1.00: configured for UDMA/133
May 20 16:30:01 pmhost kernel: scsi 0:0:0:0: Direct-Access ATA ST8000NE001-2M71 EN01 PQ: 0 ANSI: 5
May 20 16:30:01 pmhost kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0
May 20 16:30:01 pmhost kernel: sd 0:0:0:0: [sda] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
May 20 16:30:01 pmhost kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 20 16:30:01 pmhost kernel: sd 0:0:0:0: [sda] Write Protect is off
May 20 16:30:01 pmhost kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
May 20 16:30:01 pmhost kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 20 16:30:01 pmhost kernel: ata4.00: configured for UDMA/133
May 20 16:30:01 pmhost kernel: ata3.00: configured for UDMA/133
May 20 16:30:01 pmhost kernel: ata2.00: configured for UDMA/133
May 20 16:30:01 pmhost kernel: scsi 1:0:0:0: Direct-Access ATA ST8000NE001-2M71 EN01 PQ: 0 ANSI: 5
May 20 16:30:01 pmhost kernel: sd 1:0:0:0: Attached scsi generic sg1 type 0
May 20 16:30:01 pmhost kernel: sd 1:0:0:0: [sdb] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
May 20 16:30:01 pmhost kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
May 20 16:30:01 pmhost kernel: sd 1:0:0:0: [sdb] Write Protect is off
May 20 16:30:01 pmhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
May 20 16:30:01 pmhost kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 20 16:30:01 pmhost kernel: scsi 2:0:0:0: Direct-Access ATA ST8000NE001-2M71 EN01 PQ: 0 ANSI: 5
May 20 16:30:01 pmhost kernel: sd 2:0:0:0: Attached scsi generic sg2 type 0
May 20 16:30:01 pmhost kernel: sd 2:0:0:0: [sdc] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
May 20 16:30:01 pmhost kernel: sd 2:0:0:0: [sdc] 4096-byte physical blocks
May 20 16:30:01 pmhost kernel: sd 2:0:0:0: [sdc] Write Protect is off
May 20 16:30:01 pmhost kernel: sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
May 20 16:30:01 pmhost kernel: sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 20 16:30:01 pmhost kernel: scsi 3:0:0:0: Direct-Access ATA ST8000NE001-2M71 EN01 PQ: 0 ANSI: 5
May 20 16:30:01 pmhost kernel: sd 3:0:0:0: Attached scsi generic sg3 type 0
May 20 16:30:01 pmhost kernel: sd 3:0:0:0: [sdd] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
May 20 16:30:01 pmhost kernel: sd 3:0:0:0: [sdd] 4096-byte physical blocks
May 20 16:30:01 pmhost kernel: sd 3:0:0:0: [sdd] Write Protect is off
May 20 16:30:01 pmhost kernel: sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
May 20 16:30:01 pmhost kernel: sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 20 16:30:01 pmhost kernel: sda: sda1 sda9
May 20 16:30:01 pmhost kernel: sdd: sdd1 sdd9
May 20 16:30:01 pmhost kernel: sdc: sdc1 sdc9
May 20 16:30:01 pmhost kernel: sdb: sdb1 sdb9
May 20 16:30:01 pmhost kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 20 16:30:01 pmhost kernel: sd 2:0:0:0: [sdc] Attached SCSI disk
May 20 16:30:01 pmhost kernel: sd 3:0:0:0: [sdd] Attached SCSI disk
May 20 16:30:01 pmhost kernel: sd 1:0:0:0: [sdb] Attached SCSI disk

It's hard to tell whether multi 16: is just being reported in the log or is actually in effect. Am I reading that right?
It's also interesting to note that the ATA layer numbers the drives 1-4 while the SCSI layer numbers them 0-3.
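For what it's worth, that multi 16: token appears to be libata's own per-device report of the multi-sector count at probe time, so grepping for it is a quick way to pull the value out per drive. A sketch over a saved copy of the log above (on the live system you would grep dmesg directly):

```shell
# Sketch: extract the per-device "multi N" token from the boot log.
# Sample lines are copied from the log above; live, use:
#   dmesg | grep -oE 'ata[0-9]+\.00:.*multi [0-9]+'
cat > /tmp/bootlog.txt <<'EOF'
ata1.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
ata2.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
ata3.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
ata4.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
EOF
grep -oE 'multi [0-9]+' /tmp/bootlog.txt | sort -u
```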