HDDs still spin down despite hdparm -S 0

Jan 24, 2023
Hi there!
I'm struggling with a recent issue. Apparently this hasn't been an issue before, as the HDDs have been running 24/7 since I started running Proxmox close to 9 months ago.
My HDDs now spin down after about 15 minutes of inactivity, which means they spin down and back up anywhere from 10 to 50 times per day. While they are Seagate IronWolf 8TB drives rated for a fair number of spin-down cycles, I imagine it's not healthy for them to spin down this often, and it's probably better for them to keep spinning 24/7.

I first tried setting hdparm -B, but it seems the drives don't support it. I then tried setting hdparm -S 0 for each of the drives, which appeared to apply, but the drives still spin down after 15 minutes of inactivity.

First, I'm wondering how I can check this in Proxmox. Right now I'm checking by listening to the server to hear when the drives spin down, and by watching power consumption on a "Kill A Watt"-style plug.

Secondly, I'm wondering what, if anything, I can do to stop them spinning down so often.

Thank you in advance.

EDIT: After some more searching I see that `hdparm -C` returns "unknown" on all 5 of my HDDs. I assume that means hdparm isn't able to spin the drives up or down, and something else is managing them.
Can anyone help with what other options I might have?
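
For reference, this is roughly what I ran (shown for one drive; the same for /dev/sda through /dev/sde):

Code:
# Query APM support (these drives report it as not supported)
hdparm -B /dev/sda
# Disable the standby (spin-down) timeout
hdparm -S 0 /dev/sda
# Check the current power state; on all 5 drives this prints "unknown"
hdparm -C /dev/sda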
 
OK, as a temporary fix until you find a proper solution: just run a simple bash script that creates and deletes a small text file every few minutes; that should prevent the disks from spinning down.

Code:
#!/bin/bash
# Run this from a directory that lives on the HDDs so the writes actually hit those disks.
while true; do
    touch test.txt   # small write to keep the disks awake
    sleep 500        # ~8 minutes, well inside the 15-minute idle timeout
    rm test.txt      # clean up again
done

For a permanent fix: how are the drives connected? HBA, RAID card, SAS expander, or directly to the mainboard?
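
If you'd rather not leave a shell loop running, a cron entry does the same job (the path below is only a placeholder for a directory on the affected disks):

Code:
# /etc/crontab: touch a file on the array every 10 minutes
*/10 * * * * root touch /path/on/your/array/.keepalive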
 
Thanks, that's a very quick and dirty patch, I love it!
As for how they're connected: the 5 drives are attached directly to the motherboard via SATA.
On Proxmox they're set up in a RAID6 using mdadm (because I didn't want the heavy read/write overhead that comes with ZFS),
and the OS is on 2 SSDs, also set up in a RAID1 with mdadm, connected directly to the motherboard via M.2 slots.
 
If you add the drives (or a directory on those drives) as a Storage, Proxmox will prevent them from spinning down, because it continuously polls the storage for its graphs (a common complaint on this forum).
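
If you want to confirm whether anything is actually touching the disks, something like this shows per-disk I/O over time (sysstat is assumed to be installed; the 60-second interval is arbitrary):

Code:
apt install sysstat
# Report I/O for the five data disks every 60 seconds
iostat -d sda sdb sdc sdd sde 60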
 
Do you mean this? The drives were added as a directory storage months ago, and that's how I access them. Unfortunately this hasn't changed anything.
[attached screenshot of the Proxmox storage configuration]
 
You probably need sdparm (apt install sdparm), as those are SATA drives and Linux sees them as SCSI instead of IDE. Simply check if fdisk -l shows your drives as /dev/sdX.

Then, use sdparm --flexible -6 -l -a /dev/sdX | grep -E 'SCT|STANDBY ' on each drive.
  • SCT = value * 100 ms (50 ms on some drives) is the time to wait before spinning down. The default is 4294967286, which means "infinite", i.e. no spin down.
  • STANDBY: 0 = disabled, 1 = enabled. The default is 0.

Supposing that your drives have STANDBY set to 1, this should be enough to stop them from spinning down:

sdparm --flexible -6 -l --save --set STANDBY=0 /dev/sdX

This has worked for me to spin down a USB drive used for backups once a week, so it may help you too: SOURCE.
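
If all five drives end up needing the same change, a small loop saves some typing (assuming the drives really are sda through sde):

Code:
# Disable and persist the standby condition on each data drive
for d in /dev/sd{a..e}; do
    sdparm --flexible -6 -l --save --set STANDBY=0 "$d"
done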
 
Thank you so much for this, it's fantastic information. Unfortunately I think something isn't quite right.

The command you sent, sdparm --flexible -6 -l -a /dev/sdX | grep -E 'SCT|STANDBY ', returns nothing for each of the drives a, b, c, d and e.
Running both sdparm --flexible -6 -l --get SCT /dev/sdX and sdparm --flexible -6 -l --get STANDBY /dev/sdX reports that the parameter is not supported.

Code:
root@pve:~# sdparm --flexible -6 -l --get SCT /dev/sda
    /dev/sda: ATA       ST8000VN004-2M21  SC60
SCT not found in Power condition [po] mode page
root@pve:~# sdparm --flexible -6 -l --get STANDBY /dev/sda
    /dev/sda: ATA       ST8000VN004-2M21  SC60
STANDBY not found in Power condition [po] mode page

From the source you provided I gathered some extra info; I'm not sure if it's of any use in finding out what the issue is.

Code:
root@pve:~# sdparm --flexible -6 -l -i -a /dev/sda
    /dev/sda: ATA       ST8000VN004-2M21  SC60
Supported VPD pages VPD page:
  [0x00] Supported VPD pages [sv]
  [0x80] Unit serial number [sn]
  [0x83] Device identification [di]
  [0x89] ATA information (SAT) [ai]
  [0xb0] Block limits (SBC) [bl]
  [0xb1] Block device characteristics (SBC) [bdc]
  [0xb2] Logical block provisioning (SBC) [lbpv]

Code:
root@pve:~# sdparm --flexible -6 -l -a /dev/sda
    /dev/sda: ATA       ST8000VN004-2M21  SC60
    Direct access device specific parameters: WP=0  DPOFUA=0
Read write error recovery [rw] mode page:
  AWRE          1  [cha: n, def:  1]  Automatic write reallocation enabled
  ARRE          0  [cha: n, def:  0]  Automatic read reallocation enabled
  TB            0  [cha: n, def:  0]  Transfer block
  RC            0  [cha: n, def:  0]  Read continuous
  EER           0  [cha: n, def:  0]  Enable early recovery (obsolete)
  PER           0  [cha: n, def:  0]  Post error
  DTE           0  [cha: n, def:  0]  Data terminate on error
  DCR           0  [cha: n, def:  0]  Disable correction (obsolete)
  RRC           0  [cha: n, def:  0]  Read retry count
  COR_S         0  [cha: n, def:  0]  Correction span (obsolete)
  HOC           0  [cha: n, def:  0]  Head offset count (obsolete)
  DSOC          0  [cha: n, def:  0]  Data strobe offset count (obsolete)
  LBPERE        0  [cha: n, def:  0]  Logical block provisioning error reporting enabled
  MWR           0  [cha: n, def:  0]  Misaligned write reporting
  EMCDR         0  [cha: n, def:  0]  Enhanced media certification and defect reporting
  WRC           0  [cha: n, def:  0]  Write retry count
  ERWS          0  [cha: n, def:  0]  Error reporting window size (blocks)
  RTL           0  [cha: n, def:  0]  Recovery time limit (ms)
Caching (SBC) [ca] mode page:
  IC            0  [cha: n, def:  0]  Initiator control
  ABPF          0  [cha: n, def:  0]  Abort pre-fetch
  CAP           0  [cha: n, def:  0]  Caching analysis permitted
  DISC          0  [cha: n, def:  0]  Discontinuity
  SIZE          0  [cha: n, def:  0]  Size enable
  WCE           1  [cha: y, def:  1]  Write cache enable
  MF            0  [cha: n, def:  0]  Multiplication factor
  RCD           0  [cha: n, def:  0]  Read cache disable
  DRRP          0  [cha: n, def:  0]  Demand read retention priority
  WRP           0  [cha: n, def:  0]  Write retention priority
  DPTL          0  [cha: n, def:  0]  Disable pre-fetch transfer length
  MIPF          0  [cha: n, def:  0]  Minimum pre-fetch
  MAPF          0  [cha: n, def:  0]  Maximum pre-fetch
  MAPFC         0  [cha: n, def:  0]  Maximum pre-fetch ceiling
  FSW           0  [cha: n, def:  0]  Force sequential write
  LBCSS         0  [cha: n, def:  0]  Logical block cache segment size
  DRA           0  [cha: n, def:  0]  Disable read ahead
  SYNC_PROG     0  [cha: n, def:  0]  Synchronous cache progress indication
  NV_DIS        0  [cha: n, def:  0]  Non-volatile cache disable
  NCS           0  [cha: n, def:  0]  Number of cache segments
  CSS           0  [cha: n, def:  0]  Cache segment size
Control [co] mode page:
  TST           0  [cha: n, def:  0]  Task set type
  TMF_ONLY      0  [cha: n, def:  0]  Task management functions only
  DPICZ         0  [cha: n, def:  0]  Disable protection information check if protect field zero
  D_SENSE       0  [cha: y, def:  0]  Descriptor format sense data
  GLTSD         1  [cha: n, def:  1]  Global logging target save disable
  RLEC          0  [cha: n, def:  0]  Report log exception condition
  QAM           0  [cha: n, def:  0]  Queue algorithm modifier
  NUAR          0  [cha: n, def:  0]  No unit attention on release
  QERR          0  [cha: n, def:  0]  Queue error management
  RAC           0  [cha: n, def:  0]  Report a check
  UA_INTLCK     0  [cha: n, def:  0]  Unit attention interlocks control
  SWP           0  [cha: n, def:  0]  Software write protect
  ATO           0  [cha: n, def:  0]  Application tag owner
  TAS           0  [cha: n, def:  0]  Task aborted status
  ATMPE         0  [cha: n, def:  0]  Application tag mode page enabled
  RWWP          0  [cha: n, def:  0]  Reject write without protection
  SBLP          0  [cha: n, def:  0]  Supported block lengths and protection information
  AUTOLOAD      0  [cha: n, def:  0]  Autoload mode
  BTP           -1  [cha: n, def: -1]  Busy timeout period (100us)
  ESTCT         30  [cha: n, def: 30]  Extended self test completion time (sec)

A side note: these drives are set up in a RAID6 with mdadm.
Code:
root@pve:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 11 12:35:49 2023
        Raid Level : raid6
        Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813893120 (7451.91 GiB 8001.43 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Sep 17 00:49:11 2023
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : pve:1  (local to host pve)
              UUID : 592b0087:cfab0caf:5ca8c3cc:7c5f4fc4
            Events : 51985

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1

Code:
root@pve:~# lsblk
NAME                                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                    8:0    0   7.3T  0 disk 
└─sda1                                                 8:1    0   7.3T  0 part 
  └─md1                                                9:1    0  21.8T  0 raid6 /mnt/md1
sdb                                                    8:16   0   7.3T  0 disk 
└─sdb1                                                 8:17   0   7.3T  0 part 
  └─md1                                                9:1    0  21.8T  0 raid6 /mnt/md1
sdc                                                    8:32   0   7.3T  0 disk 
└─sdc1                                                 8:33   0   7.3T  0 part 
  └─md1                                                9:1    0  21.8T  0 raid6 /mnt/md1
sdd                                                    8:48   0   7.3T  0 disk 
└─sdd1                                                 8:49   0   7.3T  0 part 
  └─md1                                                9:1    0  21.8T  0 raid6 /mnt/md1
sde                                                    8:64   0   7.3T  0 disk 
└─sde1                                                 8:65   0   7.3T  0 part 
  └─md1                                                9:1    0  21.8T  0 raid6 /mnt/md1


Code:
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-2M21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7EDEF0AB-5F66-4D5D-883A-561E366F15D0

Device     Start         End     Sectors  Size Type
/dev/sda1   2048 15628053134 15628051087  7.3T Linux RAID


Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-2M21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 74D71A95-9750-448F-824E-0683924C876C

Device     Start         End     Sectors  Size Type
/dev/sdb1   2048 15628053134 15628051087  7.3T Linux RAID


Disk /dev/sdc: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-2M21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: BA726D06-3076-4E54-A9A5-4A699A539BFD

Device     Start         End     Sectors  Size Type
/dev/sdc1   2048 15628053134 15628051087  7.3T Linux RAID


Disk /dev/sdd: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-2M21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5AC48006-3E6A-4F88-AE11-96226FC10BF5

Device     Start         End     Sectors  Size Type
/dev/sdd1   2048 15628053134 15628051087  7.3T Linux RAID


Disk /dev/sde: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-2M21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 07251E68-96AA-4562-A6FC-549F838FC1E3

Device     Start         End     Sectors  Size Type
/dev/sde1   2048 15628053134 15628051087  7.3T Linux RAID

Disk /dev/md1: 21.83 TiB, 24004279664640 bytes, 46883358720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
 
Do you have hd-idle installed by any chance?
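
A quick way to check for it, and for its idle timeout (on Debian-based systems the configuration usually lives in /etc/default/hd-idle):

Code:
# Is the package installed and the service running?
dpkg -l | grep hd-idle
systemctl status hd-idle
# If installed, the spin-down timeout is configured here
cat /etc/default/hd-idle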
 

"Arise!" and this necro thread is alive again. Apologies if anyone is offended. However I'm posting to add a different perspective on HDDs and why I switched to SSDs. I know it could help someone in future if they have the same issues I did.

I had this issue in 2018 with a windows VM host that couldn't handle it's hard drives. That issue forced me to relook at the the HDD vs SSD debate...

- Spin up / down issues
- Heating issues (with WD, shockingly the same WD drives worked well on another machine with identical chassis / etc)
- Spin failure during a RAID mirroring which left both drives' data slightly corrupted (the drives worked fine, but there was about 4-5% data loss)
- Inability to handle replication during high load

I don't know if you've had these issues. Needless to say all my current proxmox systems run on SSD.

Granted it's costlier per unit disk space, but in my case that was the solution.
 
