Hdparm standby disks on pve

etnicor

Member
Jan 19, 2023
Hello,
I have done a reinstall of Proxmox 7.3 on a new computer. However, my old disks no longer seem to go from idle into standby.

I have no problem setting hdparm -S manually, but it is no longer applied automatically from hdparm.conf.

Are disks supposed to remember the -S setting? It feels like the problem may be that once a disk has been woken up, the -S value is never reapplied.

I guess I could just write a cron script which checks if a drive is in state "active/idle" and then applies hdparm -S (something like the sketch below). It just feels hacky.
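A rough sketch of what I have in mind (the device path pattern and the -S value are just placeholders matching my config):

Code:
#!/bin/bash
# Sketch only: re-apply the spindown timer to any drive that is currently awake.
# The device path pattern and the -S value are placeholders for my setup.
for disk in /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_*; do
    if hdparm -C "$disk" | grep -q 'active/idle'; then
        hdparm -S 50 "$disk"
    fi
done

That could then run from cron every few minutes, e.g. a line like "*/10 * * * * root /usr/local/sbin/respindown.sh" in /etc/cron.d/ (name and interval are just examples).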



I have these configs.

lvm.conf:
global_filter=["r|/dev/zd.*|", "r|/dev/sda.*|", "r|/dev/sdb.*|", "r|/dev/sdc.*|"]

hdparm.conf
Code:
/dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_XXX {
        apm = 128
        spindown_time = 50
}

/dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_XXX {
        apm = 128
        spindown_time = 50
}

/dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_XXX {
        apm = 128
        spindown_time = 50
}
 
I find conflicting information on this (like everything else with hdparm).
But apm=128 worked on my old install.

I have tried with apm=127 as well; same issue.
 
I have tried without apm as well. The default value seems to be 254 when checking with -B.
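For reference, the current values can be read back like this (device path shortened, just an example):

Code:
# Read back the current APM level (-B) and the power state (-C)
hdparm -B /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_XXX
hdparm -C /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_XXX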

Just noticed that the disk which is last in hdparm.conf seems to go to standby mode by itself.

All disks are the same model/brand.

Considering reinstalling Proxmox, something is really weird.
 
Update:
Reinstalled Proxmox and still only one of the disks goes into standby.

So I'll write a cron script which runs hdparm -S on disks which are in state "active/idle".

A question though: when running hdparm -S, the disk goes from state "active/idle" -> "idle". It is supposedly then in a low power mode; does that mean r/w speed will be bad? Or does idle not really mean anything?
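One way to test whether something keeps waking a drive (just a sketch; the path is an example) is to force it into standby and poll the state. As far as I know, hdparm -C only issues CHECK POWER MODE and should not spin the drive up:

Code:
# Force the drive into standby immediately, then watch whether it stays there
hdparm -y /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_XXX
watch -n 60 hdparm -C /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_XXX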

All of this is so badly documented (not Proxmox, but Linux in general).
 
Update:

This is so weird.
Even when the disks are not mounted, only one goes to standby.

The only difference is that the disk going to standby is formatted with xfs.

The other 2 disks are ext4, and it's 2-3 months since these disks were formatted, so I doubt ext4lazyinit is the problem. Also, they didn't have any problems before the Proxmox reinstall on new hardware.
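As far as I know, lazy init runs as a kernel worker thread named ext4lazyinit, so a quick check whether it is still active would be something like:

Code:
# Look for the ext4 lazy-init kernel thread; no output means it is not running
ps ax | grep '[e]xt4lazyinit'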

I tried to write a script which does hdparm -S, but that doesn't seem to work either; it only works if I set the time to below 5 min.

I have run dstat and I don't see any process accessing the disks.
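Besides dstat, another way to watch whether anything touches the disks would be to poll the per-device I/O counters (the interval is just an example):

Code:
# Extended per-device stats every 60 s; a disk that should be idle
# will show its read/write counters standing still
iostat -d -x 60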

Will reformat the ext4 drives to xfs and see if that makes any difference. I am clueless.
 
Try with the following "global_filter":
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|", "r|/dev/sda|", "r|/dev/sdb|", "r|/dev/sdc|" ]

Close the Proxmox GUI in the browser; this can wake up the disks!
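To double-check that the filter is actually picked up (the device name is just an example), the effective LVM setting can be queried and the power state re-checked without waking the drive:

Code:
# Show the global_filter value LVM actually uses
lvmconfig devices/global_filter
# Check the drive's power state (does not spin the drive up)
hdparm -C /dev/sda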
 
