No spin-down via hd-idle after upgrade to 9.1.5

3dbruce

Feb 6, 2026
Hi,
this morning I upgraded to PVE 9.1.5 and kernel 6.17.9-1 on a small DXP2800 NAS. I had initial problems starting one lxc container with a read-only bind-mount, but was able to work around that by removing the read-only flag.

A few hours ago, however, I noticed that my hard drives were no longer spinning down after being idle for more than 900 seconds. The hd-idle service apparently starts normally, but the last entry in my hd-idle.log is from 9:29, right before I started the update to 9.1.5.

I was wondering whether that could be another side effect of the bind-mount issue and decided to use the other recommended workaround, namely downgrading pve-container to 6.0.18 (and reinstating the read-only flag in the bind mount). Unfortunately, even after another reboot of the PVE host, my hard drives still refuse to spin down.

Any idea what is going on here?

Thanks and best regards
- Uwe
 
After rebooting with the old 6.17.4-2-pve kernel, the spin-down is working again. So it seems to be an issue with the new 6.17.9-1-pve kernel.
 
Yep, this is what I had to do as well, so I am using the following command until a new kernel release fixes it:

Bash:
proxmox-boot-tool kernel pin 6.17.4-2-pve

Something in the new kernel keeps waking the drives that I put to sleep using hdparm -Y /dev/sdX, so I will keep using the previous version until a release comes out that works properly. I spent a lot of time disabling smartmontools and tweaking other things (/etc/lvm settings?) that in the end had nothing to do with the problem.
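Whether a drive actually stays in standby can be checked with hdparm as well; a quick sketch (replace /dev/sdX with the actual device):

Bash:
# report the current power mode: active/idle vs. standby (spun down)
hdparm -C /dev/sdX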

Bash:
# proxmox-boot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
6.17.4-2-pve
6.17.9-1-pve

Pinned kernel:
6.17.4-2-pve
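Once a fixed kernel is released, the pin can be removed again; a sketch, assuming the pin/unpin subcommands of proxmox-boot-tool available on current PVE:

Bash:
# drop the pin so the newest installed kernel is booted again
proxmox-boot-tool kernel unpin

# verify the result
proxmox-boot-tool kernel list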
 
After a helpful hint on reddit, I set the debug flag for hd-idle yesterday. It seems that something keeps writing to the drives roughly every 600 seconds, so my 900-second timeout is never reached. This also happens even when I shut down all VMs and containers. I tried to find the process that causes this via iotop and identified a few processes that were always active when the respective writes to my hard disks occur. However, none of those look particularly suspicious (at least to me):

Rich (BB code):
    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO    COMMAND
  41543 be/4 root        0.00 B/s    7.32 K/s ?unavailable?  [kworker/u16:1-events_power_efficient]
    430 be/4 root        0.00 B/s    6.93 K/s ?unavailable?  systemd-journald
   1418 be/4 root        0.00 B/s  136.37 B/s ?unavailable?  pmxcfs
  57174 be/4 www-data    0.00 B/s  102.28 B/s ?unavailable?  pveproxy worker
    324 be/4 root        0.00 B/s   68.19 B/s ?unavailable?  [txg_sync]

I haven't found a tool yet that lists the process AND the device that is being written to, so I cannot tell which of those is actually writing to my hard drives. Hence I am a bit stuck here. For the time being I have also pinned the old kernel and will wait for a solution.
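For reference, enabling the debug output typically comes down to something like the following; this is only a sketch, assuming hd-idle reads its options via HD_IDLE_OPTS in /etc/default/hd-idle (path, variable name and option syntax can differ between hd-idle versions):

Bash:
# /etc/default/hd-idle
# -i 900: spin down after 900 s idle, -d: debug output, -l: write a log file
HD_IDLE_OPTS="-i 900 -d -l /var/log/hd-idle.log"

# restart the service so the new options take effect
systemctl restart hd-idle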
 
Not sure I understand what you mean exactly. The two hard disks (at /dev/sda and /dev/sdb) form a ZFS mirror, and the respective pool contains several datasets which are mounted at /mnt/<poolname>/<datasetname>.

EDIT: And to expand a bit further, all of these datasets are used in bind-mounts for various lxc containers (for samba, sftp, plex, etc.). Writes to the disks happen with the new kernel even when all of these lxcs are shut down, though. I have also not installed any additional services on the pve host itself.
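For completeness, the pool layout and whether the mirror members are actually receiving writes can be checked with the standard ZFS tooling; a sketch using the generic pool/dataset placeholders from above:

Bash:
# show the mirror and its member disks
zpool status

# list datasets and their mountpoints
zfs list -o name,mountpoint

# watch per-device read/write activity in 5-second intervals
zpool iostat -v 5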
 
When VMs and containers are off, you may use lsof | grep /mnt
(or some more specific option of lsof to avoid grep).

If some file there is open by some process, lsof will show it.
Of course there might be a situation where a file is not open all the time, just periodically, and lsof won't show it at the moment you run it.
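A slightly more targeted invocation could look like this (a sketch; +D descends the given directory tree, and the dataset path is just a placeholder):

Bash:
# list open files anywhere below /mnt, without grep
lsof +D /mnt

# or limit the output to a single mounted dataset
lsof /mnt/<poolname>/<datasetname>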

Then you can install the auditd package and set a filter for logging accesses to selected directories or files.
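A minimal audit rule for that could look roughly like the following; the watch path and the key name hd-writes are placeholders:

Bash:
apt install auditd

# watch the mounted datasets for writes (w) and attribute changes (a)
auditctl -w /mnt -p wa -k hd-writes

# later, list the recorded events including the accessing process
ausearch -k hd-writes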
 
Thanks a lot! Will try to do that tomorrow (after my nightly jobs are finished and I can reboot with the new kernel again).