[SOLVED] Upgrade from 5.x to 6.x -> hard drives don't sleep anymore

juppzupp

Hi,

I upgraded from 5.0-23 (yes, old) to 6.1-11.
I have an entire hard disk assigned to a VM:
Code:
root@pve:~# cat /etc/pve/qemu-server/106.conf
...
virtio2: /dev/disk/by-id/ata-ST4000LM016-1N2170_W800GKPN,backup=0,cache=unsafe,size=3907018584K
...

With 5.x, I was using hdparm -S60 /dev/sdc to have that drive spin down after 5 minutes of idle. It was working without problems.

Since the upgrade, it no longer seems to work.

Code:
root@pve:~# hdparm -S60 /dev/sdc

/dev/sdc:
setting standby to 60 (5 minutes)
root@pve:~# sleep 300
root@pve:~# hdparm -C /dev/sdc

/dev/sdc:
drive state is:  active/idle
root@pve:~# qm status 106
status: stopped
root@pve:~#

I even powered down the VM to ensure it's not accessing the drive.

Any hints?

Thanks
 
PVE isn't really designed for spinning down disks, but it certainly should be possible. Try using 'iotop -o' to see which processes are doing disk IO, install and use the 'blktrace' package from APT, or take a look at this SO thread for more inspiration.
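For example (a hypothetical session; the device name /dev/sdc is just taken from your post, adjust as needed):
Code:
# show only processes that are currently doing disk I/O
iotop -o

# install the block-layer tracer, then trace requests hitting the drive live
apt install blktrace
blktrace -d /dev/sdc -o - | blkparse -i -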

Did you configure anything besides the 'hdparm' command? A good idea for example might be to blacklist your drive from LVM scanning in /etc/lvm/lvm.conf (e.g. add "r|/dev/sdc|" to the end of the 'global_filter' array, reboot afterwards to make sure it's applied).
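The result could look roughly like this (just a sketch: keep whatever entries your lvm.conf already has and append the new one):
Code:
# /etc/lvm/lvm.conf
# existing entries (e.g. the zvol exclusion shipped by PVE) stay, "r|/dev/sdc|" is appended
global_filter = [ "r|/dev/zd.*|", "r|/dev/sdc|" ]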
 
Awesome!
iotop didn't show anything, but I expected that, since the entire drive is mapped to the VM.
What did the trick was blktrace -d /dev/sdc -o - | blkparse -i -, which immediately showed that lvs and vgs were responsible.
Following your advice, I excluded the drive in lvm.conf, and it is working now.

Code:
root@pve:~# sleep 300 && hdparm -C /dev/sdc

/dev/sdc:
 drive state is:  standby