[TUTORIAL] Disks prevented from spinning down because of pvestatd

mac.1

New Member
Jan 19, 2019
Hi,
you might have the problem that you want to spin down some disks, but it doesn't work as planned.
I just spent two hours debugging this, so I decided to write this short tutorial.

Symptom:
You have multiple drives in your Proxmox machine and want some of them (like all HDDs) to spin down when idle.
Let's say you want `sdc` to spin down if unused.
However, the drive gets constant reads, as you can see via:
Code:
dstat -D sdc -ta --top-bio
-> constant 44k reads every few seconds.

Analysis:
If you google this, you will find a lot of threads claiming that pvestatd is responsible and asking for it to be fixed, which isn't really possible.
However, pvestatd just calls the LVM utilities, like `vgscan`.
If you run `vgscan` in a second session, you will see the same 44k reads on your drive.
Luckily there is an option in LVM to tell vgscan to never look at your drive. (That of course means you cannot use LVM on sdc, which is okay in my case, because I only have ZFS running on it.)
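A quick way to reproduce this yourself (assuming `sdc` is the affected disk): watch the disk in one shell and trigger in a second shell the same scan pvestatd runs periodically.
Code:
# shell 1: watch the disk
dstat -D sdc -ta --top-bio

# shell 2: trigger the scan that pvestatd runs periodically
vgscan
If the reads line up with the vgscan calls, the filter change below should solve it.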

Fix:
Open the file:
Code:
/etc/lvm/lvm.conf
And modify the global_filter from:
Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" ]
to:
Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sdc*|" ]
Add an entry for each drive you want vgscan to ignore in the future.
Save and run
Code:
vgscan
once to reload the config. You should no longer see the reads on the drive.
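To double-check that nothing important was filtered away by accident, it can't hurt to verify that LVM still sees the volume groups it actually needs (on a default install that is the pve VG on the boot disk):
Code:
# should still list the pve volume group and its physical volume
vgs
pvs

# and the filtered disk should now stay quiet
dstat -D sdc -ta --top-bio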
Hope I can help some of you :)
 

emalenfant

New Member
Dec 7, 2019
Hi, thanks for sharing. I applied this modification to 4 ext4 HDDs with only media on them, using the following filter:

global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sdb*|", "r|/dev/sdc*|", "r|/dev/sde*|", "r|/dev/sdf*|" ]

It seems to be working, but unfortunately the containers do not start after a reboot. When I remove the modification and do a vgscan I get: "Found volume group "pve" using metadata type lvm2". Then the containers start again after a reboot.
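In case anyone else runs into this: the symptom suggests one of the reject patterns also matched the disk that carries the pve volume group, so LVM could no longer find the container storage. Before filtering, it may be worth checking which device that is, for example:
Code:
# list each physical volume and the VG it belongs to
pvs -o pv_name,vg_name

# illustrative output on a default install:
#   /dev/sda3  pve
None of the reject patterns in global_filter should match that device.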
 
Last edited:
  • Like
Reactions: pdionisis

bernstein

New Member
Apr 11, 2020
Great Tip!

Just found this a day late... my solution was to not use LVM on any attached disk (which wasn't a problem, as I avoid it when possible),
=> so I reinstalled Proxmox on a ZFS mirror (which I use on my other disks anyway).

On a side note: the Proxmox ZFS installer GUI is probably the worst ever. I had to detach most other drives before booting into the installer, because it adds all detected drives to the mirror yet only allows removing a few. Also, for mysterious reasons, it needs two drives of identical size, and no, you can't choose partitions either. Solution: install to two identical USB sticks, then after installation move to SSDs *sigh*
 

pdionisis

New Member
Aug 31, 2019
I tried this in lvm.conf as well, but I also have the problem that when I do a vgscan the LVM-thin volume is greyed out with a question mark.

Also, the existing VMs start, but I cannot create new VMs (no LVM-thin volume found).

Any idea how to have both sleeping disks and LVM working?
 
Last edited:

flying_tiantian

New Member
Jun 9, 2020
It appears that filtering `sdb*` devices in the global_filter also stops vgscan from scanning `sda*` devices. (You can see this behaviour through `dstat -D sda`.)
There may be an implementation issue in the filter, or I misunderstand the syntax of global_filter...

My solution is to explicitly allow scanning of the sda* devices (since LVM only uses `sda3` on my system). The setting in my lvm.conf is:

global_filter = [ "a|/dev/sda|", "r|/dev/sdb*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|" ]

That works as expected for me.
 
  • Like
Reactions: mcbb23

flying_tiantian

New Member
Jun 9, 2020
I tried this in lvm.conf as well, but I also have the problem that when I do a vgscan the LVM-thin volume is greyed out with a question mark.

Also, the existing VMs start, but I cannot create new VMs (no LVM-thin volume found).

Any idea how to have both sleeping disks and LVM working?
See my comment above, maybe it helps you.
 

flying_tiantian

New Member
Jun 9, 2020
I tried this in lvm.conf as well, but I also have the problem that when I do a vgscan the LVM-thin volume is greyed out with a question mark.

Also, the existing VMs start, but I cannot create new VMs (no LVM-thin volume found).

Any idea how to have both sleeping disks and LVM working?

The expression should be `/dev/sdb.*`, not `/dev/sdb*`; for more information please see this thread.
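For anyone wondering why the trailing `.*` matters: the global_filter entries are regular expressions matched against the device path, and `sdb*` means "sd" followed by zero or more "b" characters, so it also matches /dev/sda. A quick illustration with grep (the device names are just examples):
Code:
# "b*" = zero or more "b"s, so this matches both lines
printf '/dev/sda\n/dev/sdb\n' | grep -E '/dev/sdb*'
# -> /dev/sda
# -> /dev/sdb

# with "b.*" the literal "b" is required, so only /dev/sdb matches
printf '/dev/sda\n/dev/sdb\n' | grep -E '/dev/sdb.*'
# -> /dev/sdb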
 

pgesting

New Member
May 31, 2020
Hmm, this does not work for me. I still see pvestatd hitting my drive in dstat. My setting is:

global_filter = [ "r|/dev/zd.*|", "r|/dev/sda.*|", "r|/dev/sdb.*|", "r|/dev/sdc.*|", "r|/dev/sdd.*|", "r|/dev/sde.*|", "r|/dev/sdf.*|", "r|/dev/sdg.*|", "r|/dev/sdh.*|", "r|/dev/sdi.*|", "r|/dev/sdj.*|", "r|/dev/sdk.*|", "r|/dev/sdl.*|", "r|/dev/sdm.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|"]
 

guletz

Famous Member
Apr 19, 2017
Hi,

For all who want to spin down their HDDs: this has a downside, because every spin-down/spin-up cycle reduces the HDD's lifetime. In the end you may find that what you save on electricity you lose again, because you will need to buy new HDDs after 2-3 years instead of 4 ;)
 

pgesting

New Member
May 31, 2020
Hi,

For all who want to spin down their HDDs: this has a downside, because every spin-down/spin-up cycle reduces the HDD's lifetime. In the end you may find that what you save on electricity you lose again, because you will need to buy new HDDs after 2-3 years instead of 4 ;)

My problem is that the constant reads are causing high temps (even with 6 fans running). I am worried that the high temps would be even worse for the drives than the spin-downs.
 

pdionisis

New Member
Aug 31, 2019
Now I have the same problem (a hard disk that is always accessed by something and cannot go to sleep) with Proxmox Backup Server (PBS).

I run it inside a Debian LXC and share a local disk between Proxmox and PBS (Debian) with:
pct set 100 -mp0 /mnt/f,mp=/mnt/f
This disk is used for backups.

The problem is that /mnt/f is accessed very frequently by PBS and cannot sleep at all.

I tried the lvm.conf change as above on the PBS side as well, but it does not seem to help.

Any idea?
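To find out what exactly keeps touching /mnt/f, one option (assuming the fatrace package is available on the host) would be something like:
Code:
# only report file accesses on the filesystem mounted at /mnt/f
cd /mnt/f
fatrace -c
That should at least show which process keeps waking the disk.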
 
Last edited:

zyndata

New Member
Jan 9, 2021
Thanks for the tutorial. It works for me, but only with these additional steps.

I checked my disks with
Code:
hdparm -B /dev/sd?
My HDD was set to a value > 127 (which does not allow spin-down).
So I edited
Code:
/etc/hdparm.conf
and added the following at the end of the file:
Code:
/dev/disk/by-id/ata-Hitachi_My_disk_id {
    spindown_time = 180
    apm = 127
}

180 means 15 minutes (180 * 5 seconds).
More info here: https://wiki.archlinux.org/index.php/hdparm
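As far as I know, hdparm.conf is only applied when the drive is initialised (e.g. at boot), so for a quick test the same values can also be set at runtime:
Code:
# -B 127 allows spin-down, -S 180 = standby after 180 * 5 s = 15 minutes
hdparm -B 127 -S 180 /dev/disk/by-id/ata-Hitachi_My_disk_id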

With this command you can check the current disk status:
Code:
hdparm -C /dev/sda
 
