HDDs never spin down

Using
Code:
blktrace -d /dev/sdc -o - | blkparse -i -
I see the following writes, which cause the disks to spin up:


Code:
  8,32   7      173  1155.524252951  4140  I   W 14991898680 + 8 [z_wr_int_1]
  8,32   7      174  1155.524254007  4140  D   W 14991898680 + 8 [z_wr_int_1]
  8,32   7      175  1155.525452948 558788  A   W 14856242120 + 32 <- (8,33) 14856240072
  8,32   7      176  1155.525453616 558788  Q   W 14856242120 + 32 [z_wr_int_0]
  8,32   7      177  1155.525455476 558788  G   W 14856242120 + 32 [z_wr_int_0]

Code:
  8,32   4      616  1155.464635571 558694  P   N [z_wr_iss]
  8,32   4      617  1155.464635796 558694  U   N [z_wr_iss] 1
  8,32   4      618  1155.464636009 558694  I   W 14855845584 + 8 [z_wr_iss]
  8,32   4      619  1155.464637397 558694  D   W 14855845584 + 8 [z_wr_iss]

Is something from ZFS keeping the disks spinning?
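
(For reference: z_wr_iss and z_wr_int are ZFS write taskq threads, so these writes come from ZFS itself. To confirm the drive's power state and watch where the I/O lands, something like the following helps; a minimal sketch, device names are examples:)

Code:
# report whether the drive is active/idle or in standby
hdparm -C /dev/sdc

# per-vdev I/O statistics every 5 seconds; periodic writes showing up
# here (e.g. txg syncs) point at ZFS rather than an outside process
zpool iostat -v 5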
 
Hello, I found a way to keep unused disks in standby.
I see all HDDs become active when I use LVM commands (like lvdisplay), and pvestatd calls LVM functions to get the status of the default local-lvm storage.
So I edited /etc/lvm/lvm.conf and appended a new filter rule in
devices {
    global_filter = [ ]
}

now it is
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sdb.*|" ]

and now pvestatd no longer wakes up my /dev/sdb.
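
(To verify the filter took effect, something like this can help; a small sketch, using /dev/sdb as above:)

Code:
# print the filter LVM actually uses
lvmconfig devices/global_filter

# force a scan, then check that the drive is still in standby
pvs -a
hdparm -C /dev/sdb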
I also tried this; however, it can have an adverse side effect: in my case there is an LVM PV on that drive, and when I exclude it via the global filter, the VG does not get activated at all.

I have resorted to excluding the device only for pvestatd, which indirectly uses lvm_vgs() in /usr/share/perl5/PVE/Storage/LVMPlugin.pm:

Perl:
    #my $cmd = ['/sbin/vgs', '--separator', ':', '--noheadings', '--units', 'b',
    my $cmd = ['/sbin/vgs', '--config', 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }', '--separator', ':', '--noheadings', '--units', 'b',
               '--unbuffered', '--nosuffix', '--options'];

However, it seems there are two more calls to LVM, like:

Code:
    $cmd = ['/sbin/pvs', '--config', 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }', '--separator', ':', '--noheadings', '--units', 'k',
            '--unbuffered', '--nosuffix', '--options',
            'pv_name,pv_size,vg_name,pv_uuid', $device];

and

Code:
    my $cmd = [
        '/sbin/lvs', '--separator', ':', '--noheadings', '--units', 'b',
        '--unbuffered', '--nosuffix',
        '--config', 'report/time_format="%s" devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }',
        '--options', $option_list,
    ];
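
(The --config override can also be tested from a shell before touching the module, to make sure LVM accepts the syntax; the option list below matches what lvm_vgs() appends:)

Code:
/sbin/vgs --config 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }' \
    --separator ':' --noheadings --units b --unbuffered --nosuffix \
    --options vg_name,vg_size,vg_free,lv_count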
 
I had successfully kept pvestatd from waking the disk via the modification of /usr/share/perl5/PVE/Storage/LVMPlugin.pm described above.

However, I reverted that change because it had an adverse side effect: every night, upon automatic backup, the first backup always failed with a timeout, because the disk did not spin up fast enough, giving an error like:

Code:
cannot determine size of volume 'local-lvm:vm-120-disk-0' - command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-120-disk-0' failed: got timeout
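
(A possible mitigation, untested and just a sketch: vzdump supports a hook script via the script: option in /etc/vzdump.conf, which could wake the disk at job start so the first lvs call no longer hits a spun-down drive. The script path and /dev/sda are assumptions:)

Code:
#!/bin/sh
# /usr/local/bin/vzdump-hook.sh, referenced via 'script:' in /etc/vzdump.conf
# $1 is the phase; wake the backup disk before the first backup task runs
if [ "$1" = "job-start" ]; then
    # an uncached raw read forces the drive out of standby
    dd if=/dev/sda of=/dev/null bs=4096 count=1 iflag=direct
fi
exit 0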
 
flove wrote:
Using blktrace -d /dev/sdc -o - | blkparse -i - I see writes from z_wr_int/z_wr_iss which cause the disks to spin up. Is something from ZFS keeping the disks spinning?
Same issue here. Did you find a solution?
 
Maybe it is pvestatd, but not every 8 seconds. The disk starts spinning again after several tens of minutes.

What I have done so far:
  • created a ZFS partition "local-backup2" on /dev/sda on node "pve01"
  • mounted the ZFS partition as directory "local-backup"
  • added a storage "backup-dir" on "/local-backup2" at datacenter level for content "VZDump backup file"
  • excluded the devices in lvm.conf from scanning
  • activated spindown in hdparm.conf
  • excluded /dev/sda from smartd scans
Spindown works as configured in hdparm.conf, but the disk starts spinning again with the same blktrace messages as user flove (see above).

Code:
root@pve01:~# mount |grep backup
local-backup2 on /local-backup2 type zfs (rw,relatime,xattr,noacl,casesensitive)
local-backup2/subvol-113-disk-1 on /local-backup2/subvol-113-disk-1 type zfs (rw,relatime,xattr,posixacl,casesensitive)
local-backup2/backup-dir on /local-backup2/backup-dir type zfs (rw,relatime,xattr,noacl,casesensitive)

Code:
root@pve01:~# tail -4 /etc/lvm/lvm.conf
devices {
     # added by pve-manager to avoid scanning ZFS zvols
     global_filter=["r|/dev/disk/by-label/local-backup2.*|", "r|/dev/sda.*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|"]
}

Code:
root@pve01:~# tail -4 /etc/hdparm.conf
/dev/disk/by-id/ata-ST1750LM000_HN-M171RAD_S34Dxxxxxxxxxx {
        spindown_time = 120
}

Code:
root@pve01:~# cat /etc/smartd.conf
[...]
/dev/sda -d ignore
DEVICESCAN -d removable -n standby -m root -M exec /usr/share/smartmontools/smartd-runner
[...]
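
(To find out which process or file is behind such periodic writes, fatrace can complement blktrace, since it reports the paths being touched; a sketch, assuming the filesystem in question is mounted at /local-backup2:)

Code:
apt install fatrace
# only show events for the filesystem mounted at the current directory
cd /local-backup2 && fatrace -c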
 
As long as that directory storage isn't disabled, pvestatd will poll it all the time, keeping the disk awake.
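
(If the storage is only needed at backup time, it can be disabled in between, for example; storage name taken from the post above:)

Code:
# stop pvestatd from polling the storage (and waking the disk)
pvesm set backup-dir --disable 1
# re-enable it before the next backup window
pvesm set backup-dir --disable 0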
 
(Quoting my post above about the ZFS backup disk that kept spinning up despite the lvm.conf, hdparm.conf and smartd exclusions.)
Finally I found the reason: I accidentally installed one LXC on this HDD. After moving the LXC to the SSD it works; the HDD stays in standby. :)
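
(A quick way to check whether any guest volumes still live on such a storage; storage name as above:)

Code:
# any subvol-* or vm-* entry listed here will keep the disk busy
pvesm list local-backup2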
 
What do you mean by "implemented"?

Just follow the steps in post #27. Works for me now.
 
(Quoting the LVMPlugin.pm global_filter modification from above.)
Hi, this looks like an option for my problem, but I can't get the code to work. I copied the following section into the config file, and I am getting the following error:
"Parse error at byte 111831 (line 2454): unexpected token". Any suggestions? Sorry, I am not an expert in such deep Linux stuff and I am new to the Proxmox universe :) .


Code:
    my $cmd = [
        '/sbin/lvs', '--separator', ':', '--noheadings', '--units', 'b',
        '--unbuffered', '--nosuffix',
        '--config', 'report/time_format="%s" devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }',
        '--options', $option_list,
    ];
 
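(Note: the snippet above is Perl from LVMPlugin.pm, not lvm.conf syntax, which would explain the parse error. Pasted into /etc/lvm/lvm.conf, the filter has to use the plain config form shown earlier in the thread:)

Code:
devices {
    global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/.*|", "r|/dev/sda.*|" ]
}
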
Here's what I did:

Code:
||/ Name           Version      Architecture Description
+++-==============-============-============-============================================
ii  pve-manager    8.1.5        amd64        Proxmox Virtual Environment Management Tools

Diff:
--- /usr/share/perl5/PVE/Storage/LVMPlugin.pm        2024-03-31 21:05:44.756599416 -0700
+++ /usr/share/perl5/PVE/Storage/LVMPlugin.pm   2024-03-31 21:03:55.585610904 -0700
@@ -111,7 +111,8 @@
 sub lvm_vgs {
     my ($includepvs) = @_;

-    my $cmd = ['/sbin/vgs', '--separator', ':', '--noheadings', '--units', 'b',
+    my $cmd = ['/sbin/vgs', '--config', 'devices { global_filter=["r|/dev/mapper/.*|","r|/dev/sd.*|"] }',
+              '--separator', ':', '--noheadings', '--units', 'b',
               '--unbuffered', '--nosuffix', '--options'];

     my $cols = [qw(vg_name vg_size vg_free lv_count)];
@@ -510,13 +511,13 @@

     # In LVM2, vgscans take place automatically;
     # this is just to be sure
-    if ($cache->{vgs} && !$cache->{vgscaned} &&
-        !$cache->{vgs}->{$scfg->{vgname}}) {
-        $cache->{vgscaned} = 1;
-        my $cmd = ['/sbin/vgscan', '--ignorelockingfailure', '--mknodes'];
-        eval { run_command($cmd, outfunc => sub {}); };
-        warn $@ if $@;
-    }
+    #if ($cache->{vgs} && !$cache->{vgscaned} &&
+    #    !$cache->{vgs}->{$scfg->{vgname}}) {
+    #    $cache->{vgscaned} = 1;
+    #    my $cmd = ['/sbin/vgscan', '--ignorelockingfailure', '--mknodes'];
+    #    eval { run_command($cmd, outfunc => sub {}); };
+    #    warn $@ if $@;
+    #}

     # we do not acticate any volumes here ('vgchange -aly')
     # instead, volumes are activate individually later
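
(Note that the patched module is only picked up after the PVE daemons restart, and the edit will be overwritten by the next libpve-storage-perl update:)

Code:
systemctl restart pvestatd pvedaemon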
 
I can't get this to work. I am using hd-idle.

Things I have tried:
  • disabling smartd
  • disabling smartmontools
  • disabling pvestatd
  • LVM devices
    Code:
     global_filter=["r|/NAS.*|", "r|/dev/disk/by-label/NAS.*|", "r|/dev/sda.*|", "r|/dev/sdb.*|", "r|/dev/sdc.*|", "r|/dev/sdd.*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|"]
  • Changing from a directory storage to ZFS storage
And the disks still won't stay powered down for more than a few seconds.


Any suggestions?
 