Hello, I found a way to keep unused disks in standby.
All HDDs become active when I use LVM commands (like lvdisplay), and pvestatd calls LVM functions to get the status of the default local-lvm storage.
So I edited /etc/lvm/lvm.conf and extended the global_filter rule in the devices section, which now reads:
Code:
devices {
    global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sdb.*|" ]
}
Now pvestatd no longer wakes up my /dev/sdb.
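To confirm the filter is active and the disk really stays asleep, a quick check like this should work (lvmconfig ships with lvm2; hdparm -C only queries the power state and does not wake the drive):
Code:
# show the global_filter LVM is actually using
lvmconfig devices/global_filter
# trigger a storage scan, then check the drive's power state
pvs >/dev/null
hdparm -C /dev/sdb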
Using
Code:
blktrace -d /dev/sdc -o - | blkparse -i -
I see this which causes the disks to spin up:
Code:
8,32 7 173 1155.524252951 4140 I W 14991898680 + 8 [z_wr_int_1]
8,32 7 174 1155.524254007 4140 D W 14991898680 + 8 [z_wr_int_1]
8,32 7 175 1155.525452948 558788 A W 14856242120 + 32 <- (8,33) 14856240072
8,32 7 176 1155.525453616 558788 Q W 14856242120 + 32 [z_wr_int_0]
8,32 7 177 1155.525455476 558788 G W 14856242120 + 32 [z_wr_int_0]
Code:
8,32 4 616 1155.464635571 558694 P N [z_wr_iss]
8,32 4 617 1155.464635796 558694 U N [z_wr_iss] 1
8,32 4 618 1155.464636009 558694 I W 14855845584 + 8 [z_wr_iss]
8,32 4 619 1155.464637397 558694 D W 14855845584 + 8 [z_wr_iss]
Something from zfs keeps the disks spinning?

Same issue here. Did you find a solution?
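The z_wr_iss and z_wr_int entries are ZFS write-issue and write-interrupt threads, so something is still writing to the pool. A quick way to watch that activity while the disks are supposed to be idle (zpool iostat is standard ZFS tooling):
Code:
# per-device read/write activity, refreshed every 10 seconds
zpool iostat -v 10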
root@pve01:~# mount |grep backup
local-backup2 on /local-backup2 type zfs (rw,relatime,xattr,noacl,casesensitive)
local-backup2/subvol-113-disk-1 on /local-backup2/subvol-113-disk-1 type zfs (rw,relatime,xattr,posixacl,casesensitive)
local-backup2/backup-dir on /local-backup2/backup-dir type zfs (rw,relatime,xattr,noacl,casesensitive)
root@pve01:~# tail -4 /etc/lvm/lvm.conf
devices {
# added by pve-manager to avoid scanning ZFS zvols
global_filter=["r|/dev/disk/by-label/local-backup2.*|", "r|/dev/sda.*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|"]
}
root@pve01:~# tail -4 /etc/hdparm.conf
/dev/disk/by-id/ata-ST1750LM000_HN-M171RAD_S34Dxxxxxxxxxx {
spindown_time = 120
}
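For reference, spindown_time values in this range are in units of 5 seconds, so 120 corresponds to 10 minutes. The same timeout can also be applied immediately, without rebooting (same device path as in the config above):
Code:
# -S sets the standby timeout: 120 * 5 s = 10 min
hdparm -S 120 /dev/disk/by-id/ata-ST1750LM000_HN-M171RAD_S34Dxxxxxxxxxx
# -y puts the drive into standby right away
hdparm -y /dev/disk/by-id/ata-ST1750LM000_HN-M171RAD_S34Dxxxxxxxxxx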
root@pve01:~# cat /etc/smartd.conf
[...]
/dev/sda -d ignore
DEVICESCAN -d removable -n standby -m root -M exec /usr/share/smartmontools/smartd-runner
[...]
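The -n standby option makes smartd skip its polls while a disk is spun down, so the SMART checks themselves don't wake it. smartctl offers the same behaviour for manual queries:
Code:
# exit instead of waking the drive if it is in standby
smartctl -n standby -A /dev/sda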
Excuse me, respected Professor Wolfgang, may I ask for your advice on the hardware export issue of Proxmox?

Hi,
here is a checklist of what can keep a disk running:
https://rudd-o.com/linux-and-free-software/tip-letting-your-zfs-pool-sleep
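One item that commonly keeps ZFS pools awake is atime: with it enabled, every read triggers a metadata write. A minimal sketch, using the local-backup2 pool from the outputs above:
Code:
# check whether atime updates are enabled
zfs get atime local-backup2
# disable them so reads no longer cause writes
zfs set atime=off local-backup2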
What I have done so far:
- created a ZFS partition "local-backup2" on /dev/sda at node "pve01"
- mounted the ZFS partition as directory "/local-backup2"
- added a storage "backup-dir" in "/local-backup2" at datacenter level for content "VZDump backup file"
- excluded the devices in lvm.conf from scanning
- activated spindown in hdparm.conf
- excluded /dev/sda from smartd scans
Spindown works as configured in hdparm.conf, but the disk starts spinning again with the same blktrace messages as user flove (see above). Maybe it is pvestatd, but not every 8 seconds; the disk starts spinning again after several tens of minutes.

Finally I found the reason: I had accidentally installed one LXC on this hdd. After moving the LXC to the SSD it works; the hdd stays in standby.
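The subvol-113-disk-1 dataset in the mount output above is exactly such a container volume. Listing the pool's datasets shows whether any guest disks ended up on it (pool name taken from the outputs above):
Code:
# container/VM volumes show up as subvol-* / vm-* datasets
zfs list -r -o name,mountpoint local-backup2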
Hi, looks like an option for my problem, but I can't get it to work. I copied the global_filter section above into /etc/lvm/lvm.conf, and I am getting the following error:
Code:
cannot determine size of volume 'local-lvm:vm-120-disk-0' - command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-120-disk-0' failed: got timeout

I also tried this, however it can have an adverse side effect: in my case, there is an LVM PV on that drive. When I disable it via a global filter, the VG does not get started at all.
I have resorted to excluding the device only for pvestatd, which indirectly uses lvm_vgs() in /usr/share/perl5/PVE/Storage/LVMPlugin.pm:
Perl:
#my $cmd = ['/sbin/vgs', '--separator', ':', '--noheadings', '--units', 'b',
my $cmd = ['/sbin/vgs', '--config', 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }', '--separator', ':', '--noheadings', '--units', 'b',
    '--unbuffered', '--nosuffix', '--options'];
However, it seems there are two more calls to LVM, like:
Code:
$cmd = ['/sbin/pvs', '--config', 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }', '--separator', ':', '--noheadings', '--units', 'k',
    '--unbuffered', '--nosuffix', '--options',
    'pv_name,pv_size,vg_name,pv_uuid', $device];
and
Code:
my $cmd = [
'/sbin/lvs', '--separator', ':', '--noheadings', '--units', 'b',
'--unbuffered', '--nosuffix',
'--config', 'report/time_format="%s" devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }',
'--options', $option_list,
];
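The --config override can be tested from a shell before touching the plugin; it should list the VGs without scanning the excluded disk (same filter as above):
Code:
vgs --config 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }' \
    --separator : --noheadings --units b --nosuffix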
||/ Name Version Architecture Description
+++-==============-============-============-============================================
ii pve-manager 8.1.5 amd64 Proxmox Virtual Environment Management Tools
--- /usr/share/perl5/PVE/Storage/LVMPlugin.pm 2024-03-31 21:05:44.756599416 -0700
+++ /usr/share/perl5/PVE/Storage/LVMPlugin.pm 2024-03-31 21:03:55.585610904 -0700
@@ -111,7 +111,8 @@
sub lvm_vgs {
my ($includepvs) = @_;
- my $cmd = ['/sbin/vgs', '--separator', ':', '--noheadings', '--units', 'b',
+ my $cmd = ['/sbin/vgs', '--config', 'devices { global_filter=["r|/dev/mapper/.*|","r|/dev/sd.*|"] }',
+ '--separator', ':', '--noheadings', '--units', 'b',
'--unbuffered', '--nosuffix', '--options'];
my $cols = [qw(vg_name vg_size vg_free lv_count)];
@@ -510,13 +511,13 @@
# In LVM2, vgscans take place automatically;
# this is just to be sure
- if ($cache->{vgs} && !$cache->{vgscaned} &&
- !$cache->{vgs}->{$scfg->{vgname}}) {
- $cache->{vgscaned} = 1;
- my $cmd = ['/sbin/vgscan', '--ignorelockingfailure', '--mknodes'];
- eval { run_command($cmd, outfunc => sub {}); };
- warn $@ if $@;
- }
+ #if ($cache->{vgs} && !$cache->{vgscaned} &&
+ # !$cache->{vgs}->{$scfg->{vgname}}) {
+ # $cache->{vgscaned} = 1;
+ # my $cmd = ['/sbin/vgscan', '--ignorelockingfailure', '--mknodes'];
+ # eval { run_command($cmd, outfunc => sub {}); };
+ # warn $@ if $@;
+ #}
# we do not acticate any volumes here ('vgchange -aly')
# instead, volumes are activate individually later
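After editing the plugin, the PVE services need to be restarted to pick up the change (standard systemd units on a Proxmox VE node). Note that package updates will overwrite the patched file, so the edit has to be reapplied after upgrades:
Code:
systemctl restart pvedaemon pvestatd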
And the global_filter in /etc/lvm/lvm.conf is now:
Code:
global_filter=["r|/NAS.*|", "r|/dev/disk/by-label/NAS.*|", "r|/dev/sda.*|", "r|/dev/sdb.*|", "r|/dev/sdc.*|", "r|/dev/sdd.*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|"]
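To verify the disks now stay asleep, the per-device write counters in /proc/diskstats can be watched; they should stay constant while the drives are in standby:
Code:
# field 8 of each line is completed writes for that device
watch -n 60 "grep -E ' sd[abcd] ' /proc/diskstats"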