I also tried this, however this can have an adverse side-effect: in my case, there is an LVM PV on that drive. When I disable it via a global filter, the VG does not get started at all.

Hello, I found a way to keep an unused disk in standby.
I see all HDDs become active when I use LVM commands (like lvdisplay), and pvestatd calls LVM functions to get the status of the default local-lvm storage.
So I edited /etc/lvm/lvm.conf and appended a new filter rule in
devices {
    global_filter = []
}
Now it is:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sdb.*|" ]
and now pvestatd no longer wakes up my /dev/sdb.
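A quick way to verify the filter actually works (my own check, not from the original post; /dev/sdb as in the example above):

hdparm -y /dev/sdb                  # force the drive into standby now
pvs >/dev/null; vgs >/dev/null      # the same scans pvestatd triggers; with the filter they must not open /dev/sdb
hdparm -C /dev/sdb                  # should still report: drive state is: standby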
cannot determine size of volume 'local-lvm:vm-120-disk-0' - command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-120-disk-0' failed: got timeout
Same issue here. Did you find a solution?

Using

blktrace -d /dev/sdc -o - | blkparse -i -

I see this, which causes the disks to spin up:

8,32 7 173 1155.524252951 4140 I W 14991898680 + 8 [z_wr_int_1]
8,32 7 174 1155.524254007 4140 D W 14991898680 + 8 [z_wr_int_1]
8,32 7 175 1155.525452948 558788 A W 14856242120 + 32 <- (8,33) 14856240072
8,32 7 176 1155.525453616 558788 Q W 14856242120 + 32 [z_wr_int_0]
8,32 7 177 1155.525455476 558788 G W 14856242120 + 32 [z_wr_int_0]

8,32 4 616 1155.464635571 558694 P N [z_wr_iss]
8,32 4 617 1155.464635796 558694 U N [z_wr_iss] 1
8,32 4 618 1155.464636009 558694 I W 14855845584 + 8 [z_wr_iss]
8,32 4 619 1155.464637397 558694 D W 14855845584 + 8 [z_wr_iss]
Something from zfs keeps the disks spinning?
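Not from the thread, but a few ZFS-side checks can help narrow down where such z_wr_* writes come from (the pool name tank below is a placeholder; substitute the pool that lives on /dev/sdc):

zpool status                          # confirm which pool /dev/sdc belongs to
zpool iostat -v tank 5                # watch whether that pool really receives periodic writes
zfs get atime,relatime tank           # atime updates turn pure reads into extra writes
zfs list -o name,written -r tank      # per dataset: space written since its last snapshot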
root@pve01:~# mount |grep backup
local-backup2 on /local-backup2 type zfs (rw,relatime,xattr,noacl,casesensitive)
local-backup2/subvol-113-disk-1 on /local-backup2/subvol-113-disk-1 type zfs (rw,relatime,xattr,posixacl,casesensitive)
local-backup2/backup-dir on /local-backup2/backup-dir type zfs (rw,relatime,xattr,noacl,casesensitive)
root@pve01:~# tail -4 /etc/lvm/lvm.conf
devices {
# added by pve-manager to avoid scanning ZFS zvols
global_filter=["r|/dev/disk/by-label/local-backup2.*|", "r|/dev/sda.*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|"]
}
root@pve01:~# tail -4 /etc/hdparm.conf
/dev/disk/by-id/ata-ST1750LM000_HN-M171RAD_S34Dxxxxxxxxxx {
spindown_time = 120
}
root@pve01:~# cat /etc/smartd.conf
[...]
/dev/sda -d ignore
DEVICESCAN -d removable -n standby -m root -M exec /usr/share/smartmontools/smartd-runner
[...]
Excuse me, respected Professor Wolfgang. May I ask for your advice on the hardware export issue of Proxmox?

Hi,
here is a checklist of what can keep the disk running:
https://rudd-o.com/linux-and-free-software/tip-letting-your-zfs-pool-sleep
Finally I found the reason: I accidentally installed one LXC on this HDD. After moving the LXC to the SSD it works; the HDD stays in standby.

Maybe it is pvestatd, but not every 8 seconds. The disk starts spinning again after several tens of minutes each time.
What I have done so far:
- created a ZFS partition "local-backup2" on /dev/sda on node "pve01"
- mounted the ZFS partition as directory "/local-backup2"
- added a storage "backup-dir" in "/local-backup2" at datacenter level for content "VZDump backup file"
- excluded the devices in lvm.conf from scanning
- activated spindown in hdparm.conf
- excluded /dev/sda from smartd scans

Spindown works as configured in hdparm.conf, but the disk starts spinning again with the same blktrace messages as user flove (see above).
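One extra check I would add here (my own suggestion, not from the post): list what actually lives on the pool, since a guest disk placed there — as one poster above found with an accidentally placed LXC — will keep waking the drive. Names follow the examples above.

zfs list -r local-backup2                     # subvol-*/vm-* datasets here mean a container or VM lives on this pool
pct config 113 | grep -E 'rootfs|mp[0-9]'     # 113 is the container whose subvol shows up in the mount output above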
Hi, this looks like an option for my problem, but I can't get the code to work. I copied the following section to the config file, but I am getting the following error:

I also tried this, however this can have an adverse side-effect: in my case, there is an LVM PV on that drive. When I disable it via a global filter, the VG does not get started at all.
I have resorted to excluding the device only for pvestatd, which indirectly uses /usr/share/perl5/PVE/Storage/LVMPlugin.pm in lvm_vgs():
#my $cmd = ['/sbin/vgs', '--separator', ':', '--noheadings', '--units', 'b',
my $cmd = ['/sbin/vgs', '--config', 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }', '--separator', ':', '--noheadings', '--units', 'b',
    '--unbuffered', '--nosuffix', '--options'];
However, it seems there are two more calls to LVM, like:
$cmd = ['/sbin/pvs', '--config', 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }', '--separator', ':', '--noheadings', '--units', 'k',
    '--unbuffered', '--nosuffix', '--options',
    'pv_name,pv_size,vg_name,pv_uuid', $device];
and
my $cmd = [
    '/sbin/lvs', '--separator', ':', '--noheadings', '--units', 'b',
    '--unbuffered', '--nosuffix',
    '--config', 'report/time_format="%s" devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }',
    '--options', $option_list,
];
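Before patching the plugin, the same override can be tested directly on the command line (my own suggestion; adjust the device regexes to your setup):

/sbin/vgs --config 'devices { global_filter=["r|/dev/zd.*|","r|/dev/mapper/.*|","r|/dev/sda.*|"] }' \
    --separator ':' --noheadings --units b --unbuffered --nosuffix
hdparm -C /dev/sda    # if the filter keeps LVM away from the disk, it should stay in standby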
||/ Name Version Architecture Description
+++-==============-============-============-============================================
ii pve-manager 8.1.5 amd64 Proxmox Virtual Environment Management Tools
--- /usr/share/perl5/PVE/Storage/LVMPlugin.pm 2024-03-31 21:05:44.756599416 -0700
+++ /usr/share/perl5/PVE/Storage/LVMPlugin.pm 2024-03-31 21:03:55.585610904 -0700
@@ -111,7 +111,8 @@
sub lvm_vgs {
my ($includepvs) = @_;
- my $cmd = ['/sbin/vgs', '--separator', ':', '--noheadings', '--units', 'b',
+ my $cmd = ['/sbin/vgs', '--config', 'devices { global_filter=["r|/dev/mapper/.*|","r|/dev/sd.*|"] }',
+ '--separator', ':', '--noheadings', '--units', 'b',
'--unbuffered', '--nosuffix', '--options'];
my $cols = [qw(vg_name vg_size vg_free lv_count)];
@@ -510,13 +511,13 @@
# In LVM2, vgscans take place automatically;
# this is just to be sure
- if ($cache->{vgs} && !$cache->{vgscaned} &&
- !$cache->{vgs}->{$scfg->{vgname}}) {
- $cache->{vgscaned} = 1;
- my $cmd = ['/sbin/vgscan', '--ignorelockingfailure', '--mknodes'];
- eval { run_command($cmd, outfunc => sub {}); };
- warn $@ if $@;
- }
+ #if ($cache->{vgs} && !$cache->{vgscaned} &&
+ # !$cache->{vgs}->{$scfg->{vgname}}) {
+ # $cache->{vgscaned} = 1;
+ # my $cmd = ['/sbin/vgscan', '--ignorelockingfailure', '--mknodes'];
+ # eval { run_command($cmd, outfunc => sub {}); };
+ # warn $@ if $@;
+ #}
# we do not acticate any volumes here ('vgchange -aly')
# instead, volumes are activate individually later
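If you go this route, something along these lines applies the change and makes the daemons pick it up (my sketch; the file name lvmplugin.diff is just an example, and a package upgrade will overwrite the patched file, so keep the diff around):

cp /usr/share/perl5/PVE/Storage/LVMPlugin.pm /root/LVMPlugin.pm.orig   # backup of the stock file
patch /usr/share/perl5/PVE/Storage/LVMPlugin.pm < lvmplugin.diff       # the diff shown above
systemctl restart pvestatd pvedaemon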
global_filter=["r|/NAS.*|", "r|/dev/disk/by-label/NAS.*|", "r|/dev/sda.*|", "r|/dev/sdb.*|", "r|/dev/sdc.*|", "r|/dev/sdd.*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|"]
Thank you for this, it is very close.

Hello, I do not have time to write a nice post, but here are my Evernote notes:
spindown / sleep disks
you can edit /etc/lvm/lvm.conf to exclude your drives. Just reject them via global filter.
- ok if:
- pvestatd stop (disable disks stats)
- then hdparm -y or -Y /dev/sda / sdd
- the 500 GB slim USB drive doesn't accept it, but the 8 TB ATLAS does
- disable selected stats: https://forum.openmediavault.org/index.php?thread/30290-proxmox-spin-down-hdd/
The problem is pvestatd, which is constantly scanning your drives.
e.g., don't scan sda and sdb:
global_filter = [ "r|/dev/sda.*|", "r|/dev/sdb.*|" ,"r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|"]
other: https://forum.proxmox.com/threads/pvestatd-doesnt-let-hdds-go-to-sleep.29727/
other: https://forum.proxmox.com/threads/upgrade-from-5-x-to-6-x-harddrives-dont-sleep-no-more.69672/
blktrace -d /dev/sdc -o - | blkparse -i - showed immediately that lvs and vgs were responsible.
I excluded the drive in lvm.conf, and it is working now:
sleep 300 && hdparm -C /dev/sdc
/dev/sdc:
drive state is: standby
Edited lvm.conf (filter by-id):
global_filter = ["r|/dev/disk/by-id/ata-ST8000DM004-2CX188_ZCT0D032.*|", "r|/dev/sdd.*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|"]
[]add /mnt/pve/BackupsDS718 to sleep DS718 ?
To manually spin down sda:
hdparm -y /dev/sda
OK, but not yet automatic (doesn't spin down after XX minutes).
Tried this (not working), added to /etc/hdparm.conf:
/dev/sda {
spindown_time = 60
apm = 127
}
This works (for the SATA 8 TB Windows datastore; disable scanning in lvm.conf first!):
- hdparm -S 12 /dev/sdc
- /dev/sdc:
- setting standby to 12 (1 minutes)
- or
- hdparm -S 240 /dev/sdc
- /dev/sdc:
- setting standby to 240 (20 minutes)
- [] But does it stay after a reboot ???
- If not, bulletproof version: create a bash file: /root/scripts/cron_hdparm.sh
#!/bin/sh
# sleep SATA 8TB datastore W2019
echo "/-----------------------------------/"
date
/usr/sbin/hdparm -C /dev/disk/by-id/ata-ST8000DM004-2CX188_ZCT0D032
/usr/sbin/hdparm -S 240 /dev/disk/by-id/ata-ST8000DM004-2CX188_ZCT0D032
#sleep 10
#hdparm -C /dev/disk/by-id/ata-ST8000DM004-2CX188_ZCT0D032
- Then Cron it twice a day !
crontab -e
#6h and 18h each day
0 6,18 * * * /root/scripts/cron_hdparm.sh >> /root/scripts/cron_hdparm.log 2>&1
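One step the notes skip (my addition): the script must be executable, otherwise cron cannot run it:

chmod +x /root/scripts/cron_hdparm.sh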
hdparm sleep configuration info:
==> spindown_time: see https://blog.bravi.org/?p=134
0 = disabled
1..240 = multiples of 5 seconds (5 seconds to 20 minutes)
241..251 = 1..11 x 30 mins
252 = 21 mins
253 = vendor defined (8..12 hours)
254 = reserved
255 = 21 mins + 15 secs
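Two worked examples based on that table (mine, not from the notes):

hdparm -S 120 /dev/sdc   # 120 x 5 s = 600 s = 10 minutes
hdparm -S 242 /dev/sdc   # in the 241..251 range: (242 - 240) x 30 min = 60 minutes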
I will not respond or provide support; take my notes as they are. Hope it helps; sorry, no time here.
If you try and it works and you have time, please give your experience below.
Save energy ! ;-)
"r|/dev/sde.*|"
"r|/dev/sde*|"
3305 1238671 udevadm info -p /sys/block/sde --query all
3305 1238671 udevadm info -p /sys/block/sde --query all
3309 1238672 /usr/sbin/smartctl -H /dev/sde
my $SMARTCTL = "/usr/sbin/smartctl";
my $SMARTCTL = "/usr/sbin/smartctl -n standby";
devices {
# added by pve-manager to avoid scanning ZFS zvols and Ceph rbds
global_filter=["r|/dev/zd.*|", "r|/dev/rbd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sda.*|", "r|/dev/sdb.*|", "r|/dev/sdc.*|", "r|/dev/sdd.*|", "r|/dev/sde.*|", "r|/dev/sdf.*|", "r|/dev/sdg.*|", "r|/dev/sdh.*|"]
}
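To confirm LVM actually sees this filter (my suggestion, not from the post):

lvmconfig devices/global_filter   # prints the global_filter value LVM is using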
blktrace -d /dev/sda -o - | blkparse -i -
I still get output like this:
8,0 5 1 0.000000000 26959 D R 36 [smartctl]
8,0 5 2 0.000662194 26959 D R 18 [smartctl]
8,0 5 3 0.001259270 26959 D R 252 [smartctl]
8,0 5 4 0.001902752 26959 D R 36 [smartctl]
8,0 5 5 0.002512097 26959 D R 32 [smartctl]
8,0 5 6 0.003184064 26959 D R 8 [smartctl]
8,0 5 7 0.003758121 26959 D R 64 [smartctl]
8,0 5 8 0.006147579 26959 D R 64 [smartctl]
8,0 17 1 0.000614037 0 C R [0]
8,0 17 2 0.001248970 0 C R [0]
8,0 17 3 0.001878224 0 C R [0]
8,0 17 4 0.002464220 0 C R [0]
8,0 17 5 0.003103347 0 C R [0]
8,0 17 6 0.003718553 0 C R [0]
8,0 17 7 0.006112511 0 C R [0]
8,0 17 8 0.007223956 0 C R [0]
8,0 17 9 0.008250268 0 C R [0]
8,0 17 10 0.008913322 0 C R [0]
8,0 17 11 0.009493070 0 C R [0]
root@proxmox:~# systemctl --type=service --state=running
UNIT LOAD ACTIVE SUB DESCRIPTION
chrony.service loaded active running chrony, an NTP client/server
cron.service loaded active running Regular background program processing daemon
dbus.service loaded active running D-Bus System Message Bus
getty@tty1.service loaded active running Getty on tty1
ksmtuned.service loaded active running Kernel Samepage Merging (KSM) Tuning Daemon
lxc-monitord.service loaded active running LXC Container Monitoring Daemon
lxcfs.service loaded active running FUSE filesystem for LXC
ModemManager.service loaded active running Modem Manager
NetworkManager.service loaded active running Network Manager
polkit.service loaded active running Authorization Manager
postfix@-.service loaded active running Postfix Mail Transport Agent (instance -)
pve-cluster.service loaded active running The Proxmox VE cluster filesystem
pve-container@105.service loaded active running PVE LXC Container: 105
pve-container@107.service loaded active running PVE LXC Container: 107
pve-firewall.service loaded active running Proxmox VE firewall
pve-ha-crm.service loaded active running PVE Cluster HA Resource Manager Daemon
pve-ha-lrm.service loaded active running PVE Local HA Resource Manager Daemon
pve-lxc-syscalld.service loaded active running Proxmox VE LXC Syscall Daemon
pvedaemon.service loaded active running PVE API Daemon
pvefw-logger.service loaded active running Proxmox VE firewall logger
pveproxy.service loaded active running PVE API Proxy Server
pvescheduler.service loaded active running Proxmox VE scheduler
pvestatd.service loaded active running PVE Status Daemon
qmeventd.service loaded active running PVE Qemu Event Daemon
rpcbind.service loaded active running RPC bind portmap service
rrdcached.service loaded active running LSB: start or stop rrdcached
spiceproxy.service loaded active running PVE SPICE Proxy Server
ssh.service loaded active running OpenBSD Secure Shell server
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running User Login Management
systemd-udevd.service loaded active running Rule-based Manager for Device Events and Files
user@0.service loaded active running User Manager for UID 0
watchdog-mux.service loaded active running Proxmox VE watchdog multiplexer
webmin.service loaded active running Webmin server daemon
wpa_supplicant.service loaded active running WPA supplicant
zfs-zed.service loaded active running ZFS Event Daemon (zed)
Editing /usr/share/perl5/PVE/Diskmanage.pm as you suggested (I tried both -n idle and -n standby), followed by service pvedaemon restart, did not change anything. Even with a broken command (so that smartctl cannot get called), it still shows smartctl checking in blktrace. So something is still calling smartctl, and I cannot say what process is responsible for it. (I can see it with htop --filter smartctl.)
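Not from this thread, but one way to find out which process keeps invoking smartctl is an audit watch on the binary (assumes the auditd package is installed):

apt install auditd                                      # once, if not already installed
auditctl -w /usr/sbin/smartctl -p x -k smartctl-exec    # log every execution of smartctl
# wait for the disk to spin up, then look up who called it:
ausearch -k smartctl-exec -i | grep -E 'exe=|ppid='     # shows the caller's parent PID; check it with ps -p <ppid>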