# dstat -D sdg -ta --top-bio
----system---- --total-cpu-usage-- --dsk/sdg-- -net/total- ---paging-- ---system-- ----most-expensive----
time |usr sys idl wai stl| read writ| recv send| in out | int csw | block i/o process
10-05 09:36:21| 3 2 94 1 0| 22k 20k| 0 0 |5462B 6291B|4962 12k|z_rd_int 1100k 0
10-05 09:36:22| 2 1 97 0 0| 0 0 | 18k 21k| 0 0 |5078 10k|miniserv.pl 449k 23k
10-05 09:36:23| 2 1 97 0 0| 0 0 | 46k 45k| 0 0 |3313 5655 |webm 175k 0
10-05 09:36:24| 2 1 96 1 0| 0 0 | 23k 26k| 0 0 |4062 11k|webm 274k 23k
10-05 09:36:25| 1 1 98 0 0| 0 0 | 38k 38k| 0 0 |3142 5481 |pmxcfs 0 40k
10-05 09:36:26| 1 1 98 0 0| 0 0 | 39k 39k| 0 0 |3316 5882 |miniserv.pl 449k 23k
10-05 09:36:27| 2 1 97 0 0| 0 0 | 43k 42k| 0 0 |3720 6354 |webm 175k 0
10-05 09:36:28| 2 1 98 0 0| 0 0 | 52k 55k| 0 0 |3530 6236 |webm 274k 23k
10-05 09:36:29| 1 1 96 2 0| 0 0 | 36k 33k| 0 0 |3860 9926 |pihole-FTL 100B 0
10-05 09:36:30| 3 2 95 0 0| 0 0 | 53k 54k| 0 0 |3959 6976 |miniserv.pl 449k 23k
10-05 09:36:31| 2 1 97 0 0| 0 0 | 35k 37k| 0 0 |3487 5828 |webm 175k 0
10-05 09:36:32| 3 1 96 0 0| 0 0 | 27k 28k| 0 0 |4287 7516 |webm 274k 23k
10-05 09:36:33| 1 1 98 0 0| 0 0 | 27k 26k| 0 0 |3803 6633 |zvol 0 160k
10-05 09:36:34| 1 2 96 1 0| 0 0 | 43k 39k| 0 0 |4101 11k|miniserv.pl 449k 23k
10-05 09:36:35| 2 1 98 0 0| 0 0 | 34k 41k| 0 0 |3351 5569 |webm 175k 0
10-05 09:36:36| 2 1 98 0 0| 0 0 | 47k 47k| 0 0 |3441 5946 |webm 274k 23k
10-05 09:36:37| 1 1 98 0 0| 0 0 | 37k 34k| 0 0 |3668 6080 |sshd 32k 0
10-05 09:36:38| 1 1 98 0 0| 0 0 | 42k 43k| 0 0 |3144 5625 |miniserv.pl 449k 23k
10-05 09:36:39| 2 2 94 2 0| 0 0 | 48k 47k| 0 0 |4288 11k|rrdcached 0 276k
10-05 09:36:40| 3 2 95 0 0| 0 0 | 34k 35k| 0 0 |4047 6860 |webm 274k 23k
10-05 09:36:41| 1 1 98 0 0| 0 0 | 36k 39k| 0 0 |3265 5569 |jbd2/dm-1-8 0 36k
10-05 09:36:42| 1 1 98 0 0| 0 0 | 31k 33k| 0 0 |3050 5279 |miniserv.pl 449k 23k
10-05 09:36:43| 2 1 97 0 0| 0 0 | 39k 38k| 0 0 |3426 5687 |webm 175k 0
10-05 09:36:44| 2 2 95 1 0| 0 0 | 31k 30k| 0 0 |4267 11k|webm 274k 23k
10-05 09:36:45| 1 1 98 0 0| 0 0 | 43k 46k| 12k 0 |3196 5504 |pmxcfs 0 60k
10-05 09:36:46| 1 1 98 0 0| 0 0 | 30k 33k| 0 0 |3164 5445 |miniserv.pl 449k 23k
10-05 09:36:47| 2 1 97 0 0| 0 0 | 39k 41k| 0 0 |3976 6736 |webm 175k 0
10-05 09:36:48| 2 1 96 0 0| 0 0 | 35k 34k| 0 0 |3533 6294 |webm 274k 23k
10-05 09:36:49| 2 1 95 1 0| 0 0 | 28k 28k| 0 0 |3917 10k|pihole-FTL 100B 0
# hdparm -C /dev/sdg
/dev/sdg:
SG_IO: bad/missing sense data, sb[]: f0 00 01 00 50 00 81 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
drive state is: unknown
#
This depends a lot on the disks. smartctl -i -n standby /dev/sdg did not work for me, but the option -n idle did:
smartctl -i -n idle /dev/sdg
and in smartd.conf: -n idle,6,q
# smartctl -i -n idle /dev/sdg
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.4.106-1-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
Device is in IDLE_A mode, exit(2)
#
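Since hdparm -C only reported "unknown" above, the -n check is also a handy way to see the drive's power state without waking it up. A small sketch, assuming smartctl's default exit status of 2 when the check is skipped:

smartctl -i -n idle /dev/sdg
echo "smartctl exit status: $?"   # 2 here means the check was skipped because the drive is in IDLE/STANDBY/SLEEP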
/dev/sdg -n idle,6,q
Yes, that should work.
DEFAULT -a -I 194 -W 0,45 -R 5! -n idle,6,q -m root
/dev/disk/by-id/ata-WDC_WD80EZAZ-11TDBA0_xxxxxxxxx -R 1! -R 22! -R 196! -l scterc,70,70
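For readers who have not used smartd's power-mode handling before, here is a commented sketch of how those two lines could sit in /etc/smartd.conf; the comments are my reading of the smartd.conf directives, and the service name for the reload is an assumption that may vary by distribution:

# /etc/smartd.conf (sketch; the two directive lines are the ones quoted above)
# DEFAULT applies to every device line that follows it:
#   -a            monitor all SMART properties
#   -n idle,6,q   skip the periodic check while the disk is in IDLE, STANDBY or
#                 SLEEP mode so smartd does not spin it up, but force a check
#                 after at most 6 skipped attempts; 'q' suppresses log entries
#                 for the skipped checks
#   -m root       mail warnings to root
DEFAULT -a -I 194 -W 0,45 -R 5! -n idle,6,q -m root
# per-device line: raw-value tracking for selected attributes plus SCT error
# recovery control set to 7.0 seconds on this WD drive
/dev/disk/by-id/ata-WDC_WD80EZAZ-11TDBA0_xxxxxxxxx -R 1! -R 22! -R 196! -l scterc,70,70

After editing, restart the daemon so it picks up the new directives (on Debian/Proxmox the unit is usually smartd.service or smartmontools.service).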
dstat -D sdc -ta --top-bio
----system---- --total-cpu-usage-- --dsk/sda-- -net/total- ---paging-- ---system-- ----most-expensive----
time |usr sys idl wai stl| read writ| recv send| in out | int csw | block i/o process
15-06 22:31:13| 16 6 76 3 0| 381k 23M| 0 0 | 5B 32B| 17k 8054 |proxmox-bac 382k 22M
15-06 22:31:14| 0 1 99 0 0| 0 0 | 120B 1078B| 0 0 | 42 65 |
15-06 22:31:15| 1 0 98 0 0| 0 2048k| 734B 406B| 0 0 | 88 122 |jbd2/dm-1-8 0 4096B
15-06 22:31:16| 8 2 90 0 0| 0 0 |5749B 9766B| 0 0 | 192 200 |proxmox-bac 0 4096B
15-06 22:31:17| 1 0 99 0 0| 0 0 |1711B 759B| 0 0 | 54 62 |
15-06 22:31:18| 1 0 99 0 0| 0 0 | 60B 406B| 0 0 | 57 81 |
15-06 22:31:19| 1 0 99 0 0| 0 0 | 60B 406B| 0 0 | 33 45 |
15-06 22:31:20| 1 0 98 0 0| 0 2048k| 794B 406B| 0 0 | 64 80 |
15-06 22:31:21| 1 0 98 1 0| 0 12k|1985B 743B| 0 0 | 81 108 |jbd2/dm-1-8 0 16k
15-06 22:31:22| 0 0 99 0 0| 0 0 | 272B 438B| 0 0 | 63 70 |
15-06 22:31:23| 0 0 99 0 0| 0 0 | 60B 406B| 0 0 | 31 53 | ^X
15-06 22:31:24| 1 0 99 0 0| 0 0 | 238B 524B| 0 0 | 34 56 |
15-06 22:31:25| 1 0 98 1 0| 0 2048k|2599B 829B| 0 0 | 106 155 |jbd2/dm-1-8 0 8192B
15-06 22:31:26| 7 4 87 3 0| 0 0 |7144B 10k| 0 0 | 206 248 |proxmox-bac 0 8192B^C
systemctl stop proxmox-backup-proxy.service
Thank you for the hint. Here I use a crontab to start & stop PBS to prevent it from monitoring its datastores.
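A minimal sketch of what such a crontab could look like; the times and the /etc/cron.d path are made up for illustration, only the systemctl commands come from this thread:

# /etc/cron.d/pbs-window  (example times; adjust to your own backup window)
# start the PBS proxy shortly before the nightly backups
50 1 * * * root systemctl start proxmox-backup-proxy.service
# stop it again afterwards so it no longer touches the datastore disks
30 4 * * * root systemctl stop proxmox-backup-proxy.service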
Would an alternative be to disable pvestatd and still be able to use LVM on the disks?

Hi,
you might have the problem that you want to spin down some disks, but it doesn't work as planned.
I just spent two hours debugging this, so I decided to write this short tutorial.
Symptom:
You have multiple drives in your Proxmox machine and want some of them (like all HDDs) to spin down after a while.
Let's say you want `sdc` to spin down when unused.
However, the drives get constant reads, as you can find out via:
dstat -D sdc -ta --top-bio
-> constant 44k reads every few seconds.
Analysis:
If you google this, you will find a lot of threads claiming that pvestatd is responsible and asking for a fix on that side, which isn't possible.
However, pvestatd just seems to call the LVM utilities, like `vgscan`.
If you execute `vgscan` in a second session, you will see the same 44k read on your drive.
Luckily, LVM has an option to tell vgscan to never look at your drive. (That of course means you cannot use LVM on sdc, which is okay in my case because I only run ZFS on it.)
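To reproduce this yourself, the following (same commands as above, just run side by side) should show the reads appearing exactly when vgscan runs:

# terminal 1: watch per-process block I/O on the disk that should be sleeping
dstat -D sdc -ta --top-bio
# terminal 2: trigger the scan that pvestatd appears to run periodically
vgscan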
Fix:
Open the file:
/etc/lvm/lvm.conf
and modify the global_filter from:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" ]
to:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/sdc*|" ]
Add an entry for each drive you want vgscan to ignore in the future.
Save and run
vgscan
once to reload the config; you should no longer see the reads on the drive.
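To double-check that the new filter is actually in effect, something along these lines should work (lvmconfig prints the merged LVM configuration; the exact output format may differ between LVM versions):

# show the global_filter LVM is actually using
lvmconfig devices/global_filter
# rescan once; the ignored disk should cause no further reads in dstat
vgscan
dstat -D sdc -ta --top-bio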
Hope I can help some of you.