pvestatd wakes up HDD immediately

astrakid

Active Member
Jun 13, 2013
72
1
28
Hi,
I have the following issue:
My Proxmox host (Linux 2.6.32-20-pve) is running from USB sticks (it will be moved to an SSD soon) and has two HDDs in it. Neither HDD is in use right now. The first one (Toshiba 3TB) goes to sleep fine. The other one (Seagate 1.5TB) does not. If I force it to sleep (hdparm -y), it wakes up again within a few seconds. I found out that pvestatd is the reason for this: with that service shut down, the disk sleeps fine. Because two other HDDs (Samsung 500GB, Samsung 1.5TB) show the same behaviour, I need to solve the root cause. ;)

My questions:
1. Why is pvestatd waking up only some HDDs? Is it because of an ATA command that doesn't wake up my Toshiba? Or why does the issue not occur on that HDD?
2. How can I stop pvestatd from waking up my HDDs?
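For reference, the drive state can be checked without waking the disk via hdparm -C; here is a small sketch of parsing its output (fed with captured sample output, since the real command needs root and the actual drive):

```shell
# hdparm -C prints a "drive state is:" line, e.g. "active/idle" or
# "standby". parse_drive_state extracts just the state word.
parse_drive_state() {
    awk -F':' '/drive state is/ { gsub(/^ +/, "", $2); print $2 }'
}

# On the real host you would run (as root):
#   hdparm -C /dev/sdc | parse_drive_state
# Here we feed it sample output instead:
printf ' /dev/sdc:\n drive state is:  standby\n' | parse_drive_state
# prints: standby
```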

thanks in advance,
kind regards,
astrakid
 
Last edited:

astrakid

Active Member
Jun 13, 2013
72
1
28
Re: pvestatd wakes up HDD immediately

No hints? With pvestatd shut down I cannot see any of the graphs, which are very useful...
kind regards,
astrakid
 

spirit

Famous Member
Apr 2, 2010
5,325
522
133
www.odiso.com
Re: pvestatd wakes up HDD immediately

pvestatd gathers statistics on available free space, etc., so it needs to access the disks to do that. That's why it wakes up your disks.
 

astrakid

Active Member
Jun 13, 2013
72
1
28
Re: pvestatd wakes up HDD immediately

pvestatd gathers statistics on available free space, etc., so it needs to access the disks to do that. That's why it wakes up your disks.

Well, I don't agree with this. On the one hand, there are topics on the web saying this should not wake up disks that are not used by Proxmox:
http://forum.proxmox.com/archive/index.php/t-11836.html

=> My disks are not even mounted, yet they are woken up by Proxmox, or rather pvestatd.

On the other hand, another hard disk is not woken up by pvestatd, even though it IS mounted and known to Proxmox (the Toshiba 3TB disk)...

So I am a bit confused and unfortunately don't agree with your explanation. Nevertheless, thanks for your reply!

kind regards,
astrakid
 

spirit

Famous Member
Apr 2, 2010
5,325
522
133
www.odiso.com
Re: pvestatd wakes up HDD immediately

Well, I don't agree with this. On the one hand, there are topics on the web saying this should not wake up disks that are not used by Proxmox:
http://forum.proxmox.com/archive/index.php/t-11836.html

=> My disks are not even mounted, yet they are woken up by Proxmox, or rather pvestatd.

On the other hand, another hard disk is not woken up by pvestatd, even though it IS mounted and known to Proxmox (the Toshiba 3TB disk)...

So I am a bit confused and unfortunately don't agree with your explanation. Nevertheless, thanks for your reply!

kind regards,
astrakid

Indeed, pvestatd only manages devices/storage configured in /etc/pve/storage.cfg.
But maybe scanning LVM (vgscan) is the cause of this wakeup? Maybe adding a filter in lvm.conf could help?
http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html
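For example (a sketch only — /dev/sdc and /dev/sdd are the device paths reported later in this thread, and since letters can shift with USB devices, stable /dev/disk/by-id paths would be safer), the devices section of /etc/lvm/lvm.conf could reject the data disks explicitly:

```
devices {
    # First matching rule wins: reject the two data disks, accept the rest.
    filter = [ "r|^/dev/sdc|", "r|^/dev/sdd|", "a|.*|" ]
}
```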
 

astrakid

Active Member
Jun 13, 2013
72
1
28
Re: pvestatd wakes up HDD immediately

Indeed, pvestatd only manages devices/storage configured in /etc/pve/storage.cfg.
But maybe scanning LVM (vgscan) is the cause of this wakeup? Maybe adding a filter in lvm.conf could help?
http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html

That would be a workaround, but not my preferred solution, since I want some of the disks to be used within my Proxmox environment. And with my Toshiba it is working fine, so I guess there must be another solution. I would appreciate a hint on how to increase the log level to narrow down the root cause...

regards,
astrakid
 

spirit

Famous Member
Apr 2, 2010
5,325
522
133
www.odiso.com
Re: pvestatd wakes up HDD immediately

That would be a workaround, but not my preferred solution, since I want some of the disks to be used within my Proxmox environment. And with my Toshiba it is working fine, so I guess there must be another solution. I would appreciate a hint on how to increase the log level to narrow down the root cause...

regards,
astrakid

I don't think we can get more logging. What pvestatd does is call status() (/usr/share/perl5/PVE/Storage/LVMPlugin.pm, NFSPlugin, ...) for each storage defined in /etc/pve/storage.cfg.

Now, the LVM plugin does this:

/sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free

So I think it tries to scan all block devices, not only the ones defined in /etc/pve/storage.cfg.

Can you try this command (stop pvestatd first) and see if the disk wakes up?
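The experiment can be scripted; here is a sketch with a dry-run guard so the commands only execute when you explicitly allow it (it assumes /dev/sdc is the suspect disk, as reported later in the thread):

```shell
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 on the
# real host (as root) to actually run them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run service pvestatd stop    # stop the stats daemon first
run hdparm -y /dev/sdc       # force the suspect disk into standby
run /sbin/vgs --separator : --noheadings --units b --unbuffered \
    --nosuffix --options vg_name,vg_size,vg_free
run hdparm -C /dev/sdc       # check: did vgs wake the disk back up?
```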
 

astrakid

Active Member
Jun 13, 2013
72
1
28
Re: pvestatd wakes up HDD immediately

[...]
/sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free
[...]
Can you try this command (stop pvestatd first) and see if the disk wakes up?

I will try it at the weekend and report the results.

thanks a lot.
Nevertheless, there must be something else going on, because my other disk does not wake up. Both disks are SATA disks. The drive currently in my server showing this issue contains an LVM partition, but the other Samsung, which had this issue as well, only has a simple FAT32 partition. The Toshiba, which doesn't have this issue, is an ext4 drive.

regards,
astrakid
 

astrakid

Active Member
Jun 13, 2013
72
1
28
Re: pvestatd wakes up HDD immediately

The command is waking up my disk. OK, understood.
What I don't understand: why is my Toshiba 3TB not woken up by this command?


regards,
astrakid
 

spirit

Famous Member
Apr 2, 2010
5,325
522
133
www.odiso.com
Re: pvestatd wakes up HDD immediately

The command is waking up my disk. OK, understood.
What I don't understand: why is my Toshiba 3TB not woken up by this command?


regards,
astrakid

It is strange that it doesn't wake up. Maybe it is already being filtered in lvm.conf?
What are the /dev/... paths for both disks?
 

astrakid

Active Member
Jun 13, 2013
72
1
28
Re: pvestatd wakes up HDD immediately

It is strange that it doesn't wake up. Maybe it is already being filtered in lvm.conf?
What are the /dev/... paths for both disks?

This is my lvm.conf:
Code:
root@server:/etc/icinga/objects# grep -v "^#" /etc/lvm/lvm.conf |grep -v \ *\#|grep -v "^$"
devices {
    dir = "/dev"
    scan = [ "/dev" ]
    obtain_device_list_from_udev = 1
    preferred_names = [ ]
    filter = [ "a/.*/" ]
    cache_dir = "/run/lvm"
    cache_file_prefix = ""
    write_cache_state = 1
    sysfs_scan = 1
    multipath_component_detection = 1
    md_component_detection = 1
    md_chunk_alignment = 1
    data_alignment_detection = 1
    data_alignment = 0
    data_alignment_offset_detection = 1
    ignore_suspended_devices = 0
    disable_after_error_count = 0
    require_restorefile_with_uuid = 1
    pv_min_size = 2048
    issue_discards = 1
}
log {
    verbose = 1
    syslog = 1
    file = "/var/log/lvm2.log"
    overwrite = 0
    level = 2
    indent = 1
    command_names = 1
    prefix = "  "
}
backup {
    backup = 1
    backup_dir = "/etc/lvm/backup"
    archive = 1
    archive_dir = "/etc/lvm/archive"
    retain_min = 10
    retain_days = 30
}
shell {
    history_size = 100
}
global {
    umask = 077
    test = 0
    units = "h"
    si_unit_consistency = 1
    activation = 1
    proc = "/proc"
    locking_type = 1
    wait_for_locks = 1
    fallback_to_clustered_locking = 1
    fallback_to_local_locking = 1
    locking_dir = "/run/lock/lvm"
    prioritise_write_locks = 1
    abort_on_internal_errors = 0
    detect_internal_vg_cache_corruption = 0
    metadata_read_only = 0
    mirror_segtype_default = "mirror"
    use_lvmetad = 0
}
activation {
    checks = 0
    udev_sync = 1
    udev_rules = 1
    verify_udev_operations = 0
    retry_deactivation = 1
    missing_stripe_filler = "error"
    use_linear_target = 1
    reserved_stack = 64
    reserved_memory = 8192
    process_priority = -18
    mirror_region_size = 512
    readahead = "auto"
    raid_fault_policy = "warn"
    mirror_log_fault_policy = "allocate"
    mirror_image_fault_policy = "remove"
    snapshot_autoextend_threshold = 100
    snapshot_autoextend_percent = 20
    thin_pool_autoextend_threshold = 100
    thin_pool_autoextend_percent = 20
    thin_check_executable = "/sbin/thin_check -q"
    use_mlockall = 0
    monitoring = 0
    polling_interval = 15
}
dmeventd {
    mirror_library = "libdevmapper-event-lvm2mirror.so"
    snapshot_library = "libdevmapper-event-lvm2snapshot.so"
    thin_library = "libdevmapper-event-lvm2thin.so"
}

I increased the log and verbose levels a few minutes ago to get more information in the logfiles.
The device names change because I had lots of USB sticks plugged into my server, up to /dev/sdh. Currently it is /dev/sdc (Seagate, waking up) and /dev/sdd (Toshiba, sleeping well).
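Since the letters move around, /dev/disk/by-id gives names that stay stable across reboots and USB changes. A sketch (the disk id below is made up for illustration — on the real host, `ls -l /dev/disk/by-id/ata-*` shows the actual entries):

```shell
# On a real host, each ata-* entry is a symlink to the current /dev/sdX:
#   ls -l /dev/disk/by-id/ata-*
# Demonstrated here with a temporary stand-in symlink, since no real
# disks are available in this sketch:
d=$(mktemp -d)
ln -s /dev/sdc "$d/ata-EXAMPLE-DISK-ID"    # hypothetical id, for illustration
readlink "$d/ata-EXAMPLE-DISK-ID"          # prints: /dev/sdc
```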

regards and thanks for your support,
astrakid
 

astrakid

Active Member
Jun 13, 2013
72
1
28
Re: pvestatd wakes up HDD immediately

Hi,
here is the latest information:
I removed the Seagate 1.5TB and replaced it with the Samsung 1.5TB (UI154). Same behaviour: when pvestatd is running, the Samsung is prevented from going to standby, while the Toshiba goes to standby...??? Both hard disks are mounted at /mnt/<label>, and both are set to the hdparm parameters -B 128, -M 128 and -S 180.
The Toshiba goes to standby automatically after the configured timeout; the Samsung stays awake. When I send the Samsung to standby with hdparm -y, it wakes up again a few seconds later. Stopping pvestatd results in the Samsung going to sleep.
I don't understand why one hard disk is hindered and the other is not...

Is there any way to increase the log information to find the root cause? The command "/sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free" should affect both disks or neither...

kind regards,
astrakid
 

berni

New Member
Mar 25, 2010
4
0
1
Hi astrakid,
same situation here with a Seagate 3TB drive and Proxmox 2.13. This drive is needed in one VM only, so as a workaround I tried to make it directly available to that VM as an IDE device. That leaves the disk sleeping well, but with the disk in use it produces DMA overflows in the VM, even in cache mode "unsafe" ... so back to LVM.

Could you find a solution or workaround?

Regards, berni
 

astrakid

Active Member
Jun 13, 2013
72
1
28
No, there is no solution...
Would be great to get support... ;-)

regards,
astrakid
 

rubictus

Member
Nov 3, 2015
5
0
21
I had the issue, and fixed it.
In /etc/lvm/lvm.conf, simply set:
use_lvmetad = 1

Then restart the server. Make sure you still have access to it, as it may not reboot properly, especially if you have LVM on MD devices. I had to apply the quick and dirty workaround from post #74 of this thread:
bugs.debian.org/cgi-bin/bugreport.cgi?bug=774082#74
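The edit itself can be done with sed; a sketch that operates on a throwaway sample file here — on an actual host you would point it at the real /etc/lvm/lvm.conf (after backing it up):

```shell
# Sample fragment standing in for /etc/lvm/lvm.conf:
conf=$(mktemp)
printf 'global {\n    use_lvmetad = 0\n}\n' > "$conf"

cp "$conf" "$conf.bak"                               # always keep a backup
sed -i 's/^\( *use_lvmetad *= *\)0/\11/' "$conf"     # flip 0 -> 1
grep use_lvmetad "$conf"                             # now shows "= 1"
```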


# pveversion --verbose
proxmox-ve: 4.4-87 (running kernel: 4.4.59-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.59-1-pve: 4.4.59-87
 
Last edited:
