External USB disk spindown problem since 5.0

daveb

New Member
Nov 9, 2017
Newton, MA
My external USB disks (used for backup) are not allowed to spin down when
running the latest Proxmox VE, version 5.1. On other systems where I have used
these same disks, they spin down after not being accessed for some time.

I only need to plug in the drive, without configuring it in any way,
and I see the drive activity LED flash every 10 seconds. Of course, the
drive is never allowed to spin down.

I don't recall this behavior on the original version of Proxmox that I
installed (the original release of 5.0).

So, on another machine, I retested:

- Installed the original release of Proxmox 5.0 from 04.07.2017:
Attached the external drive and do NOT see the frequent access that I see with the newest Proxmox.
After I wait for the drive to sleep, I see a delay when I choose the Disks menu
item as the disk spins up. Just as I would expect.

Saved pvetest-report-Thu-14-December-2017-15-58.txt

- Upgraded the OS to the latest Debian and retested. No change; it still works as expected.

Saved pvetest-report-Thu-14-December-2017-19-53.txt

- Tested with a new install of the updated version of 5.0 from 09.08.2017:
The behavior is now the same as what is seen on version 5.1.
The external hard disk is accessed approximately every 10 seconds.

Saved pvetest-report-Thu-14-December-2017-20-16.txt

So I didn't bother retesting with 5.1, since the problem already appears with
the updated version of 5.0.

Is there some way to prevent this from happening on drives that are only used for backup?

Is there some other debug activity you would like me to perform?

Thanks.
 

Attachments

  • pvetest-report-Thu-14-December-2017-15-58.txt
  • pvetest-report-Thu-14-December-2017-20-16.txt

t.lamprecht

Proxmox Staff Member
Jul 28, 2015
South Tyrol/Italy
Hi,

as you do not have an entry for the HDD in fstab, and do not track it as a PVE storage either, the PVE stack shouldn't really touch it. I'm currently not aware of a change on our side which could trigger this behavior; I suspect the newer kernel and some slightly different defaults, possibly.

Can you try to set the spindown timeout with the hdparm tool?
Code:
hdparm -S 60 /dev/sdX

Note that the value above is not in seconds, but a multiple of 5 seconds (so 60 => 5 minutes). As that would still be too straightforward, it doesn't always mean a multiple of 5:
Values from 1 to 240 specify multiples of 5 seconds, yielding timeouts from 5 seconds to 20 minutes. Values from 241 to 251 specify from 1 to 11 units of 30 minutes, yielding timeouts from 30 minutes to 5.5 hours.
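As a quick sanity check of that mapping, here is a small shell helper (a sketch; the function name is my own) that converts an -S value into the resulting timeout in seconds, following the rules from hdparm(8):

```shell
# spindown_seconds: translate an `hdparm -S` value into a timeout in seconds.
spindown_seconds() {
    v=$1
    if [ "$v" -ge 1 ] && [ "$v" -le 240 ]; then
        echo $((v * 5))                 # multiples of 5 seconds
    elif [ "$v" -ge 241 ] && [ "$v" -le 251 ]; then
        echo $(((v - 240) * 1800))      # units of 30 minutes
    else
        echo 0                          # 0 disables the spindown timeout
    fi
}

spindown_seconds 60    # -> 300 (5 minutes)
spindown_seconds 242   # -> 3600 (1 hour)
```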

If this works, you could add an entry in /etc/hdparm.conf for this disk, so you do not need to call hdparm -S again after a reboot.
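For reference, a minimal /etc/hdparm.conf entry could look like the fragment below. The by-id path is a placeholder for your actual drive; by-id paths are preferable here because /dev/sdX names can change between boots:

```
# /etc/hdparm.conf -- spin the backup disk down after 5 minutes of inactivity
/dev/disk/by-id/usb-My_Backup_Drive-0:0 {
    spindown_time = 60
}
```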
 

Dark26

Active Member
Nov 27, 2017
Is the file system on the drive ext4?

Perhaps the activity is caused by ext4's journaling.
 

Dorin

Member
Sep 11, 2017
My system shows similar behavior.
Every ~5 s the activity LED flashes, and after that I can hear a different noise coming from the hard drives, even when the system is idle (just an active SSH session).
I don't want to spin them down, but it looks like these hard drives are being "stressed" pointlessly.
What do you guys think?

Code:
# uname -r
4.13.13-2-pve

# zpool iostat rpool 60
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       2.37T   354G      2     26  56.1K   261K
rpool       2.37T   354G      0     20  3.40K   226K
rpool       2.37T   354G      0     22      0   247K
rpool       2.37T   354G      0     23      0   248K
rpool       2.37T   354G      0     23      0   246K
^C

# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
            sdc2    ONLINE       0     0     0

errors: No known data errors
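For what it's worth, a write burst roughly every 5 seconds on an otherwise idle ZFS pool matches ZFS's default transaction-group sync interval. You can check the module parameter (the default is 5, in seconds):

```shell
# ZFS flushes each open transaction group to disk after this many seconds
cat /sys/module/zfs/parameters/zfs_txg_timeout
```

Raising it only spaces the flushes out rather than eliminating them, since ZFS still has to sync dirty data and its own metadata periodically.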
 

Dark26

Active Member
Nov 27, 2017
You can try to find out what is spinning up your disk with:
Code:
echo 1 > /proc/sys/vm/block_dump

and then see what happens in dmesg.

Be careful, that produces a lot of log output!
 

Dorin

Member
Sep 11, 2017
You can try to find out what is spinning up your disk with:
Code:
echo 1 > /proc/sys/vm/block_dump

and then see what happens in dmesg.

Be careful, that produces a lot of log output!
Thanks.
With "dmesg | grep -i block" I got a very long list of entries like:
Code:
[ 4670.941087] z_wr_iss(349): WRITE block 1497164472 on sdb2 (8 sectors)
[ 4670.941088] z_wr_iss(348): WRITE block 1497164464 on sdb2 (8 sectors)
[ 4670.941090] z_wr_iss(350): WRITE block 1497164472 on sda2 (8 sectors)
[ 4670.941092] z_wr_iss(349): WRITE block 1497164472 on sdc2 (8 sectors)
[ 4670.941175] z_wr_int_4(356): WRITE block 1497164480 on sda2 (16 sectors)
[ 4670.941178] z_wr_int_1(353): WRITE block 1497164480 on sdb2 (8 sectors)
[ 4670.941236] z_wr_int_3(355): WRITE block 1742164720 on sda2 (32 sectors)
[ 4670.941262] z_wr_int_5(357): WRITE block 1742164720 on sdb2 (24 sectors)
[ 4670.941298] z_wr_int_5(357): WRITE block 54584520 on sda2 (24 sectors)
[ 4670.941316] z_wr_int_1(353): WRITE block 1497164480 on sdc2 (8 sectors)
...
[ 4676.059377] z_wr_iss(349): WRITE block 1496998968 on sdc2 (8 sectors)
[ 4676.059387] z_wr_iss(348): WRITE block 1497164536 on sdb2 (16 sectors)
[ 4676.059397] z_wr_iss(349): WRITE block 1496998968 on sdb2 (8 sectors)
...
[ 4681.179435] z_wr_iss(348): WRITE block 1496998984 on sda2 (8 sectors)
[ 4681.179436] z_wr_iss(349): WRITE block 1496998992 on sdb2 (8 sectors)
[ 4681.179454] z_wr_iss(349): WRITE block 1496998992 on sda2 (8 sectors)
...

Now I have to figure out what these are: z_null_iss, z_wr_int_0, ..., z_wr_int_7, z_wr_iss.
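Those z_wr_* names are ZFS I/O taskq threads (z_wr_iss issues writes, z_wr_int_N handles write completions), so the writes are coming from ZFS itself. To see at a glance which threads dominate the block_dump output, a small awk pipeline can tally the dmesg lines (a sketch; the function name is my own, and it assumes the "name(pid): WRITE block ..." line format shown above):

```shell
# summarize_block_dump: count READ/WRITE block_dump lines per process name,
# reading dmesg-style lines on stdin and printing "count name", busiest first.
summarize_block_dump() {
    awk '/READ block|WRITE block/ {
             for (i = 2; i <= NF; i++)
                 if ($i == "READ" || $i == "WRITE") {
                     proc = $(i - 1)           # e.g. "z_wr_iss(349):"
                     sub(/\(.*/, "", proc)     # strip the "(pid):" suffix
                     count[proc]++
                     break
                 }
         }
         END { for (p in count) printf "%d %s\n", count[p], p }' | sort -rn
}

dmesg | summarize_block_dump | head
```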
 

daveb

New Member
Nov 9, 2017
Newton, MA
As for the difference in behavior within 5.0: there were modifications to sub storage_info in pve-storage.pm that changed how storage was determined to be 'enabled'. So I'm assuming the change in behavior was deliberate.
In any case, it is more interesting that the disks are accessed so frequently once they are configured. In my case, I made sure I was using a lower-power drive for my USB backups, such as a WD Blue.
 
