syslog full of systemd-udevd timeout messages

stra4d

Renowned Member
Mar 1, 2012
Not sure what is happening with our system; it seems to be quite slow. In /var/log/syslog we have a lot of systemd-udevd timeout messages:

Code:
Oct 31 02:48:34 servername pvedaemon[16994]: <root@pam> successful auth for user 'root@pam'
Oct 31 02:50:52 servername systemd-udevd[27508]: timeout 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger'
Oct 31 02:50:53 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:50:54 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:50:54 servername systemd-udevd[27506]: timeout 'ata_id --export /dev/sdc'
Oct 31 02:50:55 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:50:55 servername systemd-udevd[27506]: timeout: killing 'ata_id --export /dev/sdc' [27519]
Oct 31 02:50:56 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:50:56 servername systemd-udevd[27506]: timeout: killing 'ata_id --export /dev/sdc' [27519]
Oct 31 02:50:57 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:50:57 servername systemd-udevd[27506]: timeout: killing 'ata_id --export /dev/sdc' [27519]
Oct 31 02:50:58 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:50:58 servername systemd-udevd[27506]: timeout: killing 'ata_id --export /dev/sdc' [27519]
Oct 31 02:50:59 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:50:59 servername systemd-udevd[27506]: timeout: killing 'ata_id --export /dev/sdc' [27519]
Oct 31 02:51:00 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:51:00 servername systemd-udevd[27506]: timeout: killing 'ata_id --export /dev/sdc' [27519]
Oct 31 02:51:01 servername systemd-udevd[27508]: timeout: killing 'udisks-lvm-pv-export vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger' [27510]
Oct 31 02:51:01 servername rsyslogd-2007: action 'action 17' suspended, next retry is Tue Oct 31 02:51:31 2017 [try http://www.rsyslog.com/e/2007 ]
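As a quick sanity check, the same kind of device query that udev is killing can be run and timed by hand; if it stalls, the disk itself is likely slow to respond rather than udev misbehaving. A minimal check (device name taken from the log above, adjust as needed):

Code:
# watch incoming udev events and their properties while the system runs
udevadm monitor --udev --environment

# time a full property query against the disk named in the timeouts;
# if this hangs for several seconds, the device itself is answering slowly
time udevadm info --query=all --name=/dev/sdc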

The UUID appears to belong to the main LVM physical volume (sda3):
Code:
# lsblk -f
NAME         FSTYPE  LABEL UUID                                   MOUNTPOINT
sda
  sda1
  sda2       vfat          043E-EC1B
  sda3       LVM2_me       vKWuRT-Z7tJ-e66n-YxAP-T26C-tsa8-ow6Ger
    pve-root ext3          30fa0080-925e-43ff-883f-c93369f228c7   /
    pve-swap swap          3c595c07-8abd-4076-b74a-105747b8c41d   [SWAP]
    pve-data ext3          487392cf-36ff-435e-9e34-e24aa446bca9   /var/lib/vz
sdb
  sdb1       ext3          969b3159-4c77-4639-9674-7bc91630e463   /drive2
sdc
  sdc1       ext4          beb94a94-df9b-41dd-b0f2-4feb2d7aff68   /opt/microl...
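The same UUID can also be cross-checked against the LVM physical volumes directly; a minimal check with the stock LVM tools (output columns chosen here as an example):

Code:
# list physical volumes with their UUIDs; the UUID from the udev
# timeout messages should appear next to /dev/sda3
pvs -o pv_name,vg_name,pv_uuid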

Any ideas as to what could be causing the slowdown? The drives appear to be healthy according to the server.

System info:
Code:
# pveversion -v
proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
pve-manager: 4.3-9 (running version: 4.3-9/f7c6f0cd)
pve-kernel-4.4.21-1-pve: 4.4.21-71
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-92
pve-firmware: 1.1-10
libpve-common-perl: 4.0-79
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-68
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-12
pve-qemu-kvm: 2.7.0-4
pve-container: 1.0-80
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
 
Hi,

I would check the health of your disks.

We can only support current versions of PVE.
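A minimal health check with smartmontools (already installed per the package list above), using the disk named in the timeouts as an example:

Code:
# quick overall health verdict
smartctl -H /dev/sdc

# full SMART attributes and error log; watch for reallocated, pending
# or uncorrectable sector counts and a growing error log
smartctl -a /dev/sdc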
 
Proxmox VE is a rolling release, so I can't tell you for sure, but I'm not aware of any such bug.