[SOLVED] udev malfunction (udisksd high cpu load)

Myst

Member
Hello,

I have an issue with "udev". I detected this because of a high CPU load:
Code:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1197 root      20   0  394632  14272  10392 S  45.8   0.0 102:40.37 udisksd
   1171 message+  20   0  195716 180724   3960 S  34.2   0.3  74:33.31 dbus-daemon
 217957 root      20   0   22448   3724   2188 S  30.6   0.0  68:45.21 systemd-udevd
 218074 root      20   0   22448   3724   2188 R  30.2   0.0  72:12.04 systemd-udevd
      1 root      20   0  165116  11416   7720 S  25.9   0.0  55:50.06 systemd
 218073 root      20   0   22448   3724   2188 S  13.6   0.0   1:17.91 systemd-udevd
   1196 root      20   0  177652   7268   6384 S  11.6   0.0  25:35.39 systemd-logind
    775 root      20   0   22400   5480   4048 S  11.0   0.0  24:55.80 systemd-udevd
 238836 root      20   0   16432  10028   7456 S  10.6   0.0   1:25.03 systemd

After some research, I found the "udevadm monitor" command, which shows this:
Code:
[...]
KERNEL[53174.145591] change   /devices/virtual/block/dm-15 (block)
UDEV  [53174.148426] change   /devices/virtual/block/dm-16 (block)
KERNEL[53174.150561] change   /devices/virtual/block/dm-66 (block)
UDEV  [53174.151355] change   /devices/virtual/block/dm-15 (block)
KERNEL[53174.154048] change   /devices/virtual/block/dm-16 (block)
KERNEL[53174.157705] change   /devices/virtual/block/dm-15 (block)
UDEV  [53174.159710] change   /devices/virtual/block/dm-16 (block)
UDEV  [53174.163470] change   /devices/virtual/block/dm-15 (block)
KERNEL[53174.163596] change   /devices/virtual/block/dm-16 (block)
UDEV  [53174.165404] change   /devices/virtual/block/dm-66 (block)
[...]

Block devices 15, 16 and 66 are LXC containers:
Code:
lvdisplay|awk  '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'
dm-15 vm-103-disk-0
dm-16 vm-104-disk-0
dm-66 vm-122-disk-0

I tried rebooting to see if that would resolve the issue; it seemed to at first, but it was back this morning.
I tried cloning LXC 104, but I immediately got the same issue with its clone, LXC 122.
My containers are automatically backed up during the night, so maybe that is the trigger.

I only have 3 LXC containers (now) and all others are "classic" VMs.

I recently upgraded from V6 to V7 without an export/import; I don't know when the issue first occurred.

Thank you for your help,

# pveversion --verbose
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-6
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.4.140-1-pve: 5.4.140-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-14
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
 
I have the same symptoms, also following an upgrade. I have 3 containers and 1 VM, but they were all stopped at the time of the issue. For what it's worth, I have 8 disks organized in 2 ZFS pools.

Restarting udev (systemctl restart udev) postpones the issue a bit, but calling udevadm trigger starts the load right away, which can then be seen with udevadm monitor.
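In short:
Code:
systemctl restart udev   # postpones the issue for a short while
udevadm trigger          # brings the load back immediately
udevadm monitor          # shows the flood of change events on the block devices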

pveversion:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20210831-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-16
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
 
Self-replying, this is less about udev and more about udisks2.

udisks2 (which provides udisksd) is not installed by default on Proxmox, and doesn't seem to be needed unless the machine is a workstation.

It can be uninstalled (apt-get remove udisks2) unless other packages depend on it (apt-cache rdepends --installed udisks2).
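Roughly, assuming nothing else on the system needs it:
Code:
# check whether any installed package still depends on udisks2
apt-cache rdepends --installed udisks2
# if nothing important shows up, remove it
apt-get remove udisks2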

If you need the package but want to disable the service, you can mask it to prevent it from running, then stop it:
Code:
systemctl mask udisks2.service
systemctl stop udisks2.service

EDIT: I remember installing it to use udisksctl unmount. Stopping the service or removing the package stops the CPU load immediately.
 
Hello yunacchi,

Thank you for your messages.

I missed your posts, but I arrived at the same solution this morning. We followed the same line of thinking.

I came back to this thread to post my solution, but it's the same as yours: uninstall the udisks2 package.

Thank you again
 
In my case, the following udev rule fixed the problem:

ACTION=="add|change", KERNEL=="dm-*", OPTIONS:="nowatch"

in the file /etc/udev/rulesd/90-fixdm.rules

followed by the command:

systemctl restart udev
 
Having a workstation setup but using ZFS also led to the high CPU load. Running udevadm monitor showed entries like the following when an LXC container was started.
Code:
KERNEL[210.092705] change   /devices/virtual/block/loop0 (block)
UDEV  [210.095130] change   /devices/virtual/block/loop0 (block)
KERNEL[210.098811] change   /devices/virtual/block/loop0 (block)
UDEV  [210.101638] change   /devices/virtual/block/loop0 (block)
KERNEL[210.105983] change   /devices/virtual/block/loop0 (block)
UDEV  [210.108627] change   /devices/virtual/block/loop0 (block)
KERNEL[210.112380] change   /devices/virtual/block/loop0 (block)
UDEV  [210.114810] change   /devices/virtual/block/loop0 (block)
Adding a new udev rule similar to the one above, but with the line
Code:
ACTION=="change",KERNEL=="loop*",OPTIONS:="nowatch"
fixed it.
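A minimal sketch of applying it, assuming an arbitrary example file name (90-fixloop.rules) and a rule reload instead of a full udev restart:
Code:
# /etc/udev/rules.d/90-fixloop.rules  (example file name)
ACTION=="change",KERNEL=="loop*",OPTIONS:="nowatch"

# reload the rules, then watch block events again to confirm they calm down
udevadm control --reload-rules
udevadm monitor --subsystem-match=block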

I would suggest adding this to the Wiki entry for the workstation setup.
 
Confirming that the udev rules fixed this issue for me on a workstation.

The only addition I'll make is the correct path/file:

/etc/udev/rules.d/90-fixdm.rules
NOT
/etc/udev/rulesd/90-fixdm.rules
 
Just wanted to thank the people who took the time to share their solutions. I was flabbergasted by my Proxmox install's CPU consumption (it is NOT meant to be a workstation, just a headless server). I had to use both of the suggested lines:

Code:
ACTION=="change",KERNEL=="loop*",OPTIONS:="nowatch"
ACTION=="add|change", KERNEL=="dm-*", OPTIONS:="nowatch"

After that, the CPU load dropped from 40% in performance mode to 8% on average.
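Putting the pieces from this thread together, the combined rule file would look roughly like this (using the corrected path mentioned above):
Code:
# /etc/udev/rules.d/90-fixdm.rules
# tell udev not to set inotify watches on device-mapper and loop devices
ACTION=="add|change", KERNEL=="dm-*", OPTIONS:="nowatch"
ACTION=="change", KERNEL=="loop*", OPTIONS:="nowatch"
followed by systemctl restart udev as described earlier.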
 
I added the loop rule as described above and it seems to work, but when I stop the LXC container and start it again, the load is the same as before. Do you also see this?
 
Hi @xiaolin0199,
yes, I have the exact same observation. Setting the udev rule reduces the CPU usage, but as soon as the LXC container is stopped and started again, the CPU usage jumps back up until I run
systemctl restart udev
 
I followed some of the instructions in this thread because I also had high CPU usage that was spinning up my fans, and I think I might have broken something related to udev: devices are no longer showing up properly in /dev/. I'm looking for /dev/ttyUSB* and /dev/serial, but these don't exist. I tried reverting the changes, but still can't get it working again. Does anyone know what might have happened?
 
