Hello,
I have an issue with "udev", which I detected because of a high CPU load:
Code:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1197 root 20 0 394632 14272 10392 S 45.8 0.0 102:40.37 udisksd
1171 message+ 20 0 195716 180724 3960 S 34.2 0.3 74:33.31 dbus-daemon
217957 root 20 0 22448 3724 2188 S 30.6 0.0 68:45.21 systemd-udevd
218074 root 20 0 22448 3724 2188 R 30.2 0.0 72:12.04 systemd-udevd
1 root 20 0 165116 11416 7720 S 25.9 0.0 55:50.06 systemd
218073 root 20 0 22448 3724 2188 S 13.6 0.0 1:17.91 systemd-udevd
1196 root 20 0 177652 7268 6384 S 11.6 0.0 25:35.39 systemd-logind
775 root 20 0 22400 5480 4048 S 11.0 0.0 24:55.80 systemd-udevd
238836 root 20 0 16432 10028 7456 S 10.6 0.0 1:25.03 systemd
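Since dbus-daemon and udisksd are near the top as well, I assume they are just relaying and reacting to whatever systemd-udevd is busy with; watching the system bus should confirm that (filtering on the UDisks2 sender is an assumption on my part):
Code:
# watch system-bus signals emitted by udisks2 (sender filter is a guess)
dbus-monitor --system "type='signal',sender='org.freedesktop.UDisks2'"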
After some research, I found the "udevadm monitor" command, which shows this:
Code:
[...]
KERNEL[53174.145591] change /devices/virtual/block/dm-15 (block)
UDEV [53174.148426] change /devices/virtual/block/dm-16 (block)
KERNEL[53174.150561] change /devices/virtual/block/dm-66 (block)
UDEV [53174.151355] change /devices/virtual/block/dm-15 (block)
KERNEL[53174.154048] change /devices/virtual/block/dm-16 (block)
KERNEL[53174.157705] change /devices/virtual/block/dm-15 (block)
UDEV [53174.159710] change /devices/virtual/block/dm-16 (block)
UDEV [53174.163470] change /devices/virtual/block/dm-15 (block)
KERNEL[53174.163596] change /devices/virtual/block/dm-16 (block)
UDEV [53174.165404] change /devices/virtual/block/dm-66 (block)
[...]
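To see what these "change" events actually carry, and roughly how frequent they are, something like this should work (the 10-second window is an arbitrary choice):
Code:
# show the full properties of the udev events for block devices
udevadm monitor --udev --property --subsystem-match=block
# rough count of kernel "change" events over 10 seconds
timeout 10 udevadm monitor --kernel --subsystem-match=block | grep -c change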
Block devices dm-15, dm-16 and dm-66 are LXC containers:
Code:
lvdisplay|awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'
dm-15 vm-103-disk-0
dm-16 vm-104-disk-0
dm-66 vm-122-disk-0
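The dm-N to LV mapping can also be cross-checked with dmsetup and the /dev/mapper symlinks (the minor number in the dmsetup output is the N in dm-N):
Code:
# device-mapper names with their (major:minor)
dmsetup ls
# the /dev/mapper entries are symlinks to the dm-N nodes
ls -l /dev/mapper | grep vm-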
I tried rebooting to see if that would resolve the issue; it seemed to at first, but the problem was back this morning.
I tried cloning LXC 104, but I immediately got the same issue with its clone, LXC 122.
My containers were automatically backed up during the night, so maybe that is the trigger.
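To check whether the timing matches, I assume the backup run and the start of the udev activity can be compared in the journal, along these lines:
Code:
# when did the nightly vzdump backup run?
journalctl --since "yesterday" | grep -i vzdump
# what was udev doing around that time?
journalctl -u systemd-udevd --since "yesterday"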
I only have 3 LXC containers (now); all the others are "classic" VMs.
I recently upgraded from V6 to V7 without an export/import, and I don't know when the issue first occurred.
Thank you for your help,
Code:
# pveversion --verbose
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-6
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.4.140-1-pve: 5.4.140-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-14
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1