Config locked (backup) problem

TheForumTroll

Hello experts!

I have a CT that keeps getting stuck in "Config locked (backup)". I fixed it by unlocking it (hoping it was a fluke), but it happens again on every backup. I have no idea why, since the storage doesn't seem to be the problem (all other CTs/VMs run their backups without getting locked). I do see an error in the log, but it is for the CT before the one that hangs.
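
For reference, I cleared the lock like this (I assume the GUI unlock does the same thing):

Code:
pct unlock 104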

Log (note: 103 finishes its backup, then 104 starts, and then 103 gets errors; 104 never finishes):

Code:
Aug 09 04:07:13 pve vzdump[31420]: INFO: Finished Backup of VM 103 (00:01:46)
Aug 09 04:07:13 pve pvestatd[1350]: modified cpu set for lxc/102: 0-1,4-6,9-10,14-15,19-21
Aug 09 04:07:13 pve vzdump[31420]: INFO: Starting Backup of VM 104 (lxc)
Aug 09 04:07:18 pve kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Aug 09 04:07:18 pve kernel: fwbr103i0: port 2(veth103i0) entered blocking state
Aug 09 04:07:18 pve kernel: fwbr103i0: port 2(veth103i0) entered forwarding state
Aug 09 04:07:18 pve audit[3628]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3628 comm="(install)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:18 pve kernel: audit: type=1400 audit(1628474838.865:52): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3628 comm="(install)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve audit[3638]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3638 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve kernel: audit: type=1400 audit(1628474839.057:53): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3638 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve audit[3654]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3654 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve kernel: audit: type=1400 audit(1628474839.161:54): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3654 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:20 pve audit[3817]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3817 comm="(mysqld)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:20 pve kernel: audit: type=1400 audit(1628474840.017:55): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3817 comm="(mysqld)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve kernel: audit: type=1400 audit(1628474851.109:56): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4111 comm="(an-start)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve audit[4111]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4111 comm="(an-start)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve audit[4118]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4118 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve kernel: audit: type=1400 audit(1628474851.289:57): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4118 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:08:00 pve systemd[1]: Starting Proxmox VE replication runner...
Aug 09 04:08:01 pve systemd[1]: pvesr.service: Succeeded.


I don't know if the above error is related to the failing backup, but there are no other errors at all in the log. The backup storage is a local drive, so no network is involved, etc. (backups are copied to external storage later).

Any idea how to find the fault?

EDIT: I forgot to write that there is no log for this backup in /var/lib/vz/dump/, only an empty .tmp dir. The last successful log has no errors:

Code:
2021-08-01 17:10:56 INFO: Starting Backup of VM 104 (lxc)
2021-08-01 17:10:56 INFO: status = running
2021-08-01 17:10:56 INFO: CT Name: elk
2021-08-01 17:10:56 INFO: including mount point rootfs ('/') in backup
2021-08-01 17:10:56 INFO: backup mode: snapshot
2021-08-01 17:10:56 INFO: ionice priority: 7
2021-08-01 17:10:56 INFO: create storage snapshot 'vzdump'
2021-08-01 17:10:57 INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-104-2021_08_01-17_10_56.tar.zst'
2021-08-01 17:13:03 INFO: Total bytes written: 3224299520 (3.1GiB, 25MiB/s)
2021-08-01 17:13:03 INFO: archive file size: 1.22GB
2021-08-01 17:13:03 INFO: cleanup temporary 'vzdump' snapshot
2021-08-01 17:13:04 INFO: Finished Backup of VM 104 (00:02:08)
 
please post the full task log of the backup, and the corresponding timespan from the journal (journalctl --since ... --until ...)
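
e.g. for the window around the failing backup:

Code:
journalctl --since "2021-08-09 04:00:00" --until "2021-08-09 04:30:00"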
 
I'm not sure what you mean by "full task log of the backup" if it isn't the /var/lib/vz/dump/ logs (where there is none)? Is there another log of the backup process?
 
Manual backups (and snapshots) work just fine (I just tested both). It is only the automated backup that fails/hangs.
 
yes, the backup runs as a task, and that task is visible in the GUI and the log is retrievable from there as well ;) (or with 'pvenode task list' / 'pvenode task log ..', but that is likely more cumbersome).
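
for example (the UPID argument comes from the task list output; this one is purely illustrative):

Code:
pvenode task list
pvenode task log UPID:pve:00006125:00C468A8:61124BC5:vzdump:104:root@pam: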
 
I hope these are the correct logs :)

Code:
INFO: starting new backup job: vzdump --compress zstd --mode stop --quiet 1 --storage local --all 1 --mailnotification failure
INFO: Starting Backup of VM 100 (lxc)
INFO: Backup started at 2021-08-09 04:00:04
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: piholeTemplate
INFO: including mount point rootfs ('/') in backup
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-100-2021_08_09-04_00_04.tar.zst'
INFO: Total bytes written: 1223342080 (1.2GiB, 36MiB/s)
INFO: archive file size: 376MB
INFO: removing backup 'local:backup/vzdump-lxc-100-2021_06_21-04_00_03.tar.zst'
INFO: Finished Backup of VM 100 (00:00:33)
INFO: Backup finished at 2021-08-09 04:00:37
INFO: Starting Backup of VM 101 (lxc)
INFO: Backup started at 2021-08-09 04:00:37
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: fileserver
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/shared') from backup (not a volume)
INFO: stopping vm
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-101-2021_08_09-04_00_37.tar.zst'
INFO: Total bytes written: 1425940480 (1.4GiB, 24MiB/s)
INFO: archive file size: 535MB
INFO: removing backup 'local:backup/vzdump-lxc-101-2021_06_21-04_00_34.tar.zst'
INFO: restarting vm
INFO: guest is online again after 164 seconds
INFO: Finished Backup of VM 101 (00:02:44)
INFO: Backup finished at 2021-08-09 04:03:21
INFO: Starting Backup of VM 102 (lxc)
INFO: Backup started at 2021-08-09 04:03:21
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: rent
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/shared') from backup (not a volume)
INFO: stopping vm
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-102-2021_08_09-04_03_21.tar.zst'
INFO: Total bytes written: 2039408640 (1.9GiB, 19MiB/s)
INFO: archive file size: 766MB
INFO: removing backup 'local:backup/vzdump-lxc-102-2021_06_21-04_01_31.tar.zst'
INFO: restarting vm
INFO: guest is online again after 126 seconds
INFO: Finished Backup of VM 102 (00:02:06)
INFO: Backup finished at 2021-08-09 04:05:27
INFO: Starting Backup of VM 103 (lxc)
INFO: Backup started at 2021-08-09 04:05:27
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: lamp
INFO: including mount point rootfs ('/') in backup
INFO: stopping vm
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-103-2021_08_09-04_05_27.tar.zst'
INFO: Total bytes written: 1250877440 (1.2GiB, 17MiB/s)
INFO: archive file size: 415MB
INFO: removing backup 'local:backup/vzdump-lxc-103-2021_06_21-04_03_14.tar.zst'
INFO: restarting vm
INFO: guest is online again after 106 seconds
INFO: Finished Backup of VM 103 (00:01:46)
INFO: Backup finished at 2021-08-09 04:07:13
INFO: Starting Backup of VM 104 (lxc)
INFO: Backup started at 2021-08-09 04:07:13
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: elk
INFO: including mount point rootfs ('/') in backup
INFO: stopping vm
command 'lxc-stop -n 104 --nokill --timeout 600' failed: exit code 1
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-104-2021_08_09-04_07_13.tar.zst'
INFO: Total bytes written: 3372861440 (3.2GiB, 26MiB/s)
INFO: archive file size: 1.24GB
no lock found trying to remove 'backup'  lock
INFO: restarting vm
INFO: guest is online again after 25316 seconds
INFO: Finished Backup of VM 104 (07:01:56)
INFO: Backup finished at 2021-08-09 11:09:09
INFO: Starting Backup of VM 110 (lxc)
INFO: Backup started at 2021-08-09 11:09:09
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: plex
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/shared') from backup (not a volume)
INFO: stopping vm
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-110-2021_08_09-11_09_09.tar.zst'
INFO: Total bytes written: 6141050880 (5.8GiB, 17MiB/s)
INFO: archive file size: 4.16GB
INFO: removing backup 'local:backup/vzdump-lxc-110-2021_06_14-04_05_52.tar.zst'
INFO: restarting vm
INFO: guest is online again after 378 seconds
INFO: Finished Backup of VM 110 (00:06:18)
INFO: Backup finished at 2021-08-09 11:15:27
INFO: Backup job finished successfully
TASK OK

That... took a long time to fail. The "INFO: Backup finished at 2021-08-09 11:09:09" is me unlocking and rebooting, I guess? At least it is about the same time as when I did that (after I posted here).



Code:
Aug 09 04:07:13 pve vzdump[31420]: INFO: Starting Backup of VM 104 (lxc)
Aug 09 04:07:18 pve kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Aug 09 04:07:18 pve kernel: fwbr103i0: port 2(veth103i0) entered blocking state
Aug 09 04:07:18 pve kernel: fwbr103i0: port 2(veth103i0) entered forwarding state
Aug 09 04:07:18 pve audit[3628]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3628 comm="(install)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:18 pve kernel: audit: type=1400 audit(1628474838.865:52): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3628 comm="(install)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve audit[3638]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3638 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve kernel: audit: type=1400 audit(1628474839.057:53): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3638 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve audit[3654]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3654 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:19 pve kernel: audit: type=1400 audit(1628474839.161:54): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3654 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:20 pve audit[3817]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3817 comm="(mysqld)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:20 pve kernel: audit: type=1400 audit(1628474840.017:55): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=3817 comm="(mysqld)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve kernel: audit: type=1400 audit(1628474851.109:56): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4111 comm="(an-start)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve audit[4111]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4111 comm="(an-start)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve audit[4118]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4118 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:07:31 pve kernel: audit: type=1400 audit(1628474851.289:57): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-103_</var/lib/lxc>" name="/dev/" pid=4118 comm="(sh)" flags="ro, nosuid, noexec, remount, strictatime"
Aug 09 04:08:00 pve systemd[1]: Starting Proxmox VE replication runner...
Aug 09 04:08:01 pve systemd[1]: pvesr.service: Succeeded.
Aug 09 04:08:01 pve systemd[1]: Started Proxmox VE replication runner.
Aug 09 04:09:00 pve systemd[1]: Starting Proxmox VE replication runner...
Aug 09 04:09:01 pve systemd[1]: pvesr.service: Succeeded.
Aug 09 04:09:01 pve systemd[1]: Started Proxmox VE replication runner.
Aug 09 04:10:00 pve systemd[1]: Starting Proxmox VE replication runner...
Aug 09 04:10:01 pve systemd[1]: pvesr.service: Succeeded.
 
so the container that fails to stop is 104... does stopping that one work normally? anything special about that container? could you post its config?
 
Yes, it is 104 that hangs. It stops normally, and it also backs up fine if I do so manually. I don't believe there's anything special about it. I just installed the Ubuntu 20.04.2 LTS CT and then pfelk. I took a snapshot after the OS install, one after installing pfelk, and one later. I believe the only container changes are a resize of the disk and of swap.
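
The resize was done with pct (a sketch; I don't remember the exact commands I typed):

Code:
pct resize 104 rootfs 18G    # grow the root disk
pct set 104 --swap 8196      # adjust swap (in MB)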


Code:
root@pve:~# cat /etc/pve/lxc/104.conf

arch: amd64
cores: 4
hostname: elk
memory: 8196
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=22:B2:B9:EE:78:0E,ip=192.168.0.104/24,type=veth
onboot: 1
ostype: ubuntu
parent: After_fail
rootfs: local-lvm:vm-104-disk-0,mountoptions=noatime,size=18G
swap: 8196
unprivileged: 1

[After_fail]
#After weekly backup failed!
arch: amd64
cores: 4
hostname: elk
memory: 8196
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=22:B2:B9:EE:78:0E,ip=192.168.0.104/24,type=veth
onboot: 1
ostype: ubuntu
parent: pfelk
rootfs: local-lvm:vm-104-disk-0,mountoptions=noatime,size=18G
snaptime: 1628503758
swap: 8196
unprivileged: 1

[Installed]
#Installed and updated. Ready for pfelk.
arch: amd64
cores: 4
hostname: elk
memory: 8196
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=22:B2:B9:EE:78:0E,ip=192.168.0.104/24,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-104-disk-0,mountoptions=noatime,size=8G
snaptime: 1627824760
swap: 1024
unprivileged: 1

[pfelk]
#pfelk installed (but is dhcp dashboard working?)
arch: amd64
cores: 4
hostname: elk
memory: 8196
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=22:B2:B9:EE:78:0E,ip=192.168.0.104/24,type=veth
ostype: ubuntu
parent: Installed
rootfs: local-lvm:vm-104-disk-0,mountoptions=noatime,size=8G
snaptime: 1627826772
swap: 1024
unprivileged: 1

The only error I could find for 104 was the "command 'lxc-stop -n 104 --nokill --timeout 600' failed: exit code 1" line.
 
do you also have the task log of the manual backup? note that vzdump attempts a clean shutdown, not a stop operation (in case you tried the latter manually ;))
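
in case you want to reproduce by hand what vzdump's stop mode does, it should be roughly the first of these, not the second:

Code:
pct shutdown 104 --timeout 600   # clean shutdown, what vzdump attempts
pct stop 104                     # hard stop, kills the container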
 
Sure! :) And yes, I did test Stop manually and by using the failed command from the log (lxc-stop -n 104 --nokill --timeout 600), but I'm guessing those two methods are the same behind the scenes.

I'm not sure it actually failed to stop, unless I'm misunderstanding your question. It did stop (I noticed something was wrong because the container was down). It seems it failed to start again(?), but the backup did go through, as it is there with the other backups.

Manual backup log (from after the scheduled one failed):
Code:
INFO: starting new backup job: vzdump 104 --remove 0 --node pve --storage local --compress zstd --mode snapshot
INFO: Starting Backup of VM 104 (lxc)
INFO: Backup started at 2021-08-09 12:15:02
INFO: status = running
INFO: CT Name: elk
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
  Logical volume "snap_vm-104-disk-0_vzdump" created.
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-104-2021_08_09-12_15_01.tar.zst'
INFO: Total bytes written: 3387197440 (3.2GiB, 24MiB/s)
INFO: archive file size: 1.24GB
INFO: cleanup temporary 'vzdump' snapshot
  Logical volume "snap_vm-104-disk-0_vzdump" successfully removed
INFO: Finished Backup of VM 104 (00:02:20)
INFO: Backup finished at 2021-08-09 12:17:21
INFO: Backup job finished successfully
TASK OK

I don't understand how a manual backup could work but not a scheduled one, but I've made a new schedule with a backup in a few minutes to see what happens.
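
If I understand it right, the scheduled job ends up as a line in /etc/pve/vzdump.cron; the exact schedule below is just a guess on my part:

Code:
# /etc/pve/vzdump.cron
0 4 * * *           root vzdump 104 --quiet 1 --mode stop --storage local --compress zstd --mailnotification failure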
 
the manual backup is using snapshot mode, the scheduled one was using stop mode. in snapshot mode, the container is not turned off ;)
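
you can see it directly in the two command lines from the task logs above:

Code:
vzdump 104 --remove 0 --node pve --storage local --compress zstd --mode snapshot                 # manual run: CT keeps running
vzdump --compress zstd --mode stop --quiet 1 --storage local --all 1 --mailnotification failure  # scheduled job: each CT is shut down first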
 
I completely forgot that it was set to stop mode. Now the new scheduled backup is using snapshot mode too, which will of course show nothing new. I'll try another schedule, hopefully with the correct settings this time :oops:
 
I'm not sure it actually failed to stop, unless I'm misunderstanding your question. It did stop (I noticed something was wrong because the container was down). It seems it failed to start again(?)

it looks to me like it was half-stopped and in some kind of limbo state. if you can trigger it again, it would be interesting to collect some info before rebooting (replace CTID accordingly)

Code:
find /sys/fs/cgroup/lxc/CTID/ -iname '*.procs' -or -iname '*.threads' -exec cat {} \; | sort -u | while read p; do echo "PID $p"; cat /proc/$p/cgroup; cat /proc/$p/cmdline | tr '\0' ' '; echo; echo;  done

this should list all processes still belonging to the container (if there are any).
 
It seems to be stuck in the same position again now:

Code:
INFO: starting new backup job: vzdump 104 --quiet 1 --node pve --storage local --mailnotification failure --compress zstd --all 0 --mode stop
INFO: Starting Backup of VM 104 (lxc)
INFO: Backup started at 2021-08-10 11:49:59
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: elk
INFO: including mount point rootfs ('/') in backup
INFO: stopping vm
command 'lxc-stop -n 104 --nokill --timeout 600' failed: exit code 1


Is /sys/fs/cgroup/lxc/104/ the correct path? Because there are no CT IDs under /sys/fs/cgroup/ at all:

Code:
root@pve:~# find /sys/fs/cgroup/lxc/104/ -iname '*.procs' -or -iname '*.threads' -exec cat {} \; | sort -u | while read p; do echo "PID $p"; cat /proc/$p/cgroup; cat /proc/$p/cmdline | tr '\0' ' '; echo; echo;  done
find: ‘/sys/fs/cgroup/lxc/104/’: No such file or directory

root@pve:~# ls -l /sys/fs/cgroup/
total 0
dr-xr-xr-x 7 root root  0 Aug  9 00:04 blkio
lrwxrwxrwx 1 root root 11 Aug  9 00:04 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Aug  9 00:04 cpuacct -> cpu,cpuacct
dr-xr-xr-x 7 root root  0 Aug  9 00:04 cpu,cpuacct
dr-xr-xr-x 4 root root  0 Aug  9 00:04 cpuset
dr-xr-xr-x 7 root root  0 Aug  9 00:04 devices
dr-xr-xr-x 4 root root  0 Aug  9 00:04 freezer
dr-xr-xr-x 4 root root  0 Aug  9 00:04 hugetlb
dr-xr-xr-x 7 root root  0 Aug  9 00:04 memory
lrwxrwxrwx 1 root root 16 Aug  9 00:04 net_cls -> net_cls,net_prio
dr-xr-xr-x 4 root root  0 Aug  9 00:04 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Aug  9 00:04 net_prio -> net_cls,net_prio
dr-xr-xr-x 4 root root  0 Aug  9 00:04 perf_event
dr-xr-xr-x 7 root root  0 Aug  9 00:04 pids
dr-xr-xr-x 4 root root  0 Aug  9 00:04 rdma
dr-xr-xr-x 7 root root  0 Aug  9 00:04 systemd
dr-xr-xr-x 7 root root  0 Aug  9 00:04 unified
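
EDIT: I'm guessing that on this setup the per-container directories sit below the individual controller hierarchies instead? This is how I'd search for them (just a guess on my part):

Code:
find /sys/fs/cgroup -maxdepth 3 -type d -name '*104*'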
 
could you provide pveversion -v as well? sorry, should have asked earlier for that ;)
 
ps faxl | grep -A30 104 might also be interesting.
 
Code:
root@pve:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-5
pve-kernel-helper: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1
 
This one needs to be split over multiple posts to fit within the post limit!

Code:
root@pve:~# ps faxl | grep -A30 104
1     0   104     2  20   0      0     0 smpboo S    ?          0:00  \_ [ksoftirqd/15]
1     0   106     2   0 -20      0     0 worker I<   ?          0:00  \_ [kworker/15:0H-kblockd]
5     0   107     2  20   0      0     0 smpboo S    ?          0:00  \_ [cpuhp/16]
5     0   108     2 -51   -      0     0 smpboo S    ?          0:00  \_ [idle_inject/16]
1     0   109     2 -100  -      0     0 smpboo S    ?          0:01  \_ [migration/16]
1     0   110     2  20   0      0     0 smpboo S    ?          0:00  \_ [ksoftirqd/16]
1     0   112     2   0 -20      0     0 worker I<   ?          0:00  \_ [kworker/16:0H-kblockd]
5     0   113     2  20   0      0     0 smpboo S    ?          0:00  \_ [cpuhp/17]
5     0   114     2 -51   -      0     0 smpboo S    ?          0:00  \_ [idle_inject/17]
1     0   115     2 -100  -      0     0 smpboo S    ?          0:01  \_ [migration/17]
1     0   116     2  20   0      0     0 smpboo S    ?          0:11  \_ [ksoftirqd/17]
1     0   118     2   0 -20      0     0 worker I<   ?          0:00  \_ [kworker/17:0H-kblockd]
5     0   119     2  20   0      0     0 smpboo S    ?          0:00  \_ [cpuhp/18]
5     0   120     2 -51   -      0     0 smpboo S    ?          0:00  \_ [idle_inject/18]
1     0   121     2 -100  -      0     0 smpboo S    ?          0:01  \_ [migration/18]
1     0   122     2  20   0      0     0 smpboo S    ?          0:00  \_ [ksoftirqd/18]
1     0   124     2   0 -20      0     0 worker I<   ?          0:00  \_ [kworker/18:0H-kblockd]
5     0   125     2  20   0      0     0 smpboo S    ?          0:00  \_ [cpuhp/19]
5     0   126     2 -51   -      0     0 smpboo S    ?          0:00  \_ [idle_inject/19]
1     0   127     2 -100  -      0     0 smpboo S    ?          0:00  \_ [migration/19]
1     0   128     2  20   0      0     0 smpboo S    ?          0:00  \_ [ksoftirqd/19]
1     0   130     2   0 -20      0     0 worker I<   ?          0:00  \_ [kworker/19:0H-kblockd]
5     0   131     2  20   0      0     0 smpboo S    ?          0:00  \_ [cpuhp/20]
5     0   132     2 -51   -      0     0 smpboo S    ?          0:00  \_ [idle_inject/20]
1     0   133     2 -100  -      0     0 smpboo S    ?          0:01  \_ [migration/20]
1     0   134     2  20   0      0     0 smpboo S    ?          0:01  \_ [ksoftirqd/20]
1     0   136     2   0 -20      0     0 worker I<   ?          0:00  \_ [kworker/20:0H-kblockd]
5     0   137     2  20   0      0     0 smpboo S    ?          0:00  \_ [cpuhp/21]
5     0   138     2 -51   -      0     0 smpboo S    ?          0:00  \_ [idle_inject/21]
1     0   139     2 -100  -      0     0 smpboo S    ?          0:01  \_ [migration/21]
1     0   140     2  20   0      0     0 smpboo S    ?          0:00  \_ [ksoftirqd/21]
--
0     0  1104     2  20   0   2272   744 pipe_w S    ?          0:11  \_ bpfilter_umh
1     0 31427     2   0 -20      0     0 rescue I<   ?          0:00  \_ [kdmflush]
1     0   335     2  20   0      0     0 kmmpd  S    ?          0:01  \_ [kmmpd-dm-8]
1     0   336     2  20   0      0     0 kjourn S    ?          0:02  \_ [jbd2/dm-8-8]
1     0   337     2   0 -20      0     0 rescue I<   ?          0:00  \_ [ext4-rsv-conver]
1     0  1911     2  20   0      0     0 kmmpd  S    ?          0:01  \_ [kmmpd-dm-7]
1     0  1912     2  20   0      0     0 kjourn S    ?          0:02  \_ [jbd2/dm-7-8]
1     0  1913     2   0 -20      0     0 rescue I<   ?          0:00  \_ [ext4-rsv-conver]
1     0  3307     2  20   0      0     0 kmmpd  S    ?          0:01  \_ [kmmpd-dm-6]
1     0  3308     2  20   0      0     0 kjourn S    ?          0:00  \_ [jbd2/dm-6-8]
1     0  3309     2   0 -20      0     0 rescue I<   ?          0:00  \_ [ext4-rsv-conver]
1     0 28649     2  20   0      0     0 kmmpd  S    ?          0:00  \_ [kmmpd-dm-9]
1     0 28650     2  20   0      0     0 kjourn S    ?          0:00  \_ [jbd2/dm-9-8]
1     0 28651     2   0 -20      0     0 rescue I<   ?          0:00  \_ [ext4-rsv-conver]
1     0 32601     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/5:1-cgroup_destroy]
1     0 32606     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/23:0-events]
1     0 32611     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/15:0-rcu_gp]
1     0 32728     2  20   0      0     0 kmmpd  S    ?          0:00  \_ [kmmpd-dm-10]
1     0 32729     2  20   0      0     0 kjourn S    ?          0:00  \_ [jbd2/dm-10-8]
1     0 32730     2   0 -20      0     0 rescue I<   ?          0:00  \_ [ext4-rsv-conver]
1     0 20835     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/21:2-rcu_par_gp]
1     0 21541     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/7:0-rcu_par_gp]
1     0 28662     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/11:1-events]
1     0 28670     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/9:0-events]
1     0 32467     2   0 -20      0     0 rescue I<   ?          0:00  \_ [kdmflush]
1     0  1498     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/7:2-mm_percpu_wq]
1     0  1500     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/21:1-events]
1     0  1507     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/15:2-events]
1     0  1508     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/17:1-mm_percpu_wq]
1     0  1635     2  20   0      0     0 worker I    ?          0:00  \_ [kworker/5:2-mm_percpu_wq]
1     0  1636     2  20   0      0     0 kmmpd  S    ?          0:00  \_ [kmmpd-dm-12]
--
4   104   930     1  20   0   9244  4440 ep_pol Ss   ?          0:05 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
4     0   932     1  20   0 372472  3048 futex_ Ssl  ?          7:13 /usr/bin/lxcfs /var/lib/lxcfs
4     0   934     1  20   0  12228  5600 hrtime Ss   ?          0:00 /usr/sbin/smartd -n
1     0   937     1  20   0   4088   136 ep_pol Ss   ?          0:00 /usr/sbin/qmeventd /var/run/qmeventd.sock
1     0   945     1  20   0   6724  2396 do_wai S    ?          0:03 /bin/bash /usr/sbin/ksmtuned
0     0 25554   945  20   0   5256   752 hrtime S    ?          0:00  \_ sleep 60
4     0  1057     1  20   0   7292  2116 ep_pol Ss   ?          0:00 /usr/lib/x86_64-linux-gnu/lxc/lxc-monitord --daemon
4   110  1065     1  20   0  34084  7796 poll_s Ss   ?          2:26 /usr/sbin/snmpd -Lsd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux mteTrigger mteTriggerConf -f -p /run/snmpd.pid
1     0  1108     1  20   0   6888   280 hrtime Ss   ?          0:03 /sbin/iscsid
5     0  1109     1  10 -10   7392  5000 poll_s S<Ls ?          0:00 /sbin/iscsid
4     0  1116     1  20   0  15848  5456 poll_s Ss   ?          0:00 /usr/sbin/sshd -D
4     0  1124     1  20   0   5608  1644 poll_s Ss+  tty1       0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
1     0  1234     1  20   0 732764  3664 poll_s Ssl  ?          0:44 /usr/bin/rrdcached -B -b /var/lib/rrdcached/db/ -j /var/lib/rrdcached/journal/ -p /var/run/rrdcached.pid -l unix:/var/run/rrdcached.sock
5     0  1310     1  20   0 608760 67016 futex_ Ssl  ?          2:16 /usr/bin/pmxcfs
5     0  1314     1  20   0  43468  3704 ep_pol Ss   ?          0:00 /usr/lib/postfix/sbin/master -w
4   107  1318  1314  20   0  43872  5984 ep_pol S    ?          0:00  \_ qmgr -l -t unix -u
4   107  3551  1314  20   0  43832  7868 ep_pol S    ?          0:00  \_ pickup -l -t unix -u -c
4     0  1344     1  20   0   8500  2736 hrtime Ss   ?          0:00 /usr/sbin/cron -f
1     0  1349     1  20   0 306200 31812 hrtime Ss   ?          8:17 pve-firewall
5     0  1350     1  20   0 304780 38744 hrtime Ss   ?         36:46 pvestatd
1     0  1374     1  20   0 355584  4076 hrtime Ss   ?          0:01 pvedaemon
5     0 16919  1374  20   0 364108 43424 poll_s S    ?          0:12  \_ pvedaemon worker
5     0 17073  1374  20   0 363504 35620 poll_s S    ?          0:04  \_ pvedaemon worker
5     0 23433  1374  20   0 363636 32996 poll_s S    ?          0:00  \_ pvedaemon worker
5     0 25405 23433  20   0 363504 25656 poll_s Ss   ?          0:00      \_ task UPID:pve:0000633D:00D7FF5F:61127DEA:vncshell::root@pam:
0     0 25406 25405  20   0  95616  7072 ep_pol S    ?          0:00          \_ /usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root
 
Code:
4     0 25409 25406  20   0   6920  3348 do_wai Ss   pts/8      0:00              \_ /bin/login -f
4     0 25433 25409  20   0   7648  4488 do_wai S    pts/8      0:00                  \_ -bash
4     0 25778 25433  20   0  11224  3672 -      R+   pts/8      0:00                      \_ ps faxl
0     0 25779 25433  20   0   6072   892 -      S+   pts/8      0:00                      \_ grep -A30 104
1     0  1383     1  20   0 338144 10752 hrtime Ss   ?          0:17 pve-ha-crm
0    33  1384     1  20   0 357032 35272 hrtime Ss   ?          0:04 pveproxy
1    33 28544  1384  20   0 365660 40228 poll_s S    ?          0:05  \_ pveproxy worker
1    33 13083  1384  20   0 369460 44980 poll_s S    ?          0:03  \_ pveproxy worker
1    33 17567  1384  20   0 365548 39352 poll_s S    ?          0:01  \_ pveproxy worker
0    33  1390     1  20   0  70608 21912 hrtime Ss   ?          0:02 spiceproxy
1    33 27698  1390  20   0  70864 18544 poll_s S    ?          0:01  \_ spiceproxy worker
1     0  1392     1  20   0 337744 20420 hrtime Ss   ?          0:33 pve-ha-lrm
4     0   324     1  20   0   7372  4292 ep_pol Ss   ?          0:22 /usr/bin/lxc-start -F -n 101
4     0   347   324  20   0  57152  2372 ep_pol Ss   ?          0:00  \_ /sbin/init
4     0   576   347  20   0  46096  4312 ep_pol Ss   ?          0:00      \_ /lib/systemd/systemd-journald
4     0   584   347  20   0  99000   116 poll_s Ss   ?          0:00      \_ /sbin/lvmetad -f
1     0   591   347  20   0  20824     0 poll_s Ss   ?          0:00      \_ /usr/sbin/blkmapd
4     0   642   347  20   0  49868  1008 poll_s Ss   ?          0:00      \_ /sbin/rpcbind -f -w
4     0   643   347  20   0  29636  1296 hrtime Ss   ?          0:00      \_ /usr/sbin/cron -f
4     0   644   347  20   0 250112   936 poll_s Ssl  ?          0:00      \_ /usr/sbin/rsyslogd -n
4     0   768   347  20   0  55136  1756 poll_s Ss   ?          0:59      \_ /usr/sbin/snmpd -Lsd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux mteTrigger mteTriggerConf -f
4     0   793   347  20   0  14300   752 poll_s Ss+  pts/0      0:00      \_ /sbin/agetty --noclear --keep-baud console 115200,38400,9600 linux
4     0   795   347  20   0  14300   912 poll_s Ss+  pts/1      0:00      \_ /sbin/agetty --noclear --keep-baud tty2 115200,38400,9600 linux
4     0   797   347  20   0  14300   860 poll_s Ss+  pts/0      0:00      \_ /sbin/agetty --noclear --keep-baud tty1 115200,38400,9600 linux
5     0   846   347  20   0  41412  2056 hrtime S    ?          0:03      \_ /usr/bin/monit -c /etc/monit/monitrc
5   106   847   347  20   0  37100     4 poll_s Ss   ?          0:00      \_ /usr/bin/shellinaboxd -q --background=/var/run/shellinaboxd.pid -c /var/lib/shellinabox -p 12319 -u shellinabox -g shellinabox --user-css White On Black:+/etc/shellinabox/options-enabled/00+White On Black.css,Black on White:-/etc/shellinabox/options-enabled/00_Black on White.css;Color Terminal:+/etc/shellinabox/options-enabled/01+Color Terminal.css,Monochrome:-/etc/shellinabox/options-enabled/01_Monochrome.css --no-beep --disable-ssl --localhost-only
5   106   848   847  20   0  37204   456 unix_s S    ?          0:00      |   \_ /usr/bin/shellinaboxd -q --background=/var/run/shellinaboxd.pid -c /var/lib/shellinabox -p 12319 -u shellinabox -g shellinabox --user-css White On Black:+/etc/shellinabox/options-enabled/00+White On Black.css,Black on White:-/etc/shellinabox/options-enabled/00_Black on White.css;Color Terminal:+/etc/shellinabox/options-enabled/01+Color Terminal.css,Monochrome:-/etc/shellinabox/options-enabled/01_Monochrome.css --no-beep --disable-ssl --localhost-only
4     0   850   347  20   0  69956   672 poll_s Ss   ?          0:00      \_ /usr/sbin/sshd -D
1   108   852   347  20   0 119472    12 poll_s Ss   ?          0:00      \_ /usr/bin/stunnel4 /etc/stunnel/stunnel.conf
5     0   982   347  20   0  93960  2864 poll_s Ss   ?          0:04      \_ /usr/sbin/apache2 -k start
5    33 31086   982  20   0  93456  2436 skb_wa S    ?          0:00      |   \_ /usr/sbin/apache2 -k start
5    33 31088   982  20   0 383112  2004 pipe_w Sl   ?          0:06      |   \_ /usr/sbin/apache2 -k start
5    33 31089   982  20   0 383112  2004 pipe_w Sl   ?          0:06      |   \_ /usr/sbin/apache2 -k start
5     0  1099   347  20   0  81192  1276 ep_pol Ss   ?          0:00      \_ /usr/lib/postfix/sbin/master -w
--
5 100104 28972 28660 20   0   9092  1396 poll_s Ss   ?          0:00      \_ /usr/bin/shellinaboxd -q --background=/var/run/shellinaboxd.pid -c /var/lib/shellinabox -p 12319 -u shellinabox -g shellinabox --user-css White On Black:+/etc/shellinabox/options-enabled/00+White On Black.css,Black on White:-/etc/shellinabox/options-enabled/00_Black on White.css;Color Terminal:+/etc/shellinabox/options-enabled/01+Color Terminal.css,Monochrome:-/etc/shellinabox/options-enabled/01_Monochrome.css --no-beep --disable-ssl --localhost-only
5 100104 28973 28972 20   0   9092   660 unix_s S    ?          0:00      |   \_ /usr/bin/shellinaboxd -q --background=/var/run/shellinaboxd.pid -c /var/lib/shellinabox -p 12319 -u shellinabox -g shellinabox --user-css White On Black:+/etc/shellinabox/options-enabled/00+White On Black.css,Black on White:-/etc/shellinabox/options-enabled/00_Black on White.css;Color Terminal:+/etc/shellinabox/options-enabled/01+Color Terminal.css,Monochrome:-/etc/shellinabox/options-enabled/01_Monochrome.css --no-beep --disable-ssl --localhost-only
4 100000 29046 28660 20   0  15848  3540 poll_s Ss   ?          0:00      \_ /usr/sbin/sshd -D
5 100000 29112 28660 20   0  43468  2240 ep_pol Ss   ?          0:00      \_ /usr/lib/postfix/sbin/master -w
4 100107 29114 29112 20   0  43544  2624 ep_pol S    ?          0:00      |   \_ qmgr -l -t unix -u
4 100107 11183 29112 20   0  43492  7880 ep_pol S    ?          0:00      |   \_ pickup -l -t unix -u -c
1 100000 29215 28660 20   0  27488 13960 poll_s Ss   ?          0:00      \_ /usr/bin/perl /usr/share/webmin/miniserv.pl /etc/webmin/miniserv.conf
1     0 30090     1  20   0   2400  1528 poll_s Ss   ?          0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole103 -r winch -z lxc-console -n 103 -e -1
0     0 30091 30090  20   0   7284  4092 ep_pol Ss+  pts/6      0:00  \_ lxc-console -n 103 -e -1
1     0 30712     1  20   0   2400  1544 poll_s Ss   ?          0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole110 -r winch -z lxc-console -n 110 -e -1
0     0 30713 30712  20   0   7284  4204 ep_pol Ss+  pts/7      0:00  \_ lxc-console -n 110 -e -1
4     0 32721     1  20   0   7372  4396 ep_pol Ss   ?          0:04 /usr/bin/lxc-start -F -n 104
4 100000 32737 32721 20   0 169448  9040 ep_pol Ss   ?          0:03  \_ /sbin/init
4 100000  425 32737  20   0  43272 16632 ep_pol Ss   ?          0:06      \_ /lib/systemd/systemd-journald
4 100100  490 32737  20   0   7372  2284 ep_pol Ss   ?          0:00      \_ /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
4 100998  498 32737  39  19 9322876 1042296 futex_ SNsl ?      10:14      \_ /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -Djruby.regexp.interruptible=true -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -Dlog4j2.isThreadContextMapInheritable=true -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/checker-compat-qual-2.0.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-logging-1.2.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.1.3.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-24.1.1-jre.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.10.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.10.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.10.8.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.10.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/javassist-3.26.0-GA.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.16.0.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-1.2-api-2.14.0.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.14.0.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-jcl-2.14.0.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.14.0.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/reflections-0.9.11.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.30.jar org.logstash.Logstash --path.settings /etc/logstash
4   100  6480     1  20   0  93080  6108 ep_pol Ssl  ?          0:00 /lib/systemd/systemd-timesyncd
4     0  6517     1  20   0  37812 14856 ep_pol Ss   ?          0:00 /lib/systemd/systemd-journald
4     0 16329     1  20   0  22440  4888 ep_pol Ss   ?          0:02 /lib/systemd/systemd-udevd
5     0 24869     1  20   0 371040 38792 unix_s Ss   ?          0:00 task UPID:pve:00006125:00C468A8:61124BC5:vzdump:104:root@pam:
4     0 25173     1  20   0 166188  4548 cv_wai Ssl  ?          0:00 /usr/sbin/zed -F
1     0 28289     1  20   0   2400  1492 poll_s Ss   ?          0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole101 -r winch -z lxc-console -n 101 -e -1
0     0 28290 28289  20   0   7284  4148 ep_pol Ss+  pts/10     0:00  \_ lxc-console -n 101 -e -1
1     0 28333     1  20   0   2400  1540 poll_s Ss   ?          0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole102 -r winch -z lxc-console -n 102 -e -1
0     0 28334 28333  20   0   7284  4160 ep_pol Ss+  pts/11     0:00  \_ lxc-console -n 102 -e -1
4     0  1624     1  20   0   7372  4416 ep_pol Ss   ?          0:03 /usr/bin/lxc-start -F -n 105
4 100000 1646  1624  20   0 171384 11932 ep_pol Ss   ?          0:04  \_ /lib/systemd/systemd --system --deserialize 15
0 100000 1795  1646  20   0 233388  6864 poll_s Ssl  ?          0:00      \_ /usr/lib/accountsservice/accounts-daemon
4 100000 1796  1646  20   0   3792  2564 hrtime Ss   ?          0:00      \_ /usr/sbin/cron -f
4 100100 1798  1646  20   0   8596  4824 ep_pol Ss   ?          0:00      \_ /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
0 100000 1802  1646  20   0  26900 17480 poll_s Ss   ?          0:00      \_ /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
4 100101 1804  1646  20   0 151480  4556 poll_s Ssl  ?          0:00      \_ /usr/sbin/rsyslogd -n -iNONE
4 100000 1805  1646  20   0  17624  6636 ep_pol Ss   ?          0:00      \_ /lib/systemd/systemd-logind
4 100000 1827  1646  20   0   2616  1828 poll_s Ss+  pts/5      0:00      \_ /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
4 100000 1828  1646  20   0   5792  3988 do_wai Ss   pts/0      0:00      \_ /bin/login -p --
4 100000 2239  1828  20   0   5144  4536 do_wai S    pts/0      0:00      |   \_ -bash
0 100000 17787 2239  20   0   5772  3176 poll_s S+   pts/0      0:24      |       \_ iperf3 -s
4 100000 1829  1646  20   0   2616  1760 poll_s Ss+  pts/1      0:00      \_ /sbin/agetty -o -p -- \u --noclear --keep-baud tty2 115200,38400,9600 linux
5 100000 2038  1646  20   0  38392  4688 ep_pol Ss   ?          0:00      \_ /usr/lib/postfix/sbin/master -w
4 100102 2040  2038  20   0  38700  6248 ep_pol S    ?          0:00      |   \_ qmgr -l -t unix -u
4 100102 17494 2038  20   0  38656  6104 ep_pol S    ?          0:00      |   \_ pickup -l -t unix -u -c
4 100000 9668  1646  20   0  13076  7136 poll_s Ss   ?          0:00      \_ sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
4 100105 9723  1646  20   0  19624  6828 ep_pol Ss   ?          0:00      \_ /lib/systemd/systemd-networkd
4 100106 9728  1646  20   0  24972 12684 ep_pol Ss   ?          0:00      \_ /lib/systemd/systemd-resolved
4 100000 9732  1646  20   0  35784 11116 ep_pol Ss   ?          0:00      \_ /lib/systemd/systemd-journald
1     0  2150     1  20   0   2400  1576 poll_s Ss   ?          0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole105 -r winch -z lxc-console -n 105 -e -1
0     0  2151  2150  20   0   7284  4244 ep_pol Ss+  pts/9      0:00  \_ lxc-console -n 105 -e -1
4     0 25421     1  20   0  21668  9164 ep_pol Ss   ?          0:00 /lib/systemd/systemd --user
5     0 25422 25421  20   0 172436  3560 do_sig S    ?          0:00  \_ (sd-pam)

104 is still stuck in "config locked (backup)", by the way.


EDIT: Strangely, 104 is reachable via the console, but the installed services (like pfelk) are down.
 
yeah, the container is still running but with just 4 processes (left?) - systemd, journald, dbus and a java process for logstash. I suspect the latter does not exit for some reason. you could try killing that process to see if the container exits then (or is stoppable without requiring a full reboot).
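
using the host-side PID of that java process from your ps output (498 here; double-check it before killing), something like:

Code:
kill 498          # SIGTERM first
sleep 10
kill -9 498       # only if it ignores the SIGTERM
pct status 104    # check whether the container exited now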
 
