Container crash broke the node

drjaymz@

I have a 5-node system where only nodes 1-3 have any workload: a mixture of imported KVM VMs and new containers.

This morning something wasn't right with one of the containers, which is responsible for a couple of intranet websites. As we couldn't SSH into the container, we went to check whether it was running, which we did via one of the other nodes.

[screenshot: the node and its guests greyed out in the GUI, shown only as numbers with no names]

Oddly enough, the container that had died was 123, but the entire node was messed up. I was not able to get onto the node, by SSH or otherwise, but using the GUI I was able to issue a reboot of that node.
The reboot took about 8 minutes; looking at the syslog, that is because it was cleanly shutting down all the workloads for the reboot, after which everything sprang back to life just fine.

So it looks like 123 crashed and partially took out the node.
The node was recently updated to 8.1.4 and the backup server was also recently updated to 3.1.4, so we are not talking about anything old: everything was last fully updated under a week ago and has been rebooted onto the newly updated kernel.

I checked all the VMs and containers and ONLY 123 shows a syslog gap (other than the reboot): it basically died at 1:01 am and the reboot was at 7:33. There is nothing in the container logs; it just looks like it froze (which it also does while being backed up).
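Something along these lines should show the gap from the host, assuming the CT's journal is readable via pct exec (the exact commands are my reconstruction, not a transcript):

Code:
# host kernel messages around the freeze window
journalctl -k --since "2024-03-13 00:50" --until "2024-03-13 07:40"
# the CT's own journal for the same window, checked after the reboot
pct exec 123 -- journalctl --since "2024-03-13 00:50" --until "2024-03-13 07:40"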

In the Proxmox syslog, even after the container froze, I can't see anything too unusual; the node carried on running, but there is an odd kernel message:

Code:
2024-03-13T01:02:05.776254+00:00 proxmoxy3 systemd[1]: Stopped user-runtime-dir@0.service - User Runtime Directory /run/user/0.
2024-03-13T01:02:05.776819+00:00 proxmoxy3 systemd[1]: Removed slice user-0.slice - User Slice of UID 0.
2024-03-13T01:02:05.776882+00:00 proxmoxy3 systemd[1]: user-0.slice: Consumed 4.450s CPU time.
2024-03-13T01:02:06.679004+00:00 proxmoxy3 kernel: [388842.869608]       Tainted: P           O       6.5.13-1-pve #1
2024-03-13T01:02:06.679019+00:00 proxmoxy3 kernel: [388842.870281] Call Trace:
2024-03-13T01:02:06.679020+00:00 proxmoxy3 kernel: [388842.871114]  ? __pfx_nfs_do_lookup_revalidate+0x10/0x10 [nfs]
2024-03-13T01:02:06.679021+00:00 proxmoxy3 kernel: [388842.871739]  ? __pfx_var_wake_function+0x10/0x10
2024-03-13T01:02:06.679021+00:00 proxmoxy3 kernel: [388842.872684]  filename_lookup+0xe4/0x200
2024-03-13T01:02:06.679023+00:00 proxmoxy3 kernel: [388842.872864]  ? __pfx_zpl_put_link+0x10/0x10 [zfs]
2024-03-13T01:02:06.682810+00:00 proxmoxy3 kernel: [388842.873468]  vfs_statx+0xa1/0x180
2024-03-13T01:02:06.682814+00:00 proxmoxy3 kernel: [388842.873645]  vfs_fstatat+0x58/0x80
2024-03-13T01:02:06.682815+00:00 proxmoxy3 kernel: [388842.873819]  __do_sys_newfstatat+0x44/0x90
2024-03-13T01:02:06.682815+00:00 proxmoxy3 kernel: [388842.874005]  __x64_sys_newfstatat+0x1c/0x30
2024-03-13T01:02:06.682816+00:00 proxmoxy3 kernel: [388842.874532]  ? exit_to_user_mode_prepare+0x39/0x190
2024-03-13T01:02:06.682817+00:00 proxmoxy3 kernel: [388842.874707]  ? syscall_exit_to_user_mode+0x37/0x60
2024-03-13T01:04:07.511182+00:00 proxmoxy3 kernel: [388963.702989]       Tainted: P           O       6.5.13-1-pve #1
2024-03-13T01:04:07.511201+00:00 proxmoxy3 kernel: [388963.704802]  <TASK>
2024-03-13T01:04:07.514838+00:00 proxmoxy3 kernel: [388963.707534]  ? __pfx_var_wake_function+0x10/0x10
2024-03-13T01:04:07.518821+00:00 proxmoxy3 kernel: [388963.710719]  ? strncpy_from_user+0x50/0x170
2024-03-13T01:04:07.518825+00:00 proxmoxy3 kernel: [388963.711123]  vfs_statx+0xa1/0x180
2024-03-13T01:04:07.518826+00:00 proxmoxy3 kernel: [388963.711522]  vfs_fstatat+0x58/0x80
2024-03-13T01:04:07.518827+00:00 proxmoxy3 kernel: [388963.711907]  __do_sys_newfstatat+0x44/0x90
2024-03-13T01:04:07.518827+00:00 proxmoxy3 kernel: [388963.712699]  do_syscall_64+0x58/0x90
2024-03-13T01:04:07.518829+00:00 proxmoxy3 kernel: [388963.713490]  ? exit_to_user_mode_prepare+0x39/0x190
2024-03-13T01:04:07.518829+00:00 proxmoxy3 kernel: [388963.714289]  ? do_syscall_64+0x67/0x90
2024-03-13T01:04:07.522873+00:00 proxmoxy3 kernel: [388963.714688]  ? do_syscall_64+0x67/0x90
2024-03-13T01:04:07.522877+00:00 proxmoxy3 kernel: [388963.715082]  ? do_syscall_64+0x67/0x90
2024-03-13T01:04:07.522877+00:00 proxmoxy3 kernel: [388963.715471]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
2024-03-13T01:04:07.522878+00:00 proxmoxy3 kernel: [388963.716683] RAX: ffffffffffffffda RBX: 00007ffc140329b0 RCX: 0000759d41e91d3e
2024-03-13T01:06:08.342827+00:00 proxmoxy3 kernel: [389084.535889]       Tainted: P           O       6.5.13-1-pve #1
2024-03-13T01:06:08.342845+00:00 proxmoxy3 kernel: [389084.536762]  <TASK>
2024-03-13T01:06:08.342846+00:00 proxmoxy3 kernel: [389084.538335]  lookup_fast+0x80/0x100
2024-03-13T01:06:08.342847+00:00 proxmoxy3 kernel: [389084.538870]  filename_lookup+0xe4/0x200
2024-03-13T01:06:08.346809+00:00 proxmoxy3 kernel: [389084.540776]  ? syscall_exit_to_user_mode+0x37/0x60
2024-03-13T01:06:08.346813+00:00 proxmoxy3 kernel: [389084.541280]  ? do_syscall_64+0x67/0x90
2024-03-13T01:06:08.346814+00:00 proxmoxy3 kernel: [389084.542156] RDX: 00007ffc140328a0 RSI: 000057c8e7645b40 RDI: 00000000ffffff9c
2024-03-13T01:06:08.346814+00:00 proxmoxy3 kernel: [389084.542330] RBP: 000057c8e7643af0 R08: 000057c8e7646d90 R09: 0000000000000000
2024-03-13T01:06:08.346815+00:00 proxmoxy3 kernel: [389084.542862]  </TASK>
2024-03-13T01:08:09.174886+00:00 proxmoxy3 kernel: [389205.368701] INFO: task php8.1:531710 blocked for more than 1087 seconds.
2024-03-13T01:08:09.174901+00:00 proxmoxy3 kernel: [389205.369014]       Tainted: P           O       6.5.13-1-pve #1
2024-03-13T01:08:09.174902+00:00 proxmoxy3 kernel: [389205.369255] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2024-03-13T01:08:09.174902+00:00 proxmoxy3 kernel: [389205.369439] task:php8.1          state:D stack:0     pid:531710 ppid:22715  flags:0x00000006
2024-03-13T01:08:09.174903+00:00 proxmoxy3 kernel: [389205.369625] Call Trace:
2024-03-13T01:08:09.174904+00:00 proxmoxy3 kernel: [389205.369802]  <TASK>
2024-03-13T01:08:09.174905+00:00 proxmoxy3 kernel: [389205.369976]  __schedule+0x3fc/0x1440
2024-03-13T01:08:09.174906+00:00 proxmoxy3 kernel: [389205.370157]  ? nfs_access_get_cached+0xd2/0x280 [nfs]
2024-03-13T01:08:09.174906+00:00 proxmoxy3 kernel: [389205.370359]  ? __pfx_nfs_do_lookup_revalidate+0x10/0x10 [nfs]
2024-03-13T01:08:09.174907+00:00 proxmoxy3 kernel: [389205.370554]  schedule+0x63/0x110
2024-03-13T01:08:09.174910+00:00 proxmoxy3 kernel: [389205.370725]  __nfs_lookup_revalidate+0x107/0x140 [nfs]
2024-03-13T01:08:09.174920+00:00 proxmoxy3 kernel: [389205.370912]  ? __pfx_var_wake_function+0x10/0x10
2024-03-13T01:08:09.174920+00:00 proxmoxy3 kernel: [389205.371080]  nfs_lookup_revalidate+0x15/0x30 [nfs]
2024-03-13T01:08:09.174921+00:00 proxmoxy3 kernel: [389205.371260]  lookup_fast+0x80/0x100
2024-03-13T01:08:09.174921+00:00 proxmoxy3 kernel: [389205.371425]  walk_component+0x2c/0x190
2024-03-13T01:08:09.174921+00:00 proxmoxy3 kernel: [389205.371591]  path_lookupat+0x67/0x1a0
2024-03-13T01:08:09.174923+00:00 proxmoxy3 kernel: [389205.371755]  filename_lookup+0xe4/0x200
2024-03-13T01:08:09.174923+00:00 proxmoxy3 kernel: [389205.371918]  ? __pfx_zpl_put_link+0x10/0x10 [zfs]
2024-03-13T01:08:09.174924+00:00 proxmoxy3 kernel: [389205.372250]  ? strncpy_from_user+0x50/0x170
2024-03-13T01:08:09.174924+00:00 proxmoxy3 kernel: [389205.372413]  vfs_statx+0xa1/0x180
2024-03-13T01:08:09.174924+00:00 proxmoxy3 kernel: [389205.372594]  vfs_fstatat+0x58/0x80
2024-03-13T01:08:09.178810+00:00 proxmoxy3 kernel: [389205.373117]  do_syscall_64+0x58/0x90
2024-03-13T01:08:09.178813+00:00 proxmoxy3 kernel: [389205.373816]  ? do_syscall_64+0x67/0x90
2024-03-13T01:08:09.178814+00:00 proxmoxy3 kernel: [389205.374507] RIP: 0033:0x759d41e91d3e
2024-03-13T01:08:09.178814+00:00 proxmoxy3 kernel: [389205.374706] RSP: 002b:00007ffc14032808 EFLAGS: 00000246 ORIG_RAX: 0000000000000106
2024-03-13T01:08:09.178815+00:00 proxmoxy3 kernel: [389205.375236] RBP: 000057c8e7643af0 R08: 000057c8e7646d90 R09: 0000000000000000
2024-03-13T01:08:09.178815+00:00 proxmoxy3 kernel: [389205.375770]  </TASK>
2024-03-13T01:10:10.006843+00:00 proxmoxy3 kernel: [389326.202389]       Tainted: P           O       6.5.13-1-pve #1
2024-03-13T01:10:10.006857+00:00 proxmoxy3 kernel: [389326.204266]  <TASK>
2024-03-13T01:10:10.014804+00:00 proxmoxy3 kernel: [389326.213244]  ? exit_to_user_mode_prepare+0x39/0x190
2024-03-13T01:10:10.014808+00:00 proxmoxy3 kernel: [389326.213656]  ? syscall_exit_to_user_mode+0x37/0x60
2024-03-13T01:10:10.019079+00:00 proxmoxy3 kernel: [389326.216617] RAX: ffffffffffffffda RBX: 00007ffc140329b0 RCX: 0000759d41e91d3e
2024-03-13T01:10:10.022811+00:00 proxmoxy3 kernel: [389326.218375] R13: 000057c8e7645b40 R14: 0000759d3fa15e10 R15: 000057c8e7645b40
2024-03-13T01:10:10.022814+00:00 proxmoxy3 kernel: [389326.218819]  </TASK>
2024-03-13T01:15:04.051044+00:00 proxmoxy3 systemd[1]: Created slice user-0.slice - User Slice of UID 0.
2024-03-13T01:15:04.091042+00:00 proxmoxy3 systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
2024-03-13T01:15:04.096422+00:00 proxmoxy3 systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
2024-03-13T01:15:04.097741+00:00 proxmoxy3 systemd[1]: Starting user@0.service - User Manager for UID 0...
2024-03-13T01:15:04.324844+00:00 proxmoxy3 systemd[3882606]: Queued start job for default target default.target.

This looks like stack trace information, but I could be barking up the wrong tree here. The CT in question runs PHP and has NFS mounts, the latter being the reason it is a privileged container.
That container wasn't due to back up at 1 am; that may just have been what MY PC last saw before it went into standby.
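If it happens again, a SysRq dump of blocked tasks from the node might capture more than the periodic hung-task messages do. This assumes SysRq is enabled and the console still responds; it is not something I have tried here:

Code:
echo 1 > /proc/sys/kernel/sysrq   # enable all SysRq functions
echo w > /proc/sysrq-trigger      # dump every task stuck in uninterruptible (D) state
dmesg | tail -n 200               # the resulting stack traces land in the kernel ring buffer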

In particular, I'd like to know if I could have done something other than reboot the node, as all the other guests were running OK.
The nodes replicate to each other via a dedicated replication network.

So... I don't have any other clues as to where to look. I actually don't think it was backing up; I think it just crashed and put the node in an unstable state.

If you can think of any logs I might be able to get to find more information, please let me know.
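These are the places I can think of myself, in case it helps anyone suggest more (paths and units are the standard PVE ones as far as I know):

Code:
journalctl -b -1 -p warning                                       # previous boot, warnings and above
journalctl -u pvestatd -u pvedaemon -u pveproxy --since "01:00"   # the daemons behind the GUI and API
ls /var/log/vzdump/                                               # per-guest backup task logs, if any exist
less /var/log/syslog                                              # plain-text syslog, if rsyslog is installed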
 
My colleague has informed me that he could connect to node 3 and get to the web GUI, but the node and the machines were shown only as numbers with no names, exactly as above. I only tried SSH onto the bad node and got "connection refused". Even though he could get to the web interface with the greyed-out machines, you couldn't do anything, as the web interface showed toasts along the lines of 'connection refused'.
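For what it's worth, when guests turn grey with no names, these are the services I would poke first next time (a guess based on the symptom; we could not run anything while it was down):

Code:
systemctl status pvestatd pve-cluster corosync   # pvestatd feeds the GUI status; if it hangs, everything greys out
pvecm status                                     # quorum / cluster membership
journalctl -u pvestatd -b                        # errors from the status daemon this boot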
 
I moved the CT to a new node and it has happened again today on that node, bringing it down.
I can get onto the console of the node as root.
 
You said this happens when you back it up... How big are the disks attached to it that you are trying to back up? And how exactly are you backing it up? Which mode do you use?
 
Code:
root@proxmoxy5:~# pct list
hangs
root@proxmoxy5:~# pct stop 123
command 'lxc-stop -n 123 --kill' failed: received interrupt
received interrupt
root@proxmoxy5:~# pct destroy 123
There is a replication job '123-3' for guest '123' - Please remove that first.
root@proxmoxy5:~# lxc-info 123
hangs....

I grepped for any replication jobs; there were none.
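In case it helps anyone hitting the same error, this is roughly where that replication job would have to show up (the job id '123-3' is taken from the error above):

Code:
pvesr list                      # cluster-wide list of replication jobs
pvesr status --guest 123        # state of any jobs for this guest
cat /etc/pve/replication.cfg    # the raw config the error message is based on
# if a stale job really did exist, something like this should clear it:
# pvesr delete 123-3 --force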
 
You said this happens when you back it up... How big are the disks attached to it that you are trying to back up? And how exactly are you backing it up? Which mode do you use?
I don't think it is backing up at the time. I use PBS with snapshot mode, as I have done on this container for the last 12 months. Only since 8.1.4 was installed have we been having an issue.
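For completeness, the mode is also visible in the job definition and task log on the node (the grep pattern is just an example):

Code:
grep -A 12 "^vzdump" /etc/pve/jobs.cfg   # cluster-wide backup jobs, including mode and storage
less /var/log/vzdump/lxc-123.log         # last vzdump task log for this CT, if present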
 
Resorted to rebooting the node.
There's obviously a bug here in the latest Proxmox.

And now I have completely lost control of the node:

Code:
System is going down. Unprivileged users are not permitted to log in anymore. For technical details, see pam_nologin(8).


X11 forwarding request failed on channel 0
System is going down. Unprivileged users are not permitted to log in anymore. For technical details, see pam_nologin(8).


Linux proxmoxy5 6.5.13-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.13-1 (2024-02-05T13:50Z) x86_64


The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.


Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Mar 14 07:44:28 2024 from 192.168.6.64
 
It must have executed something on LXC 123, because it is locked... I am sure you know already, but you can remove the lock by executing pct unlock 123, if that still works given that the entire node is not functioning properly... If I remember correctly, I also had an issue with my LXC when backing up using mode snapshot: it usually worked fine but sometimes didn't... I changed it to mode stop and everything has been fine since then.
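Roughly what I mean, with example names (the storage id below is just a placeholder):

Code:
pct unlock 123                             # clear a stale lock, if the node still responds
vzdump 123 --mode stop --storage my-pbs    # one-off backup in stop mode; 'my-pbs' is an example storage id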
 
It must have executed something on LXC 123, because it is locked... I am sure you know already, but you can remove the lock by executing pct unlock 123, if that still works given that the entire node is not functioning properly... If I remember correctly, I also had an issue with my LXC when backing up using mode snapshot: it usually worked fine but sometimes didn't... I changed it to mode stop and everything has been fine since then.
The command just hangs.

The issue you are referring to is another bug, which has been talked about since 2020: that's the one where the filesystem becomes unwritable to the guest even though it's still mounted, and you cannot recover from that until you reboot the VM - but at least you can reboot the VM. People have suggested it's the fs-thaw that fails, but they have been looking in the wrong place: fs-thaw fails BECAUSE the fs failed, not the other way around. As far as I know it's a bug in ZFS.

This issue is different because the entire node is taken down, which is much more of a problem. It means you cannot trust Proxmox with a production workload unless you can guarantee that a problem on one guest doesn't bork the entire host in a way where you cannot migrate guests off the node, HA doesn't do anything, and you cannot get control back - that's the most serious of issues.
 
The syslog inside the container indicates nothing unusual, but it stops at 06:30:02.
This and the previous incident suggest that the replication, which takes place every 15 minutes, is probably the cause.
The backup ran at 5:40 and completed OK.
This container is privileged because it has two mounted NFS locations, but that is the only unusual thing about it.
Otherwise it's an Ubuntu 22.04 container running nginx and PHP 8.2.16 (cli) (built: Mar 7 2024 08:55:56) (NTS).
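This is how I am trying to line the 15-minute replication runs up with the hang (today's time window; the state file path is the standard one as far as I know):

Code:
pvesr status                                                 # last sync time and state per job
journalctl -u pvescheduler --since "06:00" --until "06:45"   # replication runs are driven by pvescheduler on PVE 8
grep 123 /var/lib/pve-manager/pve-replication-state.json     # raw per-guest replication state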


The host is up to date as far as I know:

Dell PowerEdge 650
64 x Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz (2 sockets), 128 GB RAM
running 5 x 960 GB SSDs with ZFS.

Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2
 
Today's entry:
Code:
Mar 14 06:26:24 proxmoxy5 kernel: INFO: task php8.1:588542 blocked for more than 120 seconds.
Mar 14 06:26:24 proxmoxy5 kernel:       Tainted: P           O       6.5.13-1-pve #1
Mar 14 06:26:24 proxmoxy5 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 14 06:26:24 proxmoxy5 kernel: task:php8.1          state:D stack:0     pid:588542 ppid:3630115 flags:0x00000002
Mar 14 06:26:24 proxmoxy5 kernel: Call Trace:
Mar 14 06:26:24 proxmoxy5 kernel:  <TASK>
Mar 14 06:26:24 proxmoxy5 kernel:  __schedule+0x3fc/0x1440
Mar 14 06:26:24 proxmoxy5 kernel:  ? nfs_do_access+0x62/0x290 [nfs]
Mar 14 06:26:24 proxmoxy5 kernel:  ? __pfx_nfs_do_lookup_revalidate+0x10/0x10 [nfs]
Mar 14 06:26:24 proxmoxy5 kernel:  schedule+0x63/0x110
Mar 14 06:26:24 proxmoxy5 kernel:  __nfs_lookup_revalidate+0x107/0x140 [nfs]
Mar 14 06:26:24 proxmoxy5 kernel:  ? __pfx_var_wake_function+0x10/0x10
Mar 14 06:26:24 proxmoxy5 kernel:  nfs_lookup_revalidate+0x15/0x30 [nfs]
Mar 14 06:26:24 proxmoxy5 kernel:  lookup_fast+0x80/0x100
Mar 14 06:26:24 proxmoxy5 kernel:  path_openat+0x108/0x1180
Mar 14 06:26:24 proxmoxy5 kernel:  do_filp_open+0xaf/0x170
Mar 14 06:26:24 proxmoxy5 kernel:  ? __pfx_zpl_put_link+0x10/0x10 [zfs]
Mar 14 06:26:24 proxmoxy5 kernel:  do_sys_openat2+0xb3/0xe0
Mar 14 06:26:24 proxmoxy5 kernel:  __x64_sys_openat+0x6c/0xa0
Mar 14 06:26:24 proxmoxy5 kernel:  do_syscall_64+0x58/0x90
Mar 14 06:26:24 proxmoxy5 kernel:  ? __x64_sys_rt_sigaction+0xb8/0x120
Mar 14 06:26:24 proxmoxy5 kernel:  ? exit_to_user_mode_prepare+0x39/0x190
Mar 14 06:26:24 proxmoxy5 kernel:  ? syscall_exit_to_user_mode+0x37/0x60
Mar 14 06:26:24 proxmoxy5 kernel:  ? do_syscall_64+0x67/0x90
Mar 14 06:26:24 proxmoxy5 kernel:  ? do_syscall_64+0x67/0x90
Mar 14 06:26:24 proxmoxy5 kernel:  ? do_syscall_64+0x67/0x90
Mar 14 06:26:24 proxmoxy5 kernel:  ? sysvec_apic_timer_interrupt+0x4b/0xd0
Mar 14 06:26:24 proxmoxy5 kernel:  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Mar 14 06:26:24 proxmoxy5 kernel: RIP: 0033:0x7a7aafc2c5b4
Mar 14 06:26:24 proxmoxy5 kernel: RSP: 002b:00007ffde6037590 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
Mar 14 06:26:24 proxmoxy5 kernel: RAX: ffffffffffffffda RBX: 00005fc2e9344750 RCX: 00007a7aafc2c5b4
Mar 14 06:26:24 proxmoxy5 kernel: RDX: 0000000000000000 RSI: 00005fc2e9484630 RDI: 00000000ffffff9c
Mar 14 06:26:24 proxmoxy5 kernel: RBP: 00005fc2e9484630 R08: 0000000000000000 R09: 0000000000000001
Mar 14 06:26:24 proxmoxy5 kernel: R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000
Mar 14 06:26:24 proxmoxy5 kernel: R13: 00007a7aad65f700 R14: 0000000000000000 R15: 00005fc2e947f800
Mar 14 06:26:24 proxmoxy5 kernel:  </TASK>

This is looking to me like a process that is trying to talk to the NFS share, and that is what is breaking everything.
What I know is that it wasn't a problem until Proxmox 8.1.4.
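To rule the share itself out, I am checking it along these lines (the server hostname is obviously a placeholder):

Code:
pct exec 123 -- sh -c 'mount | grep nfs'   # mount options in use inside the CT (hard/soft, timeo, vers)
pct exec 123 -- nfsstat -c                 # client call / retransmission counters, if nfs-common is installed
ping -c 3 nfs-server.example.lan           # placeholder hostname; substitute the real NFS server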

Shortly before that, we see this:

Code:
Mar 14 06:09:02 proxmoxy5 audit[712015]: AVC apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxc-123_</var/lib/lxc>" name="/run/systemd/unit-root/" pid=712015 comm="(ionclean)" srcname="/" flags="rw, rbind"
Mar 14 06:09:02 proxmoxy5 kernel: audit: type=1400 audit(1710396542.405:83): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxc-123_</var/lib/lxc>" name="/run/systemd/unit-root/" pid=712015 comm="(ionclean)" srcname="/" flags="rw, rbind"

I don't know if this is related, but it looks like AppArmor is blocking a mount attempt from inside the container, so I am not sure whether I am supposed to do something about it.
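The comm="(ionclean)" looks to me like it could be the truncated php sessionclean service inside the CT (a guess on my part), so I am checking along these lines:

Code:
pct exec 123 -- systemctl list-timers phpsessionclean.timer   # runs periodically on Debian/Ubuntu PHP installs
pct exec 123 -- systemctl cat phpsessionclean.service         # its sandboxing options are likely what trigger the rbind
aa-status | grep lxc-123                                      # which AppArmor profile the CT runs under (needs apparmor-utils)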
 
