[SOLVED] OPNsense keeps crashing

showiproute

Well-Known Member
Dear team,

I have a virtualised OPNsense firewall running on one of my Proxmox servers.
Unfortunately it kept crashing randomly. After the newest 23.7 upgrade it is totally unusable, as it crashes instantly.

Some information/logs:

Code:
root@proxmox1:~# cat /etc/pve/qemu-server/111.conf
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
efidisk0: SSD_Intel:vm-111-disk-0,efitype=4m,size=1M
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=6.1.0,ctime=1640064212
name: OPNsense1
net0: virtio=56:58:98:15:C2:0B,bridge=vmbr0,queues=8,tag=500
net1: virtio=F6:F8:2A:F6:59:AB,bridge=vmbr0,queues=8,tag=10
net2: virtio=5A:50:41:2C:9B:9A,bridge=vmbr0,queues=8,tag=450
net3: virtio=86:68:06:86:F1:83,bridge=vmbr0,queues=8,tag=20
net4: virtio=12:15:71:0D:A1:98,bridge=vmbr0,queues=8,tag=11
net5: virtio=AA:3E:3E:0B:BB:13,bridge=vmbr0,queues=8,tag=30
net6: virtio=CE:CD:0B:DB:5B:43,bridge=vmbr0,queues=8,tag=90
numa: 1
onboot: 1
ostype: l26
scsi0: SSD_Intel:vm-111-disk-1,discard=on,size=120G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=d9c123c9-cae8-4333-9f29-f5d5faa416a0
sockets: 1
startup: order=1
vga: qxl
vmgenid: cc2220e3-91ba-4887-872a-e4a1ef0dc830

Code:
root@proxmox1:~# strace -c -p $(cat /var/run/qemu-server/111.pid)

strace: Process 1121544 attached

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 84.49   16.552830          94    175488     22591 ppoll
  4.33    0.847487           3    230366           write
  2.79    0.545752          34     15632       767 futex
  2.30    0.450457           4    106414       134 read
  2.19    0.429361           6     69742        30 ioctl
  1.84    0.361391      120463         3           wait4
  0.98    0.191928           4     47277           recvmsg
  0.95    0.186217           7     25080           io_uring_enter
  0.06    0.012237          11      1109         2 sendmsg
  0.04    0.007806          32       241           close
  0.01    0.001860         930         2           clone
  0.00    0.000696           7        95           openat
  0.00    0.000593           1       442           rt_sigprocmask
  0.00    0.000518           2       192           mmap
  0.00    0.000486           2       194           accept4
  0.00    0.000478           0       484           fcntl
  0.00    0.000372           3        97           eventfd2
  0.00    0.000311           2       146           mprotect
  0.00    0.000273           1       260           brk
  0.00    0.000192           3        56           tgkill
  0.00    0.000167          11        15           clone3
  0.00    0.000129           3        38           munmap
  0.00    0.000125           0       195           getsockname
  0.00    0.000112           2        56           getpid
  0.00    0.000102           3        30           newfstatat
  0.00    0.000067           1        58           madvise
  0.00    0.000052           1        31        20 access
  0.00    0.000015           1         8           gettid
  0.00    0.000013           1        13           lseek
  0.00    0.000010           3         3           rt_sigaction
  0.00    0.000005           2         2         1 setsockopt
  0.00    0.000000           0         2         1 pread64
  0.00    0.000000           0         3           dup2
  0.00    0.000000           0         2           socket
  0.00    0.000000           0         1           connect
  0.00    0.000000           0         1           shutdown
  0.00    0.000000           0         1           bind
  0.00    0.000000           0         1           listen
  0.00    0.000000           0         1         1 getpeername
  0.00    0.000000           0         1           socketpair
  0.00    0.000000           0         2           getsockopt
  0.00    0.000000           0         1           chdir
  0.00    0.000000           0         1         1 unlink
  0.00    0.000000           0         1           getrandom
------ ----------- ----------- --------- --------- ----------------
100.00   19.592042          29    673787     23548 total


Any idea why this keeps crashing?
 
Within journalctl I can see the following errors right after the VM crash:

Code:
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: unable to start vhost net: 24: falling back on userspace virtio
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: Error binding guest notifier: 24
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: unable to start vhost net: 24: falling back on userspace virtio
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: virtio_bus_set_host_notifier: unable to init event notifier: Too many open files (-24)
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: vhost VQ 1 notifier binding failed: 24
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: ../softmmu/memory.c:2608: memory_region_del_eventfd: Assertion `i != mr->ioeventfd_nb' failed.
Aug 03 11:25:10 proxmox1 kernel:  zd32: p1 p2 p3 p4
Aug 03 11:25:13 proxmox1 nut-monitor[5993]: Poll UPS [powerwalker@127.0.0.1] failed - Driver not connected
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 3(tap111i0) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 3(tap111i0) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 4(tap111i1) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 4(tap111i1) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 5(tap111i2) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 5(tap111i2) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 6(tap111i3) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 6(tap111i3) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 7(tap111i4) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 7(tap111i4) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 8(tap111i5) entered disabled state
Aug 03 11:25:13 proxmox1 kernel: vmbr0: port 8(tap111i5) entered disabled state
Aug 03 11:25:14 proxmox1 kernel: vmbr0: port 9(tap111i6) entered disabled state
Aug 03 11:25:14 proxmox1 kernel: vmbr0: port 9(tap111i6) entered disabled state
Aug 03 11:25:14 proxmox1 qmeventd[751913]: Starting cleanup for 111
Aug 03 11:25:14 proxmox1 ovs-vsctl[751938]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln111i0
Aug 03 11:25:14 proxmox1 ovs-vsctl[751938]: ovs|00002|db_ctl_base|ERR|no port named fwln111i0
Aug 03 11:25:14 proxmox1 ovs-vsctl[751939]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap111i0
Aug 03 11:25:14 proxmox1 ovs-vsctl[751939]: ovs|00002|db_ctl_base|ERR|no port named tap111i0
Aug 03 11:25:14 proxmox1 ovs-vsctl[751940]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln111i1
Aug 03 11:25:14 proxmox1 ovs-vsctl[751940]: ovs|00002|db_ctl_base|ERR|no port named fwln111i1
Aug 03 11:25:14 proxmox1 ovs-vsctl[751942]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap111i1
Aug 03 11:25:14 proxmox1 ovs-vsctl[751942]: ovs|00002|db_ctl_base|ERR|no port named tap111i1
Aug 03 11:25:14 proxmox1 ovs-vsctl[751943]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln111i4
Aug 03 11:25:14 proxmox1 ovs-vsctl[751943]: ovs|00002|db_ctl_base|ERR|no port named fwln111i4
Aug 03 11:25:14 proxmox1 ovs-vsctl[751944]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap111i4
Aug 03 11:25:14 proxmox1 ovs-vsctl[751944]: ovs|00002|db_ctl_base|ERR|no port named tap111i4
Aug 03 11:25:14 proxmox1 systemd[1]: 111.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit 111.scope has successfully entered the 'dead' state.
Aug 03 11:25:14 proxmox1 systemd[1]: 111.scope: Consumed 4min 52.989s CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit 111.scope completed and consumed the indicated resources.
Aug 03 11:25:14 proxmox1 ovs-vsctl[751945]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln111i6
Aug 03 11:25:14 proxmox1 ovs-vsctl[751945]: ovs|00002|db_ctl_base|ERR|no port named fwln111i6
Aug 03 11:25:14 proxmox1 ovs-vsctl[751946]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap111i6
Aug 03 11:25:14 proxmox1 ovs-vsctl[751946]: ovs|00002|db_ctl_base|ERR|no port named tap111i6
Aug 03 11:25:14 proxmox1 ovs-vsctl[751948]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln111i2
Aug 03 11:25:14 proxmox1 ovs-vsctl[751948]: ovs|00002|db_ctl_base|ERR|no port named fwln111i2
Aug 03 11:25:14 proxmox1 ovs-vsctl[751949]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap111i2
Aug 03 11:25:14 proxmox1 ovs-vsctl[751949]: ovs|00002|db_ctl_base|ERR|no port named tap111i2
Aug 03 11:25:14 proxmox1 ovs-vsctl[751950]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln111i5
Aug 03 11:25:14 proxmox1 ovs-vsctl[751950]: ovs|00002|db_ctl_base|ERR|no port named fwln111i5
Aug 03 11:25:14 proxmox1 ovs-vsctl[751951]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap111i5
Aug 03 11:25:14 proxmox1 ovs-vsctl[751951]: ovs|00002|db_ctl_base|ERR|no port named tap111i5
Aug 03 11:25:14 proxmox1 ovs-vsctl[751952]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln111i3
Aug 03 11:25:14 proxmox1 ovs-vsctl[751952]: ovs|00002|db_ctl_base|ERR|no port named fwln111i3
Aug 03 11:25:14 proxmox1 ovs-vsctl[751953]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap111i3
Aug 03 11:25:14 proxmox1 ovs-vsctl[751953]: ovs|00002|db_ctl_base|ERR|no port named tap111i3
 
Hi,
I have a virtualised OPNsense firewall running on one of my Proxmox servers.
Unfortunately it kept crashing randomly. After the newest 23.7 upgrade it is totally unusable, as it crashes instantly.
Please check /var/log/apt/history.log for what packages were updated then (kernel and QEMU should be most relevant).

With the earlier 5.15 kernel everything was running w/o any problems.
Can you try booting that kernel to see if it works there?
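If the old kernel is still installed on the host, something like this should let you boot it once for testing (a rough sketch, assuming a reasonably recent proxmox-boot-tool; the 5.15 version string is only an example, use whatever the list command shows on your system):

Code:
# list the kernels the boot tool knows about
proxmox-boot-tool kernel list
# pin a specific 5.15 kernel for the next boot only (example version), then reboot
proxmox-boot-tool kernel pin 5.15.108-1-pve --next-boot
reboot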

Within journalctl I can see the following errors right after the VM crash:

Code:
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: unable to start vhost net: 24: falling back on userspace virtio
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: Error binding guest notifier: 24
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: unable to start vhost net: 24: falling back on userspace virtio
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: virtio_bus_set_host_notifier: unable to init event notifier: Too many open files (-24)
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: vhost VQ 1 notifier binding failed: 24
Aug 03 11:25:10 proxmox1 QEMU[681780]: kvm: ../softmmu/memory.c:2608: memory_region_del_eventfd: Assertion `i != mr->ioeventfd_nb' failed.
Sounds like something goes wrong during setup of the vNICs. I will try to reproduce it and have a look at the code when I have time. Do you have many other VMs with many other vNICs running at the same time?
 
Hello @fiona

I updated my PVE server today to the newest released versions:
Code:
Start-Date: 2023-08-04  10:40:54
Commandline: apt upgrade
Install: proxmox-default-kernel:amd64 (1.0.0, automatic), proxmox-kernel-6.2.16-6-pve:amd64 (6.2.16-7, automatic), proxmox-kernel-6.2:amd64 (6.2.16-7, automatic)
Upgrade: libpve-rs-perl:amd64 (0.8.4, 0.8.5), pve-qemu-kvm:amd64 (8.0.2-3, 8.0.2-4), libpve-cluster-api-perl:amd64 (8.0.2, 8.0.3), libpve-guest-common-perl:amd64 (5.0.3, 5.0.4), pve-cluster:amd64 (8.0.2, 8.0.3), libproxmox-rs-perl:amd64 (0.3.0, 0.3.1), proxmox-ve:amd64 (8.0.1, 8.0.2), proxmox-backup-file-restore:amd64 (3.0.1-1, 3.0.2-1), libpve-access-control:amd64 (8.0.3, 8.0.4), proxmox-backup-client:amd64 (3.0.1-1, 3.0.2-1), pve-kernel-6.2:amd64 (8.0.4, 8.0.5), pve-manager:amd64 (8.0.3, 8.0.4), libpve-common-perl:amd64 (8.0.6, 8.0.7), proxmox-kernel-helper:amd64 (8.0.2, 8.0.3), libpve-cluster-perl:amd64 (8.0.2, 8.0.3)
End-Date: 2023-08-04  10:41:45

pveversion -v:
Code:
root@proxmox1:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
proxmox-kernel-6.2: 6.2.16-7
pve-kernel-6.2.16-5-pve: 6.2.16-6
ceph: 17.2.6-pve1+3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.4
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.7
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
openvswitch-switch: 3.1.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1


Unfortunately I have already upgraded to the newest PVE version and cut off all loose ends (= the 5.15 kernel from PVE 7).
Therefore I cannot use/boot it.

In general I have another, identical OPNsense VM running on my 2nd PVE server, which runs without any issues with the same virtual settings.

The only difference between the physical servers is the CPU:
PVE1 (which crashes) uses an AMD EPYC 7272 12-Core Processor
while PVE2 uses an Intel(R) Xeon(R) CPU E5-2637 v3

The funny thing is that on PVE2 the VM freezes at 100 % CPU, while on PVE1 the VM crashes...
 
Code:
Start-Date: 2023-08-04  10:40:54
Commandline: apt upgrade
Install: proxmox-default-kernel:amd64 (1.0.0, automatic), proxmox-kernel-6.2.16-6-pve:amd64 (6.2.16-7, automatic), proxmox-kernel-6.2:amd64 (6.2.16-7, automatic)
Upgrade: libpve-rs-perl:amd64 (0.8.4, 0.8.5), pve-qemu-kvm:amd64 (8.0.2-3, 8.0.2-4), libpve-cluster-api-perl:amd64 (8.0.2, 8.0.3), libpve-guest-common-perl:amd64 (5.0.3, 5.0.4), pve-cluster:amd64 (8.0.2, 8.0.3), libproxmox-rs-perl:amd64 (0.3.0, 0.3.1), proxmox-ve:amd64 (8.0.1, 8.0.2), proxmox-backup-file-restore:amd64 (3.0.1-1, 3.0.2-1), libpve-access-control:amd64 (8.0.3, 8.0.4), proxmox-backup-client:amd64 (3.0.1-1, 3.0.2-1), pve-kernel-6.2:amd64 (8.0.4, 8.0.5), pve-manager:amd64 (8.0.3, 8.0.4), libpve-common-perl:amd64 (8.0.6, 8.0.7), proxmox-kernel-helper:amd64 (8.0.2, 8.0.3), libpve-cluster-perl:amd64 (8.0.2, 8.0.3)
End-Date: 2023-08-04  10:41:45
Unfortunately it kept crashing randomly. After the newest 23.7 upgrade it is totally unusable, as it crashes instantly.
This would be the interesting upgrade, not the one from today. Maybe it's already rotated into history.log.1.gz?
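If it already got rotated, something like this should still find the relevant entries (just a sketch; zgrep also reads the uncompressed current log):

Code:
# search the current and rotated apt history for kernel/QEMU upgrades
zgrep -h -e "Start-Date" -e "pve-qemu-kvm" -e "pve-kernel" \
    /var/log/apt/history.log /var/log/apt/history.log.*.gz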
 
This would be the interesting upgrade, not the one from today. Maybe it's already rotated into history.log.1.gz?
When did Proxmox release the 6.x kernels for PVE 7?
My history.log.x.gz files go back a long time and I would need to find the right one.


For the VM: earlier it just crashed sometimes when I checked the Suricata logs within the GUI.
After the newest OPNsense release it is unstable.
 
@fiona: Would it work to re-enable the bullseye repository and reinstall the 5.15 kernel from there while the rest of the system uses bookworm?
 
@fiona: Would it work to re-enable the bullseye repository and reinstall the 5.15 kernel from there while the rest of the system uses bookworm?
For testing purposes that should work, but I wouldn't keep it running for production use.
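Something along these lines should do it for a temporary test (only a sketch; that is the standard PVE 7 no-subscription repository line, the name of the temporary list file is arbitrary, and the repo should be removed again afterwards):

Code:
# temporarily add the PVE 7 (bullseye) repo just to pull the old kernel meta-package
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
    > /etc/apt/sources.list.d/pve7-kernel-temp.list
apt update
apt install pve-kernel-5.15
# remove the temporary repo again once the kernel is installed
rm /etc/apt/sources.list.d/pve7-kernel-temp.list
apt update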
 
Then let's hope that the issues with the 6.2 kernel can be found and fixed pretty soon.
Did you already test that 5.15 works? Otherwise, I wouldn't jump to conclusions too quickly.

Unfortunately it kept crashing randomly. After the newest 23.7 upgrade it is totally unusable, as it crashes instantly.
How do you know it was that update? It should not be too far back in the apt history, it was only two weeks ago.
 
Did you already test that 5.15 works? Otherwise, I wouldn't jump to conclusions too quickly.
No; as you have said, it's not being used for production, just for testing, and currently I am abroad. I can test that during the weekend, when I am physically connected to the server and not just on VPN.

How do you know it was that update? It should not be too far back in the apt history, it was only two weeks ago.
It was the newest OPNsense release at the end of July.
By update I mean the update of OPNsense (the VM), not Proxmox.
 
It was the newest OPNsense release at the end of July.
By update I mean the update of OPNsense (the VM), not Proxmox.
Ah, sorry. Thank you for the clarification!
 
@fiona I tried to use the old kernel but without success. My VM keeps crashing instantly.
Maybe this could be linked to the newest QEMU version?
 
@fiona I think I found the problem - it is the queue setting for the NICs
net0: virtio=56:58:98:15:C2:0B,bridge=vmbr0,queues=8,tag=500

I migrated my crashing OPNsense VM to my other PVE server to see whether it runs stably there, but it kept crashing as well.
After trying multiple things I removed the queues, and this seems to work, as the VM has now been running for more than a minute.
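For reference, I removed the parameter by re-setting each NIC definition without queues=8, roughly like this (net0 shown as an example; the other NICs were changed the same way):

Code:
# redefine net0 without the queues=8 option; MAC, bridge and VLAN tag stay the same
qm set 111 --net0 virtio=56:58:98:15:C2:0B,bridge=vmbr0,tag=500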
 
Even 12 hours after I changed the queue setting, the VM is still running.
So I guess I found the "guilty" setting.
 
@fiona I think I found the problem - it is the queue setting for the NICs
net0: virtio=56:58:98:15:C2:0B,bridge=vmbr0,queues=8,tag=500

I migrated my crashing OPNsense VM to my other PVE server to see whether it runs stably there, but it kept crashing as well.
After trying multiple things I removed the queues, and this seems to work, as the VM has now been running for more than a minute.
Even 12 hours after I changed the queue setting, the VM is still running.
So I guess I found the "guilty" setting.
Nice find! But before the OPNsense upgrade the VM did run successfully with QEMU 8.0 and this setting?

It might be related to this issue too: https://lists.proxmox.com/pipermail/pve-devel/2023-August/058636.html
Maybe OPNsense resets the adapters more often during boot or something similar, which would also leak the file descriptors, but that's just speculation.

What you could try is increasing the NOFILE (number of open files) limit and seeing if that makes it work. You can check and increase the limit as described here: https://forum.proxmox.com/threads/qemu-crash-with-vzdump.131603/post-578351
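For example, roughly like this for the running QEMU process of VM 111 (just a sketch, not necessarily the exact steps from the linked post; prlimit comes with util-linux and the limit values are only an example):

Code:
# current soft/hard NOFILE limit of the QEMU process for VM 111
PID=$(cat /var/run/qemu-server/111.pid)
prlimit --pid "$PID" | grep NOFILE
# how many file descriptors it has open right now
ls /proc/"$PID"/fd | wc -l
# raise the limits of the already-running process
prlimit --pid "$PID" --nofile=65536:524288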
 
Nice find! But before the OPNsense upgrade the VM did run successfully with QEMU 8.0 and this setting?
Yes, everything went well without any issues or problems.


What you could try is increasing the NOFILE (number of open files) limit and seeing if that makes it work. You can check and increase the limit as described here: https://forum.proxmox.com/threads/qemu-crash-with-vzdump.131603/post-578351
Currently it says:
Code:
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
371
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
123
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
125
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
123
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
134
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
253
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
189
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
193
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files
201
NOFILE     max number of open files                1024      4096 files
0
NOFILE     max number of open files                1024    524288 files


May I ask what "files" in that context means?
 
