DBUS match rules

Oct 2, 2024
Our nodes are logging several of these per minute:

Nov 12 10:17:30 pvm01 dbus-daemon[2341]: [system] Connection ":1.154771" is not allowed to add more match rules (increase limits in configuration file if required; max_match_rules_per_connection=512)

Before increasing this limit, we'd like to determine what is actually consuming the match rules, so we can either size the limit appropriately or simply disable whatever is using them if it isn't applicable to our environment.

TIA
 
This is probably not directly related to PVE. It seems that some program is registering a lot of dbus match rules, exhausting the per-connection limit and causing this message. If that's the case, then increasing the limit will probably only be a short-lived remedy.
It might be better to track down which program is the cause and look into a fix from its authors.
My dbus-fu is rather rusty, so I cannot give you concrete help for debugging this, but maybe someone else can.
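As a rough sketch of how one could track this down (assuming python3-dbus is installed; the org.freedesktop.DBus.Debug.Stats interface is only usable if dbus was built with stats support and the bus policy allows the call, typically as root only):

```python
#!/usr/bin/env python3
# Rough sketch: list D-Bus match rules per connection and map the worst
# offenders to processes. Needs python3-dbus and root; the Debug.Stats
# interface is only available if dbus was built with stats support and
# the bus policy allows the call.
import dbus

bus = dbus.SystemBus()
driver = bus.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus')
stats = dbus.Interface(driver, 'org.freedesktop.DBus.Debug.Stats')
core = dbus.Interface(driver, 'org.freedesktop.DBus')

# GetAllMatchRules returns a dict: unique connection name -> list of rule strings
rules_by_conn = stats.GetAllMatchRules()

# Print the ten connections holding the most match rules, with the owning process
for conn, rules in sorted(rules_by_conn.items(), key=lambda kv: -len(kv[1]))[:10]:
    try:
        pid = int(core.GetConnectionUnixProcessID(conn))
        with open('/proc/%d/comm' % pid) as f:
            comm = f.read().strip()
    except (dbus.DBusException, OSError):
        pid, comm = '?', '(gone)'
    print('%-12s %6d rules  pid=%s (%s)' % (conn, len(rules), pid, comm))
```

If Debug.Stats is not available, running dbus-monitor --system as root and watching for AddMatch calls should at least reveal which connections keep adding rules.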
 
I'm still digging into this, but given that the identical thing is happening across all 8 nodes, it does seem to be PVE related.
 
Hmm, rethinking this, there are indeed a handful of situations where we talk to systemd over dbus, mainly for setting some cgroup resource limits on CT/VM start, for waiting until a VM has really stopped, and for checking whether the ESXi import storage tooling's FUSE mount is active.
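For illustration, roughly what such a systemd-over-dbus check looks like; this is not PVE's actual code, and the unit name is a made-up example:

```python
#!/usr/bin/env python3
# Illustration only (not PVE's actual code): ask systemd over the system bus
# whether a unit is active, e.g. a FUSE mount unit. The unit name is made up.
import dbus

bus = dbus.SystemBus()
systemd = bus.get_object('org.freedesktop.systemd1', '/org/freedesktop/systemd1')
manager = dbus.Interface(systemd, 'org.freedesktop.systemd1.Manager')

unit_name = 'run-example.mount'  # hypothetical unit, for illustration only
unit_path = manager.GetUnit(unit_name)  # raises NoSuchUnit if the unit is not loaded
unit = bus.get_object('org.freedesktop.systemd1', unit_path)
props = dbus.Interface(unit, 'org.freedesktop.DBus.Properties')
state = props.Get('org.freedesktop.systemd1.Unit', 'ActiveState')
print('%s is %s' % (unit_name, state))
```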

Is anything of that (frequently) used? Do your hosts have a long uptime?
 
We are in the midst of migrating from VMWare to Proxmox, so each PVE node has ~19 ESXi storages mounted for use in importing from VMWare.
 
Ack, that's an amount of ESXi storages that not a lot of setups use, so it seems likely that this is the cause.

Would you mind opening a bug report at https://bugzilla.proxmox.com/ to keep track of this? Just mention the error, the high count of ESXi storage entries, and this thread. If you'd rather not, I can do so tomorrow.

Until then, it might indeed be a good stop-gap to increase the limit, e.g. double it for starters.
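For reference, one way to do that is a dbus configuration override; a sketch, assuming /etc/dbus-1/system-local.conf is picked up on your distribution (the exact override location may vary):

```xml
<!-- e.g. /etc/dbus-1/system-local.conf; the exact override location may vary -->
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-Bus Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <!-- double the default of 512 mentioned in the error message -->
  <limit name="max_match_rules_per_connection">1024</limit>
</busconfig>
```

Reloading (or restarting) the dbus service afterwards should make the daemon re-read its configuration.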
 
