`got inotify error...` - worrisome?

Newbie here: I'm watching `journalctl -f` and seeing:

Code:
got inotify poll request in wrong process - disabling inotify

Should I worry? What does it mean?

Thx
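
For context, a quick way to see how often the message actually shows up is to filter the journal for the pveproxy unit (assuming a systemd build with journalctl's --grep support, which Debian/PVE ships):

Code:
# count occurrences across the retained journal
journalctl -u pveproxy --grep 'disabling inotify' --no-pager | wc -l

# or follow just the pveproxy unit live instead of the whole journal
journalctl -f -u pveproxy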
 
It does no harm, but we log it because it should not happen. I cannot say more without more context...
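
For background, a sketch of the mechanics as I understand them (not an official explanation): pveproxy runs a master process that forks workers, and a reload just sends HUP to the master, as the logs below show. PVE's inotify helper apparently checks which process is polling a watch, and a worker that ends up polling a watch it did not set up logs this message and disables inotify for itself (hence the wording). If the message keeps recurring, a full restart replaces the master and all its workers, which usually clears it:

Code:
# reload = SIGHUP to the running master (this is what package upgrades trigger)
systemctl reload pveproxy

# restart replaces the master and all workers
systemctl restart pveproxy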
 
Code:
/var/log/syslog.6.gz:Sep  6 11:51:42 pve pveproxy[15561]: got inotify poll request in wrong process - disabling inotify
/var/log/syslog.6.gz:Sep  6 13:27:37 pve pveproxy[24868]: got inotify poll request in wrong process - disabling inotify
/var/log/syslog.6.gz:Sep  6 14:35:16 pve pveproxy[17161]: got inotify poll request in wrong process - disabling inotify
 
This daemon error still shows up even on a fresh install of PVE 8.1.
Just posting some logs and context FYI.

Code:
Starting system upgrade: apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  bind9-dnsutils bind9-host bind9-libs libpve-access-control libunbound8 unbound
  unbound-host
7 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 3892 kB of archives.
After this operation, 48.1 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://security.debian.org bookworm-security/main amd64 unbound amd64 1.17.1-2+deb12u2 [949 kB]
Get:2 http://security.debian.org bookworm-security/main amd64 bind9-host amd64 1:9.18.24-1 [305 kB]
Get:3 http://security.debian.org bookworm-security/main amd64 bind9-dnsutils amd64 1:9.18.24-1 [403 kB]
Get:4 http://security.debian.org bookworm-security/main amd64 bind9-libs amd64 1:9.18.24-1 [1413 kB]
Get:5 http://security.debian.org bookworm-security/main amd64 libunbound8 amd64 1.17.1-2+deb12u2 [550 kB]
Get:6 http://security.debian.org bookworm-security/main amd64 unbound-host amd64 1.17.1-2+deb12u2 [201 kB]
Get:7 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 libpve-access-control all 8.1.1 [71.5 kB]
Fetched 3892 kB in 3s (1461 kB/s)             
Reading changelogs... Done
(Reading database ... 56099 files and directories currently installed.)
Preparing to unpack .../0-unbound_1.17.1-2+deb12u2_amd64.deb ...
Unpacking unbound (1.17.1-2+deb12u2) over (1.17.1-2+deb12u1) ...
Preparing to unpack .../1-bind9-host_1%3a9.18.24-1_amd64.deb ...
Unpacking bind9-host (1:9.18.24-1) over (1:9.18.19-1~deb12u1) ...
Preparing to unpack .../2-bind9-dnsutils_1%3a9.18.24-1_amd64.deb ...
Unpacking bind9-dnsutils (1:9.18.24-1) over (1:9.18.19-1~deb12u1) ...
Preparing to unpack .../3-bind9-libs_1%3a9.18.24-1_amd64.deb ...
Unpacking bind9-libs:amd64 (1:9.18.24-1) over (1:9.18.19-1~deb12u1) ...
Preparing to unpack .../4-libpve-access-control_8.1.1_all.deb ...
Unpacking libpve-access-control (8.1.1) over (8.0.7) ...
Preparing to unpack .../5-libunbound8_1.17.1-2+deb12u2_amd64.deb ...
Unpacking libunbound8:amd64 (1.17.1-2+deb12u2) over (1.17.1-2+deb12u1) ...
Preparing to unpack .../6-unbound-host_1.17.1-2+deb12u2_amd64.deb ...
Unpacking unbound-host (1.17.1-2+deb12u2) over (1.17.1-2+deb12u1) ...
Setting up unbound (1.17.1-2+deb12u2) ...
Setting up bind9-libs:amd64 (1:9.18.24-1) ...
Setting up libunbound8:amd64 (1.17.1-2+deb12u2) ...
Setting up libpve-access-control (8.1.1) ...
Setting up bind9-host (1:9.18.24-1) ...
Setting up unbound-host (1.17.1-2+deb12u2) ...
Setting up bind9-dnsutils (1:9.18.24-1) ...
Processing triggers for libc-bin (2.36-9+deb12u4) ...
Processing triggers for pve-manager (8.1.4) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for pve-ha-manager (4.0.3) ...

Your System is up-to-date

`journalctl -e` output:
Code:
Feb 16 10:32:00 pmhost pvedaemon[539277]: starting termproxy UPID:pmhost:00083A8D:00CC0437:65CEAD00:vncshell::root@pam:
Feb 16 10:32:00 pmhost pvedaemon[525102]: <root@pam> starting task UPID:pmhost:00083A8D:00CC0437:65CEAD00:vncshell::root@pam:
Feb 16 10:32:01 pmhost pvedaemon[530401]: <root@pam> successful auth for user 'root@pam'
Feb 16 10:32:07 pmhost audit[539508]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="unbound" pid=539508 comm="apparmor_parser"
Feb 16 10:32:07 pmhost kernel: audit: type=1400 audit(1708043527.142:31): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="unbound" pid=539508 comm="apparmor_parser"
Feb 16 10:32:07 pmhost systemd[1]: Reloading.
Feb 16 10:32:07 pmhost unbound[1224]: [1224:0] info: service stopped (unbound 1.17.1).
Feb 16 10:32:07 pmhost systemd[1]: Stopping unbound.service - Unbound DNS server...
<skipped some unbound messages>
Feb 16 10:32:07 pmhost systemd[1]: unbound.service: Deactivated successfully.
Feb 16 10:32:07 pmhost systemd[1]: Stopped unbound.service - Unbound DNS server.
Feb 16 10:32:07 pmhost systemd[1]: unbound.service: Consumed 11.885s CPU time.
Feb 16 10:32:07 pmhost systemd[1]: Starting unbound.service - Unbound DNS server...
Feb 16 10:32:07 pmhost audit[539549]: AVC apparmor="DENIED" operation="capable" class="cap" profile="unbound" pid=539549 comm="unbound" capability=12  capname="net_admin"
Feb 16 10:32:07 pmhost kernel: audit: type=1400 audit(1708043527.954:32): apparmor="DENIED" operation="capable" class="cap" profile="unbound" pid=539549 comm="unbound" capability=12  capname="net_admin"
Feb 16 10:32:07 pmhost unbound[539549]: [539549:0] notice: init module 0: subnetcache
Feb 16 10:32:07 pmhost unbound[539549]: [539549:0] notice: init module 1: validator
Feb 16 10:32:07 pmhost unbound[539549]: [539549:0] notice: init module 2: iterator
Feb 16 10:32:08 pmhost unbound[539549]: [539549:0] info: start of service (unbound 1.17.1).
Feb 16 10:32:08 pmhost systemd[1]: Started unbound.service - Unbound DNS server.
Feb 16 10:32:08 pmhost systemd[1]: Reloading.
Feb 16 10:32:09 pmhost systemd[1]: Reloading pvedaemon.service - PVE API Daemon...
Feb 16 10:32:09 pmhost pvedaemon[539592]: send HUP to 1452
Feb 16 10:32:09 pmhost pvedaemon[1452]: received signal HUP
Feb 16 10:32:09 pmhost pvedaemon[1452]: server closing
Feb 16 10:32:09 pmhost pvedaemon[1452]: server shutdown (restart)
Feb 16 10:32:09 pmhost systemd[1]: Reloaded pvedaemon.service - PVE API Daemon.
Feb 16 10:32:09 pmhost systemd[1]: Reloading pvestatd.service - PVE Status Daemon...
Feb 16 10:32:10 pmhost pvestatd[539596]: send HUP to 1422
Feb 16 10:32:10 pmhost pvestatd[1422]: received signal HUP
Feb 16 10:32:10 pmhost pvestatd[1422]: server shutdown (restart)
Feb 16 10:32:10 pmhost systemd[1]: Reloaded pvestatd.service - PVE Status Daemon.
Feb 16 10:32:10 pmhost systemd[1]: Reloading pveproxy.service - PVE API Proxy Server...
Feb 16 10:32:10 pmhost pvedaemon[1452]: restarting server
Feb 16 10:32:10 pmhost pvedaemon[1452]: starting 3 worker(s)
Feb 16 10:32:10 pmhost pvedaemon[1452]: worker 539600 started
Feb 16 10:32:10 pmhost pvedaemon[1452]: worker 539601 started
Feb 16 10:32:10 pmhost pvedaemon[1452]: worker 539602 started
Feb 16 10:32:10 pmhost pvestatd[1422]: restarting server
Feb 16 10:32:11 pmhost pveproxy[539599]: send HUP to 1461
Feb 16 10:32:11 pmhost pveproxy[1461]: received signal HUP
Feb 16 10:32:11 pmhost pveproxy[1461]: server closing
Feb 16 10:32:11 pmhost pveproxy[1461]: server shutdown (restart)
Feb 16 10:32:11 pmhost systemd[1]: Reloaded pveproxy.service - PVE API Proxy Server.
Feb 16 10:32:11 pmhost systemd[1]: Reloading spiceproxy.service - PVE SPICE Proxy Server...
Feb 16 10:32:11 pmhost spiceproxy[539605]: send HUP to 1469
Feb 16 10:32:11 pmhost spiceproxy[1469]: received signal HUP
Feb 16 10:32:11 pmhost spiceproxy[1469]: server closing
Feb 16 10:32:11 pmhost spiceproxy[1469]: server shutdown (restart)
Feb 16 10:32:11 pmhost systemd[1]: Reloaded spiceproxy.service - PVE SPICE Proxy Server.
Feb 16 10:32:11 pmhost systemd[1]: Reloading pvescheduler.service - Proxmox VE scheduler...
Feb 16 10:32:11 pmhost spiceproxy[1469]: restarting server
Feb 16 10:32:11 pmhost spiceproxy[1469]: starting 1 worker(s)
Feb 16 10:32:11 pmhost spiceproxy[1469]: worker 539609 started
Feb 16 10:32:11 pmhost pveproxy[1461]: Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface.
Feb 16 10:32:11 pmhost pveproxy[1461]: restarting server
Feb 16 10:32:11 pmhost pveproxy[1461]: starting 3 worker(s)
Feb 16 10:32:11 pmhost pveproxy[1461]: worker 539610 started
Feb 16 10:32:11 pmhost pveproxy[1461]: worker 539611 started
Feb 16 10:32:11 pmhost pveproxy[1461]: worker 539612 started
Feb 16 10:32:11 pmhost pvescheduler[539608]: send HUP to 2827
Feb 16 10:32:11 pmhost pvescheduler[2827]: received signal HUP
Feb 16 10:32:11 pmhost pvescheduler[2827]: server shutdown (restart)
Feb 16 10:32:11 pmhost systemd[1]: Reloaded pvescheduler.service - Proxmox VE scheduler.
Feb 16 10:32:12 pmhost pvescheduler[2827]: restarting server
Feb 16 10:32:12 pmhost systemd[1]: Stopping pve-ha-lrm.service - PVE Local HA Resource Manager Daemon...
Feb 16 10:32:13 pmhost pve-ha-lrm[1471]: received signal TERM
Feb 16 10:32:13 pmhost pve-ha-lrm[1471]: restart LRM, freeze all services
Feb 16 10:32:13 pmhost pve-ha-lrm[1471]: server stopped
Feb 16 10:32:14 pmhost systemd[1]: pve-ha-lrm.service: Deactivated successfully.
Feb 16 10:32:14 pmhost systemd[1]: Stopped pve-ha-lrm.service - PVE Local HA Resource Manager Daemon.
Feb 16 10:32:14 pmhost systemd[1]: pve-ha-lrm.service: Consumed 14.275s CPU time.
Feb 16 10:32:14 pmhost systemd[1]: Starting pve-ha-lrm.service - PVE Local HA Resource Manager Daemon...
Feb 16 10:32:14 pmhost pve-ha-lrm[539653]: starting server
Feb 16 10:32:14 pmhost pve-ha-lrm[539653]: status change startup => wait_for_agent_lock
Feb 16 10:32:14 pmhost systemd[1]: Started pve-ha-lrm.service - PVE Local HA Resource Manager Daemon.
Feb 16 10:32:14 pmhost systemd[1]: Stopping pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon...
Feb 16 10:32:15 pmhost pve-ha-crm[1460]: received signal TERM
Feb 16 10:32:15 pmhost pve-ha-crm[1460]: server received shutdown request
Feb 16 10:32:15 pmhost pve-ha-crm[1460]: server stopped
Feb 16 10:32:15 pmhost pvedaemon[525102]: worker exit
Feb 16 10:32:15 pmhost pvedaemon[530401]: worker exit
Feb 16 10:32:15 pmhost pvedaemon[531392]: worker exit
Feb 16 10:32:15 pmhost pvedaemon[1452]: worker 525102 finished
Feb 16 10:32:15 pmhost pvedaemon[1452]: worker 531392 finished
Feb 16 10:32:15 pmhost pvedaemon[1452]: worker 530401 finished
Feb 16 10:32:15 pmhost unbound[539549]: [539549:2] info: generate keytag query _ta-4f66. NULL IN
Feb 16 10:32:16 pmhost systemd[1]: pve-ha-crm.service: Deactivated successfully.
Feb 16 10:32:16 pmhost systemd[1]: Stopped pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon.
Feb 16 10:32:16 pmhost systemd[1]: pve-ha-crm.service: Consumed 9.568s CPU time.
Feb 16 10:32:16 pmhost systemd[1]: Starting pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon...
Feb 16 10:32:16 pmhost spiceproxy[379134]: worker exit
Feb 16 10:32:16 pmhost spiceproxy[1469]: worker 379134 finished
Feb 16 10:32:16 pmhost pve-ha-crm[539666]: starting server
Feb 16 10:32:16 pmhost pve-ha-crm[539666]: status change startup => wait_for_quorum
Feb 16 10:32:16 pmhost pveproxy[532041]: worker exit
Feb 16 10:32:16 pmhost systemd[1]: Started pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon.
Feb 16 10:32:16 pmhost pveproxy[1461]: worker 532041 finished
Feb 16 10:32:16 pmhost pveproxy[1461]: worker 529685 finished
Feb 16 10:32:16 pmhost pveproxy[1461]: worker 534966 finished
Feb 16 10:32:16 pmhost pveupgrade[539280]: update new package list: /var/lib/pve-manager/pkgupdates
Feb 16 10:32:17 pmhost pveproxy[539667]: worker exit
Feb 16 10:32:21 pmhost pveproxy[539668]: got inotify poll request in wrong process - disabling inotify
The pveproxy lines from the same window, isolated:

Code:
Feb 16 10:32:10 pmhost systemd[1]: Reloading pveproxy.service - PVE API Proxy Server...
Feb 16 10:32:11 pmhost pveproxy[539599]: send HUP to 1461
Feb 16 10:32:11 pmhost pveproxy[1461]: received signal HUP
Feb 16 10:32:11 pmhost pveproxy[1461]: server closing
Feb 16 10:32:11 pmhost pveproxy[1461]: server shutdown (restart)
Feb 16 10:32:11 pmhost systemd[1]: Reloaded pveproxy.service - PVE API Proxy Server.
Feb 16 10:32:11 pmhost pveproxy[1461]: Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface.
Feb 16 10:32:11 pmhost pveproxy[1461]: restarting server
Feb 16 10:32:11 pmhost pveproxy[1461]: starting 3 worker(s)
Feb 16 10:32:11 pmhost pveproxy[1461]: worker 539610 started
Feb 16 10:32:11 pmhost pveproxy[1461]: worker 539611 started
Feb 16 10:32:11 pmhost pveproxy[1461]: worker 539612 started
Feb 16 10:32:16 pmhost pveproxy[532041]: worker exit
Feb 16 10:32:16 pmhost pveproxy[1461]: worker 532041 finished
Feb 16 10:32:16 pmhost pveproxy[1461]: worker 529685 finished
Feb 16 10:32:16 pmhost pveproxy[1461]: worker 534966 finished
Feb 16 10:32:17 pmhost pveproxy[539667]: worker exit
Feb 16 10:32:21 pmhost pveproxy[539668]: got inotify poll request in wrong process - disabling inotify
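
Note the timing in the excerpt: pveproxy is reloaded at 10:32:10 and the inotify message follows about ten seconds later in one of the pveproxy processes, i.e. it looks like a side effect of the reload triggered by the upgrade, not a separate failure. A hedged sketch for confirming the same correlation on your own host:

Code:
# show the inotify messages next to the reloads that precede them
journalctl -u pveproxy --grep 'inotify|Reload' -o short-iso --no-pager

# and see which package runs triggered those reloads
grep 'Start-Date\|Upgrade' /var/log/apt/history.log | tail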
 
Thanks a lot, it helped solve the issue.

Code:
# pvecm updatecerts
(re)generate node files
merge authorized SSH keys
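
For anyone scripting this: per the pvecm docs, updatecerts also takes a --silent flag (verify with `pvecm help updatecerts`), and restarting pveproxy afterwards is the commonly suggested follow-up so it picks up the regenerated files:

Code:
# regenerate the node's certificate/key material quietly
pvecm updatecerts --silent

# let the web proxy pick up the regenerated certificate
systemctl restart pveproxy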
 
