Maybe you can elaborate? How does shutting down a node flood the logs?
... since it was part of a cluster, it will.
Info
The pvesr command line tool manages the Proxmox VE storage replication framework. Storage replication brings
redundancy for guests using local storage and reduces migration time.
It replicates guest volumes to another node so that all data is available without using shared storage.
Replication uses snapshots to minimize traffic sent over the network. Therefore, new data is sent only incrementally
after the initial full sync. In the case of a node failure, your guest data is still available on the replicated node.
The replication is done automatically at configurable intervals. The minimum replication interval is one minute,
and the maximal interval is once a week. The format used to specify those intervals is a subset of systemd calendar
events; see the Schedule Format section.
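A few example schedule strings in that format (a sketch; the exact subset Proxmox accepts is described in the Schedule Format section):

*/15             every 15 minutes
mon..fri 22:00   every weekday at 22:00
sat 05:00        once a week, on Saturday at 05:00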
It is possible to replicate a guest to multiple target nodes, but not twice to the same target node.
Each replication's bandwidth can be limited, to avoid overloading a storage or server.
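As a sketch combining both points, this would create a replication job that runs every 15 minutes with a 10 MB/s bandwidth limit (the job ID 100-0 and target nodeB are placeholders for your own guest and node), and then list the configured jobs:

pvesr create-local-job 100-0 nodeB --schedule "*/15" --rate 10
pvesr list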
Guests with replication enabled can currently only be migrated offline. Only changes since the last replication
(so-called deltas) need to be transferred if the guest is migrated to a node to which it already is replicated.
This reduces the time needed significantly. The replication direction automatically switches if you migrate a guest
to the replication target node.
For example: VM100 is currently on nodeA and gets replicated to nodeB. You migrate it to nodeB,
so now it gets automatically replicated back from nodeB to nodeA.
If you migrate to a node where the guest is not replicated, the whole disk data must be sent over.
After the migration, the replication job continues to replicate this guest to the configured nodes.
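For example, the offline migration from the VM100 example could be triggered like this (a sketch; for a container it would be pct migrate instead):

qm migrate 100 nodeB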
Problem
In order to get rid of the looping messages in the syslog under your node:
Mar 13 12:36:00 pve systemd[1]: Starting Proxmox VE replication runner...
Mar 13 12:36:00 pve systemd[1]: Started Proxmox VE replication runner.
Mar 13 12:37:00 pve systemd[1]: Starting Proxmox VE replication runner...
Mar 13 12:37:00 pve systemd[1]: Started Proxmox VE replication runner.
just type the following in your CLI:
systemctl stop pvesr.timer
systemctl disable pvesr.timer
and the output Removed /etc/systemd/system/timers.target.wants/pvesr.timer. will appear
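To verify, you can check that the timer no longer shows up (a sketch):

systemctl list-timers --all | grep pvesr

With the timer stopped and disabled, the replication runner is no longer started every minute, so those two syslog lines stop appearing.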
Other way
change 'minutely' to 'monthly' in the OnCalendar= line of /lib/systemd/system/pvesr.timer, then run systemctl daemon-reload
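Note that files under /lib/systemd/system can be overwritten by package updates. As a sketch of a more durable variant, the same change can be made with a drop-in override (OnCalendar is a list option, so the empty assignment clears the packaged value first):

systemctl edit pvesr.timer

and in the editor add:

[Timer]
OnCalendar=
OnCalendar=monthly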
Let me give you one more reason:
Issue:
Oct 22 18:03:01 pmx pmxcfs[386322]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/138: -1
Oct 22 18:03:01 pmx pmxcfs[386322]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/134: -1
Oct 22 18:03:01 pmx pmxcfs[386322]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/137: -1
Oct 22 18:03:01 pmx pmxcfs[386322]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/136: -1
Oct 22 18:03:01 pmx pmxcfs[386322]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/135: -1
Possible explanation:
These error messages usually happen if the system clock was off. If it was in the future at some point, the RRD
is written with that timestamp; once your clock gets synchronized to the current time, the RRD refuses to update
(since the timestamp is now older than the one from when the clock was ahead). Once your system
reaches the future timestamp, the RRDs get updated normally again and this error vanishes.
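To check whether a given RRD file is indeed "in the future", you can compare its last update timestamp with the current unix time (a sketch; assumes the rrdtool package is installed, and uses VM 138 from the log above):

rrdtool last /var/lib/rrdcached/db/pve2-vm/138
date +%s

If the first number is larger than the second, the file carries a future timestamp and updates will be refused until the clock catches up.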
Resolution:
Depending on how important the historical data is, I would probably move the cache directory to a safe place
and restart the services (that way recording should start fresh):
cd /var/lib
systemctl stop rrdcached
mv rrdcached rrdcached.bck
systemctl start rrdcached
systemctl restart pve-cluster
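Afterwards you can follow the log to confirm the RRDC update errors are gone (a sketch; pmxcfs logs via the pve-cluster unit):

journalctl -f -u pve-cluster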