Cron job logs: please help solve an issue caused by the fix for an issue that was itself the fix for another issue.

Sheelyuu

Hi, the TL;DR of the long-winded explanation below is: how do I stop a specific cron job, which runs every. single. minute., from being logged at all?
Redirecting it with &> /dev/null or >/dev/null 2>&1 obviously didn't help, since cron still reports:

Code:
Oct 30 16:41:01 Nodename CRON[137392]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Oct 30 16:41:01 Nodename CRON[137393]: (root) CMD (/root/stalenuke.sh >/dev/null 2>&1)
Oct 30 16:41:01 Nodename CRON[137392]: pam_unix(cron:session): session closed for user root


I just want this cron job not to be logged at all. (And yes, it shows up in journalctl -u cron ... along with three freshly uncovered years of cron job logs.)
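
For reference, the crontab entry itself is nothing exotic; it's essentially the following (as seen in the CMD line of the log above):

Code:
# runs every minute; the redirection only discards the script's own output,
# it does not stop cron/PAM from logging the session open/close lines
* * * * * /root/stalenuke.sh >/dev/null 2>&1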




---------------------------------------------------------
Why a job that runs every single minute?

What was the root cause? My NAS's motherboard died. It was also the backup target for all the VMs/containers in my Proxmox cluster... AND where I store ISOs and templates.
I transplanted my NAS into a VM on the one Proxmox node that could receive its storage.

Issue solved? Nope.
Since that server became a VM, I have tried an SMB share and NFS... and said share/network drive keeps randomly disconnecting from Proxmox (it still pings just fine and is accessible from everywhere else, just not from Proxmox), or sometimes the storage outright refuses to be created (even after removing the remnants in /mnt/pve/ on all the nodes).
And the only way to fix it is to reboot every node... tedious, impractical, and no way am I going to spend my days rebooting something meant to run 24/7.
SO I made a script, and a cron job, that unmounts anything stale in /mnt/pve/ ... and within 10 seconds, pvestatd remounts the share! Great!
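
The gist of the script is something like this (a simplified sketch of the idea, not the exact file):

Code:
#!/bin/bash
# /root/stalenuke.sh -- sketch: lazily unmount anything under /mnt/pve
# that no longer answers (stale NFS/SMB handle)
for mp in /mnt/pve/*; do
    # a stale mount makes stat hang, so give it a short timeout
    if ! timeout 5 stat "$mp" >/dev/null 2>&1; then
        umount -f -l "$mp"
    fi
done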

Issue solved? No.
Now I have a cron job running every single minute on every node, and creating 3 lines of logs each effing time. Please save me from this nightmare.
 
Cron job execution is logged because that's cron's job, and you cannot remove that. If you want to run your script every minute without that logging, define a systemd service whose script runs an endless while loop with a sleep 60, and set it to restart if it dies. Then you can disable your cron job.
The real problem is your nfs-server service, which is letting your NFS clients hang; fixing that might be the even better goal to aim for.
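Something along these lines would do it (a rough sketch; the unit name and paths are placeholders, adjust to your setup):

Code:
# /etc/systemd/system/stalenuke.service  (placeholder name)
[Unit]
Description=Unmount stale mounts under /mnt/pve every minute

[Service]
# one long-running loop instead of cron spawning (and logging) a new session each minute
ExecStart=/bin/bash -c 'while true; do /root/stalenuke.sh; sleep 60; done'
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Then systemctl daemon-reload, systemctl enable --now stalenuke.service, and remove the crontab entry.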
 
You bloody genius, thank you very much for the input. The worst part is I do have a bunch of containers running endless services... I guess I wasn't able to think of it after being up for... the last 3? days, with my flesh system running on Red Bulls and some frozen beef patties nuked in a pan.

As for thy metal systems... eh. Funnily enough, it stopped losing the share once I stopped fiddling with the VM, after landing on a config that performed well for other things (and might have fixed whatever was making it drop in the process). And since the cron/service will still be there to take care of mishaps IF they happen, I'll call it good for now, as in: don't fix what seems to work.


Edit: Correction, where the cron job would create 3 lines of log every time, the service creates 4.
 
Yeah, 4 lines at service start but then it's done with logging until service restart or reboot.
 
Yeeeeah, sleep deprivation did a number on me; after 6 hours of sleep I work better. In no-sleep-stupid mode I had kept the script the same and made a service that would just run it and restart every 60 seconds.
Went back and redid it properly with a while loop and everything. Sorry for the inconvenience and for being temporarily dense!
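
For the record, the redone version is roughly this wrapper (a minimal sketch; the service's ExecStart now just points at it instead of at the bare script):

Code:
#!/bin/bash
# /root/stalenuke-loop.sh -- run the cleanup forever, once a minute,
# so the service starts (and logs) once instead of every minute
while true; do
    /root/stalenuke.sh
    sleep 60
done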
 
Little update: I also gave the guest a thorough look-over.

Apparently, for some guests, the VirtIO drivers cause kernel errors (in this case the "eth1 - bad gso: type: 1" kind of error), along with some reporting wonkiness (the UI not being able to say what speed/MTU the link is negotiated at).

While the latter doesn't seem to cause anything beyond cosmetic issues, the former is a symptom of a problem waiting to cause other problems.
And sure, you could use E1000E virtual NICs to work around it, but when it's on a 10G+ bridge, that's a bit of a waste of potential.

The fix I found was to disable generic receive offload on the guest's VirtIO interfaces, in this case with the following command: ethtool -K eth0 rx-gro-hw off
It's not persistent and needs to be added to the boot sequence, but it basically stops the kernel errors (which in my case would pop up each time something called the guest's GUI, so oof, that stacks up fast in the logs).
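
One way to make it stick on a Debian-style guest using ifupdown is a post-up hook (adjust the interface name and the inet stanza to your config; on other distros a systemd oneshot or your network manager's hook does the same job):

Code:
# /etc/network/interfaces (inside the guest) -- re-apply the setting on every ifup
auto eth0
iface eth0 inet dhcp
    post-up ethtool -K eth0 rx-gro-hw off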


I still sometimes get SMB/NFS drops, but at this point I have yet to figure out whether it's a Proxmox issue, a guest-on-virtual-NICs issue, a guest issue... or an array spin-up delay issue... eh.
 
