you tag the file with its UPID (e.g. a filename_upid label) and a special marker label like this_is_upid_detail; then, when you see a UPID in the main log, you can query for this_is_upid_detail entries with filename_upid == UPID to fetch the relevant details.
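For what it's worth, a minimal sketch of that lookup against Loki's standard HTTP query API (the Loki URL below is a made-up example; the this_is_upid_detail / filename_upid labels are the ones described above):

```python
# Minimal sketch: given a UPID seen in the main task log, fetch the matching
# detail entries from Loki via its query_range endpoint.
import json
import time
import urllib.parse
import urllib.request

LOKI_URL = "http://loki.example.internal:3100"  # hypothetical Loki endpoint

def fetch_upid_detail(upid: str, lookback_s: int = 3600) -> list:
    """Return the log lines labelled as UPID detail for the given UPID."""
    logql = '{this_is_upid_detail="true", filename_upid="%s"}' % upid
    now_ns = int(time.time() * 1e9)
    params = urllib.parse.urlencode({
        "query": logql,
        "start": now_ns - lookback_s * 10**9,  # Loki expects nanosecond epochs
        "end": now_ns,
        "limit": 1000,
    })
    with urllib.request.urlopen(f"{LOKI_URL}/loki/api/v1/query_range?{params}") as resp:
        data = json.load(resp)
    # Flatten the returned streams into plain log lines.
    return [line for stream in data["data"]["result"] for _, line in stream["values"]]

# e.g. fetch_upid_detail("UPID:pve1:...")  # UPID copied from the main log
```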
incron/FAM/etc.[1] adds a tad more... fragile complexity to the...
I did :)
Bit of a bigger challenge when you want to stream the logs to a central logger/store (like Loki / ElasticSearch) using a rather static tool (like Promtail / Fluentd / Graylog).
My question, as I'm busy with a central logging project shipping logs from multiple Proxmox clusters to Loki via Promtail: is there a hook script of sorts to know when the next file is created/closed/etc.?
A bit counterproductive to scan the directory every couple of minutes, and then to...
One of the questions that arises with this: is there a hook that would be called at the end of tasks?
That would then be an easy way to send the task output to the central logger.
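If such a hook existed, I'd imagine it handing over the UPID and the task log path, and the rest is just Loki's standard push API. A rough sketch (the hook arguments and the Loki URL are assumptions on my part; PVE does not currently provide such a hook):

```python
# Hypothetical end-of-task hook: read the finished task log and push it to Loki.
import json
import sys
import time
import urllib.request

LOKI_PUSH_URL = "http://loki.example.internal:3100/loki/api/v1/push"  # hypothetical

def push_task_log(upid: str, logfile: str) -> None:
    with open(logfile, "r", encoding="utf-8", errors="replace") as fh:
        lines = fh.read().splitlines()
    ts = str(int(time.time() * 1e9))  # Loki wants nanosecond timestamps as strings
    payload = {"streams": [{
        "stream": {"job": "pve-tasks", "filename_upid": upid},
        "values": [[ts, line] for line in lines],
    }]}
    req = urllib.request.Request(
        LOKI_PUSH_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    push_task_log(sys.argv[1], sys.argv[2])  # e.g. <UPID> <path to task log>
```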
The same way ISPs oversell bandwidth by playing a numbers game,
i.e. expecting that, on average, users will not use more than 50% of their RAM, so 100 x 2 GB VPSs are expected to use no more than 100 * 2 * 0.5 = 100 GB of RAM.
Some even enforce (inside the VMs) `echo 3 > /proc/sys/vm/drop_caches`, and others have...
That is/has been the "solution" to assist in catching up, but it becomes... tedious to load/create them all. And then it gets stuck with the single big DB, which is then a single synchronous thread.
I've got a situation where I need to run something like systemtap to track DNS requests in particular. I found this solution https://serverfault.com/questions/192893/how-i-can-identify-which-process-is-making-udp-traffic-on-linux/192920#192920 which would've been a "perfect" solution, but PVE kernels...
I have a case where the latency of a cross-continent synchronization is being slowed down by the "small" requests from the pulling PBS. If I start up multiple synchronization jobs for multiple backup groups, I do get the full (expected) bandwidth, but when it's "stuck" on a single group's...
I'm starting to run into the need for something similar to AWS's IMDS (using IPs fd00:ec2::254 & 169.254.169.254) with something connected to a Vault (à la HashiCorp's Vault) for secrets and automated secret sharing/etc. without the secrets/passwords being unencrypted in the VM/LXC...
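For context, the IMDS pattern I mean is roughly the following (a sketch of AWS's IMDSv2 token dance; a Proxmox/Vault-backed equivalent is exactly the part that doesn't exist yet):

```python
# Sketch of the AWS IMDSv2 pattern: a link-local metadata service the guest
# can query without any credentials stored on disk.
import urllib.request

IMDS = "http://169.254.169.254"  # link-local address mentioned above

def imds_get(path: str) -> str:
    # IMDSv2: first fetch a short-lived session token, then use it for reads.
    tok_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    token = urllib.request.urlopen(tok_req).read().decode()
    req = urllib.request.Request(
        f"{IMDS}/latest/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

# e.g. imds_get("instance-id") inside an EC2 instance; a Proxmox equivalent
# backed by Vault would need a similar host-side service (not available today).
```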
Good day
I've stumbled on FAIme https://fai-project.org/FAIme/ which creates a "nice" ISO installer that mostly just works... except for the requirement/need of DHCP (not an option at present, unless you could advise an ISC Kea option/replacement with a cost-effective (for me) static host API...
well... that replug via the GUI is just... not feasible when you have tens of VMs/LXCs to do.
Where is that `ifreload` documented?
I wish it were part of the upgrade documentation/etc., or at least triggered a warning whenever Open vSwitch is involved.
oh, I've been working around this since the 6.x days with out-of-band RMI/iKVM connections: I run the `apt [dist-]upgrade` in a screen/tmux/byobu session after pre-vacuuming the hypervisor, as I know the VM etc. interfaces will also be "lost" after the Open vSwitch restart.
It's especially...
Good day,
I need to fix the DNS query flooding and deploy systemd-resolved, BUT that means I need to find a different way to inject the LXCs' search domains and DNS server. I know what all I have to modify/change, but before I go on that endeavour, I'm wondering what others are...
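A minimal sketch of one possible approach (the paths and values here are assumptions, not what PVE does today): write a resolved.conf.d drop-in into the container's rootfs so systemd-resolved picks up the DNS server and search domains itself instead of PVE rewriting /etc/resolv.conf:

```python
# Hypothetical helper: drop a resolved.conf.d snippet into a container rootfs.
from pathlib import Path

def write_resolved_dropin(ct_rootfs: str, dns: str, search: str) -> None:
    dropin_dir = Path(ct_rootfs) / "etc/systemd/resolved.conf.d"
    dropin_dir.mkdir(parents=True, exist_ok=True)
    # DNS= and Domains= are standard resolved.conf keys.
    (dropin_dir / "pve-dns.conf").write_text(
        f"[Resolve]\nDNS={dns}\nDomains={search}\n"
    )

# e.g. write_resolved_dropin("/rpool/data/subvol-101-disk-0", "10.0.0.53", "example.internal")
```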
I need to add a correction there:
*IF* you use Open vSwitch: *NO*.
You must do that from a console, and expect to reboot, as that is the easier way to get the system and the VMs/LXCs connected back to the Open vSwitch bridge(s).
Sorry, I get bitten by this every time I try it...