LXC and tvheadend - Continuity counter error

Interesting. I don't think I've had that problem; so far, every time it's been tmpfs with journald. The tmpfs RAM shows up as 'buffers' for some reason (since it's disk-related, I guess? Though flushing buffers of course won't free it, so I feel it's reported in the wrong column, and that sent me on a goose chase down the wrong path!)
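In case it helps anyone chasing the same thing, here's a quick sketch (plain procfs/coreutils, nothing Proxmox-specific) for checking how much of the reported cache is actually tmpfs-backed:

```
# free's buff/cache column includes tmpfs pages, which dropping caches will NOT reclaim
free -h
# Shmem in /proc/meminfo covers tmpfs and shared memory; compare it against Cached/Buffers
grep -E '^(Shmem|Buffers|Cached):' /proc/meminfo
# See which tmpfs mounts (e.g. /run/log/journal for volatile journald) are holding the data
df -h -t tmpfs
```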

Your issue is different. You can drop the caches as explained in another thread, but that costs the whole server: there is no read cache for a while until it's repopulated, so reads on your drives will suddenly jump up until then.

https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
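For reference, the drop mechanism that article describes boils down to one sysctl file (run as root on the host; 1 = page cache, 2 = dentries/inodes, 3 = both):

```
# Write dirty pages to disk first, then drop the clean page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
```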

I'm still asking myself why nobody seems to care about the RAM issues in containers.

Yes, but doesn't simply dropping the caches have downsides?
 
Of course it does - you are flushing your caches away and dramatically increasing reads from the disks. That's much slower than reading from cache: response times go up, and you can even run into head thrashing with the disk reading and writing a lot at the same time. Your system will feel quite slow until those caches are repopulated. ZFS in particular relies on the ARC, and dumping it costs you.
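If you're on ZFS and want to put a number on what a drop costs, the current ARC size is visible in the kstats (a sketch assuming OpenZFS on Linux; arc_summary is optional tooling):

```
# Current ARC size vs. target and maximum, in MiB
grep -E '^(size|c|c_max) ' /proc/spl/kstat/zfs/arcstats | awk '{printf "%-6s %.0f MiB\n", $1, $3/1048576}'
# Or, if the ZFS utilities are installed, a friendlier summary
arc_summary -s arc
```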

I think they do care, and most of the people affected seem to be "using a lot of RAM" - either in their tmpfs mounts or in runaway apps. Debug your usage!
Solutions are spread throughout this forum, as I found for my own issue; a quick way to start looking is sketched below.
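Run from the PVE host (a sketch; <CTID> is a placeholder for your container ID):

```
# Top resident processes inside the container
pct exec <CTID> -- ps -eo pid,rss,comm --sort=-rss | head -15
# tmpfs mounts that count against the container's memory
pct exec <CTID> -- df -h -t tmpfs
# journald is a common offender when it logs to volatile (tmpfs) storage
pct exec <CTID> -- journalctl --disk-usage
```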
 
Very interesting thread. I'm another user affected by this: it worked fine on Proxmox 6 but not on Proxmox 7. I initially thought it was disk related, but you guys are right - it happens exactly at the moment free memory reaches 0 in top.

What I noticed is that the issue goes away as soon as I increase the container's memory (while it is running), but it comes back once all of that extra memory has been consumed by buff/cache and free hits 0 again.
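For anyone wanting to try the same while the container is running, the limit can be raised on the fly from the host (a sketch; <CTID> is a placeholder and the value is in MiB):

```
# Raise the memory limit of the running container, e.g. to 4 GiB
pct set <CTID> --memory 4096
# Confirm the new value in the container config
pct config <CTID> | grep '^memory'
```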

Did any of you guys use a workaround successfully?
 
Thread starter here... At least in my situation, unfortunately no. I re-tried a few months ago but it still failed. I never tried again with PVE 8, because I've been using a dedicated small NUC-like box for tvheadend since then and haven't had any issues.
 
I think I found a fix in another thread here:

https://forum.proxmox.com/threads/c...l-oom-killer-kill-processes.67666/post-497490

As a test, I'm doing four recordings from four tuners while also watching one of them live; stable for more than half an hour now.
 