I see. That makes more sense then.
The answer lies in the documentation for ramfs (https://github.com/torvalds/linux/blob/v4.17/Documentation/filesystems/ramfs-rootfs-initramfs.txt)
In short- you should be able to use the ramdisk for virtual machines, but for containers it would require a...
Your problem can be solved in a number of ways, but the most impactful would be to adapt your website to Docker/Kubernetes. What that will allow you to do is scale your compute resources with load- and you can run it anywhere, including on Proxmox. This isn't easy and would require a substantial...
This is not a use case for Ceph. Ceph is designed for multiple independent silos of disk drives. Since you have two silos with RAID, you will likely want to look at Gluster if you want replication, or just iSCSI+LVM if you don't.
Well, that would run the script once and will not affect any writes that follow. My thought was some kind of auto-discard, but it doesn't appear that this is possible. Thanks for your thoughts :) I will continue to tweak the cron schedule.
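For anyone landing here later, a minimal sketch of the cron approach- assuming `pct fstrim` is available on your Proxmox release and your container volumes support discard. The schedule and the loop over `pct list` are illustrative, not a tested production recipe:

```shell
# Hypothetical /etc/cron.d/lxc-fstrim: trim every LXC container nightly at 03:00.
# The schedule is a guess- tighten it if free space gets constrained sooner.
0 3 * * * root for id in $(pct list | awk 'NR>1 {print $1}'); do pct fstrim "$id"; done
```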
I have a cluster with relatively heavy IO, and consequently free space on the Ceph storage is constantly constrained. I'm finding myself performing fstrim at a more and more frequent interval. Is there a way to auto-trim a disk for an LXC container?
Neat idea- sure.
Design as suggested- relatively simple.
Making it work in real life without connectivity loss on failover or data loss, with engineering capability for the MANY edge cases, and the ability to actually support a production environment- practically impossible. This as a DIY will cause you pain.
You don't actually need to be INSIDE the container to kill a process, since it's exposed to the host's root, but the better question is why in heaven's name you are running a process that needs SIGINT to stop... I can think of no use case where that's "the proper way." Do yourself a favor and daemonize it. Look up...
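To illustrate the first point, a rough sketch of signalling a container process from the host- the container ID (101) and process name (myapp) are placeholders, and this assumes the lxc-* tools plus a procps with namespace support:

```shell
# Container processes appear in the host's process table, so the host's
# root can signal them directly. 101 and "myapp" are made-up examples.
CT_PID=$(lxc-info -n 101 -p -H)           # container's init PID, raw output
pkill --ns "$CT_PID" --signal INT myapp   # SIGINT "myapp" inside that namespace
```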
Just out of curiosity, why are you replacing your 10G network with 100G? Considering your OSDs are HDDs, not only are you not going to get much benefit, but at that cost (NICs plus switches) you would get MUCH more bang for your buck replacing your HDDs with SSDs...
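As a back-of-envelope illustration (the OSD count and per-disk throughput below are assumed numbers, not taken from your setup):

```shell
# Even a dozen HDD OSDs at a generous 200 MB/s sequential each only
# aggregate to ~19 Gbit/s- nowhere near saturating a 100 Gbit link.
NUM_OSDS=12    # assumed OSDs per node
HDD_MBS=200    # optimistic sequential MB/s per HDD
echo $(( NUM_OSDS * HDD_MBS * 8 / 1000 ))   # prints 19 (Gbit/s)
```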
WRT question 1, adding the interfaces to the raw table does not fix the problem, as the filter table still processes the rules regardless, and there is no NOTRACK option for the filter table AFAICT. Other suggestions most welcome.
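For context, the raw-table attempt looked roughly like this- eth1 is a placeholder interface, and NOTRACK is only valid in the raw table, which is exactly the limitation described above:

```shell
# Skip connection tracking for a given interface via the raw table.
# This bypasses conntrack, but filter-table rules still apply to the packets.
iptables -t raw -A PREROUTING -i eth1 -j NOTRACK
iptables -t raw -A OUTPUT -o eth1 -j NOTRACK
```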
I've run Proxmox on Supermicro 16GB SATA DOM devices before without issue, BUT this was on relatively low-utilization boxes. If you intend to run anything production-quality, you'd want to move swap and /var to more robust devices.
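A minimal sketch of what that relocation can look like in /etc/fstab- /dev/sdb here is a stand-in for whatever more robust device you add, and you'd copy /var over from a rescue environment before switching the mount:

```shell
# Hypothetical /etc/fstab entries: swap and /var on a separate SSD.
/dev/sdb1  none  swap  sw        0  0
/dev/sdb2  /var  ext4  defaults  0  2
```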