Unfortunately that's not how the real world always works. Linux is made to use swap when it's actually needed. Adding more RAM won't be the correct way to solve the problem in all cases, nor is it always an option. One of the situations where I've experienced this was on a database server with the maximum amount of RAM possible at the time on a two-way Intel setup. Adding just 10 GB of swap to that system transformed it from totally useless back to blazing fast. Yes, this is a few years back, but still. Not having swap can still be completely wrong in a lot of cases.

If the Linux kernel elects to swap out some memory, it has determined that it has more use for that amount of RAM for other tasks than for keeping that data resident. Now, can swap have a negative effect? For sure, and zram might be better to use in some cases, either alone or tiered with some kind of flash. Using zram can reduce the latency significantly compared to some form of flash.
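If you want to see what the kernel actually elected to push out, a quick sketch like this works on any Linux box. It just reads the VmSwap field from /proc/<pid>/status (a standard procfs field); treat it as an illustration rather than a finished tool:

```python
#!/usr/bin/env python3
# Sketch: list which processes the kernel has pushed into swap, by
# reading the VmSwap field from /proc/<pid>/status. Illustrative only;
# error handling is intentionally minimal.
import os

def swapped_processes():
    results = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                fields = dict(line.split(":", 1) for line in f if ":" in line)
            swap_kb = int(fields.get("VmSwap", "0 kB").split()[0])
            if swap_kb > 0:
                results.append((swap_kb, pid, fields.get("Name", "?").strip()))
        except (FileNotFoundError, PermissionError, ValueError):
            continue  # process exited or isn't readable; skip it
    return sorted(results, reverse=True)

for swap_kb, pid, name in swapped_processes()[:15]:
    print(f"{swap_kb:>10} kB  pid {pid:<7} {name}")
```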
I've been working with Linux for 20+ years, and using swap for more than emergency situations has tangible performance and wear-level costs. I am in the real world, just like you, and it is my responsibility to deal with exactly these aspects of architecture. Consider for a moment what forum we're in. Do you really think I'd be here talking like this if I didn't work with these systems?
Adding more RAM actually is the right way to solve this, as it lowers the pressure that pushes data into swap in the first place. RAM is orders of magnitude faster than swap in both throughput and latency. Any software with a footprint large enough to use lots of RAM will notice the performance difference of regularly running in swap.
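To put rough numbers on "orders of magnitude", here's a back-of-the-envelope comparison. The latency figures are ballpark assumptions for illustration (your exact hardware will differ), but the ratios make the point:

```python
# Ballpark access latencies, assumed purely for illustration:
DRAM_NS     = 100          # typical DRAM access, ~100 ns
NVME_NS     = 100_000      # good NVMe 4K random read, ~100 us
SATA_SSD_NS = 500_000      # SATA SSD 4K random read, ~500 us
HDD_NS      = 10_000_000   # spinning disk seek, ~10 ms

for name, ns in [("NVMe", NVME_NS), ("SATA SSD", SATA_SSD_NS), ("HDD", HDD_NS)]:
    print(f"{name:>8}: roughly {ns / DRAM_NS:,.0f}x slower than DRAM per access")

# Prints roughly: NVMe ~1,000x, SATA SSD ~5,000x, HDD ~100,000x slower than
# DRAM per access. A page pulled back from swap also pays page-fault handling
# on top of the raw device latency.
```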
Furthermore, if you have many systems (in this case on a hypervisor, be it Proxmox or otherwise, and regardless of whether the guests run Windows, Linux or anything else) regularly running data in swap, the performance cost compounds across the whole environment. I've literally gone through deep-dive performance explorations of these impacts and seen very substantial gains from tuning systems so that they avoid putting anything into swap unless it's an emergency situation.
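As a sketch of the kind of check I script on hypervisor nodes (the threshold of 10 is just my own preference, not a universal rule):

```python
#!/usr/bin/env python3
# Sketch: flag hosts that are likely to let warm data drift into swap.
# Reads vm.swappiness and current swap usage from standard procfs files.

def read(path):
    with open(path) as f:
        return f.read()

swappiness = int(read("/proc/sys/vm/swappiness").strip())

meminfo = {}
for line in read("/proc/meminfo").splitlines():
    key, value = line.split(":", 1)
    meminfo[key] = int(value.split()[0])   # values are reported in kB

swap_used_mib = (meminfo["SwapTotal"] - meminfo["SwapFree"]) / 1024

print(f"vm.swappiness = {swappiness}")
print(f"swap in use   = {swap_used_mib:.1f} MiB")

if swappiness > 10:
    # To make the kernel strongly prefer reclaiming page cache over
    # swapping anonymous pages, persist something like:
    #     vm.swappiness = 10    (e.g. in /etc/sysctl.d/99-swap.conf)
    print("Consider lowering vm.swappiness so swap stays an emergency valve.")
```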
And I know that adding more RAM isn't always an option, but in the modern sense it is extremely achievable. Any server made in the last 10+ years can address very large amounts of RAM, and the cost of adding more RAM is low from an IT CapEx/TCO perspective. For large implementations the performance gains grossly outweigh the cost of adding more RAM, or, dare I say, of tuning the environment (tools/apps) to use less RAM if it is misconfigured.
I guarantee your example database would run faster with more RAM than with swap, and it is mathematically provable if you compare the actual performance of swap (as in, on disk) against RAM. There is no scenario where on-disk swap performs anywhere near how RAM performs, in any generation of RAM or physical disk, including top-end NVMe.
I _NEVER_ said do not have swap. Don't act like that's what I said, because I have not said that. I have, however, said that day to day you do not want data running in swap, as there are substantial performance impacts to that and, again, increased wear on storage devices (in ways that are completely avoidable). Swap, in my professional experience and opinion, should only ever be used as a last resort.
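The way I tell the difference between "a few cold pages parked in swap" (fine) and "data actively running in swap day to day" (the costly case) is swap-in/out activity, not the amount of swap in use. A minimal sketch using the kernel's cumulative counters in /proc/vmstat:

```python
#!/usr/bin/env python3
# Sketch: sample pswpin/pswpout from /proc/vmstat twice and report the
# swap-in/out rate. Sustained non-zero rates mean the workload is paying
# swap latency constantly; zeros mean swap is just parking cold pages.
import time

def swap_counters():
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

INTERVAL = 10  # seconds; pick whatever sampling window suits you
before = swap_counters()
time.sleep(INTERVAL)
after = swap_counters()

print(f"swap-in : {(after['pswpin'] - before['pswpin']) / INTERVAL:.1f} pages/s")
print(f"swap-out: {(after['pswpout'] - before['pswpout']) / INTERVAL:.1f} pages/s")
```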
The claim that swap has zero performance cost is bunk, and I have encountered evidence many times in my professional career that proves this.
Consider that a Dell R720, a server from the 2012/2014 era, can have 768 GB to 1.5 TB of RAM installed in it. And that's just one server example that's extremely affordable these days (I can pick up a Dell R720 for about $100 pretty often, before upgrading the RAM of course). Newer servers have even higher RAM ceilings.