zswap is great, but it is no substitute for RAM. If you are running a production environment (read: for money), just provision sufficient RAM. Yes, it's expensive, but the consistent performance is worth far more.
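If you want to check whether zswap is even in play on a given host, the knobs live in sysfs (paths as on current mainline kernels):

cat /sys/module/zswap/parameters/enabled     # Y or N
grep -H . /sys/module/zswap/parameters/*     # compressor, max_pool_percent, etc.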
What are you talking about? All our code is fully open source; we encourage and support upstream contributions that actually make things better for everyone by default, rather than some third-party modification hacks. So posting fixes to our official submission...
That is as succinct a description of the problem as you or I have posited so far.
You want a system in place provided by the devs, but when looking at what it would take, you correctly (if hyperbolically) estimate the work as requiring a "sun...
This is true, but it neglects that we're living in 2026.
git clone my_container        # pull the repo that holds your compose project (placeholder name)
cd my_container
docker compose up -d          # recreate the whole stack, detached
In context, the guest disks contain no relevant data and are pointless to back up. When you have multiple guests accessing the same...
In OP's case, many if not all of the containers are accessing the same data, which would be passed in as a mountpoint. It wouldn't really make sense to do that, since there's nothing to be gained by backing up the container itself.
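A minimal sketch of that pattern (image name and host path are placeholders, not from OP's setup):

# the container is disposable; the shared data lives on the host and is bind-mounted in
docker run -d --name media-app \
  -v /mnt/shared-data:/data \
  some/image:latest

Back up /mnt/shared-data (and the compose files), not the container.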
When you say "mainstream advice" it's good to note the background, reasoning, and authority of whoever you are quoting. In an enterprise environment with security policies, or on a dirty (shared) hypervisor, the advice certainly applies, but it's a...
Before answering the question (it's really just the one) I encourage you to really consider the implications of an all-docker workload. Docker images are immutable and live in a repo; the only things you MIGHT want to keep a copy of are the...
I don't really understand. What stops you from doing this? Not a single one of your asks is necessary for the operation of a PVE node or cluster, but a lot of what you ask for (and some you didn't) is included in my general-purpose post-install script...
You are asking a lot from a relatively modest host. It's doable, but you'll need to temper your "performance" expectations. On that subject-
That will multiply the "performance expectations issue" substantially. In addition, doing a passthrough...
Virtualization makes the most sense when you use it to break work down into small chunks. Imagine fitting Tetris pieces that are 4 squares each, and then along comes a piece that is 48 squares in size. If your use case is really that big, the only...
PBS storage isn't like NFS; if it's not present it will not hang your host. I would investigate your logs a bit more carefully to see what the actual culprit is. Perhaps you have other entries in /etc/fstab and/or /etc/pve/storage.cfg?
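A few quick places to look (the journalctl filter below assumes pvestatd, the daemon that polls storage status):

grep -v '^#' /etc/fstab                       # any NFS/CIFS mounts that could hang at boot?
cat /etc/pve/storage.cfg                      # storages PVE itself tries to activate
journalctl -b -u pvestatd --no-pager | tail   # recent storage-polling errors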
What @bbgeek17 suggested should have been the first thing Veeam support instructed you to do. That kind of experience with their support (and other things) has led me to abandon using Veeam with PVE.
I recommend that you figure out a curl-based way to upload a file to local storage with the same account that Veeam is using. Run it locally on PVE first; if that works, run it from the Veeam network segment. If that works, convert it to PS...
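A rough sketch of such a test using an API token (node name, storage, and token are placeholders; use whatever account Veeam is actually configured with):

# upload a small test file to the "local" storage on node "pve1"
curl -k \
  -H "Authorization: PVEAPIToken=veeam@pve!test=<secret>" \
  -F "content=iso" \
  -F "filename=@/tmp/upload-test.iso" \
  "https://<pve-host>:8006/api2/json/nodes/pve1/storage/local/upload"

If it succeeds locally but fails from the Veeam segment, you're looking at a network/firewall problem rather than a permissions one.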
vmbr0 and vmbr1 are taken from your existing configuration... if they don't work, you have bigger problems.
iface bond0.661 inet static
...etc
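For reference, a fleshed-out stanza of that shape could look like this (the address is a placeholder; VLAN 661 is taken from your existing config):

auto bond0.661
iface bond0.661 inet static
        address 192.0.2.10/24    # placeholder, use your real subnet
        # the ".661" suffix tells ifupdown2 to create VLAN 661 on top of bond0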
You keep using that term. I'm unfamiliar with such a topology. Is it Ethernet?
The physical layer bringing your interfaces to your hosts isn't the relevant factor; it's how you manage your logical networking configuration.
I assume the Ethernet interfaces you describe are 2x 25Gbit and 2x 1G, so you have 4 interfaces (the...
/etc/pve isn't a normal filesystem. It's a special cluster filesystem (pmxcfs) that is kept in a database and gets distributed and synchronized in real time. You can read about it here: https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)
/etc/pve...
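If you want to convince yourself it isn't a plain directory, a couple of quick checks (paths per the wiki page above):

findmnt /etc/pve                         # shows a fuse mount, not an on-disk filesystem
ls -l /var/lib/pve-cluster/config.db     # the sqlite database backing pmxcfs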