Your plan sounds good, you should do a backup anyhow. Don't you have any at the moment?
LnxBil's proposal will be faster than a complete restore though.
I would back up the VMs and LXCs together with /etc/pve. Then reinstall and restore the...
If the data is external as you described, the easiest method of "recreating" the VMs is to back up /etc/pve/qemu-server and/or /etc/pve/lxc and copy them back after the reinstall. Those are the config files. Best to also have a full backup of /etc/pve...
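Something like this would do it (the backup target path is just a placeholder; /etc/pve is the live cluster filesystem, so only copy the guest .conf files back once the fresh node is up):

```bash
# Before the reinstall: archive all of /etc/pve (config files only, no disk data).
# /mnt/backup is a placeholder for whatever external target you use.
tar czf /mnt/backup/pve-etc-$(date +%F).tar.gz /etc/pve

# After the reinstall: unpack somewhere neutral and copy only the guest configs back.
mkdir -p /tmp/pve-restore
tar xzf /mnt/backup/pve-etc-<date>.tar.gz -C /tmp/pve-restore
cp /tmp/pve-restore/etc/pve/qemu-server/*.conf /etc/pve/qemu-server/
cp /tmp/pve-restore/etc/pve/lxc/*.conf /etc/pve/lxc/
```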
No, no, no - do not do this.
I see that this is tempting, but a 12-disk-wide vdev is already considered "wide".
Additionally: two vdevs will double IOPS, which is always a good idea ;-)
Disclaimer: I've never set up a 24-wide vdev, not even...
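Just to illustrate the difference (pool name and disk names are made up; use /dev/disk/by-id/ paths in real life), two 12-disk raidz2 vdevs in one pool instead of a single 24-wide vdev would look like this:

```bash
# Illustration only: 24 disks as two 12-disk raidz2 vdevs in one pool.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
  raidz2 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx
```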
Read https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#_requirements
"
at least three cluster nodes (to get reliable quorum)
shared storage for VMs and containers
"
Then look at this table...
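If you want to verify what your cluster currently has, this is a quick check with the standard PVE tools (run on any node):

```bash
# Does the cluster have quorum, and how many nodes/votes are there?
pvecm status   # look for "Quorate: Yes" and the vote counts
pvecm nodes    # list of cluster members
```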
It's not supported to run OMV (or any NAS OS) inside an LXC:
https://forum.openmediavault.org/index.php?thread/51217-trying-to-access-additional-drives-via-proxmox-with-lxc/
Either run it in a VM with a dedicated SATA controller (see...
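For the VM variant, the rough idea is PCI passthrough of the SATA controller to the OMV guest. This assumes IOMMU is enabled and the controller sits in its own IOMMU group; the VM ID and PCI address below are placeholders:

```bash
# Find the SATA controller's PCI address, then hand it to the VM (here: VM 100).
lspci -nn | grep -i sata
qm set 100 -hostpci0 0000:00:17.0
```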
I would install Debian on your hardware and use it for file services (NFS/Samba/HTTP/S3/... whatever you want), with everything stored in some RAID'ed form (software like ZFS/mdadm, or a hardware controller) and with OS and data kept separate; afterwards upgrade it to PVE too...
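The "Debian first, PVE on top" path is roughly this; check the "Install Proxmox VE on Debian" wiki article for the exact repository line and key for your release (the lines below assume Bookworm / PVE 8):

```bash
# Assumes a plain Debian Bookworm install; verify repo/key names against the current wiki.
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi
```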
In my universe the KISS principle is still valid --> do not put too much complexity in one system, keep it as simple as possible instead. With Linux knowledge @waltar is right. If you want a preconfigured system then TrueNAS probably suits...
You may learn whatever you want, of course :-)
Just realize that ZFS does not destroy cheap solid state disks just for fun. It gives us a lot of things in return for that...
You can, you just won't be able to run any VMs. But just running LXC containers should work without any problems ;)
For your current issue I would however strongly recommend to:
Buy used enterprise SSDs with power-loss protection (PLP is...
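If you want to see how hard your current consumer SSDs are already being hit, smartctl shows the wear counters (attribute names vary by vendor; "Percentage Used" is the NVMe one):

```bash
# SATA SSD: look for a wear/life attribute in the vendor SMART table.
smartctl -a /dev/sda | grep -iE "wear|percent"
# NVMe SSD: the health log has a "Percentage Used" field.
smartctl -a /dev/nvme0 | grep -i "percentage used"
```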
Another one of those threads about messing with stuff that is totally useless to mess with. Haven't you asked this before? I answer this here and on reddit almost weekly: Just don't. It's not worth the hassle, just buy used enterprise SSDs and keep on doing stuff...
That's exactly what ZFS does.
And no, you should not expand that setting.
Nevertheless, for reference: https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/ZIO%20Scheduler.html -- "The zfs_txg_timeout tunable (default=5 seconds)...
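For completeness, the tunable is exposed as a module parameter on Linux; you can read it (and change it at runtime, which as said above is not recommended) like this:

```bash
# Current txg timeout in seconds (default 5).
cat /sys/module/zfs/parameters/zfs_txg_timeout
# Runtime change (again: not recommended to raise this).
echo 5 > /sys/module/zfs/parameters/zfs_txg_timeout
```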
Nothing I'm aware of. Inside a VM you could do something like that with a union FS. You could also use something like log2ram.
I wouldn't recommend it though, since the potential for data loss is a bit too high for my liking. You don't need...
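For the record, the union-FS idea inside a guest boils down to something like this overlay mount: writes to /var/log land on a tmpfs upper layer and are gone after a reboot, which is exactly the data-loss trade-off mentioned above (paths are placeholders):

```bash
# /run is tmpfs on most distros, so the upper/work dirs live in RAM.
mkdir -p /run/varlog-upper /run/varlog-work
mount -t overlay overlay \
  -o lowerdir=/var/log,upperdir=/run/varlog-upper,workdir=/run/varlog-work \
  /var/log
```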
There is no such thing. The only supported version on PVE 9 is Ceph Squid. If this is a cluster upgraded from PVE 8, you should have updated the repos and Ceph to Squid [1].
Your package list shows packages with version 19.2.*, not Quincy ones (17.2.*)...
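A quick way to check what the node actually runs and which repository it points at:

```bash
# Installed Ceph version (Squid is 19.2.*, Quincy would be 17.2.*).
ceph --version
dpkg -l | grep ceph-common
# Which Ceph repository is configured?
grep -ri ceph /etc/apt/sources.list.d/
```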
Correct @Johannes S , this supports both the legacy and new API. And to be fair, it looks like @boomshankerx was developing their plugin long before me. I've been writing down the concepts for a year or so, but didn't start development on it...
Off topic:
It's 2025 and most people trust Large Language Models for important decisions. At the same time, auto-correction of stupid typos is an unsolved problem.
Interesting times... :-(
Meanwhile there are several external scripts that help with this - including expiring old snapshots (by keeping a defined number of them, not by defining an expiration date). For example, search for "cv4pve-autosnap", which is my personal favorite...
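A hedged usage sketch for that tool (flags quoted from memory of its README, so check cv4pve-autosnap --help; host and API token are placeholders): snapshot all guests with the label "daily" and keep the last 7 of them.

```bash
cv4pve-autosnap --host=pve.example.local \
  --api-token='backup@pam!autosnap=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  --vmid=all snap --label=daily --keep=7
```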