Running OMV (or any NAS OS) inside an LXC container is not supported:
https://forum.openmediavault.org/index.php?thread/51217-trying-to-access-additional-drives-via-proxmox-with-lxc/
Either run it in a VM with a dedicated SATA controller ( see...
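A minimal sketch of what "a VM with a dedicated SATA controller" means in PVE terms (the PCI address and VMID below are placeholders; IOMMU must be enabled on the host for this to work):

```shell
# Find the SATA controller's PCI address first:
lspci -nn | grep -i sata
# Suppose it is 0000:02:00.0 and the OMV VM has VMID 101 (both are examples):
qm set 101 -hostpci0 0000:02:00.0
# The whole controller (and every disk attached to it) now belongs to the VM.
```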
I would install Debian on your hardware and use it for file services (NFS/Samba/HTTP/S3/... whatever you want), keeping everything in some kind of RAID (software like ZFS/mdadm, or a hardware controller) and even separating OS and data. Afterwards you can upgrade it to PVE too...
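As a rough sketch of that setup on plain Debian (pool, share and disk names below are invented examples): a ZFS mirror for the data, separate from the OS disk, exported via Samba.

```shell
apt install zfsutils-linux samba
# Mirror two dedicated data disks (use stable by-id paths; these are placeholders):
zpool create tank mirror /dev/disk/by-id/DISK-A /dev/disk/by-id/DISK-B
zfs create tank/share
# Minimal Samba export of that dataset:
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /tank/share
   read only = no
EOF
systemctl restart smbd
```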
In my universe the KISS principle is still valid --> do not put too much complexity into one system; keep it as simple as possible instead. With Linux knowledge, @waltar is right. If you want a preconfigured system, then TrueNAS probably suits...
You may learn whatever you want, of course :-)
Just realize that ZFS does not destroy cheap solid-state disks just for fun; it gives us a lot of things in return for that...
You can, you just won't be able to run any VMs. But just running LXC containers should work without any problems ;)
For your current issue I would however strongly recommend to:
Buy used enterprise ssds with power-loss-protection (PLP is...
Another one of those threads about messing with stuff that is totally pointless to mess with. Haven't you asked this before? I answer this here and on Reddit almost weekly: just don't. It's not worth the hassle; buy used enterprise SSDs and keep on doing stuff...
That's exactly what ZFS does.
And no, you should not expand that setting.
Nevertheless, for reference: https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/ZIO%20Scheduler.html -- "The zfs_txg_timeout tunable (default = 5 seconds)...
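For the curious, the tunable can at least be inspected as a ZFS module parameter (a sketch; as said above, leaving the default alone is the sensible choice):

```shell
# Read the current txg timeout (default is 5 seconds):
cat /sys/module/zfs/parameters/zfs_txg_timeout
# It can be changed at runtime, e.g.:
#   echo 5 > /sys/module/zfs/parameters/zfs_txg_timeout
# or persistently via a modprobe option -- but see the advice above: don't.
```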
Nothing I'm aware of. Inside a VM you could do something like that with a union FS; you could also use something like log2ram.
I wouldn't recommend it though, since the potential for data loss is a little too high for my taste. You don't need...
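To make the union-FS idea concrete, here is a hedged sketch using an overlay with a tmpfs upperdir (paths are examples; /run is tmpfs on most systemd systems):

```shell
# Log writes land in the RAM-backed upperdir; the on-disk /var/log
# stays untouched underneath as the read-only lower layer.
mkdir -p /run/log-upper /run/log-work
mount -t overlay overlay \
  -o lowerdir=/var/log,upperdir=/run/log-upper,workdir=/run/log-work \
  /var/log
# On a power loss everything in the upperdir is gone -- exactly the
# data-loss risk mentioned above.
```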
There is no such thing. The only supported version on PVE 9 is Ceph Squid. If this is a cluster upgraded from PVE 8, you should have updated the repos and Ceph to Squid [1].
Your package list shows packages at version 19.2.*, not Quincy ones (17.2.*)...
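You can verify both the installed version and the configured repo yourself (the repo file name may differ depending on how it was set up):

```shell
# What is actually running / installed:
ceph --version
dpkg -l | grep ceph-osd
# Which suite the apt repo points at (should be a "squid" repo on PVE 9):
cat /etc/apt/sources.list.d/ceph.list
```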
Correct, @Johannes S, this supports both the legacy and the new API. And to be fair, it looks like @boomshankerx was developing their plugin long before me. I've been writing down the concepts for a year or so, but didn't start development on it...
Off topic:
It's 2025, and most people trust large language models with important decisions. At the same time, autocorrection of simple typos remains an unsolved problem.
Interesting times... :-(
Meanwhile there are several external scripts helping to do so - including expiring old snapshots. (By keeping a defined number of them, not by defining an expiration date.) For example: search for "cv4pve-autosnap" which is my personal favorite...
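Such a keep-the-newest-N policy is easy to picture. Here is a toy shell sketch of just the pruning logic (snapshot names are invented; a real tool like the one mentioned above acts on actual VM snapshots via the PVE API):

```shell
# Keep the newest $keep snapshots, delete the rest (oldest first in the list).
keep=3
snapshots="auto-2024-01-01 auto-2024-01-02 auto-2024-01-03 auto-2024-01-04 auto-2024-01-05"
# GNU head: "-n -3" prints all lines except the last three.
to_delete=$(printf '%s\n' $snapshots | head -n -"$keep")
echo "would delete:"
echo "$to_delete"
```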
osd.1 appears to be unreachable. In case this is a networking issue: since you are paranoid about posting your actual IP addresses (or at least their respective subnets), I can't really help you.
That said, check the host of osd.1 to see...
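A few standard checks on the node hosting osd.1 (the hostname in the ping is a placeholder):

```shell
ceph osd tree                  # is osd.1 marked down/out from the cluster's view?
systemctl status ceph-osd@1    # is the OSD daemon even running on its host?
ss -tlnp | grep ceph-osd       # is it listening on the expected (cluster) network?
ping other-osd-host            # basic reachability between OSD hosts (placeholder name)
```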
Same issue on fresh 9.0.10.
It's pretty annoying - it was seamless and non-disruptive - now it's not.
My testbed was accidentally touching the hypervisor's power button in the DC and triggering a reboot of tens of VMs ... lol
Well, two is exceptionally(!) bad, because then there is no voting with a majority concept. Neither node may fail --> the risk of a failure is more than twice as high as with a single one!
One is bad because... if it fails you have a problem -->...
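The voting arithmetic behind this can be sketched in shell (a generic majority-quorum calculation, not PVE-specific code): with n votes you need floor(n/2)+1 for a majority, and the rest may fail.

```shell
# How many node failures a majority quorum of n nodes tolerates:
tolerated() { n=$1; echo $(( n - (n / 2 + 1) )); }
for n in 1 2 3 4 5; do
  echo "nodes=$n may lose $(tolerated $n)"
done
# Note: 2 nodes tolerate 0 failures, just like 1 node --
# but with twice the hardware that can fail.
```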