That's the worst design possible. Besides the too-cheap devices...
Look at one node: when one OSD fails, the other OSD on the same node has to take over the data from the dead one. It cannot be rebuilt onto another node because those nodes already hold copies...
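That behavior follows from the default replicated CRUSH rule, which places one replica per host. A minimal sketch of such a rule (essentially the stock replicated_rule; the id and bucket names may differ in your map):

    rule replicated_rule {
        id 0
        type replicated
        step take default
        # pick one OSD on each of N distinct hosts
        step chooseleaf firstn 0 type host
        step emit
    }

With size=3 on a three-node cluster, every host already holds one copy of each PG, so a dead OSD can only recover onto its node-mate.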
Yes, they cost more and will get really expensive in the coming months, but second-hand enterprise SATA/SAS drives are easy to find and not that costly. In the long run they end up being cheaper, as they don't degrade as fast as consumer ones, so you won't need to...
Don't want to start an argument here, but whoever told you that has little idea of what a PVE Ceph mesh cluster is. Linux kernel routing may use something like 0.1% CPU, and FRR may use around 3% CPU for a few seconds while converging or during node boot. If...
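For reference, the typical PVE full-mesh setup just runs FRR with OpenFabric on the two direct links between nodes, along the lines of this sketch of /etc/frr/frr.conf (interface names, loopback IP and NET are placeholders for your own values):

    # /etc/frr/frr.conf -- minimal OpenFabric mesh sketch; en05/en06,
    # the loopback address and the NET are placeholders
    frr defaults traditional
    interface lo
     ip address 10.10.10.1/32
     ip router openfabric 1
     openfabric passive
    !
    interface en05
     ip router openfabric 1
    !
    interface en06
     ip router openfabric 1
    !
    router openfabric 1
     net 49.0001.1111.1111.1111.00
    !

Each node gets its own loopback /32 and NET; once routes converge, the daemons sit essentially idle.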
Added a comment to that bugzilla post... hopefully someone from Proxmox can chime in on this issue. I'm still unsure whether this is an actual bug, something I'm doing wrong in the filter config... or something else entirely.
When I need to, I use an AD backend and filter by groups that I create specifically to manage PVE privileges. I've never needed nested groups, as the environments where I've used this were not big enough to justify nesting groups or not creating...
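As a sketch of what that looks like in /etc/pve/domains.cfg (realm name, server and DNs are placeholders; check the realm sync docs for the exact option set on your version):

    # /etc/pve/domains.cfg -- AD realm synced against a dedicated PVE group
    # (realm name, server and all DNs below are placeholders)
    ad: example
        domain example.com
        server1 dc1.example.com
        base_dn DC=example,DC=com
        bind_dn pve-sync@example.com
        filter (memberOf=CN=pve-admins,OU=groups,DC=example,DC=com)
        sync-defaults-options enable-new=1,scope=both

A plain memberOf filter like that only matches direct members, which is exactly why flat, purpose-built groups keep things simple; resolving nested AD membership would need the matching-rule-in-chain form (memberOf:1.2.840.113556.1.4.1941:=...) instead.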
For once, AI is right :)
Any consumer drive will show low Ceph performance due to RocksDB and its sync writes, but those drives in particular are terrible for anything but PC archiving purposes due to their small SLC cache and very slow QLC NAND...
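You can see this for yourself with a quick sync-write test, which is roughly the I/O pattern Ceph's RocksDB WAL generates (the device path is a placeholder, and the test is destructive, so only point it at an empty drive):

    # 4k single-threaded sync writes at queue depth 1 -- the workload that
    # separates SSDs with power-loss protection from consumer drives.
    # WARNING: destructive; /dev/sdX is a placeholder for an empty test drive.
    fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based

Enterprise SSDs with power-loss protection typically sustain tens of thousands of these IOPS; QLC consumer drives often collapse to a few hundred once the SLC cache fills.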
This lab is using PVE 9.1.4, no subscription (although I don't think there's anything different in the Enterprise repo in this regard). Using nested PVE to test Ceph configs, etc. When installing the nested PVE from the ISO, I chose to pin the network...
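For anyone following along: pinning ultimately comes down to matching the NIC by MAC in a systemd .link file, something along these lines (file name, MAC and interface name are placeholders; the installer may name things differently):

    # /etc/systemd/network/50-pve-nic0.link -- pin the interface name to the
    # MAC so it survives kernel/driver changes (MAC and name are placeholders)
    [Match]
    MACAddress=bc:24:11:aa:bb:cc

    [Link]
    Name=nic0

After editing .link files you may need to refresh the initramfs (update-initramfs -u) and reboot so udev applies the rename early in boot.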
Hi,
it is a racy bug that is fixed in qemu-server >= 9.1.3 with:
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=b82c2578e7a452dd5119ca31b8847f75f22fe842...
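To check whether a node already carries the fix, something like:

    # installed qemu-server version; anything >= 9.1.3 includes the commit above
    pveversion -v | grep qemu-server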