This implies that the operator has the skill, the experience, and the wherewithal (not having a dozen other responsibilities) to understand the docs and apply them. Ceph is not attractive at the low end precisely because it requires engineer...
If by redundancy you mean disk fault tolerance: the higher the number after "raidz", the higher the fault tolerance. In practice, use raidz2 or better (never use single-parity raidz unless you're prepared to lose the pool at any time).
For performance: striped mirrors...
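As a sketch of the two layouts above (pool and device names are placeholders; adjust to your own disks):

```shell
# raidz2: any two disks in the vdev can fail; capacity-oriented
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# striped mirrors: best IOPS/latency; each mirror pair tolerates one disk loss
zpool create fast mirror sda sdb mirror sdc sdd mirror sde sdf
```

Note the trade: with six disks, raidz2 yields roughly four disks of usable space, while striped mirrors yield three but with much better random I/O.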
Good plan. I hope you understand that this experiment will not ever be worth any money you put into it. You already have a working solution, and if the "politics" of the money prevent you from putting together a sane configuration you're just...
@RodolfoRibeiro If you want more direct assistance, post the output of (from both hosts):
lsblk
multipath -ll -v2
If you have system logs available from the point in time when your VM became corrupted, it would be good to look at what happened.
Hi @RodolfoRibeiro, thank you for clarifying. This matches my initial understanding of your situation. SAS and iSCSI are transport and connectivity protocols. While the article I suggested uses iSCSI as an example, once you are beyond basic...
For the generations of hardware where iSCSI and SAS were offered as available SKUs there was no meaningful performance difference; 16G FC simply had more headroom to fill cache. When 25Gb iSCSI products started shipping, THOSE were faster (even...
Although they were about replicated pools (so no EC), the following reads might serve as a hint as to why (outside of experiments/lab setups) it's not a good idea to go against the recommendations...
K=6,M=2 results in 6 data chunks per 8 total. 6/8 = 0.75
With 3x replication you have 1 data chunk per 3 total. 1/3 ≈ 0.33
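The space-efficiency arithmetic above is just k/(k+m) for EC and 1/copies for replication, which you can sanity-check in one line:

```shell
# usable fraction for EC k=6,m=2: data chunks / total chunks
awk 'BEGIN { printf "EC 6+2: %.2f\n", 6/(6+2) }'   # 0.75
# usable fraction for 3x replication: 1 data copy per 3 stored
awk 'BEGIN { printf "3x rep: %.2f\n", 1/3 }'       # 0.33
```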
It's not exactly the "same" availability, because survivability in a replication group is much higher; you need one living OSD per...
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
Suit yourself; this is not a recommended deployment. You are far better served by just having two SEPARATE VMs, each serving all those functions, without any Ceph at all...
The number of OSDs isn't relevant to a pool as long as it is larger than the minimum required by the CRUSH rule. For example, if you have an EC profile of K=8,M=2, you need a minimum of 10 OSDs DISTRIBUTED ACROSS 10 NODES. So 1 OSD per node...
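For reference, such a profile and pool would look roughly like the following (profile and pool names are placeholders, and `crush-failure-domain=host` is what enforces the one-chunk-per-node distribution described above):

```shell
# define an 8+2 erasure-code profile that spreads chunks across hosts
ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-failure-domain=host

# create a pool using that profile; needs at least 10 hosts with OSDs
ceph osd pool create mypool erasure ec-8-2
```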
from understanding failure domains. Damn, @UdoB beat me to the punch. I won't "professor" you on this. You can either read and understand, or deploy your preconceived notions and learn on your flesh and blood. I would also note that if your...
Ok, let's touch on this. From my perspective, there are two types of storage (there are more, but these are the ones in scope). There is payload storage (think OS and application) and bulk storage. Bulk storage can most efficiently be served by a single device such...
Caching occurs in multiple layers of presentation. By the time a virtual disk is presented to a guest, the multiple caching layers can conflict and actually SLOW the guest storage performance. see...
You don't need PCI passthrough for LXC; you just need to install the proper NVIDIA driver based on the hardware and kernel deployed. You are better off creating an installation script, especially if you intend on having multiple nodes with GPUs...
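A minimal sketch of the usual approach (the `.run` filename is a placeholder; driver versions on host and container must match, and the device bind-mounts in the container config vary by setup):

```shell
# On the PVE host: install the full driver matching your kernel
./NVIDIA-Linux-x86_64-<version>.run

# Inside the LXC container: install the SAME driver version,
# userland only, since the kernel module lives on the host
./NVIDIA-Linux-x86_64-<version>.run --no-kernel-modules
```

The container additionally needs the `/dev/nvidia*` device nodes bind-mounted in via its config in `/etc/pve/lxc/`; putting all of this in a script keeps multiple GPU nodes consistent.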
I think you need to carefully consider what your end goal is. PCIe passthrough is not a good citizen in a PVE cluster, since VMs with pinned PCIe devices not only cannot migrate anywhere, but are also liable to hang the host. If you MUST use PCIe passthrough...
In the many years I've been using PVE, I haven't had much call for using Windows guests, and when I did it was usually Windows 2016 (and older) with reasonably good results. In the last few weeks, I had need of a Windows guest for a...
In a cluster you don't need or even want to back up a host. Everything important lives in /etc/pve, which exists on all nodes. If you DID back up a host (or hosts), you'd open the possibility of restoring a node that has been removed from the cluster and...