@alexskysilk
You're right to call that out. "Very small environments" was a poor choice of words on my part. The distinction I was trying to make isn't about node count, it's about business value. We have customers running three node clusters...
Hi @nvanaert ,
There's quite a bit to unpack here.
First and foremost, Proxmox VE does not monitor storage or network path health. An All Paths Down (APD) condition will not result in a node fence, nor does it interact with the VM HA subsystem...
This implies that the operator has the skill, the experience, and the wherewithal (not having a dozen other responsibilities) to understand the docs and apply them. Ceph is not attractive at the low end precisely because it requires engineer...
If by redundancy you mean disk fault tolerance, the higher the number after "raidz", the higher the fault tolerance. In practice, use raidz2 or higher (never use single-parity raidz1 unless you are prepared to lose the pool at any time).
performance = striped mirrors...
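To make the tradeoff concrete, here's a rough sketch of usable capacity versus fault tolerance for the layouts mentioned above. The 8-disk pool size is an assumption for illustration, not from this thread, and real ZFS usable space will be lower once padding, ashift, and reservation overhead are accounted for.

```python
# Rough capacity/fault-tolerance comparison for an assumed 8-disk pool.
# Illustrative arithmetic only; actual usable space depends on ashift,
# record size padding, and reservations.

def raidz_usable(disks: int, parity: int) -> float:
    """Fraction of raw capacity usable in a single raidz vdev."""
    return (disks - parity) / disks

def striped_mirrors_usable(mirror_width: int = 2) -> float:
    """Fraction of raw capacity usable in striped 2-way mirrors."""
    return 1 / mirror_width

disks = 8  # assumed pool size
for parity in (1, 2, 3):
    print(f"raidz{parity}: {raidz_usable(disks, parity):.1%} usable, "
          f"survives any {parity} concurrent disk failure(s)")
print(f"striped mirrors: {striped_mirrors_usable():.1%} usable, "
      "survives 1 failure per mirror pair")
```

The capacity cost of striped mirrors is why they only make sense when performance matters more than usable space.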
Good plan. I hope you understand that this experiment will not ever be worth any money you put into it. You already have a working solution- and if the "politics" of the money prevent you from putting together a sane configuration you're just...
@RodolfoRibeiro If you want more direct assistance, post the content of (from both hosts)
lsblk
multipath -ll -v2
If you have system logs available from the point in time when your VM became corrupted, it would be good to look at what happened.
Hi @RodolfoRibeiro , thank you for clarifying. This matches my initial understanding of your situation. SAS and iSCSI are transfer and connectivity protocols. While the article I suggested uses iSCSI as an example, once you are beyond basic...
For the generations of hardware where iSCSI and SAS were offered as available SKUs there was no meaningful performance difference- 16G FC simply had more headroom to fill cache. When 25Gb iSCSI products started shipping, THOSE were faster (even...
Although they were about replicated pools (so no EC), the following reads might serve as a hint why (outside of experiments/lab setups) it's not a good idea to go against the recommendations...
K=6,M=2 results in 6 data strips per 8 total. 6/8 = 0.75
In replication you have 1 data strip per 3 total. 1/3 = 0.33
It's not exactly the "same" availability because survivability in a replication group is much higher; you need one living osd per...
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
Suit yourself. This is not a recommended deployment. You are far better served by just having two SEPARATE VMs each serving all those functions without any ceph at all-...