U = FOS
Lots of us here are in tech-support-related positions, so forum support can start to feel like "more work" after a while.
A) Watch proxmox-related youtube videos
B) Read the last 30 days of forum posts, here and on Reddit (free education)...
Just to put things in perspective, I have nodes running on consumer-level OS SSDs for OVER 10 YEARS, and that's without doing anything to keep logs off the local disk. As long as you're not commingling payload and OS, even crappy old drives don't get enough writes for it to...
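To put a rough number on that, here's a back-of-envelope sketch. Both figures below are assumptions for illustration, not measurements- plug in your drive's actual TBW rating and your measured daily write volume.

```python
# Back-of-envelope: how long a consumer SSD's rated endurance lasts when
# it only holds the OS and logs. Both numbers below are assumed values.

TBW_RATING_TB = 150        # assumed endurance rating of a small consumer SSD
DAILY_OS_WRITES_GB = 10    # assumed OS + log writes per day on a node

years = (TBW_RATING_TB * 1024) / (DAILY_OS_WRITES_GB * 365)
print(f"Rated endurance lasts ~{years:.0f} years at {DAILY_OS_WRITES_GB} GB/day")
```

Even at several times that write rate, the drive ages out long before the NAND wears out- which is the point: OS-only duty is not what kills SSDs.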
You will only lose the affected PGs and their objects. This will lead to corrupted files (if the data pool is affected) or a corrupted filesystem (if the metadata pool is affected). Depending on which directory is corrupted you may not be able...
Also of note: you'd have to lose the 2 OSDs at the same time… after one drops, Ceph will immediately copy those PGs to other OSDs (on the same node, if you only have 3 nodes).
This also means you need the capacity to handle that.
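A rough sketch of what that headroom means in practice. The node count, OSD size, and fill level below are illustrative assumptions; the point is that on a 3-node size=3 cluster, a failed OSD's data must be re-replicated onto the surviving OSDs of the same node.

```python
# Headroom check: after one OSD fails, its data gets re-replicated onto
# the remaining OSDs on that node. All figures are assumed examples.

osd_size_tb = 4
osds_per_node = 3
used_fraction = 0.60       # assumed current fill level of the node

node_raw = osd_size_tb * osds_per_node
used = node_raw * used_fraction
# After losing one OSD, the node's data must fit on the remaining two:
survivor_raw = osd_size_tb * (osds_per_node - 1)
new_fill = used / survivor_raw
print(f"Fill level after recovery: {new_fill:.0%}")
```

At 60% full, losing one of three OSDs pushes the survivors to 90%- uncomfortably close to full. That's the capacity you need to plan for.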
No. If you lose three disks on three separate nodes AT THE SAME TIME, the pool will become read-only and you'll lose all payload that had a placement group with shards on ALL THREE of those OSDs.
BUT here's the thing- the odds of that happening...
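For illustration, a toy estimate of those odds. The failure rate and recovery window below are made-up assumptions, and this is a crude union-bound sketch rather than a rigorous model of PG placement- but it shows the order of magnitude.

```python
# Toy estimate: three disks on three separate nodes all failing within
# the same recovery window. AFR and window length are assumed values.

AFR = 0.02                 # assumed annual failure rate per disk (~2%)
recovery_hours = 8         # assumed window during which overlap matters

p_one = AFR * recovery_hours / (365 * 24)  # one disk dying in the window
osds_per_node = 3
nodes = 3
# Crude upper bound: any of the 3 OSDs per node could be the unlucky one:
p_triple = (osds_per_node * p_one) ** nodes
print(f"P(3 concurrent failures) ≈ {p_triple:.2e}")
```

Call it one in trillions per window- not zero, but far below the risk of fat-fingering a pool delete.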
Might not be an obvious question, but why? Your OS needs are pretty meagre, and disk performance will have little (if any) impact on your VMs. The only real consumer of IOPS is the logs, and if you are really concerned with write endurance...
Generally speaking you won't get much benefit from more than two host connections to a node (one per controller), but it is conceivable you could consume more than 25Gbit on a single host, in which case you will want to ensure that...
Hi @daus2936, your question was:
The title of the documentation page provided by @alexskysilk is:
Set up the multipath.conf file in E-Series - Linux (iSCSI)
It is very succinct and states:
No changes to /etc/multipath.conf are required.
It...
That's an interesting take. For someone who derides others for being fanboys, that statement shows an astounding lack of self-awareness.
Ceph is a scale-out filesystem with multiple API ingress points. ZFS is a traditional filesystem and not...
Well..., a RAID10 (with two vdevs) will give you double the IOPS of a single RAIDZ vdev.
Whether this is relevant depends on your use case - and on the specific test you run to check the behavior.
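The rule of thumb behind that: random IOPS in ZFS scale roughly with the number of vdevs, and a RAIDZ vdev delivers about one disk's worth of random IOPS. A sketch, where the per-disk figure is an assumed value for a spinning disk:

```python
# Rule of thumb: random IOPS scale with vdev count; a RAIDZ vdev
# performs roughly like a single disk. Per-disk IOPS is assumed.

disk_iops = 150            # assumed random IOPS of one 7.2k HDD

raid10_vdevs = 2           # two mirror vdevs (4 disks)
raidz_vdevs = 1            # one RAIDZ vdev (4 disks)

print("RAID10:", raid10_vdevs * disk_iops, "random IOPS")  # 300
print("RAIDZ :", raidz_vdevs * disk_iops, "random IOPS")   # 150
```

Same four disks, double the random IOPS- but sequential throughput and usable capacity tell a different story, which is why the "specific test you run" matters.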
This is a big pet peeve for me. You don't LOSE anything; you write things multiple times so you can lose a disk and continue functioning. It is irrational to think you get to use 100% of the available disk AND handle its failure.
All fault...
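A quick sketch of what that redundancy costs in capacity for a few common layouts. The raw size is an arbitrary example; the fractions are the standard overheads of each scheme.

```python
# Every fault-tolerance scheme pays for redundancy in capacity.
# Usable fraction of raw space for a few common layouts.

raw_tb = 12  # assumed example: 4 x 3 TB of raw disk

layouts = {
    "mirror / RAID10":  1 / 2,   # every block written twice
    "Ceph replica 3":   1 / 3,   # every object written three times
    "RAIDZ1 (4 disks)": 3 / 4,   # one parity disk out of four
}
for name, frac in layouts.items():
    print(f"{name:18s} {raw_tb * frac:5.1f} TB usable of {raw_tb} TB raw")
```

You didn't "lose" the rest- you spent it on being able to keep running through a failure.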
Based on your original criteria- why bother clustering anything at all? Since it appears all you're really after is a single pane of glass- leave them all as standalone servers and use PDM for the control plane.
Clustering makes sense when you...
That depends on how you dice the data. If a "pve admin" is just the infrastructure admin, storage is provided by the storage team. If it's a home user, I'm not sure that what they recognize is of particular importance.
Not from my viewpoint- these...
Based on that requirement, it seems like option 2 is the only rational solution- or, BTW, there are other ways to get fault-tolerant storage- you can buy it. A Dell ME50xx or HP MSA26xx would do the trick nicely.
Penny wise, pound foolish...
This is only true if the underlying storage is also thick provisioned.
You are conflating hardware snapshots with all snapshots. It's true that PVE does not provide any built-in orchestration tools for hardware snapshotting- but the option to make your...
Any storage solution has a sweet spot, but that spot is completely dependent on its use. Ceph scales well with the number of initiators, which in the hypervisor use case translates to the number of VMs. If you have 3 VMs, you can scale to 100 nodes...
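A toy model of that scaling. Both throughput figures below are assumptions, purely to show the shape of the curve: a handful of VMs can't extract what a big cluster can deliver, because each initiator is limited by its own queue depth.

```python
# Toy model: aggregate Ceph performance depends on how many initiators
# (VMs) are driving it. Both figures below are assumed values.

per_vm_iops = 2000         # assumed IOPS one VM's queue depth can extract
cluster_iops = 100_000     # assumed aggregate capability of the cluster

for vms in (3, 50):
    achieved = min(vms * per_vm_iops, cluster_iops)
    print(f"{vms:3d} VMs -> {achieved} IOPS ({achieved / cluster_iops:.0%} of cluster)")
```

With 3 VMs you're using a few percent of what the cluster could do- adding nodes at that point buys you durability and capacity, not speed.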
Oh, for sure. But I think you're concentrating on the wrong thing.
Why? What's wrong with what you have now? "As possible" is, forgive my bluntness, a stupid metric. If I were you, I'd start by asking the question "what are the goals, and...
Why is LVM-thin required? Your SAN is likely thin-provisioned anyway, so thin provisioning on top of that serves no actual benefit. The problem was that there was no snapshot support, not the LVM-thin part.
Your SAN either supports dedup or it...