"kernel" being "out" doesnt mean anything directly to a distribution with a support policy. Instead, the maintainers of your linux distribution (in this case, ubuntu) will backport important changes that are deployed to downstream kernels...
I would retort that calling a solution shady because it's imperfect is an irrelevant argument to begin with. The difference between "first tier" and not ISN'T that they are perfect; it's that they have the engineering capacity and support staff...
There are vendors, and there are vendors. NetApp is first tier; the fact that my wife's nephew put together a NAS using gum and baling wire doesn't make him of the same caliber. As for trusting your data... on-prem storage exists precisely so you...
> Battery backup of a single cache still makes it a SPOF
No, it doesn't.
In your scenario, another prior, unaddressed failure is required before NVRAM (and/or other components) becomes a potential single point of failure.
It doesn't make it a...
Not an endorsement since I haven't used it myself, but Starwind StarLVM (the substrate of Starwind VSAN) looks like it would do what you ask.
https://www.starwindsoftware.com/starwind-virtual-san
That explains your observed performance.
LACP is your first choice. If that's not possible, use active-backup and MAKE SURE the switches have plenty of bandwidth interconnecting them. balance-xor sounds good on paper but not in practice.
set...
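For reference, a minimal LACP bond in /etc/network/interfaces on a PVE node might look like the sketch below. The interface names, addresses, and bridge are assumptions; adjust them to your hardware, and the switch ports must be configured for 802.3ad as well.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

If you fall back to active-backup instead, only bond-mode changes; the rest of the stanza stays the same.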
Sure: https://www.proxmox.com/en/services/support-services/support
I don't see any issues. Boot storage could pose some specific challenges depending on the HBA model, but they're solvable.
See https://pve.proxmox.com/wiki/Storage. It shouldn't pose any issue...
A network interface MTU mismatch would decimate perceived performance, but there are other possibilities. While I'm not volunteering to check for you, you might want to run

```
ceph config dump
ceph config show osd.x --show-with-defaults
```

and go over it...
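To rule out an MTU mismatch specifically, a quick sanity check is to ping across each ceph network with the don't-fragment flag set and the largest payload your MTU should carry. The 9000 MTU below is an assumption, and `<peer-ip>` is a placeholder; substitute your configured MTU and a real peer address.

```shell
# Assumed jumbo-frame MTU of 9000; ICMP payload = MTU - 20 (IP header) - 8 (ICMP header)
MTU=9000
PAYLOAD=$((MTU - 28))
echo "ping -M do -s ${PAYLOAD} -c 3 <peer-ip>"
```

Run the printed command against each peer on both the public and private networks; if it fails while a smaller payload succeeds, something in the path is dropping jumbo frames.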
This doesn't result in any meaningful benefit vs. just having the same address for public and private traffic. OP, if you have multiple switches, I would create LAGs for public and private traffic, and make sure to cross physical NICs (presuming...
Looking at the whitepaper, the author did much of the heavy lifting already; there's enough foundation for you to write the plugin. Having said that, making a supportable solution is still not a trivial task.
Read the link @bbgeek17 referenced. When you're done, you should realize that the problem you will run into isn't just how many NODES are in the cluster, but also how many virtual resources there are. PVE's solution for cluster metadata...
Running software at home and in production are two completely separate skill sets, mindsets, and realms of responsibility. As others have pointed out, you opted to install an optional kernel and got bit. It happens.
If you did that on a...
7.0 is a test kernel, not an enterprise kernel atm, so if you are okay with breakages, you use this kernel. If not, you usually pay for an enterprise subscription.
Why would you be angry at a company for giving you a test repo?
I forgot to ask for df output, so we don't know what /mnt/pve/pVE-ISO points to.
It looks like you're only using one of your LUNs for virtual disk use; I only see two volume groups, so it's a wonder where it is assigned. Do NOT assign it to PVE-DS01, as it is a shared...
Just be sure you do NOT mix other traffic in with these, most especially corosync. If you have more than 4 interfaces, keep the other forms of traffic on different interfaces. If you don't, consider using only two interfaces for ceph and two...
Rather than quoting, I'll try to address all possible alternatives.
ceph carries traffic on two separate networks: public (host) and private (OSD-to-OSD). Think of these as the host bus and disk bus on a RAID subsystem.
While you can have both...
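As a sketch, the split is declared in ceph.conf; the subnets below are placeholders for whatever you carve out on your own network:

```
[global]
    public_network  = 10.10.10.0/24   # host/client-facing traffic (MONs, clients)
    cluster_network = 10.10.20.0/24   # OSD-to-OSD replication and recovery
```

If cluster_network is omitted, OSD replication simply rides the public network, which is the "same address for both" case discussed above.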
Up to you how you manage your models. In my experience, new models are released every week, and I don't bother keeping the old ones.
Feel free to keep your hoarded models on the zpool. It's not like it's getting any use ;)
So yes :)
L2ARC almost never yields useful results. You're better off just using the drive separately.
More to the point: what is your use case? In a homelab, it's common that your bulk storage can be slow without any real impact. Put your...
IBM Storage arrays can be one of many different solutions/topologies. It would be useful to mention which model/topology you're referring to. Having said that, it's most likely a block device, so yes, LVM would still be necessary if you're...