What NICs are in the system, and what drivers are you using?
lspci | grep -i 'eth\|net'
Identifying drivers is a bit trickier: use lsmod to identify the loaded module, then modinfo to get driver details.
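If lsmod is too noisy, a shortcut (the interface name is just a placeholder) is:

lspci -k | grep -iA3 'ethernet\|network'   # shows each NIC with its "Kernel driver in use"
ethtool -i eno1                            # driver and firmware version for one interface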
That's exactly right. You can add as many MDSs to the cluster as you like, but their function would be dictated by your policies.
To tell the filesystem what you want to do, set your max_mds variable per FS, like so:
ceph fs set [ceph fs name] max_mds [n]
where [ceph fs name] is the fs and [n] is the...
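As a concrete sketch, assuming a filesystem named cephfs and two active MDS daemons:

ceph fs set cephfs max_mds 2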
Doing this is difficult, both in terms of software orchestration and hardware requirements. If you need this level of uptime, consider deploying your software in a truly multiheaded fashion instead of a monolithic one. Kubernetes is one way to get there.
It's not.
Because there are no current x86 hardware interconnects to emulate for >4 sockets.
Let's make it really simple: under normal circumstances there is almost nothing to be gained by selecting more than one socket for a VM. This option is mostly for software that is licensed on a socket...
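In Proxmox terms that usually means leaving sockets at 1 and scaling cores instead, e.g. (VM ID 100 is just a placeholder):

qm set 100 --sockets 1 --cores 8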
That's not workable; you need more OSDs. I hope my explanation helped you understand.
No file system likes being full, and Ceph REALLY doesn't like it. If you intend to fill up your file system, have more spare room.
A 10% variance in misallocation isn't a big deal, especially with such a small number of OSDs. Like I said, you're worried about the wrong thing. I have nodes with 36 OSDs with roughly the same variance from the least-to-most utilized, and I don't worry about it as long as they're all less than 70%...
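If you want to keep an eye on it, per-OSD utilization and PG counts are visible with the standard command:

ceph osd df tree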
Docker = infrastructure as code.
Proxmox = type 1(ish) hypervisor.
Apples and oranges, which is to say different targeted use cases. You CAN deploy Docker workloads to a PVE environment (at the host/VM/LXC level), but to make full use of Docker you really want the whole enchilada (git...
You have a bunch of different sized OSDs, with one of your hosts grossly undersized. What exactly are you expecting? The docs ALSO say you should have equal sized nodes.
Having some variance in PGs between OSDs is normal, since not all the data you're writing is equal sized. I'd be far more...
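If the spread really bothers you, the built-in balancer will usually even out PG placement over time (standard Ceph commands):

ceph balancer mode upmap
ceph balancer on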
Absolutely FANTASTIC tip. I went to look through some of my OSDs and found a BUNCH with WC turned off ;)
While not PLP related, pro tip: TURN ON YOUR OSD WRITE CACHE.
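One way to check and flip it on SATA drives (a rough sketch; /dev/sdX is a placeholder, and the setting may not persist across reboots on every drive):

hdparm -W /dev/sdX     # show the current write-cache setting
hdparm -W1 /dev/sdX    # enable the drive's volatile write cache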
Egg on my face; I didn't realize this isn't trivial with a PVE 8 system. I tried looking at journalctl on a PVE 8 system for disk events and couldn't find any. Perhaps someone from the Proxmox team can help?
In the meantime, it might be good to install rsyslog and reboot (apt install rsyslog).
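For what it's worth, a rough way to fish for disk-related kernel messages in the journal (the grep pattern is just a guess at what's relevant):

journalctl -k | grep -iE 'ata[0-9]|sd[a-z]|nvme|error'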
What you're describing appears consistent with firmware/timing issues.
The first order of business is defining the worst acceptable outcome: do you need the data?
If so, install your drives on a generic SATA HBA and see if the pool will import. If it does, proceed to check/update the firmware...
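Assuming this is a ZFS pool, the import check is roughly (the pool name below is a placeholder):

zpool import            # scan attached disks for importable pools
zpool import yourpool   # import the pool by name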
Hrm. Will you be continuously spinning up hosts? Because from my vantage point the calculus isn't saving work, it's PERFORMING a lot of work for very little effect.
I had thought about doing AIS deployments in the past, but in the final analysis it simply wasn't worth the effort given how...
Yes. Lots.
And to forestall your next question: they're in the documentation, but more to the point they're not relevant until you define what it is you're trying to accomplish.
Conflating "this is what I have, and this is what I want to do" with "no other choice" is folly. Hardware is cheap and easy; building solutions on inadequate hardware is saving a penny to lose a pound. What's the relative cost of an outage? If it doesn't present a cost, I posit it's probably...