Things that, over time, made me realize they are the real causes of the pvestatd service having problems and needing to be restarted:
NFS, or a filesystem shared across the cluster, not found or suffering from connection latency or packet loss to the NFS server
Excessive use...
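A minimal watchdog sketch of that first case, assuming a hard NFS mount under /mnt/pve and the usual systemctl restart; the mountpoint and timeout below are my own placeholders, not PVE defaults:

```python
#!/usr/bin/env python3
"""Restart pvestatd when the shared storage is hung or missing (sketch)."""
import os
import subprocess
from multiprocessing import Process

NFS_MOUNT = "/mnt/pve/nfs-storage"  # assumed storage mountpoint
TIMEOUT = 10                        # seconds before the mount counts as stuck

def probe():
    os.stat(NFS_MOUNT)  # blocks indefinitely on a hung hard NFS mount

p = Process(target=probe)
p.start()
p.join(TIMEOUT)
if p.is_alive():               # stat() never returned: latency/packet-loss case
    p.terminate()
    stuck = True
else:
    stuck = p.exitcode != 0    # stat() failed: storage-not-found case

if stuck:
    subprocess.run(["systemctl", "restart", "pvestatd"], check=True)
```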
I think it is totally possible, as Ceph is represented in an LVM disk as raw or diskimg.
The problem is how to invoke these commands and open the ports in Ceph for that. Also the key sharing.
The easiest way is to put an NFS server on top of CephFS and share the image over NFS.
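A minimal sketch of that idea, assuming CephFS is already mounted at /mnt/cephfs and nfs-kernel-server is installed; the mountpoint and client subnet are placeholders, not anything PVE sets up for you:

```python
#!/usr/bin/env python3
"""Export an existing CephFS mountpoint over NFS (hypothetical paths)."""
import subprocess

CEPHFS_MOUNT = "/mnt/cephfs"        # assumed CephFS mountpoint
CLIENT_SUBNET = "192.168.10.0/24"   # assumed client network

# Add an export entry so clients on the subnet can mount the share.
with open("/etc/exports", "a") as f:
    f.write(f"{CEPHFS_MOUNT} {CLIENT_SUBNET}(rw,sync,no_subtree_check)\n")

# Re-read /etc/exports without restarting the NFS server.
subprocess.run(["exportfs", "-ra"], check=True)
```

The clients can then mount the share as plain NFS storage, so only the NFS host needs the Ceph ports and keys.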
This might help...
How could I make those changes? I have Proxmox installed from the .iso image on bare metal.
Should I compile it?
Where can I find documentation for that, so I can make those improvements and commit them to the PVE community?
For some time I have been wondering how I could contribute to Proxmox.
I would really like to have the standard visualization filters for
{{cluster}}, {{guestname}}, {{node}}, {{vmid}}
in the backups listing on a storage.
This would be a must for me, and I think for other people too.
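To illustrate the {{vmid}} part of what I mean, a client-side sketch over standard vzdump file names; the file list is made up and nothing here is an existing PVE API:

```python
import re

# Made-up examples following the standard vzdump naming scheme.
backups = [
    "vzdump-qemu-101-2023_05_01-02_30_00.vma.zst",
    "vzdump-lxc-205-2023_05_01-03_00_00.tar.zst",
    "vzdump-qemu-101-2023_05_02-02_30_00.vma.zst",
]

PATTERN = re.compile(r"vzdump-(?P<type>qemu|lxc)-(?P<vmid>\d+)-")

def filter_by_vmid(names, vmid):
    """Keep only the backups whose encoded vmid matches."""
    return [n for n in names
            if (m := PATTERN.match(n)) and m.group("vmid") == str(vmid)]

print(filter_by_vmid(backups, 101))  # -> the two vmid-101 archives
```

The same match could obviously extend to {{node}} and {{guestname}} once the server exposes them per volume.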
First...
Well, for example, if we compare with OpenShift or OpenStack, they are huge projects taking a somewhat different approach to the same problem PVE solves.
From my point of view, some things like multi-cluster integration and HA between geo locations can be done with K8s, for example, but then everything...
It seems like a nice idea, but:
First of all, PVE is already well integrated with LXC.
It has been a successful implementation, and we have TKL support for it.
I don't know if the effort is worth it for this huge integration; understand that to request a feature in Proxmox that supports Kata we have...
How do you manage multi-site VLANs for the load balancer? How do you integrate physical networks that are separated from each other into a single VLAN?
Thanks
Because those servers were already bought, and they offer 150 W consumption per blade (2.2 kW total per chassis) while spinning up 512 GB RAM per blade (7.2 TB RAM total), and they come with 2x 24-core CPUs per blade, a total of 672 cores at 3.2 GHz. Energy efficiency is really good, and we also have two NFS servers outside for backups...
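For reference, a quick sanity check of those figures (my own arithmetic, assuming the 14 blades per chassis mentioned below):

```python
blades = 14
ram_per_blade_gb = 512
cores_per_blade = 2 * 24      # two 24-core CPUs per blade
watts_per_blade = 150

print(blades * ram_per_blade_gb / 1000)  # 7.168 -> the quoted ~7.2 TB RAM
print(blades * cores_per_blade)          # 672 cores
print(blades * watts_per_blade / 1000)   # 2.1 kW of blades vs 2.2 kW chassis
```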
Aaron, in this scenario:
a Huawei E9000 with 7 CH121 blades (2 disks per blade: one disk for the Proxmox OS + one disk for a Ceph OSD), which would be an appropriate size/minimum settings?
The power supply and switching are centralized for this type of system (6 PSUs + 2 switches). I suppose 1 node failure is to...
Unfortunately this didn't get any answer, but I'll tell you my case today.
We just bought a bunch of Huawei E9000s that look equivalent to this setup. We have 14 blades per chassis with the same 2.5" disk setup.
One possibility is to use NVMe drives, as this documentation here allegedly...
Yes, the same for the entire multi-blade E9000, for high density. This is really a must for the next version; all big data centers are now migrating from Dell and HP to Huawei and IBM because of the cost and because shipping from China is the best.
CH121
CH222
And all the blades compatible with the...
This thread needs an UP, because it really doesn't make sense that we are still in 2023 and LXC (the base of the containerization that all the Docker garbage is built on) is still not feasible alongside Kubernetes.
Also, having Proxmox and Ceph is a must, not only for the reliability but also...