If you only have one fast network connection, use it for the Ceph public network and do not configure a cluster network.
A cluster network is only useful when you have a separate physical network infrastructure that is at least twice as fast...
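As a sketch, a single-network setup would only set `public_network` in the Ceph config and leave `cluster_network` unset, so replication traffic also uses the public network (the subnet below is hypothetical):

```ini
# /etc/pve/ceph.conf (excerpt) -- subnet is an example, not a recommendation
[global]
    public_network = 10.10.10.0/24
    # no cluster_network entry: OSD replication traffic
    # simply shares the public network
```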
Choosing the interface is currently not possible; it always takes the IP of the interface that provides the default route (or none, if there is no default route).
Better SNAT/DNAT support is something that I am considering implementing with the...
If you're going to be running Proxmox on 48 nodes, use your commercial support to lodge a ticket and get exact advice for your setup.
You do have a support subscription given that Proxmox VE is core to your operation, right?
Hello,
We have seen clusters of around ~24 nodes in production. In our experience this can work without fine-tuning, provided they follow Corosync best practices (see e.g. [1]) and the latency on Corosync's network is low and stable...
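One common Corosync best practice is giving it at least one dedicated, low-latency link, with a second link for redundancy. A sketch of the relevant part of `/etc/pve/corosync.conf` (node names and addresses are hypothetical):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.20.1   # dedicated Corosync network
    ring1_addr: 10.10.30.1   # redundant fallback link
  }
  # ... further nodes with their own ring0_addr / ring1_addr
}
```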
Hi @Nathan Stratton and all,
You need clear guidance here: do not do that unless you have a very compelling reason to.
a) Your hardware is discontinued and past the end of service, which significantly increases the likelihood of component...
Proxmox has no accounting or multi-tenancy, so there are no such separate views either.
Do you always bill the gross capacity, regardless of what is actually consumed? For a setup like that, a quota view would be very...
That is indeed a use case for multiple pools and quotas.
In a pool that is added as a storage in Proxmox, each institute can then store several VMs of its own.
In Proxmox, permissions can then be assigned accordingly.
The RBD is the virtual disk of a VM; it always has a fixed size.
A Ceph pool can hold multiple RBDs and is, in principle, as large as the entire cluster, unless it is given a quota.
Multiple pools...
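The per-institute setup described above might be sketched like this; pool name, PG count, and quota size are assumptions for illustration, not recommendations:

```shell
# Create a pool for one institute and cap it with a quota (values are examples)
ceph osd pool create institute-a 128
ceph osd pool application enable institute-a rbd
ceph osd pool set-quota institute-a max_bytes $((10 * 1024 ** 4))   # 10 TiB

# Add it as an RBD storage in Proxmox VE; storage permissions can then be
# restricted to that institute's users
pvesm add rbd ceph-institute-a --pool institute-a --content images
```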
You can only skip one Ceph version when upgrading.
So from 14 you can upgrade to 16, and only after that to 18.
The upgrade procedures are documented at https://pve.proxmox.com/
osd.1 appears to be unreachable. If this is a networking issue, I can't really help you, since you are reluctant to post your actual IP addresses (or at least their respective subnets).
That said, check the host of osd.1 to see...
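A few generic checks on the host that carries osd.1 (the unit name follows the standard `ceph-osd@<id>` naming):

```shell
ceph osd tree                        # is osd.1 shown as down/out?
systemctl status ceph-osd@1.service  # is the daemon running on its host?
journalctl -u ceph-osd@1 -n 50       # recent log lines from that OSD
ss -tlnp | grep ceph-osd             # is it listening on the expected network?
```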
Well, two is exceptionally(!) bad, because then there is no majority voting. Neither may fail --> the risk of a failure is more than twice as high as with a single one!
One is bad because... if it fails you have a problem -->...
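The majority argument can be put into numbers: with N monitors, quorum needs a strict majority of floor(N/2) + 1 votes, so only floor((N-1)/2) monitors may fail. A small sketch (the helper name is made up):

```shell
# Hypothetical helper: how many monitor failures a cluster of N can tolerate
# while keeping quorum. Quorum needs floor(N/2) + 1 votes, so floor((N-1)/2)
# monitors may fail.
tolerable_failures() {
    echo $(( ($1 - 1) / 2 ))
}

tolerable_failures 1   # 0 -- any failure stops the cluster
tolerable_failures 2   # 0 -- still none may fail, but twice the hardware that can break
tolerable_failures 3   # 1 -- the usual minimum for production
```

This is why odd monitor counts are preferred: going from 2 to 3 monitors adds failure tolerance, while going from 3 to 4 does not.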
My bad, I just checked my documentation, and yes, the node 1 monitor was there automatically from the base installation. I just added the other two for nodes 2 and 3 after creating the OSDs.
There is no firewall yet.
Did you not use the PVE installation wizard? The monitor "maintains a master copy of the cluster map." I would imagine doing it the other way around could result in OSDs not being connected to the system...?
re: SSDs, yes it can make a big difference...
Hey all, I'm sure a few of you have seen the plugin we are developing on Reddit, but I wanted to make a formal post here on the forum to hopefully get more testers and feedback.
The plugin, source code, and documentation are all available here...