The PVE GUI
Yeah, that is something that would be useful.
Cool. Thanks. Will try that.
But in the case at hand, my issue does not seem to be coming from a VM, because there are no VMs left on the node, and the VMs running on the other nodes do not affect their iodelay (at least not this badly)...
Yes, I had one disk (per type) per node for the longest time.
Huh, and there I was thinking that adding an HDD per node to the HDD pool would actually improve operational safety...
At the moment, I have approx. 14TB worth of data across the two HDDs per node. What you are telling me, if I...
Maybe there was a misunderstanding. I am not wondering why the 14TB and the 4TB HDDs get a different number of PGs. That is expected, as you explain.
My issue is that on Host 2 and Host 3 the 14TB HDD gets filled only to 76% whereas the 4TB HDD gets filled to 92% capacity while on Host 1...
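For reference, the per-OSD fill level and PG count can be listed on any node with the commands below; the second one assumes a Ceph version that supports filtering by device class:

# size, weight, %use and PG count per OSD, grouped by host
ceph osd df tree
# the same, restricted to the HDD device class
ceph osd df tree class hdd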
I'd say it's definitely not local storage (because there are no VMs running on the node anymore).
It could be the HDDs for sure. But shouldn't that affect the other nodes as well? (They are all practically identical).
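If one of the HDDs were failing, I would expect SMART to show it; a quick check on the suspect node (smartmontools assumed installed, /dev/sdb is just a placeholder for the actual disk):

# SMART health summary, error log and attributes for the suspect HDD
smartctl -a /dev/sdb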
I will take a snapshot later and post the technical details.
Thought it was balancing automatically.
This is the output:
"last_optimize_started": "Mon Nov 27 20:02:25 2023",
"optimize_result": "Unable to find further...
Yes, the "r" was from the "replicated_rule" whereas the "c" was from "ceph_hdd" and "ceph_sdd" - my own replicated rules for two pools.
I did change the default replicated rule for .mgr to "ceph_ssd" as suggested. There was a very brief spike of activity in Ceph but overall nothing has changed...
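For anyone finding this later, this is the change in CLI form; the rule name "ceph_ssd" is from my own setup:

# list the available CRUSH rules
ceph osd crush rule ls
# assign the SSD-backed rule to the .mgr pool
ceph osd pool set .mgr crush_rule ceph_ssd
# verify the change
ceph osd pool get .mgr crush_rule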
I am running a Mailcow instance in a VM on my cluster and it is going great.
But, me being me and my cluster being a home lab and no commercial operation, I am always trying out new things, playing around.
Now I am wondering whether there would be any benefit in setting up a Proxmox Mail...
I have a three node PVE cluster with identical nodes. Each node has an SSD that is part of a Ceph pool (I know, I should have more SSDs in the pool). And each node also has two HDDs that are part of another Ceph pool.
I replaced the three enterprise grade SSDs with three other, larger...
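In case the pool layout matters for the answer: the two pools are separated by device class, with replicated rules roughly like these (rule names are my own, failure domain is the host):

# rule that only selects SSD OSDs
ceph osd crush rule create-replicated ceph_ssd default host ssd
# rule that only selects HDD OSDs
ceph osd crush rule create-replicated ceph_hdd default host hdd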
Hmm. So I just migrated / shut down the VMs on the node with high iodelay. Turns out, even with zero VMs running, the iodelay remains (almost) the same. Probably has nothing to do with the VMs then. I'll open a new thread for this topic.
But I'd still be interested how to interpret the output...
I'm having a strange situation again: this time, I replaced the three enterprise grade SSDs with three other, larger enterprise grade SSDs. Two nodes show low iodelay (2%) while one node shows very high iodelay (25%). The three nodes are basically identical (make, model, CPU, memory) and also the...
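For whoever wants to compare numbers: per-device I/O load and latency can be watched on each node with the commands below (sysstat and iotop assumed installed):

# extended per-device statistics (utilization, await), refreshed every 2 seconds
iostat -x 2
# only show the processes that are actually generating I/O
iotop -o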
Thanks but I'm afraid my question was not phrased clearly enough:
Oguz had said that the checkbox (to activate encryption) could be checked at any time.
When I asked how to check that, I did not mean how to check whether encryption was active or not, but rather how to check that checkbox (in order...
So I have a local PBS running, backing up my PVE cluster. Backups are encrypted.
And I have another PBS running offsite, syncing the backups from my local PBS for safekeeping. The same backups are not encrypted at the other PBS.
What do I need to do to keep my encrypted backups the way I...
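For context, the offsite PBS pulls with a standard sync job; defined on the CLI it would look roughly like this (host, auth-id, datastore and job names are placeholders):

# on the offsite PBS: register the local PBS as a remote
proxmox-backup-manager remote create local-pbs --host 192.0.2.10 --auth-id 'sync@pbs' --password '...'
# pull everything from the remote datastore into the offsite datastore once a day
proxmox-backup-manager sync-job create offsite-pull --store offsite-store --remote local-pbs --remote-store backups --schedule daily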