Just to understand: on each node I have 2 OSDs.
If 1 OSD dies, won't Ceph rebalance to the OSDs of the other nodes? Or will it try to rebalance to the other OSD on the same node?
In that case, would it be correct to create 2 pools, each pool with 1 OSD per node?
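A hedged sketch of how to check this on a live cluster (the rule name "replicated_rule" is the Ceph default and is an assumption here): with a host-level failure domain, Ceph keeps each replica on a different host, so after an OSD failure data is re-replicated to surviving OSDs while still honoring that constraint, which can include the second OSD on the same node.

```shell
# Sketch, assuming the default replicated rule; needs a live Ceph cluster.
ceph osd crush rule dump replicated_rule   # look for "type": "host" (the failure domain)
ceph osd tree                              # shows the host -> OSD hierarchy CRUSH uses
```

If the failure domain is "host", separate pools per OSD are usually unnecessary for this purpose.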
Hi Aaron,
root@cephnode1:~# pveceph pool ls --noborder
Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used
.mgr 3 2 1 1 1 on...
Sirs,
I'm having my first experience with Ceph.
I have 3 nodes with 2 disks each, totaling 6 OSDs.
My total storage reports 80% used, but 1 of my OSDs reports 94% used, generating a HEALTH_WARN status.
Why this behavior, and what should I do to balance the data in...
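A hedged sketch of the usual way to inspect and correct this kind of per-OSD imbalance (these commands exist in current Ceph releases but need a live cluster to run):

```shell
# Sketch: inspect utilization, then let the built-in balancer even out PGs.
ceph osd df tree          # per-OSD utilization and PG counts, grouped by host
ceph balancer status      # is the automatic balancer enabled, and in which mode?
ceph balancer mode upmap  # upmap mode requires all clients to be Luminous or newer
ceph balancer on          # balancer moves PGs toward even utilization over time
```

With only 6 OSDs, some spread between total and per-OSD utilization is normal, but 80% vs. 94% is worth correcting before the full OSD blocks writes.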
Here in my MX cluster I have a setting in Postfix that allows me to enable filters per domain.
E.g.:
domain A with RBL and greylisting
domain B with RBL only
domain C with greylisting only
Everything is managed by MySQL.
This feature would be extremely useful in PMG;
perhaps the lack of this will prevent...
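For reference, the per-domain setup described above can be expressed in plain Postfix with restriction classes looked up from MySQL. This is a hedged sketch: the class names, table name, greylisting policy port, and credentials are all hypothetical, not taken from the post.

```
# /etc/postfix/main.cf (fragment) -- hypothetical class names
smtpd_restriction_classes = rbl_and_grey, rbl_only, grey_only
rbl_and_grey = reject_rbl_client zen.spamhaus.org, check_policy_service inet:127.0.0.1:10023
rbl_only     = reject_rbl_client zen.spamhaus.org
grey_only    = check_policy_service inet:127.0.0.1:10023
smtpd_recipient_restrictions =
    check_recipient_access mysql:/etc/postfix/mysql-domain-policy.cf,
    permit

# /etc/postfix/mysql-domain-policy.cf -- hypothetical table/columns
hosts = 127.0.0.1
user = postfix
password = secret
dbname = mail
query = SELECT policy FROM domain_policy WHERE domain = '%d'
```

The MySQL query returns one of the class names per recipient domain, which Postfix then expands into the matching restriction list.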
apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
How do I update the system? I want to upgrade to 6.4 in order to later upgrade to 7.
Do I need a...
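A hedged sketch of the usual cause: "0 upgraded" typically means no Proxmox package repository is configured. For a system without a subscription, the standard pve-no-subscription repository line for PVE 6.x (Debian Buster) is:

```shell
# Sketch: add the no-subscription repo, then retry the upgrade.
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
apt dist-upgrade    # should now offer the 6.4 packages
```

Check that no pve-enterprise repository is enabled without a valid subscription, since apt errors on it can mask the update.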
Why is the memory information of a VM in the GUI not the same as what I see inside a Linux VM, for example?
The GUI always shows an average of 70 to 80% in use, while inside the VMs only 20% to 30% shows as used.
I have the same problem.
After updating, all my trouble started; we have already tried everything to identify the problem and I cannot.
Dirk.Nilius, could you tell us what was causing your problem?