Thanks for your answer.
It is to extend the existing storage pool, so adding an extra pool is not what I want.
If reweighting the bigger disks comes at the cost of space, it would be a better idea to just buy the cheaper 480 GB disks.
When replacing all the 480 GB disks with 960 GB ones, all the disks will have more...
Hi All,
I have a 6 node cluster that is running out of space faster than expected.
I have enough cores and memory.
My Ceph pool is at almost 70% used space.
I have 7 SSDs of 480 GB in all six nodes, with 2/3 replication.
I have one slot left in all six nodes.
The wearout of the 480 GB disks is only 6%...
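To see why the pool fills up so fast, a quick back-of-the-envelope capacity calculation for the setup described above (6 nodes, 7 x 480 GB SSDs per node, 3x replication) helps. The 0.85 factor is an assumption based on Ceph's default nearfull warning ratio, not a figure from the post:

```python
# Rough usable-capacity estimate for the cluster described in the post.
NODES = 6
OSDS_PER_NODE = 7
DISK_GB = 480
REPLICATION = 3        # "2/3 replication" -> size=3
NEARFULL_RATIO = 0.85  # assumption: Ceph's default nearfull warning threshold

raw_gb = NODES * OSDS_PER_NODE * DISK_GB      # total raw capacity
usable_gb = raw_gb / REPLICATION              # capacity after 3x replication
safe_gb = usable_gb * NEARFULL_RATIO          # headroom before nearfull warnings

print(f"raw: {raw_gb} GB, usable: {usable_gb:.0f} GB, safe: {safe_gb:.0f} GB")
```

So roughly 20 TB of raw SSD yields only about 6.7 TB of usable space, and warnings start well before that is full.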
Hello DCsapak,
Thanks for your answer, a bit of a late reaction from me....
The output of `ceph osd df`:
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS
0   hdd    0.30009  0.95001   307 GiB  164 GiB  135 GiB  3.1 MiB  1021 MiB  143 GiB  53.36  1.17  31   up
1   hdd    0.30009...
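For reading the `ceph osd df` output above: the VAR column is the ratio of that OSD's utilization to the cluster-wide average utilization, so the two visible numbers for osd.0 let you recover the average. This is a sketch of that relationship, using the values from the row above:

```python
# VAR in `ceph osd df` = osd %USE / cluster average %USE,
# so average = %USE / VAR. Values taken from the osd.0 row.
osd_use = 53.36  # %USE of osd.0
var = 1.17       # VAR of osd.0

avg_use = osd_use / var
print(f"cluster average utilization ~ {avg_use:.1f}%")
```

A VAR noticeably above 1.0, as here, means the OSD holds more data than the cluster average and is a candidate for reweighting.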
Hello,
I can't get my head around this.
We forgot to remove an unattached disk of a VM, created a new one, and did the math: we should have enough space even with the forgotten disk.
On the Windows VM we needed to extract multiple archives, so the new disk is growing fast.
Then suddenly I got an error on...
Hello Aaron, thanks for the reply.
Yes, I am planning to enable HA.
No, I don't have any ports left;
I have 2x 10 GbE SFP+ ports left which I was planning to use for internet traffic (max 1 GbE), and give Corosync the rest of the available bandwidth.
Using a VLAG -> OVS bond and adding 2 OVS...
Hi.
I am building a 5 node cluster with 2 switches that support VLAG.
Only now I notice that in the future I am not going to have enough VLAGs.
I have 3 bonds per node:
- Ceph Public
- Ceph Cluster network
- Proxmox Cluster (Corosync) & internet traffic
so I was thinking to use...