Hello all,
We're running our servers on a Proxmox 8.1 cluster with Ceph installed. I'm actively using both a Ceph block (RBD) pool and a CephFS pool.
But it seems like this splits our total usable storage in two, and I don't know how to determine the limits I need. Right now it shows roughly 20TB-20TB...
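A minimal sketch of what is likely happening: with the default 3x replication, usable space is raw capacity divided by three, and `ceph df` reports MAX AVAIL per pool against the *same* shared free space. The numbers below (60 TB raw, replication factor 3) are illustrative assumptions, not values from the poster's cluster:

```python
# Illustrative sketch: why two Ceph pools can each show ~20 TB "MAX AVAIL".
# raw_capacity_tb and replication_factor are assumed values for the example.

raw_capacity_tb = 60       # total raw OSD capacity (assumption)
replication_factor = 3     # Ceph's default replicated pool size

# Usable space after replication overhead:
usable_tb = raw_capacity_tb / replication_factor

# Both pools draw from the same shared free space, so each pool's
# MAX AVAIL shows ~20 TB -- the storage is not actually "divided",
# and total usable capacity is NOT 20 + 20 = 40 TB.
rbd_max_avail = usable_tb
cephfs_max_avail = usable_tb

print(f"usable:                {usable_tb:.0f} TB")
print(f"RBD pool MAX AVAIL:    {rbd_max_avail:.0f} TB")
print(f"CephFS pool MAX AVAIL: {cephfs_max_avail:.0f} TB")
```

In other words, the two pools are not each capped at half the cluster; whichever pool writes data first consumes the shared free space, and both MAX AVAIL figures shrink together.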
Just ran into this in the lab, haven't gone digging in prod yet.
pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.2.16-20-pve)
Cluster is alive and working, zero issues, everything in the GUI is happy -- however... the "ceph device" table appears to have NOT updated itself for a...
I am currently evaluating Proxmox in a cluster environment and intend to expand it to 7 storage nodes and 7 compute nodes to take advantage of the storage provided by Ceph. I have spent the last few weeks formatting the machines and reinstalling every time I make a Ceph...
So I have been doing a lot of tests with Proxmox and Ceph.
I'm now thinking about a certain case: is it possible to use the Ceph pools from inside a VM? Or maybe a CephFS?
How should I go about it without breaking everything :) ?
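One way this is commonly done, sketched under assumptions: mount CephFS inside the guest with the kernel client, using a dedicated CephX user so the VM never holds admin credentials. The monitor addresses, the `vmclient` user name, and the mount point below are all placeholders, not values from this thread:

```
# Inside the guest VM -- kernel CephFS client via /etc/fstab.
# 10.0.0.1..3 are placeholder monitor addresses; "vmclient" is a
# hypothetical CephX user created on the cluster first, e.g.:
#   ceph fs authorize cephfs client.vmclient / rw
10.0.0.1,10.0.0.2,10.0.0.3:/  /mnt/cephfs  ceph  name=vmclient,secretfile=/etc/ceph/vmclient.secret,_netdev  0  0
```

The guest only needs network reachability to the Ceph public network and the `ceph-common` package; nothing on the Proxmox side is touched, which keeps the "without breaking everything" risk low.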
Hi Guys,
I have managed to make an EC pool and use CephFS instead of RBD.
The idea is to create a directory inside CephFS and configure it as shared storage (since it's on CephFS).
This works, and we can create qcow2 images on top of it, so you basically get the benefits of an EC pool and...
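For reference, the setup described above can be sketched roughly as follows. The profile name, pool name, k/m values, and directory path are placeholders; the poster's actual values aren't given in the thread:

```
# Cluster-side admin commands -- run against a working Ceph cluster.
# Create an EC profile and pool (k=4, m=2 chosen arbitrarily here):
ceph osd erasure-code-profile set ec-profile-example k=4 m=2
ceph osd pool create cephfs_ec_data erasure ec-profile-example

# EC overwrites must be enabled before CephFS (or RBD) can use the pool:
ceph osd pool set cephfs_ec_data allow_ec_overwrites true
ceph osd pool application enable cephfs_ec_data cephfs

# Attach the EC pool as an additional data pool of the filesystem:
ceph fs add_data_pool cephfs cephfs_ec_data

# Pin a directory to the EC pool via a file layout attribute,
# then point a Proxmox "Directory" storage (marked shared) at it:
setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/pve/cephfs/ec-dir
```

Note that CephFS metadata must still live on a replicated pool; only the data pool can be erasure-coded.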