Good morning everyone,
I have a cluster of 3 Proxmox servers running version 6.4-13.
Last Friday I updated Ceph from Nautilus to Octopus, since that is one of the prerequisites for upgrading Proxmox to version 7.
At first everything worked wonderfully.
But when I checked today, I found that it is giving me the...
Hello everyone,
I honestly don't really know or remember how I got myself into this situation.
What I remember is: quite some time ago (early/mid 2020) I installed Ceph on what is now my only PVE node to take a look at it.
After some time I uninstalled it, most likely with apt, as I wasn't aware of...
Hello,
We have a three-node cluster that has been working for some months now. Today I rebooted one node because of updates, and now we are no longer able to access Ceph:
Do you have any recommendation on how to solve this? We have already tried reinstalling the python-rados packages.
Thank you and...
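
In cases like the one above, where the python-rados packages were reinstalled but Ceph still cannot be reached, a minimal reachability test with the rados binding can show whether the problem lies in the binding itself or in reaching the monitors. This is only a sketch; it assumes the default ceph.conf and client.admin keyring paths on a PVE node.

```python
# Minimal reachability check using the python-rados (librados) binding.
# Paths below are the Proxmox/Ceph defaults; adjust if your setup differs.
import rados

cluster = rados.Rados(
    conffile='/etc/ceph/ceph.conf',
    conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'},
)
try:
    cluster.connect(timeout=10)             # raises if the monitors are unreachable
    print("fsid:", cluster.get_fsid())      # confirms we reached the expected cluster
    print("pools:", cluster.list_pools())   # simple check that commands go through
finally:
    cluster.shutdown()
```

If connect() itself fails, that usually points to the monitor quorum or the keyring rather than the Python packages.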
I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each).
There are 2 Ceph pools configured on them, separated into an NVMe pool and an SSD pool through CRUSH rules.
The public_network uses a dedicated 10 Gbit network, while the cluster_network uses a dedicated 40...
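
As a side note on setups like this one, a quick way to double-check that each pool really maps to the intended CRUSH rule is to query the monitors through the same python-rados binding. The pool names below are placeholders, since the post does not name its pools.

```python
# Check which CRUSH rule each pool uses via a monitor command.
# 'nvme-pool' and 'ssd-pool' are placeholder names for this example.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    for pool in ('nvme-pool', 'ssd-pool'):
        cmd = json.dumps({
            'prefix': 'osd pool get',
            'pool': pool,
            'var': 'crush_rule',
            'format': 'json',
        })
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(pool, '->', out.decode() if ret == 0 else errs)
finally:
    cluster.shutdown()
```

This returns the same information as `ceph osd pool get <pool> crush_rule` on the CLI.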
I wanted to know the status of non-Linux OS support for direct access to Ceph's native I/O paths with RADOS.
I am trying to evaluate which existing programs would allow direct Ceph cluster access, in order to avoid using iSCSI or CIFS gateways.
The idea is to use such programs to...
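
For what it's worth, "direct RADOS access" without an iSCSI or CIFS gateway essentially means talking to the cluster through librados or one of its bindings. Below is a small sketch with the python-rados binding; the pool name 'rbd' and the object name are made up for the example.

```python
# Direct object I/O against RADOS through librados (python-rados),
# with no iSCSI or CIFS gateway in between.
# The pool 'rbd' and object 'hello-object' are example names only.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')       # any existing pool works here
    try:
        ioctx.write_full('hello-object', b'written straight to RADOS')
        print(ioctx.read('hello-object'))   # b'written straight to RADOS'
        ioctx.remove_object('hello-object')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

How well librados and its bindings build and behave on non-Linux platforms is exactly the open question in the post above.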
Hello everyone,
We have a big problem with our Ceph configuration.
For the last 2 weeks the bandwidth has dropped extremely low.
Does anybody have an idea how we can fix this?
I've got 2 Proxmox clusters: one with a Hammer Ceph cluster (bigger) and a second one working as a Ceph client with a dedicated pool (smaller). I wanted to upgrade the smaller cluster first, before the bigger one with the data. After upgrading two nodes of the smaller cluster to Proxmox 5.1, they are working properly...