rados

  1. Can't load RADOS for module PVE::RADOS, GUI and CLI tools fail

    Hello, my Proxmox server has been running happily for a few months since the last reboot. I tried to start a new container and got this message: Can't load '/usr/lib/x86_64-linux-gnu/perl5/5.36/auto/PVE/RADOS/RADOS.so' for module PVE::RADOS: libboost_iostreams.so.1.74.0: cannot open shared object...
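An error like the one above means the dynamic loader cannot resolve one of the module's shared-library dependencies. A minimal diagnostic sketch: the .so path is taken from the error message and may differ on your system, and the apt package name is an assumption.

```shell
# List any shared-library dependencies the loader cannot resolve.
missing_libs() {
    ldd "$1" 2>/dev/null | grep "not found"
}

# Path taken from the error message above; adjust for your install.
missing_libs /usr/lib/x86_64-linux-gnu/perl5/5.36/auto/PVE/RADOS/RADOS.so

# If libboost_iostreams.so.1.74.0 is reported missing, reinstalling the
# package that ships it is the usual fix (package name is an assumption):
#   apt install --reinstall libboost-iostreams1.74.0
```

On a healthy install the function prints nothing; any "not found" line names the library that needs to be reinstalled.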
  2. [SOLVED] Ceph Pacific. RADOS. Objects are not deleted, but only orphaned

    ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable). After adding a file to the bucket, radosgw-admin --bucket=support-files bucket radoslist | wc -l reports 96. ceph df shows: --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 44 TiB 44 TiB 4.7 GiB 4.7 GiB...
  3. Ceph Error

    Good morning everyone, I have a cluster of 3 Proxmox servers under version 6.4-13. Last Friday I updated Ceph from Nautilus to Octopus, since it is one of the requirements for upgrading Proxmox to version 7. At first everything worked perfectly, but today when I check I find that it is giving me the...
  4. Unexpected Ceph behaviour from unused Ceph installation

    Hello everyone, I honestly don't really know or remember how I got myself into this situation. What I remember is: quite some time ago (early/mid 2020) I installed Ceph on my now only PVE node to take a look at it. After some time I uninstalled it, most likely with apt, as I wasn't aware of...
  5. [SOLVED] undefined symbol: rados_mgr_command_target (hoffmn01)

    Hello, we have a three-node cluster that has been working for some months now. Today I rebooted one node because of updates, and now we are no longer able to access Ceph. Do you have any recommendations on how to solve this? We already tried to reinstall the packages for python-rados. Thank you and...
  6. Ceph Performance Understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe pool and an SSD pool through CRUSH rules. The public_network uses a dedicated 10 GBit network while the cluster_network uses a dedicated 40...
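For performance questions like this, raw per-pool RADOS throughput can be measured with rados bench, which separates Ceph-level performance from anything the VMs add on top. A sketch, assuming a pool named nvme-pool (a placeholder; these commands must run against a live cluster):

```shell
# Benchmark raw RADOS throughput against one pool (placeholder pool name).
rados bench -p nvme-pool 30 write --no-cleanup  # 30 s of object writes, keep data
rados bench -p nvme-pool 30 seq                 # sequential reads of that data
rados -p nvme-pool cleanup                      # remove the benchmark objects
```

Running the same commands against each pool makes it easy to compare the NVMe and SSD CRUSH rules directly.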
  7. Native CEPH access from non-Linux OSes (DynFi User)

    I wanted to know what the status of non-Linux OS support is for direct access to Ceph's native I/O paths with RADOS. I am trying to evaluate which existing programs would allow direct CEPH cluster access in order to avoid using iSCSI or CIFS gateways. The idea is to use such programs to...
  8. Bandwidth very low - 2.3 MB/sec (ssaman)

    Hello everyone, we have a big problem with our Ceph configuration. Over the last 2 weeks the bandwidth has dropped extremely low. Does anybody have an idea how we can fix this?
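For a sudden throughput drop like this, the usual first step is to check cluster health and per-OSD latency before tuning anything; a sketch using standard Ceph CLI commands (run against a live cluster):

```shell
ceph -s                 # overall health, plus any recovery/backfill activity
ceph health detail      # expand warnings (e.g. slow ops, degraded PGs)
ceph osd perf           # per-OSD commit/apply latency; spots a failing disk
ceph osd pool stats     # per-pool client and recovery I/O rates
```

One OSD with latency far above its peers, or heavy recovery traffic competing with client I/O, is a common cause of cluster-wide slowdowns.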
  9. Issues after upgrade to 5.1

    I've got 2 Proxmox clusters: one with a Hammer Ceph cluster (bigger) and a second one working as a Ceph client with a dedicated pool (smaller). I wanted to upgrade the smaller cluster first, before the bigger one with data. After upgrading two nodes of the smaller cluster to Proxmox 5.1 they are working properly...