cephfs

  1. D

    Extremely slow Ceph storage once usage exceeds 60%?

    We have a "lab" Ceph object storage cluster consisting of 4 nodes, each with the following components: PVE Manager version pve-manager/7.1-7/df5740ad, kernel Linux 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100), 24 x Intel(R) Xeon(R) CPU X5675 @...
  2. E

    CEPH multiple MDS on the same node

    Hi, I am running a 5-node Ceph cluster (Octopus), and when I increased the number of active MDS daemons from 2 to 4 I saw a performance gain in my CephFS. Since I have a lot of clients using the CephFS pool, I think it might be a good idea to increase the number of MDS daemons even more...
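
    For reference, the active MDS count is set per file system; a minimal sketch, assuming the default file system name cephfs:

      # raise the number of active MDS ranks; the remaining daemons stay as standbys
      ceph fs set cephfs max_mds 4
      # confirm which daemons are active and which are standby
      ceph fs status cephfs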
  3. D

    Ceph 16.2.6 - CEPHFS failed after upgrade from 16.2.5

    TL;DR - After upgrading from 16.2.5 to 16.2.6, CephFS fails to start and all MDS daemons stay in "standby"; it requires ceph fs compat <fs name> add_incompat 7 "mds uses inline data" to work again. Longer version: pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve), apt dist-upgraded, CEPH...
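
    Spelled out as commands, with <fs name> replaced by an assumed file system name of cephfs:

      # add the missing compat flag so the standby MDS daemons can take over a rank again
      ceph fs compat cephfs add_incompat 7 "mds uses inline data"
      # verify that an MDS has become active
      ceph fs status cephfs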
  4. E

    CephFS user/permission conflict between different nodes (best practices?)

    Hi, we are using CephFS on a 3-node Proxmox cluster. We have mounted the CephFS at /home on several different Debian clients. All Debian clients (servers) see the files of the other Debian clients in the CephFS mount (/home). It happens that client XY has services on Debian client 1 and Debian...
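
    A common way to isolate clients is to give each one its own CephX user restricted to its own subdirectory; a sketch, assuming the file system is named cephfs and that client1 and its path are made-up examples:

      # key for client1, limited to read/write below /client1
      ceph fs authorize cephfs client.client1 /client1 rw
      # client1 then mounts only its own subtree
      mount -t ceph mon1,mon2,mon3:/client1 /home -o name=client1,secretfile=/etc/ceph/client1.secret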
  5. I

    Confusing Ceph GUI Info when using multiple CephFS volumes

    I recently upgraded to Proxmox 7 and Ceph Pacific which brought multiple CephFS support. My goal was to create one FS on my HDD OSDs and one FS on my SSD OSDs so I can balance workloads across the two sets of hardware. I have a "performance" and "capacity" crush rule. Previously, I had 2 RBD...
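
    For context, the usual pattern is one metadata/data pool pair per file system, each assigned to the matching crush rule; a sketch, where the pool and file system names are assumptions and "performance" is the poster's SSD rule:

      # pools for the SSD-backed file system, bound to the "performance" crush rule
      ceph osd pool create ssdfs_metadata
      ceph osd pool create ssdfs_data
      ceph osd pool set ssdfs_metadata crush_rule performance
      ceph osd pool set ssdfs_data crush_rule performance
      ceph fs new ssdfs ssdfs_metadata ssdfs_data
      # on some releases a second file system also requires: ceph fs flag set enable_multiple true
      # repeat with the "capacity" rule for the HDD-backed file system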
  6. J

    [SOLVED] [Warning] Ceph Upgrade Octopus 15.2.13 to Pacific 16.2.x

    This will probably not hit many people but it bit me and should be in the doc, at least until Octopus packages are upgraded to 15.2.14. The Bug that hit me: https://tracker.ceph.com/issues/51673 Fixed in 15.2.14: It was not easy to downgrade to Octopus but it can be done and everything is...
  7. A

    Ceph MDS reports oversized cache

    Hello, we've noticed some latency in our Ceph cluster over the last month, and when I checked the Ceph dashboard I found the warning shown in the attached file. As I understand it, this means the mds_cache_memory_limit property is not configured correctly. Could that be the reason why we experience latency in the...
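
    For reference, the limit can be inspected and raised at runtime; a sketch, where the 8 GiB value is only an example:

      # show the current MDS cache limit
      ceph config get mds mds_cache_memory_limit
      # raise it to 8 GiB for all MDS daemons
      ceph config set mds mds_cache_memory_limit 8589934592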
  8. D

    Recommended Config : Multiple CephFS

    I've been running around in circles trying to figure this out. What's the most direct way to get more than one CephFS running on a PVE 7 cluster with the pool types NOT matching? I.e., I'd like to have the following: 1. /mnt/pve/cephfs - replicated, SSD 2. /mnt/pve/ec_cephfs - erasure...
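
    One pattern for the erasure-coded case is to keep the metadata and default data pool replicated and attach the EC pool as an additional data pool; a sketch with assumed pool names, using the poster's /mnt/pve/ec_cephfs path:

      # replicated pools for metadata and the (required) default data pool
      ceph osd pool create ec_cephfs_metadata
      ceph osd pool create ec_cephfs_default
      # erasure-coded pool for the actual file data; overwrites must be enabled for CephFS
      ceph osd pool create ec_cephfs_data erasure
      ceph osd pool set ec_cephfs_data allow_ec_overwrites true
      # create the file system, then attach the EC pool as an extra data pool
      ceph fs new ec_cephfs ec_cephfs_metadata ec_cephfs_default
      ceph fs add_data_pool ec_cephfs ec_cephfs_data
      # direct data written under the mount point to the EC pool via a file layout
      setfattr -n ceph.dir.layout.pool -v ec_cephfs_data /mnt/pve/ec_cephfs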
  9. DynFi User

    mount cephfs using fstab

    I have a PBS system where I need to mount a CephFS file system. I have managed to mount it using this command: mount -t ceph 192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/ /mnt/mycephfs -o name=bob,secret=xxxxxxxxxxxxxxxxxxxxxxx== This works like a charm and I have access to my...
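
    The matching /etc/fstab entry would look roughly like the line below; the /etc/ceph/bob.secret path is an assumption, chosen so the key is read from a file instead of appearing in fstab:

      # /etc/fstab
      192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/  /mnt/mycephfs  ceph  name=bob,secretfile=/etc/ceph/bob.secret,noatime,_netdev  0  0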
  10. DynFi User

    Best way to access CephFS from within VM (high perf)

    We have a large 4-node cluster with about 419 TB split into two main pools, one for NVMe-based disks and another for SSDs. We are planning to use the NVMe RBD pool to store our VMs and the other pool to store shared data. The shared data will be very voluminous, with over 100 million files. Beside...
  11. K

    PVE/Ceph cluster behavior

    Hello, we have a 3-node Ceph/PVE cluster (PVE 6.4-5 / Ceph 15.2.11) and ran a few failure tests. In doing so, we noticed that Ceph practically stops responding when the links for the Ceph public and Ceph cluster networks are down. Even the (pve-)ceph commands return...
  12. E

    Backup ceph-fs?

    Hi, I've started to use PBS for VM and container backups, but I can't find a way to back up Ceph file systems... I've created a CephFS in the Proxmox cluster. Is there any proper way to do it? If not, are there any plans for upcoming Proxmox or PBS releases to support this feature? Thanks!
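
    One workaround is to run the file-level backup client against a node where the CephFS is mounted; a sketch, where the repository, archive name, and mount path are placeholders:

      # back up the mounted CephFS contents as a pxar archive to a PBS datastore
      proxmox-backup-client backup cephfs.pxar:/mnt/pve/cephfs \
          --repository backupuser@pbs@192.0.2.10:datastore1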
  13. S

    Cephfs content backed up in Proxmox Backup Server

    Hi! I have a 6-server cluster: 3 servers are hybrid nodes with a lot of OSDs and the other 3 nodes are VM processing nodes. Everything is backed by 2x 2-port 10G NICs in the hybrid nodes, 1x 2-port 10G NIC in the processing nodes, and two stacked N3K switches. Ceph handles VM storage and...
  14. K

    cephfs mount error: exit code 16 (500)

    Hello, on some servers in the cloud I see this error while trying to check the CephFS content: mount error: exit code 16 (500). I have the following package versions:
  15. I

    Proxmox-managed CephFS mount on an external computer (non-Proxmox) can't access file content

    I have successfully integrated Ceph (Proxmox-based) into all the LXC containers. Now I want to integrate it outside of Proxmox for some users with read-only access, to replace the current NFS share. What do I need to do? What parameters do I put in /etc/fstab?
  16. K

    ceph tooling fault when creating MDS

    Good evening, I posted in another thread (https://forum.proxmox.com/threads/proxmox-6-ceph-mds-stuck-on-creating.57524/#post-268549) that was created on the same topic and just hopped onto it, but that thread seems to be dead, so I am trying my luck here to see if this is a general problem...
  17. C

    Ceph shows "slow requests are blocked" when creating / modifying CephFS

    Hi, I have noticed in the Ceph log (ceph -w) an increase in "slow requests are blocked" messages when I create or modify a CephFS, e.g. 2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0 2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
  18. G

    Cephfs storage not unmounted on removal?

    Hello, after adding and then removing a CephFS instance in the storage GUI I noticed that it was not unmounted and/or deleted from /mnt/pve/[title]. I was wondering whether this is intentional. Note: this was my 2nd CephFS storage instance, in case that matters. I cannot remove my primary...
  19. R

    Mounting CEPHFS on a client

    I would like to mount CephFS on a client. Since the CephFS version is Nautilus, I decided to use a container running CentOS 7 as the client. It might as well have been an external physical machine; it just happened that I wanted to try with a container. Yes, CephFS is already installed on Proxmox and working...
  20. S

    Proxmox 6.x + Ceph + CephFS

    I am currently evaluating Proxmox in a cluster environment and intend to expand it to 7 storage nodes and 7 compute nodes to harness the storage provided by Ceph. I have spent the last few weeks formatting the machines and reinstalling every time I make a Ceph...
