cephfs

  1. shared storage for VMs on cephfs

    Hi, first time posting here. I have a Proxmox cluster with 3 nodes, all identical HP ProLiant (older models) servers with 10G networking for Ceph. The VMs are a mix of Win10 and CentOS 8. What I am trying to achieve is to have a part of CephFS treated as a directory which can be shared...
  2. Extremely SLOW Ceph Storage from over 60% usage ???

    We have a "lab" Ceph object storage cluster consisting of 4 nodes with the following components per node: PVE Manager version pve-manager/7.1-7/df5740ad, kernel Linux 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100), 24 x Intel(R) Xeon(R) CPU X5675 @...
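    For usage-related slowdowns like the one described above, per-OSD utilization is usually the first thing to check, since a single near-full or imbalanced OSD can throttle the whole cluster; a minimal diagnostic sketch:

    ```shell
    # Overall pool and raw usage
    ceph df
    # Per-OSD fill level and balance; look for outliers near the
    # nearfull/full ratios, which slow or block client I/O
    ceph osd df tree
    ```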
  3. CEPH multiple MDS on the same node

    Hi, I am running a 5-node Ceph cluster (Octopus), and when I increased the number of active MDSs from 2 to 4 I saw a performance gain in my CephFS. Since I have a lot of clients using the CephFS pool, I think it might be a good idea to increase the number of MDSs even more...
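    The active-MDS count discussed above is a per-filesystem setting; a minimal sketch, assuming the filesystem is named cephfs and enough standby MDS daemons exist to be promoted:

    ```shell
    # Raise the number of active MDS ranks for the filesystem "cephfs"
    ceph fs set cephfs max_mds 4
    # Verify which ranks are now active and which daemons are standby
    ceph fs status cephfs
    ```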
  4. Ceph 16.2.6 - CEPHFS failed after upgrade from 16.2.5

    TL;DR - Upgrade from 16.2.5 to 16.2.6 - CEPHFS fails to start after upgrade, all MDS in "standby" - requires ceph fs compat <fs name> add_incompat 7 "mds uses inline data" to work again. Longer version : pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve) apt dist-upgraded, CEPH...
  5. CephFS user/permission conflict between different nodes (best practices?)

    Hi, we are using CephFS on a 3-node Proxmox cluster. We have mounted the CephFS to /home on several different Debian clients. All Debian clients (servers) see the files of the other Debian clients in the CephFS mount (/home). It happens that client XY has services on Debian client 1 and Debian...
  6. Confusing Ceph GUI Info when using multiple CephFS volumes

    I recently upgraded to Proxmox 7 and Ceph Pacific which brought multiple CephFS support. My goal was to create one FS on my HDD OSDs and one FS on my SSD OSDs so I can balance workloads across the two sets of hardware. I have a "performance" and "capacity" crush rule. Previously, I had 2 RBD...
  7. [SOLVED] [Warning] Ceph Upgrade Octopus 15.2.13 to Pacific 16.2.x

    This will probably not hit many people but it bit me and should be in the doc, at least until Octopus packages are upgraded to 15.2.14. The Bug that hit me: https://tracker.ceph.com/issues/51673 Fixed in 15.2.14: It was not easy to downgrade to Octopus but it can be done and everything is...
  8. Ceph MDS reports oversized cache

    Hello, we've noticed some latency in the last month in our ceph cluster and when I checked ceph dashboard I found this warning in the attached file. As I understood this means mds_cache_memory_limit property is not configured correctly. Can that be the reason why we experience latency in the...
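    If the cache limit flagged above really is undersized for the working set, it can be raised at runtime; a hedged sketch (the 8 GiB figure is only an example, not a recommendation):

    ```shell
    # Raise the MDS cache memory limit to 8 GiB (example value;
    # size it to the node's RAM and the number of hot inodes)
    ceph config set mds mds_cache_memory_limit 8589934592
    # Confirm the effective value
    ceph config get mds mds_cache_memory_limit
    ```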
  9. Recommended Config : Multiple CephFS

    Been running around in circles trying to figure this out... what's the best/most direct way to get more than one CephFS running/working on a PVE 7 cluster with the pool types NOT matching? I.e., I'd like to have the following: 1. /mnt/pve/cephfs - replicated, SSD; 2. /mnt/pve/ec_cephfs - erasure...
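    With Pacific's multiple-filesystem support, the second, erasure-coded CephFS sketched above can be created on its own pools; a hedged outline (pool and filesystem names are examples, and the metadata pool must stay replicated):

    ```shell
    # Data pool: erasure-coded, with overwrites enabled (required for CephFS)
    ceph osd pool create ec_cephfs_data erasure
    ceph osd pool set ec_cephfs_data allow_ec_overwrites true
    # Metadata pool: must be replicated
    ceph osd pool create ec_cephfs_metadata
    # --force is needed because the data pool is erasure-coded
    ceph fs new ec_cephfs ec_cephfs_metadata ec_cephfs_data --force
    ```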
  10. mount cephfs using fstab

    I have a PBS system where I need to mount a CephFS FS. I have managed to mount this using this command: mount -t ceph 192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/ /mnt/mycephfs -o name=bob,secret=xxxxxxxxxxxxxxxxxxxxxxx== This is working like a charm and I have access to my...
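    The manual mount quoted above maps directly onto an fstab entry; a sketch using the same monitors and CephX user, with the key moved into a root-only file (the /etc/ceph/bob.secret path is an assumption):

    ```shell
    # /etc/fstab - same mount as above, persistent across reboots.
    # Keeping the key in a secret file avoids exposing it in fstab.
    192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/  /mnt/mycephfs  ceph  name=bob,secretfile=/etc/ceph/bob.secret,noatime,_netdev  0  0
    ```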
  11. Best way to access CephFS from within VM (high perf)

    We have a large 4-node cluster with about 419 TB split into two main pools, one for NVMe-based disks and another one for SSD. We are planning to use the NVMe RBD to store our VMs and the other pool to store shared data. The shared data will be very voluminous, with over 100 million files. Beside...
  12. PVE/CEPH cluster behavior

    Hello, we have a 3-node CEPH/PVE cluster (PVE 6.4-5 / CEPH 15.2.11) and ran a few failure tests. We noticed that Ceph stops responding almost entirely when the links for the Ceph public and Ceph cluster networks are down. Even the (pve)ceph commands return...
  13. Backup ceph-fs?

    Hi, I've started to use PBS for VM and container backups, but I can't find a way to back up Ceph file systems... I've created a CephFS in the Proxmox cluster. Is there a proper way to do it? If not, is there any plan for Proxmox or PBS to support this feature in upcoming releases? Thanks!
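    One approach to the question above is to treat the mounted CephFS like any other directory tree and archive it with proxmox-backup-client; a hedged sketch (repository, datastore, archive name, and mount path are all placeholders):

    ```shell
    # Archive the mounted CephFS tree as a pxar archive on a PBS datastore.
    # "root@pam@pbs.example:store1" and both paths are placeholder values.
    proxmox-backup-client backup \
        cephfs-data.pxar:/mnt/pve/cephfs \
        --repository root@pam@pbs.example:store1
    ```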
  14. Cephfs content backed up in Proxmox Backup Server

    Hi! I have a 6-server cluster: 3 servers are hybrid nodes with a lot of OSDs, and the other 3 nodes are VM processing nodes. Everything is backed by 2 x dual-port 10G NICs in the hybrid nodes, 1 x dual-port 10G NIC in the processing nodes, and two stacked N3K switches. Ceph does the job for VM storage and...
  15. cephfs mount error: exit code 16 (500)

    Hello, on some servers in the cloud I see this error while trying to check CephFS content: mount error: exit code 16 (500). I have the following package versions:
  16. proxmox-managed cephfs mount on external computer (non-proxmox) can't access file content

    I have successfully integrated Ceph (Proxmox-based) in all the LXC containers; now I want to integrate it outside of Proxmox for some users with read-only access, to replace the current NFS share. What do I need to do? What params do I put in /etc/fstab?
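    For the read-only external mount asked about above, a hedged fstab sketch (the monitor addresses, the CephX user, and the key-file path are all placeholders):

    ```shell
    # /etc/fstab - read-only CephFS mount on a non-Proxmox client.
    # mon1,mon2,mon3, the "readonly" user, and the secret path are placeholders.
    mon1,mon2,mon3:/  /mnt/cephfs  ceph  name=readonly,secretfile=/etc/ceph/readonly.secret,ro,_netdev  0  0
    ```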
  17. ceph tooling fault when creating MDS

    Good evening, I posted in another thread (https://forum.proxmox.com/threads/proxmox-6-ceph-mds-stuck-on-creating.57524/#post-268549) that was created on the same topic and just hopped onto it, but that thread seems to be dead, so I am trying my luck here to see if this is a general problem...
  18. Ceph show "slow requests are blocked" when creating / modifying CephFS

    Hi, I have noticed in Ceph log (ceph -w) an increase of "slow requests are blocked" when I create CephFS, e.g. 2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0 2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
  19. Cephfs storage not unmounted on removal?

    Hello, After adding and then removing a cephfs instance in the storage gui I noticed that it was not unmounted and/or deleted from /mnt/pve/[title]. I was wondering if this was intentional or not? Note: This was my 2nd cephfs storage instance in case that matters. I cannot remove my primary...
  20. Mounting CEPHFS on a client

    I would like to mount CephFS on a client. Since the CephFS version is Nautilus, I decided to use, as the client, a container running CentOS 7. It might as well have been an external physical machine; it just happened that I wanted to try with a container. Yes, CephFS is already installed on Proxmox and working...

