CephFS

  1. CephFS two-way mirror

    Hi, I'm trying to get two CephFS shares on two different sites to mirror in both directions. To achieve this I was following this guide: https://docs.ceph.com/en/latest/dev/cephfs-mirroring/ But when I try to "systemctl enable cephfs-mirror@mirror" I get an error that this service...
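    A frequent cause of that exact error is that the `cephfs-mirror@.service` template is missing because the daemon ships in a separate package. A minimal sketch of the guide's steps, assuming Debian-based nodes and a placeholder filesystem/user name:

    ```shell
    # Sketch only; the package name and unit template vary by distribution and release.
    apt install cephfs-mirror                 # package that ships the cephfs-mirror binary and unit
    ceph fs snapshot mirror enable cephfs     # enable snapshot mirroring on the source FS
    # the instance name after '@' is the cephx user the daemon authenticates as,
    # i.e. cephfs-mirror@mirror expects a client.mirror keyring to exist
    systemctl enable --now cephfs-mirror@mirror.service
    ```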
  2. [TUTORIAL] What are the steps to create and mount CephFS on a Linux VM?

    It would help if the steps to create CephFS were GUI-based. I currently have Ceph configured as shared storage on all 3 nodes. The Linux VMs are on local storage. I need to use CephFS to create a shared folder between 2 VMs on different nodes.
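    On Proxmox the creation side can be done from the GUI (Node → Ceph → CephFS) or equivalently on the CLI; mounting inside a VM then uses the kernel client. A minimal sketch, with placeholder IPs and a hypothetical client name:

    ```shell
    # On one PVE node (CLI equivalent of the GUI steps):
    pveceph mds create                              # metadata server on this node
    pveceph fs create --name cephfs --add-storage   # pools + FS, registered as PVE storage
    # Inside each VM that should see the shared folder (needs its own cephx key):
    mount -t ceph 192.0.2.10:/ /mnt/shared \
        -o name=vmclient,secretfile=/etc/ceph/vmclient.secret
    ```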
  3. Proxmox CephFS Permission Denied

    We're successfully using Ceph on Proxmox and have started trying out CephFS. We are able to mount it and create a file, but cannot then write to the file; it shows the error below: root@<redacted>:/mnt/ceph# echo "test" > /mnt/ceph/testfile -bash: echo: write error: Operation not permitted...
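    "Operation not permitted" on write with a working mount usually points at cephx caps that allow MDS access but not writes to the data pool. A hedged sketch of creating a client with matching caps (the client name is hypothetical):

    ```shell
    # Simplest form: let Ceph derive consistent caps for the filesystem
    ceph fs authorize cephfs client.writer / rw
    # Roughly equivalent explicit caps; the osd 'rw' on the data pool is
    # what a mount-but-cannot-write client is typically missing:
    ceph auth get-or-create client.writer mon 'allow r' mds 'allow rw' \
        osd 'allow rw tag cephfs data=cephfs'
    ```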
  4. [SOLVED] PVE 8.1.3, CephFS ISO viewing permissions

    [SOLVED] user error - permissions for group(s) for / Hello, I have a fresh installation of PVE 8.1.3 using PAM auth, Ceph Reef 18.2.0, and CephFS as the ISO datastore. If I'm logged in as root and create a VM, the ISOs are listed on the OS tab. I then create a user with permissions...
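    When ISOs list for root@pam but not for a new user, the missing piece is usually an ACL on the storage path. A sketch with a hypothetical user and storage name:

    ```shell
    # Grant the user permission to browse and use the ISO storage;
    # PVEDatastoreUser includes Datastore.Audit, which listing content needs
    pveum acl modify /storage/cephfs --users test@pam --roles PVEDatastoreUser
    ```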
  5. CephFS Scrub or Trim

    Do I need to configure scrub or trim for CephFS when it has SSD/NVMe OSD drives? I know I can start a scrub manually (I see active+clean+scrubbing: 1), but how do I configure it to run automatically?
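    Scrubbing here is the RADOS-level consistency check, and it already runs on a schedule by default; there is nothing CephFS-specific to enable, and SSD/NVMe makes no difference to whether it is needed. The options below tune the built-in schedule (values are examples):

    ```shell
    ceph config set osd osd_scrub_min_interval 86400     # shallow scrub: at most once a day
    ceph config set osd osd_deep_scrub_interval 604800   # deep scrub: weekly (the default)
    ceph config set osd osd_scrub_begin_hour 1           # optionally confine scrubs
    ceph config set osd osd_scrub_end_hour 6             # to a quiet-hours window
    ```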
  6. CephFS doesn't mount on CentOS 8

    Hi! Please help me understand what happened. I have PVE 7 and Ceph 17.2.5. I created a CephFS and tried to mount it on my freshly installed CentOS 8 server, but there is an error in the CLI: mount -t ceph 10.20.0.120:/cephfs /mnt/cephfs -o name=user_test,secret=%ANY-SECRET% mount: /mnt/cephfs...
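    Two things commonly break this exact mount on CentOS 8: `secret=` must be the literal base64 key (a `secretfile=` is safer), and on that kernel a non-default filesystem is selected with the `mds_namespace=` option, not a path component like `:/cephfs`. A sketch, with the key material left as a placeholder:

    ```shell
    # create a 0600 file and paste in the output of 'ceph auth get-key client.user_test'
    install -m 600 /dev/null /etc/ceph/user_test.secret
    # mount the root of the named filesystem:
    mount -t ceph 10.20.0.120:/ /mnt/cephfs \
        -o name=user_test,secretfile=/etc/ceph/user_test.secret,mds_namespace=cephfs
    ```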
  7. Can’t create CephFS

    New novice user trying out Proxmox. This software is very unpleasant; could it possibly be any more overcomplicated and unfriendly? creating data pool 'cephfs_data'... pool cephfs_data: applying application = cephfs pool cephfs_data: applying pg_num = 128 creating metadata pool...
  8. Mount ceph-fuse via systemd

    Hi, I want to mount a CephFS via systemd. With the default service template it didn't start, so I made an override in /etc/systemd/system/ceph-fuse@-mnt-ceph.service.d/override.conf...
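    A common reason the stock `ceph-fuse@.service` fails at boot is ordering against the network, which a drop-in can fix without editing the packaged unit. A sketch (the instance name encodes the mountpoint, with `/` escaped as `-`; the ordering fix is an assumption about the failure):

    ```shell
    mkdir -p /etc/systemd/system/ceph-fuse@-mnt-ceph.service.d
    cat > /etc/systemd/system/ceph-fuse@-mnt-ceph.service.d/override.conf <<'EOF'
    [Unit]
    # assumption: the failure is a race with network availability at boot
    After=network-online.target
    Wants=network-online.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-fuse@-mnt-ceph.service
    ```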
  9. Mount external CephFS - no fsname in /etc/pve/storage.cfg

    Hi, I'm trying to add external CephFS storage directly in /etc/pve/storage.cfg because I need to specify "subdir" (it is not possible to set subdir in the GUI). But I also have to specify the FS name because it's not the default CephFS filesystem. Can I add 'FS name' to /etc/pve/storage.cfg, and how?
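    The CephFS storage plugin does accept both options directly in /etc/pve/storage.cfg even where the GUI does not expose them. A hedged sketch of such an entry (storage name, monitors, and the exact `fs-name` key should be checked against your PVE version):

    ```
    cephfs: external-cephfs
            path /mnt/pve/external-cephfs
            monhost 192.0.2.1 192.0.2.2 192.0.2.3
            content backup,iso
            username storageuser
            fs-name otherfs
            subdir /proxmox
    ```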
  10. Can you mount CephFS on PBS as a Datastore?

    Hello guys, we have a PBS running in our dev environment with a tape library attached, but as I found out, you can only back up to tape if the backups are on a datastore beforehand... Originally I was planning to back up directly to tape, so I deployed a 1U PBS server that barely has enough space for...
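    Yes - PBS datastores are just directories, so a CephFS mount can serve as the staging datastore in front of the tape library. A minimal sketch with placeholder names:

    ```shell
    # mount CephFS on the PBS host (a hypothetical client.pbs keyring is assumed)
    mount -t ceph 192.0.2.10:/ /mnt/cephfs \
        -o name=pbs,secretfile=/etc/ceph/pbs.secret
    # register a subdirectory as a datastore, then target it from the tape backup jobs
    proxmox-backup-manager datastore create ceph-staging /mnt/cephfs/pbs-datastore
    ```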
  11. [SOLVED] Unplanned reboot of all cluster nodes running Ceph

    Hello everyone, we have the following problem: for the second time now, all nodes of our cluster that run Ceph have rebooted without warning. We cannot find the cause and hope you can give us tips on where to look. Cluster: 9 nodes ...
  12. Ceph filesystem stuck in read-only

    Hi, I'm looking for help/ideas/advice to solve a problem that occurs on my metadata server after a server reboot. "ceph status" warns that my MDS is "read only", but the filesystem and the data seem healthy. It is still possible to access the content of my CephFS volumes...
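    An MDS drops to read-only when a journal write fails; since the data itself is reported healthy, the first step is usually a failover rather than repair. A cautious sketch:

    ```shell
    ceph health detail    # shows why the MDS went read-only
    ceph fs status        # identifies the affected rank/daemon
    ceph mds fail 0       # fail rank 0 over to a standby, which replays the journal
    # only if replay keeps failing, look at journal inspection/repair tooling -
    # and only with backups in place, as those commands can discard metadata
    ```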
  13. Shared storage for VMs on CephFS

    Hi, first time posting here. I have a Proxmox cluster with 3 nodes, all identical HP ProLiant (older model) servers with 10G networking for Ceph. The VMs are a mix of Windows 10 and CentOS 8. What I am trying to achieve is to have a part of CephFS treated as a directory which can be shared...
  14. Extremely slow Ceph storage above 60% usage?

    We have a "lab" Ceph object storage cluster consisting of 4 multinode servers with the following components per node: PVE Manager version pve-manager/7.1-7/df5740ad, kernel Linux 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100), 24 x Intel(R) Xeon(R) CPU X5675 @...
  15. Ceph multiple MDS on the same node

    Hi, I am running a 5-node Ceph cluster (Octopus), and when I increased the number of active MDS daemons from 2 to 4 I saw a performance gain in my CephFS. Since I have a lot of clients using the CephFS pool, I think it might be a good idea to increase the number of MDS daemons even more...
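    The active MDS count is a filesystem setting, independent of where the daemons run; each active rank needs its own daemon, plus standbys for failover. Sketch:

    ```shell
    ceph fs set cephfs max_mds 4    # four active ranks
    ceph fs status cephfs           # verify ranks 0-3 go active
    # several daemons on one node are just several ceph-mds@<id> instances with
    # distinct ids; they compete for RAM/CPU there, and losing that node can
    # take out multiple active ranks at once
    ```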
  16. Ceph 16.2.6 - CephFS failed after upgrade from 16.2.5

    TL;DR - Upgrade from 16.2.5 to 16.2.6 - CephFS fails to start after the upgrade, all MDS in "standby" - requires ceph fs compat <fs name> add_incompat 7 "mds uses inline data" to work again. Longer version: pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve), apt dist-upgraded, Ceph...
  17. CephFS user/permission conflict between different nodes (best practices?)

    Hi, we are using CephFS on a 3-node Proxmox cluster. We have mounted the CephFS to /home on several different Debian clients. All Debian clients (servers) see the files of the other Debian clients in the CephFS mount (/home). It happens that client XY has services on Debian client 1 and Debian...
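    The usual fix for clients seeing each other's files is to stop sharing one cephx user and one mount root: give each client caps restricted to its own subtree. Sketch with hypothetical client and subtree names:

    ```shell
    ceph fs authorize cephfs client.debian1 /homes/debian1 rw
    ceph fs authorize cephfs client.debian2 /homes/debian2 rw
    # each machine then mounts only its own subtree as /home:
    mount -t ceph 192.0.2.10:/homes/debian1 /home \
        -o name=debian1,secretfile=/etc/ceph/debian1.secret
    ```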
  18. Confusing Ceph GUI info when using multiple CephFS volumes

    I recently upgraded to Proxmox 7 and Ceph Pacific, which brought support for multiple CephFS volumes. My goal was to create one FS on my HDD OSDs and one FS on my SSD OSDs so I can balance workloads across the two sets of hardware. I have a "performance" and a "capacity" crush rule. Previously, I had 2 RBD...
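    For reference, the two-filesystem setup itself maps each filesystem's pools to a crush rule per device class. A sketch (rule and pool names are hypothetical, PG counts illustrative):

    ```shell
    ceph osd crush rule create-replicated performance default host ssd
    ceph osd crush rule create-replicated capacity    default host hdd
    ceph osd pool create ssd_data 64 64 replicated performance
    ceph osd pool create ssd_metadata 32 32 replicated performance
    ceph fs new ssdfs ssd_metadata ssd_data
    ```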
  19. [SOLVED] [Warning] Ceph Upgrade Octopus 15.2.13 to Pacific 16.2.x

    This will probably not hit many people, but it bit me and should be in the docs, at least until Octopus packages are upgraded to 15.2.14. The bug that hit me: https://tracker.ceph.com/issues/51673 (fixed in 15.2.14). It was not easy to downgrade to Octopus, but it can be done, and everything is...
  20. Ceph MDS reports oversized cache

    Hello, we've noticed some latency over the last month in our Ceph cluster, and when I checked the Ceph dashboard I found the warning in the attached file. As I understand it, this means the mds_cache_memory_limit property is not configured correctly. Could that be the reason why we experience latency in the...
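    The warning means the MDS's actual memory use exceeds mds_cache_memory_limit (4 GiB by default), often because clients hold many caps; on a host with spare RAM, raising the limit is the usual first response. Sketch:

    ```shell
    ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB, up from the 4 GiB default
    ceph config get mds mds_cache_memory_limit              # confirm the new value
    ceph tell mds.'*' cache status                          # compare actual usage to the limit
    ```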
