We have a "Lab" Ceph Object Storage consisting of a 4x Multinode Server and the following Node components:
Per Node:
PVE Manager Version pve-manager/7.1-7/df5740ad
Kernel Version Linux 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100)
24 x Intel(R) Xeon(R) CPU X5675 @...
Hi,
I am running a 5-node Ceph cluster (Octopus), and when I increased the number of active MDS daemons from 2 to 4 I saw a performance gain in my CephFS.
Since I have a lot of clients using the CephFS pool, I think it might be a good idea to increase the number of MDS daemons even more...
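For reference, raising the active MDS count is a single command; this is only a sketch and assumes the filesystem is actually named "cephfs" (adjust to your FS name, and keep at least one standby per active rank):
# set the number of active MDS daemons for the filesystem
ceph fs set cephfs max_mds 4
# verify that the additional ranks come up
ceph fs status cephfs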
TL;DR - Upgrade from 16.2.5 to 16.2.6 - CephFS fails to start after the upgrade, all MDS daemons stay in "standby" - requires
ceph fs compat <fs name> add_incompat 7 "mds uses inline data"
to work again.
Longer version:
pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve)
apt dist-upgraded, CEPH...
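For anyone hitting the same thing, the workaround boils down to the command quoted above; a minimal sketch, assuming the filesystem is named "cephfs":
# confirm the MDS daemons are all sitting in standby
ceph mds stat
# add the missing compat flag so the MDS daemons can join the filesystem again
ceph fs compat cephfs add_incompat 7 "mds uses inline data"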
Hi,
we are using CephFS on a 3-node Proxmox cluster. We have mounted the CephFS to /home on several different Debian clients.
All Debian clients (servers) see the files of the other Debian clients in the CephFS mount (/home).
It happens that client XY has services on Debian client 1 and Debian...
I recently upgraded to Proxmox 7 and Ceph Pacific, which brought support for multiple CephFS instances. My goal was to create one FS on my HDD OSDs and one FS on my SSD OSDs so I can balance workloads across the two sets of hardware. I have a "performance" and a "capacity" CRUSH rule. Previously, I had 2 RBD...
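If the pool-to-rule mapping is the sticking point, a minimal sketch (the data pool names below are hypothetical placeholders; the rule names are the ones mentioned above):
# pin each filesystem's data pool to the matching CRUSH rule
ceph osd pool set cephfs_ssd_data crush_rule performance
ceph osd pool set cephfs_hdd_data crush_rule capacity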
This will probably not hit many people but it bit me and should be in the doc, at least until Octopus packages are upgraded to 15.2.14.
The Bug that hit me:
https://tracker.ceph.com/issues/51673
Fixed in 15.2.14:
It was not easy to downgrade to Octopus, but it can be done, and everything is...
Hello, we've noticed some latency over the last month in our Ceph cluster, and when I checked the Ceph dashboard I found the warning shown in the attached file. As I understand it, this means the mds_cache_memory_limit property is not configured correctly. Can that be the reason why we experience latency in the...
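If it does turn out to be the cache limit, it can be adjusted centrally; a rough sketch (the 8 GiB value is only an example and should be sized to the RAM actually available on the MDS nodes):
# raise the MDS cache memory limit to 8 GiB for all MDS daemons
ceph config set mds mds_cache_memory_limit 8589934592
# confirm the value stored in the monitor config database
ceph config get mds mds_cache_memory_limit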
I've been running around in circles trying to figure this out... what's the best/most direct way to get more than one CephFS running/working on a Proxmox 7 cluster with the pool types NOT matching?
I.e., I'd like to have the following:
1. /mnt/pve/cephfs - replicated, SSD
2. /mnt/pve/ec_cephfs - erasure...
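One possible approach, sketched with hypothetical pool/FS names (the metadata pool must stay replicated; using an EC pool directly as the default data pool needs allow_ec_overwrites and --force):
# replicated metadata pool plus an erasure-coded data pool for the second FS
ceph osd pool create ec_cephfs_metadata 32 replicated
ceph osd pool create ec_cephfs_data 128 erasure
ceph osd pool set ec_cephfs_data allow_ec_overwrites true
ceph osd pool application enable ec_cephfs_data cephfs
ceph fs new ec_cephfs ec_cephfs_metadata ec_cephfs_data --force
The new filesystem can then be added as a storage entry in the GUI or storage.cfg so it shows up under /mnt/pve/ec_cephfs.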
I have a PBS system where I need to mount a CephFS file system.
I managed to mount it using this command:
mount -t ceph 192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/ /mnt/mycephfs -o name=bob,secret=xxxxxxxxxxxxxxxxxxxxxxx==
This is working like a charm and I have access to my...
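A small variation on the same mount that keeps the key off the command line; the /etc/ceph/bob.secret path is just a placeholder:
# store the key in a root-only file and reference it with secretfile= instead of secret=
echo 'xxxxxxxxxxxxxxxxxxxxxxx==' > /etc/ceph/bob.secret
chmod 600 /etc/ceph/bob.secret
mount -t ceph 192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/ /mnt/mycephfs -o name=bob,secretfile=/etc/ceph/bob.secret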
We have a large 4-node cluster with about 419 TB split across two main pools, one for NVMe-based disks and another one for SSDs.
We are planning to use the NVMe RBD pool to store our VMs and the other pool to store shared data.
The shared data will be very voluminous, with more than 100 million files.
Besides...
Hello,
we have a 3-node Ceph/PVE cluster (PVE 6.4-5 / Ceph 15.2.11) and have run a few failure tests. We noticed that Ceph becomes practically unresponsive when the links for the Ceph public and Ceph cluster networks are down. Even the (pve)ceph commands return...
Hi, I've started to use PBS for VM and container backups, but I can't find a way to back up Ceph file systems... I've created a CephFS in the Proxmox cluster.
Is there a proper way to do it? If not, are there any plans for Proxmox or PBS to support this feature in upcoming releases?
Thanks!
Hi!
I have a 6-server cluster:
3 servers are hybrid nodes with a lot of OSDs, and the other 3 are VM processing nodes.
Everything is backed by 2x dual-port 10G NICs in the hybrid nodes and 1x dual-port 10G NIC in the processing nodes, plus two stacked N3K switches.
Ceph does the job for VM storage and...
Hello,
On some servers in the cloud I see this error while trying to view the CephFS content:
mount error: exit code 16 (500)
I have the following package versions:
I have successfully integrated Ceph (Proxmox-based) into all the LXC containers.
Now I want to integrate it outside of Proxmox for some users with read-only access, to replace the current NFS share.
What do I need to do? What parameters should I put in /etc/fstab?
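A rough sketch of one way to do it, with hypothetical client name, monitor addresses, and secret path (read-only access comes from both the client caps and the ro mount option):
# create a client with read-only caps on the filesystem (run on a cluster node; save the printed key into the secret file on the client)
ceph fs authorize cephfs client.roshare / r
# /etc/fstab entry on the external machine
10.0.0.1,10.0.0.2,10.0.0.3:/  /mnt/cephfs  ceph  name=roshare,secretfile=/etc/ceph/roshare.secret,ro,_netdev  0  0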
Good evening,
I posted in another thread (https://forum.proxmox.com/threads/proxmox-6-ceph-mds-stuck-on-creating.57524/#post-268549) that was created on the same topic and just hopped onto it, but that thread seems to be dead. So I am trying my luck here to see if this is a general problem...
Hi,
I have noticed in Ceph log (ceph -w) an increase of "slow requests are blocked" when I create CephFS, e.g.
2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0
2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
Hello,
After adding and then removing a CephFS instance in the storage GUI, I noticed that it was not unmounted and/or deleted from /mnt/pve/[title]. I was wondering whether this was intentional or not.
Note: this was my 2nd CephFS storage instance, in case that matters. I cannot remove my primary...
I would like to mount CephFS on a client.
Since the CephFS version is Nautilus, I decided to use a container running CentOS 7 as the client. It might as well have been an external physical machine; it just happened that I wanted to try it with a container. Yes, CephFS is already installed on Proxmox and working...
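A possible starting point on the CentOS 7 side is the FUSE client, since it does not depend on the container's kernel; the monitor address and keyring path below are placeholders, and the ceph-fuse package has to come from a Ceph el7 repository configured on the client:
# install the FUSE client and mount the filesystem
yum install -y ceph-fuse
mkdir -p /mnt/cephfs
ceph-fuse -n client.admin -k /etc/ceph/ceph.client.admin.keyring -m 192.168.1.10:6789 /mnt/cephfs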
I am currently evaluating Proxmox in a cluster environment and intend to expand it to 7 storage nodes and 7 compute nodes to harness the storage provided by Ceph. I have spent the last few weeks formatting the machines and reinstalling every time I make a Ceph...