New novice user trying to use Proxmox. This software is very unpleasant.
Could this possibly be any more overcomplicated and unfriendly?
creating data pool 'cephfs_data'...
pool cephfs_data: applying application = cephfs
pool cephfs_data: applying pg_num = 128
creating metadata pool...
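For reference, output like the above is what you typically get from the PVE helper that creates the file system and its pools in one step; roughly something like this (the name and pg_num are just the defaults shown in the log):

pveceph fs create --name cephfs --pg_num 128 --add-storage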
I want to mount a CephFS via systemd.
With the default service template it didn't start, so I made an override:
/etc/systemd/system/ceph-fuse@-mnt-ceph.service.d/override.conf
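A minimal sketch of such a drop-in, assuming the unit only failed because it started before the network/MONs were reachable (adjust to whatever your actual failure was):

[Unit]
# only attempt the mount once the network is really up
Wants=network-online.target
After=network-online.target

[Service]
# retry instead of giving up if the MONs are not reachable yet at boot
Restart=on-failure
RestartSec=10

Then reload and enable it:

systemctl daemon-reload
systemctl enable --now ceph-fuse@-mnt-ceph.service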
I'm trying to add external CephFS storage directly in /etc/pve/storage.cfg because I need to specify "subdir" (it is not possible to set subdir in the GUI).
But I also have to specify 'FS name' because it's not the default CephFS file system. Can I add 'FS name' to /etc/pve/storage.cfg, and how?
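A sketch of what such an entry could look like, assuming a reasonably current PVE 7 (storage ID, monitor IPs, user and FS name below are placeholders; the client key would go into /etc/pve/priv/ceph/ext-cephfs.secret):

cephfs: ext-cephfs
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        username admin
        fs-name my-second-fs
        subdir /some/subdir
        path /mnt/pve/ext-cephfs
        content backup,iso,vztmpl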
We have a PBS running in our dev environment with a tape library attached, but as I found out, you can only back up to tape if the backups are on a datastore beforehand...
Originally I was planning to back up directly to tape, so I deployed a 1U PBS server that barely has enough space for...
Hello everyone,
we have the following problem:
It has now happened for the second time that all nodes of our cluster that run Ceph rebooted without any warning.
We can't find the cause and hope you can give us some tips on where to look.
9 nodes ...
I'm looking for some help/ideas/advice in order to solve a problem that occurs on my metadata server after the server reboot.
"ceph status" warns about my MDS being "read only", but the filesystem and the data seem healthy.
It is still possible to access the content of my CephFS volumes...
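In case it helps anyone hitting the same thing, this is roughly what I would look at first, assuming a single active MDS (the daemon name below is a placeholder):

# see which MDS is active and why the cluster flags it read-only
ceph health detail
ceph fs status
# a failover of the affected MDS often clears the read-only state
ceph mds fail pve-node1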
First time posting here. I have a Proxmox cluster with 3 nodes, all identical HP ProLiant (older models) servers with 10G networking for Ceph.
The VMs are a mix of Win10 and CentOS 8. What I am trying to achieve is to have a part of CephFS treated as a directory which can be shared...
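One possible way to do that (just a sketch; mon addresses, the 'shared' subtree, user and paths are all made up for the example) is to mount the relevant CephFS subtree on one of the CentOS 8 guests and export it to the Windows 10 guests over Samba:

# on the CentOS 8 guest: mount only the 'shared' subtree of the CephFS
mkdir -p /srv/shared
mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/shared /srv/shared -o name=shareuser,secretfile=/etc/ceph/shareuser.secret

# export it to the Windows VMs with Samba
dnf install -y samba
cat >> /etc/samba/smb.conf <<'EOF'
[shared]
   path = /srv/shared
   read only = no
EOF
systemctl enable --now smb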
We have a "Lab" Ceph Object Storage consisting of a 4x Multinode Server and the following Node components:
PVE Manager Version pve-manager/7.1-7/df5740ad
Kernel Version Linux 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100)
24 x Intel(R) Xeon(R) CPU X5675 @...
I am running a 5-node Ceph cluster (Octopus), and when I increased the number of active MDS daemons from 2 to 4 I saw a performance gain in my CephFS.
Since I have a lot of clients using the CephFS pool, I think it might be a good idea to increase the number of MDS daemons even more...
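For the record, the knob for that is max_mds on the file system (assuming the FS is called 'cephfs'); you need at least as many running MDS daemons as the value you set, plus ideally a standby:

# check the current layout first
ceph fs status
# raise the number of active MDS daemons from 4 to e.g. 6
ceph fs set cephfs max_mds 6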
TL;DR - Upgrade from 16.2.5 to 16.2.6 - CephFS fails to start after upgrade, all MDS in "standby" - requires
ceph fs compat <fs name> add_incompat 7 "mds uses inline data"
to work again.
Longer version :
pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-5-pve)
apt dist-upgraded, CEPH...
We are using CephFS on a 3-node Proxmox cluster. We have mounted the CephFS to /home on several different Debian clients.
All Debian clients (servers) see the files of the other Debian clients in the CephFS mount (/home).
It happens that client XY has services on Debian client 1 and Debian...
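If the goal is that each Debian client only sees its own subtree, a hedged sketch of per-client, path-restricted cephx keys (FS name, client IDs and paths are examples):

# create keys that are limited to one subtree instead of the whole FS root
ceph fs authorize cephfs client.debian1 /home/client1 rw
ceph fs authorize cephfs client.debian2 /home/client2 rw
# then mount only that subtree on each client, e.g. on Debian client 1:
mount -t ceph 10.0.0.1:/home/client1 /home -o name=debian1,secretfile=/etc/ceph/debian1.secret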
I recently upgraded to Proxmox 7 and Ceph Pacific which brought multiple CephFS support. My goal was to create one FS on my HDD OSDs and one FS on my SSD OSDs so I can balance workloads across the two sets of hardware. I have a "performance" and "capacity" crush rule. Previously, I had 2 RBD...
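In case it is useful to someone else, a rough sketch of how the second file system could be tied to a device class (rule, pool and FS names are examples; pg numbers are arbitrary):

# crush rules bound to a device class
ceph osd crush rule create-replicated performance default host ssd
ceph osd crush rule create-replicated capacity default host hdd
# pools for the second file system (metadata kept on the fast rule)
ceph osd pool create cephfs_hdd_data 128 128 replicated capacity
ceph osd pool create cephfs_hdd_metadata 32 32 replicated performance
# some versions still require enabling multiple file systems first:
# ceph fs flag set enable_multiple true
ceph fs new cephfs_hdd cephfs_hdd_metadata cephfs_hdd_data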
This will probably not hit many people but it bit me and should be in the doc, at least until Octopus packages are upgraded to 15.2.14.
The Bug that hit me:
Fixed in 15.2.14:
It was not easy to downgrade to Octopus but it can be done and everything is...
Hello, we've noticed some latency in the last month in our Ceph cluster, and when I checked the Ceph dashboard I found the warning in the attached file. As I understand it, this means the mds_cache_memory_limit property is not configured correctly. Can that be the reason why we experience latency in the...
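For what it's worth, that limit can be checked and raised at runtime (the 8 GiB below is just an example value; size it to the RAM of the MDS host):

ceph config get mds mds_cache_memory_limit
ceph config set mds mds_cache_memory_limit 8589934592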
I've been running around in circles trying to figure this out... what's the best/most direct way to get more than one CephFS running/working on a pmx7 cluster with the pool types NOT matching?
I.e., I'd like to have the following:
1. /mnt/pve/cephfs - replicated, SSD
2. /mnt/pve/ec_cephfs - erasure...
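A sketch of how the second, erasure-coded file system could be created on the Ceph side (pool and FS names are examples; the default EC profile is assumed):

# EC data pool; overwrites must be allowed for CephFS to use it
ceph osd pool create ec_cephfs_data 128 128 erasure
ceph osd pool set ec_cephfs_data allow_ec_overwrites true
# metadata has to stay on a replicated pool
ceph osd pool create ec_cephfs_metadata 32 32 replicated
# --force is needed because the default data pool is erasure-coded
ceph fs new ec_cephfs ec_cephfs_metadata ec_cephfs_data --force

To actually get it mounted as /mnt/pve/ec_cephfs, the storage entry (or the GUI) then has to point at it, e.g. via the fs-name option of the cephfs storage type.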
I have a PBS system where I need to mount a CephFS file system.
I have managed to mount this using this command:
mount -t ceph 192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/ /mnt/mycephfs -o name=bob,secret=xxxxxxxxxxxxxxxxxxxxxxx==
This is working like a charm and I have access to my...
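To make that survive reboots, the same mount can go into /etc/fstab (using secretfile instead of putting the key on the command line; the secret file path is an example):

192.168.215.4,192.168.215.3,192.168.215.2,192.168.215.1:/ /mnt/mycephfs ceph name=bob,secretfile=/etc/ceph/bob.secret,_netdev,noatime 0 0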
We have a large 4-node cluster with about 419 TB split into two main pools, one for NVMe-based disks and another one for SSDs.
We are planning to use the NVMe RBD pool to store our VMs and the other pool to store shared data.
The shared data will be very voluminous, with over 100 million files.
We have a 3-node Ceph/PVE cluster (PVE 6.4-5 / Ceph 15.2.11) and ran a few failure tests. In doing so we noticed that Ceph pretty much stops responding altogether when the links of the Ceph public and Ceph cluster networks are down. Even the (pve-)ceph commands give...
Hi, I've started to use PBS for VM and container backups, but I can't find a way to back up Ceph file systems... I've created a CephFS in the Proxmox cluster.
Is there any proper way to do it? If not, are there any plans for Proxmox or PBS to support this in upcoming releases?
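Until something like that exists, one workaround I can think of (just a sketch; repository, user and paths are placeholders) is to back up the mounted CephFS content file-based with the PBS client:

proxmox-backup-client backup cephfs-data.pxar:/mnt/pve/cephfs --repository backupuser@pbs@192.168.1.10:datastore1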
I have a 6 server cluster:
3 servers are hybrid nodes with a lot of OSDs, and the other 3 nodes are VM processing nodes.
Everything is backed by 2x dual-port 10G NICs in the hybrid nodes and 1x dual-port 10G NIC in the processing nodes, plus two stacked N3K switches.
Ceph handles VM storage and...