Hi,
After creating an MDS and CephFS manually in my cluster, I want to create a storage of type cephfs.
However, this fails with an error:
error with cfs lock 'file-storage_cfg': mount error: exit code 2
This is the complete output:
root@ld3955:~# pvesm add cephfs pve_cephfs
mount error 2 = No such...
I am using Proxmox 5.4 with CephFS and multiple file systems. One filesystem is called cephfs and sits on NVMe; the other, cephfs_slow, is on standard SATA. I can manually mount each file system with:
mount -t ceph -o mds_namespace=cephfs virt0,virt4,virt8,virt12:/ /foo
mount -t ceph...
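For reference, a minimal sketch for pinning a mount to one of the two filesystems; mds_namespace is taken from the command above, fs= is its newer spelling, and the fs-name storage option only exists on newer PVE releases (so treat that last line as an assumption about your version):

# confirm the filesystem names first
ceph fs ls
# kernel mount of a specific filesystem (newer kernels use fs= instead of mds_namespace=)
mount -t ceph -o name=admin,mds_namespace=cephfs_slow virt0,virt4,virt8,virt12:/ /foo
# on PVE versions that support it, the storage itself can be pinned to one filesystem:
pvesm add cephfs pve_cephfs --fs-name cephfs --content backup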
So I'm trying to configure a Proxmox cluster with Ceph.
From what I can see, I can create a pool directly and use it (as in, add it to the cluster storage as RBD).
In order to create a CephFS storage in Proxmox, I need to create 2 separate Ceph pools and then create the CephFS specifying the pool...
So I am trying to configure Proxmox to use the Ceph server (on 3 nodes/servers).
I have created the OSDs, the pool, and the metadata servers.
I'm following this: https://pve.proxmox.com/pve-docs/chapter-pveceph.html
I added the Ceph server to the cluster storage and can use it.
Now I'm...
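For what it's worth, a minimal sketch of the CephFS part with pveceph (run on the node that should host the MDS); pveceph fs create creates the data and metadata pools for you and can register the storage in one step:

# create a metadata server on this node
pveceph mds create
# create the data + metadata pools, the filesystem, and the PVE storage entry
pveceph fs create --name cephfs --add-storage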
Hi,
Is there any way to change the read-ahead of CephFS?
According to:
docs.ceph.com/docs/master/man/8/mount.ceph/
and:
lists.ceph.com/pipermail/ceph-users-ceph.com/2016-November/014553.html
(could not place a hyperlink - new user)
this should improve reading of single large files.
Right now...
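For what it's worth, a sketch of the two read-ahead knobs I know of, one for the kernel client and one for ceph-fuse (the monitor host and the 64 MiB value are just examples):

# kernel client: read-ahead window via the rasize mount option (bytes)
mount -t ceph mon1:/ /mnt/cephfs -o name=admin,rasize=67108864
# ceph-fuse: set client_readahead_max_bytes under [client] in ceph.conf, e.g.
#   client_readahead_max_bytes = 67108864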
When creating a regular (RBD) Ceph pool, there are options in both the GUI and in pveceph to determine the size (replication count) and the min. size (minimum replicas required for I/O) of the pool. However, when creating a CephFS pool, neither the GUI nor pveceph provides an option to create one with a...
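Until that option exists, a workaround sketch: create the filesystem first and adjust replication on its two pools afterwards (pool names are the pveceph defaults; adjust to yours):

ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_data min_size 2
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_metadata min_size 2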
Hey,
I am a little bit confused, or maybe I am just missing a small detail.
My plan is to deploy and mount a CephFS directory on a samba VM.
Thus I created an MDS in the Proxmox CephFS GUI, but the address of the MDS is from my cluster network, where the OSDs communicate.
Well, I tried to...
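In case it helps, the usual split as a ceph.conf sketch with hypothetical subnets: clients such as the samba VM reach MON and MDS over the public network, while the cluster network carries only OSD replication traffic:

[global]
    public_network  = 192.168.10.0/24   # hypothetical: clients, MON, MDS
    cluster_network = 10.10.10.0/24     # hypothetical: OSD replication only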
The PVE 5.3 release notes says: "The distributed file system CephFS eliminates the need for external file storage such as NFS or Samba and thus helps reducing hardware cost and simplifies management."
1. I have never used this, and as I am about to set up new PVE servers, I would like to get some...
Hi,
I have 2 SSDs per node and 6 nodes, which makes 12 SSDs.
Now, what will give me good capacity and resilience against failures?
I am confused between choosing:
EC21 --- i.e. K=2, M=1 - 66% capacity
EC42 --- i.e. K=4, M=2 - 66% capacity
EC32 --- i.e. K=3, M=2 - 60%...
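For comparison: the usable fraction of raw capacity is K/(K+M), and a pool survives up to M simultaneous OSD failures, so EC 2+1 and EC 4+2 both give ~66% but 4+2 tolerates two failures instead of one. A sketch of creating such a pool (profile and pool names are made up):

# usable capacity = K/(K+M); tolerates M failures; one chunk per host with this failure domain
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec42

Note that with 6 nodes, a 4+2 profile places one chunk on every host, leaving no spare host to recover onto while one is down.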
When a backup volume is created, it allows setting the maximum number of backups in PVE 4.4, as shown below.
In PVE 5.x, when using a non-Ceph backup location, the option is still there.
However, when creating a CephFS backup volume, only one backup is stored and no option exists to specify more than...
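Assuming the CephFS plugin honours the same retention option as the dir storage (an assumption on my part for 5.x), the limit can still be set on the CLI even when the GUI hides it (the storage id is hypothetical):

pvesm set cephfs-backup --maxfiles 5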
Hi,
We have a 4 node proxmox cluster that I just updated to proxmox 5.3 (from 5.2) without any problems.
Now I want to test the new CephFS support in Proxmox 5.3, but after I add it via the storage menu in the web interface, the cephfs storage entry only has a grey question mark on it.
The...
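A grey question mark usually means the storage failed to activate; a quick checklist sketch, assuming the storage id is cephfs:

# does PVE consider the storage active, and is it actually mounted?
pvesm status
mount | grep cephfs
# the cephfs plugin expects the client secret here:
ls -l /etc/pve/priv/ceph/cephfs.secret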
One minor but important observation that should be fixed in the documentation:
on https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes
at
Destroy CephFS
the command
ceph rm fs NAME --yes-i-really-mean-it
should be
ceph fs rm NAME --yes-i-really-mean-it
However,
I have an up and...
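For completeness, the removal only succeeds once every MDS is stopped; a sketch of the full sequence (pool names are the pveceph defaults, and pool deletion additionally requires mon_allow_pool_delete to be enabled):

systemctl stop ceph-mds.target          # on every MDS node
ceph fs rm NAME --yes-i-really-mean-it
# the data and metadata pools are not removed automatically:
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it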
I just upgraded to version 5.3 and tried to create multiple MDS for CephFS. However, an error occurs:
Binary not installed: /usr/bin/ceph-mds (500)
Is this intended? I cannot install it via apt. What is the issue here?
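In case others hit this: ceph-mds ships in its own package. Assuming the Ceph repository that pveceph install configures is present, this should pull it in:

apt update
apt install ceph-mds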
Hey guys
We are using ceph-fuse to mount a CephFS volume for Proxmox backups at `/srv/proxmox/backup/`.
Recently, I noticed that the backup volume kept running out of free space and therefore the backup jobs were failing (we had a Ceph quota of 2 TB in place on the pool for safety reasons)...
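A sketch of how the pool quota can be inspected and raised (the pool name is hypothetical; 3298534883328 bytes = 3 TiB):

# show the current quota on the backup pool
ceph osd pool get-quota cephfs_backup_data
# raise it, e.g. to 3 TiB
ceph osd pool set-quota cephfs_backup_data max_bytes 3298534883328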
Good evening,
I've been trying to deploy an MDS and CephFS by piecing together a couple of very scarce threads from here and there,
but no luck so far. The ceph-deploy tool does not work as expected, and I couldn't
start the MDS after manually adding the [mds] section etc. to ceph.conf.
Is MDS and CephFS management in...
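For anyone stuck in the same spot, a sketch of the standard manual MDS bring-up (the id mds0 is made up), as an alternative to ceph-deploy:

# create the data directory and a keyring for the new daemon
mkdir -p /var/lib/ceph/mds/ceph-mds0
ceph auth get-or-create mds.mds0 mon 'profile mds' osd 'allow rwx' mds 'allow *' -o /var/lib/ceph/mds/ceph-mds0/keyring
chown -R ceph:ceph /var/lib/ceph/mds/ceph-mds0
systemctl start ceph-mds@mds0

On PVE 5.3 and later, pveceph mds create wraps these steps, so the manual route is only needed for externally managed clusters.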
We have a small Ceph Hammer cluster (only a few monitors and fewer than 10 OSDs), but it still proves very useful for low-IO guest storage. Our Ceph cluster runs on our Proxmox nodes but has its own separate gigabit LAN, and performance is adequate for our needs.
We would like to use it as backup...