Hi, I’ve started to use PBS for VM and container backups, but I can’t find a way to back up Ceph file systems... I’ve created a CephFS in the Proxmox cluster.
Is there any proper way to do it? If not, is there any plan for proxmox or pbs next releases to support this feature?
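Not aware of native CephFS support in PBS; as a stopgap, a file-level backup of the mounted CephFS tree with proxmox-backup-client might work. A sketch only; the repository, datastore name, and paths below are placeholders, not anything from an official workflow:

```shell
# Back up the CephFS mount point as a file-level (pxar) archive.
# Repository/datastore are placeholders; PVE mounts CephFS storages
# under /mnt/pve/<storage-name> by default.
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'
export PBS_PASSWORD='secret'   # or authenticate with an API token

proxmox-backup-client backup cephfs.pxar:/mnt/pve/cephfs
```

This backs up the files, not the Ceph pool itself, so restores are plain file restores.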
I have a 6 server cluster:
3 servers are hybrid nodes with a lot of OSDs and other 3 nodes are like VM processing nodes.
Everything is backed by 2x2-port 10G NICs in the hybrid nodes and 1x2-port 10G NICs in the processing nodes, plus two stacked N3K switches.
Ceph does the thing for VM storage and...
I have successfully integrated Ceph (Proxmox-based) into all the LXC containers.
Now I want to integrate it outside of Proxmox for some users with read-only access, to replace the current NFS share.
What do I need to do? What params do I put in /etc/fstab?
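A minimal sketch, assuming the clients have ceph-common installed and the filesystem is named cephfs; the client.readonly user name, monitor IPs, and paths are placeholders:

```shell
# On a Ceph/PVE node: create a read-only cephx user for the filesystem
# (client.readonly is a placeholder name)
ceph fs authorize cephfs client.readonly / r

# Extract just the secret for the kernel client and copy it to the client
ceph auth get-key client.readonly > /etc/ceph/readonly.secret

# On the client, add to /etc/fstab (monitor IPs are placeholders):
#   10.0.0.1,10.0.0.2,10.0.0.3:/  /mnt/cephfs  ceph  name=readonly,secretfile=/etc/ceph/readonly.secret,ro,_netdev  0  0
mkdir -p /mnt/cephfs && mount /mnt/cephfs
```

The `ro` option enforces read-only on the client side, and the restricted cephx caps enforce it on the cluster side even if the client remounts read-write.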
I posted in another thread (https://forum.proxmox.com/threads/proxmox-6-ceph-mds-stuck-on-creating.57524/#post-268549) that was created on the same topic and just hopped onto it, but that thread seems to be dead, so I am trying my luck here to see if this is a general problem...
I have noticed in Ceph log (ceph -w) an increase of "slow requests are blocked" when I create CephFS, e.g.
2019-10-14 16:41:32.083294 mon.ld5505 [INF] daemon mds.ld4465 assigned to filesystem cephfs as rank 0
2019-10-14 16:41:32.121895 mon.ld5505 [INF] daemon mds.ld4465 is now active in...
After adding and then removing a cephfs instance in the storage gui I noticed that it was not unmounted and/or deleted from /mnt/pve/[title]. I was wondering if this was intentional or not?
Note: This was my 2nd cephfs storage instance in case that matters. I cannot remove my primary...
I would like to mount CephFS on a client.
Since the CephFS version is Nautilus, I decided to use a container running CentOS 7 as the client. It might as well have been an external physical machine; it just happened that I wanted to try with a container. Yes, CephFS is already installed on Proxmox and working...
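A sketch of the client-side steps, assuming the CentOS Storage SIG Nautilus repo and that ceph.conf plus a client keyring/secret have been copied over from a PVE node first (the monitor IP and client name are placeholders):

```shell
# Inside the CentOS 7 container/client: enable the Storage SIG repo
# and install the Ceph client tools
yum install -y centos-release-ceph-nautilus
yum install -y ceph-common

# Assumes /etc/ceph/ceph.conf and /etc/ceph/admin.secret were copied
# from a PVE node beforehand
mkdir -p /mnt/cephfs
mount -t ceph 10.0.0.1:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```

Note that in an unprivileged container the kernel cephfs mount may be blocked; in that case ceph-fuse (package `ceph-fuse`) is the usual alternative.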
I am currently evaluating Proxmox in a cluster environment and intend to expand it to 7 storage nodes and 7 compute nodes to harness the storage provided by Ceph. I have spent the last few weeks formatting the machines and reinstalling every time I make a ceph...
After creating the MDS and CephFS manually in my cluster, I want to create a storage of type cephfs.
However this fails with error:
error with cfs lock 'file-storage_cfg': mount error: exit code 2
This is the complete output:
root@ld3955:~# pvesm add cephfs pve_cephfs
mount error 2 = No such...
I am using Proxmox 5.4 with CephFS and multiple file systems. One filesystem is called cephfs and it's on NVMe and the other is cephfs_slow and it's on standard SATA. I can manually mount each file system with:
mount -t ceph -o mds_namespace=cephfs virt0,virt4,virt8,virt12:/ /foo
mount -t ceph...
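The same mounts can go into /etc/fstab; the mds_namespace option selects the filesystem, exactly as in the manual mounts above (mount points and auth options are placeholder assumptions):

```
# /etc/fstab — one entry per filesystem (monitor names from the post,
# mount points and credentials are placeholders)
virt0,virt4,virt8,virt12:/  /mnt/cephfs       ceph  name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=cephfs,_netdev       0 0
virt0,virt4,virt8,virt12:/  /mnt/cephfs_slow  ceph  name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=cephfs_slow,_netdev  0 0
```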
So I'm trying to configure a Proxmox cluster with Ceph.
From what I can see, I can make a pool directly and use it (i.e. add it to the cluster storage as RBD).
In order to create a CephFS storage in Proxmox, I need to create 2 separate Ceph pools and then create the CephFS specifying the pool...
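For reference, the manual route can be sketched like this (pool names and PG counts are illustrative); on PVE 5.3+ `pveceph fs create` can also do all of it in one step:

```shell
# Manual route: a data pool, a metadata pool, then the filesystem
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 16
ceph fs new cephfs cephfs_metadata cephfs_data

# Or let PVE create both pools, the fs, and the storage entry at once
pveceph fs create --name cephfs --add-storage
```

The metadata pool is typically small, so it gets far fewer PGs than the data pool.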
So I am trying to configure Proxmox to use a Ceph server (on 3 nodes/servers).
I have created the OSDs, the pool, and the metadata servers.
I'm following this: https://pve.proxmox.com/pve-docs/chapter-pveceph.html
I added the Ceph server to the cluster storage, and can use it.
Is there any way to change the read-ahead of CephFS?
(could not place hyperlink - new user)
This should improve reading of single large files.
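For the kernel client, the read-ahead window is the `rasize` mount option, in bytes; a sketch with a placeholder monitor address and an example 64 MiB window:

```shell
# rasize sets the kernel cephfs read-ahead window in bytes (default 8 MiB);
# 64 MiB here is just an example value, not a recommendation
mount -t ceph 10.0.0.1:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864
```

ceph-fuse clients use the `client_readahead_max_bytes` option in ceph.conf instead.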
When creating a regular (RBD) Ceph pool, there are options in both the GUI and in pveceph to determine the size (replication count) and the min. size (online replicas for read) of the pool. However, when creating a CephFS pool, neither the GUI nor pveceph provides an option to create one with a...
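Since the underlying CephFS pools are ordinary replicated pools, one workaround is to adjust size and min_size after creation with plain ceph commands (pool names below assume the PVE defaults):

```shell
# Adjust replication on the pools CephFS creation left behind
ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_data min_size 2
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_metadata min_size 2
```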
I am a little bit confused or maybe I am just missing a small detail.
My plan is to deploy and mount a cephfs directory on a samba VM.
Thus I created an MDS in the Proxmox CephFS GUI, but the address of the MDS is from my cluster network, where the OSDs communicate.
Well, I tried to...
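As far as I understand, the MON and MDS daemons listen on the Ceph *public* network, so if the MDS comes up with a cluster-network address, it is worth checking that `public network` in ceph.conf actually points at the client-facing subnet (the addresses below are placeholders):

```
# /etc/pve/ceph.conf (subnets are placeholders)
[global]
    public network  = 192.168.1.0/24   # client + MON/MDS traffic
    cluster network = 10.10.10.0/24    # OSD replication/heartbeat only
```

Existing daemons keep their old address until restarted after such a change.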
The PVE 5.3 release notes says: "The distributed file system CephFS eliminates the need for external file storage such as NFS or Samba and thus helps reducing hardware cost and simplifies management."
1. I have never used this and as I am about to setup new pve servers I would like to get some...
I have 2 SSDs per node and 6 nodes, which makes 12 SSDs.
Now, what will give me good capacity and resilience against failures?
I am confused between choosing:
EC21 --- i.e K=2, M=1 - 66% Capacity
EC42 --- i.e K=4, M=2 - 66% Capacity
EC32 --- i.e K=3, M=2 - 60%...
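For reference, the usable-capacity percentages quoted above follow directly from k/(k+m); a quick sketch:

```python
# Usable capacity of an erasure-coded pool: k data chunks plus m coding
# chunks are stored per object, so k/(k+m) of the raw space holds data
# and the pool survives the loss of any m chunks.
def ec_capacity(k: int, m: int) -> float:
    """Fraction of raw capacity usable with k data and m coding chunks."""
    return k / (k + m)

for k, m in [(2, 1), (4, 2), (3, 2)]:
    print(f"EC{k}{m}: {ec_capacity(k, m):.1%} usable, tolerates {m} failure(s)")
```

One thing to weigh with 6 nodes and the default host failure domain: k+m must not exceed the host count, and leaving headroom (k+m below the number of hosts) lets Ceph rebuild elsewhere after a host failure, which makes EC42 tight on 6 nodes.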
When a backup volume is created, it allows the setting of max backups in pmx4.4 as shown below.
In pmx5.x, when using a non-ceph backup location, the option is still there:
However, when creating a cephfs backup volume, only one backup is stored and no option exists to specify more than...
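As a possible workaround, `maxfiles` can be set directly on the storage definition in /etc/pve/storage.cfg (the storage name and path below are placeholders), or per job via `vzdump --maxfiles`:

```
# /etc/pve/storage.cfg (name/path are placeholders)
cephfs: cephfs-backup
        path /mnt/pve/cephfs-backup
        content backup
        maxfiles 3
```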
We have a 4 node proxmox cluster that I just updated to proxmox 5.3 (from 5.2) without any problems.
Now I want to test the new CephFS support in Proxmox 5.3, but after I add it via the storage menu in the web interface, the cephfs storage entry only has a grey question mark on it.