Because of protection.outlook.com, we have to add the following ranges:
* 40.92.0.0/15
* 40.107.0.0/16
* 52.100.0.0/14
* 104.47.0.0/17
* 51.4.72.0/24
* 51.5.72.0/24
* 51.5.80.0/27
* 51.4.80.0/27
It would be nice to achieve this with a single entry.
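Out of curiosity, I checked with Python's standard ipaddress module (just a sketch, nothing specific to any particular firewall) whether the ranges can be merged or covered by a single prefix:

```python
import ipaddress

# The protection.outlook.com ranges from the list above.
ranges = [
    "40.92.0.0/15", "40.107.0.0/16", "52.100.0.0/14", "104.47.0.0/17",
    "51.4.72.0/24", "51.5.72.0/24", "51.5.80.0/27", "51.4.80.0/27",
]
nets = [ipaddress.ip_network(r) for r in ranges]

# Merge adjacent or overlapping prefixes -- none of these touch, so eight stay eight.
merged = list(ipaddress.collapse_addresses(nets))
print(len(merged), "entries after collapsing")

# Smallest single prefix that covers everything: widen the first network's
# prefix until all the others fit inside it.
cover = nets[0]
while not all(n.subnet_of(cover) for n in nets):
    cover = cover.supernet()
print("single covering prefix:", cover)  # 0.0.0.0/1 -- half the IPv4 space
```

So a single exact CIDR entry does not exist for these ranges; at best they could be bundled under one named group, if the firewall supports that.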
Hi,
how can I edit the behavior of the fencing process?
By default, a mail with the subject "FENCE: Try to fence node '<node>'" is sent. I would like to add some custom commands.
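I could imagine piping the notification mail through something like the sketch below (the mail-filter approach and the script path are purely my assumption), but I don't know whether there is a proper hook for this:

```python
#!/usr/bin/env python3
"""Hypothetical filter: run a custom command when the fence mail arrives.

Assumes the notification is piped to this script on stdin (e.g. via a local
alias) and still carries the default "FENCE: Try to fence node '<node>'"
subject. The script path below is made up.
"""
import email
import re
import subprocess
import sys

msg = email.message_from_file(sys.stdin)
subject = msg.get("Subject", "")

match = re.match(r"FENCE: Try to fence node '([^']+)'", subject)
if match:
    node = match.group(1)
    # Custom commands go here; replace with whatever should happen on fencing.
    subprocess.run(["/usr/local/bin/on-fence.sh", node], check=False)
```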
Cedric
We would like to mount CephFS in a VM without using VirtFS, because using VirtFS breaks live migration:
2019-12-11 15:46:22 migrate uri => unix:/run/qemu-server/103.migrate failed: VM 103 qmp command 'migrate' failed - Migration is disabled when VirtFS export path '/mnt/pve/cephfs' is...
The Ceph Monitors are supposed to be exposed on the public network, so that clients can reach them in order to mount CephFS using the kernel driver or FUSE.
What harm could a compromised client do to the cluster by exploiting its connection to the Ceph Monitors? Are the Monitors secure enough...
That is why we are thinking about reducing min_size automatically when nodes fail. That would make the Ceph storage writeable again, right?
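What we have in mind is roughly the following sketch (pool name, thresholds and how it would be triggered are placeholders; it only drives the stock `ceph` CLI):

```python
#!/usr/bin/env python3
"""Rough sketch of the idea: lower min_size while too few hosts are up.

Pool name, thresholds and the way the script gets scheduled are assumptions;
it only shells out to the standard `ceph` commands.
"""
import json
import subprocess

POOL = "vm_pool"         # placeholder pool name
NORMAL_MIN_SIZE = 3
DEGRADED_MIN_SIZE = 2    # value we would drop to while nodes are down


def ceph_json(*args):
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)


# Count hosts that still have at least one OSD up (a rough stand-in for "node alive").
tree = ceph_json("osd", "tree")
up_osds = {n["id"] for n in tree["nodes"]
           if n.get("type") == "osd" and n.get("status") == "up"}
hosts_up = sum(1 for n in tree["nodes"]
               if n.get("type") == "host"
               and any(child in up_osds for child in n.get("children", [])))

wanted = NORMAL_MIN_SIZE if hosts_up >= NORMAL_MIN_SIZE else DEGRADED_MIN_SIZE
subprocess.run(["ceph", "osd", "pool", "set", POOL, "min_size", str(wanted)],
               check=True)
print(f"{hosts_up} hosts up -> min_size {wanted} on pool {POOL}")
```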
We wonder if we could just create an RBD storage using the "cephfs_data" pool. We would like to make the setup as flexible as possible, because we don't know yet how we will split our storage capacity between RBD and CephFS. Are there any downsides?
And how should we decide on the ratio of CephFS data to metadata?
We plan to have 4 nodes and 1 external quorum device on the PVE side. For Ceph, we plan a 3/3 (size/min_size) configuration. Could you please comment on the idea of adapting min_size automatically? To my understanding, it would allow writes to the RBD in the case of 2 nodes failing. Are there...
Hello,
we would like to build a 4-node Proxmox/Ceph cluster that is able to recover from 2 nodes failing at once. To prevent data loss in such a case, we have to choose a min_size of 3. But when 2 nodes fail, there are only 2 nodes left. That is why we came up with the idea of reducing the...
Hello,
we wonder which of the following two setups might be the better choice for using Proxmox VE with Ceph:
* usual: RBD
* less usual: qcow2 on CephFS
The second setup was mentioned in another thread.
What pros and cons do you see?
Cedric