Search results

  1. Greylisting problem with wide range ip senders

    Because of protection.outlook.com, we have to add:
    * 40.92.0.0/15
    * 40.107.0.0/16
    * 52.100.0.0/14
    * 104.47.0.0/17
    * 51.4.72.0/24
    * 51.5.72.0/24
    * 51.5.80.0/27
    * 51.4.80.0/27
    It would be nice to achieve this with a single entry.
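
    A quick check with Python's standard ipaddress module (using only the eight ranges above) shows why a single entry is hard to achieve: the blocks are disjoint, so they cannot be collapsed, and the smallest single CIDR covering all of them would span half the IPv4 space.

        # Sketch: can the protection.outlook.com blocks be merged?
        import ipaddress

        ranges = [
            "40.92.0.0/15", "40.107.0.0/16", "52.100.0.0/14",
            "104.47.0.0/17", "51.4.72.0/24", "51.5.72.0/24",
            "51.5.80.0/27", "51.4.80.0/27",
        ]
        networks = [ipaddress.ip_network(r) for r in ranges]

        # collapse_addresses() merges only adjacent or overlapping
        # networks; these eight are disjoint, so all of them remain.
        print(len(list(ipaddress.collapse_addresses(networks))))  # -> 8

        # Smallest single supernet that contains every block:
        cover = networks[0]
        while not all(n.subnet_of(cover) for n in networks):
            cover = cover.supernet()
        print(cover)  # -> 0.0.0.0/1, i.e. half of the IPv4 space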

  2. Greylisting problem with wide range ip senders

    We would like to have that feature as well. How can we track that request?

  3. fencing actions

    And can we modify the behavior of the failing node? The default seems to be a restart.

  4. fencing actions

    We would like to stonith the failing node:
    ipmitool -H <ip> -U <user> -P <password> chassis power off
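
    A hedged sketch of such a custom fence action, wrapping the exact ipmitool call above; the BMC address and credentials are placeholders, and how the cluster would invoke this script is an assumption, not a documented Proxmox hook:

        # Hypothetical stand-alone fence script (integration point assumed).
        import subprocess
        import sys

        def stonith(bmc_ip: str, user: str, password: str) -> None:
            """Hard power-off of the failing node via its IPMI BMC."""
            subprocess.run(
                ["ipmitool", "-H", bmc_ip, "-U", user, "-P", password,
                 "chassis", "power", "off"],
                check=True,  # fail loudly if the BMC rejects the command
            )

        if __name__ == "__main__":
            stonith(*sys.argv[1:4])  # e.g. fence.py 10.0.0.42 admin secret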

  5. fencing actions

    Hi, how can I edit the behavior of the fencing process? By default, a mail with the subject "FENCE: Try to fence node '<node>'" is sent. I would like to add some custom commands. Cedric

  6. Security of exposing Ceph Monitors

    Is live-migration technically not possible for some reason when mounting a VirtFS export path in the guest? Or is that a feature to come?

  7. Security of exposing Ceph Monitors

    We would like to mount CephFS in a VM without using VirtFS, because using VirtFS breaks live-migration:
    2019-12-11 15:46:22 migrate uri => unix:/run/qemu-server/103.migrate failed: VM 103 qmp command 'migrate' failed - Migration is disabled when VirtFS export path '/mnt/pve/cephfs' is...

  8. Security of exposing Ceph Monitors

    The Ceph Monitors are supposed to be exposed in the public network, so that clients can reach them in order to mount CephFS by using the kernel driver or FUSE. What harm could a compromised client do to the Cluster by exploiting the connection to Ceph Monitors? Are the Monitors secure enough...

  9. changing min_size automatically

    Would a Raspberry Pi be enough for a Ceph monitor?

  10. changing min_size automatically

    That is why we think about reducing the min_size automatically in the case of nodes failing. That would make the Ceph storage writable again, right?

  11. changing min_size automatically

    The quorum device doesn't have any OSDs. Wouldn't 3/3 be sufficient to ensure data availability even if 2 nodes fail?

  12. RBDs in "cephfs_data"-Pool

    We wonder if we could just create an RBD storage using the "cephfs_data" pool. We would like to make the setup as flexible as possible, because we don't know yet how to split our storage capacity between RBDs and CephFS. Are there any downsides? And how should we decide on the ratio of CephFS data to metadata?
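
    For reference, attaching an existing pool as RBD-backed storage is a one-line pvesm call. A minimal sketch follows; the storage ID "rbd-on-cephfs-data" is made up, and whether sharing the CephFS data pool is advisable is exactly the open question here:

        # Sketch: register the "cephfs_data" pool as an RBD image store in PVE.
        import subprocess

        subprocess.check_call([
            "pvesm", "add", "rbd", "rbd-on-cephfs-data",
            "--pool", "cephfs_data",
            "--content", "images",
        ])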

  13. changing min_size automatically

    And would it work, if the quorum device is a Ceph monitor as well?

  14. changing min_size automatically

    We plan to have 4 nodes and 1 external quorum device for the PVE side. For Ceph, we plan to have a configuration of 3/3. Could you please comment on the idea of adapting the min_size automatically. To my understanding, it would enable writing to the RBD in the case of 2 nodes failing. Are there...

  15. changing min_size automatically

    Hello, we would like to build a 4-node Proxmox/Ceph cluster that is able to recover from 2 nodes failing at once. To prevent data loss in such a case, we have to choose a min_size of 3. But when 2 nodes fail, there are only 2 nodes left. That is why we came up with the idea of reducing the...
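
    A sketch of that automation idea, with the caveat that lowering min_size trades write-safety for availability; the pool name, the per-node OSD count, and the trigger (say, a timer on the quorum host) are all assumptions:

        # Hypothetical watchdog: drop min_size from 3 to 2 when two of
        # the four nodes are down, and restore it when they come back.
        import json
        import subprocess

        POOL = "rbd"          # illustrative pool name
        OSDS_PER_NODE = 4     # illustrative per-node OSD count

        def up_osds() -> int:
            # "num_up_osds" as reported by recent Ceph releases.
            out = subprocess.check_output(
                ["ceph", "osd", "stat", "--format", "json"])
            return json.loads(out)["num_up_osds"]

        def set_min_size(value: int) -> None:
            subprocess.check_call(
                ["ceph", "osd", "pool", "set", POOL,
                 "min_size", str(value)])

        if up_osds() <= 2 * OSDS_PER_NODE:   # two (or more) nodes lost
            set_min_size(2)
        else:
            set_min_size(3)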

  16. qcow2 on CephFS versus RBD

    Hello, we wonder which of the following two setups might be the better choice for using Proxmox VE with Ceph:
    * usual: RBD
    * less usual: qcow2 on CephFS
    The second setup was mentioned in a thread. What pros and cons do you come up with? Cedric
