Search results

  1. Using 2 storages and 6 nodes.

    Correct, you need to set up Proxmox on all the servers and add them as a single cluster. You can then do most of the Ceph setup within the Proxmox GUI now. However, if this is production I'd still suggest reaching out to a consultant or paying for Proxmox Subscription Support to make...
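
    A rough CLI equivalent of those GUI steps, as a sketch only (cluster name, network and device path are placeholders; pveceph subcommand names vary slightly between Proxmox versions):

    ```bash
    # On the first node: create the Proxmox cluster
    pvecm create my-cluster

    # On every other node: join the cluster (IP is a placeholder)
    pvecm add 10.10.10.1

    # On each node: install Ceph and create its services
    pveceph install
    pveceph init --network 10.10.10.0/24   # run once, on the first node
    pveceph mon create
    pveceph osd create /dev/sdb            # example device path
    ```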
  2. Using 2 storages and 6 nodes.

    If you have the supporting network capacity/design, then 3 nodes is the minimum that Ceph can safely run on.
  3. Deleted ceph on node, stupid!

    So, as thought, you need to at least add your mon IPs into ceph.conf. Any running VM will be fine, as they picked these up on boot / mapping of the RBD.
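
    A minimal sketch of the relevant ceph.conf lines (IP addresses are placeholders; older clusters may list the mons in per-mon sections instead):

    ```
    [global]
        mon_host = 10.10.10.1 10.10.10.2 10.10.10.3
    ```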
  4. ceph failure scenario

    I think you might also be suffering more than necessary because you only have a small number of test OSDs. So while the one node is down, the other disks will be hit hard during the peering stage. With more OSDs and nodes, the peering work will be staggered more across the hardware. How many PGs do you...
  5. Deleted ceph on node, stupid!

    The default config does not have many settings in it whose absence will cause a huge issue; most are for tweaking and performance. The main ones are normally the list of mons, which commands use to find your cluster. Does ceph -s run fine on each node and report healthy? I don't use Proxmox Ceph, but someone...
  6. New server alone, reboot my cluster in production.

    Have you checked to make sure you haven't got a duplicate IP or a port in the wrong LACP group? That way, when the new switch comes online, it could be making one of the old servers appear unavailable on the network and causing the cluster to reboot.
  7. Ceph - preference node for writes?

    Primary affinity is really the only option you have. I'd double-check your config, as it should work as you require: https://ceph.com/geen-categorie/ceph-get-the-best-of-your-ssd-with-primary-affinity/
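
    For illustration, a minimal sketch of adjusting primary affinity so the preferred OSDs are chosen as primaries (OSD IDs and weights are placeholders; very old releases need mon_osd_allow_primary_affinity enabled first):

    ```bash
    # Lower the chance of the slower OSDs being picked as the primary
    ceph osd primary-affinity osd.3 0.5
    ceph osd primary-affinity osd.4 0     # avoid as primary whenever possible

    # Check the resulting PRIMARY-AFFINITY column
    ceph osd tree
    ```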
  8. How many OSDs?

    Yes, a pool's free space is calculated from the OSDs that are available to it and the replication. For example, a pool with 3:2 and a pool with 2:1 on the same backing OSDs will show different available disk space. The limit is set on the RBD; however, you need to monitor pool usage and OSDs via...
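
    A rough worked example of that calculation (capacities are made up): with 12 x 1 TB OSDs there is about 12 TB raw, so a size=3 pool reports roughly 4 TB usable while a size=2 pool on the same OSDs reports roughly 6 TB. The usual commands for watching this:

    ```bash
    # Per-pool usage and MAX AVAIL, adjusted for each pool's replication
    ceph df

    # Per-OSD fill level, to spot OSDs running well ahead of the average
    ceph osd df
    ```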
  9. How many OSDs?

    You don't set the disk size on the pool; you set it when you create the RBD disk that is attached to the VM.
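
    A sketch of what that looks like in practice (the VM ID, storage name, pool name and sizes are placeholders):

    ```bash
    # Add a 32 GiB disk to VM 100 on the RBD-backed storage "ceph-rbd";
    # the size belongs to the RBD image, not to the pool
    qm set 100 --scsi1 ceph-rbd:32

    # Or create an RBD image directly with the rbd tool
    rbd create ceph-pool/vm-100-disk-1 --size 32G
    ```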
  10. Shared ceph storage on a 5 node setup

    Running a replica of 2 is never suggested; you almost guarantee yourself some data loss in the near future. You can do a replica of 3 across 2 nodes; however, you have to accept the extra storage overhead.
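
    A minimal sketch of the replication settings being discussed (pool name is a placeholder; spreading 3 copies across only 2 nodes also needs a matching CRUSH rule, which is not shown):

    ```bash
    ceph osd pool set ceph-pool size 3       # keep 3 copies of each object
    ceph osd pool set ceph-pool min_size 2   # keep serving I/O while 2 copies are online
    ```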
  11. Proxmox cluster + ceph recommendations

    It has been mentioned many times on the Ceph mailing list: it causes a kernel deadlock/crash. It's fine if you don't use KRBD, as then the RBD mount is outside of kernel space.
  12. pve5to6 fails with "Unsupported SSH Cipher configured for root in /root/.ssh/config: 3des"

    No, not the whole file, just the cipher list line, so it will use the defaults; also remove arcfour. The issue is due to them being removed in Debian 10.
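
    For illustration, a Ciphers line of the kind being referred to (the actual list in the file will differ); removing the line lets OpenSSH fall back to its Debian 10 defaults:

    ```
    # /root/.ssh/config -- delete this Ciphers line, or drop 3des-cbc/arcfour from it
    Host *
        Ciphers 3des-cbc,arcfour,aes256-ctr
    ```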
  13. Proxmox cluster + ceph recommendations

    To get the best performance out of Ceph you want to use KRBD with VMs; however, you can't use KRBD on the same kernel as a server that is running OSDs. On the RAM side you can limit RAM with the new BlueStore versions, however it's only a target and it can still sometimes increase over this...
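
    A sketch of the two settings involved (storage and pool names are placeholders). In /etc/pve/storage.cfg, the krbd flag switches the RBD storage to the kernel client:

    ```
    rbd: ceph-rbd
        pool ceph-pool
        content images
        krbd 1
    ```

    And in ceph.conf, the BlueStore RAM limit (about 4 GiB in this example) is a target rather than a hard cap:

    ```
    [osd]
        osd_memory_target = 4294967296
    ```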
  14. How many OSDs?

    Correct, an RBD pool can only store RBD images for VM disk storage. You need to upload the ISO to the standard location /var/lib/vz/template/iso
  15. about ceph RBD

    No, nothing to do with Proxmox. Ceph mons run in a quorum; without a quorum Ceph will not operate at all. With only 1 out of 3 mons online, Ceph will become unavailable, as you're seeing.
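
    For reference, the usual ways to check the monitor quorum state:

    ```bash
    # Overall health, including how many mons are in quorum
    ceph -s

    # Detailed quorum membership
    ceph quorum_status --format json-pretty
    ```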
  16. about ceph RBD

    Then you have 2 out of 3 monitors down, therefore you can't get a quorum.
  17. about ceph RBD

    Do you have a mon installed on each server? It's suggested never to run Ceph with only a single mon.
  18. How many OSDs?

    Start with 256; 512 is too many. Even for 12 OSDs, 256 should be fine, and you can easily increase it to 512 in the future.
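
    A sketch of creating the pool at 256 PGs and growing it later (pool name is a placeholder; note that pg_num can be increased but not reduced on older releases):

    ```bash
    # Create the pool with 256 placement groups
    ceph osd pool create ceph-pool 256 256

    # Later, grow to 512 and let pgp_num follow
    ceph osd pool set ceph-pool pg_num 512
    ceph osd pool set ceph-pool pgp_num 512
    ```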