Search results

  1. Max cluster size

    Hi, I'm currently running a 33-node cluster over a 10GBit network for storage (Ceph) and corosync, with a second 10GBit network for Ceph backend traffic. I know this is not recommended, but that's the current state. Does anyone have experience with how far such setups can scale? How many nodes can I...
  2. Backup to multiple backup storages

    Hi, I'm currently running 25 Proxmox nodes in a cluster (Ceph storage backend) which are backed up to a Proxmox Backup Server every night. I set up separate backup jobs for each host, starting every 30 min (the average backup time per host is 20 min). Because of space/load issues on the backup server...
  3. Basic question about Ceph scaling

    We have been running a steadily growing Proxmox+Ceph cluster for years. At the moment, six Proxmox nodes are dedicated to Ceph. When planning expansions, the question of fault tolerance naturally comes up again and again. With the default min_size=2, the pool will, in the event of a...
  4. [SOLVED] VMs freeze after migration

    Thx. Seems related to kernel issues; booting the 5.13 kernel solved the issue, at least for now.
  5. [SOLVED] VMs freeze after migration

    Hi, I'm running an 8-node cluster with 4 dedicated Ceph nodes and 4 nodes for VM hosting. All nodes run the latest Proxmox version, with only minimal load. I call the VM hosting nodes 3, 8, 6 and 11. I can do live migration from node 6 to 11 and from 11 to 6. Migration from 3 to 8 and 8 to...
  6. Network bridge on pure ceph nodes

    Hi, I have a cluster with a few nodes dedicated to run only ceph. There are no VMs running on the ceph nodes. Can I disable the bridge mode on the network interfaces of the ceph nodes or is the bridge necessary for cluster operation?
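    For context, Proxmox node networking is defined in /etc/network/interfaces; a plain, bridge-less configuration for a Ceph-only node, as the poster is considering, might look like the sketch below (interface names and addresses are assumptions, not taken from the thread):

    ```
    # /etc/network/interfaces — hypothetical sketch for a Ceph-only node
    # with no vmbr bridge; interface names and IPs are assumptions.
    auto lo
    iface lo inet loopback

    # public / corosync network
    auto eno1
    iface eno1 inet static
        address 192.168.10.21/24
        gateway 192.168.10.1

    # dedicated Ceph backend network
    auto eno2
    iface eno2 inet static
        address 192.168.20.21/24
    ```

    Whether this is sufficient depends on the cluster: corosync and Ceph only need IP reachability, but any node that should ever host guests would still need a bridge.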
  7. [SOLVED] Cluster Problem after Ceph Jewel to Luminous update

    I had to use the Ceph Luminous repository on the non-Ceph nodes, too. A dist-upgrade and restarting pvestatd fixed the problem.
  8. [SOLVED] Cluster Problem after Ceph Jewel to Luminous update

    Hi, we updated our Ceph nodes from Jewel to Luminous. After the update the cluster seems to have a problem (see screenshot), though on console it seems fine: # pvecm status Quorum information ------------------ Date: Wed Dec 27 10:32:56 2017 Quorum provider: corosync_votequorum...
  9. CEPH - IP address change

    It's not that easy; check the Ceph documentation, there is a chapter about changing a monitor's IP address.