HA cluster

  1. Proxmox Cluster 3 nodes, Monitors refuse to start

    Hi all, i am facing a strange issue, after using having a proxmox pc for my self hosted app I decided to play around and create a cluter to dive deeper into the HA topics, i dowloaded the latest ISO and build up a cluster from scratch. My Cluster works, i can see every node, my ceph storage says...
  2. [WegweiserZumEigenbau] PVE with an extremely fast full-NVMe ZFS storage system with high availability (active/active) via TCP or RDMA on commodity HW

    For the forum readers who only hang around in the German-language section, here is a link to some interesting posts: https://forum.proxmox.com/threads/pathfinder4diy-pve-and-high-performance-full-nvme-storage-with-high-availability-active-active-via-tcp-or-rdma-on-commodity-hw.172217/
  3. [WegweiserZumEigenbau] PVE with a very fast ZFS-over-iSCSI storage system with high availability (active/passive) via TCP or RDMA on commodity HW

    For the forum readers who only hang around in the German-language section, here is a link to some interesting posts...
  4. [Pathfinder4DIY] PVE and high-performance ZFS-over-iSCSI storage with high availability (active/passive) via TCP or RDMA on commodity HW

    !!! Hint 1: This thread is not for Ceph fanboys. Go away! !!! Occasion: in the thread "Vote for Feature in ZFS-over-ISCSI", one (the word "one" here always refers to a human being, not a number) had to put question marks in the row "own build with free software possible"...
  5. After upgrade to PVE 9, qm, ha-manager, pvestatd, and more fail to start (unknown file 'ha/rules.cfg')

    After upgrading to PVE 9, VMs fail to start; qm, ha-manager, services like pvestatd, the web GUI, and more fail to start with the common error: root@pve3:/etc/pve# qm unknown file 'ha/rules.cfg' at /usr/share/perl5/PVE/Cluster.pm line 524, <DATA> line 960. Compilation failed in require at...
  6. 3-Node Proxmox EPYC 7T83 (7763) • 10 × NVMe per host • 100 GbE Ceph + Flink + Kafka — sanity-check me!

    Hey folks, I'm starting a greenfield data pipeline project for my startup. I need real-time stream processing, so I'm upgrading some Milan hosts I have into a 3-node Proxmox + Ceph cluster. Per-node snapshot: Motherboard – Gigabyte MZ72-HB0; Compute & RAM – 2 × EPYC 7T83 (128 c / 256 t) + 1 TB...
  7. HA's decision to reboot all hypervisors of a cluster that came back together after 1/3 of the hypervisors failed

    Hi all, we are facing a problem with HA behavior. The other day we lost network connectivity to one of the three datacenters, which was hosting 1/3 of the hypervisors of the cluster; the cluster itself consists of 37 hypervisors. As expected, the failed 1/3 group of hypervisors tried to form a...
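The vote arithmetic behind this scenario is easy to check. A minimal sketch, assuming one corosync vote per node and the 37-node count from the post (the exact split of 13 lost nodes is an illustrative assumption):

```shell
# Corosync quorum arithmetic: a partition is quorate only with a strict
# majority of the expected votes (one vote per node assumed here).
total=37            # hypervisors in the cluster
lost=13             # roughly 1/3 of them, the failed datacenter

quorum=$(( total / 2 + 1 ))
echo "quorum: $quorum votes"

if [ $(( total - lost )) -ge "$quorum" ]; then
    echo "surviving 2/3 partition stays quorate"
fi
if [ "$lost" -lt "$quorum" ]; then
    echo "failed 1/3 partition cannot reach quorum"
fi
```

So the surviving 24 nodes keep quorum (24 ≥ 19) while the isolated 13 cannot form one, which is why the failed third is expected to get fenced rather than the whole cluster rebooting.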
  8. VM stuck on starting after moving nodes

    Hello all, I am trying to set up a 3-node cluster with failover. Nodes 1 and 2 have my OSDs, and nodes 1, 2, and 3 are monitors; nodes 1 and 2 are my production servers and node 3 is just there for quorum. If I gracefully power off node 1, the VM will move to node 2 (using HA failover) and start...
  9. HA, one node failure, VM reaction

    Hello, I have a Proxmox cluster built with 4 servers and HA configured. Now, a question that might sound silly: in the event of one server failing, all virtual machines are migrated to another server in line with the policy, which is obvious. However, should these virtual machines be...
  10. Volume Degraded

    Hello everyone, my data drive appears to be defective: what is the best way to proceed now? My idea: migrate all VMs from this device to the second one using HA. Then dissolve the HA, swap the disk, create a new volume, set up HA again, and migrate the VMs. Would that be OK...
  11. HA Ceph Breaks on Migration to 3rd Node...

    During testing of my HA-Ceph installation I received this error when trying to migrate to my 3rd node. It migrated but stopped. I was then able to migrate back to one of the other 2 nodes and restart it. Be gentle, I am very new to HA-Ceph, lol. Any insight you can provide would be great. Also no...
  12. [SOLVED] Failed setup of external quorum

    Hello everyone, I am currently setting up a cluster consisting of 2 nodes on Proxmox 8. To implement high availability (HA), I would like to set up an external server to serve as an external vote. This server is also an independent Proxmox 8 instance. On the external server, I...
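For reference, the usual way to add an external vote to a 2-node cluster is a corosync QDevice. A minimal sketch, assuming the external box is Debian-based; the IP 192.0.2.10 is a placeholder:

```shell
# On the external vote server (independent host, not a cluster member):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# From one cluster node, point the cluster at the external server:
pvecm qdevice setup 192.0.2.10

# Verify: a "Qdevice" entry should now appear in the vote information.
pvecm status
```

With 2 node votes plus the QDevice vote, expected votes become 3 and quorum is 2, so either node can keep running alone.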
  13. Understanding LXC Migration Behavior in a Proxmox HA Cluster with 3 Nodes

    I'm currently managing a Proxmox cluster with three nodes configured for high availability (HA). I've observed some behaviors regarding LXC container management and failover mechanisms, and I'd appreciate any insights or clarifications you might offer. Preventing Duplicate LXC Instances: In our...
  14. Ceph: select specific OSDs to form a pool

    Hello there, I want to create two separate pools in my Ceph. At the moment I have a configuration across 4 nodes with M.2 NVMe drives as OSDs. My nodes also have SATA SSD drives which I'd like to use for a 2nd pool, but I don't see any option to select these OSDs; you just add them and that's it...
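The standard answer here is CRUSH device classes rather than picking OSDs by hand. A sketch of the CLI steps (osd.8, the rule names, and the pool name/PG count are placeholders):

```shell
# Tag the SATA SSD OSDs with a device class:
ceph osd crush set-device-class ssd osd.8
# If an OSD was auto-classified wrongly, clear the class first:
#   ceph osd crush rm-device-class osd.8

# CRUSH rules that only select OSDs of a given class:
ceph osd crush rule create-replicated nvme-rule default host nvme
ceph osd crush rule create-replicated ssd-rule default host ssd

# Create the second pool bound to the SSD-only rule:
ceph osd pool create sata-ssd-pool 128 128 replicated ssd-rule
```

Any pool created (or re-assigned) with an NVMe-class rule will then ignore the SATA SSDs, and vice versa.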
  15. [SOLVED] Could you please suggest an optimal Proxmox HA cluster with Ceph NIC configuration?

    Hi there. I have a PVE cluster built on 4 nodes; they have pretty much the same hardware spec, which is 2 CPU sockets, 256 GB of RAM and 3 network cards: 1x 2-port 40Gb, 1x 2-port 10Gb, 1x 4-port 1Gb, plus 1x 2.4 TB SSD hardware RAID5 and 1x 2 TB NVMe M.2. Because some configs are difficult to change and I...
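A common split with this kind of hardware is: fastest NICs for Ceph, mid-speed for VM traffic, and a dedicated slow link for corosync (which needs low latency, not bandwidth). A sketch of the Ceph side only; the subnets below are placeholder assumptions, not from the post:

```shell
# /etc/pve/ceph.conf (fragment) - subnets are placeholders
[global]
    public_network  = 10.10.10.0/24   # on the 40Gb ports: client/monitor traffic
    cluster_network = 10.10.20.0/24   # on the 40Gb ports: OSD replication traffic
```

Keeping corosync off the Ceph links matters because heavy rebalancing traffic can otherwise delay heartbeats and trigger fencing.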
  16. Two 3-node clusters (6 nodes)

    I'm in the process of designing a new HA cluster across two locations. The two locations are connected with multiple dark fibers, and I would like to create a robust HA system that can migrate VMs between the locations. For storage I'm planning on using Ceph, as I have experience with...
  17. Perform HA on all nodes

    Hello everyone. I've tried to find a way to implement HA (high availability) across all nodes, but I haven't found out how to do it, except with NFS, where I managed to. However, the issue is that when I turn off this NFS server, the machines that rely on NFS become unavailable. Is there a way or...
  18. Proxmox CT not able to start after shutting down the node

    Currently, I have a 2-node Proxmox cluster with a Raspberry Pi as a quorum device (on which I am running openmediavault as well). I created an HA group and added the CT. To test the HA I just shut down the node; the CT migrated to another node, however it failed to start. Action taken: removed from...
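When a CT relocates but then fails to start, the first thing to check is whether its root disk is on storage the other node can actually see. A sketch of the relevant commands; CT ID 105 is a placeholder:

```shell
# Register the container as an HA resource that should be running:
ha-manager add ct:105 --state started

# Watch what the HA stack decides during the failover test:
ha-manager status

# The CT can only start on the surviving node if its root disk lives on
# shared or replicated storage - check where it actually is:
pct config 105 | grep rootfs
```

If the rootfs points at node-local storage (e.g. local-lvm), HA can move the config but the volume does not exist on the target, which matches the "migrated but failed to start" symptom.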
  19. Configure HA Cluster

    Hello. I'm new to Proxmox and I'm trying to configure a high-availability cluster with two nodes. I have some VMs on one of the servers (node 1) and, in case it goes down, I want them to continue operating on node 2 without losing information. Currently, I am using an external vote to...
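Without shared storage, the usual two-node approach for "continue on node 2 without losing (much) information" is pvesr storage replication, which requires ZFS-backed disks on both nodes. A sketch; VM 100, job ID 100-0, and node name pve2 are placeholders:

```shell
# Replicate VM 100's ZFS volumes to node pve2 every 5 minutes:
pvesr create-local-job 100-0 pve2 --schedule "*/5"

# List replication jobs and their last sync status:
pvesr status
```

Note this is asynchronous: on a hard failure you can lose up to one replication interval of changes, so pick the schedule according to how much data loss is acceptable.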
  20. About multipath and LVM

    Hello everyone, I have configured 5 Proxmox nodes to use FC multipath. The only way to achieve high availability is to mount the LUNs as LVM, but LVM does not seem to support QCOW2-format disks. I want thin disks, otherwise disk resources will be consumed quickly, but I also want HA. Is there any...
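For context, a shared-LVM entry over a multipath FC LUN typically looks like this in /etc/pve/storage.cfg (san-lvm and vg_san are placeholder names for a VG created on the multipath device):

```shell
# /etc/pve/storage.cfg (fragment) - names are placeholders
lvm: san-lvm
        vgname vg_san
        content images
        shared 1
```

The trade-off behind the question: plain (thick) LVM marked shared is cluster-safe but only stores raw volumes, while LVM-thin would give thin provisioning yet its metadata is node-local, so it cannot safely be shared across nodes.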