Search results

  1. The correct way to centrally maintain PVE clusters

    Hi LnxBil, I could only find general blog posts online and didn't see an official operations manual. https://www.cnblogs.com/jackadam/p/15763362.html
  2. The correct way to centrally maintain PVE clusters

    Hi, I'm confused and don't feel like I'm going about this the right way. When a host in my cluster has a hardware failure and needs to be shut down for repair, I have to: 1. Migrate the virtual machines off the host. 2. Click the shutdown button for that host in the web UI. 3. After...
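The manual steps described in that post can be sketched on the command line with the standard PVE tools; the VM ID `100`, the HA resource ID, and the target node `node2` below are placeholders, not values from the thread:

```shell
# Hedged sketch of a node-maintenance drain; IDs and node names are
# illustrative placeholders for your own cluster.

# 1. Live-migrate each running VM off the failing host:
qm migrate 100 node2 --online

# 2. Optionally tell the HA manager to leave the VM alone while the
#    node is down, so no unwanted failover is triggered:
ha-manager set vm:100 --state ignored

# 3. Shut the host down cleanly from its own shell:
systemctl poweroff
```

These are cluster-administration commands and assume a working Proxmox VE installation; on recent releases the same drain can also be done with the built-in node maintenance mode.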
  3. Half of the hosts in the cluster automatically restart due to abnormality

    Thanks. This is my HA setup. I did add virtual machines earlier to achieve failover, but because resources were over-allocated the failover failed and caused the restart problem, so I removed all the virtual machines. I'd also like to ask: is the watchdog mechanism unavoidable? The network...
  4. Cluster reset when one node can only be reached over corosync ring0 -- configuration problem?

    Hi, I'd like to know whether you've made any progress on this issue. I'm troubled by the same problem and really don't know the cause. To be precise, I'm not sure what mechanism triggered the cluster restart. This is my question...
  5. Half of the hosts in the cluster automatically restart due to abnormality

    Thank you very much for your answer. This problem has really been bothering me. I also asked my network colleagues to help investigate, and no other unusual problems were found. The same happened in my earlier test environment: when adding nodes normally, all hosts restarted inexplicably. I really...
  6. Half of the hosts in the cluster automatically restart due to abnormality

    I especially want to know what protection mechanism in a PVE cluster causes hosts to restart automatically. Environment: 13 hosts in the cluster, node1-13. Version: pve-manager/6.4-4/337d6701 (running kernel: 5.4.106-1-pve). Network environment: two switches, A and B...
  7. kernel: vmbr0: received packet on bond0 with own address as source address

    Hi. In our environment, broadcast flooding occurs because vlan10 and the untagged traffic belong to the same broadcast domain. The host receives packets it sent itself because bridge-vids 2-4094 in the interfaces file includes vlan10. This could be fixed on the host side, but we are considering not changing...
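The host-side fix hinted at there (not carrying the host's own VLAN on the guest bridge) would look roughly like the fragment below in /etc/network/interfaces; the specific VLAN IDs 20, 30, 40 are assumptions for illustration, not values from the post:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    # Instead of the default "bridge-vids 2-4094", list only the VLANs
    # the guests actually need, excluding the vlan10 used by the host
    # itself so the bridge stops looping the host's own traffic back:
    bridge-vids 20 30 40
```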
  8. kernel: vmbr0: received packet on bond0 with own address as source address

    I don't quite understand the network flow graph you mentioned. What exactly does it mean? Could you give me an example? I can post my configuration.
  9. kernel: vmbr0: received packet on bond0 with own address as source address

    Yes, I have confirmed that the physical switch is configured with LACP, and I also asked my network colleagues to compare the configurations, so I'm very confused.
  10. kernel: vmbr0: received packet on bond0 with own address as source address

    There are 7 hosts in my cluster environment; each host is bonded with LACP (layer2+3) and has a vmbr0. When I randomly disconnect a host's network connection, this error occurs every few seconds. I have tried many solutions but still got nowhere, including asking online...
  11. [SOLVED] Standalone node - no cluster defined

    I solved the problem: it was mainly caused by the pve-ssl.pem certificate of node7 in my cluster, which was 0 KB in size. I copied the pve-ssl.key and pve-ssl.pem files from one of the remaining nodes in the cluster, and the cluster displayed normally again. But I'm not sure that copying...
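Copying another node's certificate as described in that post is questionable, since each node's pve-ssl.pem must match that node's own key and name. The usual way to repair a broken or empty node certificate is to regenerate it; a sketch, assuming a Proxmox VE node with working cluster quorum:

```shell
# Run on the node with the broken 0-byte pve-ssl.pem (node7 in the post):
# regenerate the node's SSL certificate from the cluster CA,
pvecm updatecerts --force
# then restart the web proxy so it picks up the new certificate.
systemctl restart pveproxy
```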
  12. [SOLVED] Standalone node - no cluster defined

    My PVE 6.4 cluster had abnormal communication between hosts due to network problems, and some hosts crashed and restarted. After a host restarts, its virtual machines run normally, but the cluster status shows "Standalone Node: Cluster not defined". Yet I can still manage the entire cluster...
  13. Restart protection mechanism

    I'm not sure, but twice, adding a host to a 15-node cluster caused the cluster to restart. On another occasion, restarting one host caused the whole cluster to restart. Both were on PVE 6.4. Thank you very much for the information; let me have a look at it first.
  14. Restart protection mechanism

    I want to know what triggers the restart protection mechanism of a PVE cluster. Is there any detailed documentation?
  15. TASK ERROR: clone failed: cfs-lock 'storage-vmdata' error: got lock request timeout

    I am deploying PVE 7.3 and using GFS 9.2 as shared storage, but batch cloning does not work. The same configuration allows batch cloning on PVE 6.4. Looking forward to an answer, thank you very much. The following is the relevant information from my PVE 6.4...
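A common workaround for cfs-lock timeouts during batch cloning (an assumption on my part, not an answer from the thread) is to serialize the clones so that only one operation holds the storage lock at a time. The template ID 9000, target IDs, and storage name `vmdata` below are placeholders:

```shell
# Clone template 9000 into VMs 201-205 one at a time, so each clone
# holds the 'storage-vmdata' cfs-lock alone (all IDs are placeholders):
for id in 201 202 203 204 205; do
    qm clone 9000 "$id" --full --storage vmdata
done
```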
