Search results

  1. How to properly maintain the nodes in the cluster

    Thanks, bro. I have not turned on HA for now because my resources are limited and the virtual machines are overcommitted. I am currently using GlusterFS. Yesterday I did the maintenance and shut the node down directly; it went smoothly.
  2. How to properly maintain the nodes in the cluster

    Can I directly click the shutdown button for the node that needs maintenance? I have already migrated the virtual machine.
  3. How to properly maintain the nodes in the cluster

    Hi, I am looking for information about the correct shutdown procedure when a node's hardware fails in the cluster. Can I just click the shutdown button in the cluster, or do I need to stop some services first, assuming the operating system itself is still working normally? This is the status and version...
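
    A minimal sketch of the checks that are usually suggested before powering a node off, assuming the guests have already been migrated away and no HA resources are active on that node (these are the stock PVE CLI tools; worth verifying against your own version):

      # run on the node that is about to be shut down
      pvecm status          # confirm the cluster is quorate before removing one vote
      ha-manager status     # confirm no HA resources are still bound to this node
      qm list               # confirm no guests are still running locally
      shutdown -h now       # a clean OS shutdown stops the PVE services in order
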
  4. The correct way to centrally maintain PVE clusters

    Hi LnxBil, I was only able to find general blog posts online and did not see an official operations manual. https://www.cnblogs.com/jackadam/p/15763362.html
  5. The correct way to centrally maintain PVE clusters

    Hi, I'm confused and I don't feel like I'm doing this the right way. When a host in my cluster has a hardware failure and needs to be shut down for a replacement, I do the following: 1. Migrate the virtual machines off the host. 2. Click the shutdown button on the host's page in the web UI. 3. After...
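
    The same three steps can also be done from the shell; a rough sketch, where the VMID 100 and the target node name node2 are placeholders for illustration only:

      # 1. migrate the guests off the failing host (repeat per VMID, or use the bulk-migrate action in the web UI)
      qm migrate 100 node2 --online
      # 2./3. once nothing is left running locally, power the host off
      qm list
      shutdown -h now
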
  6. Half of the hosts in the cluster automatically restart due to abnormality

    Thanks, bro. This is my HA setup. I did add virtual machines to HA earlier to get failover, but because resources were over-allocated the failover failed and caused the restart problem, so I removed all the virtual machines from HA again. In addition, I would like to ask: is the watchdog mechanism unavoidable? The network...
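
    As far as I understand it (worth confirming in the HA chapter of the admin guide), the watchdog only resets a node while the local resource manager is active for HA resources, so it is not unavoidable; a quick sketch of how to see whether it is armed on a node:

      ha-manager status                        # any HA resources listed here keep fencing armed
      systemctl status pve-ha-lrm pve-ha-crm   # state of the local/cluster resource managers
      journalctl -u watchdog-mux -u pve-ha-lrm # watchdog and fencing messages around a reboot
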
  7. Cluster reset when one node can only be reached over corosync ring0 -- configuration problem?

    Hi, I would like to know whether you have made any new progress on this issue. I am also troubled by it and I really don't know what the cause is. To be precise, I'm not sure which mechanism triggered the cluster restart. This is my question...
  8. Half of the hosts in the cluster automatically restart due to abnormality

    Thank you very much for your answer. This problem really bothers me. I also asked my network colleagues to help investigate it, and no other unusual problems were found. Even in my previous test environment, when adding a node normally, all hosts restarted inexplicably. I really...
  9. Half of the hosts in the cluster automatically restart due to abnormality

    I especially want to know what protection mechanism in the PVE cluster can make a host restart automatically. Environment: there are 13 hosts in the cluster, node1-13. Version: pve-manager/6.4-4/337d6701 (running kernel: 5.4.106-1-pve). Network environment: there are two switches, A and B...
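
    If the reboots come from HA self-fencing, the trail usually starts with a loss of corosync membership on those hosts; a minimal sketch of where one could look first (standard tools only, nothing version-specific assumed):

      pvecm status                                        # quorum and membership as seen right now
      corosync-cfgtool -s                                 # per-link status of the corosync rings
      journalctl -u corosync -u pve-cluster --since "1 hour ago"
                                                          # membership changes and token timeouts before the reboot
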
  10. kernel: vmbr0: received packet on bond0 with own address as source address

    Hi, in our environment broadcast flooding occurs because vlan10 and the untagged traffic belong to the same bridge domain (BD). The host receives packets it sent itself because bridge-vids 2-4094 in the interfaces file includes vlan10. This could be fixed on the host machine, but we are considering not changing...
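
    A sketch of the kind of change being discussed on the host side, assuming a VLAN-aware bridge; the interface names are illustrative and the real bond/bridge names should be substituted:

      # /etc/network/interfaces (excerpt)
      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-9 11-4094   # leave vlan10 out so the bridge no longer carries it tagged
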
  11. kernel: vmbr0: received packet on bond0 with own address as source address

    I don't quite understand the network traffic diagram you mentioned. What exactly does it mean, brother? Or could you give me an example? I can post my configuration.
  12. kernel: vmbr0: received packet on bond0 with own address as source address

    Yes, I have confirmed that the physical switch is configured with LACP, and I also asked my network colleagues to compare the configurations, so I'm very confused.
  13. kernel: vmbr0: received packet on bond0 with own address as source address

    There are 7 hosts in my cluster environment; each host has an LACP bond with a layer2+3 hash policy, and each host has a vmbr0. When I randomly disconnect the network link of a host, this error occurs every few seconds. I have tried to find a lot of solutions but still got nothing, including asking online...
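
    For comparison, a sketch of a typical LACP bond stanza on the PVE side; NIC names and addresses are illustrative. That particular kernel message is often a sign that frames are being reflected back to the bond, for example when the port-channel on the switch side does not match the bond while one link is down:

      # /etc/network/interfaces (excerpt; eno1/eno2 and the addresses are placeholders)
      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          bond-xmit-hash-policy layer2+3
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24
          gateway 192.0.2.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
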
  14. [SOLVED] Standalone node - no cluster defined

    I solved the problem. The cause was the pve-ssl.pem certificate of node node7 in my cluster: the certificate size was 0 KB. I copied the pve-ssl.key and pve-ssl.pem files from one of the other nodes in the cluster, and the cluster is displayed normally again. But I'm not sure that copying...
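
    Since pve-ssl.pem is issued per node (as far as I know its common name is the node's own name), copying another node's pair may make the GUI work but is probably not ideal; a sketch of regenerating it instead, run on the affected node:

      pvecm updatecerts --force    # reissue the node certificate from the cluster CA
      systemctl restart pveproxy   # let the web interface pick up the new certificate
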
  15. [SOLVED] Standalone node - no cluster defined

    Communication between the hosts in my PVE 6.4 cluster became abnormal because of network problems, and some hosts crashed and restarted. After a host restarts, its virtual machines are fine, but the cluster status changes to "Standalone node - no cluster defined". Even so, I can still manage the entire cluster...
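
    A quick sketch of what could be checked on the affected host before changing anything, since in this thread the culprit turned out to be an empty node certificate:

      systemctl status pve-cluster corosync   # pmxcfs and corosync should both be active
      pvecm status                            # does the node still see the cluster and quorum?
      ls -l /etc/pve/local/pve-ssl.*          # a 0-byte pve-ssl.pem was the cause reported in this thread
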
  16. Restart protection mechanism

    I'm not sure, but twice adding a host to a cluster of 15 caused the cluster to restart. On another occasion, restarting one host caused the cluster to restart. Both clusters are PVE 6.4. Thank you very much for the information; let me have a look at it first.
  17. Restart protection mechanism

    I want to know what triggers the restart protection mechanism of a PVE cluster. Is there any detailed documentation?
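
    As far as I understand it, the only built-in mechanism that reboots a node by itself is watchdog-based self-fencing in the HA stack, which fires when a node running HA resources loses quorum; a sketch of how to check whether that is what happened after an unexpected reboot:

      journalctl -b -1 -u pve-ha-lrm -u watchdog-mux   # end of the previous boot: was the LRM active, did the watchdog expire?
      ha-manager status                                # whether any HA resources keep fencing armed right now
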
  18. TASK ERROR: clone failed: cfs-lock 'storage-vmdata' error: got lock request timeout

    I am deploying PVE 7.3 with GFS 9.2 as shared storage, but batch cloning does not work there, while the same configuration allows batch cloning on PVE 6.4. Looking forward to an answer, thank you very much. The following is the relevant information from my PVE 6.4...
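
    Until the cause of the lock contention is clear, one workaround that is sometimes used is to serialize the clone jobs so that only one of them holds the storage-wide cfs lock at a time; a sketch, where the template VMID 9000 and the new IDs are placeholders and vmdata is the storage name taken from the error message:

      # clone one guest at a time instead of starting all jobs in parallel
      for id in 201 202 203; do
          qm clone 9000 "$id" --full --storage vmdata
      done
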