Recent content by Sxilderik

  1. [SOLVED] PVE 5.0.21: cluster stopped working after reboot of a failed node while all LXCs on that node were running before reboot…

     Found the solution in this thread (https://github.com/corosync/corosync/issues/506): I restarted corosync.service on haumea, and it worked! (Commands sketched below, after this list.)
  2. [SOLVED] PVE 5.0.21: cluster stopped working after reboot of a failed node while all LXCs on that node were running before reboot…

     Hmm, output of corosync-cfgtool -s on both nodes (nodeid 1 is haumea, nodeid 2 is makemake): root@haumea:~# corosync-cfgtool -s Printing link status. Local node ID 1 LINK ID 0 addr = 172.16.1.125 status: nodeid 1: link enabled:1 link connected:1...
  3. [SOLVED] PVE 5.0.21: cluster stopped working after reboot of a failed node while all LXCs on that node were running before reboot…

     Result when connecting to makemake:8006 and haumea:8006. I guess this is what is called a “split brain”… How can I recover from that? What are my options? Thanks for any help… (One recovery option is sketched after this list.)
  4. [SOLVED] PVE 5.0.21: cluster stopped working after reboot of a failed node while all LXCs on that node were running before reboot…

     Sorry, I broke the 10000-character limit. On makemake (the failed node), same command (journalctl -b -u pve*): 10:48:55 systemd[1]: Starting Proxmox VE Login Banner... 10:48:55 systemd[1]: Starting Commit Proxmox VE network changes... 10:48:55 systemd[1]: Starting Proxmox VE firewall logger... 10:48:55... (See the log-collection sketch after this list.)
  5. [SOLVED] PVE 5.0.21: cluster stopped working after reboot of a failed node while all LXCs on that node were running before reboot…

     Hello, I’ve been running a two-node cluster (haumea and makemake, yes, I’m into trans-Neptunian objects) for quite some time now, without any problem. Lately, I noticed that on the web interface one node (makemake) was marked failed, but the containers inside it still worked, so I kept postponing...
  6. [SOLVED] LXC: reviving containers gives nogroup nobody

     Thanks, setting unprivileged to 0 worked! I can’t remember why I created that container as privileged in the first place, though. I now have another problem, but I’ll start a new thread for that, which I solved by myself :) Thanks again! (The config change is sketched after this list.)
  7. [SOLVED] LXC: reviving containers gives nogroup nobody

     Hello, I accidentally removed my containers, or so I thought, until I realized the disks were still available. So I created a new container (208) with the same specs as the old one (108), then edited the 208.conf file to replace the disk location with the 108 one. It doesn’t work. Can’t log in. I... (The conf edit is sketched after this list.)
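
For reference, a minimal sketch of the recovery described in items 1 and 2, assuming the same two-node setup (haumea = nodeid 1, makemake = nodeid 2); adapt the host names to your cluster:

    # on the node whose cluster communication broke (here: haumea)
    systemctl restart corosync     # restarting corosync is what solved it here (see corosync issue #506)
    systemctl restart pve-cluster  # optional: restart pmxcfs if /etc/pve is still read-only afterwards

    # then verify on both nodes
    corosync-cfgtool -s            # every link should report "connected:1"
    pvecm status                   # membership should list both nodes and show "Quorate: Yes"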
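
The “split brain” view in item 3 is what a two-node cluster looks like once the nodes can no longer see each other: each web interface only trusts its own node. One cautious way to get a quorate node back while the link is being repaired (it does not fix the underlying corosync problem, and should not be used with HA active) is:

    pvecm status       # on each node: "Quorate: No", only the local node listed
    pvecm expected 1   # temporarily lower the expected votes so this node becomes quorate again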
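
Item 4 runs into the forum’s 10000-character limit when pasting logs inline; the same journalctl output can be collected into a file and attached instead (the file name is just an example):

    journalctl -b -u 'pve*' > /tmp/makemake-pve-boot.log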
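
For items 6 and 7, a sketch of what the edited container config might look like once both changes are in place. The storage name local-lvm, the volume name vm-108-disk-0 and the size are placeholders, not values taken from the posts:

    # /etc/pve/lxc/208.conf  (relevant lines only)
    rootfs: local-lvm:vm-108-disk-0,size=8G   # reuse the surviving disk of the old CT 108
    unprivileged: 0                           # the old container was privileged, so its files are owned
                                              # by real uids/gids; an unprivileged CT would map them all
                                              # to nobody/nogroup, which is the symptom in the thread title

    # after editing, start and test:
    pct start 208
    pct enter 208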
