Search results

  1. Cluster losing quorum over a troublesome node

    Hello, Our problems started this week after we tried to upgrade a node in a 4-node cluster from 4.4 to 5.0. The upgrade itself went fine. After the first reboot the whole cluster went offline due to fencing. This repeated every single time we tried to bring this node back online again. So, I...
  2. corosync/totem question

    Trouble is, the node that the create was run on has since been removed. Should I change this or leave it as it is?
  3. corosync/totem question

    Hello, We have a node with a failed Proxmox 4.4->5.0 upgrade that makes all remaining cluster nodes (4 of them) fence themselves whenever it comes online. It seems I will have to reinstall the node, for whatever reason. I've just noticed something strange in corosync.conf while backing up the configuration...
  4. Cluster reliability when removing nodes

    Ok, thanks for clarification.
  5. Cluster reliability when removing nodes

    That's what I reckoned as well, but why the warnings of apocalypse in the documentation? So, if I want to reinstall or add a new hardware node under a previously "blacklisted" hostname, all I have to do is remove the keys from /root/.ssh? (See the host-key sketch after the results list.)
  6. Cluster reliability when removing nodes

    It would be good to have some info on the impact of using the same hostname in the same subnet for something else (we, for instance, recycle the names and name hosts sequentially). Also, what if I want to reinstall a Proxmox host for whatever reason and rejoin it under the same name?
  7. How to increase cluster communication timeout?

    I understand what you're saying; I was just hoping there's a way to tune the timeouts.
  8. How to increase cluster communication timeout?

    All my NICs are in use, and I was thinking of a more temporary solution until we replace the switch. And I still need HA. This issue happens twice per month.
  9. How to increase cluster communication timeout?

    Hello, We are currently having issues with a switch stack connecting our cluster nodes. The switch restarts intermittently, and the cluster nodes fence themselves and restart. This loss of communication lasts between 60 and 120 seconds. Until we replace the switch, is there any way to temporarily... (See the timeout sketch after the results list.)
  10. Proxmox 4.4 not running init inside lxc

    It is strange, to be sure. journalctl:
      Jun 08 09:46:50 03 kernel: EXT4-fs (dm-7): mounted filesystem with ordered data mode. Opts: (null)
      Jun 08 09:46:50 03 kernel: IPv6: ADDRCONF(NETDEV_UP): veth114i0: link is not ready
      Jun 08 09:46:51 03 kernel: device veth114i0 entered promiscuous mode...
  11. Proxmox 4.4 not running init inside lxc

    Ok, so maybe a bit of background. We have a 4-node cluster. This has been happening on one of the nodes since a recent reboot of that host. Before the reboot there were no issues with containers. These containers init properly when migrated to other hosts. Whenever I migrate a CT over to the host in...
  12. Proxmox 4.4 not running init inside lxc

    Hello, As of a couple of days ago, LXC containers won't run their designated runlevel (CentOS 6.x container):
      USER PID %CPU %MEM   VSZ  RSS TTY STAT START TIME COMMAND
      root    1  0.0  0.0 19292 2324 ?   Ss  10:45 0:00 /sbin/init
      root   98  0.0  0.0 11500...
  13. Suggestion - manage any lxc container via pct regardless of host

    You mean this https://pve.proxmox.com/wiki/Proxmox_VE_API ? Can't find relevant/elegant examples for pct list/enter...
  14. Suggestion - manage any lxc container via pct regardless of host

    Hello, Working daily with LXC in a Proxmox cluster, there's really one thing I would like to see implemented, and that's being able to use commands like pct list/enter/whatever on any LXC container regardless of which host you are currently logged on to. For instance, a global pct list (showing all... (See the cluster-wide listing sketch after the results list.)
  15. How to shut down a Proxmox cluster gracefully

    Ok, thanks. Is there any way to make sure no quorum shenanigans occur, since I cannot rule out a race condition? Something like pvecm expected 1 before the shutdown? Does that make sense?
  16. How to shut down a Proxmox cluster gracefully

    Hi, We have a 4-node cluster running LXC over shared storage (LVM based). All VMs are HA-enabled. So, if I do a simple "init 0" on all hosts, will all VMs shut down without crazy stuff (VM restarts, migration hangs) going on because of quorum loss, shutdown race conditions, etc.?
  17. How to shut down a Proxmox cluster gracefully

    Hello, As stated in the title: what's the best way to shut down a Proxmox cluster gracefully, either by running a script on each node or preferably from a central location? I have to implement and test an emergency shutdown scenario in case of a severe power outage. The script should be triggered by... (See the shutdown sketch after the results list.)
  18. Proxmox 4.4 lxc memory usage reporting

    Thx, already went through with it, everything is fine.
  19. Proxmox 4.4 lxc memory usage reporting

    Okay. Since we are on 4.4 without subscriptions, will simply doing a dist-upgrade get me there, or will it upgrade to the 5 beta? (See the repository note after the results list.)
  20. Proxmox 4.4 lxc memory usage reporting

    pveversion -v
      proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
      pve-manager: 4.4-1 (running version: 4.4-1/eb2d6f1e)
      pve-kernel-4.4.6-1-pve: 4.4.6-48
      pve-kernel-4.4.35-1-pve: 4.4.35-76
      pve-kernel-4.4.8-1-pve: 4.4.8-52
      lvm2: 2.02.116-pve3
      corosync-pve: 2.4.0-1
      libqb0: 1.0-1
      pve-cluster...
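
Host-key sketch (the "Cluster reliability when removing nodes" thread above): a minimal sketch of clearing stale SSH host keys before a reinstalled node rejoins under a previously used name. The hostname, the IP address, and the cluster-wide known_hosts path are assumptions about a typical setup, not a verified procedure.

    # Placeholder hostname/IP; substitute the node actually being recycled.
    ssh-keygen -R oldnode -f /root/.ssh/known_hosts      # drop the stale host-key entry
    ssh-keygen -R 192.0.2.15 -f /root/.ssh/known_hosts   # and its IP, if listed separately
    # Assumption: the cluster also keeps a shared known_hosts under /etc/pve/priv/.
    ssh-keygen -R oldnode -f /etc/pve/priv/known_hosts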
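
Timeout sketch (the "How to increase cluster communication timeout?" thread above): a rough illustration of where the totem token timeout lives. The 10-second value is purely illustrative, and whether stretching it is safe alongside HA and the watchdog is not something this sketch settles.

    # corosync takes its timeouts from the totem section of the cluster-wide
    # config (/etc/pve/corosync.conf on Proxmox). Illustrative edit:
    #
    #   totem {
    #     ...
    #     token: 10000    # token timeout in milliseconds (assumed value)
    #   }
    #
    # Bump config_version in the same file, then reload corosync on the nodes:
    systemctl restart corosync    # per node; corosync-cfgtool -R may also work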
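
Cluster-wide listing sketch (the "Suggestion - manage any lxc container via pct regardless of host" thread above): there is no global pct list, but the cluster resources API can approximate one from any node. A rough pvesh example; the exact option syntax and output fields may differ between versions.

    # Lists every guest (qemu and lxc) known to the cluster, from whichever node you run it on.
    pvesh get /cluster/resources --type vm
    # Each entry carries a type (lxc/qemu), the owning node and the vmid,
    # which tells you where a subsequent pct enter <vmid> has to be run.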
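
Shutdown sketch (the "How to shut down a Proxmox cluster gracefully" thread above): a minimal sketch of the kind of script being asked about, run from one management host that can reach all nodes over SSH. Node names are placeholders, the ordering reflects the usual advice of disarming HA before powering off, and it should be tested before being wired to any power-outage trigger.

    #!/bin/sh
    # Placeholder node names; adjust to the real cluster.
    NODES="pve1 pve2 pve3 pve4"

    # 1) Stop the HA services everywhere first, so the watchdog cannot fence a
    #    node that loses quorum while its peers are already powering off
    #    (assumed order: LRM before CRM).
    for n in $NODES; do
        ssh root@"$n" "systemctl stop pve-ha-lrm pve-ha-crm"
    done

    # 2) Shut the nodes down; guests are stopped by the normal host shutdown.
    for n in $NODES; do
        ssh root@"$n" "shutdown -h now" &
    done
    wait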
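
Repository note (the "Proxmox 4.4 lxc memory usage reporting" thread above): as long as the APT sources still point at the Debian jessie based 4.x repositories, a dist-upgrade stays within the 4.x line; the 5.x beta would only come in after switching the sources to stretch. A sketch, assuming the no-subscription repository:

    # Assumed repository line for staying on the 4.x branch (no-subscription):
    #   deb http://download.proxmox.com/debian jessie pve-no-subscription
    grep -r proxmox /etc/apt/sources.list /etc/apt/sources.list.d/
    apt-get update && apt-get dist-upgrade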
