Search results

  1. J

    [SOLVED] After upgrade to 5.2.11 Corosync does not come up

Here it is described how to change corosync.conf for a stale member: https://pve.proxmox.com/wiki/Separate_Cluster_Network#Write_config_when_not_quorate But how to change it?
  2. J

    [SOLVED] After upgrade to 5.2.11 Corosync does not come up

Hi, first of all, we use the non-subscription repos. Today we wanted to upgrade to the newest version, 5.2.11. After upgrading one node, it cannot join the cluster again... The remaining running nodes are still on the old version: 5.2-6. (VM start reports "waiting for...
  3. J

    Clusterproblem - all nodes restarted automatically

Dual configuration done on a running cluster as described above: 1. edit /etc/pve/corosync.conf on one node (with the dual config): add a unique ring1_addr for all nodes, add an interface for the second network (e.g. for ceph), increase config_version and set "rrp_mode: passive"; 2. restart... (a corosync.conf sketch along these lines is appended after the results list)
  4. J

    Clusterproblem - all nodes restarted automatically

Thanks a lot, I will try! The dual configuration makes absolute sense to me... to make it more fault tolerant. I will add my ceph network as a second ring: 10.0.99.0. A quick look at the tutorial showed me that some commands are missing in my Proxmox, like the maintenance mode: "crm configure...
  5. J

    Clusterproblem - all nodes restarted automatically

@wolfgang: It is clear that "worked for years" is not proof that it is correct... but almost ;) As I wrote, we've been running it for years without any problems, and I wasn't aware that something like this could happen. That one should use a dedicated network for ceph is described in...
  6. J

    Clusterproblem - all nodes restarted automatically

Hi, it is a "standard" configuration which has been working since the cluster was created with Proxmox 4 a few years ago, and it has worked all the time. At night there are scheduled backups that put the network under load... but this has also been running for years. Hm, so the reboot can happen...
  7. J

    Clusterproblem - all nodes restarted automatically

Hi, tonight all nodes of a three-node cluster restarted automatically after a crash, all at the same time... very strange. We never had this before; it looks like a clustering issue. Any ideas what happened here? Our version: proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve) pve-manager: 5.2-6...
  8. J

    Migrate Architecture (Cross-grade) of LXC i386 to amd64

Hi, we have some lxc containers which are running in i386 mode. They should be migrated to amd64. The containers run a Debian Stretch OS inside. I manually edited the /etc/pve/lxc.conf of a container and changed the architecture: #arch: i386 arch: amd64 (a sketch of this change is appended after the results list). The container then boots the 64-bit...
  9. J

    4.15 based test kernel for PVE 5.x available

After a week in a production environment with 4.15.3-1... again one node with a question mark. I will try 4.15.10-1-pve
  10. J

    Node with question mark

Migration was no solution for us; the problem "migrated" to the other nodes as well... The reason for our "nodes with question mark" was a nightly offline backup in stop/start mode of some lxc containers. Our workaround: we switched the backup to "snapshot" mode and have had no freezes since ;) (a vzdump example is sketched after the results list)...
  11. J

    Proxmox 5.1 Minor Downgrade

I think we have already passed your testlab... our restart is every 2 mins :D
  12. J

    4.15 based test kernel for PVE 5.x available

The new test kernel works in our testlab without any lxc start/stop problems. Please see also: https://forum.proxmox.com/threads/proxmox-5-1-minor-downgrade.42138/#post-202730
  13. J

    Proxmox 5.1 Minor Downgrade

Everything works well in our testlab with the new 4.15 test kernel. We have also implemented the crontab lxc restart solution, which has been running for almost 24 h without any problem... :)
  14. J

    Proxmox 5.1 Minor Downgrade

Thanks for the hint, it is also described in this post here... https://forum.proxmox.com/threads/4-15-based-test-kernel-for-pve-5-x-available.42097/ ...but if you have mixed environments with qemu and lxc it is not without risk (qemu is possibly broken...). I will test it... There is...
  15. J

    Proxmox 5.1 Minor Downgrade

Hi, because of this problem here, "Nodes with Question Mark", one of our clusters is in an unusable state: https://forum.proxmox.com/threads/node-with-question-mark.41180/page-2#post-202390 Since there is no information from Proxmox on whether a fix is planned and when, and none of the workarounds...
  16. J

    Node with question mark

Again my question: @tom, is there an upgrade planned with the fixed kernel?
  17. J

    Node with question mark

Unfortunately, neither solution works for me. Every day one node crashes... unusable. @tom Is there an upgrade planned for this issue?
  18. J

    Node with question mark

    Thanks for the hint, I will also try the "solution" from here ... we don't have zfs: https://forum.proxmox.com/threads/proxmox-ve-5-1-zfs-kernel-tainted-pvestatd-frozen.38408/#post-189727
  19. J

    Node with question mark

5.1.46: now this has happened for the second time this week on the same node in my cluster. When I restart "pvestatd" on the node, the KVMs become visible again (the restart command is sketched after the results list). Some of the lxc containers are running and some are dead; "pct list" hangs...
  20. J

    Meltdown and Spectre Linux Kernel fixes

    Tested with latest Proxmox 5.2 and 4, NOT VULNERABLE. Thank you! Spectre and Meltdown mitigation detection tool v0.27 Checking for vulnerabilities against live running kernel Linux 4.13.13-6-pve #1 SMP PVE 4.13.13-41 (Wed, 21 Feb 2018 10:07:54 +0100) x86_64 CVE-2017-5753 [bounds check bypass]...
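
The dual-ring steps quoted in result 3 are easier to follow next to a concrete file. Below is a minimal sketch, assuming a PVE 5.x cluster running corosync 2.x, of what such an /etc/pve/corosync.conf excerpt could look like; the cluster name, node name and all addresses are placeholders, except the 10.0.99.0 ceph network mentioned in result 4.

    totem {
      version: 2
      cluster_name: mycluster        # placeholder
      config_version: 5              # must be increased on every edit
      rrp_mode: passive              # redundant ring mode, as described in the post
      interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0     # placeholder: existing cluster network
      }
      interface {
        ringnumber: 1
        bindnetaddr: 10.0.99.0       # second ring, e.g. the ceph network
      }
    }

    nodelist {
      node {
        name: node1                  # placeholder
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 192.168.1.11     # placeholder
        ring1_addr: 10.0.99.11       # unique ring1_addr per node
      }
      # ... one node { } block per cluster member ...
    }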
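Result 8 quotes the architecture change only inline. As a sketch, assuming the container config lives under /etc/pve/lxc/<CTID>.conf (the post itself refers to the file as /etc/pve/lxc.conf, and the container ID below is a placeholder), the edit could look like this:

    # excerpt of a container config, e.g. /etc/pve/lxc/101.conf
    #arch: i386        (old 32-bit architecture, commented out)
    arch: amd64        # new architecture, applied on the next container start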
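The workaround in result 10, switching the nightly backup from stop/start to snapshot mode, corresponds to the vzdump backup mode. A minimal sketch, assuming container ID 101 and a storage named backup-store (both placeholders):

    # one-off backup of a container in snapshot mode instead of stop mode
    vzdump 101 --mode snapshot --storage backup-store

The same mode can be selected for scheduled backup jobs in the GUI under Datacenter > Backup.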
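Restarting pvestatd, as described in result 19, is a plain systemd service restart on the affected PVE 5.x node; a short sketch:

    # restart the PVE statistics daemon and check that it came back up
    systemctl restart pvestatd
    systemctl status pvestatd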