Search results

  1. Proxmox 5.1 Minor Downgrade

    Yes, I passed 36 hours with no issues at all on the 4.15 kernel.
  2. [SOLVED] Proxmox 5.1.46 LXC cluster error Job for pve-container@101.service failed

    No NFS, I only have ZFS. Backups run fine with ZFS, and there are no issues with kernel 4.15. I have issues on 25 live nodes with ZFS running kernel 4.13, even with no backups running.
  3. [SOLVED] Proxmox 5.1.46 LXC cluster error Job for pve-container@101.service failed

    For me, 18 hours have passed with all 5 guests getting restarted every 5 minutes. No issues.
  4. LXC container reboot fails - LXC becomes unusable

    Now 18 hours have passed; every 5 minutes all 5 guests are getting stopped and started. No errors yet. So yes, this kernel solves the issue.
  5. Proxmox 5.1 Minor Downgrade

    Only LXC has the issue. KVM works fine on 4.13. I hope they will release it as a stable version in 2 days.
  6. Proxmox 5.1 Minor Downgrade

    Kernel 4.15 is in the test repository now, so you can test it. We loaded the new kernel 4.15 on our Proxmox, created 5 LXC guests, and created a cron job to stop and start all 5 guests every 5 minutes. 6 hours have now passed with no errors yet. We are still running the test. CPU(s) 24 x Intel(R) Xeon(R) CPU L5639...
  7. [SOLVED] Proxmox 5.1.46 LXC cluster error Job for pve-container@101.service failed

    We loaded the new kernel 4.15, created 5 LXC guests, and created a cron job to stop and start all 5 guests every 5 minutes. 6 hours have now passed with no errors yet. We are still running the test. CPU(s) 24 x Intel(R) Xeon(R) CPU L5639 @ 2.13GHz (2 Sockets) Kernel Version Linux 4.15.3-1-pve #1 SMP PVE 4.15.3-1...
  8. LXC container reboot fails - LXC becomes unusable

    We loaded the new kernel 4.15, created 5 LXC guests, and created a cron job to stop and start all 5 guests every 5 minutes. 6 hours have now passed with no errors yet. We are still running the test. CPU(s) 24 x Intel(R) Xeon(R) CPU L5639 @ 2.13GHz (2 Sockets) Kernel Version Linux 4.15.3-1-pve #1 SMP PVE 4.15.3-1...
  9. LXC restart creates kworker CPU 100%

    When I restart the node, everything comes back to normal, but the issue happens again after a few hours.
  10. LXC restart creates kworker CPU 100%

    I also see the same issue after I upgraded to 5.1. Waiting for an update. kworker/u48:1 at 100%: top - 23:02:35 up 3 days, 23:57, 1 user, load average: 69.34, 64.66, 63.87 Tasks: 720 total, 17 running, 703 sleeping, 0 stopped, 0 zombie %Cpu(s): 1.4 us, 62.9 sy, 0.0 ni, 26.5 id, 9.0 wa...
  11. [SOLVED] Number of nodes we can add to a cluster?

    I have all my nodes connected to the same 1000 Mbps Cisco switch. Each node has 10 guests (both LXC and KVM). Corosync has a hardcoded limit of 32 nodes max per cluster. What would be a practical limit in my case? Will 10 nodes work fine for a cluster? Any suggestions?
  12. [SOLVED] How to reinstall a cluster node with CT

    We tested the same procedure on 4 more clusters to add new nodes with live guests. All worked fine.
  13. [SOLVED] Rename a Cluster (Not a Node)

    Spoonsause, please mark this thread as SOLVED.
  14. [SOLVED] How to reinstall a cluster node with CT

    I have the issue solved. Is there a way to mark this post as SOLVED?
  15. LXC container reboot fails - LXC becomes unusable

    Yes, I also confirm it. It has nothing to do with the file system. I am still having sleepless nights over this issue on 25 live nodes. It is very easy to reproduce on a plain node with 5 LXC guests and a cron job that stops and starts each LXC.
  16. [SOLVED] How to reinstall a cluster node with CT

    So it turned out that we can add a node with guests to a cluster if we want.
  17. [SOLVED] How to reinstall a cluster node with CT

    It worked perfectly. I added a node with guests to the cluster successfully.
  18. [SOLVED] Rename a Cluster (Not a Node)

    I tried it, and it worked fine. Thank you.
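
Several of the results above describe the same reproduction recipe: a cron job that stops and starts 5 LXC guests every 5 minutes until the containers wedge (on kernel 4.13) or survive (on 4.15). A minimal crontab sketch of that test, assuming container IDs 101-105 (hypothetical; substitute your own) and the standard Proxmox `pct` CLI:

```shell
# /etc/cron.d/lxc-stress-test — sketch of the stress test described above.
# Container IDs 101-105 are assumptions; replace with your real CT IDs.
# Every 5 minutes, stop and restart each LXC guest via the Proxmox `pct` tool.
*/5 * * * * root for id in 101 102 103 104 105; do pct stop "$id"; pct start "$id"; done
```

Left running, this loop reportedly reproduced the hang within hours on the affected 4.13 kernel, while the posters above ran it for 6 to 36 hours on 4.15 without errors.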