Search results

  1. LXC container reboot fails - LXC becomes unusable

    Reproducing is easy. Create 5 LXC CTs and run a cron job to stop and start each CT every minute. Within minutes you will see the issue (a minimal loop is sketched after this list).
  2. LXC container reboot fails - LXC becomes unusable

    My issue is exactly what denos is talking about, and it has no NFS involved.
  3. LXC container reboot fails - LXC becomes unusable

    I don't use NFS. I use ZFS. Only local drives. I have 25 live nodes, all with local ZFS drives, and all 25 nodes are facing issues at least once every 2 days.
  4. LXC container reboot fails - LXC becomes unusable

    Great news. I hope we will get the new update for Proxmox soon. That will end my sleepless nights.
  5. LXC container reboot fails - LXC becomes unusable

    You are lucky. I have 25 live nodes, and for the last week I have been having sleepless nights. I have lost complete faith in Proxmox.
  6. LXC container reboot fails - LXC becomes unusable

    What I meant is that lxc monitor is not running, like in the other issue. I didn't mention it in detail; I assumed we all knew.
  7. LXC container reboot fails - LXC becomes unusable

    Running service pvestatd restart made the node green, but all CTs are still grey.
  8. LXC container reboot fails - LXC becomes unusable

    Today we had a different issue. We terminated CT 154, and the node went RED. 154 got deleted; the node and all other CTs are pinging fine. Result of ps aux | grep 154:
    root@P158:~# ps aux | grep 154
    root 154 0.0 0.0 0 0 ? S< Mar02 0:00 [netns]
    27 5095 0.0 0.0 113276...
  9. LXC container reboot fails - LXC becomes unusable

    I can confirm 100% that reducing the ARC cache from 8GB to 1GB reduced the issue by almost 90%. But you did test with 4.14.20 and the issue was not there even with a 32GB ARC cache, right?
  10. LXC container reboot fails - LXC becomes unusable

    Yes, I took a fresh node and installed 5 CTs with CentOS 6, kept them in a loop to stop and start each CT every minute, and the issue happened within 10-20 minutes. Keep in mind I had ZFS, and the ARC cache was set at 8GB. When I changed the ARC cache to 1GB (see the ARC note after this list), the issue reduced to almost 10%; it ran for 2-4 hours...
  11. Can't start ct after stopping it

    I have only ZFS; I never tested on ext4.
  12. VM Performance

    You have just 2 VMs; up to 2 or 3 it is OK. An SSD is needed if you want more than 3.
  13. VM Performance

    For Windows 2012 it really matters even if you don't run anything in it. Windows 2012 takes a lot of resources when it starts.
  14. [SOLVED] Proxmox 5.1.46 LXC cluster error Job for pve-container@101.service failed

    I changed the ZFS ARC cache from 8GB to 1GB. Since then the issue hasn't happened YET on any of the 25 nodes for the last 36 hours.
  15. Can't start ct after stopping it

    I made a modification, and for the last 36 hours I haven't had the frozen-node issue. I changed the ZFS ARC cache from 8GB to 1GB; since then the issue hasn't happened yet.
  16. Can't start ct after stopping it

    Another member here has a custom-built kernel module; you can contact him. denos
  17. Can't start ct after stopping it

    It is a known bug in kernel 4.13.13. We need kernel 4.14.20+ to get this issue solved; Proxmox has not released it yet.
  18. Can't start ct after stopping it

    The issue we faced was a Debian bug, not an LXC bug. At first I thought it was a ZFS issue, but it is not.
  19. New server recommendations?

    RAID1 with 2x1TB for Proxmox (local) and RAID1 with 1TB SSD drives for VM storage. This is what I use, and I can run up to 6 or 7 KVM VMs or 15 LXC CTs. And it is reasonably fast.
  20. HIGH MEMORY CONSUMPTION IN KVM WITH WINDOWS

    Yes, 5.1 is the version I also use. And ballooning works perfectly.
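
Note: a minimal sketch of the stop/start loop mentioned in results 1 and 10, assuming CT IDs 101-105 and the standard pct CLI; the cron file name and IDs are placeholders, adjust them to your setup.

    # /etc/cron.d/ct-stress (hypothetical file name): stop and start five CTs every minute
    * * * * * root for id in 101 102 103 104 105; do pct stop $id; pct start $id; done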
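
Note: a minimal sketch of capping the ZFS ARC at 1GB as described in results 9, 10, 14 and 15; the 1 GiB value is only an example, and zfs_arc_max is the standard ZFS-on-Linux tunable.

    # Limit the ARC to 1 GiB at boot
    echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
    update-initramfs -u
    # Apply immediately without a reboot
    echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max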