Search results

  1. Rebooting a CT makes the whole thing stall

    I ended up rebooting the whole node after hours. Had no choice. I have yet to update it, too. I have never felt that containers are reliable on Proxmox. Wishing I had built all my Linux CTs as VMs instead. Am I wrong?
  2. Rebooting a CT makes the whole thing stall

    Package versions:
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)
    pve-manager: 5.3-11 (running version: 5.3-11/d4907f84)
    pve-kernel-4.15: 5.3-2
    pve-kernel-4.15.18-11-pve: 4.15.18-34
    pve-kernel-4.15.18-10-pve: 4.15.18-32
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1...
  3. Rebooting a CT makes the whole thing stall

    Having an urgent issue here: one of my nodes is stalled out (at least in the web UI). All I did was restart one of my containers, a simple cPanel-DNSONLY box, for an upgrade, and it's stalling out the whole node: none of the names show, and everything is grayed out for the VMs and CTs...
  4. Warning: Remote Host Identification Has Changed error

    I ran that command on all my nodes, and it seems to be working OK now. I think pve2 showed that error possibly because my browser was being wonky: it appeared when I went directly to the Shell via the Proxmox web GUI. I hadn't tried SSH. I didn't get this when I rebooted and re-logged into Proxmox...
  5. Warning: Remote Host Identification Has Changed error

    I have five nodes in my cluster, named pve0, pve1, pve2, pve3, and pve4, and I can shell into all of them except pve2, which gives me this: How do I even begin to make the suggested changes when I can't shell into the node? Thanks in advance.
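
The posts in results 4 and 5 never name "that command", but the standard remedy for the "Remote Host Identification Has Changed" warning is deleting the stale host-key entry from `known_hosts` with `ssh-keygen -R` — assuming that is what was run. A minimal self-contained sketch, using the node name `pve2` from the thread and throwaway files so the demo doesn't touch a real `~/.ssh/known_hosts`:

```shell
# Hypothetical sketch of the usual fix for the "Remote Host Identification
# Has Changed" warning: removing the stale host key for the node.
tmpdir=$(mktemp -d)

# Generate a throwaway key pair to stand in for pve2's old host key.
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/hostkey"

# Build a known_hosts file containing a stale entry for pve2.
printf 'pve2 %s\n' "$(cut -d' ' -f1-2 "$tmpdir/hostkey.pub")" > "$tmpdir/known_hosts"

# The fix: delete every known_hosts entry for the offending host.
# On a real client you would simply run:  ssh-keygen -R pve2
ssh-keygen -R pve2 -f "$tmpdir/known_hosts"

rm -rf "$tmpdir"
```

After the stale entry is gone, the next `ssh pve2` prompts you to accept the node's current key. This only helps when the key legitimately changed (e.g. after a reinstall); it does nothing if the node itself is unreachable, which matches the poster's deeper problem of not being able to shell in at all.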