Search results

  1. [SOLVED] After upgrade from 5.2-5, my server is now named "CloudInit"?

    It sure would look like that, but I did not. That cloud-init disk was created within the PVE admin UI. I remember doing it months ago to experiment with that feature. I never did anything in the shell for that.
  2. [SOLVED] After upgrade from 5.2-5, my server is now named "CloudInit"?

    It appears that after the upgrade, it's booting the wrong root? See attached screenshot. How do I get it to boot to the "root" and not the "vm-9000-cloudinit"?
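
    If grub's os-prober picked up the guest's cloud-init LV as a boot entry (an assumption here, not confirmed in the thread), a rough recovery sketch, once you are booted into the real root again, would be:

    ```sh
    # list logical volumes; the host root should be pve/root, not vm-9000-cloudinit
    lvs
    # keep os-prober from scanning guest LVs into the grub menu
    echo 'GRUB_DISABLE_OS_PROBER=true' >> /etc/default/grub
    # regenerate grub.cfg so the default entry points at the host root again
    update-grub
    ```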
  3. [SOLVED] After upgrade from 5.2-5, my server is now named "CloudInit"?

    I can't even imagine what I did wrong, but all I did was `apt-get update; apt-get dist-upgrade`. After the reboot, the server's name is "CloudInit" and it no longer has `/etc/pve` mounted. Even the hosts file is changed: root@CloudInit [~]: # cat /etc/hosts # Your system has configured...
  4. VLANs not working, but the server itself is

    I have a strange problem that I'm hoping someone can help with. I've set up 3 extra PM servers running under VMware ESXi to use as HA in case our primary physical server goes down. As a reference for this discussion, the 4 server names are: pve0 = bare metal; pve1, pve2 and pve3 = VMware ESXi. The...
  5. VMs won't boot if migrated to another cluster node

    I notice my profile is not showing that we are a PVE Enterprise customer (which, I guess, explains the lack of response here?). One of our license keys is listed in my profile though, so am I doing something wrong?
  6. VMs won't boot if migrated to another cluster node

    We brought up 3 new servers and created a cluster over the weekend (4 servers total). All of the VMs work fine on the original server, but if I migrate any of them, they won't boot on the new servers. It just stalls at "Booting from hard disk". All VM disks are located on an NFS share via...
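
    A quick sanity check, sketched here with a placeholder storage name (nfs-vmstore) and VM ID (100), can confirm the target node actually sees the NFS storage and the disk image:

    ```sh
    # on the target node: is the NFS storage active and mounted?
    pvesm status
    # does the node see the VM's disk image? (nfs-vmstore is a placeholder name)
    pvesm list nfs-vmstore
    # which storage and disk does the VM config actually reference?
    qm config 100
    ```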
  7. [SOLVED] Rebooting one cluster server causes the 2nd node to reboot?

    Never mind; since this is a 2-node cluster, I had to run `pvecm expected 1` on the main server.
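
    For reference, a minimal sketch of that workaround, run on the surviving node of a 2-node cluster:

    ```sh
    # check the current quorum state; with one node down the cluster is inquorate
    pvecm status
    # tell corosync that one vote is enough for quorum until the peer returns
    pvecm expected 1
    ```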
  8. [SOLVED] Rebooting one cluster server causes the 2nd node to reboot?

    I just logged into one of the cluster servers and rebooted it. Immediately after that, the primary server rebooted. What would cause this?
  9. [SOLVED] After upgrade today, "error not a correct xfs inode"

    I fixed the PAM error by booting with the Ubuntu 16 server ISO again and mounting the pve/root LVM volume. Once it was mounted, I ran `pam-auth-update --force`, then rebooted, and everything came up.
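
    A sketch of that rescue sequence, assuming the stock pve/root LV and any Linux live/rescue environment:

    ```sh
    # activate the LVM volume groups from the rescue environment
    vgchange -ay
    # mount the Proxmox root filesystem and switch into it
    mount /dev/pve/root /mnt
    chroot /mnt
    # rebuild the PAM configuration, then leave the chroot and reboot
    pam-auth-update --force
    exit
    reboot
    ```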
  10. [SOLVED] After upgrade today, "error not a correct xfs inode"

    PVE rescue mode did not work, but I was able to boot using an Ubuntu Server 16.04 LTS install disk and then the rescue mode from that. Once booted, I repaired grub (from the menu), then mounted the drive and re-ran `update-grub`. The server now comes up and I can get to the UI. But I cannot log...
  11. [SOLVED] After upgrade today, "error not a correct xfs inode"

    I updated one of our servers today to pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.40-1-pve). I was in an SSH console and the upgrade dropped my connection after: Setting up corosync-pve (2.4.2-1) ... At that point, it looks like the server just rebooted on its own and never came back up...
  12. [SOLVED] Latest kernel update hangs

    If it's on the Proxmox server, you can try `service nfs-common restart`, but that may not work; restarting the Proxmox server would probably be the easiest way. If you mean from your NFS server, it depends on the OS; googling may help ;)
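
    On the Proxmox side, a hedged sketch of what to try before a full reboot (the mount path is a placeholder):

    ```sh
    # restart the NFS client helpers
    service nfs-common restart
    # if a mount is wedged, a lazy forced unmount sometimes frees it
    umount -f -l /mnt/pve/mynfs
    # check whether the storage layer reports the mount as active again
    pvesm status
    ```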
  13. [SOLVED] Latest kernel update hangs

    The problem was not Proxmox, it was NFS; in my case, our NFS server mount was hung.
  14. Both servers in HA cluster rebooted for no apparent reason

    That's not the problem. NEITHER node was down. All I did was migrate a VM (or attempt to) and they both rebooted.
  15. Both servers in HA cluster rebooted for no apparent reason

    It's documented that HA needs a third node for quorum, but there is also documentation on how to simply set the votes to 2 on one of the servers. That said, running 2-node HA should *definitely never* cause both servers to randomly reboot. That is a serious flaw.
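
    For the record, a sketch of that votes workaround in /etc/pve/corosync.conf (excerpt only; the node name is a placeholder, and config_version in the totem section must be bumped when editing):

    ```
    nodelist {
      node {
        name: pve1
        quorum_votes: 2   # give the "primary" node an extra vote
      }
    }
    quorum {
      provider: corosync_votequorum
      two_node: 1         # alternative: corosync's dedicated two-node mode
    }
    ```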
  16. Both servers in HA cluster rebooted for no apparent reason

    Hello, I tried to migrate a VM this morning and BOTH of our PM servers in a two-node HA setup rebooted without warning or any apparent reason. What can I check to find out why this happened? Server 1: pve-manager/4.3-9/f7c6f0cd (running kernel: 4.4.21-1-pve) Server 2: pve-manager/4.3-9/f7c6f0cd...
  17. Add Hard Disk "Storage Selector" times out

    It was hanging for both NFS and iSCSI when I checked. Because of that earlier console message about iSCSI, I tried removing the iSCSI storage from the system and now it works as expected. The odd thing here is that both NFS and iSCSI are connected to the same NAS server (Synology RS3617xs).
  18. Add Hard Disk "Storage Selector" times out

    I am having difficulty when adding a new disk to a VM: the storage selector dropdown is timing out. I assume this is from I/O degradation, so the dropdown can't get the list of available NFS or iSCSI mounts, but I can't ascertain the cause. The NAS is connected via a quad gigabit bond and the...
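
    To separate a GUI problem from a storage one, something along these lines from the CLI shows which backend is slow to answer (the NAS hostname is a placeholder):

    ```sh
    # how long does the storage layer take to enumerate storages?
    time pvesm status
    # is the NFS export still answering?
    showmount -e nas.example.com
    # are the iSCSI sessions still logged in?
    iscsiadm -m session
    ```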
  19. qm migrate caused server to reboot

    Hi Dietmar, I think I may know the cause, although it's probably worth checking your code/unit tests for why this would cause the entire server to reboot without any warning. After the server came back up, I found that the PVE host I was migrating to had the wrong IP in the local server's...
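
    A quick way to check each node for that kind of mismatch (node name pve1 is a placeholder):

    ```sh
    # what the local resolver thinks the node's address is
    getent hosts pve1
    # what the interfaces actually carry
    ip -4 addr show
    # the file the cluster relies on for name resolution
    cat /etc/hosts
    ```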
  20. qm migrate caused server to reboot

    pve-manager/4.2-18/158720b9 (running kernel: 4.4.16-1-pve). This morning I tried (via the GUI) to migrate a VM. The task said OK, but the VM did not move. I found this page: https://forum.proxmox.com/threads/vm-is-locked-migrate.9358/ and was having the same trouble (the VM would not unlock), so I...
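
    For the record, the usual CLI way to clear a stuck migration lock (VM ID 100 is a placeholder) is:

    ```sh
    # see whether the VM is still marked as locked
    qm config 100 | grep -i lock
    # clear the stale lock so the VM can be managed again
    qm unlock 100
    ```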