Search results

  1. Problem happens when adding a node to a cluster on PVE 7.0

    I've had similar issues, which I was able to fix. The problem was that one of the nodes had an incorrect /etc/hosts file which aliased `pvelocalhost` to another node instead of to itself (a copy-paste mistake on my part). I fixed it, restarted corosync on that node, and after that the new node joined via pvecm...
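
    A minimal sketch of what such a corrected /etc/hosts could look like, with hypothetical node names (pve1/pve2) and addresses; the key point is that a node's own address maps to its own name (and pvelocalhost), not to a peer:

        # /etc/hosts on node pve1 (hypothetical names and addresses)
        127.0.0.1      localhost
        192.168.1.11   pve1.example.local pve1 pvelocalhost   # must point to this node itself
        192.168.1.12   pve2.example.local pve2

        # restart corosync on the corrected node
        systemctl restart corosync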
  2. Weird networking lags

    Follow-up. It looks like I was able to narrow down the root of the issue. I'm getting RX errors on one of the two NICs (ports) attached to bond0 whenever packets are lost from the container, and the error count grows roughly in step with the number of lost packets. The RX error count also grows when I ping from the host...
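
    One way to watch the per-NIC RX error counters while reproducing the loss (a sketch, assuming the bond members are named eno1 and eno2; substitute the real interface names):

        cat /proc/net/bonding/bond0                 # list the bond members and their state
        ip -s link show eno1                        # RX line shows bytes/packets/errors/dropped
        ip -s link show eno2
        watch -n1 'ethtool -S eno1 | grep -i err'   # watch the NIC's error counters live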
  3. Weird networking lags

    I'm running the latest PVE 6.x on a 2-host cluster: pve-manager/6.4-13/9f411e79 (running kernel: 5.4.128-1-pve). The physical servers aren't identical, but very similar. I'm having weird networking issues with the LXC containers: packets get lost whenever there is a constant, fast network flow...
  4. pve-kernel-5.0.21-4-pve causes Debian guests to reboot-loop on older Intel CPUs

    I can also confirm that pve-kernel-5.0.21-4-pve/stable 5.0.21-9 solved the issue for me.
  5. pve-kernel-5.0.21-4-pve causes Debian guests to reboot-loop on older Intel CPUs

    Same here with a Xeon E5405 (Q4 2007 release date). I reported it as a bug before seeing this thread: https://bugzilla.proxmox.com/show_bug.cgi?id=2458
  6. How to migrate a container online?

    Bumping this thread because I still don't understand whether live migration will be possible with LXC. Technically, I understand that it isn't implemented in Proxmox right now. But Proxmox 3.x allowed online migration of OpenVZ containers even without shared storage, with nearly zero...
  7. Proxmox 5.1 - PTY allocation request failed on channel 0

    These are important steps not covered in the other threads or the wiki page. These devices are enabled by default in the fstab of OpenVZ images and should be disabled, or /dev/pts & /dev/shm won't work, resulting in non-working SSH. Could someone add this to the wiki page, please?
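
    For context, the OpenVZ-era fstab entries in question typically look like the lines below and need to be commented out (or removed) inside the container; a sketch, the exact entries vary per template:

        # /etc/fstab inside the converted container (example entries)
        #none   /dev/pts   devpts  rw,gid=5,mode=620   0 0
        #none   /dev/shm   tmpfs   defaults            0 0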
  8. [SOLVED] Restoring KVM backup from Proxmox 3.x

    Greetings. EDIT: Never mind, it seems this is not related to the restore procedure but is the same problem as here: https://forum.proxmox.com/threads/memory-allocation-failure.41441/ I'm trying to restore an old VM (KVM) created with Proxmox 3.4. The restore process completed without any errors or warnings, but I...
  9. LXC iproute2 ip rule & UDP packets

    Weird thing. I'm running the latest Proxmox 5.1 with a CentOS 7 LXC container inside. It's "multi-homed", i.e. it has 2 networks attached with 2 different public IPv4 addresses. This requires iproute2 rules to work, and such a setup usually works perfectly fine for me with KVM/OpenVZ/physical hosts. I just...
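
    For reference, a multi-homed container like that usually needs source-based policy routing so replies leave via the interface they arrived on; a minimal sketch with hypothetical addresses and a custom routing table:

        # hypothetical second uplink: eth1 = 203.0.113.10/24, gateway 203.0.113.1
        echo "100 second" >> /etc/iproute2/rt_tables      # name routing table 100 'second'
        ip route add default via 203.0.113.1 dev eth1 table second
        ip rule add from 203.0.113.10/32 table second     # traffic sourced from eth1's IP uses table 'second'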
  10. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    You are right! I added 'vmd' to /etc/modules & /etc/initramfs-tools/modules, rebuilt the initramfs with 'update-initramfs -u -k all', enabled VMD in the BIOS, rebooted, and Proxmox booted fine. Marking this as solved. But the vmd module should probably be enabled in future releases to avoid such...
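
    The steps described there, roughly (a sketch; run on the PVE host):

        echo vmd >> /etc/modules                   # load vmd at boot
        echo vmd >> /etc/initramfs-tools/modules   # include vmd in the initramfs
        update-initramfs -u -k all                 # rebuild the initramfs for all installed kernels
        # then enable Intel VMD in the BIOS and reboot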
  11. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    Weird. Debian 9.5 Live also fails to start the Xorg server with the same error when VMD is enabled, while Linux Mint 18 KDE starts without any issues.
  12. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    Upgrading to BIOS version 2.0b didn't help. The issue remains with Intel VMD enabled.
  13. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    More information: the SystemRescueCd distro can also run the Xorg server fine with this option enabled. So this is a software-related issue, either in the kernel or in some PCIe libs.
  14. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    Platform: Supermicro SSG-5029P-E1CTR12L. MB: Supermicro X11SPH-nCTF. BIOS version: 2.0 / 2.0b. BIOS build time: 11/29/2017. CPU: Intel Xeon Silver 4108. RAM: 6 x 4 GB DDR4-2400 ECC. Symptoms: once the optional NVMe drives (including the tray & OCuLink cables) are installed, Proxmox can't boot or start...
  15. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    I've managed to boot the existing installation from NVMe by changing BIOS settings. I will test a few things and post details later. EDIT: The installer also runs Xorg fine now after changing the BIOS settings.
  16. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    Hmm... after installing the NVMe drives I can't even reinstall Proxmox: it seems Xorg is crashing with libpciaccess.so.1 in the backtrace. Somehow related?
  17. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    Unfortunately, setting rootdelay to 120 didn't help:
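
    For anyone retrying this, rootdelay is a kernel command-line parameter, e.g. set via GRUB (a sketch):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=120"
        # then: update-grub && reboot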
  18. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    It seems this problem occurs because udev running from the initramfs doesn't populate the /dev/nvme* devices for some reason. According to the PVE kernel config, both the NVMe core and NVMe block device drivers are built into the kernel.
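
    That can be cross-checked against the running kernel's config ('=y' means built in, '=m' would mean the module has to be present in the initramfs); a sketch:

        grep -E 'CONFIG_NVME_CORE|CONFIG_BLK_DEV_NVME' /boot/config-$(uname -r)
        # CONFIG_NVME_CORE=y
        # CONFIG_BLK_DEV_NVME=y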
  19. [SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

    Running a recent, updated Proxmox 5.2. It was installed on a SATA drive, but now I want to move it to NVMe drives. The motherboard supports UEFI boot from NVMe, so I've created a GPT partition table on the NVMe drive: a 512 MB FAT32 partition with the ESP flag, and the rest as a Linux RAID partition where the LVM volume will...
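
    A partitioning sketch of that layout with sgdisk (hypothetical device /dev/nvme0n1; type codes: ef00 = EFI System Partition, fd00 = Linux RAID):

        sgdisk --zap-all /dev/nvme0n1
        sgdisk -n1:0:+512M -t1:ef00 -c1:"EFI System" /dev/nvme0n1   # 512 MB ESP
        sgdisk -n2:0:0     -t2:fd00 -c2:"Linux RAID" /dev/nvme0n1   # rest for md RAID / LVM
        mkfs.vfat -F32 /dev/nvme0n1p1                               # ESP must be FAT32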