I've had similar issues, which I was able to fix. The problem was that one of the nodes had an incorrect /etc/hosts file which aliased `pvelocalhost` to the other node instead of to itself (a copy-paste mistake on my part). I fixed it, restarted corosync on that node & after that the new node joined via pvecm...
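For reference, a sketch of what the corrected entry might look like (hostnames and addresses are made up for illustration, not taken from my actual cluster):

Code:
# /etc/hosts on node pve1 -- pvelocalhost must point at the node itself
192.168.1.11  pve1.example.local  pve1  pvelocalhost
192.168.1.12  pve2.example.local  pve2

# then restart the cluster stack on that node
systemctl restart corosync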
Follow-up. It looks like I was able to narrow down the root of the issue.
It seems I'm getting RX errors on one of the 2 NICs (ports) attached to bond0 whenever packets are lost from the container, and the error count grows roughly in step with the number of lost packets.
The RX error count also grows when I ping from the host...
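A minimal way to watch those counters while reproducing the loss (the interface names below are just examples, adjust to your bond members):

Code:
# per-NIC RX error counters for the bond members
ip -s link show eno1
ip -s link show eno2

# more detailed driver/NIC statistics, if the driver supports it
ethtool -S eno1 | grep -i err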
I'm running the latest PVE 6.x on a 2-host cluster:
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.128-1-pve)
The physical servers aren't identical, but they're very similar.
I'm having weird networking issues from the LXC containers: packets get lost whenever there is a constant, fast network flow...
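A quick way to reproduce and measure the loss from inside the container (the target address is a placeholder):

Code:
# rapid ping; check the packet loss percentage in the summary
ping -i 0.01 -c 1000 192.0.2.1

# or per-hop loss statistics over 100 probes
mtr -rwc 100 192.0.2.1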
Bumping this thread because I still don't understand whether live migration will be possible with LXC. Technically, I understand that it isn't implemented right now in Proxmox. Proxmox 3.x allowed online migration of OpenVZ containers even without shared storage, with nearly zero...
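If I recall correctly, the closest thing currently available is restart-mode migration, which briefly stops the container rather than moving it live (the VMID and target node below are placeholders):

Code:
# container is shut down, moved to the target node, and started again there
pct migrate 101 pve2 --restart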
This is an important step not covered in other threads or the wiki page. These devices are enabled by default in the fstab of OpenVZ images and should be disabled, or you won't get a working /dev/pts & /dev/shm, resulting in non-working SSH. Could someone add this to the wiki page please?
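A sketch of what I mean, assuming a typical OpenVZ template fstab (the exact entries may differ in your image):

Code:
# /etc/fstab inside the converted container -- comment out the OpenVZ-era mounts
#none  /dev/pts  devpts  rw,gid=5,mode=620  0  0
#none  /dev/shm  tmpfs   defaults           0  0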
Greetings,
EDIT: Never mind, it seems this is not related to the restore procedure, but is instead the same problem as here: https://forum.proxmox.com/threads/memory-allocation-failure.41441/
I'm trying to restore an old VM (KVM) created with Proxmox 3.4. The restore process went through without any errors/warnings, but I...
Weird thing. I'm running the latest Proxmox 5.1 with a CentOS 7 LXC container inside. It's "multi-homed", i.e. it has 2 networks attached with 2 different Internet IPv4 addresses. This requires iproute2 rules to work. Such a setup usually works perfectly fine for me with KVM/OpenVZ/physical hosts. I just...
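For context, this is roughly the kind of policy routing I mean (addresses, interface and table names are made up for illustration):

Code:
# extra routing table for the second uplink
echo "100 uplink2" >> /etc/iproute2/rt_tables

# default route for that table and a rule to send the second IP's traffic through it
ip route add default via 203.0.113.1 dev eth1 table uplink2
ip rule add from 203.0.113.10/32 table uplink2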
You are right! I've added 'vmd' to /etc/modules & /etc/initramfs-tools/modules, rebuilt the initramfs with 'update-initramfs -u -k all', enabled VMD in the BIOS, rebooted, and Proxmox booted fine. Marking this as solved. But the vmd module should probably be enabled in future releases to avoid such...
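For anyone following along, the steps boiled down to roughly this (run as root, then enable VMD in the BIOS and reboot):

Code:
# load the Intel VMD driver at boot and include it in the initramfs
echo vmd >> /etc/modules
echo vmd >> /etc/initramfs-tools/modules

# rebuild the initramfs for all installed kernels
update-initramfs -u -k all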
More information: the systemrescuecd distro can also run the Xorg server fine with this option enabled. So this is a software-related issue, either in the kernel or in some PCIe libs.
I've managed to boot the existing installation from NVMe by changing BIOS settings. I will test a few things and post details later.
EDIT: The installer also runs Xorg fine now after changing the BIOS settings.
It seems this problem occurs because udev running from the initramfs doesn't populate the /dev/nvme* devices for some reason. According to the PVE kernel config, both the NVMe and NVMe block device drivers are built into the kernel.
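A quick way to double-check both points; the first command runs on a booted system, the rest from the initramfs emergency shell:

Code:
# confirm the NVMe drivers are built in (=y) rather than modules (=m)
grep -i nvme /boot/config-$(uname -r)

# from the initramfs shell: do the controller and namespace devices exist at all?
ls -l /dev/nvme*
cat /proc/partitions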
I'm running a recent & updated Proxmox 5.2. It was installed on a SATA drive, but now I want to move it to NVMe drives. The motherboard supports UEFI boot from NVMe.
So I've created a GPT partition table on the NVMe: a 512 MB FAT32 partition with the ESP flag, and the rest is a Linux RAID partition where the LVM volume will...
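Roughly the layout I mean, sketched with sgdisk (the device name is an example; double-check it before running anything destructive):

Code:
# example partitioning of one NVMe drive: EFI system partition + Linux RAID member
sgdisk --zap-all /dev/nvme0n1
sgdisk -n1:0:+512M -t1:ef00 -c1:"EFI system partition" /dev/nvme0n1
sgdisk -n2:0:0     -t2:fd00 -c2:"Linux RAID"           /dev/nvme0n1

# FAT32 filesystem for the ESP
mkfs.vfat -F32 /dev/nvme0n1p1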