Search results

  1. pve-kernel-5.0.21-4-pve causes Debian guests to reboot-loop on older Intel CPUs

    We see the same here as in the thread "Kernel panic on VM's since Kernel pve-kernel-5.0.21-4-pve: 5.0.21-8". I noticed you also have old HP G5 mainboard hardware. We have also upgraded newer hardware with no issues. Try booting the older kernel and see if the VMs start again.
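A minimal sketch of how booting the older kernel can be made the default via GRUB (the exact menu-entry title is an assumption; it must match what `update-grub` generates on your own host):

```
# /etc/default/grub -- pin the previous kernel as the default boot entry
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.0.21-3-pve"
```

Then run `update-grub` and reboot; alternatively, pick the older kernel once from the GRUB menu to verify the VMs start before making it permanent.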
  2. Kernel panic on VM's since Kernel pve-kernel-5.0.21-4-pve: 5.0.21-8

    Since we updated to kernel pve-kernel-5.0.21-4-pve: 5.0.21-8 we cannot start any VMs on this host. Migrated VMs get a kernel panic like this: These are the package versions we are running on the host: proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve) pve-manager: 6.0-11 (running...
  3. Problem to assign vlan to vmbridge (Proxmox VE 6) / pve-bridge error 512

    You are right, udev is the problem here. Setting net.ifnames=0 as a GRUB boot parameter brings back the old ethX interface names and everything works fine. But the naming scheme mentioned in the documentation ("wiki/Network_Configuration"): ..... We currently use the following naming conventions for device...
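A minimal sketch of the GRUB change described above (the existing contents of GRUB_CMDLINE_LINUX on your host may differ):

```
# /etc/default/grub -- disable predictable interface names,
# reverting to classic eth0, eth1, ... naming
GRUB_CMDLINE_LINUX="net.ifnames=0"
```

Apply it with `update-grub` and reboot. Remember to update /etc/network/interfaces to the ethX names first, or the host can come up without networking.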
  4. Problem to assign vlan to vmbridge (Proxmox VE 6) / pve-bridge error 512

    Yes, this is on a host freshly installed from the Proxmox VE ISO. After that we found interfaces named like "rename6" on a 4-port HP network card. Setting .link files in /etc/systemd/network and updating the initramfs solved this. And yes, systemd is not used by Proxmox for this, so we also could not set...
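A sketch of the .link-file approach mentioned above (the file name, MAC address, and target name are placeholders; one file is needed per NIC):

```
# /etc/systemd/network/10-eth0.link -- pin a stable name for one NIC
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eth0
```

Because these files are evaluated early in boot, run `update-initramfs -u` afterwards and reboot so the renaming takes effect before the network comes up.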
  5. Problem to assign vlan to vmbridge (Proxmox VE 6) / pve-bridge error 512

    We have the following problem. We have a network setup with 3 vmbridge interfaces for different VLAN blocks. This is because we use IBM blades where we cannot bond the NICs. The setup works on the blades. This is the interfaces file on the blades: auto lo iface lo inet loopback...
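The interfaces file is truncated in the snippet; a minimal sketch of one VLAN-backed bridge of the kind described (NIC name, VLAN ID, and addressing are assumptions, not the poster's actual config):

```
# /etc/network/interfaces (fragment) -- one bridge per VLAN block
auto eth0.100
iface eth0.100 inet manual

auto vmbr100
iface vmbr100 inet static
    address 192.168.100.2/24
    bridge-ports eth0.100
    bridge-stp off
    bridge-fd 0
```

The same pattern would be repeated with different VLAN tags for the other two bridges.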
  6. error 500 can't upload to storage type 'rbd'

    Hi MogiePete, this is expected by design. As you can see here ("content images"), you can only store raw disk images on Ceph storage. .iso files are also not allowed, the same as LXC containers. You could use NFS, GlusterFS or iSCSI for shared .iso storage. You should see the storage as Enabled...
  7. VE 4.0 Kernel Panic on HP Proliant servers

    Hi t.lamprecht, this is exactly what I got. You are right, I had already found this as well; I thought it was only related to 3.x kernels. Blacklisting hpwdt.ko does solve the kernel panic. Doing echo "A" > /dev/watchdog with the watchdog service off (kernel module hpwdt.ko blacklisted), as well...
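A sketch of the blacklisting step described above (the file name under /etc/modprobe.d/ is arbitrary, only the directive matters):

```
# /etc/modprobe.d/blacklist-hpwdt.conf -- keep the HP watchdog
# driver from loading at boot
blacklist hpwdt
```

Follow it with `update-initramfs -u -k all` and a reboot so the module is also excluded from the initramfs, then confirm with `lsmod | grep hpwdt` that it is no longer loaded.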
  8. VE 4.0 Kernel Panic on HP Proliant servers

    Hi all. I investigated a bit more now and found the following. The loaded kernel modules are: ... The watchdog-mux service is using this: ... and that will instantly generate the kernel panic. :( The iLO2 firmware has been upgraded to 2.29 (07/16/2015). Maybe this helps someone to assist.
  9. VE 4.0 Kernel Panic on HP Proliant servers

    We have two lab setups with Proxmox VE 4.0 from the latest ISO download. In one lab we have HP ProLiant servers with massive kernel panics in the hpwdt.ko module. Unfortunately we do not have the trace, due to HP's damned iLO :-( but I will give more info when I have caught it. We have a Ceph cluster with 3...