Recent content by hradec

  1. Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    Quick update: Still using "linux-image-cloud-amd64" and things seem extremely promising. I did have a couple of reboots since last Monday (Nov 11th), both triggered by the software watchdog. No errors, no messages... just a single line saying watchdog rebooting. So to investigate further, I...
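
    A minimal sketch of how such watchdog reboots can be dug into from the previous boot's journal (assuming persistent journald logging; the grep pattern is only illustrative):

    ```
    journalctl --list-boots                                # enumerate recorded boots
    journalctl -b -1 -k | grep -iE 'watchdog|soft lockup'  # kernel messages from the boot that got rebooted
    ```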
  2. Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    Just a quick update on this issue: As a test, I installed the latest "linux-image-cloud-amd64" package, which is kernel version 6.12.48+deb13-cloud-amd64 as of 2025-Nov-10 (there is no 6.14 that I could find, unfortunately), and the Proxmox 9 nested VM has been up without Oops, protection...
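
    A minimal sketch of the kernel swap described above (the package name is from the post; the reboot/verify steps are assumed, not quoted):

    ```
    apt update
    apt install linux-image-cloud-amd64   # latest Debian cloud kernel (6.12.48+deb13-cloud-amd64 at the time)
    reboot
    # after the reboot, confirm the cloud kernel is the running one:
    uname -r
    ```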
  3. Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    I have tested disabling the swap on both host and guest, and I still keep seeing crashes. Unfortunately, the guest without swap gives me a different crash which essentially freezes the guest PVE without a panic, so it won't reboot on its own, I can't connect over ssh, and I can't cleanly reboot it. I...
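
    A minimal sketch of disabling swap on both host and guest for such a test (the sed pattern is illustrative; adjust to how swap is declared in /etc/fstab):

    ```
    swapoff -a                                 # turn off all active swap immediately
    sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot
    swapon --show                              # empty output = no swap in use
    ```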
  4. Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    I'm having the same issues here. One thing I did to be able to see some debug information on the guest PVE was to use `qm terminal <vmid>` to watch the guest PVE's console, leaving it running in a byobu/tmux session on the host PVE so I could scroll back through the output (over ssh) once I...
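
    A minimal sketch of that setup, assuming a guest vmid of 100 (`qm terminal` attaches to a serial console, so the VM needs one configured, and the guest kernel should log to it, e.g. via console=ttyS0):

    ```
    qm set 100 -serial0 socket    # one-time: add a serial port to the guest (vmid 100 is assumed)
    tmux new -s pve-console       # or byobu; survives ssh disconnects
    qm terminal 100               # attach and leave running; scroll back later in tmux copy mode
    ```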
  5. Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    I'm seeing the same problem running the PVE 9 beta in a nested VM on PVE 8.2.4. I'm running a Debian 13 VM in the nested PVE 9, and after a while, I get this crash: pve-yvr login: [ 2089.981351] INFO: task cfs_loop:1122 blocked for more than 122 seconds. [ 2089.981899] Tainted: P O...
  6. LizardFS anyone?

    I have been using LizardFS for more than 10 years now. I have never lost any data because of failed hard drives, and I did have my fair share of hard drive failures in the past 10 years. I ran one chunkserver for each disk in an LXC container, and the master in a container as well, all on the same...
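
    A minimal sketch of that per-disk layout, assuming Debian LizardFS packages with default config paths (the disk path, master hostname, and service name are illustrative):

    ```
    # inside each chunkserver LXC container: register that container's single disk
    echo '/mnt/disk01' >> /etc/lizardfs/mfshdd.cfg
    # point the chunkserver at the container running the master
    echo 'MASTER_HOST = lizardfs-master' >> /etc/lizardfs/mfschunkserver.cfg
    systemctl restart lizardfs-chunkserver
    ```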