Search results

  1. High CPU usage after upgrade to kernel 6.5.13-1-pve

    Pinned kernel 6.5.11-8-pve, rebooted, and my problems are solved...
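    The pinning mentioned in this post can be done with `proxmox-boot-tool` (available on Proxmox VE 7.2 and later); a command sketch for a Proxmox host, using the kernel version from the post as an example:

    ```shell
    # List the kernels known to the boot loader
    proxmox-boot-tool kernel list

    # Pin the known-good kernel so it is selected on every boot
    proxmox-boot-tool kernel pin 6.5.11-8-pve

    # Later, to return to booting the newest installed kernel:
    proxmox-boot-tool kernel unpin
    ```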
  2. High CPU usage after upgrade to kernel 6.5.13-1-pve

    Hi, this morning I upgraded my server to kernel 6.5.13-1-pve. Many of my containers and VMs now show very high CPU usage. Overall, my Ryzen 5 5600G averaged around 20% usage before the upgrade; now it hovers around 70%. proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve) pve-manager...
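    When comparing behaviour before and after a kernel upgrade, it helps to confirm which kernel release actually booted; after a reboot this should match the newly installed (or pinned) pve kernel version:

    ```shell
    #!/bin/sh
    # Print the kernel release that is currently running.
    uname -r
    ```

    On a Proxmox host, `pveversion` additionally reports the running kernel alongside the package versions.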
  3. Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    Thank you for your quick reply. I just did an apt dist-upgrade from kernel 6.2 and kernel 6.5 was offered. dkms was complaining that there were no headers available anymore.
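    The dkms complaint about missing headers is usually resolved by installing the header package that matches the running kernel; a command sketch for a Proxmox host (package naming varies between PVE releases, so check what `apt search pve-headers` offers on your system):

    ```shell
    # Install headers matching the running pve kernel so dkms can build modules
    apt install pve-headers-$(uname -r)

    # Rebuild any registered dkms modules against the new kernel
    dkms autoinstall
    ```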
  4. after server crash, all logs were gone

    Same problem here on my 8.0.4 node. Read-only file system error, no /var/log/messages or kern.log. Nothing in journalctl, only logging of my hard power cycle: Oct 06 06:25:19 frigate-nuc systemd[1]: apt-daily-upgrade.service: Deactivated successfully. Oct 06 06:25:19 frigate-nuc systemd[1]...
  5. Graphs aren't generated anymore in GUI

    Hi, using Proxmox 7.4-3, kernel 6.2, on a Ryzen 3 4100, the graphs in the web GUI under Summary aren't generated. All show "1970-01-01 01:00:00". This is a clean install. Running a command I found in a similar thread throws the following error: root@pve-r3:~# pvesh get...
  6. Kernel (5.15) error when sharing /dev/dri/renderD128 to LXC

    Hi, Running the opt-in kernel 5.15, I'm trying to share the Ryzen 5600G GPU with a LXC. The container seems to work fine, but the host kernel keeps throwing these errors: Feb 13 21:22:38 pve kernel: [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125! Feb 13 21:22:38 pve...
  7. LXC CPU flags passthrough?

    Hi, albeit a bit strange, I'm trying to use a Docker container (Deepstack) in an LXC container. Deepstack requires a CPU with AVX or AVX2 support. When I run a Debian VM with CPU type 'host', the Deepstack container runs fine and accepts connections, but when running the...
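    Since LXC containers share the host kernel, `/proc/cpuinfo` inside the container reflects the host's CPU flags; a quick check for AVX/AVX2 support that works on the host or inside the container:

    ```shell
    #!/bin/sh
    # Report whether the CPU advertises AVX / AVX2 in its flags line.
    # -w matches whole words, so 'avx' will not accidentally match 'avx2'.
    if grep -qw avx2 /proc/cpuinfo 2>/dev/null; then
        echo "AVX2 supported"
    elif grep -qw avx /proc/cpuinfo 2>/dev/null; then
        echo "AVX supported (no AVX2)"
    else
        echo "no AVX support detected"
    fi
    ```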
  8. IOMMU groups get disconnected when starting VM that uses different group

    I'm restoring my VMs now; the Coral now sits in its own group (checked with the command you gave me). I had a 4 port NIC in the x16 slot, but I will manage to live without that (it was intended for future use). Before the upgrade I had a 4 core i3-9100 on an H310 chipset, which worked flawlessly...
  9. IOMMU groups get disconnected when starting VM that uses different group

    I'm trying the x16 slot now, having to reinstall again because now the onboard LAN isn't recognized :rolleyes:
  10. IOMMU groups get disconnected when starting VM that uses different group

    Adding pcie_acs_override=downstream,multifunction does not resolve it. I will see if it works in a different PCIe slot :)
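    For reference, the override discussed in this thread is a kernel command-line parameter; with GRUB on Proxmox it goes into /etc/default/grub. A config sketch, not a recommendation — the override weakens the isolation guarantees that IOMMU groups are meant to provide:

    ```shell
    # /etc/default/grub (keep any existing options on the line)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"

    # Apply the change and reboot afterwards:
    #   update-grub && reboot
    ```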
  11. IOMMU groups get disconnected when starting VM that uses different group

    Hi @avw root@pve:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done IOMMU group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge [1022:1632] IOMMU...
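    The one-liner quoted in that post, reformatted as a script for readability and guarded so it degrades gracefully on systems where IOMMU is disabled or pciutils is missing:

    ```shell
    #!/bin/sh
    # Print every PCI device together with the IOMMU group it belongs to.
    # Requires IOMMU enabled (intel_iommu=on / amd_iommu=on) and lspci.
    if [ -d /sys/kernel/iommu_groups ] && command -v lspci >/dev/null 2>&1; then
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d#*/iommu_groups/}   # strip everything up to the group number
            g=${g%%/*}               # keep only the group number
            printf 'IOMMU group %s ' "$g"
            lspci -nns "${d##*/}"    # device address is the last path component
        done
    else
        echo "IOMMU groups not available (IOMMU disabled or lspci missing)"
    fi
    ```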
  12. IOMMU groups get disconnected when starting VM that uses different group

    Hi, I just upgraded my server to a Ryzen 5 5600G on a Gigabyte A520M H. I'm running kernel 5.15 because of errors (DID_BAD_TARGET) on my NVMe SSD when shutting the server down. When I attach my Coral (in its own IOMMU group 04) to my Debian VM, other IOMMU groups (that aren't passed through to the...
  13. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    Maybe I should open a new topic, but with or without pcie_acs_override (it makes no difference), the Coral is in its own IOMMU group (04), and all USB/SSD/LAN devices are in separate groups, like 02 and 05. But when I start the VM, groups 02 and 05 get disconnected, when the Coral in 04...
  14. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    Thank you for the clarification, I will report back :)
  15. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    No, I did not use pcie_acs_override, should I? I've now removed the NVMe SSD from the M.2 slot and placed my Coral in there. The Coral was mounted in a mini-PCIe to x1 adapter. I'm just reinstalling Proxmox to test whether it is a conflict between the x1 adapter and the NVMe slot. If that is not the...
  16. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    I'm trying this kernel because on the current 'main' kernel I'm getting DID_BAD_TARGET on my new WD SN550 NVMe SSD when I reboot, and the system freezes. That problem is resolved, but now on this kernel, when I start a VM that has a Coral TPU passed through to it, all USB and SSD drives get...
  17. VM won't start after upgrade to 7.1 from latest 7.0

    I've found in another topic that you have to reconfigure the SATA interface: https://forum.proxmox.com/threads/some-vms-arent-booting-after-upgrade-to-7-1.100039/#post-431853 That worked for my Home Assistant OS VM. And while I was at it, I changed the virtual drive adapter from SATA to VirtIO...
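    The SATA-to-VirtIO change mentioned in this post can also be done from the CLI with `qm`; the VM id (100), storage and disk name below are placeholders for a hypothetical setup — check yours with `qm config <vmid>` first. Note that the guest needs virtio drivers (built into Linux; Windows guests need them installed beforehand):

    ```shell
    # Inspect the current disk configuration (placeholder VM id 100)
    qm config 100

    # Detach the SATA disk; it becomes an 'unused' disk, the data is kept
    qm set 100 --delete sata0

    # Re-attach the same volume on the VirtIO bus (placeholder volume name)
    qm set 100 --virtio0 local-lvm:vm-100-disk-0

    # Make sure the VM still boots from the re-attached disk
    qm set 100 --boot order=virtio0
    ```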
  18. Some VMs aren't booting after upgrade to 7.1

    That seems to solve my problem too :D