Unfortunately that does NOT work for me :(.
AMD RX 6600, vendor-reset module installed & loaded at boot, but still the "Invalid Argument" error ...
root@pve:~# lspci -kk | grep -i vga -B5 -A10
0b:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 23 [Radeon RX 6600/6600...
Concerning the error:
kvm: ../hw/pci/pci.c:1637: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
TASK ERROR: start failed: QEMU exited with code 1
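Side note for anyone debugging the same thing: on kernels that expose the reset_method attribute in sysfs (5.15 and later), you can check whether vendor-reset's device-specific reset is actually attached to the GPU function, and select it explicitly if it is not the default. A sketch, assuming the GPU function stays at 0b:00.0 (adjust to your topology):

root@pve:~# cat /sys/bus/pci/devices/0000:0b:00.0/reset_method
root@pve:~# echo device_specific > /sys/bus/pci/devices/0000:0b:00.0/reset_method

If device_specific is not even listed by the first command, vendor-reset most likely never bound to the device in the first place.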
I also seem to be affected. I originally reported this in another thread ->...
Unfortunately that doesn't seem to help in my case :(.
Also /sys/devices/pci0000:00/0000:00:03.1/0000:09:00.0/0000:0a:00.0/0000:0b:00.1/d3cold_allowed (AMD RX 6600) was already set to 1.
I tried setting it to 0, but that doesn't really solve the issue.
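For reference, the sysfs write does not survive a reboot, so to test with D3cold disabled across boots a udev rule is the usual trick. A sketch, assuming the card stays at 0b:00.x; the file name 99-gpu-d3cold.rules is just my invention:

root@pve:~# cat /etc/udev/rules.d/99-gpu-d3cold.rules
ACTION=="add", SUBSYSTEM=="pci", KERNEL=="0000:0b:00.*", ATTR{d3cold_allowed}="0"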
dmesg says the following (with default...
I'm NOT sure it's the same issue described in this thread, but I'm also getting a kernel panic on 6.8.x AT BOOT TIME.
This happens approx. 2-4 s after GRUB boots the kernel, before Clevis even unlocks the LUKS-encrypted disks and ZFS mounts the filesystems.
I had the impression all/most users affected...
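In case someone needs a stopgap: Proxmox VE keeps the previously installed kernels around, so you can pin the last known-good one until the panic is sorted out. A sketch, assuming an older kernel (e.g. a 6.5 build) is still installed; take the exact version string from the list:

root@pve:~# proxmox-boot-tool kernel list
root@pve:~# proxmox-boot-tool kernel pin 6.5.13-5-pve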
Not sure, to be honest (the system where this shows up the most is down for other reasons right now). I'm curious to see whether it would improve, but carefully skeptical. After all, everybody seems to point the finger and blame "consumer SSDs" (or HDDs), whereas in my view it's a kernel (or kernel + ZFS) thing...
Not a factor for the first one. I always use ZFS on the root rpool (and sometimes I also have a separate zdata storage pool, again on ZFS).
Lucky you :). Heck, even the Nextcloud VM takes 3+ GB of RAM. Same with MediaWiki at 3+ GB of RAM. Also the test Seafile VM that isn't doing anything is taking 3+ GB of...
Well, for Podman (Docker) I usually set up a KVM virtual machine for that purpose. Initially Debian, now slowly migrating to Fedora because of its more recent Podman support.
Many services are not even "deployed". Rather some kind of "work in progress" that hasn't progressed for several months/years...
I have A LOT of VMs running on Proxmox VE across several servers.
Most of the VMs run Debian GNU/Linux Bookworm.
Overall the KVM guests seem quite inefficient (see the sketch after the list), especially in terms of:
- RAM usage (a bit less on Xeon E3 v5/v6, since up to 64 GB of RAM can be used there)
- Disk space (each VM takes 16GB...
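For both points there are knobs worth trying. A sketch, assuming VMID 101 and a virtio-scsi disk on the default local-zfs storage (adapt to your setup): the balloon device lets the host reclaim idle guest RAM, and discard=on together with a periodic fstrim inside the guest hands unused blocks back to the pool:

root@pve:~# qm set 101 --balloon 1024
root@pve:~# qm set 101 --scsi0 local-zfs:vm-101-disk-0,discard=on,ssd=1
root@debian-guest:~# fstrim -av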
I migrated my Podman data from ZFS on top of a ZVOL to EXT4 on top of a ZVOL.
I tried running zpool trim rpool on the Proxmox VE host to see if that would improve things. Unfortunately, it didn't.
The issue still persists. I also tested on kernel 6.5.x: same thing. IOwait can jump above 80% very...
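Two related knobs worth checking (a sketch; the dataset name below assumes the default local-zfs layout): whether autotrim is enabled, so trims happen continuously instead of only via a manual zpool trim, and whether the ZVOL volblocksize matches the filesystem on top of it, since a mismatch causes write amplification:

root@pve:~# zpool get autotrim rpool
root@pve:~# zpool set autotrim=on rpool
root@pve:~# zfs get volblocksize rpool/data/vm-101-disk-0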
Well, kernel 6.1.x is the LTS one on kernel.org and the default one provided by Debian.
If you go the Debian backports route, you will get kernel 6.9.x at the moment. I also have linux-image-6.6.13+bpo-amd64, but only because I installed it back then; it's not in the repos anymore (and quite old ...
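For completeness, on a plain Debian system (not the PVE host) the backports kernel is pulled in like this (a standard recipe, assuming the bookworm-backports entry is not configured yet):

root@debian:~# echo 'deb http://deb.debian.org/debian bookworm-backports main' > /etc/apt/sources.list.d/backports.list
root@debian:~# apt update && apt install -t bookworm-backports linux-image-amd64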
Well, the EXTREME IOwait of 60-80% shows up mostly on those Podman systems (ZFS on top of a ZVOL), so maybe it's a similar issue there to what the user reported on #openzfs (ZFS pool deadlock), although not to the same extent.
On the other systems it might indeed be lower, but still around 20% or so.
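To tell whether the latency sits at the pool level or only inside the guests, per-vdev latency stats are handy (standard OpenZFS tooling; iostat comes from the sysstat package; 5-second interval as an example):

root@pve:~# zpool iostat -v -l rpool 5
root@pve:~# iostat -x 5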
Uhm, I never got that memory issue. But that's also probably because I force ZFS to do what I want instead of leaving it on a very loose leash ;).
I reduced the amount of ARC I allow ZFS to use: maximum 4 GB for the Proxmox VE host on a 32 GB system (otherwise ZFS can eat up to 50%, i.e. 16GB...
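For reference, this is the usual way to cap the ARC persistently (4 GiB = 4294967296 bytes; the initramfs rebuild matters on root-on-ZFS systems since the module loads from there, and the last line applies it at runtime without a reboot):

root@pve:~# echo 'options zfs zfs_arc_max=4294967296' > /etc/modprobe.d/zfs.conf
root@pve:~# update-initramfs -u -k all
root@pve:~# echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max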
Weird ... the GUI being picky about the kernel version :oops:? I mean, if the kernel is good enough to run VMs/CTs, then it should also be good enough for the GUI (IMHO).
I don't see how the kernel version would break the GUI in that regard ... Did you check the pvestatd and pveproxy services?
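The quick checks I'd start with (plain systemd tooling, nothing exotic):

root@pve:~# systemctl status pvestatd pveproxy pvedaemon
root@pve:~# journalctl -b -u pveproxy -u pvestatd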
I think I'll try a...
Any further discoveries?
I'm quite disappointed that the Proxmox VE team and other users keep saying "do NOT use consumer SSDs" when the issue arises right after a package (kernel and/or ZFS) upgrade ...
I guess I could maybe take the kernel 6.5.x config from Proxmox VE, download the kernel 6.6.41 sources, then...
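The rough recipe I have in mind (a sketch, untested; assumes the usual build dependencies like build-essential, libssl-dev, libelf-dev, flex and bison, and that a PVE kernel config is available under /boot):

root@pve:~# wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.6.41.tar.xz
root@pve:~# tar xf linux-6.6.41.tar.xz && cd linux-6.6.41
root@pve:~# cp /boot/config-$(uname -r) .config
root@pve:~# make olddefconfig
root@pve:~# make -j$(nproc) bindeb-pkg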
The only feedback I got from the OpenZFS IRC channel is that kernel 6.8 changed MANY THINGS. Not very specific, I know, but that's all I know.
Not sure if the issue was already there on kernel 6.5.x. Granted, my workload might have changed since then (and was probably fairly light on kernel 6.5.x), so the...
Just experienced this as well on my latest Proxmox VE upgrade to PVE 8.x.
In my case, removing /tmp/.ifupdown2-first-install fixed/bypassed the issue.
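For anyone else hitting it, the workaround boils down to removing the stale marker file and letting the package configuration run again (hedged; this is just what worked here, not an official fix, and the second command is only needed if the upgrade was left half-configured):

root@pve:~# rm /tmp/.ifupdown2-first-install
root@pve:~# dpkg --configure -a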
It's a weird choice (by the Ubuntu developers and the Proxmox VE developers) NOT to use the LTS kernels (LTS as defined on the kernel.org website), like 6.1 and 6.6, instead.
Given the IOwait issue I am currently experiencing, the ideal scenario would have been to build a custom kernel based on 6.6, which is the...
Well, you might have a point, at least to some extent; I am not debating that. I just think that if a new issue shows up on 3-4 of my servers right after an update, it's a BUG, not a feature. What I am debating is that being the only cause.
And while I tend to agree that the issue seems more predominant in...
LVM is an absolute PITA to manage. I tried to recover previous systems with it. Never again :rolleyes:!
Why are you so focused on PBS? I am talking about Proxmox VE, not Proxmox Backup Server.
So is having to change a partition layout when you already have data on it ...
Let alone setting up backups...