Search results

  1. pve 8.0 and 8.1 hangs on boot

    Attaching files from R320 as I believe this is the most typical configuration, unlike the other two, which are more custom-like. There's also an update to my previous post - it would seem that the console post-initrd gets initialized only if the system is booted without nomodeset. This was part of my... (a kernel command-line sketch follows the result list)
  2. pve 8.0 and 8.1 hangs on boot

    Encountering the same issue here with the lack of an initrd console on a few hardware configurations after upgrading from 6.2.16-7 to 6.5.11-4: - PowerEdge R320; the console does get initialized after that, but I had to enter the decryption passphrase blindly - HP thinClient t630 with AMD GX-420GI; the console does...
  3. VM shutdown, KVM: entry failed, hardware error 0x80000021

    That's a pretty huge range, and since it also catches most Silver/Gold Xeons from the past few years (almost no company replaces servers every year for a newer CPU), it becomes quite a big issue. It definitely needs to be resolved one way or another. Preferably it should be resolved upstream, as I...
  4. Regression in kernel 5.15 with megaraid_sas when certain RAID cards have VD in rebuild/consistiency-check state

    In my specific case I've been running with pt since PVE 6.x, so this has no effect on me. It's a shame though that the defaults were reverted, as it will mask the underlying regression (the issue described did not happen in 5.13 when running in pt mode).
  5. Regression in kernel 5.15 with megaraid_sas when certain RAID cards have VD in rebuild/consistiency-check state

    I believe there was a report opened against Ubuntu's kernel on their bugtracker, but I can't seem to locate it anymore (or, to be more specific, it seems to lead to a 404). I'm not aware of any other reports of this issue, and I'm not really in a position to perform a bisect to find the offending change. One...
  6. [SOLVED] HW Raid Megaraid errors after deployment PVE 7.2 fresh install

    Make sure you are aware of another issue present, which shows itself only during rebuild: https://forum.proxmox.com/threads/regression-in-kernel-5-15-with-megaraid_sas-when-certain-raid-cards-have-vd-in-rebuild-consistiency-check-state.110470/ Be aware that ZFS will not function at its best...
  7. VM shutdown, KVM: entry failed, hardware error 0x80000021

    The higher uptime (even if it ultimately crashed after some time) with the "Performance" power plan could point to what I previously mentioned - that the issue comes from the Windows kernel scheduler doing something when switching between its internal idle/non-idle states that KVM does not like. This would...
  8. Regression in kernel 5.15 with megaraid_sas when certain RAID cards have VD in rebuild/consistiency-check state

    This is more of a PSA than anything, for anyone here who might experience this issue, as it took me some time to debug. There is a regression in the megaraid_sas kernel module for version 5.15 used in PVE. In combination with certain RAID cards (such as `LSI MegaRAID SAS 2008`, for example PERC...
  9. VM shutdown, KVM: entry failed, hardware error 0x80000021

    This has been reported by others for 5.15.13 before, here: https://old.reddit.com/r/VFIO/comments/s1k5yg/win10_guest_crashes_after_a_few_minutes/ Sadly, this happened for me too. Interestingly, this has only happened on a single Windows 11 VM (21H1, 22000.xxx) but not on a Server 2019 (1809...
  10. WSUS cleanup crashes VM

    Some time has passed now and I can confirm that setting aio=native does solve the issue, with the VMs behaving as they did previously, without any crashes. For VM 100 I have applied this only to the OS drive while keeping the second attached one at the new defaults; it would seem that the RAID5-backed array is... (a qm sketch for the aio setting follows the result list)
  11. WSUS cleanup crashes VM

    I had applied this setting on 109 shortly after writing the initial post, as I remembered io_uring from the changelog. So far I have been unable to reproduce this. I will continue observing for the next few days and report back.
  12. WSUS cleanup crashes VM

    Observing a similar issue on two VMs, one with Windows 10, the other with Windows Server 2019. I can reliably attempt to trigger it (with some degree of chance) by opening Chrome/Chromium/Edge, even on a completely idle system. This definitely started happening only after the upgrade to PVE 7...
  13. Incorrect diskwrite accounting by pvestatd after upgrade from 6.4 to 7.0

    After the upgrade from 6.4 to 7.0 all CTs report diskwrite as 0. diskread continues to be accounted for properly. VMs are not affected by this. Observed on all nodes; standard configuration with rootfs stored as a raw file on an ext4 filesystem. proxmox-ve: 7.0-2 (running kernel: 5.11.22-2-pve)...
  14. 100% memory used after 7.0 upgrade

    Not a real issue with your container; just an issue with how PVE presents the information, which is inconsistent with what it was presenting before. The actual memory used and the behavior have not changed. See https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/page-6#post-402231 TL;DR...
  15. Proxmox External Metric Server - incorrect data in metrics after migration to PVE7

    Not an issue with the external metrics but with PVE itself; so far no ack from the PVE team on the issue: https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/page-6#post-402231 Not really critical but annoying; possibly fixed by...
  16. Proxmox VE 7.0 released!

    Same as on any other Linux machine: `echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor` --- Appreciate the effort, however the issue in question is for CTs, not VMs. --- Can we get any update on this? While the memory issue isn't a big deal to wait on, this one is rather...
  17. Proxmox VE 7.0 released!

    I can confirm these observations, however they do not seem limited to VMs; this is a screenshot from a very much idle node that runs exclusively CTs: Here are screenshots from two VMs on another node: And the overall CPU usage on that node: This is nothing to be blamed on PVE, however, I am afraid, I...
  18. Proxmox VE 7.0 released!

    Seeing the same issue here. It looks to be caused by memory usage being shown based on total used memory, including caches, which is inconsistent with how PVE shows host memory usage (where it ignores cache/buffers) and with how it was before. # free -m total used free... (a /proc/meminfo comparison follows the result list)
  19. Tuxis launches free Proxmox Backup Server BETA service

    @tuxis @fabian The backups are still not being verified and I still cannot access the Verify Jobs tab, being shown "permission check failed" when accessing /api2/json/admin/verify. I do seem to have the Datastore.Verify permission, however?
  20. Tuxis launches free Proxmox Backup Server BETA service

    Yes and no. The built binaries for proxmox-backup-server available for download are from the day before Datastore.Verify was added as a valid permission for the DatastoreAdmin role...
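
Regarding results 1 and 2: those posts note that the console only comes up when the system is booted without nomodeset. A minimal sketch of toggling that parameter on a GRUB-booted PVE host follows; it assumes the parameter lives in /etc/default/grub, and the example value shown is illustrative, not taken from the posts.

    # Check which parameters are currently set (GRUB-booted hosts):
    grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub
    # e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"   <- illustrative value

    # Remove nomodeset and regenerate the boot configuration:
    sed -i 's/\bnomodeset\b *//' /etc/default/grub
    update-grub

    # Hosts booted via proxmox-boot-tool/systemd-boot keep the parameters in
    # /etc/kernel/cmdline instead; edit that file and then run:
    # proxmox-boot-tool refresh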
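
Regarding results 10-12: a minimal sketch of switching a VM disk back to aio=native with qm, assuming VMID 100 and a scsi0 disk; the volume ID below is illustrative and should be copied from the qm config output, along with any other options already present on that line.

    # Show the current disk definition (VMID and disk slot are examples):
    qm config 100 | grep scsi0

    # Re-set the disk with aio=native appended; replace the volume ID with the
    # real one and carry over any other options from the line above:
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native

    # The new AIO mode applies after the VM has been fully stopped and started again.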
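
Regarding results 14 and 18: a small illustration of the two memory views being compared in those posts, computed from /proc/meminfo; the field names are standard, and the arithmetic reflects the posts' description of "used including cache" versus "used excluding cache/buffers".

    # "Used" counting page cache (the post-7.0 container view described above):
    awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} END {printf "used incl. cache: %d MiB\n", (t-f)/1024}' /proc/meminfo

    # "Used" ignoring reclaimable cache/buffers (how the host view is described):
    awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2} END {printf "used excl. cache: %d MiB\n", (t-a)/1024}' /proc/meminfo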
