Attaching files from the R320, as I believe this is the most typical configuration, unlike the other two, which are more custom.
There's also an update to my previous post - it would seem that the console post-initrd gets initialized only if the system is booted without nomodeset. This was part of my...
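For anyone wanting to check which case their boot falls into, the kernel command line of the running system can be inspected directly; a minimal sketch:

```shell
# Check whether the running kernel was booted with the nomodeset parameter
if grep -qw nomodeset /proc/cmdline; then
    echo "booted with nomodeset"
else
    echo "booted without nomodeset"
fi
```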
Encountering the same issue here, with the initrd console missing on a few hardware configurations after upgrading from 6.2.16-7 to 6.5.11-4:
- PowerEdge R320; the console does get initialized after that, but I had to enter the decryption passphrase blindly
- HP t630 thin client with AMD GX-420GI; the console does...
That's a pretty huge range, and since it also covers most Silver/Gold Xeons from the past few years (almost no company replaces servers every year for newer CPUs), it becomes quite a big issue. It definitely needs to be resolved one way or another. Preferably it should be resolved upstream, as I...
In my specific case I've been running with pt since PVE 6.x, so this has no effect on me. It's a shame, though, that the defaults were reverted, as it will mask the underlying regression (the issue described did not happen on 5.13 when running in pt mode).
I believe there was a report opened against Ubuntu's kernel on their bug tracker, but I can't seem to locate it anymore (or, to be more specific, the link leads to a 404). I'm not aware of any other reports of this issue, and I'm not really in a position to perform a bisect to find the offending change. One...
Be aware of another existing issue, which shows itself only during a rebuild: https://forum.proxmox.com/threads/regression-in-kernel-5-15-with-megaraid_sas-when-certain-raid-cards-have-vd-in-rebuild-consistiency-check-state.110470/
Be aware that ZFS will not function at its best...
The higher uptime (even if it ultimately crashed after some time) with the "Performance" power plan could point to what I previously mentioned - that the issue comes from the Windows kernel scheduler doing something, when switching between its internal idle/non-idle states, that KVM does not like. This would...
This is more of a PSA than anything, to anyone here that might experience this issue, as it took me some time to debug.
There is a regression in the megaraid_sas kernel module in the 5.15 kernel used by PVE. In combination with certain RAID cards (such as the `LSI MegaRAID SAS 2008`, for example PERC...
This has been reported by others for 5.15.13 before here: https://old.reddit.com/r/VFIO/comments/s1k5yg/win10_guest_crashes_after_a_few_minutes/
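For anyone trying to confirm which driver build they're on before/after a kernel upgrade, the module's reported version can be checked; a small sketch, guarded so it stays harmless on machines without the module or without modinfo:

```shell
# Print the megaraid_sas driver version shipped with the running kernel, if any
if command -v modinfo >/dev/null 2>&1; then
    modinfo megaraid_sas 2>/dev/null | awk '/^version:/ {print $2}'
else
    echo "modinfo not available"
fi
```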
Sadly, this happened for me too. Interestingly, it has only happened on a single Windows 11 VM (21H1, 22000.xxx) but not on a Server 2019 (1809...
Some time has passed now and I can confirm that setting aio=native does solve the issue, with the VMs behaving as they did previously, without any crashes. For VM 100 I have applied this only to the OS drive while keeping the second attached one at the new defaults; it would seem that the RAID5-backed array is...
I had applied this setting on 109 shortly after writing the initial post, as I remembered io_uring from the changelog. So far I have been unable to reproduce the issue. I will continue observing for the next few days and report back.
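For reference, here's roughly what the relevant disk line in the VM config (/etc/pve/qemu-server/100.conf) looks like with the override applied; the storage name, volume name and size below are placeholders for illustration, not a copy of my actual config:

```
scsi0: local-lvm:vm-100-disk-0,aio=native,size=32G
```

The same option can also be set from the command line with `qm set` by appending `,aio=native` to the disk's option string.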
Observing a similar issue on two VMs, one with Windows 10, the other with Windows Server 2019. I can fairly reliably trigger it (with some degree of chance) by opening Chrome/Chromium/Edge, even on a completely idle system. This has definitely started happening only after the upgrade to PVE 7...
After upgrading from 6.4 to 7.0, all CTs report diskwrite as 0, while diskread continues to be accounted for properly. VMs are not affected by this. Observed on all nodes; standard configuration with rootfs stored as a raw file on an ext4 filesystem.
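As far as I understand, PVE derives these per-container counters from the container's cgroup I/O statistics, so the regression is presumably in how those files are read after the upgrade. A minimal sketch of summing write bytes from a cgroup-v2 io.stat-style input (the device numbers and values below are made up for illustration):

```shell
# Sum wbytes across all devices in fabricated io.stat-style lines
printf '8:0 rbytes=1048576 wbytes=524288 rios=10 wios=5\n8:16 rbytes=0 wbytes=4096 rios=0 wios=1\n' |
awk '{ for (i = 1; i <= NF; i++)
         if ($i ~ /^wbytes=/) { sub("wbytes=", "", $i); total += $i } }
     END { print total }'
```

With the sample input above this sums 524288 + 4096 and prints 528384; a result of 0 from real io.stat data would match what the CTs are reporting.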
proxmox-ve: 7.0-2 (running kernel: 5.11.22-2-pve)...
Not a real issue with your container; just an issue with how PVE presents the information, which is inconsistent with what it presented before. The actual memory usage and behavior have not changed. See https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/page-6#post-402231
TL;DR...
Not an issue with external metrics but with PVE itself; so far there has been no ack from the PVE team on the issue: https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/page-6#post-402231
Not really critical but annoying; possibly fixed by...
Same as on any other Linux machine.
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
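To verify the change took effect (the sysfs paths exist only on hosts with cpufreq support, and as on any Linux box the setting does not persist across reboots unless made permanent):

```shell
# Show the active governor for each core, or note missing cpufreq support
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null \
  || echo "cpufreq sysfs interface not available"
```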
---
I appreciate the effort; however, the issue in question is for CTs, not VMs.
---
Can we get any update on this? While the memory issue isn't a big deal to wait on, this one is rather...
I can confirm these observations; however, they do not seem limited to VMs. This is a screenshot from a very much idle node that runs exclusively CTs:
Here are screenshots from two VMs on other node:
And overall CPU usage on that node:
I'm afraid this is nothing that can be blamed on PVE, however. I...
Seeing the same issue here. It looks to be caused by memory usage being shown as total used memory, including caches, which is inconsistent with how PVE shows host memory usage (where it ignores cache/buffers) and with how it was shown before.
# free -m
total used free...
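For anyone wanting to reproduce the discrepancy on the host itself, here's a small sketch computing "used" both ways directly from /proc/meminfo (ignoring SReclaimable, which a full calculation would also subtract):

```shell
# "used" including cache vs. excluding buffers/cache, in MiB
awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2}
     END { printf "incl_cache=%d MiB, excl_cache=%d MiB\n",
           (t-f)/1024, (t-f-b-c)/1024 }' /proc/meminfo
```

The first figure matches what the CT summary now shows; the second matches the host-style figure it showed before.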
@tuxis @fabian The backups are still not being verified, and I still cannot access the Verify Jobs tab; I'm shown "permission check failed" when accessing /api2/json/admin/verify
I do seem to have the Datastore.Verify permission, however?
Yes and no. The built proxmox-backup-server binaries available for download are from the day before Datastore.Verify was added as a valid permission for the DatastoreAdmin role...
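To double-check which ACL entries are actually in effect after upgrading, something like the following should work; I'm going from memory on the subcommand name, and the guard just keeps it harmless on non-PBS machines:

```shell
# List configured ACL entries on a Proxmox Backup Server host
if command -v proxmox-backup-manager >/dev/null 2>&1; then
    proxmox-backup-manager acl list
else
    echo "proxmox-backup-manager not installed on this host"
fi
```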