@kriansa Thank you for sharing those details, that is certainly an interesting find! Cracking detective work there and thanks for the assistance.
I will go back to the reserve option in that case. Thanks for helping me work around the issue so I have a usable system. Your work will also help...
Splendid, that did the trick!
Adding either pci=realloc=off or reserve=0x80000000,0xfffffff to /etc/kernel/cmdline and running proxmox-boot-tool refresh resolved the issue for me. As I'm booting from a ZFS disk it seems that /etc/default/grub is ignored.
Thanks @kriansa for the assistance. For now...
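The fix described above can be sketched as a small shell helper. This is a hedged illustration, not the poster's exact steps: the `add_param` function and the sample `root=ZFS=...` cmdline contents are assumptions for the demo, which runs against a temporary copy rather than the real `/etc/kernel/cmdline`. On a proxmox-boot-tool-managed system you would edit the real file and then run `proxmox-boot-tool refresh`.

```shell
# Hypothetical helper: append a kernel parameter to a cmdline file only if
# it is not already present (mirrors the manual edit described above).
add_param() {
  local file="$1" param="$2"
  # -w avoids matching a substring of another parameter
  grep -qw "$param" "$file" || sed -i "s/\$/ $param/" "$file"
}

# Demo against a temp copy instead of the real /etc/kernel/cmdline
# (the cmdline contents here are an assumed example):
tmp=$(mktemp)
echo "root=ZFS=rpool/ROOT/pve-1 boot=zfs" > "$tmp"
add_param "$tmp" "pci=realloc=off"
cat "$tmp"   # root=ZFS=rpool/ROOT/pve-1 boot=zfs pci=realloc=off

# On the real system, follow the edit with:
#   proxmox-boot-tool refresh
# then reboot and verify with: cat /proc/cmdline
```

Because the helper checks for the parameter first, re-running it does not duplicate the entry.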
Yes sir, I am indeed, but I appreciate what you are saying; I wonder if I can enter them manually as a test. This is my update-grub output after editing /etc/default/grub, which I presume is what I should have been doing.
update-grub
Generating grub configuration file ...
W: This system is booted...
Thank you for the suggestion, much appreciated. I added it and ran update-grub, but I seem to have the same issue:
dmesg | grep mpt
[ 0.010779] Device empty
[ 0.285720] Dynamic Preempt: voluntary
[ 0.285751] rcu: Preemptible hierarchical RCU implementation.
[ 0.309425] MDS...
I'm still having the same issue with a fresh install of Proxmox 8.02 running kernel 6.2.16-19-pve. I did add the following line to /etc/default/grub and ran update-grub, but this did not seem to solve the issue for me. What did I get wrong?
GRUB_CMDLINE_LINUX="pci=realloc=off"
Attached is my...
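For reference, the GRUB-based flow attempted in this post looks like the following sketch. It assumes a system that actually boots via GRUB; as noted elsewhere in the thread, hosts booted via proxmox-boot-tool (e.g. ZFS root) ignore /etc/default/grub, which would explain the warning update-grub prints.

```shell
# 1. In /etc/default/grub, extend the existing line rather than replacing
#    any options already present:
#      GRUB_CMDLINE_LINUX="pci=realloc=off"

# 2. Regenerate the GRUB configuration:
update-grub

# 3. Reboot, then confirm the parameter actually reached the kernel:
#      grep -o 'pci=realloc=off' /proc/cmdline

# Note: on proxmox-boot-tool-managed hosts, edit /etc/kernel/cmdline and
# run `proxmox-boot-tool refresh` instead.
```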
I was getting this message previously, but was able to get it to stop by running ethtool -C <iface> rx-usecs 0 on both SFP+ interfaces after each reboot.
Problem for me is that after upgrading to Proxmox kernel 6.2.16-19-pve the issue is back, and ethtool no longer solves the problem.
I am also running an HP...
Did you find a solution to this? I have a ProLiant DL380 G9 with an "Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection" card. I've installed all firmware updates for the board and the NIC, but no luck.
Until recently running "/usr/sbin/ethtool -C eno50 rx-usecs 0" would work but...
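The interrupt-coalescing workaround mentioned in these posts can be sketched as below. `eno50` comes from the post above; `eno49` for the second SFP+ port is an assumption, so substitute your own names from `ip link`. The persistence hook assumes a standard ifupdown setup, which Proxmox uses by default.

```shell
# Workaround from the posts above: disable RX interrupt coalescing on
# both 82599ES SFP+ ports.
/usr/sbin/ethtool -C eno49 rx-usecs 0
/usr/sbin/ethtool -C eno50 rx-usecs 0

# The setting is lost on reboot; with ifupdown it can be re-applied
# automatically via a post-up hook in /etc/network/interfaces, e.g.:
#   iface eno50 inet manual
#       post-up /usr/sbin/ethtool -C eno50 rx-usecs 0
```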
Hi Everyone,
I've got Proxmox installed on my home server. I upgraded to version 6 last night and am trying to get pve-zsync configured and running, but I've hit a strange issue.
I have two ZFS pools (vmdata01 and vmdata02), one with 7200 rpm drives and the other with SSDs. On my Linux VM I...