Hey folks,
I just installed PVE 8.0.3 on a Fujitsu Esprimo G5010 (D3804) Mini PC for a homelab setup. So far, everything is working fine, except I can't fully shut down the node.
If I issue the shutdown command (no difference whether done via GUI or CLI, and also no difference if a VM is gracefully...
Hi folks,
I've been reading and testing for two days now, but I can't get it running and hope for a hint from you guys.
My HW:
Board: Supermicro X11SCH-LN4F with C246 chipset
CPU: Intel Xeon E-2176G
Software:
Proxmox 7.4-3
Kernel: 5.15.107-1-pve
Goal:
Setting up a VM (or LXC) with Frigate and passing...
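(The goal is cut off here, but a common Frigate setup is passing the host's iGPU render device into an LXC for hardware decoding. A minimal sketch of the container config lines; the device major number and paths are assumptions, verify them on your host:)

```
# /etc/pve/lxc/<vmid>.conf — hypothetical snippet for iGPU passthrough to an LXC.
# 226 is the usual major number for DRI devices; check with: ls -l /dev/dri
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```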
I have the same behaviour as fireon describes.
Debian 10 and Debian 11, as well as Windows 10 and Server 2016 VM.
*If* it happens, then all VMs show the same "result" as in fireon's screenshot. Also, resetting a VM does not work; I have to stop it, wait a few seconds and start it again...
As dea mentioned, I tried to remember when I upgraded from 6 to 7; it must have been between August and September, but I don't remember having the problem from the beginning.
It appeared later, but I can't say when or with which update it came.
At first I didn't pay much attention, since it was...
Intel I219-V (still or again) not working:
Syslog during boot says:
e1000e 0000:00:1f.6: The NVM Checksum Is Not Valid
e1000e: probe of 0000:00:1f.6 failed with error -5
proxmox-ve: 7.1-1 (running kernel: 5.15.12-1-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-5.15...
Btw, while still trying to debug further... The e1000e driver is not loaded/bound to the NIC on the newer kernel versions; syslog says:
kernel: e1000e 0000:00:1f.6: The NVM Checksum Is Not Valid
kernel: e1000e: probe of 0000:00:1f.6 failed with error -5
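For what it's worth, the binding state and the NVM can at least be inspected; the interface name eno1 is an assumption, and `ethtool -e` only works on a kernel where the driver actually binds:

```shell
# Confirm which driver (if any) is bound to the onboard NIC at 00:1f.6
lspci -nnk -s 00:1f.6

# On an older kernel where e1000e still binds, dump the first part of the
# NIC's EEPROM/NVM for inspection (replace eno1 with your interface name)
ethtool -e eno1 | head -n 20
```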
No ideas? Or at least a hint on how to set the default kernel to boot?
I'm on ZFS as root, booted with systemd-boot, but I can't find loader.conf...
I tried to uninstall all the newer kernels, but then the system says it's going to remove Proxmox entirely (I added 5.11.22-4 to the manual list...)
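In case it helps anyone: on ZFS root with systemd-boot, there is no hand-edited loader.conf; proxmox-boot-tool manages the boot entries on the ESPs. A sketch of what should work, assuming your pve-kernel-helper is recent enough to have the `kernel pin` subcommand:

```shell
# Show the kernels proxmox-boot-tool knows about (including the manual list)
proxmox-boot-tool kernel list

# Make 5.11.22-4-pve the default boot entry without touching loader.conf
proxmox-boot-tool kernel pin 5.11.22-4-pve

# Write the updated entries to all registered ESPs
proxmox-boot-tool refresh
```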
Unfortunately it's not that easy.
The I219 NIC is just not available, neither as eno1 nor under any other name.
I attached a USB NIC (enx00243216680d), which I also added as a slave to vmbr1, to get access to the server when booted with a newer kernel. It doesn't make any difference if that USB NIC is...
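For reference, this is roughly how the bridge looks in my /etc/network/interfaces (the addresses are placeholders, not my real ones):

```
auto vmbr1
iface vmbr1 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        # eno1 is the broken onboard I219; enx00243216680d is the USB NIC
        bridge-ports eno1 enx00243216680d
        bridge-stp off
        bridge-fd 0
```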
well, I can't reach the PVE server, since it has no network connection (i.e. no access to the webinterface).
Also, from the host console, I can't ping any outside services; the eno1 interface doesn't come up and therefore vmbr1 has no connection to the outside world.
With the old...
Not yet, but if I update to 5.15, the oldest (and here, working) kernel version will be deleted, since only the last three kernels remain, or am I wrong?
So in case 5.15 also doesn't work, I can't go back to 5.11.22-4?
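One way to keep the old kernel from being auto-removed is to put the package on hold; the exact package name is an assumption here, check yours with `dpkg -l 'pve-kernel-*'`:

```shell
# Hold the known-good kernel so apt upgrades/autoremove won't delete it
apt-mark hold pve-kernel-5.11.22-4-pve

# Verify the hold is in place
apt-mark showhold
```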
Hi folks,
I installed PVE 7 with kernel 5.11.22-4, which was working fine. I upgraded the system to the current version 7.1-8 with kernel 5.13.19-2, and the Intel I219 NIC stopped working. After some research I found out that the I219 NIC seems to have "some" problem with some (mostly older)...