What does the pfSense VM side look like under Hardware? Is pfSense pulling an IP on WAN (assume that is vmbr1)? What's the IP of the computer you are using to access the Proxmox web GUI? Is it pulling an IP address from pfSense DHCP, or at least manually set to be in the new pfSense LAN subnet?
Will second @rtorres. I ran pfSense for a couple of years, then switched to OPNsense for the last 3-4 years, all running as a Proxmox VM in a config similar to his description. No issues with either going down on their own. I have come to have as much faith in this setup as in any separate hardware...
ixgbe stable / 5.20.9 released.
5.20.9 builds and installs without error for me on 6.8.8-1-pve, BUT the SFP+ ports are still down after a reboot into this kernel/driver combo. I don't have time to investigate much; hopefully others have better luck.
reverted back to 6.5.13-5-pve/ixgbe-5.19.9...
Unless I am misunderstanding your suggestions, I did try already and the build errored out. See https://forum.proxmox.com/threads/intel-x553-sfp-ixgbe-no-go-on-pve8.135129/post-657758
Must be a different issue than what I am experiencing, then; those out-of-tree Intel drivers work fine to resurrect my SFP+ ports when installed on the Proxmox 6.5 kernel.
While this affects Proxmox, it is not just a Proxmox issue. It is a general Linux kernel driver issue that affects every distribution using Linux kernel 6.* and above: Ubuntu, Debian, SUSE, etc.
You can pin the last Proxmox 6.5 kernel and build/install the 5.20.3 driver there to regain...
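As a rough sketch, pinning the older kernel looks like this; the exact version string 6.5.13-5-pve is an assumption here, so check what proxmox-boot-tool actually lists on your system first:

```shell
# Show the kernels currently installed and bootable
proxmox-boot-tool kernel list

# Pin the last 6.5 kernel so the 6.8 series is not booted by default
proxmox-boot-tool kernel pin 6.5.13-5-pve

# Later, once the driver issue is fixed, boot the newest kernel again with:
# proxmox-boot-tool kernel unpin
```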
Just doing a mv /usr/src/ixgbe-5.19.9/dkms.conf /usr/src/ixgbe-5.19.9/dkms.conf.bak should be enough. Then you can install the 6.8 kernel, though you will lose function of the SFP+ ports unless you pin the 6.5 kernel with the DKMS 5.19.9 driver.
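In other words, renaming dkms.conf out of the way stops DKMS from trying (and failing) to build the module for the new kernel. A minimal sketch, assuming the driver source really sits at /usr/src/ixgbe-5.19.9:

```shell
# Rename dkms.conf so dkms autoinstall skips this module on new kernels
conf=/usr/src/ixgbe-5.19.9/dkms.conf
if [ -f "$conf" ]; then
    mv "$conf" "$conf.bak"
fi
# To re-enable the module later, move the file back to dkms.conf
```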
I have been using all the 6.5.* kernels with DKMS auto-installing the ixgbe-5.19.9 driver without issue, following what I posted above. Could not manually get 5.19.9 to build for the 6.8.* series, though. Today Intel released 5.20.3, but that fails with DKMS as well. Went back and pinned to 6.5.13 for...
Comment out rmmod ixgbe at the end of your script and try again; I don't think you should be running that at all with DKMS. Or you can just run the commands from here at the CLI, because with DKMS you should never have to run that script more than once.
So with the help of @aarononeal and the How to build a kernel module with DKMS on Linux webpage I have DKMS autoinstall working with the last kernel upgrade.
apt-get install proxmox-default-headers build-essential dkms gcc make
cd /tmp
wget...
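The truncated steps continue roughly as below: unpack the driver under /usr/src, give it a dkms.conf, and register it with DKMS. This is only a sketch following the DKMS how-to page; the 5.19.9 version and the dkms.conf fields are my assumptions, not the exact file from that page:

```shell
# Unpack the downloaded tarball into the location DKMS expects
tar xzf ixgbe-5.19.9.tar.gz -C /usr/src

# Minimal dkms.conf (the out-of-tree Intel driver builds from its src/ subdirectory)
cat > /usr/src/ixgbe-5.19.9/dkms.conf <<'EOF'
PACKAGE_NAME="ixgbe"
PACKAGE_VERSION="5.19.9"
MAKE[0]="cd src/ && make"
CLEAN="cd src/ && make clean"
BUILT_MODULE_NAME[0]="ixgbe"
BUILT_MODULE_LOCATION[0]="src/"
DEST_MODULE_LOCATION[0]="/updates/dkms"
AUTOINSTALL="yes"
EOF

# Register and install the module for the running kernel;
# AUTOINSTALL="yes" makes DKMS rebuild it on future kernel upgrades
dkms add -m ixgbe -v 5.19.9
dkms install -m ixgbe -v 5.19.9
```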
Taking a shot in the dark, but pve-headers is no longer what you need; I think that package was deprecated somewhere in the last year or so. As mentioned above, you now need proxmox-default-headers.
How to download and install ixgbe driver on Ubuntu or Debian
The commands below are what I adapted for Proxmox from that page to install the latest out-of-tree Intel ixgbe driver and fix this issue for now. The main change is installing proxmox-default-headers instead of linux-headers-$(uname -r).
sudo...
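For reference, the adapted sequence looks roughly like the following. The tarball name and 5.19.9 version are assumptions (use whatever the current Intel release is), and on a root shell the sudo prefixes can be dropped:

```shell
# Build prerequisites: proxmox-default-headers replaces linux-headers-$(uname -r)
sudo apt-get install -y proxmox-default-headers build-essential gcc make

# Unpack and build the out-of-tree driver (filename/version assumed)
cd /tmp
tar xzf ixgbe-5.19.9.tar.gz
cd ixgbe-5.19.9/src
sudo make install

# Swap in the freshly built module (manual install only; skip rmmod with DKMS)
sudo rmmod ixgbe 2>/dev/null || true
sudo modprobe ixgbe
```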
So all the "not found" errors after reboot are because you are running the modinfo ./ixgbe.ko command from the wrong directory. Prior to reboot, when you ran modinfo ./ixgbe.ko, you were already in the /tmp/ixgbe-5.19.9/src folder where ixgbe.ko was physically located. After reboot you are only at /...
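To illustrate the difference: a path argument like ./ixgbe.ko is resolved relative to your current directory, while a bare module name is looked up under /lib/modules/$(uname -r) for the running kernel (the /tmp path below matches the build example earlier in the thread):

```shell
# Only works from the directory that contains the built module file
cd /tmp/ixgbe-5.19.9/src
modinfo ./ixgbe.ko

# Works from anywhere once the module is installed for the running kernel
modinfo ixgbe
```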
Both of those links you listed are really old relative to the kernel version you are currently running. I needed to do this because the Linux kernels used in PVE 8 and above do not work with X553-based networking. Not sure if that is the case with X540-based cards, or whether you are having a...
No response from anyone at @proxmox on this one; likely too small an affected installed base so far. I have seen this Linux kernel commit suggested as the culprit.
An Amazon employee states that reverting this commit and recompiling the kernel allows their similar network hardware to use the...