"PME: Spurious native interrupt!" Kernel Meldungen

smasty

Hello everyone,

After upgrading the first PVE node from 6.0.9 to 6.1.5 (and thus to kernel 5.3.13), my syslog is continuously being filled with the following messages:

[...]
Jan 6 10:00:01 kernel: [ 8076.840412] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:04 kernel: [ 8078.958690] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:06 kernel: [ 8080.971962] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:06 kernel: [ 8080.988809] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:09 kernel: [ 8084.103786] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:09 kernel: [ 8084.103963] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:11 kernel: [ 8086.457771] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:16 kernel: [ 8091.345566] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:16 kernel: [ 8091.353984] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:26 kernel: [ 8101.770559] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:27 kernel: [ 8102.724566] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:27 kernel: [ 8102.869279] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:28 kernel: [ 8103.431608] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:31 kernel: [ 8106.841754] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:31 kernel: [ 8106.856727] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:33 kernel: [ 8108.399521] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:38 kernel: [ 8113.399608] pcieport 0000:00:03.0: PME: Spurious native interrupt!
Jan 6 10:00:39 kernel: [ 8114.350334] pcieport 0000:00:03.0: PME: Spurious native interrupt!
[...]

As you can see from the time intervals, quite a lot of log messages accumulate...

It apparently concerns the following PCI device:

# lspci -vv -t|grep 03.0
| +-03.0-[83]--
+-03.0-[04-05]--+-00.0 Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection

However, the messages only show up in the log after I have migrated a VM back to this node, i.e. once a VM is running on it.

I suspect this will be resolved with the next kernel update(s). Has anyone here seen anything similar in their logs?
 
Same problem here, since updating yesterday from version 5.4.x to 6.1-5.
We have the same Intel 10Gbit interfaces.
We had over 64 GB of logging in kern.log filling up the OS disk.

This morning our third node crashed again (out of disk space).
After deleting:
/var/log/syslog
/var/log/messages
/var/log/kern.log
and a reboot it is working again.

The spurious interrupt records fill up the logs very fast.
For now, solved by creating a cron job that empties these files every hour.
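For reference, a minimal sketch of such a cron job, assuming the files are truncated in place (so rsyslog keeps writing to the same open files) rather than deleted; the file name and the hourly schedule are only examples:
Bash:
# /etc/cron.d/truncate-spurious-logs (example name)
# Truncate the flooded log files every hour so the OS disk does not fill up.
0 * * * * root truncate -s 0 /var/log/syslog /var/log/kern.log /var/log/messages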
 
I am experiencing the same issue with the same network card, as seen below.
Code:
lspci -vv -t|grep 03.2
             +-03.2-[07]--+-00.0  Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection

My syslog looks like this as well:
Code:
[...]
Apr 14 20:43:02 *** kernel: [2413144.982743] pcieport 0000:00:03.2: PME: Spurious native interrupt!
Apr 14 20:43:07 *** kernel: [2413150.297445] pcieport 0000:00:03.2: PME: Spurious native interrupt!
Apr 14 20:43:07 *** kernel: [2413150.505500] pcieport 0000:00:03.2: PME: Spurious native interrupt!
Apr 14 20:43:18 *** kernel: [2413161.499213] pcieport 0000:00:03.2: PME: Spurious native interrupt!
[...]
 
Hi,

I have the same issue at the moment with the 82599ES card on Proxmox. I've found that if you disable interrupt coalescing on the affected cards, the errors stop happening.

sudo ethtool -C <iface> rx-usecs 0
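In case it is useful, the current coalescing settings can be inspected with the lowercase -c option before and after the change (the <iface> placeholder works the same way as above):
Bash:
# Show the current interrupt coalescing settings for the port.
ethtool -c <iface>

# Disable RX interrupt coalescing on that port, as described above.
ethtool -C <iface> rx-usecs 0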
 
Hi
We have the same issue:
Bash:
root@proxmox1:/etc/rsyslog.d# lspci
...
04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
...

dmesg:
Bash:
[Thu Jul 16 15:45:32 2020] pcieport 0000:00:02.2: PME: Spurious native interrupt!

We have been running Virtual Environment 6.2-10 since today and still get these messages.
Any ideas, or do we have to run the ethtool command as above?

@csutcliff - did you get any issues after running this command?
 
I haven't had any issues since turning off interrupt coalescing. The expected side effect is a slight increase in CPU usage, since there will be more interrupts; in my environment this is immeasurably small.
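If you want to gauge the extra interrupt load yourself, a generic way (just a sketch, not something from this thread) is to watch the card's interrupt counters while traffic is flowing:
Bash:
# Watch the per-queue interrupt counters of the ixgbe port once per second.
# Replace <iface> with your interface name.
watch -n 1 "grep <iface> /proc/interrupts"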
 
Run the command above. Keep in mind the interface will reinitialize and disconnect for a few seconds.
It worked in our environment; the CPU increase is not noticeable.
 
Hi Rdfeji,
Did you run the command below on the physical interface or on the bond?
Bash:
sudo ethtool -C <iface> rx-usecs 0
 
Ok fine! And is it reboot-persistent?
My workaround is to just run an @reboot crontab entry with the following command line:
Bash:
@reboot /sbin/ethtool -C ens7f0 rx-usecs 0 && /sbin/ethtool -C ens7f1 rx-usecs 0

Don't forget to replace ens7f0 and ens7f1 with your own network interfaces.
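The @reboot entry is re-run by cron on every boot, so it persists across reboots. As an alternative sketch (not from this thread), the same setting can be tied to the interface itself with post-up lines in /etc/network/interfaces, so it is re-applied whenever the port comes up; the interface names are again examples:
Bash:
# /etc/network/interfaces (excerpt)
iface ens7f0 inet manual
    post-up /sbin/ethtool -C ens7f0 rx-usecs 0

iface ens7f1 inet manual
    post-up /sbin/ethtool -C ens7f1 rx-usecs 0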
 
Fine, @reboot is straightforward.
I did a script as follows - but it is untested yet!
Bash:
root@proxmox1:/etc/network/if-pre-up.d# cat ethtool-rx-usecs
#!/bin/sh
ETHTOOL=/usr/sbin/ethtool
$ETHTOOL -C eno49 rx-usecs 0
$ETHTOOL -C eno50 rx-usecs 0
And of course, change the interfaces as needed.
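One detail worth double-checking (assuming the standard ifupdown/ifupdown2 hook behaviour): scripts in /etc/network/if-pre-up.d are only run if they are marked executable:
Bash:
chmod +x /etc/network/if-pre-up.d/ethtool-rx-usecs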
 
Funny, I'm still investigating. Last weekend I updated all nodes.
I still have the mentioned temporary fix active (run with cron @reboot), and after a reboot I saw "some but not many" spurious logs again.
When trying to manually disable interrupt coalescing it said "nothing to change", so cron is working.
After re-enabling interrupt coalescing, the spurious messages were gone?
I have to investigate this further and am still awaiting a fix.
 
Sorry for reviving an old thread, but I just experienced the same weird behavior as @rdfeij.
This started happening (for me at least) after enabling VLAN awareness on the bridge that uses the two ports of the network card.
I am running an HP DL360 G9 with dual Intel Xeon E5-2667v4 and the integrated dual SFP+ card using an Intel 82599ES chipset.

Interestingly, the system isn't complaining about the NIC itself but rather about the PCI Express Root Port at 00:02.2.

After using ethtool -C <iface> rx-usecs 0 on both SFP+ interfaces the issue disappeared, and re-enabling it didn't bring back the log entries (at least for now).

The issue also didn't disappear with the upgrade from 7 to 8.

Syslog example entry
kernel: pcieport 0000:00:02.2: PME: Spurious native interrupt

lspci entries for both devices
Code:
00:02.2 PCI bridge: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 2 (rev 01) (prog-if 00 [Normal decode])
        Subsystem: Hewlett-Packard Company Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 2
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 30
        NUMA node: 0
        Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
        I/O behind bridge: 2000-2fff [size=4K] [16-bit]
        Memory behind bridge: 92c00000-92efffff [size=3M] [32-bit]
        Prefetchable memory behind bridge: [disabled] [64-bit]
        Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
        BridgeCtl: Parity+ SERR+ NoISA- VGA- VGA16- MAbort- >Reset- FastB2B-
                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
        Capabilities: [40] Subsystem: Hewlett-Packard Company Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D PCI Express Root Port 2
        Capabilities: [60] MSI: Enable+ Count=1/2 Maskable+ 64bit-
                Address: fee002d8  Data: 0000
                Masking: 00000002  Pending: 00000000
        Capabilities: [90] Express (v2) Root Port (Slot-), MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0
                        ExtTag- RBE+
                DevCtl: CorrErr- NonFatalErr+ FatalErr+ UnsupReq-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
                        MaxPayload 256 bytes, MaxReadReq 128 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
                LnkCap: Port #5, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x8
                        TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
                RootCap: CRSVisible+
                RootCtl: ErrCorrectable- ErrNon-Fatal+ ErrFatal+ PMEIntEna+ CRSVisible+
                RootSta: PME ReqID 0400, PMEStatus- PMEPending-
                DevCap2: Completion Timeout: Range BCD, TimeoutDis+ NROPrPrP- LTR-
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
                         AtomicOpsCap: Routing- 32bit+ 64bit+ 128bitCAS+
                DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled, ARIFwd-
                         AtomicOpsCtl: ReqEn- EgressBlck-
                LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [e0] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [100 v1] Vendor Specific Information: ID=0002 Rev=0 Len=00c <?>
        Capabilities: [110 v1] Access Control Services
                ACSCap: SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-
                ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
        Capabilities: [148 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
                RootCmd: CERptEn- NFERptEn- FERptEn-
                RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
                         FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
                ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
        Capabilities: [1d0 v1] Vendor Specific Information: ID=0003 Rev=1 Len=00a <?>
        Capabilities: [250 v1] Secondary PCI Express
                LnkCtl3: LnkEquIntrruptEn- PerformEqu-
                LaneErrStat: 0
        Capabilities: [280 v1] Vendor Specific Information: ID=0005 Rev=3 Len=018 <?>
        Capabilities: [300 v1] Vendor Specific Information: ID=0008 Rev=0 Len=038 <?>
        Kernel driver in use: pcieport

...

04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        DeviceName: Embedded FlexibleLOM 1 Port 1
        Subsystem: Hewlett-Packard Company Ethernet 10Gb 2-port 560FLR-SFP+ Adapter
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin B routed to IRQ 17
        NUMA node: 0
        Region 0: Memory at 92d00000 (32-bit, non-prefetchable) [size=1M]
        Region 2: I/O ports at 2020 [size=32]
        Region 3: Memory at 92e04000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at 92e80000 [virtual] [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000
        Capabilities: [a0] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0W
                DevCtl: CorrErr- NonFatalErr+ FatalErr+ UnsupReq-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 4096 bytes
                DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
                LnkCap: Port #2, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x8
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [e0] Vital Product Data
                Product Name: HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter
                Read-only fields:
                        [PN] Part number: 665241-001
                        [EC] Engineering changes: B-5514
                        [SN] Serial number: MYI6200JCL
                        [V0] Vendor specific: 11W/8W PCIeG2x8 2p 10Gb SFP+ Intel 82599
                        [V2] Vendor specific: 5620
                        [V4] Vendor specific: 1402EC7717D4
                        [V5] Vendor specific: 0B
                        [RV] Reserved: checksum good, 0 byte(s) reserved
                Read/write fields:
                        [V1] Vendor specific: 4.5.19
                        [V3] Vendor specific: 3.0.24
                        [V6] Vendor specific: 2.3.20
                        [YA] Asset tag: N/A
                        [YB] System specific: xxxxxxxxxxxxxxxx
                        [YC] System specific: xxxxxxxxxxxxxxxx
                End
        Capabilities: [100 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [140 v1] Device Serial Number 14-02-ec-ff-ff-77-17-d4
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe

pveversion
Code:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-12-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-6
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
proxmox-kernel-6.2: 6.2.16-12
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.15.111-1-pve: 5.15.111-1
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.8
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-5
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
 
I was getting this message previously but was able to get it to stop using ethtool -C <iface> rx-usecs 0 on both SFP+ interfaces after rebooting.

The problem for me is that after upgrading to Proxmox kernel 6.2.16-19-pve the issue is back and ethtool no longer solves the problem.

I am also running an HP DL380 G9 with dual Intel Xeon E5-2680v4 and the integrated dual SFP+ card using an Intel 82599ES chipset.

I've found a way to filter the messages from the logs, until a fix is found.

nano /etc/rsyslog.d/10-filter-spurious-interrupt-syslog.conf

Code:
# Filter out messages like these:
# 2023-11-20T10:45:40.441829+00:00 prox01 kernel: [3239768.702255] pcieport 0000:00:02.2: PME: Spurious native interrupt!


:msg, contains, "pcieport 0000:00:02.2: PME: Spurious native interrupt!" stop

systemctl restart rsyslog.service
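Note that the rsyslog rule only filters the text log files; the kernel messages still end up in the systemd journal. If disk usage is the main concern, capping the journal size is an additional option (the 1G limit is only an example):
Bash:
# In /etc/systemd/journald.conf set, for example:
#   [Journal]
#   SystemMaxUse=1G
# then apply the new limit:
systemctl restart systemd-journald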
 
