Unplugging or replugging the network cable causes the PVE system to completely lose network connectivity.

huitheme

New Member
Dec 15, 2025
PVE Version: 9.1.2

Device: NUC15

Network Card: Intel i226V (igc)

Symptoms: After recently replacing the network switch (which involved frequently unplugging and replugging network cables), I found that PVE completely loses network connectivity whenever the cable is unplugged or replugged.

The web UI at IP:8006 is unreachable.

Pinging 10.0.0.1 (gateway) failed.

The relevant log is below:

Code:
root@nuc:~# journalctl -k -b -1 | grep -Ei "i226|nic|link|reset|timeout"
Dec 15 10:14:40 nuc kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 15 10:14:40 nuc kernel: audit: initializing netlink subsys (disabled)
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 15 10:14:40 nuc kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 15 10:14:40 nuc kernel: simple-framebuffer simple-framebuffer.0: [drm] Registered 1 planes with drm panic
Dec 15 10:14:40 nuc kernel: igc 0000:57:00.0: 4.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x1 link)
Dec 15 10:14:40 nuc kernel: igc 0000:57:00.0 nic0: renamed from enp87s0
Dec 15 10:14:40 nuc kernel: i915 0000:00:02.0: [drm] Registered 4 planes with drm panic
Dec 15 10:14:41 nuc kernel: softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
Dec 15 10:14:41 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 10:14:41 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 15 10:14:41 nuc kernel: igc 0000:57:00.0 nic0: entered allmulticast mode
Dec 15 10:14:41 nuc kernel: igc 0000:57:00.0 nic0: entered promiscuous mode
Dec 15 10:14:45 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 15 10:14:45 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 10:14:45 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 15 10:14:57 nuc kernel: vfio-pci 0000:01:00.0: resetting
Dec 15 10:14:57 nuc kernel: vfio-pci 0000:01:00.0: reset done
Dec 15 10:14:59 nuc kernel: vfio-pci 0000:01:00.0: resetting
Dec 15 10:14:59 nuc kernel: vfio-pci 0000:01:00.0: reset done
Dec 15 10:14:59 nuc kernel: vfio-pci 0000:01:00.0: resetting
Dec 15 10:14:59 nuc kernel: vfio-pci 0000:01:00.0: reset done
Dec 15 10:17:50 nuc kernel: igc 0000:57:00.0:    [12] Timeout            
Dec 15 10:17:50 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 10:17:50 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 10:17:50 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Down
Dec 15 10:17:50 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 15 10:17:50 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 10:17:50 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 10:17:59 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 15 10:17:59 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 10:17:59 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 15 11:24:24 nuc kernel: igc 0000:57:00.0:    [12] Timeout            
Dec 15 11:24:24 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Down
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 11:24:24 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 11:24:24 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 11:24:32 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 15 11:24:32 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 11:24:32 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 15 11:41:18 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 11:41:18 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 11:41:18 nuc kernel: igc 0000:57:00.0:    [12] Timeout            
Dec 15 11:41:18 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Down
Dec 15 11:41:18 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 15 11:41:34 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 15 11:41:34 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 11:41:34 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 13:13:39 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Down
Dec 15 13:13:39 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 13:13:39 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 13:13:56 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 15 13:13:56 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 13:13:56 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 15 14:35:00 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Down
Dec 15 14:35:00 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 15 14:35:25 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 15 14:35:25 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 14:35:25 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 15 15:56:28 nuc kernel: igc 0000:57:00.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Down
Dec 15 15:56:28 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Dec 15 15:56:28 nuc kernel: pcieport 0000:00:1c.0:    [12] Timeout            
Dec 15 15:57:04 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 15 15:57:04 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 15 15:57:04 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 15 17:42:30 nuc kernel: igc 0000:57:00.0 nic0: PCIe link lost, device now detached
Dec 15 17:42:30 nuc kernel: Modules linked in: tcp_diag inet_diag vfio_pci vfio_pci_core vfio_iommu_type1 vfio iommufd veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter nf_tables bonding tls softdog sunrpc binfmt_misc nfnetlink_log xe gpu_sched drm_gpuvm drm_gpusvm_helper drm_ttm_helper drm_exec snd_hda_codec_intelhdmi drm_suballoc_helper snd_hda_intel snd_sof_pci_intel_mtl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_hda_ext_core snd_hda_codec snd_hda_core intel_uncore_frequency snd_intel_dspcfg intel_uncore_frequency_common snd_intel_sdw_acpi sch_fq_codel snd_soc_acpi_intel_match x86_pkg_temp_thermal snd_soc_acpi_intel_sdca_quirks intel_powerclamp soundwire_generic_allocation snd_soc_acpi coretemp snd_hwdep soundwire_bus processor_thermal_device_pci snd_soc_sdca kvm_intel
Dec 15 17:42:30 nuc kernel:  snd_soc_core processor_thermal_device i915 mei_gsc_proxy intel_rapl_msr snd_compress processor_thermal_wt_hint ac97_bus kvm platform_temperature_control snd_pcm_dmaengine drm_buddy processor_thermal_soc_slider snd_pcm ttm processor_thermal_rfim intel_pmc_core irqbypass processor_thermal_rapl snd_timer drm_display_helper polyval_clmulni int3403_thermal intel_rapl_common ghash_clmulni_intel pmt_telemetry snd cec processor_thermal_wt_req pmt_discovery aesni_intel pmt_class processor_thermal_power_floor mei_me rc_core rapl intel_pmc_ssram_telemetry soundcore processor_thermal_mbox int3400_thermal intel_cstate pcspkr asus_nb_wmi wmi_bmof crc8 mei intel_vpu acpi_pad i2c_algo_bit acpi_tad int340x_thermal_zone igen6_edac intel_vsec intel_hid acpi_thermal_rel mac_hid zfs(PO) spl(O) msr vhost_net vhost vhost_iotlb tap efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq hid_sensor_custom hid_sensor_hub hid_generic intel_ishtp_hid hid ucsi_acpi typec_ucsi typec dm_thin_pool

I tried disabling ASPM from both PVE's GRUB and the NUC's BIOS, but the problem persists.
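
For reference, disabling ASPM from GRUB is typically done by adding a kernel command-line parameter and regenerating the config (a sketch; my exact flags may differ):

Code:
# /etc/default/grub -- append pcie_aspm=off to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"

# Regenerate the GRUB config and reboot
update-grub
reboot

# After reboot, confirm the parameter is active
cat /proc/cmdline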

I updated to the latest NUC15 BIOS, but the problem still persists.

I've also tried several different network cables, but the problem remains.

Under normal circumstances, both LEDs on the NUC15's network port are lit. When PVE loses its network connection, only one LED is lit, and the switch's web interface also shows an abnormal status for the port connected to the NUC15.
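
When the link drops, these commands from a local console show what the driver and bridge see (a diagnostic sketch; nic0 and vmbr0 are the names from my logs):

Code:
# Physical link state and negotiated speed as seen by the driver
ethtool nic0

# Interface counters -- look for carrier changes and errors
ip -s link show nic0

# Bridge port states for vmbr0
bridge link show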
 
Restarting PVE restores the network connection, but after another unplug/replug test, PVE loses network connectivity again and IP:8006 is unreachable.
 
Please keep posts in this forum in English - you have a better chance of getting an answer from more people.
Thanks!
 
I suspect it might be an issue with the IGC driver.

nano /etc/modprobe.d/igc.conf
options igc disable_eee=1
reboot

dmesg | grep igc
Code:
root@nuc:~# dmesg | grep igc
[ 1.211391] igc: unknown parameter 'disable_eee' ignored
[ 1.211824] igc 0000:57:00.0: enabling device (0000 -> 0002)
[ 1.212309] igc 0000:57:00.0: PTM enabled, 4ns granularity
[ 1.256530] igc 0000:57:00.0 (unnamed net_device) (uninitialized): PHC added
[ 1.285398] igc 0000:57:00.0: 4.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x1 link)
[ 1.285402] igc 0000:57:00.0 eth0: MAC: 88:ae:dd:69:32:ff
[ 1.287121] igc 0000:57:00.0 enp87s0: renamed from eth0
[ 2.485579] igc 0000:57:00.0 nic0: renamed from enp87s0
[ 3.834862] igc 0000:57:00.0 nic0: entered allmulticast mode
[ 3.834888] igc 0000:57:00.0 nic0: entered promiscuous mode
[ 7.254241] igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
root@nuc:~#

This indicates that the igc driver does not support the `disable_eee` parameter at all.

Keep trying.

nano /etc/modprobe.d/igc.conf
options igc InterruptThrottleRate=0 EnergyEfficientEthernet=0
reboot

dmesg | grep igc
Code:
root@nuc:~# dmesg | grep igc
[ 1.133461] igc: unknown parameter 'InterruptThrottleRate' ignored
[ 1.133464] igc: unknown parameter 'EnergyEfficientEthernet' ignored
[ 1.133899] igc 0000:57:00.0: enabling device (0000 -> 0002)
[ 1.134362] igc 0000:57:00.0: PTM enabled, 4ns granularity
[ 1.178783] igc 0000:57:00.0 (unnamed net_device) (uninitialized): PHC added
[ 1.208411] igc 0000:57:00.0: 4.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x1 link)
[ 1.208414] igc 0000:57:00.0 eth0: MAC: 88:ae:dd:69:32:ff
[ 1.210511] igc 0000:57:00.0 enp87s0: renamed from eth0
[ 2.428837] igc 0000:57:00.0 nic0: renamed from enp87s0
[ 3.778169] igc 0000:57:00.0 nic0: entered allmulticast mode
[ 3.778219] igc 0000:57:00.0 nic0: entered promiscuous mode
[ 7.230321] igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
root@nuc:~#


Keep trying. Let's see which parameters igc actually supports:

modinfo igc
parm: debug:Debug level (0=none,...,16=all) (int)

So the igc driver supports only a single module parameter: debug.

Setting parameters in `/etc/modprobe.d` doesn't work; igc simply doesn't recognize them.
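
In general, the supported parameters can be checked without rebooting (a sketch):

Code:
# Parameters the igc module actually exposes
modinfo -p igc

# Parameters of the currently loaded module
ls /sys/module/igc/parameters/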

What other reasons could cause the network not to recover automatically after unplugging and replugging the network cable?
 
Trying again to disable EEE, this time via ethtool.

Code:
# Disable EEE
ethtool --set-eee nic0 eee off

# Verify
ethtool --show-eee nic0

# Expected output:
# EEE status: disabled

Awaiting verification; hoping to share good news soon.
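
Note that ethtool settings do not persist across reboots. One way to reapply the setting automatically on PVE (a sketch, assuming nic0 is declared in /etc/network/interfaces; the ethtool path may differ) is a post-up hook:

Code:
# /etc/network/interfaces (excerpt, hypothetical)
iface nic0 inet manual
        post-up /usr/sbin/ethtool --set-eee nic0 eee off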

I have a question: Is this the only solution? What is the root cause of this problem? Is it an Intel network card issue? A Linux kernel issue? Or a PVE driver issue?
 
journalctl -lu pveproxy
Code:
Dec 15 00:20:08 nuc systemd[1]: Reloading pveproxy.service - PVE API Proxy Server...
Dec 15 00:20:08 nuc pveproxy[57533]: send HUP to 1186
Dec 15 00:20:08 nuc pveproxy[1186]: received signal HUP
Dec 15 00:20:08 nuc pveproxy[1186]: server closing
Dec 15 00:20:08 nuc pveproxy[1186]: server shutdown (restart)
Dec 15 00:20:08 nuc systemd[1]: Reloaded pveproxy.service - PVE API Proxy Server.
Dec 15 00:20:09 nuc pveproxy[1186]: restarting server
Dec 15 00:20:09 nuc pveproxy[1186]: starting 3 worker(s)
Dec 15 00:20:09 nuc pveproxy[1186]: worker 57547 started
Dec 15 00:20:09 nuc pveproxy[1186]: worker 57548 started
Dec 15 00:20:09 nuc pveproxy[1186]: worker 57549 started
Dec 15 00:20:14 nuc pveproxy[1189]: worker exit
Dec 15 00:20:14 nuc pveproxy[1188]: worker exit
Dec 15 00:20:14 nuc pveproxy[1187]: worker exit
Dec 15 00:20:14 nuc pveproxy[1186]: worker 1187 finished
Dec 15 00:20:14 nuc pveproxy[1186]: worker 1188 finished
Dec 15 00:20:14 nuc pveproxy[1186]: worker 1189 finished
Dec 15 10:14:00 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 10:14:01 nuc pveproxy[1186]: received signal TERM
Dec 15 10:14:01 nuc pveproxy[1186]: server closing
Dec 15 10:14:01 nuc pveproxy[57547]: worker exit
Dec 15 10:14:01 nuc pveproxy[57549]: worker exit
Dec 15 10:14:01 nuc pveproxy[57548]: worker exit
Dec 15 10:14:01 nuc pveproxy[1186]: worker 57549 finished
Dec 15 10:14:01 nuc pveproxy[1186]: worker 57548 finished
Dec 15 10:14:01 nuc pveproxy[1186]: worker 57547 finished
Dec 15 10:14:01 nuc pveproxy[1186]: server stopped
Dec 15 10:14:02 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 10:14:02 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 10:14:02 nuc systemd[1]: pveproxy.service: Consumed 16.990s CPU time, 412.7M memory peak.
-- Boot 22c7f8a6b8f9447a95b3e0414dc206d3 --
Dec 15 10:14:43 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 10:14:44 nuc pveproxy[1202]: starting server
Dec 15 10:14:44 nuc pveproxy[1202]: starting 3 worker(s)
Dec 15 10:14:44 nuc pveproxy[1202]: worker 1203 started
Dec 15 10:14:44 nuc pveproxy[1202]: worker 1204 started
Dec 15 10:14:44 nuc pveproxy[1202]: worker 1205 started
Dec 15 10:14:44 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 13:15:24 nuc pveproxy[1204]: worker exit
Dec 15 13:15:24 nuc pveproxy[1202]: worker 1204 finished
Dec 15 13:15:24 nuc pveproxy[1202]: starting 1 worker(s)
Dec 15 13:15:24 nuc pveproxy[1202]: worker 46599 started
Dec 15 13:22:31 nuc pveproxy[1203]: worker exit
Dec 15 13:22:31 nuc pveproxy[1202]: worker 1203 finished
Dec 15 13:22:31 nuc pveproxy[1202]: starting 1 worker(s)
Dec 15 13:22:31 nuc pveproxy[1202]: worker 48363 started
Dec 15 13:29:29 nuc pveproxy[1205]: worker exit
Dec 15 13:29:29 nuc pveproxy[1202]: worker 1205 finished
Dec 15 13:29:29 nuc pveproxy[1202]: starting 1 worker(s)
Dec 15 13:29:29 nuc pveproxy[1202]: worker 50103 started
Dec 15 13:46:47 nuc pveproxy[48363]: worker exit
Dec 15 13:46:47 nuc pveproxy[1202]: worker 48363 finished
Dec 15 13:46:47 nuc pveproxy[1202]: starting 1 worker(s)
Dec 15 13:46:47 nuc pveproxy[1202]: worker 54414 started
Dec 15 13:59:50 nuc pveproxy[46599]: worker exit
Dec 15 13:59:50 nuc pveproxy[1202]: worker 46599 finished
Dec 15 13:59:50 nuc pveproxy[1202]: starting 1 worker(s)
Dec 15 13:59:50 nuc pveproxy[1202]: worker 57649 started
Dec 15 17:56:46 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 17:56:47 nuc pveproxy[1202]: received signal TERM
Dec 15 17:56:47 nuc pveproxy[1202]: server closing
Dec 15 17:56:47 nuc pveproxy[54414]: worker exit
Dec 15 17:56:47 nuc pveproxy[50103]: worker exit
Dec 15 17:56:47 nuc pveproxy[57649]: worker exit
Dec 15 17:56:47 nuc pveproxy[1202]: worker 57649 finished
Dec 15 17:56:47 nuc pveproxy[1202]: worker 50103 finished
Dec 15 17:56:47 nuc pveproxy[1202]: worker 54414 finished
Dec 15 17:56:47 nuc pveproxy[1202]: server stopped
Dec 15 17:56:48 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 17:56:48 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 17:56:48 nuc systemd[1]: pveproxy.service: Consumed 21.842s CPU time, 413.7M memory peak.
-- Boot 0158612c608246d689b64cdce62e7527 --
Dec 15 17:58:15 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 17:58:15 nuc pveproxy[1205]: starting server
Dec 15 17:58:15 nuc pveproxy[1205]: starting 3 worker(s)
Dec 15 17:58:15 nuc pveproxy[1205]: worker 1206 started
Dec 15 17:58:15 nuc pveproxy[1205]: worker 1207 started
Dec 15 17:58:15 nuc pveproxy[1205]: worker 1208 started
Dec 15 17:58:15 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 18:40:36 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 18:40:36 nuc pveproxy[1205]: received signal TERM
Dec 15 18:40:36 nuc pveproxy[1205]: server closing
Dec 15 18:40:36 nuc pveproxy[1205]: worker 1207 finished
Dec 15 18:40:36 nuc pveproxy[1205]: worker 1206 finished
Dec 15 18:40:36 nuc pveproxy[1205]: worker 1208 finished
Dec 15 18:40:36 nuc pveproxy[1205]: server stopped
Dec 15 18:40:37 nuc pveproxy[13047]: worker exit
Dec 15 18:40:37 nuc pveproxy[13045]: worker exit
Dec 15 18:40:37 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 18:40:37 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 18:40:37 nuc systemd[1]: pveproxy.service: Consumed 6.693s CPU time, 441.5M memory peak.
-- Boot 02ca851284484a63aa7335274d63f5d2 --
Dec 15 18:41:03 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 18:41:04 nuc pveproxy[1193]: starting server
Dec 15 18:41:04 nuc pveproxy[1193]: starting 3 worker(s)
Dec 15 18:41:04 nuc pveproxy[1193]: worker 1194 started
Dec 15 18:41:04 nuc pveproxy[1193]: worker 1195 started
Dec 15 18:41:04 nuc pveproxy[1193]: worker 1196 started
Dec 15 18:41:04 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 18:45:36 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 18:45:36 nuc pveproxy[1193]: received signal TERM
Dec 15 18:45:36 nuc pveproxy[1193]: server closing
Dec 15 18:45:36 nuc pveproxy[1194]: worker exit
Dec 15 18:45:36 nuc pveproxy[1195]: worker exit
Dec 15 18:45:36 nuc pveproxy[1196]: worker exit
Dec 15 18:45:36 nuc pveproxy[1193]: worker 1195 finished
Dec 15 18:45:36 nuc pveproxy[1193]: worker 1194 finished
Dec 15 18:45:36 nuc pveproxy[1193]: worker 1196 finished
Dec 15 18:45:36 nuc pveproxy[1193]: server stopped
Dec 15 18:45:37 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 18:45:37 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 18:45:37 nuc systemd[1]: pveproxy.service: Consumed 2.653s CPU time, 399.8M memory peak.
-- Boot aa3a21f536fd4edab35887816c5078ef --
Dec 15 18:46:09 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 18:46:10 nuc pveproxy[1189]: starting server
Dec 15 18:46:10 nuc pveproxy[1189]: starting 3 worker(s)
Dec 15 18:46:10 nuc pveproxy[1189]: worker 1190 started
Dec 15 18:46:10 nuc pveproxy[1189]: worker 1191 started
Dec 15 18:46:10 nuc pveproxy[1189]: worker 1192 started
Dec 15 18:46:10 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 18:51:22 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 18:51:22 nuc pveproxy[1189]: received signal TERM
Dec 15 18:51:22 nuc pveproxy[1189]: server closing
Dec 15 18:51:22 nuc pveproxy[1190]: worker exit
Dec 15 18:51:22 nuc pveproxy[1191]: worker exit
Dec 15 18:51:22 nuc pveproxy[1189]: worker 1190 finished
Dec 15 18:51:22 nuc pveproxy[1189]: worker 1192 finished
Dec 15 18:51:22 nuc pveproxy[1189]: worker 1191 finished
Dec 15 18:51:22 nuc pveproxy[1189]: server stopped
Dec 15 18:51:23 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 18:51:23 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 18:51:23 nuc systemd[1]: pveproxy.service: Consumed 2.736s CPU time, 417.4M memory peak.
-- Boot 885a48cf575d4b7596b7e54ae431ba83 --
Dec 15 18:54:18 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 18:54:19 nuc pveproxy[1187]: starting server
Dec 15 18:54:19 nuc pveproxy[1187]: starting 3 worker(s)
Dec 15 18:54:19 nuc pveproxy[1187]: worker 1188 started
Dec 15 18:54:19 nuc pveproxy[1187]: worker 1189 started
Dec 15 18:54:19 nuc pveproxy[1187]: worker 1190 started
Dec 15 18:54:19 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 19:37:59 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 19:37:59 nuc pveproxy[1187]: received signal TERM
Dec 15 19:37:59 nuc pveproxy[1187]: server closing
Dec 15 19:37:59 nuc pveproxy[1189]: worker exit
Dec 15 19:37:59 nuc pveproxy[1188]: worker exit
Dec 15 19:37:59 nuc pveproxy[1187]: worker 1188 finished
Dec 15 19:37:59 nuc pveproxy[1187]: worker 1189 finished
Dec 15 19:37:59 nuc pveproxy[1187]: worker 1190 finished
Dec 15 19:37:59 nuc pveproxy[1187]: server stopped
Dec 15 19:38:00 nuc pveproxy[12794]: worker exit
Dec 15 19:38:00 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 19:38:00 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 19:38:00 nuc systemd[1]: pveproxy.service: Consumed 4.156s CPU time, 416.5M memory peak.
-- Boot b01655eb25064bf4a629172524d6528f --
Dec 15 19:43:06 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 19:43:07 nuc pveproxy[1210]: starting server
Dec 15 19:43:07 nuc pveproxy[1210]: starting 3 worker(s)
Dec 15 19:43:07 nuc pveproxy[1210]: worker 1211 started
Dec 15 19:43:07 nuc pveproxy[1210]: worker 1212 started
Dec 15 19:43:07 nuc pveproxy[1210]: worker 1213 started
Dec 15 19:43:07 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 19:43:42 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 19:43:42 nuc pveproxy[1210]: received signal TERM
Dec 15 19:43:42 nuc pveproxy[1210]: server closing
Dec 15 19:43:42 nuc pveproxy[1212]: worker exit
Dec 15 19:43:42 nuc pveproxy[1213]: worker exit
Dec 15 19:43:42 nuc pveproxy[1210]: worker 1212 finished
Dec 15 19:43:42 nuc pveproxy[1210]: worker 1213 finished
Dec 15 19:43:42 nuc pveproxy[1210]: worker 1211 finished
Dec 15 19:43:42 nuc pveproxy[1210]: server stopped
Dec 15 19:43:43 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 19:43:43 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 19:43:43 nuc systemd[1]: pveproxy.service: Consumed 1.273s CPU time, 368.2M memory peak.
-- Boot 97568cbf7f26416082cbb4f0ce921a50 --
Dec 15 19:46:00 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 19:46:01 nuc pveproxy[1197]: starting server
Dec 15 19:46:01 nuc pveproxy[1197]: starting 3 worker(s)
Dec 15 19:46:01 nuc pveproxy[1197]: worker 1198 started
Dec 15 19:46:01 nuc pveproxy[1197]: worker 1199 started
Dec 15 19:46:01 nuc pveproxy[1197]: worker 1200 started
Dec 15 19:46:01 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 19:50:01 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 19:50:01 nuc pveproxy[1197]: received signal TERM
Dec 15 19:50:01 nuc pveproxy[1197]: server closing
Dec 15 19:50:01 nuc pveproxy[1198]: worker exit
Dec 15 19:50:01 nuc pveproxy[1199]: worker exit
Dec 15 19:50:01 nuc pveproxy[1200]: worker exit
Dec 15 19:50:01 nuc pveproxy[1197]: worker 1198 finished
Dec 15 19:50:01 nuc pveproxy[1197]: worker 1199 finished
Dec 15 19:50:01 nuc pveproxy[1197]: worker 1200 finished
Dec 15 19:50:01 nuc pveproxy[1197]: server stopped
Dec 15 19:50:02 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 19:50:02 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 19:50:02 nuc systemd[1]: pveproxy.service: Consumed 1.744s CPU time, 391.9M memory peak.
-- Boot cbc81a75879541a9acb513784875544d --
Dec 15 19:51:07 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 19:51:08 nuc pveproxy[1199]: starting server
Dec 15 19:51:08 nuc pveproxy[1199]: starting 3 worker(s)
Dec 15 19:51:08 nuc pveproxy[1199]: worker 1200 started
Dec 15 19:51:08 nuc pveproxy[1199]: worker 1201 started
Dec 15 19:51:08 nuc pveproxy[1199]: worker 1202 started
Dec 15 19:51:08 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 19:53:48 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 19:53:49 nuc pveproxy[1199]: received signal TERM
Dec 15 19:53:49 nuc pveproxy[1199]: server closing
Dec 15 19:53:49 nuc pveproxy[1202]: worker exit
Dec 15 19:53:49 nuc pveproxy[1200]: worker exit
Dec 15 19:53:49 nuc pveproxy[1201]: worker exit
Dec 15 19:53:49 nuc pveproxy[1199]: worker 1202 finished
Dec 15 19:53:49 nuc pveproxy[1199]: worker 1200 finished
Dec 15 19:53:49 nuc pveproxy[1199]: worker 1201 finished
Dec 15 19:53:49 nuc pveproxy[1199]: server stopped
Dec 15 19:53:50 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 19:53:50 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 19:53:50 nuc systemd[1]: pveproxy.service: Consumed 1.854s CPU time, 400.4M memory peak.
-- Boot 22c9b1f51622432d8288d4d45dd0524a --
Dec 15 19:54:31 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 19:54:32 nuc pveproxy[1193]: starting server
Dec 15 19:54:32 nuc pveproxy[1193]: starting 3 worker(s)
Dec 15 19:54:32 nuc pveproxy[1193]: worker 1194 started
Dec 15 19:54:32 nuc pveproxy[1193]: worker 1195 started
Dec 15 19:54:32 nuc pveproxy[1193]: worker 1196 started
Dec 15 19:54:32 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 19:58:24 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 19:58:25 nuc pveproxy[1193]: received signal TERM
Dec 15 19:58:25 nuc pveproxy[1193]: server closing
Dec 15 19:58:25 nuc pveproxy[1194]: worker exit
Dec 15 19:58:25 nuc pveproxy[1196]: worker exit
Dec 15 19:58:25 nuc pveproxy[1193]: worker 1194 finished
Dec 15 19:58:25 nuc pveproxy[1193]: worker 1196 finished
Dec 15 19:58:25 nuc pveproxy[1193]: worker 1195 finished
Dec 15 19:58:25 nuc pveproxy[1193]: server stopped
Dec 15 19:58:26 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 19:58:26 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 19:58:26 nuc systemd[1]: pveproxy.service: Consumed 2.310s CPU time, 421.1M memory peak.
-- Boot 77ea13fd0ca445809f042780d2f203fd --
Dec 15 20:01:55 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 20:01:56 nuc pveproxy[1196]: starting server
Dec 15 20:01:56 nuc pveproxy[1196]: starting 3 worker(s)
Dec 15 20:01:56 nuc pveproxy[1196]: worker 1197 started
Dec 15 20:01:56 nuc pveproxy[1196]: worker 1198 started
Dec 15 20:01:56 nuc pveproxy[1196]: worker 1199 started
Dec 15 20:01:56 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Dec 15 20:14:26 nuc pveproxy[1197]: proxy detected vanished client connection
Dec 15 20:14:58 nuc systemd[1]: Stopping pveproxy.service - PVE API Proxy Server...
Dec 15 20:14:59 nuc pveproxy[1196]: received signal TERM
Dec 15 20:14:59 nuc pveproxy[1196]: server closing
Dec 15 20:14:59 nuc pveproxy[1199]: worker exit
Dec 15 20:14:59 nuc pveproxy[1196]: worker 1199 finished
Dec 15 20:14:59 nuc pveproxy[1196]: worker 1198 finished
Dec 15 20:14:59 nuc pveproxy[1196]: worker 1197 finished
Dec 15 20:14:59 nuc pveproxy[1196]: server stopped
Dec 15 20:15:00 nuc systemd[1]: pveproxy.service: Deactivated successfully.
Dec 15 20:15:00 nuc systemd[1]: Stopped pveproxy.service - PVE API Proxy Server.
Dec 15 20:15:00 nuc systemd[1]: pveproxy.service: Consumed 6.068s CPU time, 440.6M memory peak.
-- Boot 0eba5c0423594644afe8c5ddf2609f6f --
Dec 15 20:16:16 nuc systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Dec 15 20:16:17 nuc pveproxy[1203]: starting server
Dec 15 20:16:17 nuc pveproxy[1203]: starting 3 worker(s)
Dec 15 20:16:17 nuc pveproxy[1203]: worker 1204 started
Dec 15 20:16:17 nuc pveproxy[1203]: worker 1205 started
Dec 15 20:16:17 nuc pveproxy[1203]: worker 1206 started
Dec 15 20:16:17 nuc systemd[1]: Started pveproxy.service - PVE API Proxy Server.
 
I performed the test again, disconnecting and then reconnecting the network cable.

The connection was completely lost again.

curl -vk https://localhost:8006 is accessible.

systemctl status pveproxy shows the service running.

ethtool --show-eee nic0 reports: EEE status: disabled.

ip a shows all interfaces up.

dmesg | grep igc: see the attached screenshot.

[Attachment: IMG_8230.webp]
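
Since the interfaces report up while traffic is dead, the bridge and ARP layers are worth checking the next time it happens (a diagnostic sketch):

Code:
# Is the bridge still learning MAC addresses on the uplink port?
bridge fdb show br vmbr0

# Can the gateway still be resolved at layer 2?
ip neigh show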
 
Code:
root@nuc:~# journalctl -k -b -1 | grep -Ei "i226|nic|link|reset|timeout"
Dec 18 07:24:01 nuc kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 18 07:24:01 nuc kernel: audit: initializing netlink subsys (disabled)
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 18 07:24:01 nuc kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 18 07:24:01 nuc kernel: simple-framebuffer simple-framebuffer.0: [drm] Registered 1 planes with drm panic
Dec 18 07:24:01 nuc kernel: igc 0000:57:00.0: 4.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x1 link)
Dec 18 07:24:01 nuc kernel: igc 0000:57:00.0 nic0: renamed from enp87s0
Dec 18 07:24:01 nuc kernel: i915 0000:00:02.0: [drm] Registered 4 planes with drm panic
Dec 18 07:24:02 nuc kernel: softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
Dec 18 07:24:02 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 18 07:24:02 nuc kernel: vmbr0: port 1(nic0) entered disabled state
Dec 18 07:24:02 nuc kernel: igc 0000:57:00.0 nic0: entered allmulticast mode
Dec 18 07:24:02 nuc kernel: igc 0000:57:00.0 nic0: entered promiscuous mode
Dec 18 07:24:06 nuc kernel: igc 0000:57:00.0 nic0: NIC Link is Up 2500 Mbps Full Duplex, Flow Control: RX
Dec 18 07:24:06 nuc kernel: vmbr0: port 1(nic0) entered blocking state
Dec 18 07:24:06 nuc kernel: vmbr0: port 1(nic0) entered forwarding state
Dec 18 07:24:50 nuc kernel: igc 0000:57:00.0 nic0: PCIe link lost, device now detached
Dec 18 07:24:50 nuc kernel: Modules linked in: tcp_diag inet_diag veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter nf_tables bonding tls softdog sunrpc binfmt_misc nfnetlink_log xe gpu_sched drm_gpuvm drm_gpusvm_helper drm_ttm_helper snd_hda_codec_intelhdmi drm_exec drm_suballoc_helper snd_hda_intel snd_sof_pci_intel_mtl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi soundwire_cadence snd_sof_pci snd_sof_xtensa_dsp sch_fq_codel snd_sof intel_uncore_frequency intel_uncore_frequency_common snd_sof_utils snd_hda_ext_core x86_pkg_temp_thermal intel_powerclamp snd_hda_codec input_leds snd_hda_core coretemp snd_intel_dspcfg snd_intel_sdw_acpi snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation snd_soc_acpi kvm_intel snd_hwdep i915 soundwire_bus snd_soc_sdca mei_gsc_proxy intel_rapl_msr hid_apple snd_soc_core kvm
Dec 18 07:24:50 nuc kernel:  processor_thermal_device_pci snd_compress processor_thermal_device ac97_bus processor_thermal_wt_hint drm_buddy platform_temperature_control snd_pcm_dmaengine processor_thermal_soc_slider ttm processor_thermal_rfim snd_pcm irqbypass drm_display_helper intel_pmc_core processor_thermal_rapl snd_timer usbkbd polyval_clmulni int3403_thermal intel_rapl_common ghash_clmulni_intel pmt_telemetry cec aesni_intel snd pmt_discovery processor_thermal_wt_req mei_me rapl pmt_class processor_thermal_power_floor rc_core usbhid int3400_thermal processor_thermal_mbox soundcore intel_pmc_ssram_telemetry intel_cstate pcspkr asus_nb_wmi wmi_bmof crc8 intel_vpu mei i2c_algo_bit int340x_thermal_zone igen6_edac intel_vsec intel_hid acpi_tad acpi_thermal_rel acpi_pad mac_hid apple_mfi_fastcharge zfs(PO) spl(O) msr vhost_net vhost vhost_iotlb tap efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq hid_sensor_custom hid_sensor_hub hid_generic intel_ishtp_hid hid ucsi_acpi typec_ucsi typec
Dec 18 07:30:54 nuc systemd-shutdown[1]: Watchdog running with a hardware timeout of 10min.
root@nuc:~#

The key information is really just one line:

igc 0000:57:00.0 nic0: PCIe link lost, device now detached

This isn't just a regular link down/up event; the PCIe link has completely disappeared, and the kernel is treating the network card as a "disconnected PCIe device."
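
When a device detaches like this, it can sometimes be recovered without a full reboot by removing it from the PCI tree and rescanning (a sketch; 0000:57:00.0 is the NIC's address from the logs above, and this only helps if the PCIe link can retrain):

Code:
# Remove the detached device from the PCI tree
echo 1 > /sys/bus/pci/devices/0000:57:00.0/remove

# Rescan the bus so the kernel re-probes it
echo 1 > /sys/bus/pci/rescan

# Re-apply the network configuration (ifupdown2)
ifreload -a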
 
The problem of the PVE server completely losing connectivity after unplugging and replugging the network cable remains unresolved, and the cause is still unknown.
Therefore, I connected a USB 2.5G network card (Realtek r8152) to the NUC15 and bridged vmbr0 to the r8152. However, when I unplugged and replugged the network cable again, the PVE server still lost connectivity.

This strongly suggests that the problem is not specific to the Intel i226V network card.
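
If the failure follows the bridge rather than the NIC, the vmbr0 configuration itself is the next suspect. Data worth capturing during the failure (a sketch; interface names are assumptions based on this thread):

Code:
# Is anything arriving on the physical port at all?
tcpdump -eni nic0

# Does vmbr0 still hold its IP address and default route?
ip addr show vmbr0
ip route show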

Are there any other troubleshooting approaches I can try?
 