DUP packets on ping

ntnll

New Member
Jul 8, 2024
Good morning,
I’ve been experiencing a port flapping issue ever since I installed a Ubiquiti switch in my network. The switch detects port flapping, and I get “DUP” pings (indicating duplicated packets) when I try to ping Proxmox servers in parallel. However, when one of the servers is turned off, the duplicate pings stop. I’ve tried everything: reinstalling both hypervisors, adding a third node to confirm it’s not a hardware issue, changing ports and cables, and even playing around with bridge STP settings and enabling/disabling storm control and loop protection on the switch. Nothing has worked, and I’ve found many people online with the same problem, but no solutions that have helped. It only happens on the Proxmox servers, even freshly installed ones. I’ve been working on this for weeks and I’m really out of ideas. Can anybody kindly help me?
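For reference, the duplicates look roughly like this when I ping one of the nodes (illustrative output typed from memory, not a real capture; 10.0.0.12 is proxmox2 from the config below):

Code:
user@client:~$ ping 10.0.0.12
PING 10.0.0.12 (10.0.0.12) 56(84) bytes of data.
64 bytes from 10.0.0.12: icmp_seq=1 ttl=64 time=0.41 ms
64 bytes from 10.0.0.12: icmp_seq=1 ttl=64 time=0.44 ms (DUP!)
64 bytes from 10.0.0.12: icmp_seq=2 ttl=64 time=0.39 ms
64 bytes from 10.0.0.12: icmp_seq=2 ttl=64 time=0.42 ms (DUP!)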



Code:
root@proxmox2:/etc/network# cat interfaces
auto lo
iface lo inet loopback

iface enp6s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.12/16
    gateway 10.0.0.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

iface enp1s0 inet manual

iface wlp7s0 inet manual

Code:
root@proxmox2:/etc/network# pveversion -v
proxmox-ve: 8.4.0 (running kernel: 6.8.12-9-pve)
pve-manager: 8.4.0 (running version: 8.4.0/ec58e45e1bcdf2ac)
proxmox-kernel-helper: 8.1.1
proxmox-kernel-6.8: 6.8.12-9
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph-fuse: 17.2.8-pve2
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
frr-pythontools: 10.2.1-1+pve2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.0
libpve-cluster-perl: 8.1.0
libpve-common-perl: 8.3.1
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.3.7-1
proxmox-backup-file-restore: 3.3.7-1
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.10
pve-cluster: 8.1.0
pve-container: 5.2.6
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.3
pve-firewall: 5.1.1
pve-firmware: 3.15-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.2
pve-qemu-kvm: 9.2.0-5
pve-xtermjs: 5.5.0-2
qemu-server: 8.3.12
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2

Code:
root@proxmox2:/etc/network# lspci|grep -i ether
01:00.0 Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)


Code:
root@proxmox2:/etc/network# ls /proc/net/bonding/
root@proxmox2:/etc/network#
 
Hi @ntnll,

To begin, it's important to note that PVE is not a standalone OS, but rather a set of packages installed on a base Linux distribution - specifically Debian, with an Ubuntu-derived kernel. At the networking layer, PVE uses the standard Linux TCP/IP stack with no modifications.

Your post implies that Debian or Ubuntu’s kernel networking stack, used across millions of systems globally, is fundamentally broken. That’s extremely unlikely.

You've mentioned trying different nodes, cables, and ports, but the only constants appear to be your switch and possibly your NICs. It’s worth noting that consumer-grade NICs (especially Realtek, Aquantia, etc.) can behave inconsistently compared to well-supported enterprise-grade NICs like Intel or Mellanox.

When surface-level causes seem to check out, the most effective path is often to reduce the setup to the bare minimum:
  • Replace the Ubiquiti switch with a dumb unmanaged gigabit switch
  • Use vanilla Debian or Ubuntu instead of PVE
  • Start with a single host and a simple static IP configuration (see the sketch below)
This can help isolate whether the issue is hardware, configuration, or environmental.
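
For the last point, a minimal /etc/network/interfaces on a plain Debian install could look something like this. It's just a sketch reusing the addressing from your PVE config; the interface name and netmask are assumptions and will likely differ on a fresh install:

Code:
# /etc/network/interfaces - vanilla Debian test host, no bridge involved
auto lo
iface lo inet loopback

# plain static IP directly on the NIC
auto enp1s0
iface enp1s0 inet static
    address 10.0.0.12
    netmask 255.255.0.0
    gateway 10.0.0.1

If the duplicates disappear in that configuration, you can reintroduce the bridge and the managed switch one step at a time.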


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi @bbgeek17, thanks for your answer.

I absolutely did not mean to imply that the distributions, the kernel, or Proxmox itself is at fault. I suspect the issue arises from the combination of multiple elements. What makes me a bit suspicious is that it only happens with Proxmox servers, but as you noticed, my hardware is not enterprise grade, so I would not be surprised if that turned out to be the issue.

The fact that this issue occurs only with Proxmox, regardless of the ports or cables tested, might suggest that it comes from the combination of Linux bridging and a managed-switch environment, or from not properly configuring / fine-tuning that bridging; at least that's what I've read in several threads. Anyway, to answer your suggestion, I also tried connecting an unmanaged switch to the environment and noticed the same behavior with ping.
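
For anyone debugging along: capturing with link-level headers should at least show which MAC address the duplicate replies come from. This is only a sketch using vmbr0 and 10.0.0.12 from my config above; both commands are standard iproute2/tcpdump tools, though tcpdump may need to be installed first (apt install tcpdump):

Code:
# capture ICMP on the bridge with link-level (MAC) headers, to see which MAC sends the extra reply
root@proxmox2:~# tcpdump -e -n -i vmbr0 icmp and host 10.0.0.12

# list the MAC addresses the bridge has learned, excluding its own permanent entries
root@proxmox2:~# bridge fdb show br vmbr0 | grep -v permanent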