ping results in DUP! messages

gijsbert

Active Member
Oct 13, 2008
We are having an issue where pinging a KVM VM results in DUP! replies. We are investigating but have not found a solution yet. The situation is as follows:

1) ping from server1 to proxmox-node (no dup messages)
2) ping from server1 to VM on proxmox-node (DUP! messages)

To make it a little more complex, the DUP! messages only occur when server1 is connected to the same switch. If server1 is, for example, my local laptop outside the datacenter and not connected to the same switch, no DUP! messages occur.

I contacted a network engineer and he describes the problem as follows: "It looks like the core router sometimes sends an ICMP redirect because all subnets are configured in one VLAN. The router 'sees' that all addresses are in the same VLAN and sends out an ICMP redirect: 'you don't need to send to me, you can send directly'." Why that results in a DUP! message is still unclear; the network engineer believes it has something to do with the Proxmox software.
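If the redirect theory is right, it should be visible on the hosts involved. A minimal sketch of how one might check (these are the standard Linux sysctls; run on the Proxmox node and/or inside the VM, and adjust interface names to your setup):

===
# show whether ICMP redirects are currently accepted/sent
sysctl net.ipv4.conf.all.accept_redirects net.ipv4.conf.all.send_redirects

# temporarily ignore redirects while testing (not persistent across reboots)
sysctl -w net.ipv4.conf.all.accept_redirects=0
sysctl -w net.ipv4.conf.default.accept_redirects=0
===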

We don't use bonding, only vmbr0 on top of eth0. Our interfaces file is pretty straightforward:

===
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
address 62.197.158.95
netmask 255.255.255.0
gateway 62.197.158.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

iface vmbr0 inet6 static
address 2a03:8b80:e:aa::10
netmask 48
gateway 2a03:8b80:e::1

auto vmbr0.102
iface vmbr0.102 inet static
address 172.17.2.50
netmask 255.255.0.0
broadcast 172.17.255.255
network 172.17.0.0
vlan_raw_device vmbr0

iface vmbr0.102 inet6 static
address fd00:517e:b17e::250
netmask 64
===
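For reference, a quick way to double-check on the node that the bridge and the VLAN sub-interface are wired the way the file says (just a sketch; brctl comes from the bridge-utils package):

===
brctl show vmbr0            # eth0 plus the VMs' tap devices should be listed as ports
ip -d link show vmbr0.102   # should report "vlan protocol 802.1Q id 102" on top of vmbr0
===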

Has anyone seen the same behaviour? And if so, how did you fix it? :)

Thanks in advance for any reply,

Gijsbert
 
Can you give an example (when getting DUP!) as follows:

Laptop IP x.x.x.x pings to VM with IP y.y.y.y

Post also the VM configuration file (/etc/pve/qemu-server/<vm-id>.conf) as well as the VM's internal IP configuration.
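For reference, the same information can be read on the node and in the guest like this (sketch):

===
qm config <vm-id>     # same content as /etc/pve/qemu-server/<vm-id>.conf
# inside the VM:
ip addr && ip route
===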
 
OK, I have done a ping with a count of 250:

1) ping from 62.197.128.91 --> switch --> 62.197.128.109 (proxmox-server)

250 packets transmitted, 250 received, 0% packet loss, time 254983ms
rtt min/avg/max/mdev = 0.086/0.203/0.317/0.046 ms

==> Everything seems to be OK

2) ping from 62.197.128.91 --> switch --> 62.197.128.109 (proxmox-server) --> 62.197.130.93 (vm)

250 packets transmitted, 250 received, +17 duplicates, 0% packet loss, time 254443ms
rtt min/avg/max/mdev = 0.163/0.617/12.066/1.369 ms

==> 17 DUPs
==> The average ping time is also 3 TIMES SLOWER. The VM is based on CentOS 6 using VirtIO


ifconfig of VM:
===========
eth0 Link encap:Ethernet HWaddr 12:37:0D:54:F5:FD
inet addr:62.197.130.93 Bcast:62.197.130.255 Mask:255.255.255.0
inet6 addr: fe80::1037:dff:fe54:f5fd/64 Scope:Link
inet6 addr: 2a03:8b80:a:1008::1/48 Scope:Global
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:41449671 errors:0 dropped:0 overruns:0 frame:0
TX packets:1265462 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2566514039 (2.3 GiB) TX bytes:2097989947 (1.9 GiB)

eth0.102 Link encap:Ethernet HWaddr 12:37:0D:54:F5:FD
inet addr:172.17.0.5 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::1037:dff:fe54:f5fd/64 Scope:Link
inet6 addr: fd00:517e:b17e::5/64 Scope:Global
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:19238 errors:0 dropped:0 overruns:0 frame:0
TX packets:18010 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3844102 (3.6 MiB) TX bytes:41733032 (39.7 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:48468 errors:0 dropped:0 overruns:0 frame:0
TX packets:48468 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:15269767 (14.5 MiB) TX bytes:15269767 (14.5 MiB)

configuration of VM
===============
boot: cn
bootdisk: virtio0
cores: 2
memory: 4096
name: server1.stinger.com
net0: virtio=12:37:0D:54:F5:FD,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
smbios1: uuid=d319a1c2-7e2e-43ac-8247-21b8e78ad3f4
sockets: 1
virtio0: local:146/vm-146-disk-1.qcow2,format=qcow2,size=75G

Gijsbert
 
1) ping from 62.197.128.91 --> switch --> 62.197.128.109 (proxmox-server)


In your first post the server's IP address was 62.197.158.95

2) ping from 62.197.128.91 --> switch --> 62.197.128.109 (proxmox-server) --> 62.197.130.93 (vm)

They are in (logically) different subnets but physically on the same network?

ifconfig of VM:
===========
eth0 Link encap:Ethernet HWaddr 12:37:0D:54:F5:FD
inet addr:62.197.130.93 Bcast:62.197.130.255 Mask:255.255.255.0
inet6 addr: fe80::1037:dff:fe54:f5fd/64 Scope:Link
inet6 addr: 2a03:8b80:a:1008::1/48 Scope:Global
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:41449671 errors:0 dropped:0 overruns:0 frame:0
TX packets:1265462 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2566514039 (2.3 GiB) TX bytes:2097989947 (1.9 GiB)

Is the netmask set correctly? /24 and not possibly /22 ?
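The reason the /22 question matters: with a /22 the laptop and the VM would already be in the same subnet, so the reply path would not involve the router at all. A quick way to check (just an illustration using Python's ipaddress module):

===
python3 -c "import ipaddress as ip
n = ip.ip_network('62.197.128.0/22')
print(ip.ip_address('62.197.128.91') in n)   # True
print(ip.ip_address('62.197.130.93') in n)   # True"
===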
 
Hello Richard,

Thanks for looking into this issue.

In my first post I did not use the real IPs. In my last post I did use the real IPs.
They are in (logically) different subnets but physically on the same network, correct!
The netmask /24 should be correct as far as I know.
 
If the problem persists, follow the packets using tcpdump or Wireshark.
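In case it helps, a sketch of what that could look like on the Proxmox node (Proxmox names a VM's tap device tap<vmid>i<netX>, so for VM 146 / net0 it would be tap146i0; adjust IPs and names to your setup):

===
# watch the ICMP traffic on the bridge and on the VM's tap device;
# -e prints the source/destination MACs, which shows whether the duplicate
# reply arrives from a second MAC (e.g. re-injected by the router/switch)
tcpdump -eni vmbr0 'icmp and host 62.197.130.93'
tcpdump -eni tap146i0 'icmp and host 62.197.130.93'
===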
 
