multicast problem after upgrade from 1.8 to 1.9

Can you reproduce the problem with ssmping?

http://pve.proxmox.com/wiki/Multicast_notes

I think not.

The error only shows up when we use the Oracle tool.
As I was saying in a previous post, replaying the packets sent by the application with tcpreplay doesn't reproduce the problem....
On our Proxmox cluster we have clustered VMs that rely on corosync, which is based on multicast transmissions, and that has all worked fine.
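To separate a general multicast breakage from something specific to the application's traffic, a tiny stand-alone sender/receiver can be run on the same group. This is only a sketch, not part of the thread: the group 228.5.6.7 matches the asmping test, the port 5678 and the make_membership_request helper are my own examples.

```python
import socket


def make_membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Build the 8-byte ip_mreq structure expected by IP_ADD_MEMBERSHIP."""
    return socket.inet_aton(group) + socket.inet_aton(iface)


if __name__ == "__main__":
    GROUP, PORT = "228.5.6.7", 5678  # example values

    # Receiver: bind the port and join the ASM group on the default interface.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                  make_membership_request(GROUP))
    rx.settimeout(2.0)

    # Sender: enable loopback so the same host sees its own datagram.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    try:
        tx.sendto(b"mcast-selftest", (GROUP, PORT))
        data, addr = rx.recvfrom(1024)
        print("received %r from %s" % (data, addr[0]))
    except (socket.timeout, OSError) as exc:
        print("multicast self-test failed:", exc)
```

On a single host this only verifies local loopback; to check inter-node delivery, run the receiver half on one node and the sender half on another.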
 
Would you mind testing? Maybe it triggers the bug.

Yep, I will test it ;)
I'm currently installing the tool in the OpenVZ containers.

Do you want me to send you my "application archive" so you can test it on your platform?
Perhaps it's a local problem and I don't want to waste your time.
 
All seems OK with ssmping.
Here is the result on the client side:

[root@test1 ssmping-0.9.1]# ./asmping -4v 228.5.6.7 192.93.172.210
asmping joined (S,G) = (*,228.5.6.234)
pinging 192.93.172.210 from 192.93.172.209
Server version: 0.9.1 (20080418) [asm][size]
unicast from 192.93.172.210, seq=1 dist=0 time=1.075 ms
multicast from 192.93.172.210, seq=1 dist=0 time=1.104 ms
unicast from 192.93.172.210, seq=2 dist=0 time=0.086 ms
multicast from 192.93.172.210, seq=2 dist=0 time=0.106 ms
unicast from 192.93.172.210, seq=3 dist=0 time=0.095 ms
multicast from 192.93.172.210, seq=3 dist=0 time=0.122 ms
unicast from 192.93.172.210, seq=4 dist=0 time=0.067 ms
multicast from 192.93.172.210, seq=4 dist=0 time=0.093 ms
unicast from 192.93.172.210, seq=5 dist=0 time=0.067 ms
multicast from 192.93.172.210, seq=5 dist=0 time=0.090 ms
unicast from 192.93.172.210, seq=6 dist=0 time=0.060 ms
multicast from 192.93.172.210, seq=6 dist=0 time=0.079 ms
unicast from 192.93.172.210, seq=7 dist=0 time=0.062 ms
multicast from 192.93.172.210, seq=7 dist=0 time=0.082 ms
unicast from 192.93.172.210, seq=8 dist=0 time=0.074 ms
multicast from 192.93.172.210, seq=8 dist=0 time=0.094 ms
unicast from 192.93.172.210, seq=9 dist=0 time=0.060 ms
multicast from 192.93.172.210, seq=9 dist=0 time=0.078 ms
unicast from 192.93.172.210, seq=10 dist=0 time=0.062 ms
multicast from 192.93.172.210, seq=10 dist=0 time=0.085 ms
unicast from 192.93.172.210, seq=11 dist=0 time=0.064 ms
multicast from 192.93.172.210, seq=11 dist=0 time=0.087 ms
unicast from 192.93.172.210, seq=12 dist=0 time=0.064 ms
multicast from 192.93.172.210, seq=12 dist=0 time=0.087 ms
unicast from 192.93.172.210, seq=13 dist=0 time=0.090 ms
multicast from 192.93.172.210, seq=13 dist=0 time=0.113 ms
unicast from 192.93.172.210, seq=14 dist=0 time=0.063 ms
multicast from 192.93.172.210, seq=14 dist=0 time=0.086 ms
unicast from 192.93.172.210, seq=15 dist=0 time=0.063 ms
multicast from 192.93.172.210, seq=15 dist=0 time=0.086 ms
unicast from 192.93.172.210, seq=16 dist=0 time=0.063 ms
multicast from 192.93.172.210, seq=16 dist=0 time=0.087 ms
unicast from 192.93.172.210, seq=17 dist=0 time=0.064 ms
multicast from 192.93.172.210, seq=17 dist=0 time=0.087 ms
unicast from 192.93.172.210, seq=18 dist=0 time=0.063 ms
multicast from 192.93.172.210, seq=18 dist=0 time=0.086 ms
^C
--- 192.93.172.210 statistics ---
18 packets transmitted, time 17272 ms
unicast:
18 packets received, 0% packet loss
rtt min/avg/max/std-dev = 0.060/0.124/1.075/0.231 ms
multicast:
18 packets received, 0% packet loss since first mc packet (seq 1) recvd
rtt min/avg/max/std-dev = 0.078/0.147/1.104/0.232 ms
 
My idea was just to confirm that you encounter the same problem on your platform.
The application is Java-based, but I can't confirm that all the code is accessible.

My assumption was that this is likely a commercial application, so you should have support from that software vendor?
 
My assumption was that this is likely a commercial application, so you should have support from that software vendor?

Yes, we have support for this application.
The problem is that Oracle doesn't validate Proxmox as a supported virtualization platform......
It has just been running fine for 2 years now...
I'm thinking about sysctl value differences between the Squeeze kernel (pve2.4) and the RHEL kernel (pve2.6/7).
What do you think about that?
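One way to test that hypothesis would be to dump the sysctls on both kernels and diff the output. A minimal sketch, with the caveat that key names vary between kernel versions and the igmp/mc_ filter is only my assumption about which keys matter for multicast:

```python
import os


def read_sysctls(root="/proc/sys", subtree="net"):
    """Walk /proc/sys/<subtree> and return a {dotted.key: value} mapping."""
    values = {}
    for dirpath, _dirs, files in os.walk(os.path.join(root, subtree)):
        for name in files:
            path = os.path.join(dirpath, name)
            key = os.path.relpath(path, root).replace(os.sep, ".")
            try:
                with open(path) as fh:
                    values[key] = fh.read().strip()
            except OSError:
                pass  # skip write-only or permission-restricted entries
    return values


if __name__ == "__main__":
    # Run on each node, redirect to a file, then diff the two files.
    for key, value in sorted(read_sysctls().items()):
        if "igmp" in key or "mc_" in key:
            print(key, "=", value)
```

Running `sysctl -a | sort` on each node and diffing the results does the same job; the script just narrows the output to the keys of interest.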
 
I'm thinking about sysctl value differences between the Squeeze kernel (pve2.4) and the RHEL kernel (pve2.6/7).
What do you think about that?

Those kernels differ everywhere, so I assume that does not help. But I do not understand why it works when you replay the packets.

It would be interesting to see if it works on RHEL6 (I guess that is supported by that company?).

Note: the pve kernel is a modified RHEL6 kernel