[SOLVED] OVS multicast error in a new PVE 7.0-13 cluster install

Nezdeshniy

Hello!
I have a fresh cluster install with 5 nodes, all running the latest updates.

pve-manager/7.0-13/7aa7e488 (running kernel: 5.11.22-5-pve)

The setup is 5 IBM blades and a Cisco 3110G switch stack for LACP.
With a "classic" Linux bridge network config a new node can join. With an OVS config the node can't join and I get this error:

root@*****:~# pvecm add ##############
Please enter superuser (root) password for '##############': ********************************************
Establishing API connection with host '##############'
500 Can't connect to ##############:8006
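
The 500 here just means the joining node cannot open a TCP connection to port 8006 on the target. A quick way to check that directly (the hostname is a placeholder) would be something like:

# check plain TCP reachability of the Proxmox API port from the joining node
nc -vz <target-node> 8006
# or hit the API endpoint directly, ignoring the self-signed certificate
curl -k https://<target-node>:8006/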

I have enabled and disabled IGMP snooping on the Cisco stack and tried setting up an IGMP proxy or PIM on the upstream BG router - no luck.
I also tried "ovs-vsctl set Bridge vmbr**** mcast_snooping_enable=true"
and read https://pve.proxmox.com/wiki/Multicast_notes

Need help.
 
Not really sure what the cause could be, but with multicast you are following the wrong lead. As the note at the top of that article says, since Proxmox VE 6 corosync no longer uses multicast.
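
Corosync 3, which ships with Proxmox VE 6 and later, uses the kronosnet (knet) transport over unicast UDP, so IGMP snooping, IGMP proxies and PIM have no effect on cluster traffic. You can check what the cluster is actually configured with (output will of course differ per cluster):

# show the corosync configuration (nodelist, ring addresses, totem settings)
cat /etc/pve/corosync.conf
# show the current quorum and membership state
pvecm status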
 
OK, so multicast is not the problem.
I'm attaching the Linux bridge and OVS configs.

I ran some tests:
1. With the OVS network config I can SSH to the nodes and reach the web GUI (https://*:8006) from the internal network, and make any other connections from anywhere else.
2. With the OVS network config I can make connections (SSH, HTTPS via wget, and everything else) from the nodes to any other internal system.
3. With the OVS network config only ICMP works between the nodes.
4. With the Linux bridge config everything works fine.

wtf??? :eek:

UPD: Very strange - all nodes connect to the same switch stack and there are no errors on the stack side: no blocked ports, no STP errors or BPDUs. Something in the network isn't working correctly, only between nodes with the OVS network config, and only between nodes connected to the same physical switch.
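
Since small ICMP packets get through but TCP between the nodes does not, one more thing worth testing is whether large frames survive the path, e.g. with don't-fragment pings (the address is a placeholder):

# 1472 bytes of payload + 28 bytes of ICMP/IP headers = 1500, should pass with a standard MTU
ping -M do -s 1472 -c 3 <other-node>
# 8972 + 28 = 9000, only passes if every hop really supports jumbo frames
ping -M do -s 8972 -c 3 <other-node>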
 


Solved!

The MTU on the main PVE VLAN was wrong. I had set 9000 - that's not right, it needs to be 1500. :cool:
:p
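
For reference, a minimal sketch of what the corrected OVS part of /etc/network/interfaces could look like - the interface names, VLAN tag and addresses are placeholders, the only point is the ovs_mtu value:

auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-tcp lacp=active
    ovs_mtu 1500

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan50
    ovs_mtu 1500

auto vlan50
iface vlan50 inet static
    address 192.0.2.11/24
    gateway 192.0.2.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=50
    ovs_mtu 1500

The effective MTU can then be double-checked with "ip link show vmbr0" after reloading the network config.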
Hehe, glad you found the issue :) I took the liberty of marking the thread as solved.