Hi,
Most of our VMs are dual-stack; only a very small number are IPv6-only. We are currently seeing the problem on the dual-stack VMs on PVE 8.4.
After a manual offline migration (GUI or ha-manager), IPv6 on some of the migrated VMs doesn't work after boot. After an automatic HA migration (for example when a node is fenced), the VM's IPv6 works as expected every time.
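To narrow this down, here is a minimal diagnostic sketch (vmbr0 is a placeholder bridge name, adjust to your setup):

Code:
# Linux bridge: is IGMP/MLD snooping on? (1 = enabled, which is the kernel default)
cat /sys/class/net/vmbr0/bridge/multicast_snooping

# Open vSwitch bridge: query the same setting
ovs-vsctl get Bridge vmbr0 mcast_snooping_enable

# Inside an affected VM: a broken gateway entry shows up as FAILED/INCOMPLETE
ip -6 neigh show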
Based on my searching, this looks like a problem with multicast snooping on the PVE network bridges: stale MLD snooping state after a migration can stop the bridge from forwarding the multicast traffic that IPv6 neighbor discovery depends on. The usual fix is to disable snooping, e.g.:
Code:
bridge-mcsnoop no
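For context, a minimal sketch of where that option goes in /etc/network/interfaces (vmbr0 and eno1 are placeholder names, adjust to your setup):

Code:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # disable IGMP/MLD snooping on this Linux bridge
        bridge-mcsnoop no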
for a Linux bridge, or for Open vSwitch (which we use; I'm not sure this is the correct syntax):
Code:
ovs_options mcast_snooping_enable=false
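If ovs_options doesn't take effect, an alternative I'd try (my assumption, not something I found in the PVE docs; vmbr0 is a placeholder name) is setting the OVS Bridge table column directly, either at runtime or via an ovs_extra line in the bridge stanza of /etc/network/interfaces:

Code:
# one-off, at runtime:
ovs-vsctl set Bridge vmbr0 mcast_snooping_enable=false

# or persistently, in the vmbr0 stanza:
ovs_extra set Bridge vmbr0 mcast_snooping_enable=false

As far as I know, OVS keeps multicast snooping disabled by default, so it's worth checking the current value with "ovs-vsctl get Bridge vmbr0 mcast_snooping_enable" before changing anything.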
There is nothing about this in the PVE documentation. I don't know whether PVE 9 has the same issue, but if IPv6 doesn't work reliably out of the box, the documentation needs to be updated at the very least.
Has anybody seen similar problems with IPv6-only VMs? As I wrote, we don't run such a setup at a larger scale.
PVE team, can you do something about it?