We've hit a pretty strange issue after a 6-to-7 upgrade on a system using openvswitch and an RBD storage pool defined for an external ceph cluster.
Upon boot, the ceph storage is available for a brief second or two, but as soon as VMs start bringing up their vswitch ports it becomes inaccessible.
We've tried various ceph versions but have settled back on 15.2.16-pve1. Everything else is up to date as of this posting: PVE 7.3-3, openvswitch 2.15.0+ds1-2+deb11u1.
We can regain access (at least the web UI storage summary for the ceph pool comes back) by running "systemctl restart networking", but that obviously tears down all the existing guest interfaces, which makes it useless as a workaround.
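To be precise about what "regain access" means, the CLI checks I'd use to confirm the same thing the web UI shows would be something like this (my assumption that these reflect what the GUI is reporting):

pvesm status                 # lists each storage and whether PVE considers it active
journalctl -u pvestatd -n 100   # pvestatd is the daemon that polls storage status for the web UI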
Kind of at a loss as to where to continue debugging here, short of ripping openvswitch out and reverting to plain Linux bridges - something I'd dearly like to avoid.
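In case it helps anyone point me in a direction, this is roughly what I can check next. Assumptions on my part: the ceph public network rides on an OVS bridge on these hosts, "vmbr0" and the monitor IP are placeholders for our actual names, and the MTU check only matters if jumbo frames are configured.

ovs-vsctl show                    # confirm the bridge and ports look sane once guests are up
ip -br addr show vmbr0            # check the host still has its address on the storage-facing bridge
ip route get <mon-ip>             # confirm the route to a monitor still goes out the expected interface
ping -M do -s 8972 <mon-ip>       # path MTU check, only relevant with 9000 MTU / jumbo frames
journalctl -u pvestatd -n 100     # see how and when the storage polling starts failing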