Hi,
right now I don't have time to test it, but I think it could be something related to LACP...
Does everyone who has the problem have LACP aggregation?

Me, of course... a single 802.3ad bond (4 ports on PVE, 2 ports on PBS).
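For reference, a 4-port 802.3ad bond like the one described above is normally defined in /etc/network/interfaces on the PVE side. The sketch below is only an illustration, with made-up interface names and addresses, not anyone's actual config:

    # LACP (802.3ad) bond over four physical ports (example names)
    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    # bridge for guests, carried on top of the bond
    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
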
No bonding on my setup, on either the PVE or the PBS side.
Do you have 10 GbE or MTU at 9000?
Me too. All my PVE nodes run with an 802.3ad LACP bond (layer 3+4 hash policy) at MTU 1500, on two 10G cards.
You could also try downgrading the kernel on the PVE host and report back; it might help, but it's not related to a specific manufacturer's driver.
I'm actually running an Intel Corporation Ethernet Controller X710 for 10GbE SFP+ on my 8.4.14 hosts and a BCM5719 on a 9.1.1 test host, and I have no issues restoring VMs in either scenario.
On the other hand, I'm on an Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) on PBS 4.1, and every backup was a nightmare before I reverted to the good old 6.14 kernel.
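If someone wants to try the older kernel on the PVE side as well, a common way (assuming the 6.14 kernel is still installed) is to pin it with proxmox-boot-tool; the exact version string below is only an example:

    # show which kernels are installed and which one is pinned
    proxmox-boot-tool kernel list
    # keep booting the older kernel until it is unpinned
    proxmox-boot-tool kernel pin 6.14.11-4-pve
    reboot
    # later, to go back to the default (newest) kernel
    proxmox-boot-tool kernel unpin
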
Both: 10 GbE on PVE and PBS, and MTU at 9000 on both NICs.
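To rule MTU in or out, something like this shows the MTU actually in effect and whether a 9000-byte packet survives the path unfragmented (interface names and the PBS address are placeholders):

    # MTU currently set on the NICs, bond and bridge
    ip -br link show
    # 8972 = 9000 minus 28 bytes of ICMP/IP headers; -M do forbids fragmentation
    ping -M do -s 8972 -c 3 <pbs-address>
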
So, with a 9.1.1 node and the 6.14.11-4-pve kernel, I can restore without any problem.
PS: I had no problems with mass live migration either, and I always use the same bond of 4x10Gbit with VLANs and MTU 9000.
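For anyone comparing setups, the negotiated LACP state and the NIC driver in use can be read directly on the host; bond0 and the interface name below are just examples:

    # per-slave state, aggregator IDs and LACP partner details
    cat /proc/net/bonding/bond0
    # driver/firmware of a member NIC (e.g. i40e for X710, ixgbe for 82599ES, tg3 for BCM5719)
    ethtool -i enp1s0f0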