Again answering to myself...
This seems to be a known issue (or maybe feature?) and there is a workaround available.
I've got the information from
Maybe this is not as uncommon as I thought it would be; the Intel DPDK documentation even describes exactly that use case.
The problem I'm having is that VMs on the left side (via the VF driver) cannot talk to VMs on the right-hand side (via a bridge on the PF). Both can talk to the PF itself and to outside hosts.
Some more info:
Even the ARP reply is not getting through.
When I ping the default gateway (192.168.12.1) from the container (192.168.12.13), I can see:
11:33:21.000863 ARP, Request who-has 192.168.12.1 tell 192.168.12.13, length 28
11:33:21.000927 ARP, Reply 192.168.12.1 is-at de:ad:be:ef:21:00...
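For reference, a capture like the one above can be taken on the PF side with something like the following (the bridge name vmbr0 is an assumption based on my setup; adjust to the interface you want to watch):

```shell
# Watch ARP traffic for the container on the PF's bridge (vmbr0 is an assumed name)
tcpdump -eni vmbr0 arp and host 192.168.12.13
```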
Yes, I still have the issue. Just tried again after some time (and a series of updates in the meantime).
# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
I may have been a bit too optimistic here. While it works fine on a single node, on a cluster I do have issues (and this is probably where CARP is really needed). I still need to dig into the details, and this may not be Proxmox-related at all, but I wanted to leave the latest update here since this...
Unfortunately not. They are not accepting my PR (backporting the fix for this known issue) because there is already a PR for a newer cloud-init version in Debian. But that one hasn't been applied, for various reasons... it's a mess.
I'm passing a VF through to a VM using SR-IOV. This VM works fine: I can reach it from Proxmox (the management interface on vmbr0 of the PF) and from other hosts on the network.
However, a CT attached to the very same vmbr0 cannot reach the VM, even though it can reach everything else on the network. According to tcpdump...
Just tried 20.7a (which already ships iavf instead of ixlv) and it works perfectly fine (from a VF/CARP perspective). The lack of available packages prevents me from doing more thorough testing, but I'd say it looks good, and I'll probably switch to the 20.7 alpha/beta as soon as...
It works with a Linux guest, so I have to assume the guest's VF driver is causing the problem. Trying a vanilla FreeBSD 12 doesn't really help me, as regardless of the result I wouldn't have a solution.
Instead I'd like to try iavf on OPNSense itself. As mentioned the module loads, but as the...
I managed to get iavf compiled and loading, but it doesn't take ownership of the interfaces because ixlv is still loaded.
Any recommendations on how to get rid of ixlv?
hint.ixlv.0.disabled=1 left the interface assigned to ixlv (but disabled/unusable)
if_ixlv_load="NO" in /boot/loader.conf.local...
Yes, that is configured (as well as spoof_check off).
As per the doc:
The vf-true-promisc-support priv-flag does not enable promiscuous mode; rather,
it designates which type of promiscuous mode (limited or true) you will get
when you enable promiscuous mode using the ip link commands above...
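On my setup that maps to roughly the following commands (a sketch only: the PF name ens1f0, the VF index 0, and the guest-side eth0 are assumptions; the priv-flag exists only on supported Intel PF drivers):

```shell
# On the hypervisor: allow "true" promiscuous mode for VFs of this PF
ethtool --set-priv-flags ens1f0 vf-true-promisc-support on

# Trust the VF and disable spoof checking so the guest may enable promiscuous mode
ip link set dev ens1f0 vf 0 trust on
ip link set dev ens1f0 vf 0 spoofchk off

# Inside the guest: actually enable promiscuous mode on the VF interface
ip link set dev eth0 promisc on
```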
Yes, the guest/VF doesn't see the packets, but the PF does. And the guest says:
ixlv0_vlan1234: permanently promiscuous mode enabled
ixlv0: permanently promiscuous mode enabled
on the relevant interfaces
Never mind, the supplied driver and ethtool do support it. But (as documented) it only works for the "first PF on a device" (which means the first port on a multi-NIC card).
However, even with true-promisc enabled, the VF doesn't receive the traffic for the secondary MAC (regardless of a PF-defined or...
I'm passing through a VF from my Intel X722-based NIC to a firewall (OPNsense) QEMU VM that uses CARP for high availability.
However, due to filtering by the PF, the packets destined to the virtual CARP MAC addresses do not reach the VF/guest.
This is "by design" and if such functionality is...
Wouldn't it be sufficient to introduce some method to pass the VMID to the custom cloud-init file?
Then one could have a (dynamically generated) cicustom file for every VM and the template can be cloned without any modifications afterwards.
A hook for a new VM being created from a template...
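A minimal sketch of what I have in mind, assuming a Proxmox hookscript that receives the VMID and phase as arguments; the snippet path layout and the file-name scheme are my own assumptions, and the actual snippet generation plus the `qm set --cicustom` call are left as commented placeholders:

```shell
#!/bin/sh
# Hypothetical hookscript sketch: derive a per-VM cicustom snippet path from the
# VMID and point the freshly cloned VM at it. Directory and naming are assumptions.
SNIPPET_DIR=/var/lib/vz/snippets

snippet_for_vmid() {
    # One dynamically generated user-data file per VM
    echo "$SNIPPET_DIR/user-data-$1.yml"
}

vmid=$1
phase=$2

if [ "$phase" = "pre-start" ]; then
    snippet=$(snippet_for_vmid "$vmid")
    # Render the per-VM cloud-init user data into "$snippet" here, then e.g.:
    # qm set "$vmid" --cicustom "user=local:snippets/$(basename "$snippet")"
    :
fi
```

With something like this, the template could be cloned without any manual cicustom changes afterwards.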
I'm using this one:
The OpenStack image gave me problems with physical drivers (it didn't even find the CD-ROM for cloud-init, and it also lacked drivers for my passthrough NICs).
I can see the rename to eth0 using the "nocloud" citype, but not when using configdrive.
But even with nocloud, the interface IP is not applied to eth0, as /etc/network/interfaces.d/50-cloud-init.cfg is never read. The "." in .cfg seems to be an invalid character in included file names. However, this is...