Another advantage of macvtap (if the above is not enough) is the ability to pass through a VF of a network device (or an entire network device) without actual PCI passthrough, which preserves migration capabilities. See...
As far as I can see, the only thing that prevents semi-integrated usage in Proxmox is the fact that "-args" in qm.conf are quoted. This makes the file descriptor creation (e.g. <>/dev/tap31 in the example above) fail.
However, the file descriptors from cmdline qm invocation are passed to kvm, so...
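The fd-passing trick described above can be sketched roughly like this (a hedged sketch, not the exact invocation: the interface name eno1, the macvtap name, and fd number 30 are all illustrative):

```shell
# Create a macvtap on top of the physical NIC (names are illustrative)
ip link add link eno1 name macvtap0 type macvtap mode bridge
ip link set macvtap0 up

# The character device is /dev/tap<ifindex> of the macvtap interface
TAPIDX=$(cat /sys/class/net/macvtap0/ifindex)

# Open it read/write on fd 30 -- this is the <>/dev/tapN construct that
# fails when Proxmox quotes the -args string
exec 30<>/dev/tap$TAPIDX

# Hand the already-open fd to QEMU/KVM (remaining options omitted)
qemu-system-x86_64 \
  -netdev tap,id=net0,fd=30 \
  -device virtio-net-pci,netdev=net0
```

Since the fd is opened by the invoking shell and inherited by the kvm process, this works from the command line but not through the quoted -args in qm.conf.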
I was playing around with ipvtap as I see a real-world advantage here over the usual bridge setup:
VMs (and CTs for that matter) don't need a unique MAC address but share the MAC address with the physical interface. This matters with several hosting providers (e.g. Hetzner only allowing a single MAC...
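For reference, creating such a device is straightforward with iproute2 (a minimal sketch; eno1 and the ipvtap0 name are illustrative):

```shell
# Create an ipvtap on top of the physical NIC -- guests attached to it
# share eno1's MAC address instead of needing their own
ip link add link eno1 name ipvtap0 type ipvtap mode l2
ip link set ipvtap0 up

# The corresponding tap character device appears as /dev/tap<ifindex>
ls -l /dev/tap$(cat /sys/class/net/ipvtap0/ifindex)
```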
I also created a hookscript with a little more check logic and opened a bug in Bugzilla.
The script is in use on multiple Proxmox clusters for about a year without any issues.
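The skeleton of such a hookscript looks roughly like this (a minimal sketch, not the actual script from the bug report; the echo lines stand in for the real setup/teardown logic):

```shell
#!/bin/bash
# Proxmox invokes a hookscript as: <script> <vmid> <phase>
vmid="$1"
phase="$2"

case "$phase" in
    pre-start)
        # e.g. create the ipvtap device before the guest starts
        echo "preparing network for VM $vmid"
        ;;
    post-stop)
        # e.g. tear the device down again after the guest stops
        echo "cleaning up network for VM $vmid"
        ;;
    *)
        # ignore the other phases (post-start, pre-stop)
        ;;
esac
exit 0
```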
Answering myself again...
This seems to be a known issue (or maybe feature?) and there is a workaround available.
I've got the information from
https://bugzilla.redhat.com/show_bug.cgi?id=1067802
and
https://community.intel.com/t5/Ethernet-Products/82599-VF-to-Linux-host-bridge/td-p/351802...
Maybe this is not as uncommon as I thought: the Intel DPDK documentation even describes exactly that use case.
The problem I'm having is that VMs from the left side (via VF driver) cannot talk to VMs on the right hand side (via bridge of PF). Both can talk to the PF itself and outside hosts.
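The usual first checks for this kind of VF-to-bridge split (a hedged sketch of common diagnostics, not the fix itself; eno1, VF index 0, and vmbr0 are illustrative):

```shell
# Show the VFs of the PF, including their MACs and spoofchk/trust state
ip link show eno1

# MAC/VLAN anti-spoofing on the VF can silently drop frames
ip link set eno1 vf 0 spoofchk off

# Allow the VF to request promiscuous mode from the PF
ip link set eno1 vf 0 trust on

# Which MACs has the kernel bridge actually learned?
bridge fdb show br vmbr0
```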
This may not be specific to CT. I just spun up the first VM that was supposed to go onto the bridge, and it faced the same communication issues as the container.
Some more info:
Even the ARP reply is not getting through.
Pinging the default gateway (192.168.12.1) from the container (192.168.12.13), I can see:
11:33:21.000863 ARP, Request who-has 192.168.12.1 tell 192.168.12.13, length 28
11:33:21.000927 ARP, Reply 192.168.12.1 is-at de:ad:be:ef:21:00...
Yes I still have the issue. Just tried again after some time (and a series of updates in the meantime)
# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6...
I may have been a bit too optimistic here. While it works fine on a single node, on a cluster I do have issues (and this is probably where CARP is really needed). I still need to dig into the details, and this may not be Proxmox related at all, but I wanted to leave the latest update here since this...
Unfortunately not. They are not accepting my PR (backporting the fix for this known issue) because there is already a PR for a newer cloud-init version in Debian. But that has not been applied for various reasons... it's a mess.
I'm passing through a VF using SRIOV to a VM. This VM works fine, I can reach it from Proxmox (mgmt interface on vmbr0 of the PF) and other hosts on the network.
However, a CT attached to the very same vmbr0 cannot reach the VM, even though it can reach everything else on the network. According to tcpdump...
Just tried the 20.7a (which already comes with iavf instead of ixlv) and this works perfectly fine (from a VF/CARP perspective). Lack of available packages prevents me from performing more thorough testing, but I'd say it looks good and I'll probably switch to the 20.7 alpha/beta as soon as...
It works with a Linux guest, so I have to assume the guest/VF driver is causing the problem. Trying with a vanilla FreeBSD 12 doesn't really help me as regardless of the results I don't have a solution.
Instead I'd like to try iavf on OPNSense itself. As mentioned the module loads, but as the...
I managed to get iavf compiled and loading, but it doesn't take ownership of the interfaces as ixlv is still loaded.
Any recommendations on how to get rid of ixlv?
hint.ixlv.0.disabled=1 left the interface assigned to ixlv (but disabled / unusable)
if_ixlv_load="NO" in /boot/loader.conf.local...
Yes, that is configured (as well as spoof_check off).
As per the doc:
The vf-true-promisc-support priv-flag does not enable promiscuous mode; rather,
it designates which type of promiscuous mode (limited or true) you will get
when you enable promiscuous mode using the ip link commands above...
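Per that doc, the flag and the actual promiscuous mode are set separately, roughly like this (a hedged sketch based on the Intel driver docs; the interface name and VF index are illustrative):

```shell
# On the PF: allow "true" (instead of limited) promiscuous mode for VFs
ethtool --set-priv-flags eno1 vf-true-promisc-support on

# The VF must be trusted before it may enter promiscuous mode
ip link set eno1 vf 0 trust on

# Then promiscuous mode is enabled on the VF's interface as usual
# (inside the guest, or on the host if the VF stays there)
ip link set dev eno1 promisc on
```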
Yes, the guest/VF doesn't see the packets, but the PF does. And the guest says:
ixlv0_vlan1234: permanently promiscuous mode enabled
ixlv0: permanently promiscuous mode enabled
on the relevant interfaces
Never mind, the supplied driver and ethtool support it. But (as documented) it only works for the "first PF on a device" (which means the first port on a multi-port card).
However, even with true-promisc enabled the VF doesn't receive the traffic for the secondary MAC (regardless of a PF-defined or...