It looks like I find myself here, hello. I am familiar with Hyper-V and VMware, but have been playing with Proxmox, and by God is it glorious. Why has it taken me so long to arrive here?
But yeah, it also looks like I'm stumbling here from Macvtap LANd.
@dietmar
"What is the advantage"
From IBM
"The MacVTap driver provides exceptional transactional throughput and operations/sec results (up to 10-50%) better than either of the two software bridges. Additionally, throughput of MacVTap scales up with load more quickly compared to using a software bridge. This means that MacVTap is more CPU efficient, consuming less CPU resources to complete the same amount of work. Stated another way, MacVTap can do more work using the same amount of CPU resources.
Although MacVTap is the best performing, it suffers from a couple of issues that may limit the use cases where it would be a suitable choice.
The first limitation is that MacVTap can not readily enable network communication between the KVM host and any of the KVM guests using MacVTap."
So macvtap is a kernel driver tap, versus the software (Linux) bridge that was used in the past (and is still used very heavily today)... Trust me, macvtap documentation is fairly sparse across the interwebs, and yet for the most part it just seems to work as advertised (in my extremely limited testing).
That being said, I believe KVM already integrates with macvtap pretty naturally. Take the following /etc/network/interfaces config for a KVM + virt-manager setup as an example. (Note: I haven't dug into why yet, but this same type of networking setup doesn't play nicely with some custom systemd settings the Proxmox hypervisor uses. ¯\_(ツ)_/¯ systemd complains it can't restart networking, but it restarts anyway and works like it's supposed to... I have no idea. Not really relevant or important here, just thought I'd mention it.)
Code:
auto lo
iface lo inet loopback

allow-hotplug enp1s0f0
auto enp1s0f0
iface enp1s0f0 inet static
    address 10.8.0.30
    netmask 255.255.240.0
    gateway 10.8.0.1
    dns-nameservers 10.80.0.5 10.10.10.10

auto enp1s0f0.4
iface enp1s0f0.4 inet manual
    gateway 10.80.0.1
    vlan-raw-device enp1s0f0

auto enp1s0f0.10
iface enp1s0f0.10 inet manual
    gateway 10.10.10.1
    vlan-raw-device enp1s0f0

auto enp1s0f0.24
iface enp1s0f0.24 inet manual
    gateway 10.24.0.1
    vlan-raw-device enp1s0f0

auto enp1s0f0.25
iface enp1s0f0.25 inet manual
    gateway 10.25.0.1
    vlan-raw-device enp1s0f0
I'm just declaring the VLAN interfaces in the network config file here. Now if I go into virt-manager and add a new VM, I have the option to attach it to any of these VLAN interfaces as a macvtap device in bridge, VEPA, private, or passthrough mode.
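If you want to see what virt-manager actually wrote into the guest definition, it shows up as a direct-attach interface in the libvirt XML; something like the following (VM name and MAC here are placeholders from my setup):
Code:
~# virsh dumpxml somevm | grep -A3 "type='direct'"
    <interface type='direct'>
      <mac address='52:54:00:xx:xx:xx'/>
      <source dev='enp1s0f0.24' mode='bridge'/>
      <model type='virtio'/>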
And once that VM comes online, here's what that looks like in the `ip` command output on the host, for any VM that's been assigned a VLAN interface:
Code:
~# ip a
...
13: enp1s0f0.4@enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
14: enp1s0f0.10@enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
15: enp1s0f0.24@enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
16: enp1s0f0.25@enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
18: macvtap5@enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
26: macvtap1@enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
27: macvtap2@enp1s0f0.24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
28: macvtap3@enp1s0f0.25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
I assume virt-manager is just making the `ip link add` calls for the macvtap interfaces here, and I believe once the interfaces are created, they're just handled by the kernel until they're manually deleted... But don't quote me on that.
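If you wanted to replicate that by hand, it's basically a one-liner per tap. This is just my guess at what virt-manager is doing, mirroring the macvtap2 entry from the output above, so treat it as a sketch:
Code:
~# ip link add link enp1s0f0.24 name macvtap2 type macvtap mode bridge
~# ip link set macvtap2 up
# QEMU then gets handed the matching character device, /dev/tapN where N is
# the interface's ifindex (if I've got that right)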
Things to note with macvtap: As mentioned before, you cannot ping the host IP from guests on the same subnet. That's where the NAT hairpinning comes into play. However, if the guests are on a separate VLAN or any other separately routed subnet, their traffic crosses the NAT/router boundary, and that serves the same purpose as the hairpinning mentioned in the docs: it allows comms between guests and the host over IP.
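For what it's worth, the workaround I keep seeing mentioned for same-subnet host<->guest traffic (I haven't tested it on Proxmox myself; the addresses below just reuse my example numbers) is to move the host's own IP off the physical NIC onto a macvlan sibling in bridge mode, so the host becomes just another bridged endpoint the macvtap guests can reach:
Code:
~# ip link add macvlan0 link enp1s0f0 type macvlan mode bridge
~# ip link set macvlan0 up
~# ip addr del 10.8.0.30/20 dev enp1s0f0
~# ip addr add 10.8.0.30/20 dev macvlan0
# the default route via 10.8.0.1 will need re-adding against macvlan0 after this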
Regardless, even though I can't ping host to guest and vice versa, virt-manager is still able to manage these machines over VNC/SPICE, so some type of communication is still taking place here. Perhaps it's riding the VLAN interface connection into the macvtap, or ignoring the tap entirely from a management perspective? I'm not sure.
I don't think this is really an either/or scenario; it would just be nice to have the option to take advantage of something that's been built into the kernel for over a decade.
My use case here is reviving old hardware by running Proxmox on older CPUs. When I get a CPU spike, it also hits my networking, and I believe this could help alleviate some of that pressure. Or not, I'm not the brightest.
That's my two cents, but y'all keep rocking. I think I'm gonna start migrating everything to Proxmox.
Jack
TL;DR
As far as I can tell, the main documentation covering macvtap's communication issues was written around the same time cgroup networking was being built into the kernel, so macvtap was never really considered as an easy solution to that problem, and no docs have re-addressed it since. I guess Docker kind of consumed macvtap/macvlan and everyone assumed they were just for containers ¯\_(ツ)_/¯