you can do both:
multiple vlans inside 1 vxlan tunnel works,
but I think that multiple vxlans/vnets, one per network, is better.
(not sure about nic offloading performance with vlan inside vxlan)
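As a rough sketch of the multiple-vnet approach (zone name, peer ips and VNI tags below are made up for the example), the SDN config could look like:

```
# /etc/pve/sdn/zones.cfg
vxlan: zone1
        peers 10.0.0.1,10.0.0.2,10.0.0.3

# /etc/pve/sdn/vnets.cfg  -- one vnet (= one vxlan VNI) per network
vnet: vnet100
        zone zone1
        tag 100100

vnet: vnet200
        zone zone1
        tag 200200
```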
I don't think that the search domain is currently sent in the dhcp options.
(the code is /usr/share/perl5/PVE/Network/SDN/Dhcp/Dnsmasq.pm)
But it should be defined at zone level (you have a dns suffix option). There are no dhcp options configurable in...
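For reference, dnsmasq itself can push a search domain to clients via dhcp option 119, so if you want to test it manually, a line like this in the dnsmasq config should work (the domain is just an example):

```
# send the domain-search list (option 119) to dhcp clients
dhcp-option=option:domain-search,example.com
```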
PVE does exactly the same job as LXD and Incus (and more), so it doesn't make sense to have PVE manage Incus: you'd end up with two layers of management instead of one.
maybe check your cpu stats to see if one core is at 100%: it's quite possible that old nics don't have a vxlan-compatible RSS feature, so they are not able to dispatch the vxlan traffic across multiple cores.
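Something like this should show it (replace eno1 with your nic; tx-udp_tnl-segmentation is the vxlan tx offload flag):

```shell
# per-core usage: look for one core stuck near 100% in softirq
mpstat -P ALL 1 3

# nic offloads relevant to vxlan encapsulation
ethtool -k eno1 | grep -i udp_tnl

# RSS indirection table (errors out if the nic has no RSS)
ethtool -x eno1
```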
maybe in the future, but for layer2 this will need a central gateway somewhere to do the nat (so some kind of vm or router appliance managed by proxmox).
the options are generic across the different zone types, maybe it could be possible to...
I really don't know how vxlan performs on such a "bit old" (2010) cpu ;)
also, modern nics have vxlan offloading; without it, the vxlan encapsulation is done entirely by the cpu.
maybe try to disable the spectre/meltdown/... mitigations.
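On current kernels the simplest way is the mitigations=off kernel parameter (it's a security trade-off, and the line below assumes a stock grub setup):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
```

then run `update-grub` and reboot.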
1. you can use any vmbrX plugged on enoX, without any vlan interface enoX.Y.
then create the vnets, where you'll define the vlan tag number.
it's better to use vlan-aware on vmbrX, but it's not mandatory.
2. yes. simply define the vnets, move the vm...
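A minimal sketch of the config (zone/vnet names and the tag are made up): a vlan zone on vmbr0 in /etc/pve/sdn/zones.cfg, plus one vnet per vlan in /etc/pve/sdn/vnets.cfg:

```
# /etc/pve/sdn/zones.cfg
vlan: myzone
        bridge vmbr0

# /etc/pve/sdn/vnets.cfg
vnet: vnet10
        zone myzone
        tag 10
```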
It's currently not supported, as you need the vnet to be the gateway of the vm (so you'd have the same ip on each host in the same vlan, and it won't work).
Currently it only works with layer3 zones (simple && evpn zones).
you can edit /etc/corosync/corosync.conf on each node (don't forget to increase config_version), then restart corosync on each node.
then copy /etc/corosync/corosync.conf to /etc/pve/corosync.conf once the cluster is ok
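Roughly, the steps look like this (the config_version values are just an example, bump your current value by one):

```shell
# 1. on each node: edit the file and bump config_version
vi /etc/corosync/corosync.conf        # e.g. config_version: 4 -> 5
systemctl restart corosync

# 2. when the cluster is quorate again, sync it into pmxcfs from one node
cp /etc/corosync/corosync.conf /etc/pve/corosync.conf
```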
if vmbr0 is not vlan-aware, the service vlan is set on the physical interface enslaved in the defined bridge.
if vmbr0 is vlan-aware, the service vlan is set as a vlan interface on top of vmbr0.
both should work normally.
Hi,
proxmox doesn't use storage snapshots for backups.
I'm currently working on adding snapshot support for shared lvm (no official target date, I'm hoping for pve9). It'll work with lvm over iscsi|fc.
the proxmox firewall? or a physical firewall/router somewhere on your network? (in that case, the mtu of the firewall interfaces needs to be increased too)
Indeed, you need to increase the mtu on your physical switch ports (to 1550, for example) to handle the 50 bytes of vxlan overhead, if you want to use mtu 1500 in the vms.
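The 50 bytes come from the outer headers that vxlan wraps around the inner Ethernet frame; a quick sanity check of the arithmetic (assuming an IPv4 underlay with no outer vlan tag; an IPv6 underlay needs 20 bytes more):

```python
# vxlan encapsulation overhead, counted against the underlay MTU
OUTER_IPV4 = 20      # outer IPv4 header
OUTER_UDP = 8        # outer UDP header (dst port 4789)
VXLAN_HEADER = 8     # vxlan header (flags + VNI)
INNER_ETHERNET = 14  # inner Ethernet header, carried inside the tunnel

def underlay_mtu(vm_mtu: int) -> int:
    """MTU needed on the physical switch ports for a given vm-side MTU."""
    return vm_mtu + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET

print(underlay_mtu(1500))  # → 1550
```

The same formula also explains the usual alternative: keeping the switches at 1500 and lowering the vnet mtu to 1450 instead.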