You can do live cross-cluster migration, but from the command line only:
qm remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]
You can have different storage, network, etc. on the target side, no problem.
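A hypothetical invocation (the VMIDs, host, API token, fingerprint, bridge, and storage names are all placeholders):

qm remote-migrate 100 100 'host=203.0.113.10,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<target-cert-fingerprint>' --target-bridge vmbr0 --target-storage local-zfs --online

The --online flag is what makes it a live migration of a running VM.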
You need 3 nodes to have 3 monitors to manage the Ceph quorum (Ceph quorum != Proxmox corosync quorum).
Then you can create OSDs on only 2 of the nodes, with a pool using replica 2.
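For example with pveceph (the pool name is an assumption):

# pool with 2 replicas; min_size 2 blocks I/O if one replica is down,
# min_size 1 keeps I/O running but risks data loss
pveceph pool create mypool --size 2 --min_size 2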
The problem is that we can't do a REJECT with a bridged firewall with nftables. (I made it work with iptables using a dirty trick, but that trick is not available in nftables.)
It'll send traffic to both bonds at the same time.
If you want some kind of failover, you can create a bond (active-backup) of bonds.
(I'm not sure the GUI allows it, but you can do it in /etc/network/interfaces, see the sketch below.)
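A minimal sketch, assuming two LACP bonds over NICs eno1-eno4 and an example management address:

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad

auto bond1
iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-mode 802.3ad

# active-backup bond on top of the two LACP bonds
auto bond2
iface bond2 inet manual
        bond-slaves bond0 bond1
        bond-mode active-backup
        bond-primary bond0

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond2
        bridge-stp off
        bridge-fd 0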
Why not simply use the Proxmox backup feature? I don't use any storage-level snapshot feature myself.
You can't export/import LVM snapshots like you can with ZFS.
(AFAIK, only ZFS && Ceph RBD support this feature.)
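For reference, this is what that feature looks like on those two storages (dataset/image names are assumptions, and both examples assume a full copy of @snap0 already exists on the target):

# ZFS: send the increment between two snapshots to another host
zfs send -i @snap0 rpool/data/vm-100-disk-0@snap1 | ssh target zfs receive tank/vm-100-disk-0

# Ceph RBD: export/import the delta between two snapshots
rbd export-diff --from-snap snap0 rbd/vm-100-disk-0@snap1 - | ssh target rbd import-diff - rbd/vm-100-disk-0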
It'll not be easy, but I think you should check the MAC address tables on the different switches, and verify that the MAC address of your Proxmox node is not flapping between ports or something like that.
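Something like this (the bridge name is an assumption, and the switch syntax varies by vendor):

# on the Proxmox node, find the MAC of the bridge
ip link show vmbr0
# on a Cisco-style switch, see which port learned that MAC
show mac address-table address aa:bb:cc:dd:ee:ff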
The vnets are Linux bridges, connected to your main vmbrX defined in the zone. The vmbrX itself can be a Linux bridge or an OVS bridge.
(I have tested QinQ with different users, with both OVS and Linux bridge for the vmbrX.)
The config is generated in /etc/network/interfaces.d/sdn
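Note that /etc/network/interfaces needs to source that file for the SDN config to be applied (a quick check, assuming a default setup):

# this line must be present in /etc/network/interfaces
source /etc/network/interfaces.d/*

# inspect the generated vnet bridges, then reload
cat /etc/network/interfaces.d/sdn
ifreload -a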
Have you tried a virtio NIC instead of e1000? I'm not sure that VLAN tagging works fine with e1000.
With tcpdump or Wireshark, you should see tagged packets on the <vnetid> bridge,
and double-tagged packets on the physical interface going out of the server.
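For example (the vnet and NIC names are assumptions):

# on the vnet bridge: expect single-tagged frames
tcpdump -e -n -i myvnet vlan
# on the physical NIC: expect double-tagged (QinQ) frames
tcpdump -e -n -i eno1 'vlan and vlan'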
Another way: you can also use a VLAN zone, define the outer...
Do you have a firewall between your nodes? It should work without any problem; the 3 nodes just need to be able to communicate with each other, with the BGP port open (tcp/179) && the VXLAN port open (udp/4789).
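For example with iptables (the cluster subnet is an assumption):

# allow BGP and VXLAN between the nodes
iptables -A INPUT -s 192.0.2.0/24 -p tcp --dport 179 -j ACCEPT
iptables -A INPUT -s 192.0.2.0/24 -p udp --dport 4789 -j ACCEPT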
I think you should look at an EVPN zone with an anycast gateway (same IP on each vnet on each host).
There is no way to change the gateway IP inside the VM when you're migrating it (because the migration is transparent to the guest OS).
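Roughly, the anycast gateway is just the gateway defined on the vnet's subnet, and the SDN config is deployed identically on every node. A sketch of /etc/pve/sdn/vnets.cfg and subnets.cfg (the zone, vnet, tag, and addresses are assumptions):

vnet: myvnet
        zone myevpn
        tag 11000

subnet: myevpn-10.0.1.0-24
        vnet myvnet
        gateway 10.0.1.1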
(Note that DHCP is currently only implemented for simple zones; other zones...