Using this setup can you use Live Migration of containers?
Live? No. There is no live migration of LXC containers in Proxmox, on any storage. Any LXC container migration requires the container to shut down and restart.
However, with the disk images on NFS/Gluster, that shutdown/migrate/restart cycle only takes about 5 seconds, even on my slow/weak systems, because no disk data has to be copied between nodes.
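For reference, restart-mode migration is a single command. Container ID and node name below are just examples, not from my setup:

```shell
# Restart-mode migration: stops CT 101, moves its config to node pve2,
# and starts it there. With the rootfs on shared NFS/Gluster storage
# there is no disk copy, so the cycle is just stop + move + start.
pct migrate 101 pve2 --restart --timeout 30
```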
Could you maybe describe your setup a little bit more?
This is a cheap bare-bones home lab setup. I'm running a pair of Proxmox VE 5 machines as a 2-node cluster, on two old Intel boxes. I have 2 more disk servers on ARM SBCs (ODROID XU4 with USB-SATA adapters, running Armbian Ubuntu 20.04). All 4 machines are running Gluster 7, and NFS-Ganesha with nfs-ganesha-gluster to provide NFS access to the gluster volumes. Gluster works just fine in a mixed CPU architecture (x86/64 and ARM) cluster.
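To give an idea of the NFS-Ganesha side: a minimal export over the Gluster FSAL looks something like the sketch below. Volume name and paths are examples, not my exact config:

```
# /etc/ganesha/ganesha.conf fragment -- export a Gluster volume over NFS
# (needs the nfs-ganesha-gluster plugin package)
EXPORT {
    Export_Id = 1;
    Path = "/pve-disks";
    Pseudo = "/pve-disks";        # NFSv4 path clients mount
    Access_Type = RW;
    Squash = No_root_squash;      # Proxmox needs root access to disk images
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";   # talk to the local glusterd
        Volume = "pve-disks";
    }
}
```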
Each node is cheap and easily replaceable. All mirroring is done by Gluster; no local disk mirror/RAID is used on any machine. Each system has 1 or 2 SSDs and 1 or 2 mechanical hard drives.
I use a DNS round-robin name for Gluster and NFS access which includes all 4 server IPs. This is better than Proxmox's primary/secondary config entries for two reasons: (1) it lets you specify more than 2 servers available to answer requests, and (2) each host will prefer its own address when it appears in the RR group, eliminating network traffic when the gluster file it wants is on the same machine.
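Concretely, the RR name is just multiple A records for one name. The zone fragment below uses made-up hostnames and IPs:

```
; one name, four A records, answered round-robin
gluster  IN A 192.168.1.11   ; pve1
gluster  IN A 192.168.1.12   ; pve2
gluster  IN A 192.168.1.13   ; odroid1
gluster  IN A 192.168.1.14   ; odroid2
```

Proxmox then mounts NFS through that name instead of a single server IP; something like this storage.cfg entry (again, names are examples):

```
# /etc/pve/storage.cfg fragment
nfs: shared-disks
	server gluster.example.lan
	export /pve-disks
	path /mnt/pve/shared-disks
	content images,rootdir
```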
I mirror my gluster volume for the disk images across the two PVE servers, so one half of the mirror is always on the local machine, roughly halving network traffic compared to a fully remote setup.
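Creating that mirrored volume is a one-liner. Hostnames and brick paths below are examples; note that recent Gluster versions will warn that a plain replica-2 volume is prone to split-brain and ask you to confirm:

```shell
# Two-way mirror across the two PVE nodes; each brick on a local SSD
gluster volume create pve-disks replica 2 \
    pve1:/bricks/ssd1/pve-disks \
    pve2:/bricks/ssd1/pve-disks
gluster volume start pve-disks
```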
As a result, when Proxmox/LXC is accessing the container's disk image file on NFS, the Proxmox server is talking to NFS-Ganesha which then talks to Gluster, all on the same box. For reads, everything stays within the server, no network access required. For writes, half stays local and the other mirror half writes to the other gluster/PVE server.
Larger gluster volumes for general file storage/sharing are spread across all 4 servers. Since I have more than 2 servers, I configure those volumes as replica 2 plus arbiter (a simple mirror with a metadata-only arbiter brick to prevent split-brain) combined with distribute. It works great.
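A sketch of that layout, with example hostnames and brick paths: two replica sets, each holding two data bricks plus one small arbiter brick, distributed into one volume. Gluster groups every three bricks listed into one subvolume, taking the third as the arbiter:

```shell
# 2 x (2 data + 1 arbiter), distributed across 4 servers.
# Arbiter bricks hold only metadata, so they can live on a small partition;
# each arbiter sits on a node with no data brick in the same subvolume.
gluster volume create tank replica 3 arbiter 1 \
    pve1:/bricks/hdd1/tank  odroid1:/bricks/hdd1/tank  pve2:/bricks/arb/tank \
    pve2:/bricks/hdd1/tank  odroid2:/bricks/hdd1/tank  pve1:/bricks/arb/tank
gluster volume start tank
```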