Because of my experience...
I set up the GlusterFS backend and tried to live-migrate the volumes from NFS to the GlusterFS storage, but it didn't work.
So I migrated the volumes offline instead, but then I couldn't start the VMs because of an error like this:
task started by HA resource agent
kvm: -drive file=gluster://11.22.33.44/pm/images/201/vm-201-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on: Could not read qcow2 header: Invalid argument
TASK ERROR: start failed: command '/usr/bin/kvm -id 201 -name vt-oe412 -chardev 'socket,id=qmp,path=/var/run/qemu-server/201.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/201.pid -daemonize -smbios 'type=1,uuid=012ab6e5-f4b4-12ab-ab12-adf81689cb312' -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/201.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2014 -k de -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:2322b12dacb1' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=gluster://11.22.33.44/pm/images/201/vm-201-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap201i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=AB:CD:FG:HI:JK:LM,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
The only way to make it work was to set a different cache mode. Now it works, and I can even live-migrate between storage back-ends.
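For anyone hitting the same error: the failing command line shows the disk attached with cache=none,aio=native, and switching the cache mode away from cache=none is what fixed it for me. A sketch of the change, assuming the same VM ID (201), storage name (pm), and writeback as the chosen cache mode (the exact mode you pick may differ):

```shell
# Change the cache mode of the disk via the Proxmox CLI.
# "writeback" is one example of a mode other than "none"; adjust to taste.
qm set 201 --scsi0 pm:201/vm-201-disk-0.qcow2,cache=writeback

# Equivalently, edit the disk line in /etc/pve/qemu-server/201.conf:
#   scsi0: pm:201/vm-201-disk-0.qcow2,cache=writeback
```

The same setting is available in the GUI under the VM's Hardware → Hard Disk → Cache dropdown.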
Still, it would be nice to have the default cache mode for GlusterFS storage changed, so that my clients don't have to struggle to figure out why their new VM won't start.