The big difference between CEPH and ZFS in a cluster is that VM migration doesn't have to move the storage. With CEPH your data is on a shared pool, so it's already available to every node. If you have a small cluster and want to set up ZFS on each node, that works too; you'll just have to migrate the virtual disks along with the RAM when you want to move a VM. With CEPH you only need to migrate the RAM (if the VM is running), so it's much faster to shuffle VMs around the cluster, but you pay for that with somewhat higher storage requirements: CEPH needs a Proxmox boot drive (ideally mirrored) plus an OSD drive per machine (ideally several OSD drives), whereas with ZFS you could just put in a pair of SSDs, mirror them as the boot pool (rpool), and store your VMs right there.
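To make the difference concrete, here's roughly what the two cases look like from the command line. This is just a sketch; the VM ID 100 and node name pve2 are placeholders:

```
# With CEPH (shared pool): only the RAM state is transferred,
# the disks are already visible to the target node.
qm migrate 100 pve2 --online

# With local ZFS: the virtual disks have to be copied to the
# target node too, so the same move takes much longer.
qm migrate 100 pve2 --online --with-local-disks
```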
Added: I just realized this was in the PBS forum. If the iSCSI is meant as a PBS backup target: personally, I have moved from a SAN disk image to NFS for the backing storage, since PBS writes everything out as small chunk files anyway. That changes my storage management from having to grow virtual disks on the SAN to just making sure the NFS export on the NAS has enough space allocated.
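For what it's worth, the NFS setup on the PBS host is only a couple of steps. A rough sketch, with a made-up NAS address and export path:

```
# Mount the NAS export on the PBS host (placeholder server and path;
# in practice you'd put this in /etc/fstab or a systemd mount unit).
mkdir -p /mnt/pbs-nfs
mount -t nfs 192.168.1.50:/export/pbs /mnt/pbs-nfs

# Point a PBS datastore at the mount; PBS then drops its chunk
# files there and capacity is whatever the NAS gives the export.
proxmox-backup-manager datastore create nas-store /mnt/pbs-nfs
```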