Hi Fireon,
Just my very personal impression of ZFS:
I really like the idea, the possibilities, and the cross-platform support. Cache tiering is also a great tool, and I like the internal volume manager and, of course, the inline compression.
I evaluated this over the past two months and could not get a stable environment. I started out with virtual machines and a reasonably small amount of data, and it worked great for extracted backups. I then moved to commodity hardware, which crashed randomly on OmniOS, FreeBSD and Proxmox itself after roughly 100 GB of data (the machine hung). I suspected a RAM shortage and upgraded to one of our production servers.

I evacuated that machine (Proxmox 3.4) of all running VMs, plugged in an additional FC card, connected a 12-disk shelf of 450 GB 15k SAS drives and built a raidz2 pool. The machine has 128 GB RAM, 3 TB of internal SSD storage (128 GB of it used as L2ARC) and 24 cores. I worked up to about 100 GB of logical data (13 GB of physical data) at a 4K record size for extracted vma backups (plain and compressed vma files deduplicate poorly). After a while the node got fenced, even though according to arcstat only 59 GB of ARC was in use and the L2ARC held only 4 GB. The crash itself was inside a kernel function.
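For reference, the test pool was set up roughly like this. This is a sketch from memory: the pool name and device names are hypothetical, and the exact commands I ran may have differed slightly.

```shell
# Hypothetical pool/device names; the real shelf was 12 SAS disks over FC.
zpool create backup raidz2 \
  da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
  cache ada0p1                   # 128 GB SSD partition as L2ARC

zfs set recordsize=4K backup     # small records so extracted vma data can dedup
zfs set dedup=on backup
zfs set compression=lz4 backup
```
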
I personally will not use deduplication for our backups, given how badly it behaved in testing. Despite the big machine, I was not able to push more than roughly 100 MB/s to disk, and the load was constantly around 30 (with spikes over 70). I cannot advise using this on normal VM data. I think it is not ready for "small-scale" production yet.
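To put the RAM suspicion in perspective: with dedup on, ZFS keeps one dedup-table (DDT) entry per block, commonly estimated at around 320 bytes each, so a small record size multiplies the overhead. A back-of-the-envelope check for my test (the 320 bytes/entry figure is the usual rule of thumb, not an exact number):

```shell
# Estimate DDT size for 100 GB of logical data at 4K record size.
# ~320 bytes per DDT entry is a common rule of thumb, not an exact figure.
blocks=$(( 100 * 1024 * 1024 * 1024 / 4096 ))      # 26,214,400 blocks
ddt_bytes=$(( blocks * 320 ))
echo "$(( ddt_bytes / 1024 / 1024 )) MiB of DDT"   # ~8000 MiB
```

So the table itself should have fit comfortably into 128 GB of RAM, which matches my observation that the node crashed anyway; plain memory shortage does not seem to be the whole story.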