You want your pool's block size (in other words, your zvols' volblocksize) to be way bigger than the sector size of your disks (in other words, the ashift you have chosen), or you will lose most of your capacity to padding overhead. You can't directly see the padding overhead because it is indirect and only affects zvols, not datasets. So ZFS will report that you have 20TB of usable storage, but everything written to a zvol will consume about 66% more space, so after writing 12TB of data to your zvols they will already be 20TB in size. And don't forget that a ZFS pool gets slow as soon as it is more than 80% full, because ZFS uses copy-on-write and always needs a lot of unfragmented free space to operate. So of those 12TB you can actually only use 9.6TB (about 8.73TiB) if you care about performance. And if you want to use snapshots, these need space too, so you might want to store even less data on that pool (for example around 6TiB of data if you want to reserve a third of the usable capacity for snapshots).
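To make that concrete, here is a minimal sketch of the capacity math, assuming the example figures from above (20TB reported as usable, ~66% zvol padding overhead, the 80% rule and a one-third snapshot reserve), not values measured on a real pool:

```python
# Minimal sketch of the capacity math above. The 20 TB reported size and the
# ~66% zvol padding overhead are assumed example figures from the text.

TB = 10**12
TiB = 2**40

reported_usable = 20 * TB       # what ZFS reports as usable capacity
padding_factor = 1.66           # every TB written to a zvol consumes ~1.66 TB

writable = reported_usable / padding_factor          # ~12 TB of real data
performance_safe = writable * 0.8                    # stay below 80% full
with_snapshot_reserve = performance_safe * (2 / 3)   # keep 1/3 for snapshots

print(f"data you can write to zvols: {writable / TB:.1f} TB")
print(f"80% rule:                    {performance_safe / TB:.1f} TB "
      f"({performance_safe / TiB:.2f} TiB)")
print(f"with 1/3 snapshot reserve:   {with_snapshot_reserve / TiB:.1f} TiB")
```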
If you don't want that big padding loss, you want your volblocksize to be at least 8 times your sector size. With ashift=12 you get a sector size of 4K, so you want to increase your volblocksize to at least 32K. If you are using ashift=13, you want the volblocksize to be at least 64K, and so on.
If your volblocksize is only 1 or 2 times the sector size, you lose 50% of your total raw capacity. If your volblocksize is 4 times the sector size, you lose 33% of your total raw capacity. If your volblocksize is 8, 16 or 32 times the sector size, you lose 20% of your total raw capacity.
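If you want to check those percentages yourself, here is a rough sketch of the raidz allocation math for a hypothetical 6-disk raidz1, following the rules described in the blog post linked at the end of this post (one parity sector per stripe of data sectors, and every allocation padded to a multiple of parity+1 sectors):

```python
import math

def raidz_loss(num_disks, parity, ashift, volblocksize):
    """Fraction of raw capacity lost (parity + padding) when writing
    zvol blocks of the given volblocksize to a raidz vdev.

    Simplified rules of the raidz on-disk layout:
      * one parity sector per stripe of up to (num_disks - parity)
        data sectors
      * every allocation is padded to a multiple of (parity + 1) sectors
    """
    sector = 2 ** ashift
    data_sectors = max(1, volblocksize // sector)
    stripes = math.ceil(data_sectors / (num_disks - parity))
    total = data_sectors + stripes * parity
    total = math.ceil(total / (parity + 1)) * (parity + 1)  # padding
    return 1 - data_sectors / total

# 6-disk raidz1 with the ashift/volblocksize combinations from the text:
for ashift, vbs in [(12, 8192), (13, 8192), (12, 16384), (12, 32768), (12, 65536)]:
    loss = raidz_loss(6, 1, ashift, vbs)
    print(f"ashift={ashift}, volblocksize={vbs // 1024}K -> "
          f"{loss:.0%} of raw capacity lost")
```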
So right now you lose the capacity of 1 of your 6 disks to parity and the capacity of another 2 drives to padding overhead, so only 3 of the 6 disks are actually usable. That means you get the same usable capacity as a striped mirror (raid10), but a striped mirror would be way faster and would also provide better reliability.
So using raidz1 only makes sense if you at least increase the volblocksize to 32K, and then you get massive performance problems whenever your guests do reads/writes that are smaller than 32K. Stuff like MySQL or Postgres DBs will really suck on that pool because they do 8K or 16K sync writes.
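As a very rough illustration of why that hurts (a toy model, ignoring the ARC, write aggregation and compression): every sync write smaller than the volblocksize turns into a read-modify-write of a whole block.

```python
def rmw_amplification(write_size, volblocksize):
    """Toy model of a sync write smaller than the volblocksize:
    ZFS has to read the whole block (if it isn't cached), modify it
    and write the whole block back out."""
    extra_read = volblocksize if write_size < volblocksize else 0
    write_amp = max(write_size, volblocksize) / write_size
    return extra_read, write_amp

for db_write in (8 * 1024, 16 * 1024):
    extra_read, amp = rmw_amplification(db_write, 32 * 1024)
    print(f"{db_write // 1024}K sync write on a 32K zvol: "
          f"{extra_read // 1024}K extra read, {amp:.0f}x write amplification")
```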
If you do the math behind ZFS for 6 disks, in theory it looks like this (it might be a bit different in reality because of compression and so on):
| pool layout | lost raw capacity | random IOPS @ 32K+ operations | random IOPS @ 4K operations | sequential write/read throughput @ 32K+ operations | sequential write/read throughput @ 4K operations | drives that may fail |
| --- | --- | --- | --- | --- | --- | --- |
| striped mirror (ashift=12; volblocksize=8K) | 50% (50% parity loss) | 3x | 1.5x | 3x / 6x | 1.5x / 3x | 1-3 |
| raidz1 (ashift=13; volblocksize=8K) | 50% (17% parity loss + 33% padding loss) | 1x | 0.5x | 5x / 5x | 2.5x / 2.5x | 1 |
| raidz1 (ashift=12; volblocksize=32K) | 20% (17% parity loss + 3% padding loss) | 1x | 0.125x | 5x / 5x | 0.625x / 0.625x | 1 |
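The performance factors in that table come from simple vdev math. Here is a sketch of that toy model, again just theory assuming 6 identical disks, no caching and no compression: a striped mirror gives you 3 vdevs for random IO and 6 disks for sequential reads, a raidz1 behaves like a single vdev for random IO and like 5 data disks for big sequential IO, and 4K operations get scaled down by the ratio of operation size to volblocksize.

```python
def theoretical_factors(layout, num_disks, volblocksize, op_size=4096):
    """Toy model behind the table: all factors are relative to a single
    disk, assuming identical drives, no caching and no compression."""
    if layout == "striped mirror":
        vdevs = num_disks // 2
        iops = vdevs                      # one disk worth of IOPS per mirror pair
        seq_write, seq_read = vdevs, num_disks  # reads can use both mirror halves
    else:  # raidz1
        iops = 1                          # the whole vdev acts like one disk
        seq_write = seq_read = num_disks - 1    # one disk worth goes to parity
    small = op_size / volblocksize        # small ops only use part of each block
    return (iops, iops * small,
            seq_write, seq_read,
            seq_write * small, seq_read * small)

for layout, vbs in [("striped mirror", 8192), ("raidz1", 8192), ("raidz1", 32768)]:
    iops, iops4k, w, r, w4k, r4k = theoretical_factors(layout, 6, vbs)
    print(f"{layout}, volblocksize={vbs // 1024}K: "
          f"IOPS {iops}x / {iops4k}x @4K, seq {w}x/{r}x, {w4k}x/{r4k}x @4K")
```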
So you could either increase the volblocksize, which makes your performance much worse for small reads/writes but gets your capacity loss down from 50% to only 20%, or you switch to a striped mirror with way better performance at the same 50% capacity loss. But running that raidz1 with ashift=13 and an 8K volblocksize is pretty much the worst option you could choose.
If these 4TB drives are HDDs, they are already very bad at handling IOPS. VM storage mostly benefits from high IOPS, so IOPS is the most important performance factor and will be the first bottleneck, and there a striped mirror would perform 3 to 12 times better than a properly configured raidz1, resulting in much lower IO delay and faster VMs. So raidz1 is really only an option if you want to sacrifice a lot of performance for a bit more capacity, if you use that pool as cold storage mostly doing very big async sequential reads/writes, or if you use LXCs, which use datasets instead of zvols. If those drives are SSDs, a raidz1 might not be that bad, as long as you don't want to run applications like DBs that do small sync writes. You would still see a very big IOPS drop, but it won't bottleneck as much, because a good SSD can easily do 1000 times the IOPS of a HDD, so even with bad relative IOPS it might still be fast enough for real-world workloads.
See this blog post from the ZFS head developer if you want to learn more about how raidz works in detail at the block level, why there is padding overhead, and how to calculate it:
https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz