Thanks! With the old 8K default, if you use a raidz1/2/3 to store VMs, you always have to increase the volblocksize.
With the new 16K default, on a raidz1/2/3 you only have to increase it once you use more than 3 disks.
The great blog article is gone, but Matt Ahrens's table breaking down capacity loss by volblocksize, raidz type, and number of disks is still available: https://docs.google.com/spreadsheets/d/1tf4qx1aMJp8Lo_R6gpT689wTjHv6CGVElrPqTA0w_ZY/
Keep in mind that the table uses "block size in sectors", not "volblocksize". To get the corresponding volblocksize, multiply the "block size in sectors" by the sector size: 512B for ashift=9, 4K for ashift=12, 8K for ashift=13, and so on.
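The conversion above is just "sectors times 2^ashift bytes". A quick sketch in Python, in case anyone wants to sanity-check a value from the table (the function name is mine, not from the spreadsheet):

```python
def sectors_to_volblocksize(block_size_in_sectors: int, ashift: int) -> int:
    """Convert the table's 'block size in sectors' to a volblocksize in bytes.

    The sector size is 2**ashift bytes:
    ashift=9 -> 512B, ashift=12 -> 4K, ashift=13 -> 8K.
    """
    return block_size_in_sectors * (2 ** ashift)

# Examples:
print(sectors_to_volblocksize(4, 12))   # 4 sectors * 4K  = 16384 (16K)
print(sectors_to_volblocksize(2, 13))   # 2 sectors * 8K  = 16384 (16K)
print(sectors_to_volblocksize(16, 9))   # 16 sectors * 512B = 8192 (8K)
```

So for the common ashift=12 pools, the new 16K default corresponds to the "4 sectors" rows of the table.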
Is that table only for RAIDZ* pools? I'm storing my VM disks in an all-SSD pool containing two mirror vdevs. I read pretty early on to avoid Z1/2/3 for VM storage where possible, to sidestep potential performance complications, and I don't have enough VMs for a mirror pool to feel wasteful.