Hi,
The zvols on my raidz1 pool that store my VMs are set to a volblocksize of 32K, so ZFS uses 32KB blocks to store them.
My VMs use virtio SCSI, and these virtual drives are reported to the guests with a 512B LBA size.
How does the virtualization layer handle this so that data from the guest ends up stored on the pool?
Will virtio waste capacity by writing a full 32KB block to the pool for every 512B block the virtual drive writes, or does some conversion take place so that 64 of those 512B blocks are combined into one 32KB block that ZFS can store on the zvol?
If there is some conversion, is it fine to just use the guest OS defaults, or should I change the block size of the guests' filesystems so that it matches the 32KB volblocksize?
What is good practice to avoid overhead, wasteful padding and write amplification? To make the padding part concrete, I sketched my current understanding of the raidz1 allocation math below.
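This is only how I understand it (I may well have the formula wrong), and since I haven't mentioned my pool width, the disk counts are just examples:

```python
def raidz1_alloc(volblocksize, ashift, ndisks):
    """Rough model of how many sectors one zvol block occupies on raidz1.

    Assumption: one parity sector per stripe of (ndisks - 1) data sectors,
    and the total padded up to a multiple of 2 sectors (parity + 1).
    """
    sector = 1 << ashift                     # 4096 B for ashift=12
    data = -(-volblocksize // sector)        # ceil division: data sectors
    parity = -(-data // (ndisks - 1))        # parity sectors
    total = data + parity
    padding = (-total) % 2                   # pad up to a multiple of 2
    return data, parity, padding, total + padding

# example widths only -- my actual pool width isn't stated above
for ndisks in (3, 4, 5):
    d, p, pad, alloc = raidz1_alloc(32 * 1024, 12, ndisks)
    print(f"{ndisks} disks: {d} data + {p} parity + {pad} pad "
          f"= {alloc} x 4K sectors ({d / alloc:.0%} of the space is data)")
```

If that is roughly right, the amount of padding depends a lot on the pool width, which is part of why I'm asking.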
Right now my setup looks like this:
SSDs (512B logical / 4K physical sectors) <-- ZFS pool (ashift=12, so 4K) <-- zvol (volblocksize=32K) <-- virtio SCSI virtual drive (512B LBA) <-- ext4 partitions (4K block size)
I would think that this many layers with different block sizes would cause a lot of overhead due to padding.
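For example, my write amplification worry looks something like this in the worst case: a single 4K ext4 write that lands in a 32K zvol block that isn't cached. The 3-disk raidz1 width is just an example, and I'm ignoring the ext4 journal and any batching ZFS does within a transaction group:

```python
# worst-case back-of-the-envelope numbers for one 4K guest write
ashift_bytes = 4096            # pool sector size (ashift=12)
volblocksize = 32 * 1024       # zvol block size
guest_write  = 4096            # one ext4 block written by the guest

# ZFS is copy-on-write, so it rewrites the whole 32K block the 4K write
# falls into, and may first have to read it back if it isn't in ARC.
read_back  = volblocksize
data_out   = volblocksize
parity_out = 4 * ashift_bytes  # 8 data sectors over 2 data disks -> 4 parity sectors

print(f"{guest_write} B guest write -> up to {read_back} B read back, "
      f"{data_out + parity_out} B written "
      f"({(data_out + parity_out) / guest_write:.0f}x amplification)")
```

Is that a realistic way to think about it, or does caching and txg batching make this mostly a non-issue in practice?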