Hi there.
I have an LVM datastore on top of an MDRAID5 array built from 5 SSD drives with a 4k chunk size (16k stripe width).
When I clone or migrate a virtual disk to the datastore on top of the MDRAID5 array, data is read from the source datastore in 128k blocks into the system page cache/buffer and then flushed to the MDRAID5 datastore as a series of 4k blocks that are not aligned with the MDRAID5 stripe. As a result, I see read-modify-write (RMW) operations, which degrade performance.
I haven't been able to find any options to set the block size for flushing data from the system page cache.
Is there any option in Proxmox to perform clone or migrate operations with the O_DIRECT flag? Using this flag would bypass the system page cache.
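To make it clearer what I'm asking for, here is a minimal C sketch (not Proxmox code; the LV path below is just a placeholder from my setup) of the kind of I/O I'd like clone/migrate to perform: O_DIRECT writes sized and aligned to the full 16k stripe, which bypass the page cache and avoid the RMW penalty.

/*
 * Minimal sketch: open the target with O_DIRECT and issue writes that are
 * a whole multiple of the 16k stripe, so md never has to read-modify-write.
 * Device path and sizes are placeholders for my 5-disk, 4k-chunk RAID5.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define STRIPE_SIZE (16 * 1024)   /* 4k chunk * 4 data disks */

int main(void)
{
    const char *target = "/dev/vg0/vm-100-disk-0";  /* placeholder LV path */
    int fd = open(target, O_WRONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires the buffer, offset and length to be aligned;
     * aligning everything to the full stripe avoids RMW entirely. */
    void *buf;
    if (posix_memalign(&buf, STRIPE_SIZE, STRIPE_SIZE) != 0) { close(fd); return 1; }
    memset(buf, 0, STRIPE_SIZE);

    ssize_t n = pwrite(fd, buf, STRIPE_SIZE, 0);   /* offset 0 is stripe-aligned */
    if (n != STRIPE_SIZE) perror("pwrite");

    free(buf);
    close(fd);
    return 0;
}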
I understand that using a filesystem on top of MDRAID5 could resolve this issue, but I prefer to stay with the LVM datastore scenario.