RMW on MDRAID5 array when cloning/migrating a virtual disk

ramm

New Member
Mar 14, 2024
Hi there.

I have an LVM datastore on top of an MDRAID5 array with the following geometry: 5 SSD drives with a 4k chunk size, i.e. a 16k full-stripe width across the 4 data drives.

When I clone or migrate a virtual disk to the datastore on top of the MDRAID5 array, I see that data is read from the source datastore in 128k blocks into the system page cache, and then flushed to the MDRAID5 datastore as a series of 4k blocks that are not aligned to the MDRAID5 stripe. As a result, I see read-modify-write (RMW) operations, which degrade performance.
I haven't been able to find any options to set the block size for flushing data from the system page cache.
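To make the alignment problem concrete, here is a minimal sketch (the geometry constants match the array above; `causes_rmw` is just an illustrative helper of my own, not anything from md or Proxmox):

```shell
# Hypothetical geometry matching the array described above.
chunk=4096                      # per-disk chunk size (4k)
data_disks=4                    # 5-disk RAID5 = 4 data + 1 parity
stripe=$((chunk * data_disks))  # 16k full-stripe width

# A write avoids read-modify-write only if it covers whole stripes:
# both its offset and its length must be multiples of the stripe width.
causes_rmw() {
    off=$1; len=$2
    if [ $((off % stripe)) -ne 0 ] || [ $((len % stripe)) -ne 0 ]; then
        echo yes
    else
        echo no
    fi
}

causes_rmw 4096 4096    # yes: a 4k flush is a partial stripe -> parity RMW
causes_rmw 0 131072     # no: 128k = 8 full stripes, parity computed directly
```

So the 128k reads on the source side would be fine as-is; it is the 4k flush granularity on the target side that forces the partial-stripe writes.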

Is there any option in Proxmox to perform clone or migrate operations with the O_DIRECT flag, so that the system page cache is bypassed?
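To illustrate the kind of behavior I mean: qemu-img (which, as far as I can tell, Proxmox uses for offline disk copies) accepts cache-mode flags, and cache mode `none` opens the image with O_DIRECT. A manual offline copy could look like this sketch (the LV paths are placeholders, not my real volumes):

```shell
# Sketch of a manual offline copy that bypasses the page cache.
# -T none / -t none set the source/target cache mode to "none" (O_DIRECT);
# -n skips target volume creation because the LV already exists.
# The /dev/vg0/... paths below are placeholders for illustration only.
qemu-img convert -p -n -f raw -O raw \
    -T none -t none \
    /dev/vg0/vm-100-disk-0 /dev/vg0/vm-101-disk-0
```

What I'm asking is whether the equivalent of `-t none` can be configured for the built-in clone/migrate operations.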

I understand that using a filesystem on top of MDRAID5 could resolve this issue, but I prefer to stay with the LVM datastore scenario.
 
