Hi all,
Since the DRBD GitHub repository seems a bit inactive, I hope someone here can help us out, since we're working with QEMU and DRBD.
Currently we are running two servers in a DRBD configuration.
The machine we're logged into in this screenshot is v02, where drbd1 is Primary:
![image](https://user-images.githubusercontent.com/15122459/269889356-b9bcc5a1-269f-4cc0-9ca8-042cc737c9b2.png)
On this machine we run a Windows Server 2022 guest with the following startup options
(originally we use `aio=native`, `cache.direct=on`):
![image](https://user-images.githubusercontent.com/15122459/269889961-efa7a9af-e624-46ef-a7cd-03f985b39d71.png)
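For reference, a command line of the general shape we mean would look like the sketch below. Only `aio=native` and `cache.direct=on` are from our actual setup; the disk path, node name, and device model are placeholders (our real options are in the screenshot above):

```shell
# Hypothetical sketch only -- paths and node names are placeholders.
qemu-system-x86_64 \
  -blockdev driver=host_device,node-name=disk0,filename=/dev/drbd1,aio=native,cache.direct=on \
  -device virtio-blk-pci,drive=disk0
```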
Our disk is a WD NVMe drive.
Of course we want to see the high read/write numbers that this drive offers.
We ran a lot of benchmarks with the various flags mentioned above enabled and disabled.
We also switched the device from `virtio-blk` to `nvme`, which resulted in lower performance.
The problem:
When DRBD is synced and connected, the benchmark results look like this:
![image](https://user-images.githubusercontent.com/15122459/269891249-3fc772b5-c93b-4324-ac7a-b014dc448bcc.png)
After running `drbdadm disconnect drbd0` and `drbdadm disconnect drbd1`, the benchmark results look like this:
![image](https://user-images.githubusercontent.com/15122459/269893000-d813a417-0a82-4076-8f91-e239448e5b02.png)
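The full test cycle we ran, sketched (resource names `drbd0`/`drbd1` as in our setup; the benchmark itself runs inside the guest):

```shell
# Disconnect both resources, benchmark, then reconnect and let them resync.
drbdadm disconnect drbd0
drbdadm disconnect drbd1
# ... run the benchmark inside the Windows guest ...
drbdadm connect drbd0
drbdadm connect drbd1
drbdadm status   # wait until both peers report UpToDate again
```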
We already adjusted `/etc/drbd.conf`:
- changing the replication protocol (A/B/C),
- changing `max-buffers` and `max-epoch-size`,
- setting `c-plan-ahead` to 0,
- adjusting `resync-rate`,
- adjusting `rate`.
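Concretely, the kind of stanza we experimented with looked roughly like this (resource name and values are illustrative, not our exact config):

```
resource r0 {
  net {
    protocol        C;      # also tried A and B
    max-buffers     8000;   # raised from the default
    max-epoch-size  8000;
  }
  disk {
    c-plan-ahead    0;      # disable the dynamic resync controller...
    resync-rate     100M;   # ...and use a fixed resync rate instead
  }
}
```

As far as we understand, `c-plan-ahead` and `resync-rate` only govern resynchronization, not steady-state replication, which may be why they made no difference here.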
Unfortunately, none of this changed the low write speeds when DRBD is connected and in sync.
What could be the bottleneck that caps the write speed at ~115 MB/s when DRBD is functioning normally?