Hi,
Maybe you have some ideas for optimization or a solution to the problem.
1. A direct 10G network was set up between the two servers for Proxmox replication;
2. To improve the performance of replication and ZFS, I set zfs set sync=disabled and traffic encryption for replication was disabled;
3. ARC set to min=16GB and max=64GB of RAM; the test virtual machines each have a 1TB disk from the ZFS HDD pool (raidz2 with 8 disks);
4. Proxmox 8.2.2
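For reference, the tuning from points 2 and 3 was applied roughly like this (the dataset name rpool/data is a placeholder; substitute your own):

```shell
# Point 2: disable synchronous writes on the replicated dataset
# (placeholder dataset name -- use your own)
zfs set sync=disabled rpool/data

# Point 3: pin the ARC between 16 GiB and 64 GiB (values in bytes),
# persisted via module options; takes effect after initramfs rebuild + reboot
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_min=17179869184
options zfs zfs_arc_max=68719476736
EOF
update-initramfs -u
```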
The problem is a bottleneck on writes: if I don't limit the bandwidth to 100MB/s, I first get a spike to 160MB/s and then it stabilizes at 60MB/s with no recovery, as if something is clogging up. When I set a hard 100MB/s limit, I got a steady 100MB/s and, in consequence, faster copying overall.
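The 100MB/s cap was set on the replication job itself, something like this (the job ID 100-0 is a placeholder for your own job):

```shell
# Limit the replication job's bandwidth to 100 MB/s
# (job ID "100-0" is a placeholder)
pvesr update 100-0 --rate 100
```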
I noticed high I/O delay on the CPU during writes. The question is how to solve this; it is probably not a problem of the ARC being too small. Should I just think about an L2ARC?
Maybe other ideas ?
What happens when the L2ARC is damaged ?
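If I were to try an L2ARC, my understanding is that it would be added as a cache vdev, something like this (the pool name tank and the device path are placeholders):

```shell
# Add an SSD/NVMe as an L2ARC read cache to the pool
# (pool name "tank" and device path are placeholders)
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE

# It can be removed again without data loss, since a cache vdev
# holds no unique data:
# zpool remove tank /dev/disk/by-id/nvme-EXAMPLE
```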
BR,
Robert