Hi Proxmox Community,
I'm getting transfer rates of around 2-5 MiB/s between my servers, with some transfers capping out around 10-11 MiB/s. I don't think it's the write performance on the server I'm transferring everything onto, but I could be wrong.
My setup:
3 servers, all older equipment but functional: a PowerEdge R420 and two old Acer Veritons. I'm testing with some older HDDs. The R420 has 4TB Western Digital enterprise HDDs, one of the Acers has a 10TB Seagate, and the last Acer has what I believe is a 2TB Western Digital desktop HDD. My original thinking was that, during my Proxmox testing, I had 3-4 VMs running on the disks while replicating 10 VMs across all these servers, so two servers were basically maxing out their single disk with constant replication on top of running VMs. I've turned off replication for now and am testing the different pieces one at a time: is it ZFS, is it the disk, does EXT4 storage transfer better, what about non-HDDs? (A rough disk-write test I'm using is sketched below.)
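To get a baseline for raw disk write speed on the destination, this is the kind of fio sequential-write test I've been running on the target storage (fio may need apt install fio first, and /tank/fio-test is just a placeholder for whatever dataset or mountpoint you're testing):

fio --name=seqwrite --filename=/tank/fio-test/testfile --size=2G --bs=1M --rw=write --ioengine=psync --end_fsync=1

If that reports well over 100 MiB/s, the disk itself shouldn't be the thing holding transfers down to 10 MiB/s.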
Currently my working theory is that there's an unseen network cap of 100 Mbps somewhere; 100 Mbps works out to roughly 11-12 MiB/s, which lines up with the ceiling I'm seeing.
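To test the raw network throughput between nodes independent of any disks, I'm planning to run an iperf3 test along these lines (the IP below is just a placeholder for the receiving node, and iperf3 may need apt install iperf3):

# on the receiving node
iperf3 -s

# on the sending node
iperf3 -c 192.168.1.10 -t 30

If this also tops out around 90-100 Mbps, that would point at the network (cable, switch port, or NIC negotiation) rather than storage.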
PowerEdge R420 - I've run ethtool on eno1 and on the virtual machine bridge vmbr0. Both report 1000Mb/s, but I notice the bridge shows no auto-negotiation and no duplex setting. Is that required?
Acer Veritons - eno1 looks normal, but vmbr0 reports a speed of 10,000Mb/s with no duplex and no negotiation. I'm not sure why that is, what the correct speeds should be set to, or whether I should even be touching vmbr0.
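For reference, this is how I've been checking the link settings on each host (interface names are from my setup; yours may differ):

ethtool eno1 | grep -E 'Speed|Duplex|Auto-negotiation'
ethtool vmbr0 | grep -E 'Speed|Duplex|Auto-negotiation'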