Help: PBS backup speed stuck at ~1 Gb/s

Testani

Member
Oct 22, 2022
Hi everyone, I installed a bare-metal PBS on a dual 4110 machine with 128 GB of RAM and SSDs.
I tried every possible configuration, switching from ZFS to anything else; as an extreme test I even created a datastore on a single SSD. I'm on a full 10 Gb network: the Proxmox servers have two 10 Gb NICs and the PBS has two 10 Gb NICs, with MTU and network speed tested with iperf.
I attach screenshots of the PBS configuration and the compression used. There is no way I can reach a backup speed higher than 90-100 MB/s. Where can I check and understand where the bottleneck is? Does PBS have a 1 Gb limit somewhere?
 

Attachments

  • p1.png (182.1 KB)
  • p2.png (158.2 KB)
  • p3.png (145.2 KB)
What does your bond status say? What do your ports themselves say? Is the switch configured incorrectly? Is there perhaps a limiter? What does iperf say?
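A quick sketch of how to check that on the PBS host, assuming a Linux bond named bond0 with member NICs named ens1f0/ens1f1 (the names are placeholders, adjust them to your setup):

# bond mode, hash policy and per-slave link state
cat /proc/net/bonding/bond0

# negotiated speed/duplex of each physical 10G port
ethtool ens1f0 | grep -E 'Speed|Duplex|Link detected'
ethtool ens1f1 | grep -E 'Speed|Duplex|Link detected'

# effective MTU on the bond
ip link show bond0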
 
This is the network test:

[  1] local 10.12.0.249 port 47152 connected with 10.12.0.37 port 5001 (icwnd/mss/irtt=14/1448/106)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0134 sec  6.15 GBytes  5.27 Gbits/sec

root@pbs09:~# iperf -c 10.12.0.37
------------------------------------------------------------
Client connecting to 10.12.0.37, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 10.12.0.249 port 41052 connected with 10.12.0.37 port 5001 (icwnd/mss/irtt=14/1448/106)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0141 sec  5.80 GBytes  4.98 Gbits/sec
 
Is it possible that your PVE node is simply at its limit and can't deliver any more? Or have you set a limiter on the backup bandwidth so as not to overload your drives? Have you ever run a benchmark on the PBS datastore?
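For reference, such a benchmark could look roughly like this (the repository name and datastore path are placeholders, adjust them to your setup):

# PBS benchmark (TLS, SHA256, compression, AES) against the datastore
proxmox-backup-client benchmark --repository root@pam@10.12.0.249:yourdatastore

# raw sequential write test directly on the datastore disk, run on the PBS host
fio --name=pbs-ds-test --filename=/mnt/datastore/yourdatastore/fio-test \
    --size=4G --bs=4M --rw=write --direct=1 --ioengine=libaio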
 
Here are screenshots of the speed and the PBS load.
 

Attachments

  • s7.png (95.9 KB)
  • s8.png (135 KB)
It's difficult to help you if you ignore half the questions.

You could have thousands of PVE nodes; what would that change? Nothing.
You have to provide a few more facts: answer the questions, check things, or try things out.
 
Thanks, I have done all the tests. What am I missing? There is no speed limit on the nodes, no I/O delay, and the VMs run smoothly with 1 GB/s of read/write on local SSD disks; the network bandwidth is OK. The PBS benchmark returns these values:

Time per request: 13590 microseconds.
TLS speed: 308.61 MB/s
SHA256 speed: 215.52 MB/s
Compression speed: 373.76 MB/s
Decompress speed: 575.69 MB/s
AES256/GCM speed: 1246.04 MB/s
Verify speed: 161.82 MB/s
+===================================+====================+
| Name                              | Value              |
+===================================+====================+
| TLS (maximal backup upload speed) | 308.61 MB/s (25%)  |
+-----------------------------------+--------------------+
| SHA256 checksum computation speed | 215.52 MB/s (11%)  |
+-----------------------------------+--------------------+
| ZStd level 1 compression speed    | 373.76 MB/s (50%)  |
+-----------------------------------+--------------------+
| ZStd level 1 decompression speed  | 575.69 MB/s (48%)  |
+-----------------------------------+--------------------+
| Chunk verification speed          | 161.82 MB/s (21%)  |
+-----------------------------------+--------------------+
| AES256 GCM encryption speed       | 1246.04 MB/s (34%) |
+===================================+====================+

What can I check?
 
For example, you could evaluate why your PBS apparently can't reach 20 Gb/s via iperf.

Maybe this could help you: https://forum.proxmox.com/threads/lacp-bonding-with-2-x10g-nic-are-giving-10g-traffic-only.111428/
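As a quick sanity check (just a sketch, using the same addresses as in the output above): with LACP a single TCP stream only ever uses one member link, so compare a single-stream run with a multi-stream one:

# single stream - can never exceed one 10G member of the bond
iperf -c 10.12.0.37

# several parallel streams - can spread across both links, depending on the hash policy
iperf -c 10.12.0.37 -P 4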

Check your vzdump config to make sure there is really no limit set: https://pve.proxmox.com/pve-docs/chapter-vzdump.html#vzdump_configuration
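Roughly what to look for on the PVE side (standard paths; the value shown is just an example):

# node-wide vzdump defaults; a bwlimit here is in KiB/s,
# e.g. a line "bwlimit: 100000" would cap backups at roughly 100 MB/s
cat /etc/vzdump.conf

# backup jobs can carry their own bandwidth limit as well
grep -i bwlimit /etc/pve/jobs.cfg /etc/pve/vzdump.cron 2>/dev/null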

You haven't told us yet which switches you use. You should check the switch config to see whether the port is limited by something and whether it is configured correctly.
I also don't know whether you can even achieve 20 Gb/s between two nodes.
You also didn't reveal which MTU you set, whether the switch supports it, and whether it is configured correctly on each node. Wrong MTU settings can cause various problems.
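A quick way to verify the MTU end-to-end, assuming jumbo frames of 9000 (the -s value is the MTU minus 28 bytes of IP/ICMP header):

# run from a PVE node towards the PBS; it fails if the path MTU is smaller than 9000
ping -M do -s 8972 -c 3 10.12.0.249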

There is also no configuration or information about how you integrated the datastore on the PVE side, e.g. whether you use encryption or not.
 
Is only your 10 Gb NIC connected? Because a 1 Gb NIC connected for management can end up being used, IIRC.
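One way to confirm which interface the backup traffic actually leaves on (the address is the PBS IP taken from the iperf output above):

# run on a PVE node; shows the outgoing interface and source address towards the PBS
ip route get 10.12.0.249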
 
