10 Gbps streaming server under Proxmox?

openaspace

Member
Sep 16, 2019
382
8
23
Italy
Hello,
Can a 10 Gbps streaming server run under Proxmox? Could Proxmox be a bottleneck, in terms of network performance, for the ~1000 connections the VPS will receive?

Thank you.
 

openaspace

We have been running KVM VMs with Icecast streaming servers for years and serve several thousands of listeners. No network issues.
Thank you.
I need to work with video, which is heavier: streams of 2.5 Mbps per viewer, with up to 1000 viewers.

How many simultaneous incoming connections do you manage on a single VPS with the Proxmox firewall?
I ran into packet-drop problems using pfSense and similar firewalls in front of the VPS.

Thank you
 

Dragon19

Member
Jan 4, 2020
48
7
8
31
Thank you.
I need to work with video, which is heavier: streams of 2.5 Mbps per viewer, with up to 1000 viewers.

How many simultaneous incoming connections do you manage on a single VPS with the Proxmox firewall?
I ran into packet-drop problems using pfSense and similar firewalls in front of the VPS.

Thank you

Simulate it with a script. It's the only way to know for sure.

Everyone has different hardware and configurations, so YMMV.
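A minimal load-generation sketch along those lines, assuming an HTTP(S) stream endpoint. The URL, client count, and per-viewer bitrate below are placeholders to adjust for your own setup, not values from this thread:

```shell
#!/bin/sh
# Hypothetical viewer-load sketch: spawn N concurrent HTTP clients that each
# pull the stream and discard the payload. STREAM_URL, CLIENTS, and
# KBPS_PER_VIEWER are placeholders.
STREAM_URL="${1:-http://127.0.0.1:8000/stream}"
CLIENTS="${2:-100}"
KBPS_PER_VIEWER="${3:-2500}"   # 2.5 Mbps per viewer, in kbps

# Rough aggregate bandwidth this run will try to pull:
echo "target load: $((CLIENTS * KBPS_PER_VIEWER / 1000)) Mbps"

i=0
while [ "$i" -lt "$CLIENTS" ]; do
    # --max-time bounds each fake viewer; -s -o /dev/null discards the payload
    curl -s --max-time 60 -o /dev/null "$STREAM_URL" &
    i=$((i + 1))
done
wait   # let all fake viewers finish before exiting
```

Watching the host's NIC counters (e.g. /proc/net/dev) while this runs gives a more realistic picture than a raw throughput test alone.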
 

openaspace

Simulate it with a script
In my limited experience, simulation isn't reality.

With iperf3 you can adjust some network settings, but reality is always different.

Right now I'm live with 500 viewers on a 1 Mbps video stream on a server with only 1 Gbps... theoretically only 500 Mbps, but I see peaks reaching 800 Mbps.
 

Dragon19

In my limited experience, simulation isn't reality.

With iperf3 you can adjust some network settings, but reality is always different.

Right now I'm live with 500 viewers on a 1 Mbps video stream on a server with only 1 Gbps... theoretically only 500 Mbps, but I see peaks reaching 800 Mbps.

Just write a script to simulate the activity. iperf3 isn't going to help with your situation.

I don't see why there would be an issue with 500-1000 livestream viewers @ 1 Mbps, but again, YMMV. It ultimately depends on your hardware and how you've configured your setup. Proxmox itself can handle it, as can the underlying KVM technology.
 

Dragon19

For that matter, virtio can handle 20 Gbps+ with no issue on good hardware. I don't know what that equates to in 1 Mbps streams, but for file transfers it's golden.
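Sustaining multi-gigabit rates with virtio usually also means enabling multiqueue so interrupt load spreads across vCPUs. As an illustrative sketch (the VM ID, MAC address, and queue count below are placeholders, not from this thread), the guest NIC line in a Proxmox VM config might look like:

```
# /etc/pve/qemu-server/<vmid>.conf -- hypothetical example line
# virtio NIC on bridge vmbr0 with 4 queues (typically matched to vCPU count)
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4
```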
 

openaspace

I don't know what that equates to in 1 Mbps streams, but for file transfers it's golden.

Live streams, with live encoding and network congestion, are more delicate.

I also have a big file storage setup where the smallest file is 200 GB, and there are no problems..
but when the stress of a live event is in the middle of it, my approach is different.

First lessons from live events on KVM: don't use a virtualized firewall in front of the streaming server..
.. calculate an overhead of at least 30% of the guaranteed bandwidth..

set large encoder buffers...

With big web file storage it's fine.. you reach 90 MB/s? OK.. you're going at 50 MB/s? It's OK.. the file will arrive in any case..
Live streaming doesn't work like that.
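As a back-of-envelope check, the "at least 30% overhead" rule applied to the numbers mentioned earlier in the thread (1000 viewers at 2.5 Mbps each) works out like this; just a sketch with those figures as inputs:

```shell
# Capacity estimate using the "at least 30% overhead" rule from the post.
VIEWERS=1000
KBPS_PER_VIEWER=2500          # 2.5 Mbps expressed in kbps to keep integer math

BASE_KBPS=$((VIEWERS * KBPS_PER_VIEWER))                 # total stream output
WITH_HEADROOM_KBPS=$((BASE_KBPS + BASE_KBPS * 30 / 100)) # +30% headroom

echo "base:      $((BASE_KBPS / 1000)) Mbps"             # -> 2500 Mbps
echo "provision: $((WITH_HEADROOM_KBPS / 1000)) Mbps or more"   # -> 3250 Mbps
```

So a 2.5 Gbps nominal load should be provisioned as roughly 3.25 Gbps of guaranteed bandwidth.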
 

Dragon19

Live streams, with live encoding and network congestion, are more delicate.

I also have a big file storage setup where the smallest file is 200 GB, and there are no problems..
but when the stress of a live event is in the middle of it, my approach is different.

Well, you didn't mention anything about network congestion... the best way to find out is to test!

Another approach (which I use) is to set up a WireGuard connection to a central server and then distribute the load over multiple other servers using DNS round robin with nuster. My connection at the datacenter is 1 Gbps, but when requests hit the other servers they cache the content and I'm suddenly getting 10 Gbps+ of throughput.
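A minimal sketch of the tunnel side of that setup, using WireGuard's standard config format; every key, address, and hostname below is a placeholder, not a detail from this thread:

```
# /etc/wireguard/wg0.conf on one edge/cache node -- all values hypothetical
[Interface]
PrivateKey = <edge-node-private-key>
Address = 10.8.0.2/24

[Peer]
# the central origin server the caches pull from
PublicKey = <origin-server-public-key>
Endpoint = origin.example.com:51820
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25
```

Each cache node gets its own [Interface] address inside the tunnel network; the DNS round-robin records then point viewers at the cache nodes' public IPs.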
 

openaspace

WireGuard connection to a central server and then distribute the load over multiple other servers using DNS round robin with nuster
With live streaming I need to use a patented, licensed software encoder ($50 per server, plus the streaming software) that processes the incoming signal and sends it to the other servers ($240 per 10 Gbps server), which re-encode the received signal into at least 3 output resolutions.

Finally, 2 central servers (failover) split the incoming viewer connections from the player across the available servers.
 

Dragon19

With live streaming I need to use a patented, licensed software encoder ($50 per server) that processes the incoming signal and sends it to the other servers ($240 per 10 Gbps server), which re-encode the received signal into at least 3 output resolutions.

Finally, 2 central servers (failover) split the incoming viewer connections from the player across the available servers.

Geez. Anyway, I'm not sure what you mean about file transfers at 50 MB/s. You might be experiencing a peering issue at the datacenter. Do you have TCP BBR enabled? If not, I'd enable it for HTTP live streaming. It helps a lot with long-distance connections.

Edit: I read that wrong, but TCP BBR is a godsend for HTTP(S) transfers.
 

Dragon19

Hello, I have tried BBR but I don't see big differences.
Here is the result:

https://forum.proxmox.com/threads/google-bbr-congestion-control-vs-cubic-results.67856/

BBR only needs to be configured on the VM. BBR is not a magic pill. BBR helps the most over long distances: if you have viewers in, say, Germany and you host in America, they will be able to stream at a higher resolution with no buffering. That's typically where BBR shines. BBR also only works over TCP.

Here is an explanation of BBR (Google created BBR, by the way):

https://cloud.google.com/blog/produ...ol-comes-to-gcp-your-internet-just-got-faster

Edit: You also need to reboot after applying BBR, or run sysctl -p ... not sure if you've done that.

I don't know how BBR behaves under iperf, as I don't use iperf. I use actual results from testing my real workload.
 

openaspace

Yes, it's working without a reboot ;)

sysctl net.ipv4.tcp_congestion_control

net.ipv4.tcp_congestion_control = bbr

my conf
Code:
#
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 4194304 16777216
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
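For what it's worth, that conf tunes only the send (wmem) side. The receive-side counterparts are commonly set alongside it; the lines below are an assumption mirroring the posted sizing, not part of the original conf:

```
# Receive-side counterparts (illustrative; mirrors the wmem sizing above)
net.core.rmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4194304 16777216
```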
 
