Hi all,
I'd like to completely remove Ceph from my installation without reinstalling Proxmox, and transition over to regular hard drives using NFS with RDMA.
What would be the best way to remove Ceph without borking the installation?
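A hedged sketch of the usual teardown order, assuming Ceph was set up through pveceph and that you've already moved all VM disks off the Ceph storage and removed the storage entries from /etc/pve/storage.cfg; the pool/OSD/monitor IDs below are placeholders:

```bash
# Placeholders: "data" = your pool name, 0 = an OSD id, pve1 = this node's daemon id
pveceph pool destroy data          # repeat for every pool
pveceph osd destroy 0 --cleanup    # repeat for every OSD on this node
pveceph mds destroy pve1           # only if you ran CephFS
pveceph mon destroy pve1           # destroy extra monitors first, the last one last
pveceph mgr destroy pve1
systemctl stop ceph.target
pveceph purge                      # wipes local Ceph config and state
```

For the NFS-over-RDMA side, assuming your client kernel has the nfs-rdma (xprtrdma) module, the mount itself is just `mount -t nfs -o rdma,port=20049 server:/export /mnt` (20049 is the standard NFS/RDMA port).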
Using Proxmox with either a Windows or Linux guest, disconnecting a USB device freezes the VM.
This did not happen on older versions of Proxmox.
Currently using a four-port USB KVM switch.
BBR only needs to be configured on the VM. BBR is not a magic pill. It helps most over long distances, so if you have viewers in, say, Germany and you host in America, they will be able to stream at a higher resolution with no buffering. That is usually how BBR works. BBR also only works over...
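For reference, turning it on is just two sysctls on a kernel of 4.9 or newer; a minimal sketch:

```bash
# Enable BBR congestion control plus the fq qdisc it is usually paired with
cat >/etc/sysctl.d/90-bbr.conf <<'EOF'
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
sysctl --system

# Verify it took effect
sysctl net.ipv4.tcp_congestion_control
```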
If you bond them you can double the bandwidth. But the point of dedicated public+private networking on Ceph is that the private network acts as the backend for data transfer between OSDs, while the public network carries your CephFS/RadosGW traffic.
I would test both out. Bonding would probably be more rewarding...
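If you try the bonding route, a minimal sketch for /etc/network/interfaces on Proxmox, assuming two 10G ports (enp1s0f0/enp1s0f1 and the address are placeholders) and a switch configured for LACP:

```
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
```

One caveat: LACP balances per flow, so a single TCP stream still tops out at one link's 10Gbps; the aggregate only shows up across many connections.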
You have 30 SSDs being shared over a single 10Gbps network? Do you have a separate private network, so you have 20Gbps of capacity per node?
You are likely running into bottlenecks because of the network.
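If you do split public and private, it's two lines in ceph.conf (the subnets here are placeholders), and the OSDs need a restart to pick it up:

```
[global]
    public_network  = 10.10.10.0/24   # client/CephFS/RadosGW traffic
    cluster_network = 10.10.20.0/24   # OSD replication and backfill traffic
```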
Nope! I requested this feature weeks ago and was pretty much shot down.
No idea why this feature isn't available considering they record everything else a system admin needs.
Yes, OSDs have their own cache, so you're probably seeing that. The problem is you have a 10Gbps network and your SSD/NVMe pool is maxing out the bandwidth.
If you had a 56Gbps FDR InfiniBand setup, you would probably see that hitting 30Gbps+ with significantly higher IOPS. Depending on pool size...
Also, do not expect native speed with Ceph. It's going to be slower than a standard setup.
And by putting NVMe and SSDs in the same pool, your max speed is what the weakest SSD can do; the slowest device drags down the entire pool. So unless you're using an enterprise-grade SSD, well, some SSDs can be slower than...
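The usual fix is one pool per device class so the NVMe drives aren't held back; a sketch with hypothetical pool and rule names:

```bash
# Ceph auto-detects device classes (hdd/ssd/nvme); check the CLASS column:
ceph osd tree

# One CRUSH rule per class, then pin each pool to its rule
ceph osd crush rule create-replicated ssd-rule  default host ssd
ceph osd crush rule create-replicated nvme-rule default host nvme
ceph osd pool set vm-ssd  crush_rule ssd-rule
ceph osd pool set vm-nvme crush_rule nvme-rule
```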
You have a 10Gbps network and your bandwidth is 8Gbps. You do not get the full 10Gbps with Ethernet; there is protocol overhead to account for. Roughly 8Gbps is the practical max.
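Back-of-the-envelope, assuming a standard 1500-byte MTU and TCP over IPv4 (no jumbo frames):

```
TCP payload per frame: 1500 - 20 (IP) - 20 (TCP)                = 1460 bytes
on-wire frame size:    1500 + 18 (Ethernet) + 20 (preamble+IFG) = 1538 bytes
theoretical goodput:   (1460 / 1538) x 10 Gbps                  ~ 9.5 Gbps
```

So ~9.5Gbps is the hard TCP ceiling, and once replication traffic and congestion are in the mix, landing around 8 in practice is normal.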
Geeze. Anyway, I'm not sure what you mean about file transfers being 50MB/s. You might be experiencing a peering issue at the datacenter. Do you have TCP BBR enabled? If not, I'd enable it for HTTP live streaming. It helps a lot with long-distance connections.
Edit: Read that wrong but TCP BBR is...
Well, you didn't mention anything about network congestion... the best way to find out is to test!
Another approach (which I do) is to set up a WireGuard connection to a central server and then distribute the load over multiple other servers using a DNS round-robin setup with nuster. So my connection...
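The edge side of that topology is a handful of lines of wg-quick config; a sketch with placeholder keys and addresses (the DNS round-robin and nuster caching layer sit in front and aren't shown):

```
# /etc/wireguard/wg0.conf on an edge server, brought up with: wg-quick up wg0
[Interface]
PrivateKey = <edge-private-key>
Address = 10.8.0.2/24

[Peer]
# the central server all the edges tunnel back to
PublicKey = <central-public-key>
Endpoint = central.example.com:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
```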
For that matter, virtio can handle 20Gbps+ with no issue on good hardware. I don't know what that equates to on 1Mbps streams, but on file transfers it's golden.
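If you're chasing rates like that, multiqueue virtio-net is worth enabling; assuming VM ID 100 on bridge vmbr0, something like:

```bash
# One queue per vCPU is the usual guidance; 4 here is just an example
qm set 100 --net0 virtio,bridge=vmbr0,queues=4
```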
Just write a script to simulate the activity (there's a sketch below); iperf3 isn't going to help with your situation.
I don't see why there would be an issue with 500-1000 livestream viewers @ 1Mbps, but again, YMMV. It ultimately depends on your hardware and how you've configured your setup. Proxmox itself can handle it...
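A rough sketch of that kind of simulation, assuming curl is installed; STREAM_URL is a placeholder, and --limit-rate 128k caps each fake viewer at roughly 1Mbps:

```bash
#!/usr/bin/env bash
# Spawn N fake viewers, each pulling at ~1Mbps (128 KB/s).
# Point this at a large file or a media segment; fetching just an
# HLS playlist won't sustain the rate, so treat this as a crude floor.
STREAM_URL="http://your-server.example/live/segment.ts"   # placeholder
VIEWERS=500

for _ in $(seq "$VIEWERS"); do
    curl -s --limit-rate 128k -o /dev/null "$STREAM_URL" &
done
wait
```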