Recent content by Dragon19

  1.

    Passing usb device causes host to freeze

I have an X670 ProArt Creator motherboard. When I pass through a USB device (not the controller), libvirtd appears to freeze or something. The host is still responsive, but the VPS controls are frozen. I have verified this on a bare-metal install as well, so it's not just Proxmox. Is this a...
  2.

    Uninstall ceph?

Hi all, I'd like to completely remove Ceph from my installation without reinstalling Proxmox, and transition over to regular hard drives using NFS with RDMA. What would be the best way to remove Ceph without borking the installation?
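A minimal sketch of the removal, assuming Proxmox's `pveceph` tooling is present; this is destructive, so migrate all data off Ceph and verify against your own cluster state first:

```shell
# Stop the Ceph daemons on each node before purging.
# WARNING: destructive -- make sure no guest storage lives on Ceph anymore.
systemctl stop ceph-mon.target ceph-mgr.target ceph-osd.target

# Proxmox ships a purge helper that removes the Ceph configuration.
pveceph purge

# Optionally remove the packages themselves afterwards.
apt purge ceph ceph-mon ceph-mgr ceph-osd
```

Run this per node; the storage entries in `/etc/pve/storage.cfg` that pointed at the old pools still need to be removed by hand.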
  3.

    [Latest] Proxmox - USB disconnects freeze KVM

Hello, the kernel is 5.4.41-1-pve and the PVE manager is 6.2-4.
  4.

    [Latest] Proxmox - USB disconnects freeze KVM

Using Proxmox with either a Windows or a Linux guest, disconnecting a USB device freezes the VM. This did not happen on older versions of Proxmox. I'm currently using a four-port USB KVM switch.
  5.

    10gbps streaming server under proxmox?

Cool. Read the explanation from Google and decide if it suits your use case :)
  6.

    10gbps streaming server under proxmox?

BBR only needs to be configured on the VM. BBR is not a magic pill. It helps the most over long distances, so if you have viewers in, say, Germany and you host in America, they will be able to stream at a higher resolution with no buffering. This is usually how BBR works. BBR also only works over...
  7.

    Terrible Ceph IOPS performance

If you bond them, you can double the bandwidth. But the point of dedicated public + private networking in Ceph is that the private network acts as the backend for data transfer between OSDs, while the public network is your CephFS/RadosGW network. I would test both out. Bonding would probably be more rewarding...
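For reference, a bond on a Proxmox node is only a few lines of ifupdown2 config in `/etc/network/interfaces`; a sketch assuming two hypothetical 10GbE ports named `enp1s0f0`/`enp1s0f1` and an LACP-capable switch:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
```

With `802.3ad` (LACP), a single TCP flow still tops out at one link's speed; the doubling shows up in aggregate across many OSD connections, which is exactly Ceph's traffic pattern.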
  8.

    Degraded Windows VM-Performance on SSD with Ceph-Cluster

You have 30 SSDs being shared over a single 10Gbps network? Do you have a separate private network, so you have 20Gbps of capacity per node? You are likely running into bottlenecks because of the network.
  9.

    Temperature

Nope! I requested this feature weeks ago and was pretty much shot down. No idea why it isn't available, considering they record everything else a system admin needs.
  10.

    Terrible Ceph IOPS performance

Yes, OSDs have their own cache, so you're probably seeing that. The problem is that you have a 10Gbps network and your SSD/NVMe pool is maxing out the bandwidth. If you had a 56Gbps FDR InfiniBand setup, you would probably see it hitting 30Gbps+ with significantly higher IOPS. Depending on pool size...
  11.

    Terrible Ceph IOPS performance

Also, do not expect native speed with Ceph. It's going to be slower than a standard setup. And by putting NVMe drives and SSDs in the same pool, your max speed is what the weakest SSD can do, which slows down the entire pool. So unless you're using an enterprise-grade SSD, well, some SSDs can be slower than...
  12.

    Terrible Ceph IOPS performance

Your bandwidth is maxed, so it's not possible to increase the IOPS at that block size. If you run fio with smaller block sizes, you will see more IOPS.
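For example, a hedged fio invocation to measure small-block IOPS rather than bandwidth (the file path and job parameters here are placeholders — point them at your own Ceph-backed mount):

```shell
# 4k random reads surface the IOPS ceiling; large sequential blocks
# surface the bandwidth ceiling instead.
fio --name=randread-4k --filename=/mnt/cephtest/fio.bin --size=4G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting
```

Sweeping `--bs` from 4k up to 4M in the same job makes the trade-off visible: IOPS fall as block size grows while throughput rises until the network saturates.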
  13.

    Terrible Ceph IOPS performance

You have a 10Gbps network and your bandwidth is 8Gbps. You do not get the full 10Gbps with Ethernet; there is protocol overhead to account for. 8Gbps is effectively the max speed.
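As a rough sanity check on the overhead (assuming a standard 1500-byte MTU, no jumbo frames), the framing math can be done in one line of shell arithmetic:

```shell
# Theoretical TCP goodput on 10GbE at a 1500-byte MTU:
# each 1538-byte on-wire frame (1500 payload + 14 Ethernet header + 4 FCS
# + 8 preamble + 12 inter-frame gap) carries 1460 bytes of TCP payload
# (1500 minus the 20-byte IP and 20-byte TCP headers).
echo $(( 10000 * 1460 / 1538 ))   # Mbps of usable TCP payload
```

Framing alone leaves roughly 9.5Gbps, so TCP options, retransmits, and the storage/replication stack account for the rest of the gap down to the ~8Gbps seen in the benchmark.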
  14.

    Terrible Ceph IOPS performance

Found your problem. You do see that your bandwidth is maxed in the benchmark, correct? It can't go any faster because of that.
  15.

    10gbps streaming server under proxmox?

It's two lines in sysctl, not hard to disable ;)
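The two lines being referred to are presumably the standard BBR settings; they go in `/etc/sysctl.conf` (or a file under `/etc/sysctl.d/`) and are applied with `sysctl -p`. Removing them, or setting the congestion control back to `cubic`, disables BBR again:

```
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

`fq` is the packet scheduler BBR was designed to pair with on older kernels; the setting only needs to exist on the VM doing the serving, per the post above.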