Recent content by eude

  1. BUG: Bandwidth Limit does not work after Migration of VM

    No rate change (unlimited BW), and after the change (limit to 12.5 MB/s), iperf on the VM:
  2. BUG: Bandwidth Limit does not work after Migration of VM

    root@KDALNPPX008:~# tc qdisc | grep 175
    qdisc htb 1: dev tap175i0 root refcnt 2 r2q 10 default 0x1 direct_packets_stat 0 direct_qlen 1000
    auto lo
    iface lo inet loopback
    auto ens1f0
    iface ens1f0 inet manual
    auto ens1f1
    iface ens1f1 inet manual
    auto ens3f0
    iface ens3f0 inet manual
    auto...
  3. BUG: Bandwidth Limit does not work after Migration of VM

    If I migrate the VM, then turn it off and back on, the Bandwidth Limit does NOT work.
    root@KDALNPPX008:~# pveversion -v
    proxmox-ve: 6.4-1 (running kernel: 5.4.119-1-pve)
    pve-manager: 6.4-9 (running version: 6.4-9/5f5c0e3f)
    pve-kernel-5.4: 6.4-3
    pve-kernel-helper: 6.4-3
    pve-kernel-5.4.119-1-pve...
  4. BUG: Bandwidth Limit does not work after Migration of VM

    Hi everyone, we are using a Bandwidth Limit for some of our Customers, set via the Proxmox UI. We discovered that some of our Customers have much higher Download/Upload rates than they should. After some digging, we found that if you migrate a VM that has a Bandwidth rate limit set to, let's...
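    The report above says the limiter silently disappears after migration; one way to verify is to read the htb class back from tc on the target node and compare it with the configured limit. A minimal sketch, assuming the Proxmox convention that VM 175's NIC appears on the host as tap175i0 (taken from the output above) — `get_htb_rate` is a hypothetical helper, not a Proxmox tool:

    ```shell
    # get_htb_rate: throwaway parser that pulls the htb rate out of
    # `tc class show dev <dev>` output; prints nothing if no limiter class exists.
    get_htb_rate() {
        # $1: the captured tc output
        printf '%s\n' "$1" | awk '/htb/ { for (i = 1; i <= NF; i++) if ($i == "rate") print $(i + 1) }'
    }

    # On the target node after migration you would run, e.g.:
    #   get_htb_rate "$(tc class show dev tap175i0)"
    # and expect the configured limit (100Mbit corresponds to the 12.5MB/s
    # setting from the report) rather than empty output.
    ```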
  5. Slow Backup/restore Speed on PBS

    Testing with a VM on another Storage (full SSD, no Ceph) with 2x10Gb NICs was slightly better but still not good (more like ~80 MiB/s read/write). Network is OK (and vice-versa). Back to testing: fio --rw=readwrite --name=testrand --size=5G --direct=1 --bs=64k --rwmixread=50 (read and write...
  6. Slow Backup/restore Speed on PBS

    Freshly installed PBS on 12 cores, 32 GB RAM, with a 2 TB RAID, configured with 2x10Gb NICs; testing showed: Could the issue be connected to this? https://forum.proxmox.com/threads/proxmox-backup-speed-extremely-slow-for-vms-stored-on-ceph.30294/ and this...
  7. Slow Backup/restore Speed on PBS

    I will deploy other Hardware too, to keep testing. Thanks for all your help so far!
  8. Slow Backup/restore Speed on PBS

    Inside VM: Random write with fio to the ZFS Backup Storage:
    Run status group 0 (all jobs):
    WRITE: bw=277MiB/s (291MB/s), 277MiB/s-277MiB/s (291MB/s-291MB/s), io=4096MiB (4295MB), run=14772-14772msec
    iostat on the ZFS while writing: Random read with fio to the ZFS Backup Storage:
    Run status...
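    A side note on fio's paired units in the results above: MiB/s and MB/s differ by a factor of 1048576/1000000, so 277 MiB/s works out to roughly 290 MB/s (fio prints 291 because it rounds from the unrounded internal figure). A quick check — `mib_to_mb` is just a throwaway helper:

    ```shell
    # Convert fio's MiB/s figure to MB/s (1 MiB = 1048576 bytes, 1 MB = 10^6 bytes).
    mib_to_mb() {
        awk -v mib="$1" 'BEGIN { printf "%.0f\n", mib * 1048576 / 1000000 }'
    }

    mib_to_mb 277   # ~290, in line with fio's rounded 291MB/s
    ```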
  9. Slow Backup/restore Speed on PBS

    Yes, around 240-270 MiB/s is what I'm expecting with this ZFS and the old CPU I'm using on the PBS. ################ Random read and write inside VM: which VM? ################ Backup parallel performance from multiple nodes: the speed dropped on all parallel Backups; tried 3 Nodes in parallel...
  10. Slow Backup/restore Speed on PBS

    The PBS is directly on ZFS; benchmark on the PBS:
  11. Slow Backup/restore Speed on PBS

    root@pve06:~# zpool status -v backup
      pool: backup
     state: ONLINE
      scan: scrub repaired 0B in 0 days 14:41:04 with 0 errors on Sun Mar 14 15:05:06 2021
    config:
            NAME     STATE  READ WRITE CKSUM
            backup   ONLINE    0     0     0...
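    When benchmarking repeatedly, it can be worth scripting a quick health gate on the pool first. A minimal sketch that extracts the `state:` field from `zpool status` output — `pool_state` is a hypothetical helper, not a zfs subcommand, and "backup" is the pool name from the output above:

    ```shell
    # pool_state: throwaway parser that prints the state field ("ONLINE",
    # "DEGRADED", ...) from captured `zpool status -v <pool>` output.
    pool_state() {
        # $1: the captured zpool status output
        printf '%s\n' "$1" | awk '$1 == "state:" { print $2; exit }'
    }

    # Usage on the PBS host, skipping the benchmark on an unhealthy pool:
    #   [ "$(pool_state "$(zpool status -v backup)")" = "ONLINE" ] || echo "pool not ONLINE, skipping benchmark"
    ```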
  12. Slow Backup/restore Speed on PBS

    Hello, we run a 14-Node Proxmox Cluster (6.3.3) with an attached 7-Node Ceph Cluster (Nautilus 14.2.16) as Storage. Our Proxmox Backup Server (1.0.8, but also tried with 1.0.11) runs as bare metal inside the Proxmox Cluster with an approx. 70TiB ZFS as Backup storage (128GB Memory, has read+write cache...