Search results

  1. VMs hung after backup

    Any news? Our cluster is plagued with VM freezes.
  2. pbs and cpu performance

    Any idea how to boost a single backup task to 400-500 MB/s? (See the benchmark sketch after these search results.)
  3. pbs and cpu performance

    Client, yes, backup is limited to about 512 MB/s (ZFS mirror on 2x Intel DC4510); this is the Xeon 6146 host with older PVE.
    INFO: Starting Backup of VM 3457 (qemu)
    INFO: Backup started at 2022-08-09 04:23:19
    INFO: status = running
    INFO: VM Name: SNIP
    INFO: include disk 'scsi0'...
  4. pbs and cpu performance

    Not sure how to test ZFS in such a configuration; using HDDs with a special vdev, it easily handles multiple backup tasks at once.
  5. pbs and cpu performance

    # proxmox-backup-client benchmark --repository SNIP
    SNIP
    Uploaded 799 chunks in 5 seconds.
    Time per request: 6281 microseconds.
    TLS speed: 667.72 MB/s
    SHA256 speed: 486.35 MB/s
    Compression speed: 604.83 MB/s
    Decompress speed: 912.70 MB/s
    AES256/GCM speed: 2291.27 MB/s
    Verify speed: 331.60 MB/s
    ...
  6. pbs and cpu performance

    Any feedback on this?
  7. pbs and cpu performance

    Hi, I'm running PBS with dual 2660 v3 CPUs. It seems that each backup task from PVE 6.4 can achieve about 130-180 MB/s; multiple tasks at the same time add up, so I don't think it is a storage bottleneck. PBS seems to scale poorly across many cores, which is why I have been thinking of swapping the CPUs to 2643 v4...
  8. VMs freezing and unreachable when backup server is slow

    That is a critical issue; it just cannot be that a backup crashes VMs. Is this being worked on? Is there a usable workaround?
  9. hardware recommendations for larger setup

    Hi, I'm considering building a larger PBS with at least 30 TB of disk space. I just wonder what kind of hardware is recommended for:
    - 30 TB storage with an upgrade path to at least 100 TB
    - verify must perform extremely well (max 12 h to verify all data)
    - restore should be able to saturate a 10G link
    - ...
  10. zfs read performance bottleneck?

    It was one of the first things I tested, disabling compression and checksumming. The CPU should not be a bottleneck either; tested with a 6242R, a 5950X and an 8700K.
  11. zfs read performance bottleneck?

    By running fio against the disk directly I do manage to achieve about 600k IOPS, and in a mirror with mdraid about 1.1M IOPS. With ZFS it doesn't matter whether it is a single disk or a mirror, it seems to be capped at about 150-170k IOPS, which actually means about an 80% performance loss with ZFS with 256 just want to...
  12. zfs read performance bottleneck?

    Please ignore the different pool/zvol names, it is a different system; I have just created the benchmark zvol, wrote random data with fio and then ran the fio randread benchmark for 10 minutes. The box is not yet in use. (A sketch of this workflow follows the search results.)
    zfs create -V 100gb zfs-local/benchmark
    # zpool status
      pool: zfs-local
     state...
  13. zfs read performance bottleneck?

    I'm trying to find out why ZFS is pretty slow when it comes to read performance. I have been testing with different systems, disks and settings; testing directly on the disk I am able to achieve reasonable numbers not far from the spec sheet => 400-650k IOPS (P4510 and some Samsung-based HPE)...
  14. Network problem bond+vlan+bridge

    Did some tests with an active-backup bond with 2 ports:
    a) bond/bridge across 2 Broadcom network cards - error
    b) bond/bridge on a single dual-port Broadcom network card - works
    c) bond/bridge across 2 Intel network cards - works
    d) bond/bridge across Broadcom and QLogic network cards - works
    It would...
  15. Network problem bond+vlan+bridge

    Simple workaround for my issue: actually it didn't help; starting a VM caused the issue to come back. To fix it I had to reconnect the network ports to the same card.
  16. Network problem bond+vlan+bridge

    Changing allow-hotplug to auto didn't help, same error. I have tried without updelay - no change. The strange thing is that the issue happens only when using 2 different network cards; 2 ports on the same card work just fine. (A sketch for inspecting the bond state follows the search results.)
  17. Network problem bond+vlan+bridge

    auto lo
    iface lo inet loopback
    allow-hotplug eth2
    iface eth2 inet manual
    allow-hotplug eth4
    iface eth4 inet manual
    auto bond0
    iface bond0 inet manual
        bond-slaves eth4 eth2
        bond-miimon 100
        bond-mode active-backup
        bond-updelay 500
    auto vmbr0
    iface vmbr0 inet...
  18. Network problem bond+vlan+bridge

    Hello, did you find a solution to this problem? I'm having exactly the same issue. I'm running the BCM57416 model; a bond with 2 ports on the same card works, a bond with 2 ports split across 2 different cards doesn't - no data available. BR
  19. [SOLVED] Cluster over WAN

    Any input on that?
  20. [SOLVED] Cluster over WAN

    Hello, how critical is it if a few nodes of a 20-node cluster are located on a remote site with 10 to 20 ms latency? There is no HA or shared storage involved. Will it work? Are there any considerations, e.g. live migration (used rarely)? Thank you. Phil
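
Results 2-7 above revolve around single-task PBS backup throughput. A minimal sketch of the client-side benchmark quoted in result 5, assuming a reachable datastore; the repository string below is a placeholder, not a value from the thread:

    # Local crypto benchmarks only (SHA256, compression, AES256/GCM, verify):
    proxmox-backup-client benchmark

    # Including the TLS upload test against a datastore (placeholder repository):
    proxmox-backup-client benchmark --repository root@pam@pbs.example.org:store1

The single-task rates reported in the thread (roughly 130-512 MB/s) sit in the same range as the SHA256 and compression figures in result 5, which is presumably why the posters suspect a per-task CPU limit rather than storage.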
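
Results 10-13 above benchmark the same NVMe devices first directly and then through a ZFS zvol. A hedged sketch of that workflow, reusing the zvol name from result 12; the device path, block size, iodepth and job count are assumptions, not the posters' exact settings:

    # Baseline: 4k random reads directly against the NVMe device (read-only test).
    fio --name=raw-randread --filename=/dev/nvme0n1 --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=256 --numjobs=4 --group_reporting \
        --time_based --runtime=600

    # Create the benchmark zvol, fill it with random data (this overwrites the zvol),
    # then repeat the random-read test against it.
    zfs create -V 100gb zfs-local/benchmark
    fio --name=zvol-fill --filename=/dev/zvol/zfs-local/benchmark --direct=1 \
        --rw=randwrite --bs=1M --ioengine=libaio --iodepth=16
    fio --name=zvol-randread --filename=/dev/zvol/zfs-local/benchmark --direct=1 \
        --rw=randread --bs=4k --ioengine=libaio --iodepth=256 --numjobs=4 \
        --group_reporting --time_based --runtime=600

Comparing the two randread runs isolates the overhead ZFS adds on top of the raw device, which is the gap (roughly 600k vs 150-170k IOPS) described in result 11.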
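
Results 14-18 above concern an active-backup bond that only works when both ports sit on the same network card. A small sketch, assuming the interface names from the config in result 17 (eth2, eth4, bond0, vmbr0), for inspecting the bond while reproducing the failure:

    # Which slave is currently active, and the MII status of each port.
    cat /proc/net/bonding/bond0

    # Detailed link state of the bond, its slaves and the bridge.
    ip -d link show bond0
    ip link show eth2
    ip link show eth4
    ip -d link show vmbr0

    # Follow ifupdown and kernel messages while triggering the error
    # (e.g. while starting a VM, as in result 15).
    journalctl -f -u networking
    dmesg -w

The bonding proc file also reports a per-slave link failure count, which makes it easy to see which port the kernel thinks went down in the two-card setup.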
