Recent content by avn

  1.

    Migrate doesn't check target host memory

    When the target host doesn't have enough free memory, the migration process doesn't stop. It tries to allocate memory on the target host, which results in the OOM killer killing the largest VM, then another, and another. In earlier versions of Proxmox VE it wasn't like that. Migration would give an error "cannot...
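    As a rough illustration of the kind of manual pre-flight check this implies, free memory on the intended target node can be inspected by hand before migrating; a minimal sketch ("target-node" is a placeholder hostname, and this is not how PVE itself decides):

      # check free memory on the intended target node before migrating
      ssh target-node free -h
      # the same figures are exposed through the PVE API
      pvesh get /nodes/target-node/status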
  2.

    Backup speed limited to 1 Gbps?

    I've tried it and posted results here: https://forum.proxmox.com/threads/backup-speed-limited-to-1-gbps.74209/#post-331218 Backup without compression gives up to 3 Gbps.
  3.

    Backup speed limited to 1 Gbps?

    root@adm61:/home/avn# dd if=/dev/vms4/vm-112-disk-0 of=/dev/null bs=1M status=progress
    53406072832 bytes (53 GB, 50 GiB) copied, 146 s, 366 MB/s
    51200+0 records in
    51200+0 records out
    53687091200 bytes (54 GB, 50 GiB) copied, 149.697 s, 359 MB/s
    root@adm61:/home/avn# dd...
  4.

    Backup speed limited to 1 Gbps?

    I didn't say read speed is unlimited, but it's clearly higher than 1 Gbps. Without compression, backup gives up to 3 Gbps (as you can see in my previous posts). Lzo and pigz also perform ~1.5 times better than zstd. Edit: Ok, let's check iperf...
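    A raw network check of the kind mentioned is usually done with iperf running as a server on one end and a client on the other; a minimal sketch, assuming iperf3 is installed on both sides ("nas-host" is a placeholder):

      # on the NAS, start a listener
      iperf3 -s
      # on the PVE node, measure throughput for 30 seconds
      iperf3 -c nas-host -t 30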
  5.

    Backup speed limited to 1 Gbps?

    So, since the last post I've enabled multithreading for zstd (set 'zstd: 0' in /etc/vzdump.conf). Now zstd can use up to half of the available cores instead of one. But the setting hardly changed anything. Backup performance with zstd is still worse than with any other compression algorithm. It's rarely...
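    For reference, the setting in question is a single line in the vzdump configuration; a minimal excerpt of what /etc/vzdump.conf would contain (other options omitted):

      # /etc/vzdump.conf (excerpt)
      # zstd thread count: 0 means "use half of the available cores"
      zstd: 0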
  6.

    Backup speed limited to 1 Gbps?

    You're right. Only zstd compression is limited to 1 Gbps:
    1 - lzo: max 1.5 Gbps
    2 - no compression: max 2.85 Gbps
    3 - pigz: max 1.7 Gbps
    4 - zstd: max 0.98 Gbps
    No bandwidth limits are set in the GUI or vzdump.conf.
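    If a cap were configured, it would normally appear as a bwlimit entry; a quick way to confirm none is set (bwlimit values, where present, are in KiB/s):

      # look for any configured backup bandwidth limit
      grep -i bwlimit /etc/vzdump.conf /etc/pve/datacenter.cfg 2>/dev/null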
  7.

    Backup speed limited to 1 Gbps?

    The NAS is used only for backups. All VMs and hosts run from SAS storage. Between the hosts and the NAS - no. The NAS is FreeNAS (based on FreeBSD). But there's no issue with the NAS. As I said, write speed to the NAS is not limited: multiple hosts can write backups at up to 3 Gbps in total, 1 Gbps each. Also, as you can see...
  8.

    Backup speed limited to 1 Gbps?

    Hi, this is how backup and restore look on a 10 Gbps network: As you can see, backup speed is capped at exactly 1 Gbps. Question: why is backup speed limited, and how can I remove this limit? PVE version: 6.2-10; VMs on SAS storage, storage type: LVM; read speed from VM storage is not limited...
  9.

    Root file system read-only

    Ismael, good thought. Maybe I just need to set errors=continue. I have backups of system volumes anyway.
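    For reference, that error behaviour can be set either as a mount option or as the filesystem default; a minimal sketch, assuming an ext4 root (the device path is a placeholder):

      # /etc/fstab (placeholder device): continue on errors instead of remounting read-only
      /dev/mapper/vg0-root  /  ext4  defaults,errors=continue  0  1
      # or store it as the filesystem default (-e accepts continue|remount-ro|panic)
      tune2fs -e continue /dev/mapper/vg0-root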
  10.

    Root file system read-only

    Multipath, dmsetup, pvs, vgs, lvs don't show anything out of the ordinary. All seems OK. Dmesg and journalctl show that multipath lost one path, then another. The FS was remounted read-only, then both paths were restored. Here is a cut from the journal: Jan 15 04:30:38 adm55 kernel: hpsa 0000:86:00.0: scsi...
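    The checks referred to are roughly the following; a short sketch of the usual commands (device names differ per node):

      multipath -ll        # path state for each multipath device
      dmsetup table        # device-mapper tables backing those devices
      pvs; vgs; lvs        # LVM physical volumes, volume groups, logical volumes
      journalctl -k | grep -Ei 'hpsa|multipath|read-only'   # kernel messages around the event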
  11.

    Root file system read-only

    I agree, apparently when an HDD dies, there's some critical delay in disk IO. And it's not that the FS switched to RO by itself, but that the underlying block device became write-protected. I don't know what precisely is causing this - device mapper, multipathd, or LVM. Removing the HDD will lead to another...
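    Whether the underlying block device really went write-protected can be checked directly; a small sketch (the mapper name is a placeholder):

      # prints 1 if the device is read-only, 0 if writable
      blockdev --getro /dev/mapper/vg0-root
      # the LVM attribute column also shows write permission (w/r) per LV
      lvs -o lv_name,lv_attr,devices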
  12.

    Root file system read-only

    All PVE nodes are connected to the HPE storage via SAS. There are two SAS links per node, one for each controller on the storage, for redundancy. Multipath is configured on all nodes.
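    The two-links-per-node layout can be verified from each node; a brief sketch of the usual checks (no output shown here, device naming differs per setup):

      multipath -ll            # each MSA LUN should list two paths, one per controller
      multipathd show paths    # live path state as seen by the multipath daemon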
  13.

    Root file system read-only

    Hello, we have a Proxmox cluster with shared HPE MSA 2050 storage. The nodes don't have local disks, so the root file system is located on the storage. Not every time, but occasionally when a hard drive on the storage dies, the root file system becomes read-only on one of the nodes (it's different nodes, not the...
  14.

    Backup slowing down

    I found another way to restore backup speed. It's enough to restart just one node, even a node without any VMs, even a node connected to the storage through a SAS switch (so the storage doesn't know anything about the restart). Speed increases on every node in the cluster exactly when PVE on the restarted node...
  15.

    Backup slowing down

    So, I moved the whole cluster to the MSA2040. It didn't help; the problem is still there. About caching: I've tested read speed by running "dd if=/dev/vmsX/vm-XXX-disk-X of=/dev/null status=progress" on the physical nodes for many virtual machines. The results are always multiple times higher than...
