Search results

  1. Live-Migration almost freezes Targetnode

    What does this prove to you? 10x more data taking 10x the time doesn't seem unusual. I still think online migrations shouldn't be able to kill the target system with high iowait. Did I maybe misunderstand some of your previous questions?
  2. Live-Migration almost freezes Targetnode

    I tested offline and online migration again and used vnstat -l -i vmbr1 for measuring. A few seconds of overhead are included, but not much. It looks like the qemu drive-mirror path really does transfer all the zero blocks. The real disk usage on this system is about 4 GB. Results below. Offline ...
  3. Live-Migration almost freezes Targetnode

    Current hardware: Prx001, Dell R420, 2 x Intel Xeon E5-2430L, 64 GB RAM, 8 x 300 GB SAS 15k rpm (Seagate Savvio 15K.3 ST9300653SS), ZFS RAID-Z1 + SLOG + L2ARC (NVMe), connected to a PERC H710P Mini (each disk exported as RAID0). Prx002, Dell R730xd, 2 x Intel Xeon E5-2670v3, 256 GB RAM, 24 x 1.8TB...
  4. Live-Migration almost freezes Targetnode

    I tested the following bandwidth settings:
    qm migrate 100 prx001 --with-local-disks --online --bwlimit 100   # ~ 100 KB/s | 0.8 Mbit/s = inconspicuous
    qm migrate 100 prx001 --with-local-disks --online --bwlimit 1221  # ~ 1 MB/s | 8 Mbit/s = inconspicuous
    qm migrate 100 prx001...
  5. Live-Migration almost freezes Targetnode

    The following are benchmark results from a few days ago: I'm not exactly sure if this is the correct and best way to benchmark the pool. I will test the migration with lower values and report back again.
  6. Live-Migration almost freezes Targetnode

    Already tried that; the problem became less noticeable but still occurred. Limited to ~ 500 Mbit/s while the nodes are connected with 10 GbE. Is this a known issue? If so: are there any recommended setups/configurations where this problem doesn't exist? Regards,
  7. Live-Migration almost freezes Targetnode

    Hi, we have a problem with live migrations on Proxmox: whenever we try to live-migrate a VM from node A to node B, node B gets high iowait and its load constantly increases. VMs on node B which have their disks in the same ZFS pool as the migrating VM also become unresponsive due to the high...
  8. Backup randomly stops

    Hello, thank you for getting back to this. I've applied the patch as pasted below: ...
    sub systemd_call($;$) {
        my ($code, $timeout) = @_;
        my $bus = Net::DBus->system();
        my $con = $bus->get_connection;
        my $reactor = Net::DBus::Reactor->main();
        # If DBus is busy, the...
  9. Backup randomly stops

    Hi, I'm trying to back up about 40 VMs with PBS. The backup stops/fails every day at a different VM. From the logs I can't really see the cause. Only the following error is printed:
    Sep 2 09:00:26 prx002 vzdump[15610]: ERROR: Backup of VM 2026 failed - start failed...
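The bandwidth-limit sweep quoted in result 4 can be sketched as a small shell loop. This is only an illustration: the VM ID (100), target node (prx001), and the two bwlimit values (in KiB/s) are taken from the snippet; further values in the original test were truncated. The commands are echoed rather than executed so the sweep can be reviewed first; drop the leading `echo` to run the migrations for real.

```shell
#!/bin/sh
# Sketch of the bwlimit sweep from the search results above.
# VM ID 100, node prx001, and the limits 100 and 1221 KiB/s come
# straight from the quoted snippet; the rest were truncated there.
# Commands are echoed for review; remove 'echo' to migrate for real.
for BWLIMIT in 100 1221; do
    echo qm migrate 100 prx001 --with-local-disks --online --bwlimit "$BWLIMIT"
done
```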
