Search results

  1. Live migration failed

    Yes it was lacking the 'active mirroring', and in the new successful migration I do see it. ... so case closed I guess. Thanks
  2. Live migration failed

    When I started the migration the originating server had 8.4.17. But the VM was started possibly a long time ago; not sure I can find that version, but something 8.x. I did find these in the system log: QEMU[1545695]: kvm: ../block/io.c:1960: bdrv_co_write_req_prepare: Assertion...
  3. Live migration failed

    In https://lists.proxmox.com/pipermail/pve-devel/2024-July/064982.html it says the patch is applied... Unfortunately this is still an issue with a 8.4 -> 9.1.6 migration. The annoying part being that the vm is not running anymore after the failure. The workaround mentioned above still works...
  4. pveproxy stuck

    TLDR: Solved with `killall -9 pmxcfs; pmxcfs` I did some more digging... root@nodeB:~# mount -o remount /etc/pve /bin/sh: 1: /dev/fuse: Permission denied Node B~# find /etc/pve .. lists normal output ... with "/etc/pve/priv" being the last line and then it hangs ls /etc/pve/priv/lock/...
  5. pveproxy stuck

    Similar situation here. It started with a cluster node coming back online after physical maintenance (node A). 15:20: node A back online, no running VMs. I noticed a question-mark icon on two other nodes in the cluster of 5 (node B/C). I've had these before and sometimes it fixed itself...
  6. Live migration failed

    Unfortunately I'm still seeing this same issue in a Proxmox 8.0 -> 8.2 live migration (same VM as last time). My workaround is still to just stop the IO-intensive daemon inside that VM during the last phase of the migration.
  7. IO delays on live migration lv initialization

    Thanks @fiona I've now posted about this on https://gitlab.com/qemu-project/qemu/-/issues/1889
  8. IO delays on live migration lv initialization

    @fiona is this related to https://forum.proxmox.com/threads/live-migration-failed.111831/#post-482425 ? Or is that patch already merged in 7.4?
  9. IO delays on live migration lv initialization

    Can someone point me to the code that does this initialization? I see the lvcreate is being done in src/PVE/Storage/LvmThinPlugin.pm alloc_image() but see no code filling it afterwards. I do see /dev/zero only in the free_image() function which is for removing an lv.
  10. IO delays on live migration lv initialization

    Yes the target storage is ssd/nvme so discard should work there. thanks for the links, I had been searching the forum for zero, but not for zeroing :( Unchecking the discard option on the vm disks does indeed skip this initialization step during a migration. But it feels wrong to disable that...
  11. IO delays on live migration lv initialization

    Hi. The migration itself goes just fine. But other VMs on the destination host are negatively affected: I'm seeing delays in storage response time, which in some instances leads to an unresponsive web server. It's not completely locked up, I can ssh in and look around, but storage-intensive...
  12. Live migration failed

    LVM thin pool proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve) pve-manager: 7.2-11 (running version: 7.2-11/b76d3178) pve-kernel-helper: 7.2-12 pve-kernel-5.15: 7.2-10 pve-kernel-5.4: 6.4-7 pve-kernel-5.15.53-1-pve: 5.15.53-1 pve-kernel-5.4.143-1-pve: 5.4.143-1 pve-kernel-5.4.44-2-pve...
  13. Live migration failed

    qm config 144 balloon: 1532 bootdisk: scsi0 cores: 4 ide2: none,media=cdrom memory: 4096 name: ... net0: virtio=AA:B1:E5:96:F4:BD,bridge=vmbr4 numa: 0 onboot: 1 ostype: l26 scsi0: thin_pool_hwraid:vm-144-disk-0,discard=on,format=raw,size=16192M scsi1...
  14. Live Migration of VM with heavy RAM usage fails

    A similar thread, with more info, can be found in https://forum.proxmox.com/threads/live-migration-failed.111831/#post-482425
  15. Live migration failed for seemingly no reason

    A similar thread, with more info, can be found in https://forum.proxmox.com/threads/live-migration-failed.111831/#post-482425
  16. Live migration failed

    I have a similar scenario as this thread, migrating VMs from a 6.x to a 7.x install, upgrading the servers one by one to 7.x... but could also replicate it between two 7.x servers. After many VMs that migrated just fine, I have one that keeps failing. It's a fairly busy monitoring server. I...
  17. Proxmox server as backend of apache reverse proxy

    The trick is in the WebSocket handling. You need the proxy_wstunnel Apache module enabled. Then, besides the normal ProxyPass rules, you need an extra rewrite rule set to handle the connection upgrade. SSLProxyEngine on ProxyPass / https://proxmoxhost.example.com:8006/ max=100 ProxyPassReverse...
  18. What could be the reason a VM migration failed?

    When I tried starting a test VM in the foreground I got 'Could not open '/dev/baressd/vm-209-disk-0': No such file or directory'. The LVM volume is not active. I manually activated it with 'lvchange --activate y baressd/vm-209-disk-0'. Unfortunately this test VM migrated flawlessly. ;) I'll try...
  19. What could be the reason a VM migration failed?

    No, journalctl just shows the same lines as I reviewed in syslog. Do you mean to start it in the foreground on node 1? Would the migration process not start its own variant on node 2 (and that's where it fails)? It's probably less than 1 in 20 migrations that fail. So I'll have to plan some...
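
The Apache reverse-proxy snippet (result 17) is cut off mid-configuration. A sketch of the vhost fragment it describes might look like the following; the hostname proxmoxhost.example.com and the ProxyPass lines are from the post, while the rewrite rules are an assumed reconstruction of the "connection upgrade" handling it mentions (requires mod_rewrite, mod_proxy, mod_proxy_http and mod_proxy_wstunnel):

```apache
SSLProxyEngine on

# Plain HTTPS proxying of the Proxmox web UI and API (from the post)
ProxyPass        / https://proxmoxhost.example.com:8006/ max=100
ProxyPassReverse / https://proxmoxhost.example.com:8006/

# Assumed reconstruction: noVNC/xterm.js consoles use WebSockets, so
# requests carrying an Upgrade header are re-proxied over wss://
RewriteEngine on
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule ^/(.*) wss://proxmoxhost.example.com:8006/$1 [P,L]
```

Without the wstunnel rewrite, the web UI loads but consoles fail on the websocket handshake, which matches the symptom the thread describes.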
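
The `killall -9 pmxcfs; pmxcfs` fix from the "pveproxy stuck" thread (result 4) can be sketched as a small guard script. Only the kill-and-restart pair comes from the post; the responsiveness check wrapped around it is an added assumption for safety:

```shell
#!/bin/sh
# Sketch of the pmxcfs recovery from the "pveproxy stuck" thread.
# Only `killall -9 pmxcfs; pmxcfs` is from the post; the check around
# it is an added assumption.
# WARNING: force-killing pmxcfs takes /etc/pve offline on this node
# until the daemon is back up; use only when the mount is already hung.

PMXCFS_MOUNT=/etc/pve   # standard pmxcfs fuse mountpoint

if [ ! -e "$PMXCFS_MOUNT" ]; then
    echo "no pmxcfs mount on this host; nothing to do"
elif ! timeout 5 ls "$PMXCFS_MOUNT" >/dev/null 2>&1; then
    # A healthy fuse mount answers instantly; a wedged one blocks forever.
    echo "$PMXCFS_MOUNT is hung; force-restarting pmxcfs"
    killall -9 pmxcfs    # a plain SIGTERM no longer works once the mount is stuck
    pmxcfs               # daemonizes itself and remounts /etc/pve
else
    echo "$PMXCFS_MOUNT responds; leaving pmxcfs alone"
fi
```

The SIGKILL is deliberate: as the thread shows, a wedged fuse request queue also hangs normal restarts and remounts.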
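
The inactive-LV symptom in the "What could be the reason a VM migration failed?" thread (results 18 and 19) can be checked and fixed with a short script. The `lvchange --activate y` call and the baressd/vm-209-disk-0 name come from the post; the activity check via lv_attr is an added assumption:

```shell
#!/bin/sh
# Activate an LVM volume that exists but is inactive, as in the
# migration-failure thread. The LV name is taken from the post; the
# lv_attr check around the lvchange call is an added assumption.

LV="baressd/vm-209-disk-0"

if ! command -v lvs >/dev/null 2>&1; then
    echo "LVM tools not installed on this host"
else
    # In lv_attr the 5th character is 'a' when the LV is active.
    ATTR=$(lvs --noheadings -o lv_attr "$LV" 2>/dev/null | tr -d ' ')
    case "$ATTR" in
        "")     echo "$LV not found" ;;
        ????a*) echo "$LV is already active" ;;
        *)      lvchange --activate y "$LV" && echo "activated $LV" ;;
    esac
fi
```

Checking first avoids a needless lvchange on the roughly 19-in-20 migrations where, per the thread, the volume activates normally on its own.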