Search results

  1. LVM: "vgs" takes 5 minutes on one cluster node

    Hi, thanks for the fast reply. If those volumes belong to those two VMs, the data on them is not *that* important, because those are only test instances.
    pveversion -v
    proxmox-ve: 5.4-1 (running kernel: 4.15.18-16-pve)
    pve-manager: 5.4-6 (running version: 5.4-6/aa7856c5)
    pve-kernel-4.15: 5.4-4...
  2. LVM: "vgs" takes 5 minutes on one cluster node

    Hey there, we have a Proxmox cluster with six nodes. Everything worked fine before my vacation. Now I'm having some trouble with LVM on one node. "vgs" takes 5 minutes to complete and shows this:
    vgs
    Couldn't find device with uuid rptkmO-sIg8-pFcT-FD0M-1RXf-a6zP-eZTw1p.
    Couldn't find... (see the diagnostic sketch after this list)
  3. random VM ends up in paused state after "Bulk migrate"

    Thank you guys for the patch; if more testing is needed, just let me know :)
  4. random VM ends up in paused state after "Bulk migrate"

    I've upgraded two of my servers to qemu 5.0-52 and did 4 bulk migrations between those hosts without running into the issue. I've tested bulk migration on two servers with qemu 5.0-51 and hit the bug on the first bulk migration. So far I would say this looks really good with qemu 5.0-52.
  5. random VM ends up in paused state after "Bulk migrate"

    Nice, sounds promising. While my cluster is kinda productive, I can still run tests on it. Looking forward to doing so! I'm unsure about the CPU & network load when this happens, but I'll have a look at it during the next bulk migrate.
  6. random VM ends up in paused state after "Bulk migrate"

    I hope to get 10 GBit soonish, but since adamb has the same problem I doubt it will help. Also, I've seen VMs migrate with a lot more than 18 ms before and they were fine. My VMs are 50 GB to 1 TB in size. The failed VM is 510 GB with 32 GB RAM.
  7. random VM ends up in paused state after "Bulk migrate"

    A bit more on the setup, maybe someone has an idea how to debug further:
    * 5 Dell servers with Proxmox VE 5.4-5, connected to flash storage via Fibre Channel (multipath), using LVM thick
    * NIC: 1 Gbit/s (checked the TX/RX error counts and both are 0)
    Any more logs to have a look at? We are considering...
  8. random VM ends up in paused state after "Bulk migrate"

    Hi, today I ran into this issue again:
    root@pm-05:/var/log/pve/tasks# grep -iR 5CE3FBA5 *
    active:UPID:pm-05:0000D83C:0298D730:5CE3FBA5:migrateall::root@pam: 1 5CE40274 OK
    active:UPID:pm-05:0000D840:0298D735:5CE3FBA5:qmigrate:105:root@pam: 1 5CE40274 OK
    ...
  9. random VM ends up in paused state after "Bulk migrate"

    Hi, currently I'm unable to reproduce this (I did two bulk migrations to test). Here is the log entry from one of the machines that ended up in a paused state the last time it happened:
    UPID:pm-04:000030BC:07BDA5F4:5CB5EEDE:qmstart:111:root@pam: 5CB5EEDF start failed: command '/usr/bin/kvm -id...
  10. random VM ends up in paused state after "Bulk migrate"

    Hello, we are running a Proxmox cluster with 5 hosts. Live migration works fine until we do a "Bulk migrate" to another host: a random VM ends up in a paused state, sometimes more than one. The task says "OK" at the end and I'm able to resume the paused VM (see the resume sketch below). We are running Virtual Environment...
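
For the missing-PV warning in result 2, a minimal diagnostic sketch, assuming standard LVM and multipath tooling; the VG name is a placeholder, not taken from the thread:

    # Show physical volumes with their UUIDs to identify which PV LVM
    # considers missing:
    pvs -o +pv_uuid

    # The setup in result 7 uses Fibre Channel with multipath; a missing PV
    # there often means a dropped path, so check the path states:
    multipath -ll

    # Only if the device is truly gone and its data is expendable (the poster
    # calls the affected VMs test instances) can the missing PV be dropped
    # from the volume group. This is destructive:
    # vgreduce --removemissing <vgname>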
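For the paused-VM symptom in result 10, a minimal recovery sketch, assuming stock Proxmox VE tooling (qm); the loop itself is illustrative, not taken from the thread:

    # Walk all VMs on this node and resume any that QEMU reports as paused.
    # "qm status --verbose" exposes the QMP status, which reads "paused" for
    # affected guests even when the plain status still says "running".
    for vmid in $(qm list | awk 'NR > 1 {print $1}'); do
        if qm status "$vmid" --verbose | grep -q 'qmpstatus: paused'; then
            qm resume "$vmid"
        fi
    done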