I am trying to run a live migration (--online --with-local-disks) in a cluster of Proxmox nodes with local disks, but it always fails.
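For comparison, this is roughly how such a migration is invoked from the CLI (VM ID 100 and target node name prox2 are placeholders; note the option is spelled --with-local-disks):

```shell
# Online migration of VM 100 to node "prox2", copying its local disks
# over the migration network instead of requiring shared storage.
qm migrate 100 prox2 --online --with-local-disks
```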
pveversion -v
proxmox-ve: 5.3-1 (running kernel: 4.15.18-10-pve)
pve-manager: 5.3-9 (running version: 5.3-9/ba817b29)
pve-kernel-4.15: 5.3-1...
When I migrate or restore a VM, iotop shows a kworker thread at 99% IO, and other VMs on the destination node sometimes get stuck.
All running VMs on the destination node show this exception:
I use hardware RAID1 with LVM-thin pools.
Hardware: Dell R540 (Xeon Silver 4114)
RAID controller: PERC H730
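A common mitigation when a migration saturates local disk IO is to cap the migration bandwidth. If memory serves, PVE 5.x can set a cluster-wide limit in /etc/pve/datacenter.cfg (the 51200 KiB/s value below is purely illustrative, not a recommendation):

```
# /etc/pve/datacenter.cfg
# Cap migration traffic at ~50 MiB/s (value is in KiB/s)
bwlimit: migration=51200
```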
# pveversion -v...
Upgraded from the no-subscription repo. Still no luck with the container, but the error changed to:
root@prox8:~# /usr/bin/lxc-start -n 101
lxc-start: 101: lxccontainer.c: wait_on_daemonized_start: 824 Received container state "STOPPING" instead of "RUNNING"
The container failed to start.
To get...
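The truncated hint at the end of that message normally points at foreground mode; running the container in the foreground with debug logging usually reveals the actual failure (the log path below is arbitrary):

```shell
# Start container 101 in the foreground (-F) with debug-level
# logging (-l DEBUG) written to a file (-o) for inspection.
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log
```

The log file then shows which step of container startup failed (mounts, cgroups, apparmor, and so on).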
Hello.
I have just deployed Proxmox from the original ISO image proxmox-ve_5.1-3.
root@prox8:~# pveversion --verbose
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.13-2-pve: 4.13.13-32
libpve-http-server-perl: 2.0-8
lvm2...
I switched between a few kernel versions to find the last good one.
These versions HAVE the problem with ksoftirqd at 99% IO:
pve-kernel-4.13.13-1-pve/stable,now 4.13.13-31 amd64 [installed,automatic]
The Proxmox PVE Kernel Image
pve-kernel-4.13.3-1-pve/stable,now 4.13.3-2 amd64 [installed]
The Proxmox PVE...
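When bisecting kernels like this, standard Debian tooling shows which kernel packages are installed versus which kernel is actually running:

```shell
# List installed pve-kernel packages (lines starting with "ii")
dpkg -l 'pve-kernel-*' | grep '^ii'
# Kernel currently booted
uname -r
```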