I am having this exact problem after upgrading the host to Proxmox VE 7.
My LXC is running Ubuntu 20.04 with Emby.
The container has the following devices:
crwxrwxrwx 1 root video 226, 0 Jul 13 21:35 card0
crwxrwxrwx 1 root render 226, 128 Jul 13 21:35 renderD128
but Emby is unable to...
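For anyone hitting this after the PVE 7 upgrade: the host now defaults to a pure cgroupv2 layout, so old v1-style `lxc.cgroup.devices.allow` lines silently stop taking effect. A sketch of a cgroup2-style passthrough, using the device numbers from the listing above (the container ID in the path is a placeholder):

```
# /etc/pve/lxc/<vmid>.conf
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

The Emby user inside the container also needs to be in groups matching the host's video/render GIDs, or the nodes will be visible but not writable.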
Hello,
I have a zpool called "store0" that is mounted at /media/store0 for consistency reasons. When I create a "mountpoint" for a container on this zpool, the container will not start and gives the error:
lxc-start 125 20200107015607.240 DEBUG conf - conf.c:run_buffer:340 - Script...
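In case it helps anyone, the usual suggestion for pools mounted somewhere other than /<poolname> is to declare the real path in the storage definition so PVE stops assuming the default. A sketch, assuming the pool is defined in /etc/pve/storage.cfg (content types are just an example):

```
# /etc/pve/storage.cfg
zfspool: store0
        pool store0
        mountpoint /media/store0
        content rootdir,images
```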
I would just like to +1 this feature request.
We are using the workaround to pass an SR-IOV VF to an LXC container so we can run Suricata at wire speed. For the most part it works, but if you change the "lxc.network.name" in the vmid.conf it will not be picked up until the host is rebooted.
It...
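For context, the workaround in question passes the VF straight into the container with raw LXC keys in the container config — roughly like the sketch below. The interface names are placeholders, and newer LXC spells the legacy lxc.network.* keys as lxc.net.*:

```
# /etc/pve/lxc/<vmid>.conf
lxc.net.1.type: phys
lxc.net.1.link: enp3s0f0v0
lxc.net.1.name: eth1
lxc.net.1.flags: up
```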
Hi There,
I have noticed that whether a VM is started with fixed or dynamic memory, it starts with the same CLI string.
e.g.
VM with 16GB fixed memory:
root 6237 1 4 14:21 ? 00:20:26 /usr/bin/kvm -id 247 -chardev...
Hi Guys,
We have a number of virtual appliances that need a serial port to boot; to get around this we have been using "serial0: socket" in the vm_id.conf, which works well.
However, when we try to do a migration on one of these VM's we get the error "can't migrate VM which uses local...
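For reference, the setting we use is just the one line below (VM ID hypothetical); the truncated error above suggests PVE counts the socket-backed serial port as a local resource, which is presumably what blocks the live migration:

```
# /etc/pve/qemu-server/<vmid>.conf
serial0: socket
```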
Some additional information.
This problem only occurs on the 3.10-5 and 3.10-7 PVE kernel images.
We compiled our own 3.10-5 image, installed it and rebooted and we no longer get swap_dup errors.
Hello,
We have a cluster running PVE3.3 with kernel 3.10.0-7-pve on Intel Xeon E5v2 processors. Each host has between 256GB and 512GB of ECC RAM.
We have been seeing a large number of "swap_dup: Bad swap file entry" errors in the syslog of our Proxmox hosts, and occasional complete lock up...
We have a PVE3.3 cluster that has been running well. Recently we added a second PVE3.0 cluster to run some VMs that will not run on newer KVM.
Both clusters share the same management network, but have different IP addresses.
Ever since adding the second cluster we have experienced pveproxy...
I will collect some more info on this and see if I can come up with an easy way to replicate the problem.
Most of our guests are Windows, but we have had this happen on Linux guests too.
Disks being transferred are in size from 80GB to 10TB.
PVE version is now 3.3 with qemu 2.1.2
Hi mir,
1. We have 2x 10Gbit to each host, performance is good.
2. We have a Ceph cluster with 84 disks and 14x SSD journals. Performance is good
3. This is reasonably high
4. It is more likely to fail on larger disks.
If we implement the "sleep 10;" fix above, it does on the surface seem to...
OK. I just upgraded our production cluster to PVE3.3 and still get the same failures.
TASK ERROR: storage migration failed: mirroring error: VM 188 qmp command 'block-job-complete' failed - The active block job for device 'drive-virtio0' cannot be completed
I will put the wait in and see what...
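As I understand it, the "sleep 10;" workaround just gives the mirror job time to reach the ready state before 'block-job-complete' is issued. A more targeted version would poll the job status instead of sleeping blindly. A minimal sketch of that idea, where query_job is a hypothetical callback returning a QMP query-block-jobs-style dict (this is not the actual qemu-server patch, just the pattern):

```python
import time

def wait_for_job_ready(query_job, timeout=60, poll=2):
    """Poll a block-job status callback until it reports ready.

    query_job: caller-supplied function returning a dict shaped like a
    QMP 'query-block-jobs' entry, e.g. {'ready': True, ...}.
    Returns True once the job is ready, False if the timeout expires.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if query_job().get('ready'):
            return True
        time.sleep(poll)
    return False
```

With something like this, 'block-job-complete' would only be sent once the mirror has actually converged, rather than after a fixed pause that busy guests can outrun.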
Thanks mir,
We have extensively tested doing an in-place upgrade from 3.0 --> 3.3 with all VMs running, migrating them to another 3.3 node and then rebooting the original one. We upgraded our test cluster with about 60-70 Windows VMs this way and it went well, we are just lucky our PVE3.0...
SLAs mean upgrades that require outages need a lot of "change-control" and other such enterprise BS :(
Unfortunately it is not possible to live-migrate between PVE3.0 and PVE3.3, or we would have already migrated.
We have it scheduled for 2 weeks to do an in-place upgrade from PVE3.0 to...
Hi spirit,
Initial testing looks good. I am just about to test it on our production cluster.
It would be really helpful if the output was more informative.
e.g. during the "sleep 10;" it would output something like "Pausing for sync"
and when it migrates the active disk it outputs something...
Hi spirit,
It may be relevant, I have tested this on around 20 VM's on our test cluster, and could not repeat the problem. But... VM's in our test cluster are much less busy than VM's on our production cluster.
The guests that seem to fail migration are all high IO machines, e.g. File...
Hi,
RBD --> RBD
I get around a 60% success rate.
They usually end with "TASK ERROR: storage migration failed: mirroring error: VM 101 qmp command 'block-job-complete' failed - The active block job for device 'drive-scsi1' cannot be completed" or similar for virtio devices.