Search results

  1. [SOLVED] LXC no permission to use VAAPI

    Container config is as follows:
      arch: amd64
      cores: 4
      hostname: emby1
      memory: 4096
      mp0: /store0/Media,mp=/media/store0/Media
      net0: name=eth0,bridge=vmbr0,gw=10.69.1.254,hwaddr=A6:3D:6B:A5:7C:54,ip=10.69.1.24/24,type=veth
      onboot: 1
      ostype: ubuntu
      rootfs: lvm-nvme0:vm-125-disk-0,size=32G
      swap: 512...
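
    The mp0 line above is the host-directory bind mount this config relies on. As a rough sketch (assuming the VMID 125 and the paths from the excerpt), such an entry is normally added and checked from the PVE host like this:

      # Sketch only: add and inspect the bind mount from the host
      pct set 125 --mp0 /store0/Media,mp=/media/store0/Media
      pct config 125
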
  2. Proxmox Intel Iris XE Graphic Passthrough (Core i7-1165G7)

    This problem looks to be related to: LXC no permission to use VAAPI
  3. [SOLVED] LXC no permission to use VAAPI

    I am having this exact problem after upgrading the host to Proxmox VE 7. My LXC is running Ubuntu 20.04 with Emby. The container has the following devices:
      crwxrwxrwx 1 root video  226,   0 Jul 13 21:35 card0
      crwxrwxrwx 1 root render 226, 128 Jul 13 21:35 renderD128
    but Emby is unable to...
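
    For this class of VAAPI permission problem, the container config usually also needs explicit device allowances and a bind mount of /dev/dri. A minimal sketch, assuming a PVE 7 host (cgroup v2) and the 226:0 / 226:128 device numbers quoted above; for an unprivileged container the video/render group IDs inside the container also have to line up:

      # Sketch of /etc/pve/lxc/<vmid>.conf additions (PVE 7, cgroup v2)
      lxc.cgroup2.devices.allow: c 226:0 rwm
      lxc.cgroup2.devices.allow: c 226:128 rwm
      lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
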
  4. CT's will not start with ZFS subvol from non-root mounted zpool

    Hello, I have a zpool called "store0" that is mounted to /media/store0 for consistency reasons. When I create a "mountpoint" for a container on this zpool, the container will not start and gives the error:
      lxc-start 125 20200107015607.240 DEBUG conf - conf.c:run_buffer:340 - Script...
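
    The PVE ZFS storage plugin generally expects a pool's datasets to sit under /<poolname>, so a pool relocated to /media/store0 can trip it up. A minimal sketch of how one might inspect the mountpoints and, only if moving the pool back is acceptable, realign them; this is not necessarily the resolution the thread arrived at:

      # Where are the pool and its subvols actually mounted?
      zfs get -r mountpoint store0
      # Possible workaround, only if relocating the pool is acceptable
      zfs set mountpoint=/store0 store0
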
  5. [lxc/#676] Feature Request: Physical NIC assignment for LXC containers in Proxmox 4.0

    I would just like to +1 this feature request. We are using the workaround to pass a SR-IOV VF to a LXC container so we can run Suricata at wirespeed. For the most part it works, but if you change the "lxc.network.name" in the vmid.conf it will not be picked up until the host is rebooted. It...
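
    For reference, the workaround described here relies on raw LXC network keys in the container's vmid.conf. A hypothetical sketch using the LXC 1.x syntax of the PVE 4.x era; the VF interface name is a placeholder:

      # Hypothetical /etc/pve/lxc/<vmid>.conf additions; enp4s0f0v1 is a placeholder VF name
      lxc.network.type: phys
      lxc.network.link: enp4s0f0v1
      lxc.network.name: eth1
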
  6. PVE still adds balloon device for VM's with "fixed" memory

    Hi There, I have noticed that whether a VM is started with fixed memory or dynamic memory, it starts with the same CLI string. e.g. VM with 16GB fixed memory:
      root 6237 1 4 14:21 ? 00:20:26 /usr/bin/kvm -id 247 -chardev...
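
    If the goal is to have no balloon device at all, PVE can be told per VM to disable it, which should keep the balloon device off the kvm command line. A small sketch, reusing the VMID 247 from the excerpt:

      # Disable the balloon device for this VM
      qm set 247 --balloon 0
      # Equivalent line in /etc/pve/qemu-server/247.conf
      balloon: 0
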
  7. Live migrating VM's with "serial0: socket"

    Hi Guys, We have a number of virtual appliances that need a serial port to boot; to get around this we have been using "serial0: socket" in the vm_id.conf, which works well. However, when we try to do a migration on one of these VM's we get the error "can't migrate VM which uses local...
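
    For context, the serial socket is the "local resource" the migration error is complaining about. A small sketch of the config entry and a quick way to spot VMs that carry one (the VMID is a placeholder):

      # /etc/pve/qemu-server/<vmid>.conf entry the thread refers to
      serial0: socket
      # Quick check before migrating
      qm config <vmid> | grep ^serial
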
  8. Major problem "swap_dup: Bad swap file entry"

    Some additional information. This problem only occurs on the 3.10-5 and 3.10-7 PVE kernel images. We compiled our own 3.10-5 image, installed it, and rebooted, and we no longer get swap_dup errors.
  9. Major problem "swap_dup: Bad swap file entry"

    Hello, We have a cluster running PVE3.3 with kernel 3.10.0-7-pve on Intel Xeon E5v2 processors. Each host has between 256GB and 512GB of ECC RAM. We have been seeing a large number of "swap_dup: Bad swap file entry" errors in the syslog of our Proxmox hosts, and occasional complete lock up...
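
    A few illustrative checks for correlating these errors with the kernel that is actually running (log paths and output vary by setup):

      # Which kernel is booted, and which pve-kernel packages are installed
      uname -r
      pveversion -v | grep -i kernel
      # How often the error shows up in the syslog
      grep -c "swap_dup: Bad swap file entry" /var/log/syslog
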
  10. Multiple clusters in the same subnet - pveproxy locking up

    We have a PVE3.3 cluster that has been running well. Recently we added a second PVE3.0 cluster to run some VM's that will not run on newer KVM. Both clusters share the same management network, but have different IP addresses. Ever since adding the second cluster we have experienced pveproxy...
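
    With two PVE 3.x clusters on one subnet, a common first check is whether their corosync multicast traffic is colliding (distinct multicast addresses and cluster names) and whether multicast works cleanly at all; whether that is the cause here is an assumption. A sketch of the usual multicast sanity check, with placeholder node names (omping must be installed and started on every node listed):

      # Run the same command on each node at roughly the same time
      omping -c 600 -i 1 -q nodeA nodeB nodeC
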
  11. storage migration virtio failed

    I will collect some more info on this and see if I can come up with an easy way to replicate the problem. Most of our guests are Windows, but we have had this happen on Linux guests too. Disks being transferred range in size from 80GB to 10TB. PVE version is now 3.3 with qemu 2.1.2.
  12. storage migration virtio failed

    Hi mir,
      1. We have 2x 10Gbit to each host; performance is good.
      2. We have a Ceph cluster with 84 disks and 14x SSD journals; performance is good.
      3. This is reasonably high.
      4. It is more likely to fail on larger disks. If we implement the "sleep 10;" fix above, it does on the surface seem to...
  13. storage migration virtio failed

    OK. I just upgraded our production cluster to PVE3.3 and still get the same failures.
      TASK ERROR: storage migration failed: mirroring error: VM 188 qmp command 'block-job-complete' failed - The active block job for device 'drive-virtio0' cannot be completed
    I will put the wait in and see what...
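
    The "wait" being discussed is a change inside the migration code itself (the "sleep 10;" mentioned earlier in these results). Purely as an illustration of retrying a failed disk move from the CLI instead (not the in-code patch from the thread), with the VMID and disk taken from the error above and a placeholder target storage:

      # Hypothetical retry wrapper; "target-storage" is a placeholder
      for attempt in 1 2 3; do
          qm move_disk 188 virtio0 target-storage && break
          echo "attempt $attempt failed, waiting before retrying"
          sleep 10
      done
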
  14. storage migration virtio failed

    Thanks mir, We have extensively tested doing an in-place upgrade from 3.0 --> 3.3 with all VM's running, migrating them to another 3.3 node and then rebooting the original one. We upgraded our test cluster with about 60-70 Windows VM's this way and it went well; we are just lucky our PVE3.0...
  15. storage migration virtio failed

    SLAs mean that upgrades requiring outages need a lot of "change-control" and other such enterprise BS :( Unfortunately it is not possible to live-migrate between PVE3.0 and PVE3.3, or we would have already migrated. We have it scheduled in 2 weeks to do an in-place upgrade from PVE3.0 to...
  16. storage migration virtio failed

    Hi spirit, Our prod cluster which is PVE3.0 returns 1.4.2
  17. storage migration virtio failed

    Hi spirit, Initial testing looks good. I am just about to test it on our production cluster. It would be really helpful if the output were more informative, e.g. during the "sleep 10;" it would output something like "Pausing for sync", and when it migrates the active disk it outputs something...
  18. storage migration virtio failed

    Hi spirit, It may be relevant: I have tested this on around 20 VM's on our test cluster and could not repeat the problem. But... VM's in our test cluster are much less busy than VM's on our production cluster. The guests that seem to fail migration are all high-IO machines, e.g. File...
  19. storage migration virtio failed

    Thanks, It is Ceph Firefly 0.80.5. And yes, this is with cache=writeback.
  20. storage migration virtio failed

    Hi, for RBD --> RBD I get around a 60% success rate. They usually end with
      "TASK ERROR: storage migration failed: mirroring error: VM 101 qmp command 'block-job-complete' failed - The active block job for device 'drive-scsi1' cannot be completed"
    or similar for virtio devices.
