I have the same issue. After a reboot I always end up with stale "dev" folders in the container directories. I think these dev folders come from a tun device passthrough that does not get cleaned up when the Proxmox host restarts. I will give your solution a try.
A little update: all ZFS volumes for the containers were unmounted. I mounted them all by hand and was able to start the containers afterwards. One of the containers has a lock entry and tells me it is "mounted"?
# pct list
VMID  Status   Lock     Name
203   running  mounted  minio01...
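For anyone hitting the same thing, here is a sketch of the recovery steps I used (the VMID 203 is just the example from the `pct list` output above; adapt it to your container):

```shell
# Mount all ZFS datasets that were left unmounted after the reboot
zfs mount -a

# Clear the stale "mounted" lock on the container, then start it
pct unlock 203
pct start 203
```

`pct unlock` only removes the lock entry from the container config; it does not touch the filesystem, so make sure the datasets are really mounted first.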
Same issue on my side. VMs on ZFS work fine, but containers fail with:
Jul 18 19:17:54 x lxc-start: lxc-start: 212: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Failed to receive the container state
Jul 18 19:17:54 x lxc-start: lxc-start: 212...
"host" exposes the whole CPU type and all its features to your VM. kvm64 is limited in feature flags and always presents the same CPU type to the VM. kvm64 is nice if you live migrate between hosts with different CPU types; it also helps you avoid losing the Windows activation, because the CPU type, SMBIOS and so on stay the same.
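For reference, this is the one-line difference in the VM config file (a sketch; `100.conf` is an example VMID):

```
# /etc/pve/qemu-server/100.conf

# Pass through the host CPU model with all feature flags
# (best performance, but ties the guest to this CPU generation):
cpu: host

# Generic baseline CPU, identical on every node
# (safe for live migration between different CPU types):
cpu: kvm64
```

Only one `cpu:` line is active at a time; the same switch can be done in the GUI under Hardware > Processors.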
Yep, for me the new drivers solved the problem. All VMs with updated drivers are still running. I started some other unused VMs with the older VirtIO drivers and all of them crashed after some time. I think something changed in the hypervisor and this change is not compatible with older VirtIO drivers, which...
Here is the output of pnputil.
Microsoft PnP Utility
Published name : oem3.inf
Driver package provider : Red Hat, Inc.
Class : System devices
Driver date and version : 02/12/2017 100.74.104.13200
Signer name : Red Hat, Inc.
Published name ...
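In case anyone wants to reproduce this check: the listing above is what `pnputil` prints when enumerating installed third-party driver packages. On Windows 10 1607 and later the syntax is the slash form; older builds use the dash form instead:

```
pnputil /enum-drivers
```

Look for packages with "Red Hat, Inc." as the provider to find the installed VirtIO driver versions.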
I updated the VirtIO drivers on two Windows 10 VMs and have had no crashes so far. If this really is the solution, I wonder what changed in qemu. I used 0.1.126 for a long time with Windows 2016 and Windows 10 without any issues.
So far it looks good.
Today the same issue with Windows 10 16299.19.
Start the VM > wait a little bit > boom > reboot > OVMF BIOS stuck at the Proxmox logo and frozen, with KVM sitting there at 100% CPU load. The only way to get rid of the VM is to kill the KVM process.
The VM is running VirtIO drivers 0.1.126.
I have lots of trouble with Windows 10 KVM guests since the 5.1 upgrade: random lockups, bluescreens and reboots with a hanging UEFI boot screen. Currently KVM is unusable for me. I tried it with kvm64 and host CPU (Sandy Bridge Xeon). Containers are running fine.
Downgrading is not an option because the ZFS pool has already been upgraded.
Maybe the container content got corrupted? I get this a lot with Proxmox 5.x. At first I thought it was some Fibre Channel issue with my cluster, but last week I saw the same issue on a standalone system with local LVM storage. Containers stopped starting or complained that files were missing/corrupted.
Yesterday I upgraded my Proxmox cluster from 4.4 to 5.0. My backend is an IBM Storwize Fibre Channel SAN with LVM for shared access.
At first everything looked good, containers and VMs were starting, except that I had to rebuild the SSH key trust in my cluster.
The trouble started after creating new...
My upgraded cluster has started making trouble. LXC containers fail to start; some start but then complain about a missing filesystem, missing ELF headers in pam.so and things like that. To me this looks like some sort of storage corruption with LXC?
Proxmox is running on a shared SAN with LVM.
I am unable...
A 60G log is way too much. The log only holds data for about 5 seconds before it gets flushed to disk. You can monitor this with "zpool iostat -v 1".
Avoid using consumer SSDs. I recommend using an Intel Datacenter SSD for your log device.
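To put a number on "way too much": the SLOG only ever needs to hold a few seconds of sync writes, so a rough sizing rule is throughput times the flush interval, with a safety factor. The throughput figure below is a made-up example, not a measurement:

```shell
# Rule of thumb: SLOG size ~ max sync write throughput x txg interval x 2.
throughput_mb_s=1000   # example: ~1 GB/s of sustained sync writes
txg_seconds=5          # default ZFS flush interval
slog_mb=$((throughput_mb_s * txg_seconds * 2))
echo "${slog_mb} MB"   # prints "10000 MB"
```

Even in this extreme case ~10 GB is plenty, so a 60G log device is mostly wasted capacity.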