Search results

  1. Upgrade from 5.x to 6.x LXC containers will not start

    I have the same issue. After a reboot I always end up with stale "dev" folders in the container directories. I think these "dev" folders come from the tun device passthrough not getting cleaned up when the Proxmox host restarts. I will give your solution a try.
  2. Upgrade from 5.x to 6.x LXC containers will not start

    Looks like you have a different problem than I do :/
  3. Upgrade from 5.x to 6.x LXC containers will not start

    Are they? Can you enter the container's directory? If not, you can mount them with zfs mount <mountpoint>
  4. Upgrade from 5.x to 6.x LXC containers will not start

    A little update: all ZFS volumes for the containers were unmounted. I mounted them all by hand and was able to start the containers afterwards. One of the containers has a lock entry and reports that it is mounted? # pct list VMID Status Lock Name 203 running mounted minio01...
  5. Upgrade from 5.x to 6.x LXC containers will not start

    Same issue on my side. VMs on ZFS work fine, but containers fail with: Jul 18 19:17:54 x lxc-start[5374]: lxc-start: 212: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Failed to receive the container state Jul 18 19:17:54 x lxc-start[5374]: lxc-start: 212...
  6. Blue screen with 5.1

    Host exposes the whole CPU type and feature set to your VM. kvm64 is limited in feature flags and always presents the same CPU type to the VM. kvm64 is nice if you live-migrate between hosts with different CPU types; it also helps you avoid losing your Windows activation, because the CPU type, SMBIOS, and so on stay the same.
  7. Blue screen with 5.1

    Yep, for me the new drivers solved my problems. All VMs with updated drivers are still running. I started some other unused VMs with the older VirtIO drivers and all of them crashed after some time. I think something changed in the hypervisor, and this change is not compatible with older VirtIO drivers, which...
  8. Blue screen with 5.1

    Here is the output of pnputil. Microsoft PnP Utility Published name : oem3.inf Driver package provider : Red Hat, Inc. Class : System devices Driver date and version : 02/12/2017 100.74.104.13200 Signer name : Red Hat, Inc. Published name ...
  9. Blue screen with 5.1

    I updated the VirtIO drivers on two Windows 10 VMs and have had no crashes so far. If this really is the solution, I wonder what changed in QEMU. I used 0.1.126 for a long time with Windows 2016 and Windows 10 without any issues. So far it looks good.
  10. Blue screen with 5.1

    Today, the same issue with Windows 10 16299.19: start the VM > wait a little bit > boom > reboot > OVMF BIOS stuck at the Proxmox logo and frozen, with KVM sitting there at 100% CPU load. The only way to get rid of the VM is by killing the KVM process. The VM is running VirtIO drivers 0.1.126. VM config: agent: 1...
  11. Blue screen with 5.1

    I have had lots of trouble with Windows 10 KVM VMs since the 5.1 upgrade: random lockups, bluescreens, and reboots with a hanging UEFI boot screen. Currently KVM is unusable. I tried it with kvm64 and host CPU (Sandy Bridge Xeon). Containers are running fine. A downgrade is not an option because the ZFS pool is already upgraded.
  12. Proxmox VE 5.1 released!

    Updated to 5.1. So far no issues, except that qm start on KVM VMs complains about an unknown disk format; it uses raw and locks sector 0 or something.
  13. High IO latency - low FSYNCS rate

    Is your controller battery/flash backed? If not, your controller waits on sync writes until all data is written to disk.
  14. [SOLVED] Cannot restart container

    Hi, maybe the container content got corrupted? I get this a lot with Proxmox 5.x. At first I thought it was some Fibre Channel issue with my cluster, but last week I saw the same issue on a standalone system with local LVM storage. Containers stop starting or complain that files are missing/corrupted.
  15. Proxmox 5.0 LXC Data Corruption after upgrade from 4.4

    Hi, yesterday I upgraded my Proxmox cluster from 4.4 to 5.0. My backend is an IBM Storwize Fibre Channel SAN with LVM for shared access. At first everything looked good; containers and VMs started, except that I had to rebuild the SSH key trust in my cluster. The trouble started after creating new...
  16. Proxmox VE 5.0 released!

    My upgraded cluster has started causing trouble. LXC containers fail to start; some start but then fail with errors about a missing filesystem or missing ELF headers in pam.so and the like. To me this looks like some sort of storage corruption with LXC? Proxmox is running on a shared SAN with LVM. I am unable...
  17. Proxmox VE 5.0 released!

    I upgraded my 4-node cluster today. Live migration and updating node by node did the job. Thanks for this great release!
  18. Proxmox 4.4 pveam update fails

    Did you try wget on one of the files to see if your machine has proper internet connectivity?
  19. pveperf fstab barrier=0

    A 60G log is way too much. The log is kept for 5 seconds and then it gets flushed to disk. You can monitor this with "zpool iostat -v 1". Avoid using consumer SSDs; I recommend an Intel datacenter SSD for your log.
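
The manual fix described in item 4 (mount the unmounted ZFS container volumes, then clear the stale lock) can be sketched as below. The pct list sample output is modeled on the snippet in the post; the column layout and the container IDs are illustrative assumptions:

```shell
# Mount every ZFS dataset that is not mounted yet (run on the host):
# zfs mount -a

# Then check which containers still carry a lock. Simulated "pct list"
# output, shaped like the snippet in the post (locked rows have an
# extra Lock column, so they show 4 fields instead of 3):
pct_output='VMID Status  Lock     Name
203  running mounted  minio01
204  stopped          web01'

# Print the VMID and lock value of every locked container:
echo "$pct_output" | awk 'NR > 1 && NF == 4 { print $1, "locked:", $3 }'

# A lock left over from a failed mount can then be cleared with:
# pct unlock 203
```

The awk condition keys off the field count because pct list omits the Lock column entirely for unlocked containers, so a plain column index would misread them.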
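
The point in item 13, that a controller without a battery/flash-backed cache must wait for the disk on every sync write, can be demonstrated with a quick sketch. The file name, block size, and count are illustrative; run this on a scratch filesystem and compare the wall-clock time of the two passes:

```shell
# Scratch file for the comparison (illustrative, cleaned up afterwards):
f=$(mktemp)

# Buffered writes: the page cache (or a protected controller cache)
# can acknowledge each write immediately.
dd if=/dev/zero of="$f" bs=4k count=256 conv=notrunc status=none

# Synchronous writes: oflag=dsync forces every block to stable storage
# before dd continues. Without a protected write cache this means
# waiting on the disk itself, which is what tanks the pveperf FSYNCS rate.
dd if=/dev/zero of="$f" bs=4k count=256 oflag=dsync conv=notrunc status=none

size=$(stat -c %s "$f")
echo "$size"        # both passes overwrite the same 256 * 4096 bytes
rm -f "$f"
```

Timing each dd pass (e.g. with time) makes the gap visible: the dsync pass is typically orders of magnitude slower on a controller that cannot acknowledge writes from cache.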
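
The sizing argument in item 19 can be made concrete with back-of-the-envelope arithmetic. The 5-second flush interval comes from the post; the 1000 MB/s sync-write rate is an assumed worst case, not a measured figure:

```shell
# Assumed worst-case sustained sync-write rate (illustrative):
throughput_mb_per_s=1000
# ZFS flushes the log roughly every 5 seconds (from the post above):
flush_interval_s=5

# Keep about two flush intervals' worth of writes as headroom:
slog_mb=$(( throughput_mb_per_s * flush_interval_s * 2 ))
echo "${slog_mb} MB"   # ~10 GB even at 1 GB/s, far below a 60G log device

# Watch how much of the log device is actually in use:
# zpool iostat -v 1
```

Even under this deliberately aggressive throughput assumption the useful log size stays around 10 GB, which is why the 60G log in the thread is oversized.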
