Search results

  1. Upgrade from 5.x to 6.x LXC containers will not start

    Are they? Can you enter the container's directory? If not, you can mount them with zfs mount <mountpoint>
  2. Upgrade from 5.x to 6.x LXC containers will not start

    Little update: all ZFS volumes for containers are unmounted. I mounted them all by hand and was able to start the containers afterwards. One of the containers has a lock entry and claims it is mounted: # pct list VMID Status Lock Name 203 running mounted minio01...
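The remount-and-unlock sequence described in that post might look roughly like this (a sketch only; VMID 203 comes from the post, while the dataset name is an assumption you would check with pct config):

```shell
# Mount every ZFS dataset that is not mounted yet (covers container volumes)
zfs mount -a

# Verify the container's dataset is mounted now
# (rpool/data/subvol-203-disk-0 is an assumed name; see "pct config 203")
zfs get mounted rpool/data/subvol-203-disk-0

# Clear the stale "mounted" lock shown by "pct list", then start the CT
pct unlock 203
pct start 203
```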
  3. Upgrade from 5.x to 6.x LXC containers will not start

    Same issue on my side. VMs from ZFS work fine, but containers fail with: Jul 18 19:17:54 x lxc-start[5374]: lxc-start: 212: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Failed to receive the container state Jul 18 19:17:54 x lxc-start[5374]: lxc-start: 212...
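For errors like the wait_on_daemonized_start failure above, running the container in the foreground with debug logging usually reveals the real cause; a minimal sketch (VMID 212 is from the post, the log path is an arbitrary choice):

```shell
# Start the container in the foreground with debug-level logging,
# instead of the daemonized start pct performs
lxc-start -n 212 -F -l DEBUG -o /tmp/lxc-212.log

# Inspect the collected log for the first real error
less /tmp/lxc-212.log
```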
  4. Blue screen with 5.1

    Host exposes the whole CPU type and feature set to your VM. kvm64 is limited in feature flags and always presents the same type to the VM. kvm64 is nice if you live migrate between hosts with different CPU types; it also helps avoid losing Windows activation, because the CPU type, SMBIOS and related details stay the same.
  5. Blue screen with 5.1

    Yep, for me the new drivers solved my problems. All VMs with updated drivers are still running. I started some other unused VMs with older VirtIO drivers and all of them crashed after some time. I think something changed in the hypervisor and this change is not compatible with older VirtIO drivers, which...
  6. Blue screen with 5.1

    Here is the output of pnputil. Microsoft PnP Utility Published name : oem3.inf Driver package provider : Red Hat, Inc. Class : System devices Driver date and version : 02/12/2017 100.74.104.13200 Signer name : Red Hat, Inc. Published name ...
  7. Blue screen with 5.1

    I updated the VirtIO drivers on two Windows 10 VMs and have had no crashes so far. If this really is the solution, I wonder what changed in QEMU. I used 0.1.126 for a long time with Windows 2016 and Windows 10 without any issues. So far it looks good.
  8. Blue screen with 5.1

    Today the same issue with Windows 10 16299.19. Start VM > wait a little bit > boom > reboot > the OVMF BIOS gets stuck on the Proxmox logo and freezes, with KVM sitting at 100% CPU load. The only way to get rid of the VM is by killing KVM. The VM is running VirtIO drivers 0.1.126. VM config: agent: 1...
  9. Blue screen with 5.1

    I have had lots of trouble with Windows 10 KVMs since the 5.1 upgrade: random lockups, bluescreens and reboots with a hanging UEFI boot screen. Currently KVM is unusable. Tried it with kvm64 and host CPU (Sandy Bridge Xeon). Containers run fine. Downgrading is no option because the ZFS pool is already upgraded.
  10. Proxmox VE 5.1 released!

    Updated to 5.1. So far no issues, except that qm start on KVMs complains about an unknown disk format; it uses raw and locks sector 0 or something.
  11. High IO latency - low FSYNCS rate

    Is your controller battery/flash backed? If not, your controller waits on sync writes until all data is written to disk.
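The sync-write penalty mentioned above shows up directly in the FSYNCS/SECOND figure that pveperf reports; a quick check might look like this (the path is an example, point it at the storage in question):

```shell
# Benchmark the storage backing your guests; a write-back cache with
# battery/flash protection gives a much higher FSYNCS/SECOND value
pveperf /var/lib/vz
```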
  12. [SOLVED] Cannot restart container

    Hi, maybe the container content got corrupted? I get this a lot with Proxmox 5.x. At first I thought this was some Fibre Channel issue with my cluster. Last week I saw this issue on a standalone system with local LVM storage. Containers stopped starting or complained that files were missing/corrupted.
  13. Proxmox 5.0 LXC Data Corruption after upgrade from 4.4

    Hi, yesterday I upgraded my Proxmox cluster from 4.4 to 5.0. My backend is an IBM Storwize Fibre Channel SAN with LVM for shared access. At first everything looked good; containers and VMs started, except I had to rebuild the SSH key trust in my cluster. Trouble started after creating new...
  14. Proxmox VE 5.0 released!

    My upgraded cluster is starting to make trouble. LXC containers fail to start; some start but then fail with missing filesystems or missing ELF headers in pam.so and the like. To me this looks like some sort of storage corruption with LXC? Proxmox is running on a shared SAN with LVM. I am unable...
  15. Proxmox VE 5.0 released!

    I upgraded my 4-node cluster today. Live migration and updating node by node did the job. Thanks for this great release!
  16. Proxmox 4.4 pveam update fails

    Did you try wget on one of the files to see if your machine has proper internet connectivity?
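That manual check could be as simple as the following (the host is an assumption; substitute the exact URL from the pveam update error message):

```shell
# Probe the Proxmox download server without saving anything;
# --spider only checks that the resource is reachable
wget -q --spider http://download.proxmox.com && echo "reachable" || echo "unreachable"
```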
  17. pveperf fstab barrier=0

    A 60G log is way too much. The log is kept for 5 seconds and then flushed to disk. You can monitor this with "zpool iostat -v 1". Avoid using consumer SSDs; I recommend an Intel Datacenter SSD for your log device.
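The monitoring command from that post, spelled out (the pool name rpool is an example; omit it to watch all pools):

```shell
# Print per-vdev and per-device I/O statistics once per second; the log
# (SLOG) device shows write bursts roughly every 5 seconds at each flush
zpool iostat -v rpool 1
```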
  18. Proxmox really slow

    You have really low fsyncs. Windows uses sync writes all over the place and takes a big hit if it has to wait for data to be written. Samsung consumer SSDs are a really bad choice for virtualization and servers in general. Here is one of my private (old) servers: ZFS RAID10, 4x8TB Seagate Enterprise...
  19. Updating Proxmox HA Nodes reboot mid updating

    Thank you. I will try to update my cluster tomorrow.