The load is better; it only goes up again when you're using the network a lot (like remote backups)!
I tried the newer kernel again and the numbers are worse, so I'm rebooting to the older one tonight (and setting it as the default from now on).
Have you checked if the same VM (exact clone) from the PVE 5.0 host runs fine on 4.4?

Until this is fixed, I'm kinda stuck on a half-4.4 & half-5.0 cluster.
Have you checked if the same VM (exact clone) from the PVE 5.0 host runs fine on 4.4?
If that is the case, I would like to downgrade my hosts to PVE 4.4, since IDE runs stably there, even though the disk performance is relatively poor.
Nope, but I could. Just not right now. With all the reboots to test, there's been enough downtime on that server for the day (week!).
Going back to VirtIO SCSI and changing the network to E1000 is currently behaving decently. (I have CPU to burn.)
Since you are running ZFS, it should be easy to create a snapshot and send/receive it to a test VM on your PVE 4.4 host without any downtime. At least if you're running ZFS on the PVE 4.4 host as well, and you have some spare storage left there to create the VM.
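A rough sketch of that snapshot + send/receive approach (the dataset and host names here are made up; adjust them to your actual pool layout, and note you still need a matching VM config on the 4.4 host for the received disk):

```shell
# Hypothetical dataset holding the VM disk on the PVE 5.0 host.
SRC="rpool/data/vm-100-disk-1"
SNAP="${SRC}@pve44-test"

# Snapshots are atomic, so the VM can keep running while we take it.
zfs snapshot "$SNAP"

# Stream the snapshot to the PVE 4.4 host (root SSH access assumed)
# into a dataset for a new test VM ID.
zfs send "$SNAP" | ssh root@pve44 zfs receive rpool/data/vm-999-disk-1
```

For a later catch-up you could take a second snapshot and use an incremental send (zfs send -i) so only the changed blocks go over the wire.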
It would be really interesting to find out if this problem really is a PVE 5.0 thing.
I'm running a terminal server, so E1000 is probably not an option for me: the RDP protocol doesn't like slow networks or network latency, and the users are picky about that.
So if anyone has a background on this (Proxmox Staff?): It would be interesting to know what the reason was behind this issue.
I can't find anything related to this in the release notes.
What hardware do you use and how is your storage configured? What speed do you get with dd?

Unfortunately for me, even after the upgrade to PVE 5.1 I'm experiencing high IO wait and a huge jump in load during write operations to VirtIO storage. I did a quick test: I added a new VirtIO disk to an existing Linux VM, and a simple dd if=/dev/zero of=test bs=1M count=500 bumped the load of the host to 18, and the IO wait was huge again.
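One caveat with that test: dd from /dev/zero without a sync option largely measures the guest's page cache rather than the disk. A variant with conv=fdatasync (file name here is arbitrary) gives a more honest throughput number for comparing the VirtIO path before and after the upgrade:

```shell
# conv=fdatasync makes dd flush the file to disk before reporting,
# so the throughput reflects the actual write path through the
# virtio disk instead of just cached writes in guest RAM.
dd if=/dev/zero of=test.bin bs=1M count=500 conv=fdatasync

# Clean up the test file afterwards.
rm -f test.bin
```

Running iostat or vmstat on the host at the same time shows whether the IO wait spike lines up with the flush.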