qemu-server (7.0-11) bullseye; urgency=medium
* nic: support the intel e1000e model
* lvm: avoid the use of io_uring for now
* live-restore: fail early if target storage doesn't exist
* api: always add new CD drives to bootorder
* fix #2563: allow live migration with local...
If you have cache=writeback you should not have to add aio=native; as far as I understood the previous comments, writeback implicitly sets this (or something very similar, at least).
Was that in addition to the aio changes, or did you just apply the microcode updates now? I ran into this on Intel...
I am currently rebuilding grub with:
diff --git a/pvepatches/device-removal.patch b/pvepatches/device-removal.patch
new file mode 100644
@@ -0,0 +1,12 @@
+diff --git a/grub-core/fs/zfs/zfs.c b/grub-core/fs/zfs/zfs.c
I have performed some pool operations (namely removing a mirror) which resulted in the feature device_removal becoming active. This will never switch back to enabled, though, and now grub fails:
grub-core/fs/zfs/zfs.c:1131: feature missing in check_pool_label:com.delphix:device_removal...
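For reference, the usual fix for this error (and presumably what the truncated patch above does) is to add the feature to the whitelist that grub consults in check_pool_label. Below is a rough sketch only, assuming grub's spa_feature_names table in grub-core/fs/zfs/zfs.c; the surrounding entries are taken from upstream grub and may differ between versions.

/* Sketch: simplified, standalone illustration of grub's read-only
 * pool-feature whitelist with device_removal added.  Not the actual
 * patch body, which is cut off in the quote above. */
#include <stddef.h>

static const char *spa_feature_names[] = {
  "org.illumos:lz4_compress",
  "com.delphix:hole_birth",
  "com.delphix:extensible_dataset",
  "com.delphix:embedded_data",
  "org.open-zfs:large_blocks",
  "com.delphix:device_removal",   /* the entry the patch effectively adds */
  NULL
};

With the feature whitelisted, check_pool_label no longer rejects the pool and grub can read it again (grub only needs to tolerate the feature, not implement it, for booting).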
Yes, the system was set up with the default Proxmox installer, ZFS RAID 1 with three disks. Swap according to fstab (no special config, whatever the installer did):
/dev/zvol/rpool/swap none swap sw 0 0
The VM is also in the same pool.
I'll try to capture the data this evening
@fabian: I got a new perf output while downloading a file (7 GB) in a VM over the internet at maybe 2 MB/s -- so not that much. The VM completely hangs: http://apolloner.eu/~apollo13/proxmox_zfs/out_wtf.perf
I've added more information to https://github.com/zfsonlinux/zfs/issues/6171#issuecomment-364197296 -- the idle load may be okay, but something feels wrong as soon as a VM is running. And apparently I am not the only one: https://github.com/zfsonlinux/zfs/issues/6171#issuecomment-364362290
@fabian Ok, I've shut down all VMs and gathered a new perf: http://apolloner.eu/~apollo13/out2.perf -- I still see z_null_int at 99% every 3-5 seconds, but it is reduced by a lot. Do you see anything obvious there? I'll try to start the VMs one by one to see if one of them causes it.