Search results

  1. Kernel Panic, whole server crashes about every day

    qemu-server (7.0-11) bullseye; urgency=medium
      * nic: support the intel e1000e model
      * lvm: avoid the use of io_uring for now
      * live-restore: fail early if target storage doesn't exist
      * api: always add new CD drives to bootorder
      * fix #2563: allow live migration with local...
  2. Kernel Panic, whole server crashes about every day

    If you have cache=writeback, you should not have to add aio=native; as far as I understood the previous comments, writeback implicitly sets this (or something very similar, at least). Was that in addition to the aio changes, or did you only apply the microcode updates now? I ran into this on Intel...
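For reference, both options can be pinned explicitly per disk in Proxmox rather than relying on the implicit default. A hedged sketch, assuming a hypothetical VM 100 with its disk on local-lvm (adjust the VM ID, storage, and volume name to your setup):

```shell
# Hypothetical VM ID and volume name -- adjust to your environment.
# Set writeback caching and native AIO explicitly on the scsi0 disk:
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback,aio=native

# Confirm the resulting disk options:
qm config 100 | grep '^scsi0'
```

These are administrative commands that change the VM configuration, so try them on a non-production guest first.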
  3. Grub issues after pool upgrade -- grub doesn't support all features

    I managed to fix this by creating a new bpool for booting with a reduced feature set.
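The "reduced feature set" approach generally means a dedicated boot pool that enables only the ZFS features GRUB can read. A rough sketch, assuming OpenZFS 2.1 or later, where the compatibility property pins the feature set declaratively (the pool name and partition path are placeholders, not what was used in this thread):

```shell
# Create a small boot pool restricted to GRUB-readable features.
# 'bpool' and the device path are examples only.
zpool create -o ashift=12 -o compatibility=grub2 \
    -O canmount=off -O mountpoint=/boot \
    bpool /dev/disk/by-id/ata-EXAMPLE-part3
```

On older OpenZFS releases without the compatibility property, the equivalent is creating the pool with `-d` and enabling the needed features individually.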
  4. Grub issues after pool upgrade -- grub doesn't support all features

    Okay #zfsonlinux told me that this is most certainly not a good idea and it will not work due to remap tables.
  5. Grub issues after pool upgrade -- grub doesn't support all features

    I am currently rebuilding grub with
    diff --git a/pvepatches/device-removal.patch b/pvepatches/device-removal.patch
    new file mode 100644
    index 0000000..c7d81e5
    --- /dev/null
    +++ b/pvepatches/device-removal.patch
    @@ -0,0 +1,12 @@
    +diff --git a/grub-core/fs/zfs/zfs.c b/grub-core/fs/zfs/zfs.c
    +index...
  6. Grub issues after pool upgrade -- grub doesn't support all features

    Hi there, I have performed some pool operations (namely removing a mirror) which caused the feature device_removal to become active. This will never switch back to enabled, though, and now grub fails:
    grub-core/fs/zfs/zfs.c:1131: feature missing in check_pool_label: com.delphix:device_removal...
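To check which feature flags a pool carries and whether any have gone active, something like the following can be used (the pool name is an example):

```shell
# List feature flags and their state on the pool.
# 'enabled' features are harmless to GRUB; 'active' ones have on-disk
# effects and must be understood by GRUB for it to read the pool at all.
zpool get all rpool | grep 'feature@'
```

As noted above, features like device_removal never revert from active to enabled once their on-disk format changes exist, which is why the error persists.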
  7. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    I am now on:
    root@vmx02:~# uname -a
    Linux vmx02 4.13.16-2-pve #1 SMP PVE 4.13.16-47 (Mon, 9 Apr 2018 09:58:12 +0200) x86_64 GNU/Linux
    root@vmx02:~# apt-cache policy zfs-initramfs
    zfs-initramfs:
      Installed: 0.7.7-pve1~bpo9
      Candidate: 0.7.7-pve1~bpo9
      Version table:
     *** 0.7.7-pve1~bpo9 500...
  8. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Yes, the system was set up with the default Proxmox installer, ZFS RAID 1 with three disks. Swap according to fstab (no special config, whatever the installer did):
    /dev/zvol/rpool/swap none swap sw 0 0
    The VM is also in the same pool. I'll try to capture the data this evening.
  9. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @fabian: Got a new perf output while downloading a file (7GB) in a VM over the internet at maybe 2MB/s -- so not that much. The VM completely hangs: http://apolloner.eu/~apollo13/proxmox_zfs/out_wtf.perf
  10. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Sorry, didn't mean to imply that! Thank you for your efforts, much appreciated!
  11. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    I've added more information to https://github.com/zfsonlinux/zfs/issues/6171#issuecomment-364197296 -- the idle load may be okay, but something feels wrong as soon as a VM is running. And apparently I am not the only one: https://github.com/zfsonlinux/zfs/issues/6171#issuecomment-364362290
  12. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @fabian Ok, I've shut down all VMs and gathered a new perf: http://apolloner.eu/~apollo13/out2.perf -- I still see z_null_int at 99% every 3-5 seconds, but it has dropped considerably. Do you see anything obvious there? I'll try to start the VMs one by one to see if one causes that.
  13. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    I've attached the perf output to the issue. Any idea why it is waiting that much? The server is literally doing nothing (that is, the running VMs aren't doing much I/O -- certainly not 99%).
  14. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    The file is too large for the forum; I uploaded it to my server: http://apolloner.eu/~apollo13/out.perf
  15. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    The posted output is from a time when iotop reports 99.99% I/O by z_null_int
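For anyone reproducing this, a capture along these lines produces a shareable text report like the ones posted (the duration and filename are examples, not what was actually used here):

```shell
# Sample all CPUs with call graphs while the z_null_int spike occurs:
perf record -a -g -- sleep 30

# Render a plain-text report suitable for uploading:
perf report --stdio > out.perf
```

System-wide sampling needs root (or an appropriate perf_event_paranoid setting), and `-g` makes the kernel call chains around z_null_int visible in the report.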
  16. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Yes, I did reboot. perf top shows:
    Samples: 15K of event 'cycles:ppp', Event count (approx.): 3405002590
      Overhead  Shared Object  Symbol
         3.32%  perl           [.] Perl_yyparse
         2.24%  [kernel]       [k] copy_page
         2.16%  [kernel]       [k]...
  17. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Hmm, is there any documentation on how to install perf for the Proxmox kernel?
    /usr/bin/perf: line 13: exec: perf_4.13: not found
    E: linux-perf-4.13 is not installed.
    Or is there an extra repo that I can use for that?
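For context on that error: Debian's /usr/bin/perf is a wrapper that execs perf_<major.minor> matching the running kernel, and no linux-perf-4.13 package exists for the Proxmox kernel. A common workaround (untested here; the 4.9 version number is an example matching Debian stretch) is to install the closest available Debian perf and point the wrapper at it:

```shell
# Install the perf that the Debian release ships (built for its stock kernel):
apt install linux-perf-4.9

# Satisfy the wrapper's perf_4.13 lookup with a symlink:
ln -s /usr/bin/perf_4.9 /usr/local/bin/perf_4.13

perf top
```

A perf built for a slightly older kernel usually works for basic sampling, though features tied to newer kernel interfaces may be missing.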
  18. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @fabian I have applied the patch to one server in the pool and modinfo seems to confirm:
    filename:    /lib/modules/4.13.13-5-pve/zfs/zfs.ko
    version:     0.7.4-1
    license:     CDDL
    author:      OpenZFS on Linux
    description: ZFS
    srcversion:  E8EDB9B5FFA260178BA7DC9
    depends...
  19. [SOLVED] VM lockups

    Don't laugh at me, but today's updates seem to have fixed the problems. After updating one node for testing and playing around with VMs on it, I couldn't find any problem :D Concretely, it was probably: pve-kernel-4.13.13-5-pve:amd64 pve-qemu-kvm:amd64...
  20. [SOLVED] VM lockups

    For me it occurs during normal operation when I restart a VM.
