Search results

  1. Grub issues after pool upgrade -- grub doesn't support all features

    Okay #zfsonlinux told me that this is most certainly not a good idea and it will not work due to remap tables.
  2. Grub issues after pool upgrade -- grub doesn't support all features

    I am currently rebuilding grub with
    diff --git a/pvepatches/device-removal.patch b/pvepatches/device-removal.patch
    new file mode 100644
    index 0000000..c7d81e5
    --- /dev/null
    +++ b/pvepatches/device-removal.patch
    @@ -0,0 +1,12 @@
    +diff --git a/grub-core/fs/zfs/zfs.c b/grub-core/fs/zfs/zfs.c
    +index...
    (a sketch of what such a patch adds follows after the results list)
  3. Grub issues after pool upgrade -- grub doesn't support all features

    Hi there, I have performed some pool operations (namely removing a mirror) which resulted in the device_removal feature becoming active. This will never switch back to enabled though, and now grub fails: grub-core/fs/zfs/zfs.c:1131: feature missing in check_pool_label:com.delphix:device_removal...
  4. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    I am now on:
    root@vmx02:~# uname -a
    Linux vmx02 4.13.16-2-pve #1 SMP PVE 4.13.16-47 (Mon, 9 Apr 2018 09:58:12 +0200) x86_64 GNU/Linux
    root@vmx02:~# apt-cache policy zfs-initramfs
    zfs-initramfs:
      Installed: 0.7.7-pve1~bpo9
      Candidate: 0.7.7-pve1~bpo9
      Version table:
     *** 0.7.7-pve1~bpo9 500...
  5. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Yes, the system was set up with the default Proxmox installer, ZFS RAID1 with three disks. Swap according to fstab (no special config, whatever the installer did):
    /dev/zvol/rpool/swap none swap sw 0 0
    The VM is also in the same pool. I'll try to capture the data this evening.
  6. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @fabian: Got a new perf output while downloading a file in a VM (7 GB) over the internet at maybe 2 MB/s -- so not that much. The VM completely hangs: http://apolloner.eu/~apollo13/proxmox_zfs/out_wtf.perf
  7. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Sorry, didn't mean to imply that! Thank you for your efforts, much appreciated!
  8. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    I've added more information to https://github.com/zfsonlinux/zfs/issues/6171#issuecomment-364197296 -- The idle load may be okay, but something feels wrong as soon as a VM is running. And apparently I am not the only one: https://github.com/zfsonlinux/zfs/issues/6171#issuecomment-364362290
  9. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @fabian Ok, I've shut down all VMs and gathered a new perf: http://apolloner.eu/~apollo13/out2.perf -- I still see z_null_int at 99% every 3-5 seconds, but it has gone down a lot. Do you see anything obvious there? I'll try to start the VMs one by one to see if one causes that.
  10. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    I've attached the perf output to the issue. Any idea why it is waiting that much? The server is literally doing nothing (that is, the running VMs aren't doing much I/O -- certainly not 99%).
  11. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    The file is too large for the forum; uploaded to my server: http://apolloner.eu/~apollo13/out.perf
  12. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    The posted output is from a time when iotop reports 99.99% I/O by z_null_int
  13. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Yes I did reboot. perf top shows:
    Samples: 15K of event 'cycles:ppp', Event count (approx.): 3405002590
    Overhead  Shared Object  Symbol
       3.32%  perl           [.] Perl_yyparse
       2.24%  [kernel]       [k] copy_page
       2.16%  [kernel]       [k]...
  14. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    Mhm, is there any documentation on how to install perf for the Proxmox kernel?
    /usr/bin/perf: line 13: exec: perf_4.13: not found
    E: linux-perf-4.13 is not installed.
    Or is there any extra repo that I can use for that?
  15. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @fabian I have applied the patch to one server in the pool and modinfo seems to confirm:
    filename:       /lib/modules/4.13.13-5-pve/zfs/zfs.ko
    version:        0.7.4-1
    license:        CDDL
    author:         OpenZFS on Linux
    description:    ZFS
    srcversion:     E8EDB9B5FFA260178BA7DC9
    depends...
  16. [SOLVED] VM lockups

    Don't laugh at me, but today's updates seem to have fixed the problems. After updating one node for testing and playing around with VMs on it, I couldn't find any problem :D Specifically it was probably: pve-kernel-4.13.13-5-pve:amd64 pve-qemu-kvm:amd64...
  17. [SOLVED] VM lockups

    For me it happens during normal operation, when I restart a VM.
  18. [SOLVED] VM lockups

    Hello, I have 2 servers here running Proxmox, with 3 HDDs as ZFS RAID1 and one SSD as cache (no log device for now; I'll wait for a 2nd SSD so that it is failure-safe). I get lockups in the VMs (especially while booting). The host is idle during this and the I/O load is relatively...
  19. [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @fabian I've read in another thread that you are working on a new ZFS version (sorry, cannot post links yet :/), can you consider backporting the fixes for #6171 too? Also, I'd like to test the new patches if you need testers :)
  20. Problems with a new Proxmox installation

    Perhaps the error is gone with kernel commit 939f509d274541e32b1350ddc7f9d16d16617538? That commit is definitely not in the Proxmox kernel, in any case.
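
Results 1-3 concern the same grub failure: once com.delphix:device_removal became active on the pool, grub's check_pool_label() in grub-core/fs/zfs/zfs.c rejects the pool because that feature is not on its list of supported read features. Below is a minimal sketch of what a patch like the device-removal.patch referenced in result 2 adds, assuming the list in question is grub's spa_feature_names array; the surrounding entries shown here are illustrative and may not match the exact Proxmox patch:

    /* grub-core/fs/zfs/zfs.c (sketch): the whitelist of pool features that
       check_pool_label() accepts; any active feature not listed here triggers
       "feature missing in check_pool_label:<feature>". */
    static const char *spa_feature_names[] = {
      "org.illumos:lz4_compress",
      "com.delphix:hole_birth",
      "com.delphix:embedded_data",
      "com.delphix:extensible_dataset",
      "org.open-zfs:large_blocks",
      "com.delphix:device_removal",   /* added entry (illustrative) */
      NULL
    };

As result 1 notes, whitelisting alone is not enough: device_removal introduces remap tables that grub's ZFS reader does not understand, so the pool can still be unreadable at boot even though the "feature missing" error disappears.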