Search results

  1. ZFS zfs_send_corrupt_data parameter not working

    Also, my pveversion -v, if interested:
    root@prox2:~# pveversion -v
    proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
    pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
    pve-kernel-4.13.4-1-pve: 4.13.4-26
    pve-kernel-4.13.13-2-pve: 4.13.13-32
    pve-kernel-4.13.8-3-pve: 4.13.8-30...
  2. ZFS zfs_send_corrupt_data parameter not working

    I believe it was just this (I can't find it in .bash_history): zfs send rpool/data/vm-101-disk-1 | pv | zfs receive -s rpool/data/offload-vm-101-disk-1
  3. ZFS zfs_send_corrupt_data parameter not working

    It was something like this (I used the option for continuing an interrupted send): zfs send -t...
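The resumable-send flow referenced in these snippets can be sketched as follows. This is a sketch, not the poster's exact session; the dataset names are taken from the thread, and resumable send/receive requires ZFS 0.7 or later:

```shell
# Start a resumable receive; -s saves receive state if the stream is interrupted
zfs send rpool/data/vm-101-disk-1 | zfs receive -s rpool/data/offload-vm-101-disk-1

# After an interruption, read the saved resume token from the target dataset
TOKEN=$(zfs get -H -o value receive_resume_token rpool/data/offload-vm-101-disk-1)

# Resume the send from where it stopped
zfs send -t "$TOKEN" | zfs receive -s rpool/data/offload-vm-101-disk-1
```

The `-t` flag takes the opaque `receive_resume_token` property stored on the partially received dataset, which is why the poster's resume command started with `zfs send -t`.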
  4. ZFS zfs_send_corrupt_data parameter not working

    Hello, I have a problem with my ZFS pool. My RAM recently flipped a bit, and as a result my pool was corrupted with a permanent error (zpool status -vx): errors: Permanent errors have been detected in the following files: rpool/data/vm-101-disk-1@experimenty:<0x1>...
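The module parameter the thread title refers to, zfs_send_corrupt_data, tells zfs send to emit a placeholder pattern for unreadable blocks instead of aborting the stream. A minimal sketch of enabling it, assuming the standard OpenZFS sysfs and modprobe locations (run as root):

```shell
# Enable at runtime; affects sends started after the change
echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data

# Persist across reboots via a modprobe options file
# (the filename /etc/modprobe.d/zfs.conf is a common convention)
echo "options zfs zfs_send_corrupt_data=1" >> /etc/modprobe.d/zfs.conf
```

This is a config fragment for illustration; the thread itself is about the parameter apparently not taking effect, so verify the value with `cat /sys/module/zfs/parameters/zfs_send_corrupt_data` before relying on it.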
  5. Proxmox VE 5.1 - ZFS kernel tainted, pvestatd frozen

    Hi, I've upgraded the kernel from the pve-test repository, set min_free_kbytes and swappiness, and it looks like the problem is gone (I will monitor it for a longer time). IO delay also dropped significantly (which was my long-term problem), maybe because of the kernel upgrade? Anyway, thank you all a lot...
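The two tunables the poster mentions, min_free_kbytes and swappiness, are standard Linux VM sysctls. The values below are illustrative assumptions, not the poster's actual settings:

```shell
# /etc/sysctl.d/90-zfs-tuning.conf  (filename and values are examples)
# Keep a larger reserve of free memory to reduce allocation stalls
# under ZFS/ARC memory pressure
vm.min_free_kbytes = 131072
# Prefer reclaiming page cache over swapping out process memory
vm.swappiness = 10
```

Apply without rebooting with `sysctl --system` (or `sysctl -p <file>`), and check the live values via `/proc/sys/vm/min_free_kbytes` and `/proc/sys/vm/swappiness`.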
  6. Proxmox VE 5.1 - ZFS kernel tainted, pvestatd frozen

    Here it is:
    proxmox-ve: 5.1-26 (running kernel: 4.13.4-1-pve)
    pve-manager: 5.1-36 (running version: 5.1-36/131401db)
    pve-kernel-4.13.4-1-pve: 4.13.4-26
    pve-kernel-4.10.17-2-pve: 4.10.17-20
    libpve-http-server-perl: 2.0-6
    lvm2: 2.02.168-pve6
    corosync: 2.4.2-pve3
    libqb0: 1.0.1-1
    pve-cluster: 5.0-15...
  7. Proxmox VE 5.1 - ZFS kernel tainted, pvestatd frozen

    Hello, I want to ask for help with a bug I discovered. I have a Proxmox cluster built from two nodes. Sometimes the pvestatd service hangs and that node is marked red (as unavailable) in the web GUI. When I look into its log, I see that a process launched by it, zpool status -o name -H rpool...
  8. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    Man, you have totally saved my day! After the dist-upgrade it's fully working again. Maybe some kernel incompatibility with the new ZFS? Thanks a lot! :)
  9. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    Hi, so I upgraded to ZFS 0.7.2 (added the pvetest repo and ran apt-get upgrade) and now none of my VMs start (CTs aren't affected) with an error that the disks are missing: the folder /dev/zvol just doesn't exist, so the symlinks aren't there. The /dev/zd* devices are visible. Some logs: zfs list...
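When the /dev/zd* block devices exist but the /dev/zvol symlinks are missing, a common workaround (not necessarily what resolved this particular thread) is to re-run the udev rules that create the links:

```shell
# Ask udev to replay events for existing devices so the zvol
# rules re-create the /dev/zvol/<pool>/<dataset> symlinks
udevadm trigger
udevadm settle

# Verify the symlinks are back
ls -l /dev/zvol
```

If the links still do not appear, the zvol udev rules shipped by the ZFS packages may be mismatched with the new module version, which would point back to the kernel/ZFS incompatibility suspected elsewhere in these threads.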