Search results

  1. Notify scrub proxmox 7

    I have this:
    root@Proxmox:~# cat /etc/cron.d/zfsutils-linux
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    # TRIM the first Sunday of every month.
    24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi
    # Scrub the...
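
    The snippet is cut off at the scrub entry. For reference, on a stock Debian/Proxmox install the remainder of that file usually looks like the sketch below (based on the zfsutils-linux package default, not taken from the post):

      # Scrub the second Sunday of every month.
      24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi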
  2. Notify scrub proxmox 7

    Although I didn't get the first part, yes, you'll get an email notification when your scrub job finishes, provided zed.rc and your mail setup are configured correctly.
  3. Notify scrub proxmox 7

    This is wrong. zfsutils-linux ships cron jobs for scrubs and TRIMs.
  4. Notify scrub proxmox 7

    Make sure that you set the email address in "/etc/pve/datacenter.cfg". You should first test whether the host can send any email at all. Try the following:
    echo -e "Subject: Test\n\nThis is a test" | /usr/bin/pvemailforward
    For your second question: yes, of course you can manually scrub your zpools. See...
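
    For the manual-scrub part that the snippet cuts off, a minimal sketch using the standard zpool commands (the pool name "rpool" is only an example):

      # start a scrub and check its progress
      zpool scrub rpool
      zpool status rpool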
  5. Notify scrub proxmox 7

    If I'm not mistaken, you need to edit "/etc/zfs/zed.d/zed.rc" to get email notifications about ZFS events. Mine looks like this:
    root@Proxmox:~# cat /etc/zfs/zed.d/zed.rc | grep -v "^#"
    ZED_EMAIL_ADDR="root"
    ZED_EMAIL_PROG="mail"
    ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"...
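
    Besides the three variables shown, two other stock zed.rc settings are commonly relevant for scrub notifications (a sketch; values are examples, not from the post):

      ZED_NOTIFY_INTERVAL_SECS=3600   # rate-limit repeated notifications for the same event class
      ZED_NOTIFY_VERBOSE=1            # also notify on scrub_finish when the pool is healthy, not only on errors

    After editing, restart the daemon with "systemctl restart zfs-zed" so the changes take effect.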
  6. find zfs configuration

    Check zpool-import(8) for details.
  7. Update to 5.11.22-7-pve causes zfs issues

    I've installed the kernel 5.13.19-1-pve. I can use arcstat and arc_summary without errors. I'm not sure, though, whether all the relevant information is there; I need to check that later. It seems there are new ZFS features available, so I can upgrade my pools. What are those changes exactly?
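
    For context, pool feature upgrades are usually inspected and applied with the standard zpool commands sketched below (the pool name "rpool" is only an example; enabling features is one-way and can make the pool unreadable to older kernels or boot loaders):

      zpool upgrade          # list pools that have feature upgrades available
      zpool upgrade rpool    # enable all supported features on that pool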
  8. Update to 5.11.22-7-pve causes zfs issues

    Are there any issues with that kernel? And which one is the newest?
    pve-kernel-5.13/stable 7.1-4 all Latest Proxmox VE Kernel Image
    pve-kernel-5.13.14-1-pve/stable 5.13.14-1 amd64 The Proxmox PVE Kernel Image
    pve-kernel-5.13.18-1-pve/stable 5.13.18-1 amd64 The Proxmox PVE Kernel Image...
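
    A sketch of pulling in the newest 5.13 series via the meta-package shown in the listing (assuming the Proxmox repository is already configured):

      apt update
      apt install pve-kernel-5.13   # meta-package that tracks the latest 5.13.x build
      reboot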
  9. ZFS mirror: replace bad disk

    Check this out https://www.thomas-krenn.com/de/wiki/Boot-Device_Replacement_-_Proxmox_ZFS_Mirror_Disk_austauschen
  10. Update to 5.11.22-7-pve causes zfs issues

    Did you just remove those dictionary objects, or was this an official patch from the maintainers?
  11. Update to 5.11.22-7-pve causes zfs issues

    So I've updated again and now I get another error:
    root@Proxmox:~# apt update && apt list --upgradable
    Hit:1 http://security.debian.org/debian-security bullseye-security InRelease
    Hit:2 http://ftp.debian.org/debian bullseye InRelease
    Hit:3 http://download.proxmox.com/debian/pve bullseye...
  12. Update to 5.11.22-7-pve causes zfs issues

    I’ll check them out later after work and report back.
  13. Update to 5.11.22-7-pve causes zfs issues

    Just let me know if I can test something.
  14. Linux Kernel 5.13, ZFS 2.1 for Proxmox VE

    Also a warning, see https://forum.proxmox.com/threads/update-to-5-11-22-7-pve-causes-zfs-issues.99401/
  15. Update to 5.11.22-7-pve causes zfs issues

    Hi Thomas, yes, this is PVE 7.0-14 from the community repo with the following kernel and ZFS:
    root@Proxmox:~# pveversion
    pve-manager/7.0-14/a9dbe7e3 (running kernel: 5.11.22-7-pve)
    root@Proxmox:~# uname -r
    5.11.22-7-pve
    root@Proxmox:~# zfs -V
    zfs-2.1.1-pve1
    zfs-kmod-2.0.6-pve1
    root@Proxmox:~#...
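
    Note that the output above shows the 2.1.1 userland paired with the 2.0.6 kernel module. A quick sketch of how such a mismatch is usually confirmed (standard OpenZFS interfaces; commands are not taken from the post):

      zfs version                   # prints both the userland and kernel-module versions
      cat /sys/module/zfs/version   # version of the currently loaded zfs module
      dpkg -l 'zfs*'                # which zfs packages are actually installed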
  16. Update to 5.11.22-7-pve causes zfs issues

    Hello everyone, I just updated my server from 5.11.22-5-pve to 5.11.22-7-pve and have had issues since with the commands "arcstat" and "arc_summary". They now throw the following errors:
    root@Proxmox:~# arcstat
    time read miss miss% dmis dm% pmis pm% mmis mm% size c avail...
  17. vTPM support - do we have guide to add the vTPM support?

    When will the pve-manager (7.0-13) be released to the community repository?
  18. [TUTORIAL] How to upgrade LXC containers from buster to bullseye

    Yes, that was it. The dpkg process gets killed by the OOM killer. I bumped up the amount of memory and updated successfully. I'll update the post above...
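
    A sketch of bumping a container's memory and swap limits from the host before retrying the upgrade (the container ID 107 and the values are only examples):

      pct set 107 --memory 512 --swap 512   # raise the limits in the container config
      pct reboot 107                        # restart the container so the new limits are in effect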
  19. [TUTORIAL] How to upgrade LXC containers from buster to bullseye

    Sure, here is a simple container with a Grafana installation:
    arch: amd64
    cores: 1
    hostname: Grafana
    memory: 128
    net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=AE:D7:85:02:0C:0F,ip=dhcp,type=veth
    onboot: 1
    ostype: debian
    rootfs: Containers:subvol-107-disk-1,size=4G
    swap: 128
    unprivileged: 1
    Here...