Search results

  1. pvestatd doesn't let HDDs go to sleep

    For me it worked to exclude them in /etc/lvm/lvm.conf by adding r|/dev/sdX| to global_filter like this: global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|", "r|/dev/sda|", "r|/dev/sdd|" ] sda and sdd are from a ZFS pool, not...
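    A cleaned-up version of that filter might look like the fragment below. This is a sketch based on the snippet above: the sda/sdd entries are examples from that post, and you would substitute the device names of your own pool members.

    ```
    # /etc/lvm/lvm.conf — devices section
    # Tell LVM scanning (and with it pvestatd's periodic status checks)
    # to ignore these devices so the disks can spin down.
    global_filter = [
        "r|/dev/zd.*|",                                       # ZFS zvols
        "r|/dev/mapper/pve-.*|",                              # Proxmox LVM volumes
        "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|",  # guest disk mappings
        "r|/dev/sda|",                                        # ZFS pool member (example)
        "r|/dev/sdd|"                                         # ZFS pool member (example)
    ]
    ```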
  2. Permanently read I/O on ZFS pool without known access

    It seems that I was too fast. Waiting some time after systemctl stop pvestatd helped: most of the I/O went away, but with the side effect of no longer having any status information in the Proxmox UI. I found a better solution using the global_filter: global_filter = [ "r|/dev/zd.*|"...
  3. Permanently read I/O on ZFS pool without known access

    I'm using Proxmox 6.2.4 and I have a ZFS pool on the server with two 8TB disks (WD Red and Seagate). It was previously used for Proxmox VMs, but now it mainly holds the data of Nextcloud in a Docker instance on the same host. Since the server runs 24/7, I want to take care of standby for the...
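    Disk standby of the kind described here is typically configured with hdparm. A minimal sketch follows; the 20-minute timeout and the by-id device path are assumptions for illustration, not values from the thread, and the command is printed rather than executed since it needs root and a real disk:

    ```shell
    # Desired idle time before spindown, in minutes (assumed value).
    IDLE_MIN=20

    # hdparm -S values from 1 to 240 count in units of 5 seconds,
    # so 20 minutes = 1200 s / 5 = 240.
    S_VALUE=$(( IDLE_MIN * 60 / 5 ))

    # Device path is hypothetical; list /dev/disk/by-id to find your disk.
    echo "hdparm -S $S_VALUE /dev/disk/by-id/ata-WDC_WD80EFAX-example"
    ```

    Note that this only helps once nothing (like pvestatd's LVM scans) keeps waking the disk; that is where the global_filter exclusions above come in.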
  4. Proxmox ZFS raid-1 setup using UUID not /sdX

    I have a separate SSD for the OS; my ZFS pools are only for data. From what I can see, it seems possible to export the pool and then re-import it using the -d switch, which accepts a path to the devices, where I could use by-id. According to a post on superuser.com which I can't post cause of...
  5. Proxmox ZFS raid-1 setup using UUID not /sdX

    I found this thread because of a strange problem: I have 2 ZFS pools. Now I want to add 2 new HDDs for a third pool. When I attach the new HDDs, my existing ZFS pools don't work any more. I got an error that required drives are missing. I found out that the first HDD from pool 1 is mounted on sda. After I...
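    The export/re-import approach mentioned in this thread can be sketched as follows. The pool name tank is hypothetical; the commands are built and printed rather than executed here, since they must be run as root on the host while the pool is not in use:

    ```shell
    # Switch an existing pool from unstable /dev/sdX names to stable
    # /dev/disk/by-id paths, so adding new disks can't shuffle the pool.
    POOL=tank   # hypothetical pool name; substitute your own

    # Export the pool, then re-import it with -d pointing zpool at the
    # by-id directory so member devices are recorded by their IDs:
    EXPORT_CMD="zpool export $POOL"
    IMPORT_CMD="zpool import -d /dev/disk/by-id $POOL"

    echo "$EXPORT_CMD"
    echo "$IMPORT_CMD"
    ```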
  6. Installing from USB Drive issues

    I had the same problem using Easy2Boot. The tool was used to create a multiboot USB stick with different OS images (some Linux distributions like Kubuntu, live CDs, and also multiple Windows versions from 7 to 10). It gave me the same error on a test system with a 220GB Crucial SSD, 1x 160GB WD IDE...