[SOLVED] ZFS autoreplace


Aug 3, 2015
I wanted to create a zpool of two mirrored USB devices that could automatically recover/replace when one of the USB disks is removed and reattached later.
I have read that ZFS has a property for this, but I am not able to get it working in Proxmox.

What I did:

1. Created aliases for my devices by-path in /etc/zfs/vdev_id.conf and ran udevadm trigger
alias USB01 /dev/disk/by-path/pci-0000:00:14.0-usb-0:2:1.0-scsi-0:0:0:0
alias USB02 /dev/disk/by-path/pci-0000:00:14.0-usb-0:1:1.0-scsi-0:0:0:0

2. Formatted them with GPT labels
parted /dev/disk/by-path/pci-0000:00:14.0-usb-0:2:1.0-scsi-0:0:0:0 mklabel gpt
parted /dev/disk/by-path/pci-0000:00:14.0-usb-0:1:1.0-scsi-0:0:0:0 mklabel gpt

3. Created the ZFS Pool
zpool create "ZFS_Pool_RAID1-USBs" mirror USB01 USB02

4. Installed ZED and added the relevant settings to zed.rc
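The post doesn't say which settings were added to zed.rc. For reference, the lines in /etc/zfs/zed.d/zed.rc that typically get touched look like this (variable names are from the stock OpenZFS zed.rc; the values shown are an example, not what the poster used):

```shell
# /etc/zfs/zed.d/zed.rc -- example settings (values are assumptions)
ZED_EMAIL_ADDR="root"    # where zed sends event notifications
ZED_NOTIFY_VERBOSE=1     # also notify on events that resolve successfully
# Note: autoreplace/auto-online behaviour is driven by the pool property
# plus the zed daemon being active; there is no separate zed.rc switch for it.
```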

5. Set the autoreplace property on the pool
zpool set autoreplace=on ZFS_Pool_RAID1-USBs

When I remove one of the disks, the pool becomes degraded.
When I reattach the disk that went offline, the pool is NOT restored to normal automatically, and I have to bring the device back manually:
zpool online ZFS_Pool_RAID1-USBs USB0*
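For the automatic handling to have any chance of working, the zed daemon must be running while the disk is pulled and reattached. A quick way to check on a systemd-based Proxmox install (service name assumed to be zfs-zed, the usual one on Debian-based systems):

```shell
# Confirm the ZFS Event Daemon is active (it reacts to device add/remove events)
systemctl status zfs-zed

# Follow zed's log output while detaching and reattaching the disk,
# to see whether it notices the device coming back
journalctl -u zfs-zed -f
```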

Is this a bug or am I just doing it wrong?
Did you attach a new disk or the old used one? Anything in the message log from zed? Was zed running while you tested this?

