[SOLVED] ZFS Autoreplace and Autoexpand, should it work?

fireon

Hello,

I have this Proxmox version for testing:
Code:
proxmox-ve: 4.1-28 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-2 (running version: 4.1-2/78c5f4a2)
pve-kernel-4.2.6-1-pve: 4.2.6-28
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 0.17.2-1
pve-cluster: 4.0-29
qemu-server: 4.0-42
pve-firmware: 1.1-7
libpve-common-perl: 4.0-42
libpve-access-control: 4.0-10
libpve-storage-perl: 4.0-38
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-18
pve-container: 1.0-35
pve-firewall: 2.0-14
pve-ha-manager: 1.0-16
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-5
lxcfs: 0.13-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie

I would like to use autoreplace and autoexpand on ZFS storage. I tested it and it does not work, but there is also no error message. I use a separate zpool for the autoreplace and autoexpand feature, since I assumed it is not possible on the root pool because of the extra EFI/GRUB partition.

Code:
root@pvetest:~# zpool get all | grep autoreplace
rpool       autoreplace                 off                         default
v-machines  autoreplace                 on                          local
So what do I need to do to be able to use this feature with ZFS on Linux?
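For reference, this is roughly how the two properties and a hot spare are normally set up; a minimal sketch, assuming the pool name v-machines from above and placeholder /dev/disk/by-id device names:
Code:
# Enable automatic replacement from hot spares and automatic capacity expansion
zpool set autoreplace=on v-machines
zpool set autoexpand=on v-machines

# Add a hot spare that autoreplace can pull in when a member disk fails
# (the by-id name below is only a placeholder)
zpool add v-machines spare /dev/disk/by-id/ata-EXAMPLE-SPARE

# Verify the settings
zpool get autoreplace,autoexpand v-machines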

Best Regards
 
Then I have misunderstood. I thought it works like this: go to the server, swap the bad disk by hand, and done; the replacement in the zpool happens automatically. OK, so that is not the case.

And with autoexpand you mean replacing existing disks (disk by disk) with bigger ones to expand the space? I thought I could insert two or more new disks (depending on the RAID level) into the server and they would be added automatically to the existing pool.
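For the failure case, the replacement that does not happen automatically can be done by hand with zpool replace; a minimal sketch, with placeholder by-id names for the failed and the new disk:
Code:
# Swap the disk physically, then tell the pool about the new device
zpool replace v-machines /dev/disk/by-id/ata-FAILED-DISK /dev/disk/by-id/ata-NEW-DISK

# Watch the resilver progress
zpool status v-machines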
 
To make a ZFS pool bigger there are two plans; example commands follow after the lists.

Plan A

Add new disks to the pool

Pros

* Fast way to expand the pool

Cons

* Cannot remove the disks anymore
* Unbalanced space usage

Plan B

Replace existing disks with bigger ones

Pros

* Balanced space usage

Cons

* Need to replace all disks one by one
* Takes a long time
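A minimal sketch of both plans, assuming a mirrored pool named v-machines and placeholder by-id device names:
Code:
# Plan A: add another mirror vdev - fast, but it cannot be removed again later
zpool add v-machines mirror /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2

# Plan B: replace the existing disks with bigger ones, one at a time
zpool set autoexpand=on v-machines
zpool replace v-machines /dev/disk/by-id/ata-OLD1 /dev/disk/by-id/ata-BIG1
# wait for the resilver to finish, then replace the next disk
zpool replace v-machines /dev/disk/by-id/ata-OLD2 /dev/disk/by-id/ata-BIG2
# the extra space only appears once every disk in the vdev is bigger;
# if autoexpand was off, it can still be claimed afterwards with:
# zpool online -e v-machines /dev/disk/by-id/ata-BIG1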
 
Tested autoreplace. Does not work.
Code:
zpool status          
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
     sda2      ONLINE       0     0     0

errors: No known data errors

  pool: v-machines2
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
    invalid.  Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    v-machines2  DEGRADED     0     0     0
     mirror-0  DEGRADED     0     0     0
       sdb     ONLINE       0     0     0
       sdc     UNAVAIL      0     0     0
    spares
     sdd       AVAIL

Code:
zpool get autoreplace
NAME         PROPERTY     VALUE    SOURCE
rpool        autoreplace  off      default
v-machines2  autoreplace  on       local
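With the spare already attached, the pool can at least be repaired by hand; a sketch using the device names from the status output above:
Code:
# Put the spare in place of the missing disk
zpool replace v-machines2 sdc sdd

# Watch the resilver; once it has finished, detach the old disk
# so the spare becomes a permanent member of the mirror
zpool status v-machines2
zpool detach v-machines2 sdc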
 
On my new Proxmox 5.2 instance this package is not installed; after installing this package, the pool automatically gets the spare disk after the base disk goes bad.
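For what it's worth, on ZFS on Linux the autoreplace/hot-spare handling is done by the ZFS Event Daemon. Assuming the package meant here is zfs-zed (an assumption, the name is not given above), a quick check could look like this:
Code:
# Assumption: zfs-zed is the package in question
dpkg -l zfs-zed
# zed must be running for autoreplace and spares to kick in
systemctl status zfs-zed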
 
You are right, I had a look at my newly installed host. The package was not there. I will test this again. But on my old test setup this package was installed.
 
