[SOLVED] Incorrect ZFS pool & disk state?

cholzer
Nov 5, 2021
Hello!

I am currently test-driving Proxmox 7 on a test machine and ran into this confusing behaviour.

I created a new RAID10 ZFS pool out of 4x 120GB drives.
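For reference, the command-line equivalent of what the GUI created here would look something like this (the pool name "tank" and the /dev/sd* paths are placeholders; Proxmox itself uses /dev/disk/by-id paths):

Code:
# RAID10 in ZFS terms: a stripe across two mirrored pairs
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
zpool status tank    # all four disks should report ONLINE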

Then I UNPLUGGED the 120GB Samsung SSD, yet it still shows up as "ONLINE" in the ZFS status (both in the web GUI and on the console)!?

Also, the ZFS pool still reports as healthy even though a disk is gone.

Under DISKS, the Samsung SSD is gone, so Proxmox did notice it on some level.
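The Disks panel just reflects what the kernel sees, so you can confirm the same on the console:

Code:
lsblk                     # the unplugged SSD no longer appears
ls -l /dev/disk/by-id/    # its by-id symlink is gone as well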

Only after I reboot the system does Proxmox tell me that the zpool is degraded and a drive is missing.

That does not seem right... right? o_O
 
Quoting cholzer: "Only after I reboot the system"
Hi,

my personal observation from just this afternoon, while removing a disk for a physical drive swap: this can happen if the pool is simply unused, e.g. empty, with zero VMs/containers on it.

Only after something is written to the pool (or a setting is changed, for example) will the state change.
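You can also trigger that refresh by hand. Writing a small file, or starting a scrub, forces I/O on every member disk (pool name "tank" is a placeholder again):

Code:
# either write something to a dataset on the pool...
dd if=/dev/urandom of=/tank/probe bs=1M count=10
sync    # make sure the write actually hits the disks
# ...or scrub, which reads from every vdev
zpool scrub tank
zpool status tank    # the missing disk should now show as REMOVED/FAULTED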

Best regards
 
Quoting the reply above: "this can happen if the pool is simply unused ... only after something is written to the pool will the state change"
You are right!
If the disk drops out *while data is being read from or written to* the pool, it does go to DEGRADED and I get a notification from ZED (the ZFS Event Daemon).

Thank you! :)
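In case anyone else lands here: the notification comes from ZED, the ZFS Event Daemon. Its mail settings live in /etc/zfs/zed.d/zed.rc; the variable names below are from the stock zed.rc shipped with OpenZFS, and the address is just an example:

Code:
# /etc/zfs/zed.d/zed.rc
ZED_EMAIL_ADDR="root"     # where state-change mails are sent
ZED_NOTIFY_VERBOSE=1      # also notify on non-fault events
# apply the change:
# systemctl restart zfs-zed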