Hi,
my Proxmox server suddenly reported a SUSPENDED status on my ZFS volume. I've got 2 Samsung M.2 990 Pro SSDs; the first of them had been in REMOVED status, and the volume was simply in DEGRADED mode.
Here is my zpool status:
pool: data
state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
config:
NAME                                                              STATE     READ WRITE CKSUM
data                                                              DEGRADED     0     0     0
  mirror-0                                                        DEGRADED    28    18     0
    nvme-Samsung_SSD_990_PRO_with_Heatsink_4TB_S7HRNJ0WC04330B_1  REMOVED      0     0     0
    nvme-Samsung_SSD_990_PRO_with_Heatsink_4TB_S7HRNJ0WC04333J_1  ONLINE      56    19     0
errors: 144 data errors, use '-v' for a list
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 00:02:06 with 0 errors on Sun Mar 10 00:26:07 2024
config:
NAME                                               STATE     READ WRITE CKSUM
rpool                                              ONLINE       0     0     0
  nvme-eui.000000000000000100a075234516b8dc-part3  ONLINE       0     0     0
errors: No known data errors
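Based on the "action" line above, I assume the recovery would go something like this once the removed NVMe is reseated and visible again (just my guess at the steps, not tested yet):

# check whether the removed NVMe shows up again after reseating / rebooting
ls -l /dev/disk/by-id/ | grep Samsung_SSD_990_PRO

# list the files affected by the 144 data errors
zpool status -v data

# clear the error state so the pool can resume I/O
zpool clear data

# then verify the mirror with a scrub
zpool scrub data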
Any advice?
Steff