Greetings friends,
This is my first post on here, so please bear with me - still learning a lot!
Context
I fired up a PVE server a few months ago with spare PC parts, running the OS on an SSD and using two 12TB HDDs in a ZFS mirror (RAID1) for storage (set up via the ZFS wizard in PVE).
All has been fine for months. I wanted to expand my storage and just installed two more 12TB drives. (It's worth noting that these two new drives are connected to the motherboard via an M.2-to-SATA adapter, which has its own RAID management software... still figuring that out.)
Problem
I opened the PVE GUI to add these drives, and found under ZFS that my ORIGINAL drives are "DEGRADED".
I should note here that my power went out yesterday, and I didn't check on the drives after the power came back on (dumb). Could that be the source of the problem?
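(From what I've read since, the usual first step after an unclean shutdown is to scrub the pool so ZFS re-verifies every block. Assuming my pool name from the output below, I believe that would look something like:)

```shell
# Start a full scrub of the pool (pool name taken from my zpool status output)
zpool scrub zpool
# Check on scrub progress
zpool status zpool
```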
Analysis
zpool status -v shows CKSUM errors on BOTH drives (different counts), and the numbers increase every few seconds each time I re-run the command
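(Re-running it by hand gets old fast; in case anyone wants to see what I'm seeing, something like this shows the counters climbing live:)

```shell
# Re-run zpool status every 5 seconds to watch the CKSUM counters climb
watch -n 5 'zpool status -v zpool'
```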
smartctl on each disk shows no problems...?
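For reference, this is roughly what I checked on each disk (device names taken from my zpool output):

```shell
# Full SMART report (health status, error logs, attributes) for each mirror member
smartctl -a /dev/sdb
smartctl -a /dev/sdc
# I could also queue an extended self-test, though it takes hours on 12TB drives
smartctl -t long /dev/sdb
```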
Output of zpool status -vLP:
Code:
root@pve:~# zpool status -vLP
  pool: zpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 594M in 00:00:41 with 1 errors on Fri May 17 12:44:37 2024
config:

        NAME           STATE     READ WRITE CKSUM
        zpool          DEGRADED     0     0     0
          mirror-0     DEGRADED     0     0     0
            /dev/sdb1  DEGRADED     0     0    87  too many errors
            /dev/sdc1  DEGRADED     0     0    84  too many errors

errors: Permanent errors have been detected in the following files:

        (2 files are listed here)
Note: the data itself on these drives is not extremely critical, so no need to panic about backing anything up immediately
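In case it helps with suggestions: if this turns out to be fallout from the power outage (or the new adapter) rather than dying disks, my rough understanding of the recovery (please correct me if I'm wrong) is:

```shell
# After fixing the underlying cause: reset the pool's error counters
zpool clear zpool
# Then scrub so every block in the mirror gets re-verified
zpool scrub zpool
# The two files under "Permanent errors" would need to be restored from
# elsewhere or deleted; a later scrub should then drop them from the list
```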