How to fix a degraded ZFS pool

mcaroberts

New Member
Feb 14, 2024
I have a Dell server running four 1 TB drives via an HBA (no RAID enabled). These disks contain my Proxmox host and several LXCs on a ZFS pool. It had been working without issue for several months. The other day I was watching content from Plex, which runs on Proxmox; it started buffering and never came back. After several troubleshooting steps I rebooted the host, and the management website would no longer load. Troubleshooting the host OS, I found several Proxmox services were not loading, which led me to an issue with the ZFS pool that Proxmox is installed on.

Being primarily a Windows guy and new to Linux, I'm not sure how to fix it. See the output below from "zpool status -v". Any help would be greatly appreciated.


Code:
ProxmoxServer:~# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:08:15 with 0 errors on Sun Sep  8 00:32:18 2024
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      DEGRADED     0     0     0
          raidz1-0                                 DEGRADED     0     0     0
            ata-ST1000LM049-2GH172_ZGS17TJH-part3  DEGRADED     0     0   100  too many errors
            ata-ST1000LM049-2GH172_ZN900980-part3  DEGRADED     0     0   100  too many errors
            1543849991250364374                    UNAVAIL      0     0     0  was /dev/disk/by-id/ata-ST1000LM049-2GH172_ZGS1BGY2-part3
            ata-ST1000LM049-2GH172_ZN900Q5T-part3  DEGRADED     0     0   100  too many errors
            ata-ST1000LM049-2GH172_ZN90YJFM-part3  DEGRADED     0     0   100  too many errors

errors: Permanent errors have been detected in the following files:

        /rpool/data/subvol-109-disk-0/usr/share/man/man8/smtp.8postfix.gz
        /rpool/data/subvol-109-disk-0/usr/share/man/man7/tc-hfsc.7.gz
        /rpool/data/subvol-109-disk-0/usr/share/man/man8/ld.so.8.gz
        /rpool/data/subvol-109-disk-0/usr/share/man/man7/signal.7.gz
        /rpool/data/subvol-109-disk-0/usr/share/man/man7/bootparam.7.gz
        /rpool/data/subvol-109-disk-0/usr/share/man/man5/proc.5.gz
        /rpool/data/subvol-109-disk-0/usr/share/man/man1/posttls-finger.1.gz
        /rpool/data/subvol-109-disk-0/var/lib/apt/lists/deb.debian.org_debian_dists_bookworm_contrib_i18n_Translation-en
        /rpool/data/subvol-109-disk-0/usr/share/man/man8/oqmgr.8postfix.gz
        /rpool/data/subvol-105-disk-1/var/log/journal/035cddc9e8c84e9bb8dc0adae5620d9f/system@7daa68b8fc4b4f47933576ec05038319-00000000001383ee-00062cfb843dfdcb.journal
        /rpool/data/subvol-105-disk-1/var/log/journal/035cddc9e8c84e9bb8dc0adae5620d9f/system@00062de0cd3daea8-6b7f0cde01f9392b.journal~
        /rpool/data/subvol-108-disk-0/usr/lib/x86_64-linux-gnu/systemd/libsystemd-core-252.so
        /rpool/data/subvol-108-disk-0/usr/lib/x86_64-linux-gnu/systemd/libsystemd-shared-252.so
        /rpool/data/subvol-100-disk-0/usr/bin/systemctl
        /rpool/data/subvol-100-disk-0/usr/lib/x86_64-linux-gnu/libc.so.6
        /rpool/data/subvol-100-disk-0/usr/lib/modules/6.1.0-28-rt-amd64/kernel/drivers/net/ethernet/sfc/siena/sfc-siena.ko
        /rpool/data/subvol-100-disk-0/var/lib/apt/lists/security.debian.org_dists_bookworm-security_InRelease
        /rpool/data/subvol-100-disk-0/usr/lib/x86_64-linux-gnu/systemd/libsystemd-shared-252.so
        /rpool/data/subvol-100-disk-0/usr/lib/systemd/systemd-networkd
        /rpool/data/subvol-106-disk-1/usr/lib/x86_64-linux-gnu/libnettle.so.8.6
        /rpool/data/subvol-110-disk-0/usr/share/man/man7/epoll.7.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/de/man1/dpkg-deb.1.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man5/term.5.gz
        /rpool/data/subvol-110-disk-0/usr/lib/x86_64-linux-gnu/libc.so.6
        /rpool/data/subvol-110-disk-0/usr/share/man/man8/apt-cache.8.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man5/login.defs.5.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man1/tset.1.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man8/pipe.8postfix.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man1/sendmail.1.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man1/tar.1.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man1/dpkg-maintscript-helper.1.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man7/sched.7.gz
        /rpool/data/subvol-110-disk-0/usr/lib/x86_64-linux-gnu/systemd/libsystemd-shared-252.so
        /rpool/data/subvol-110-disk-0/usr/share/man/man5/scr_dump.5.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man8/nft.8.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man8/bridge.8.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man7/rtld-audit.7.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/da/man1/man.1.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man5/elf.5.gz
        /rpool/data/subvol-110-disk-0/usr/lib/systemd/systemd-networkd
        /rpool/data/subvol-110-disk-0/usr/share/man/man8/dhclient.8.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man8/cleanup.8postfix.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man7/inode.7.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man5/dhclient.conf.5.gz
        /rpool/data/subvol-110-disk-0/usr/bin/systemctl
        /rpool/data/subvol-110-disk-0/usr/share/man/man5/postconf.5.gz
        /rpool/data/subvol-110-disk-0/usr/share/man/man8/ld.so.8.gz
        /rpool/data/subvol-158-disk-0/usr/share/man/pt/man5/apt.conf.5.gz
        /rpool/data/subvol-158-disk-0/usr/share/man/pt/man5/sources.list.5.gz
        /rpool/data/subvol-158-disk-0/usr/share/man/man5/proc.5.gz
        /rpool/data/subvol-158-disk-0/usr/share/man/fr/man1/update-alternatives.1.gz
        /rpool/data/subvol-158-disk-0/usr/share/man/man5/dhclient.conf.5.gz
        /rpool/data/subvol-158-disk-0/usr/share/man/man5/postconf.5.gz
        /rpool/data/subvol-101-disk-1/var/log/journal/d103153ad8484ad99d1e50873c73396c/system.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-0000000000312401-0006305d8c8ddfa8.journal
        //var/lib/apt/lists/save/deb.debian.org_debian_dists_bookworm_main_i18n_Translation-en
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-0000000001596019-0006301dd069d0a8.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-00000000002d10d4-000630266c5d5beb.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-0000000001519568-00062fdf52631112.journal
        //var/cache/netdata/dbengine-tier1/journalfile-1-0000001013.njfv2
        //opt/dell/srvadmin/lib64/openmanage/apache-tomcat/RUNNING.txt
        //usr/lib/modules/6.8.12-4-pve/kernel/sound/pci/hda/snd-hda-codec.ko
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-0000000001079e62-00062d3937fa6c03.journal
        //etc/freeipmi/freeipmi_interpret_sensor.conf
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-00000000002c0c50-00063018ad2bd7fb.journal
        //opt/dell/srvadmin/lib64/openmanage/jre/lib/server/libjvm.so
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000015320f9-00062feb609a475d.journal
        //usr/lib/x86_64-linux-gnu/libssl3.so
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@000630f70d4a4ade-8a5f22a053696258.journal~
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-0000000001176b46-00062df0d58d1032.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-00000000002f1b0c-00063041ed384190.journal
        //var/cache/netdata/dbengine-tier2/journalfile-1-0000000042.njfv2
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-000000000034b37e-0006308e2fb9b719.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-00000000002c8eb0-0006301f8e203d4d.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000015fb515-000630553f005865.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-000000000120765c-00062e42c211976a.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000017183de-000630f703377674.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-00000000002d9388-0006302d48f0357f.journal
        //usr/lib/x86_64-linux-gnu/libnghttp2.so.14.24.1
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-000000000033af9a-000630806ec2ea49.journal
        //var/lib/pve-cluster/config.db
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-0000000000bf64b1-00062960a9968b90.journal
        //usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000010301c0-00062d0230adf847.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-00000000002a86ad-000630040b18c910.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2.netdata/system@ba2119820c95492aa36626ef78a3190e-00000000002e985b-0006303b09d27c10.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-0000000000c6ab7a-000629d96589e2e0.journal
        //usr/share/snmp/mibs/UCD-SNMP-MIB.txt
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000014e780a-00062fc6d1f39c20.journal
        //var/cache/netdata/dbengine/journalfile-1-0000007111.njfv2
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000015e17a4-00063046ab621678.journal
        //opt/dell/srvadmin/iSM/lib64/libdcrceclient.so.4.3.0.0
        //opt/dell/srvadmin/iSM/lib64/libdcsafpi.so.4.3.0.0
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000016626da-0006308f31b039af.journal
        //var/cache/netdata/dbengine-tier1/journalfile-1-0000000995.njfv2
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000016c9e59-000630c98a969915.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-000000000157c642-0006300fad16c0c2.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-0000000000ec13ac-00062be664bdda81.journal
        //var/log/journal/e991c7db34984e92bfeb5f58bbdfb7e2/system@38c86c2b3cad449198ed82c34fb888b5-00000000014b5d39-00062fae5f378441.journal
        /rpool/data/subvol-152-disk-0/usr/lib/x86_64-linux-gnu/libc.so.6
        /rpool/data/subvol-152-disk-0/usr/bin/systemctl
        /rpool/data/subvol-103-disk-0/usr/lib/x86_64-linux-gnu/systemd/libsystemd-core-252.so
        /rpool/data/subvol-150-disk-0/usr/lib/systemd/systemd
        /rpool/data/subvol-150-disk-0/usr/lib/plexmediaserver/lib/libssl.so.3
        /rpool/data/subvol-150-disk-0/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db
 
Code:
ProxmoxServer:~# lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                       8:0    0  54.6T  0 disk
├─Raid-vm--202--disk--0 252:0    0     4M  0 lvm
├─Raid-vm--202--disk--1 252:1    0   200G  0 lvm
├─Raid-vm--201--disk--3 252:2    0   9.8T  0 lvm
├─Raid-vm--201--disk--5 252:3    0     4M  0 lvm
├─Raid-vm--201--disk--6 252:4    0   200G  0 lvm
├─Raid-vm--201--disk--7 252:5    0  1000G  0 lvm
├─Raid-vm--201--disk--8 252:6    0     4M  0 lvm
├─Raid-vm--203--disk--1 252:7    0   100G  0 lvm
├─Raid-vm--204--disk--0 252:8    0   100G  0 lvm
├─Raid-vm--200--disk--0 252:9    0  11.7T  0 lvm
├─Raid-vm--200--disk--1 252:10   0  13.7T  0 lvm
├─Raid-vm--200--disk--2 252:11   0   500G  0 lvm
├─Raid-vm--200--disk--3 252:12   0   500G  0 lvm
└─Raid-vm--205--disk--0 252:13   0    40G  0 lvm
sdb                       8:16   0 931.5G  0 disk
├─sdb1                    8:17   0  1007K  0 part
├─sdb2                    8:18   0     1G  0 part
└─sdb3                    8:19   0   930G  0 part
sdc                       8:32   0 931.5G  0 disk
sdd                       8:48   0 931.5G  0 disk
├─sdd1                    8:49   0  1007K  0 part
├─sdd2                    8:50   0     1G  0 part
└─sdd3                    8:51   0   930G  0 part
sde                       8:64   0 931.5G  0 disk
├─sde1                    8:65   0  1007K  0 part
├─sde2                    8:66   0     1G  0 part
└─sde3                    8:67   0   930G  0 part
sdf                       8:80   0 931.5G  0 disk
├─sdf1                    8:81   0  1007K  0 part
├─sdf2                    8:82   0     1G  0 part
└─sdf3                    8:83   0   930G  0 part
sdg                       8:96   1     0B  0 disk
sr0                      11:0    1  1024M  0 rom
zd0                     230:0    0    30G  0 disk
└─zd0p1                 230:1    0    30G  0 part
zd16                    230:16   0   200G  0 disk
└─zd16p1                230:17   0   200G  0 part
zd32                    230:32   0    30G  0 disk
├─zd32p1                230:33   0    29G  0 part
├─zd32p2                230:34   0     1K  0 part
└─zd32p5                230:37   0   975M  0 part
 
When I check SMART on all these drives I don't see any errors:

Code:
ProxmoxServer:~# smartctl -a /dev/sdb
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.8.12-8-pve] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda Pro Compute
Device Model:     ST1000LM049-2GH172
Serial Number:    ZN90YJFM
LU WWN Device Id: 5 000c50 0e81f4af9
Firmware Version: SDM1
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      2.5 inches
TRIM Command:     Available
Device is:        In smartctl database 7.3/5319
ATA Version is:   ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Mar 24 10:35:31 2025 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (    0) seconds.
Offline data collection
capabilities:                    (0x71) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 123) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x3035) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   048   037   006    Pre-fail  Always       -       13849723
  3 Spin_Up_Time            0x0003   099   099   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       105
  5 Reallocated_Sector_Ct   0x0033   099   099   036    Pre-fail  Always       -       512
  7 Seek_Error_Rate         0x000f   086   060   045    Pre-fail  Always       -       426082457
  9 Power_On_Hours          0x0032   090   090   000    Old_age   Always       -       8986 (207 160 0)
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       105
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   066   000    Old_age   Always       -       25770262575
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   070   035   040    Old_age   Always   In_the_past 30 (0 5 30 24 0)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       63
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       1947
194 Temperature_Celsius     0x0022   030   065   000    Old_age   Always       -       30 (0 18 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       8986 (14 69 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       14198362168
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       18567286663
254 Free_Fall_Sensor        0x0032   100   100   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

The results for all four drives are the same
 
Did one drive break down, or did a cable become disconnected? Maybe the cables to the other drives are also introducing corruption through poor contact? Or maybe your host memory has gone bad and is causing corruption?
Most of those files are not very important, but some are, so I would recommend sorting out the hardware first (check cables, run memtest and long SMART tests on the drives), then reinstalling Proxmox and restoring your VMs/CTs/data from backups (once the hardware is fixed).
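If it helps, the long SMART tests could be kicked off like this (just a sketch; the device names are taken from your lsblk output, adjust to your layout):

```shell
# Start a long (extended) SMART self-test on each pool member disk.
# The test runs inside the drive itself; per your smartctl output it
# takes about 123 minutes per drive, and all drives can test in parallel.
for dev in sdb sdc sdd sde sdf; do
    smartctl -t long "/dev/$dev"
done

# A couple of hours later, read back the result for each drive:
smartctl -a /dev/sdb | grep -A 10 'Self-test log'
```

For the memory, boot the server from a memtest86+ USB stick and let it run a few full passes overnight.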

EDIT: You are using SMR drives, which are terrible with ZFS (please search the forum, and replace them with CMR drives).
 
Code:
Device Model:     ST1000LM049-2GH172

It's a 2.5" SMR disk?
This is the first I have heard of SMR or CMR drives in the 30 years I have been doing this stuff, but you are correct: they are all 2.5" SMR Seagate Barracuda drives. They were all pulled out of some Dell laptops we had lying around for parts. I thought I would save some money that way after putting almost $700 into my main storage array. Seems to have been a bad choice.
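For anyone else who has never run into this: the tell was apparently sitting in my own smartctl output all along. As far as I can tell, a spinning drive that advertises TRIM support is almost always SMR, since TRIM only makes sense on an HDD for shingled zone management:

```shell
# On a 7200 rpm drive, "TRIM Command: Available" is a strong SMR hint.
smartctl -i /dev/sdb | grep -E 'Rotation Rate|TRIM Command'
```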

So from the post above it seems there is no fixing the issue outside of restoring from backup, which is a bit of an issue in its own right. I had an issue with my PBS box too, which I just fixed, so my backups are a little dated. I will lose some stuff if I have to go that route; not the end of the world, but not ideal either.
 
This is the first I have heard of SMR or CMR drives in the 30 years I have been doing this stuff, but you are correct: they are all 2.5" SMR Seagate Barracuda drives. They were all pulled out of some Dell laptops we had lying around for parts. I thought I would save some money that way after putting almost $700 into my main storage array. Seems to have been a bad choice.
Please don't replace them with a 4 TB QLC SSD that looks like a good deal; you'll get into the same kind of trouble. People refuse to believe that they wasted their money only to get single-digit KB/s (sustained) writes, time-outs, and errors with ZFS, but it has happened multiple times before on this forum.

So from the post above it seems there is no fixing the issue outside of restoring from backup, which is a bit of an issue in its own right. I had an issue with my PBS box too, which I just fixed, so my backups are a little dated. I will lose some stuff if I have to go that route; not the end of the world, but not ideal either.
You can try to read all the non-corrupted files onto some new storage, including the VM and CT virtual disks. I don't know how many more errors you will encounter, but thankfully ZFS has checksums and will let you know if files happen to be bad. It will involve a lot of command line work and maybe even virtual disk conversions (in case of a different storage type).
 
I'm not sure how to fix it. See the output below from "zpool status -v".
action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
I don't know what you're asking. zpool status is telling you how to "fix it."

As in, it's no longer fixable.

The lessons to draw from this are as follows:
1. If you are running this on a system without ECC RAM: don't.
2. RAIDZ1 is a no-no. You had a disk failure PLUS failed writes on top; the result is data corruption.
3. Keep a closer eye on your journal messages. Chances are the failed drive didn't fail overnight, and there would have been some warning.

Now if you're asking "can I salvage anything?", the answer is a qualified yes. ddrescue is your friend.
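A minimal ddrescue workflow, as a sketch (device names are examples; triple-check them with lsblk first, and the destination must be a blank disk at least as large as the source):

```shell
# The tool ships in the gddrescue package on Debian/Proxmox.
apt install gddrescue

# Pass 1: copy everything readable, skipping bad areas quickly (-n).
# The map file records progress, so the copy can be stopped and resumed.
ddrescue -f -n /dev/sdb /dev/sdX rescue-sdb.map

# Pass 2: go back and retry only the bad areas a few times (-r3),
# reusing the same map file so finished regions are not re-read.
ddrescue -f -r3 /dev/sdb /dev/sdX rescue-sdb.map
```

Work from the rescued copy afterwards, never the failing original.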
 
Please don't replace them with a 4 TB QLC SSD that looks like a good deal; you'll get into the same kind of trouble. People refuse to believe that they wasted their money only to get single-digit KB/s (sustained) writes, time-outs, and errors with ZFS, but it has happened multiple times before on this forum.


You can try to read all the non-corrupted files onto some new storage, including the VM and CT virtual disks. I don't know how many more errors you will encounter, but thankfully ZFS has checksums and will let you know if files happen to be bad. It will involve a lot of command line work and maybe even virtual disk conversions (in case of a different storage type).
By QLC SSD do you mean any SSD, or is this a specific type of SSD? I was thinking about getting two 250 GB SSDs for the OS in a ZFS mirror and two 2 TB ones in a ZFS mirror for LXCs and VMs. Is there something I should be looking for, or are you saying I need to stick with CMR spinning drives?
 
I don't know what you're asking. zpool status is telling you how to "fix it."

As in, it's no longer fixable.

The lessons to draw from this are as follows:
1. If you are running this on a system without ECC RAM: don't.
2. RAIDZ1 is a no-no. You had a disk failure PLUS failed writes on top; the result is data corruption.
3. Keep a closer eye on your journal messages. Chances are the failed drive didn't fail overnight, and there would have been some warning.

Now if you're asking "can I salvage anything?", the answer is a qualified yes. ddrescue is your friend.
I will look into ddrescue, thanks for the heads-up.
 
By QLC SSD do mean any SSD or is this a specific type of SSD?
It's a (consumer) SSD that uses QLC flash memory: https://en.wikipedia.org/wiki/Multi-level_cell#Quad-level_cell
Try the forum search: https://forum.proxmox.com/search/8262617/?q=QLC&t=post&c[child_nodes]=1&c[nodes][0]=16&o=date
I was thinking about getting two 250 GB SSDs for the OS in a ZFS mirror and two 2 TB ones for LXCs and VMs. Is there something I should be looking for, or are you saying I need to stick with CMR spinning drives?
Best to use (second-hand) enterprise SSDs with PLP, which is also always suggested in the threads about QLC: https://en.wikipedia.org/wiki/Solid-state_drive#Battery_and_supercapacitor
 
I don't know what you're asking. zpool status is telling you how to "fix it."

As in, it's no longer fixable.

The lessons to draw from this are as follows:
1. If you are running this on a system without ECC RAM: don't.
2. RAIDZ1 is a no-no. You had a disk failure PLUS failed writes on top; the result is data corruption.
3. Keep a closer eye on your journal messages. Chances are the failed drive didn't fail overnight, and there would have been some warning.

Now if you're asking "can I salvage anything?", the answer is a qualified yes. ddrescue is your friend.
I'm running this on a Dell T430 with 192 GB of ECC registered RAM. In the message above, how were you able to tell that one of the drives failed? It looks to me like it was just data corruption. I'm just trying to learn as much as possible; as we all know, you should never let a tragedy go to waste.
 
It's a (consumer) SSD that uses QLC flash memory: https://en.wikipedia.org/wiki/Multi-level_cell#Quad-level_cell
Try the forum search: https://forum.proxmox.com/search/8262617/?q=QLC&t=post&c[child_nodes]=1&c[nodes][0]=16&o=date

Best to use (second-hand) enterprise SSDs with PLP, which is also always suggested in the threads about QLC: https://en.wikipedia.org/wiki/Solid-state_drive#Battery_and_supercapacitor
Thank you for taking the time to provide the info and links; you have been a big help!
 
I know where the config files are, but where are the contents of the LXC containers stored on the host OS filesystem? Is it possible to copy them off and put them back on a new install?
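From poking around so far, this looks like the layout (an assumption based on the default Proxmox ZFS setup; please correct me if I'm wrong):

```shell
# Each container's root filesystem appears to be a ZFS dataset under
# rpool/data, mounted on the host at /rpool/data/subvol-<CTID>-disk-N:
zfs list -r rpool/data

# So e.g. container 109's files could be copied to healthy storage with:
rsync -aHAX /rpool/data/subvol-109-disk-0/ /mnt/backup/subvol-109-disk-0/
```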
 