First of all, I'm not sure whether this forum answers questions about zfs or not. If not, please forgive my abruptness and point me to the right direction. Thanks in advance!
I started using Proxmox at version 5.4, several years ago, on an 8-bay PC with a zpool containing one raidz1 vdev of four 8TB HDDs. This zpool almost reached its maximum capacity several months ago, so I filled all the remaining bays of the machine with four more 14TB HDDs, creating another raidz1 vdev to extend the zpool. I also upgraded Proxmox to the latest version at that time. Recently, I started receiving warning emails about two of the old 8TB HDDs, as follows:
Code:
The following warning/error was logged by the smartd daemon:
Device: /dev/sdg [SAT], 1 Offline uncorrectable sectors
Device info:
HGST HUH728080ALE600, S/N:VKGLS23X, WWN:5-000cca-254c88546, FW:A4GNT514, 8.00 TB
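For reference, this is how I pull the full SMART details for the flagged disk (using the same smartmontools that sends these mails; /dev/sdg is the device named in the warning above):

```shell
# Full SMART attributes and error log for the flagged disk
smartctl -a /dev/sdg

# Kick off an extended self-test; results show up later in 'smartctl -a'
smartctl -t long /dev/sdg
```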
The current zpool status is shown below.
Code:
  pool: mainpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 16:07:31 with 0 errors on Sun Dec 10 16:31:32 2023
config:

        NAME                                   STATE     READ WRITE CKSUM
        mainpool                               ONLINE       0     0     0
          raidz1-0                             ONLINE       0     0     0
            ata-HGST_HUH728080ALE600_VKGLS23X  ONLINE       0     0     0
            ata-HGST_HUH728080ALE600_VKGLY24X  ONLINE       0     0     0
            ata-HGST_HUH728080ALE600_VKGMRXVX  ONLINE       0     0     0
            ata-HGST_HUH728080ALE600_VKGMYEVX  ONLINE       0     0     0
          raidz1-1                             ONLINE       0     0     0
            ata-WDC_WUH721414ALE6L4_9MG6JU8A   ONLINE       0     0     0
            ata-WDC_WUH721414ALE6L4_9MGNBNDU   ONLINE       0     0     0
            ata-WDC_WUH721414ALE6L4_9MGNGUNU   ONLINE       0     0     0
            ata-WDC_WUH721414ALE6L4_9MGNJ36T   ONLINE       0     0     0

errors: No known data errors
Since there are no spare bays available any more, I'm not confident about replacing those two failing 8TB HDDs in place. The capacity of the newly added raidz1-1 is certainly enough to hold everything on raidz1-0, so is it possible to move all my data from raidz1-0 to raidz1-1 and remove raidz1-0 afterwards? If not, what's the best approach for my zpool in its current situation?
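For context, this is the rough sequence I had in mind. It's only a sketch: I don't know whether 'zpool remove' even supports raidz top-level vdevs, which is really the core of my question.

```shell
# What I imagined doing -- not sure ZFS allows this for a raidz vdev:
# ask ZFS to evacuate the old vdev's data onto the rest of the pool
# and detach it when done
zpool remove mainpool raidz1-0

# watch the evacuation/removal progress
zpool status mainpool
```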
BTW, do I need to run 'zpool upgrade' as suggested in the zpool status output?
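As I understand it, the suggested command would simply be the following; what I'm unsure about is the compatibility consequence it warns about (older software no longer being able to import the pool):

```shell
# List which features are disabled and would be enabled by an upgrade
zpool upgrade

# Enable all supported features on the pool (a one-way operation)
zpool upgrade mainpool
```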
The version info of my Proxmox:
Code:
proxmox-ve: 8.1.0 (running kernel: 6.2.16-3-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1