Hi Forum,
I'm pretty new to ZFS, so I'm a bit confused by the behavior of my recently set-up HPE server.
After roughly 10 years my Synology NAS refused to boot. So I took an old HPE MicroServer Gen10, added an SSD for the OS, and moved the four WD40 drives over from the NAS.
One drive has about 13k power-on hours; the other three have about 60k. Even though the Synology had been complaining about errors on one of the disks before it died, I was able to install Proxmox and create a ZFS pool (raidz2) without any issues.
The onboard Marvell RAID is not used - at least no virtual disk has been created. The server is running an updated BIOS in UEFI mode (ZA10A380).
I then restored my data backup onto the ZFS pool (partly as storage for a VM, partly as storage for containers).
However, after a while both the SMART tools and (!) ZFS complained about faulty disks (disks 3 and 4) and the pool became degraded, but after a reboot everything was fine again. Now, four days later (the server has just been running idle), the pool is degraded again (all four disks), yet the SMART status is "passed" for all of them.
After another reboot, disk 1 (the youngest one) is marked as FAULTED, disks 2 and 4 as DEGRADED, and disk 3 is ONLINE.
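In case it helps to see what I'm basing this on, these are roughly the checks I've been running after each event (just a sketch; the /dev/sdX device names below are placeholders rather than my actual device paths):

```
# Pool health plus per-disk read/write/checksum error counters
zpool status -v

# Overall SMART verdict and the full attribute table for one disk
smartctl -H /dev/sda
smartctl -a /dev/sda

# Long SMART self-test on a suspect disk (takes several hours on a 4 TB drive)
smartctl -t long /dev/sda
```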
Am I simply seeing issues because of the age of my disks, so that adding four new drives would solve this, or is there anything specific to consider when using ZFS on an HPE MicroServer Gen10?
Any guidance / help would be highly appreciated.
best,
Mat