ZFS FileSystem error

yena

Renowned Member
Nov 18, 2011
Hello, I have this problem on a server with 2 SATA HDs and an ARECA 1200:

The server can't access the disks:

blk_update_request: I/O error, dev sda, sector 37568672
blk_update_request: I/O error, dev sda, sector 2242343324
blk_update_request: I/O error, dev sda, sector 603434343
arcmsr: executing bus reset eh... num_resets = 0, num_aborts = 20
sd 0:0:0:0 rejecting I/O to offline device

But the RAID volume is normal:
vsf info
# Name             Raid Name      Level    Capacity  Ch/Id/Lun  State
===============================================================================
1  ARC-1200-VOL#00  Raid Set # 00  Raid1+0  2000.0GB  00/00/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

And there is no error in the event log of the RAID card.

Do I have to do a scrub on the FS?
If yes, what is the best practice on ZFS? (It's my first time with ZFS.)

Thanks
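
For reference, starting and monitoring a scrub takes only a couple of commands; a minimal sketch, assuming a pool named rpool as in the output further down the thread:

zpool scrub rpool       # start the scrub; it runs online, in the background
zpool status -v rpool   # check progress and any errors found so far
zpool scrub -s rpool    # stop a running scrub if it has to be cancelled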
 
So, no scrub needed?
I have a 50% chance of picking the right one, because I don't know which hard disk has the problem.
Thanks
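
To narrow down which disk is at fault, SMART data can be read through the Areca controller; smartmontools supports this with the -d areca,N device type. A sketch, assuming the controller is reachable as /dev/sg0 (the device node and slot numbers may differ on your system):

smartctl -a -d areca,1 /dev/sg0   # SMART data for the disk in slot 1
smartctl -a -d areca,2 /dev/sg0   # SMART data for the disk in slot 2
# Watch Reallocated_Sector_Ct and Current_Pending_Sector for signs of bad sectors.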
 
Looks like one of the disks has bad sectors. Are you using the hardware card to do the RAID? With ZFS, that is a bad idea. If not, check this link:

https://www.forwardingplane.net/2014/03/replace-zfs-raidz1-disk/
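
The linked article covers a raidz1 pool, but the generic replacement procedure looks roughly like this; a sketch with hypothetical device names, not exact steps for this system:

# Take the failing disk out of service (device names are placeholders):
zpool offline rpool /dev/disk/by-id/ata-OLD_DISK
# Physically swap the drive, then resilver onto the replacement:
zpool replace rpool /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK
# Watch the resilver progress:
zpool status rpool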
I use the hardware card, an Areca 1200.

Now the scrub is running:

zpool status -v rpool
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub in progress since Tue Sep 6 19:43:20 2016
        71.6M scanned out of 842G at 1.43M/s, 167h21m to go
        0 repaired, 0.01% done
config:

        NAME    STATE   READ WRITE CKSUM
        rpool   ONLINE     0     0     0
          sda2  ONLINE     0     0     0

errors: Permanent errors have been detected in the following files:

        rpool/VPS/subvol-100-disk-1@vzdump:/var/www/vhosts/meteoindiretta.it/webcam.meteoindiretta.it/static/17524/2016/04/18/05_1.jpg
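
On a single-device vdev like this one there is no redundancy, so ZFS can detect the corruption but not repair it. The usual cleanup is to restore or remove the damaged data and then clear the error state; a sketch, noting that because the error here sits inside a snapshot, destroying that snapshot is one option (only if the vzdump snapshot is no longer needed):

zfs destroy rpool/VPS/subvol-100-disk-1@vzdump   # drop the snapshot holding the damaged block
zpool clear rpool                                # reset the logged error counters
zpool scrub rpool                                # re-scrub to confirm the errors are gone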
 
So you're running ZFS on top of hardware RAID, which is the worst situation for ZFS. Now you have ended up with file loss :-(

Please only use ZFS with a controller flashed to IT-mode firmware, so that there is no RAID-controller layer in between and ZFS can manage the real disks itself.
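
The goal is that zpool status shows the physical disks themselves rather than a single sdX device exported by a RAID controller. A minimal sketch of creating a mirror from two whole disks addressed by stable by-id paths (pool name and device ids are hypothetical):

zpool create tank mirror \
    /dev/disk/by-id/ata-DISK_SERIAL_1 \
    /dev/disk/by-id/ata-DISK_SERIAL_2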

Do you have off-site backups to restore your VMs from?
 
