I recently started seeing an "io error" status on my OpenMediaVault VM within Proxmox. The OpenMediaVault VM has a ZFS-backed drive attached to it (via Proxmox) that it exposes as a network share. When I detach that ZFS drive from the VM, OpenMediaVault runs fine, which leads me to believe the issue is coming from the ZFS drive.
I set up ZFS in Proxmox as a raidz3 pool. As mentioned above, I then attached the ZFS storage as a drive in my OpenMediaVault VM, giving almost all of the space to OpenMediaVault. The last time I was able to view the drive in OpenMediaVault, it had more than 5 TiB of free space, so it is nowhere near full from OpenMediaVault's point of view. I ran a zpool scrub, but it didn't find any errors. However, Proxmox is showing the storage at 100% usage: 100.00% (10.56 TiB of 10.56 TiB).
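For context, I created the pool and attached the VM disk through the Proxmox web UI. As far as I understand it, the equivalent CLI steps would look roughly like the following; these are my approximation of what the GUI does, the disk list is abbreviated (there are 25 disks in the pool), and the size and SCSI slot are illustrative rather than exact:
Code:
# Create the raidz3 pool from the physical disks (only the first few shown; the rest are omitted here)
zpool create D2700 raidz3 wwn-0x5000c500287054e3 wwn-0x5000c5003355b4af wwn-0x5000c5001d5fc1db wwn-0x5000c500337b562f

# Register the pool as ZFS storage in Proxmox
pvesm add zfspool D2700 --pool D2700

# Allocate nearly all of the pool as a virtual disk for the OpenMediaVault VM (VM ID 100; size in GiB, illustrative)
qm set 100 --scsi1 D2700:10800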
I ran zpool list, zfs list, and zpool status, but they all look normal to me; the output is below. The "AVAIL" column in the zfs list output does seem low (about 3.46M), though. Could that be the cause of the issue? Is there some kind of overhead with ZFS that I need to account for? If so, how do I fix it, given that I can no longer access the drive contents via OpenMediaVault?
Code:
root@lab:~# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
D2700                10.6T  3.46M   307K  /D2700
D2700/vm-100-disk-0  10.6T  3.46M  10.6T  -
root@lab:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
D2700  13.6T  13.2T   437G        -         -   10%    96%  1.00x  ONLINE  -
root@lab:~# zpool status
pool: D2700
state: ONLINE
scan: scrub repaired 0B in 0 days 08:55:16 with 0 errors on Fri May 8 11:19:46 2020
config:
NAME                          STATE     READ WRITE CKSUM
D2700                         ONLINE       0     0     0
  raidz3-0                    ONLINE       0     0     0
    wwn-0x5000c500287054e3    ONLINE       0     0     0
    wwn-0x5000c5003355b4af    ONLINE       0     0     0
    wwn-0x5000c5001d5fc1db    ONLINE       0     0     0
    wwn-0x5000c500337b562f    ONLINE       0     0     0
    wwn-0x5000c500337b601b    ONLINE       0     0     0
    wwn-0x5000c500289fd313    ONLINE       0     0     0
    wwn-0x5000c5002870621f    ONLINE       0     0     0
    wwn-0x5000c500289fd83f    ONLINE       0     0     0
    wwn-0x5000c50028705b53    ONLINE       0     0     0
    wwn-0x5000c500287049cb    ONLINE       0     0     0
    wwn-0x5000c500289fcef7    ONLINE       0     0     0
    wwn-0x5000c500289d419b    ONLINE       0     0     0
    sdn                       ONLINE       0     0     0
    wwn-0x5000c500337b603f    ONLINE       0     0     0
    wwn-0x5000c50028705c93    ONLINE       0     0     0
    wwn-0x5000c50028a2bd43    ONLINE       0     0     0
    wwn-0x5000c5003356512b    ONLINE       0     0     0
    wwn-0x5000c500337468c7    ONLINE       0     0     0
    wwn-0x5000c50028a0c2d7    ONLINE       0     0     0
    wwn-0x5000c500289fd7ab    ONLINE       0     0     0
    sdv                       ONLINE       0     0     0
    sdw                       ONLINE       0     0     0
    wwn-0x5000c500289fd8f7    ONLINE       0     0     0
    wwn-0x5000c5001d530623    ONLINE       0     0     0
    wwn-0x5000c50033565a17    ONLINE       0     0     0
errors: No known data errors
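In case it helps, this is what I was planning to run next to see where the space has gone. The dataset names match the output above and the properties are standard ZFS ones, so hopefully someone can tell me which numbers matter here:
Code:
# Per-dataset space breakdown (snapshots vs. data vs. refreservation vs. children)
zfs list -o space -r D2700

# Any snapshots that might be pinning space
zfs list -t snapshot -r D2700

# The zvol's logical size vs. its reservation vs. what it actually consumes
zfs get volsize,refreservation,used,usedbyrefreservation,referenced,available D2700/vm-100-disk-0
If it turns out the zvol's reservation plus raidz overhead is what has eaten the last of the pool, I'm guessing something needs to be freed or reduced at the pool level before the VM can write again, but I'd rather get confirmation before changing any properties.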
Any help is greatly appreciated.