EXT3-fs error / unable to read inode block

informant

Hi all,

We have an error during an automatic backup.
Error message:
error-screen.png

Messages after the hard reboot:
error-screen-2.png
error-screen-3.png

In the log we found the following entries:
Code:
Aug 5 21:15:02 andromeda vzdump[171161]: INFO: Starting Backup of VM 4131 (openvz)
Aug 5 21:15:05 andromeda kernel: EXT3-fs: barriers disabled
Aug 5 21:15:05 andromeda kernel: kjournald starting. Commit interval 5 seconds
Aug 5 21:15:05 andromeda kernel: EXT3-fs (dm-3): using internal journal
Aug 5 21:15:05 andromeda kernel: ext3_orphan_cleanup: deleting unreferenced inode 98977913
Aug 5 21:15:05 andromeda kernel: ext3_orphan_cleanup: deleting unreferenced inode 99860500
...
Aug 5 21:15:07 andromeda kernel: ext3_orphan_cleanup: deleting unreferenced inode 104726578
Aug 5 21:15:07 andromeda kernel: ext3_orphan_cleanup: deleting unreferenced inode 104726577
Aug 5 21:15:07 andromeda kernel: EXT3-fs (dm-3): 168 orphan inodes deleted
Aug 5 21:15:07 andromeda kernel: EXT3-fs (dm-3): recovery complete
Aug 5 21:15:07 andromeda kernel: EXT3-fs (dm-3): mounted filesystem with ordered data mode
Aug 5 21:17:01 andromeda /USR/SBIN/CRON[172769]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 5 21:21:16 andromeda kernel: device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.
Aug 5 21:21:16 andromeda kernel: EXT3-fs error (device dm-3): ext3_get_inode_loc: unable to read inode block - inode=242614277, block=970457090
Aug 5 21:21:16 andromeda kernel: Buffer I/O error on device dm-3, logical block 0
Aug 5 21:21:16 andromeda kernel: lost page write due to I/O error on dm-3
Aug 5 21:21:16 andromeda kernel: EXT3-fs (dm-3): I/O error while writing superblock
...
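
For reference, while the backup is still running the fill level of the vzdump snapshot can be watched from another shell. This is only a sketch; the LV name pve/vzsnap-andromeda-0 is taken from the output below and the 10-second interval is arbitrary.
Code:
# watch the copy-on-write usage of the vzdump snapshot;
# once Data% hits 100 the kernel invalidates it ("Unable to allocate exception")
watch -n 10 'lvs -o lv_name,lv_attr,origin,data_percent pve/vzsnap-andromeda-0'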

vgs and lvs report ("Eingabe-/Ausgabefehler" is German for "input/output error"):
Code:
root@andromeda:~# vgs
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4846774059008: Eingabe-/Ausgabefehler
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4846774116352: Eingabe-/Ausgabefehler
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   4   1 wz--n- 4,55t 8,00g
root@andromeda:~# lvs
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4846774059008: Eingabe-/Ausgabefehler
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4846774116352: Eingabe-/Ausgabefehler
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 0: Eingabe-/Ausgabefehler
  /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4096: Eingabe-/Ausgabefehler
  LV                 VG   Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  data               pve  owi-aos-  4,41t
  root               pve  -wi-ao-- 96,00g
  swap               pve  -wi-ao-- 31,00g
  vzsnap-andromeda-0 pve  Swi-I-s-  8,00g      data   100.00
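
The attributes (Swi-I-s-) and the 100.00 in the Data% column suggest the snapshot was invalidated because its copy-on-write space ran full. If needed, that can also be confirmed at the device-mapper level; a minimal check, assuming the usual dm name mapping for /dev/pve/vzsnap-andromeda-0:
Code:
# an invalidated snapshot reports "Invalid" instead of the usual
# <allocated>/<total> sector counters
dmsetup status pve-vzsnap--andromeda--0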

After the reboot, vgs and lvs reported the same; after starting a backup manually they show:
Code:
root@andromeda:~# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   3   0 wz--n- 4,55t 16,00g
root@andromeda:~# lvs
  LV   VG   Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  data pve  -wi-ao--  4,41t
  root pve  -wi-ao-- 96,00g
  swap pve  -wi-ao-- 31,00g

Hard disk and memory (RAM) are OK. What can we do or check here? What kind of error is this? Do you have an idea or a solution for this, please?

thanks and regards
 
Hi!

> Aug 5 21:21:16 andromeda kernel: device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception
> ...
> root@andromeda:~# vgs
> /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4846774059008: Eingabe-/Ausgabefehler
> /dev/pve/vzsnap-andromeda-0: read failed after 0 of 4096 at 4846774116352: Eingabe-/Ausgabefehle

It seems to me that the LVM snapshot used for the backup somehow got full and is now getting in the way. To confirm this, run lvdisplay on the host and check whether any snapshots are in the INACTIVE state. My guess is that simply deleting the inactive LVM snapshots will make at least some of those errors go away. I think deleting them can't make matters worse, since inactive snapshots are of no use anyway.


Best regards, Imre
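
A minimal sketch of that check and cleanup, assuming the leftover is the vzsnap-* volume from the first post (names may differ):
Code:
# list all LVs in the pve VG; a broken snapshot shows an INACTIVE snapshot status
lvdisplay pve
# remove the stale vzdump snapshot by hand (lvremove asks for confirmation)
lvremove /dev/pve/vzsnap-andromeda-0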
 
Hi, and thanks for the replies,

@ioo, right, but the backup job cleans up the snapshot before it starts a new backup.
@RobFantini, on the local server we don't use NFS, but the backup is sent/saved to a mounted NFS storage. Any ideas how to solve this?

regards
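
One possible avenue, not a confirmed fix: since the 8 GB snapshot filled up during the backup window, giving vzdump a larger snapshot may help. On OpenVZ-era Proxmox the snapshot size can be set with the size option in /etc/vzdump.conf (value in MB; 16384 here is only an example, and the volume group needs enough free space for it):
Code:
# /etc/vzdump.conf
# LVM snapshot size in MB that vzdump allocates for its temporary snapshot
size: 16384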
 
