File system errors during backup job

newacecorp

Hello,

We're running a weekly scheduled backup on a 2-node cluster (both running Linux version 2.6.32-14-pve; pve-manager/2.1/f32f3f46) and consistently receive the following file system errors whenever the backup runs. No file system errors are reported during the seven days between backups.

EXT3-fs (dm-3): using internal journal
ext3_orphan_cleanup: deleting unreferenced inode 4268037
ext3_orphan_cleanup: deleting unreferenced inode 4268036
ext3_orphan_cleanup: deleting unreferenced inode 4268035
ext3_orphan_cleanup: deleting unreferenced inode 4268034
ext3_orphan_cleanup: deleting unreferenced inode 4268033
ext3_orphan_cleanup: deleting unreferenced inode 2296766
ext3_orphan_cleanup: deleting unreferenced inode 2296759
ext3_orphan_cleanup: deleting unreferenced inode 2296755
ext3_orphan_cleanup: deleting unreferenced inode 4571462
ext3_orphan_cleanup: deleting unreferenced inode 2060705
ext3_orphan_cleanup: deleting unreferenced inode 2060700
ext3_orphan_cleanup: deleting unreferenced inode 2060699
ext3_orphan_cleanup: deleting unreferenced inode 2060698
ext3_orphan_cleanup: deleting unreferenced inode 2060696
EXT3-fs (dm-3): 14 orphan inodes deleted
EXT3-fs (dm-3): recovery complete
EXT3-fs (dm-3): mounted filesystem with ordered data mode


We're backing up both KVM qcow2 images (4 running images) and OpenVZ containers (2 running containers). The above file system errors repeat twice (identically, a few minutes apart), which leads me to believe they are triggered during the backup of the OpenVZ containers.
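One way to confirm that timing is to line the kernel messages up against the backup window (a rough sketch; the log paths are the standard Debian locations, and the backup job's start/finish times come from its task log):

# Timestamps of the ext3 messages; the rotated log catches last week's run.
grep -h 'EXT3-fs (dm-3)\|ext3_orphan_cleanup' /var/log/kern.log /var/log/kern.log.1

# If both bursts fall inside the part of the backup window where the two
# containers are dumped, that points at the container backups.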

In addition, we only see these errors on one of the two machines; the other machine runs only KVM qcow2 images and no OpenVZ containers at all. We currently do not use shared storage for either the KVM images or the OpenVZ containers. We're backing up to an internal SATA HDD attached to each system as /dev/sdb1 (not dm-3).

Can anyone shed any light on why this may be occurring? Is the "dm-3" volume created to contain the snapshot and then removed after it has been backed up?
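For reference, the mapping between dm-3 and a logical volume can be checked with the standard device-mapper/LVM tools while the backup is running (just a quick sketch):

# List device-mapper targets with their (major, minor) numbers;
# dm-3 is the entry whose minor number is 3.
dmsetup ls

# Same mapping from the LVM side: the LV whose lvdisplay output shows
# "Block device 253:3" is the volume behind dm-3.
lvdisplay | grep -B 20 'Block device 253:3' | grep 'LV Name'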

I'm concerned that the file system errors indicate a problem with the underlying storage (which is HW RAID-1 SAS on a Dell PERC 6/i).

Regards,

Stephan.
 
This is normal and nothing to worry about; just ignore it.
 
I can't ignore this. All VMs are offline when this error appears in the log. What can I do? The error comes when I make backups of the VMs...

VMs are offline? Not because of these messages.

The reason for this must be something else. Please post the full backup log.
 
I use my own backup script. The script creates an LVM snapshot of all VMs and moves all the data with rsync to my other nodes (roughly like the sketch below the list).

The backup rsync runs from node to node:
Node1 > Node2
Node2 > Node3
Node3 > Node4
Node4 > Node5
Node5 > Node1
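
Stripped down, the script does roughly this (a simplified sketch, not the real script; the LV names, snapshot size and target host are placeholders):

#!/bin/bash
set -e
TARGET=node2                 # next node in the ring (placeholder)
SNAP=vmsnap
MNT=/mnt/$SNAP

# Snapshot of the VM data volume; it must be large enough to absorb all
# writes that hit the origin while the snapshot exists, otherwise the
# snapshot is invalidated and reads from it fail.
lvcreate --snapshot --size 20G --name "$SNAP" /dev/pve/data

mkdir -p "$MNT"
mount /dev/pve/"$SNAP" "$MNT"

# Copy the frozen view of the data to the next node.
rsync -a --delete "$MNT"/ root@"$TARGET":/backup/$(hostname)/

# Unmount and remove the snapshot so it doesn't keep filling up.
umount "$MNT"
lvremove -f /dev/pve/"$SNAP"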

The problems come when the backup runs. Three, four, sometimes five backups go through fine, then the next backup produces errors and the complete file system ends up read-only. That's the problem.

I use a Dell PERC 5/i with battery and 2 SATA HDDs (WD RAID Edition (REL4), 1 TB) in hardware RAID 1.

fstab
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=xxxx /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
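
When it goes read-only again, a few quick checks usually show what actually triggered it (a rough sketch; nothing here is specific to this setup):

# Which mounts are currently read-only?
grep ' ro[ ,]' /proc/mounts

# The kernel log names the real cause (I/O error, aborted journal,
# full or invalidated snapshot) right before the remount.
dmesg | grep -i 'ext3\|i/o error\|aborted journal' | tail -n 40

# Snapshot fill level: an old-style LVM snapshot that hits 100% is
# invalidated, and anything reading from it starts failing.
lvs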

vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7812
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 930.50 GiB
PE Size 4.00 MiB
Total PE 238207
Alloc PE / Size 238207 / 930.50 GiB
Free PE / Size 0 / 0
VG UUID O9qBAX-OjMr-KMDs-0TAf-bKcW-8O1F-KGH8R6

lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID mH07N9-FBhR-1FVv-RsXF-Y7rn-Dv8Z-WVDO8l
LV Write Access read/write
LV Creation host, time proxmox, 2013-04-14 01:58:55 +0200
LV Status available
# open 1
LV Size 15.00 GiB
Current LE 3840
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID i3LwOg-WWjz-jU07-W1LS-EUCP-Kelj-kg7Glw
LV Write Access read/write
LV Creation host, time proxmox, 2013-04-14 01:58:55 +0200
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/data
LV Name data
VG Name pve
LV UUID 13NJY5-eBK4-XzjN-hQHx-2uJH-BmEe-pmWg1L
LV Write Access read/write
LV Creation host, time proxmox, 2013-04-14 01:58:55 +0200
LV Status available
# open 1
LV Size 720.00 GiB
Current LE 184320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Path /dev/pve/snap-data
LV Name snap-data
VG Name pve
LV UUID 1NoEt6-X8Ig-T8db-7U8m-4dbn-xAe2-jPqVIb
LV Write Access read/write
LV Creation host, time vsh001, 2013-07-08 16:00:08 +0200
LV Status available
# open 0
LV Size 99.50 GiB
Current LE 25471
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
 
