"VZDQ: Tried to clean..." No solution but changing to ext3?

juanmaria

Hi,

I'm suffering from the problem:
Code:
Dec  5 03:00:11 host01 kernel: VZDQ: Tried to clean orphans on qmblk with 1 state
Dec  5 03:00:11 host01 kernel: BUG: Quota files for 103 are broken: no quota engine running
Dec  5 03:00:11 host01 kernel: EXT4-fs (dm-1): ext4_orphan_cleanup: deleting unreferenced inode 48895662
while I'm backing up my VMs by snapshot. Those kernel log entries are usually accompanied by random crashes of container 103.
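
For reference, the backups are plain vzdump runs in snapshot mode against that container; roughly something like this (the dump directory is just an example, not my exact command):
Code:
# Back up container 103 using an LVM snapshot of the underlying volume
vzdump 103 --mode snapshot --dumpdir /var/lib/vz/dump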

I've read some posts about this problem on this forum, but I've seen no solution other than reinstalling my host and formatting it with ext3 (right now my filesystems are ext4).

Can anyone confirm that there is no other way to solve this problem? My host is in production and this operation would be risky and painful.

Thanks in advance.

Juan María.
 
Hi again,

I'm willing to reinstall my server and format it with ext3, but I would like to be sure that this will solve the problem, because I would have to work overnight to do it, not to mention the risks that an operation like that involves.

Is there anyone who has had this same problem, or who knows whether it will be solved by changing to ext3?

Thank you.
 
I suggest you try this in a test environment. If you have no spare hardware, just install Proxmox VE inside Proxmox VE (as a KVM guest) and run the tests there.
 
Hi Tom,

Creating a test environment similar to the one I've got is not an easy task. I'll try it, but the question is:

Is there any approach to my problem other than installing Proxmox on ext3?

Because if there is no other solution I should just do it; I cannot afford my OpenVZ containers crashing during snapshot operations.
 
If even you are not able to reproduce it easily, how could others do it and tell whether the change to ext3 will help in your case?

Generally, we always recommend ext3 (it is still the default for Proxmox VE). Who told you that ext4 is the way to go?
 
Well, I thought that since others have suffered from the same problem there would be some information about whether the ext3 change solved it.

I think I formatted it as ext4 because the OVH control panel proposed it as the default, or because I saw it in some document, but right now I'm not sure.
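
A quick df -T against the storage mount point shows what it is really formatted with (/var/lib/vz is the standard Proxmox VE location for container storage):
Code:
# Print the filesystem type backing the container/VM storage
df -T /var/lib/vz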

But, anyway, if you do recommend ext3 I'll go for ext3.
 
Hi,

I did reinstall my system and formatted my filesystems with ext3. My OpenVZ containers no longer crash, but now it is my snapshots that fail from time to time.

In other words, I solved the
Code:
VZDQ: Tried to clean orphans on qmblk with 1 state
BUG: Quota files for 103 are broken: no quota engine running

But I got the
Code:
device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.
Buffer I/O error on device dm-2, logical block 214008322
lost page write due to I/O error on dm-2
JBD: I/O error detected when updating journal superblock for dm-2.
Buffer I/O error on device dm-2, logical block 214008322
lost page write due to I/O error on dm-2
JBD: I/O error detected when updating journal superblock for dm-2.
Buffer I/O error on device dm-2, logical block 0
lost page write due to I/O error on dm-2
EXT3-fs (dm-2): I/O error while writing superblock
EXT3-fs: barriers disabled

Problem as described in: http://188.165.151.221/threads/6783-strange-things-on-vzdump

I'm going to investigate a little and, maybe, continue in another thread.
 
read "man vzdump"

the file is /etc/vzdump.conf
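
The setting you are after is the snapshot size; a minimal sketch, assuming the default 1024 MB is what fills up during your backups (4096 is only an example value):
Code:
# /etc/vzdump.conf
# size: space reserved for the LVM snapshot, in MB (default 1024).
# If the snapshot's copy-on-write area fills up mid-backup, the snapshot
# is invalidated, which matches the "Unable to allocate exception" error.
size: 4096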
 
You ran out of LVM snapshot space.

I've been having this same problem for years now without a solution; now I understand I need to increase my snapshot size. What is the default snapshot size? What is the largest I can make it with the default Proxmox installation, and is there some way of seeing how much space is left for snapshots? Also, for the record, I am running ext4 on my pve-data because it's on an SSD and I was told that's the way to go...

/dev/mapper/pve2-data on /var/lib/vz type ext4 (rw,noatime,nodiratime,discard,nodelalloc)


This is my setup:

# pveversion --verbose

pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-12-pve: 2.6.32-68
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1


# pvdisplay

--- Physical volume ---
PV Name /dev/sdb1
VG Name pve2
PV Size 476.94 GiB / not usable 1.34 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 122096
Free PE 4096
Allocated PE 118000
PV UUID 1xqMdw-BBuH-TfHe-GNsz-STxa-eGPW-itDej5

--- Physical volume ---
PV Name /dev/sda2
VG Name pve
PV Size 931.01 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 238339
Free PE 4095
Allocated PE 234244
PV UUID JltHvm-0ZJG-VTWL-VF6T-SnM9-GCem-CcIZ8Y


# lvdisplay pve

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID AjDvgF-255u-Hdcl-M1Z3-t0XS-p6jS-wp4cuY
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 23.00 GiB
Current LE 5888
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID 6A3QS6-pbQg-S4Yz-UpvT-6fL7-nTjh-LIJu0o
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/backup
LV Name backup
VG Name pve
LV UUID IsWpB8-UAe0-rDeH-egsD-Fl6F-MdSx-raOcVf
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 796.02 GiB
Current LE 203780
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3


# lvdisplay pve2

--- Logical volume ---
LV Path /dev/pve2/data
LV Name data
VG Name pve2
LV UUID DMN3mt-gHbc-Dyle-XlCR-0Enu-wHdd-exdOWM
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 460.94 GiB
Current LE 118000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
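
If I read the pvdisplay output above correctly, the room left for a snapshot is Free PE x PE Size: pve2 has 4096 x 4 MiB = 16 GiB free and pve has 4095 x 4 MiB, roughly 16 GiB as well. The same figure can be read straight from the volume groups (standard LVM2, shown here just as a sketch):
Code:
# Free space per volume group = what LVM can still hand to a snapshot
# (Free PE x PE Size, e.g. pve2: 4096 x 4 MiB = 16 GiB)
vgs -o vg_name,vg_size,vg_free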
 
