Corrupted filesystem after reinstall of virtual machines

timetraxx

New Member
Feb 3, 2014
Hi all, yesterday evening our customer and I reinstalled several VMs on our 4-node Proxmox cluster.
After about 6 hours the new VMs had corrupt ext4 filesystems. We had used the IDE disk driver instead of the VIRTIO driver.
I ran a lot of fsck sessions on the VMs, but there was no way to repair the filesystems. We reinstalled the VMs again, this time with the VIRTIO driver for the disks.
Is it possible that the IDE driver is not a good idea for virtualizing systems on Proxmox?

The following entries show up in /var/log/messages:
EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 2093069
EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 2093069
EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 2093069
EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 2093069
EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 2093069
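
In case it matters, the fsck attempts looked roughly like this, booted from a rescue ISO so the filesystem is offline; the VG/LV names are the RHEL 6 defaults and only an example:

# inside the rescue environment of the affected guest
vgscan                                  # find the guest's volume groups
vgchange -ay                            # activate them
fsck.ext4 -f -y /dev/VolGroup/lv_root   # check/repair the root LV (the dm-0 from the error messages)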

My Actual Setup:

proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Storage: MSA 2000, Fibre Channel multipath (all 4 nodes are connected to the MSA)

OS Type VM:
RHEL 6.5
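
For reference, this is roughly how the multipath and LVM view can be checked on each node (commands only):

# on each Proxmox node
multipath -ll                  # all paths to the MSA should be active/ready
pvs -o pv_name,vg_name         # the shared VG should sit on the multipath device, not on a single sdX path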
 
Hi,
your input is hard to read. If you connect to the forum via https (which is a good idea), use the standard editor (Settings -> General Settings).

For Linux, both IDE and virtio are very stable. I assume it has something to do with your underlying storage (like an ext volume that is mounted on more than one node at the same time).
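
If you want to move an existing disk from IDE to virtio, something like the following should work; the VM ID, storage and volume names are only placeholders, and a RHEL guest needs the virtio drivers in its initramfs before it will boot from virtio:

# example only - adjust VM ID, storage and volume name
qm set 101 --delete ide0                        # detach the disk from the IDE bus
qm set 101 --virtio0 yourstorage:vm-101-disk-1  # re-attach the same volume as virtio0
qm set 101 --boot c --bootdisk virtio0          # make sure the VM still boots from it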

Udo
 
Hi Udo,

thanks for the editor tip :-)

We have connected all 4 nodes to the storage via Fibre Channel multipath.
We added the LVM volume group under Datacenter with the following options (the resulting storage.cfg entry is sketched below):

Nodes: All
Enable: true
Shared: true
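
The corresponding entry in /etc/pve/storage.cfg should look roughly like this; storage and VG names are placeholders and the exact formatting the GUI writes may differ slightly:

# /etc/pve/storage.cfg - sketch only, names are placeholders
# (no "nodes" line = available on all nodes, which is what "Nodes: All" means)
lvm: msa2000
        vgname vg_msa
        content images
        shared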

Do you mean that the "Nodes: All" option could be a problem?

Regards

Martin
 
Hi Martin,
no "Nodes All" is the right config (if all nodes has access to the storage).

Strange, I use lvm on different storages without trouble (drbd, FC) but without multipath.
Perhaps, there is an problem (do you mask the single devices in lvm.conf?).
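
If not, a filter along these lines in /etc/lvm/lvm.conf on every node makes sure LVM only scans the multipath maps and the local system disk, never the raw sdX paths; the patterns are only an example and have to match your local disk layout:

# /etc/lvm/lvm.conf, devices { } section - example only
filter = [ "a|/dev/mapper/.*|", "a|/dev/sda.*|", "r|.*|" ]
# afterwards verify with: pvs -o pv_name,vg_name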

Can you try with single-path on one node?

Udo
 
Hi Udo,

yes, the devices are masked.
I have no chance to switch from multipath to single-path here, but I'll take that point and test it in our lab.

I made some checks, but I can see nothing that could be a problem ...
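
To give an idea, the checks were along these lines (commands only, device names differ per node):

pvs -o pv_name,vg_name                        # which devices the shared VG really sits on
dmsetup ls --tree                             # LV -> multipath map -> path relationship
grep -i error /var/log/messages | tail -n 20  # any I/O or SCSI errors on the host side?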

-> VoDo
 
Hi,
one thing left to test: can you switch the cache setting of the VM disk to writethrough? (Power-cycle the VM afterwards.)
For LVM storage with a block size > 512 it is necessary anyway.
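
Something like this should do it; the VM ID and disk entry are placeholders, so check the current line with "qm config" first:

qm config 101                                                      # shows the current disk line, e.g. virtio0: ...
qm set 101 --virtio0 yourstorage:vm-101-disk-1,cache=writethrough
qm shutdown 101 && qm start 101                                    # full power cycle so the new cache mode takes effect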

Udo