Hello everyone.
We are using the enterprise repo on a 10-node cluster running Proxmox 3.4 (latest version).
We're experiencing strange behaviour with GFS2: even on a brand-new, clean filesystem, we get a kernel panic when trying to delete a file.
Steps to reproduce the issue follow (VMFO07 is the only node logged into the SAN):
root@VMFO07:~# dpkg -l gfs2*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=======================-================-================-====================================================
un gfs2-tools <none> (no description available)
ii gfs2-utils 3.1.3-1 amd64 Global file system 2 tools
root@VMFO07:~# uname -a
Linux VMFO07 2.6.32-39-pve #1 SMP Fri May 8 11:27:35 CEST 2015 x86_64 GNU/Linux
root@VMFO07:~# /etc/init.d/open-iscsi start
Starting iSCSI initiator service: iscsid.
Setting up iSCSI targets:
Logging in to [iface: default, target: [snip], portal: 192.168.193.170,3260] (multiple)
Logging in to [iface: default, target: [snip], portal: 192.168.193.171,3260] (multiple)
Logging in to [iface: default, target: [snip], portal: 192.168.194.171,3260] (multiple)
Logging in to [iface: default, target: i[snip], portal: 192.168.194.170,3260] (multiple)
Login to [iface: default, target: [snip]: 192.168.193.170,3260] successful.
Login to [iface: default, target: [snip], portal: 192.168.193.171,3260] successful.
Login to [iface: default, target: [snip], portal: 192.168.194.171,3260] successful.
Login to [iface: default, target: [snip], portal: 192.168.194.170,3260] successful.
root@VMFO07:~# multipath -ll
[snip] dm-0 HP,MSA 1040 SAN
size=9.1T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:0:0 sdb 8:16 active ready running
|- 9:0:0:0 sdc 8:32 active ready running
|- 10:0:0:0 sdd 8:48 active ready running
`- 8:0:0:0 sde 8:64 active ready running
root@VMFO07:~# mkfs.gfs2 -V
gfs2_mkfs master (built Mar 15 2013 08:54:14)
Copyright (C) Red Hat, Inc. 2004-2010 All rights reserved.
root@VMFO07:~# mkfs.gfs2 -p lock_dlm -t ClusterFO:SANStorage -j 16 /dev/dm-0
Are you sure you want to proceed? [y/n]y
Device: /dev/dm-0
Blocksize: 4096
Device Size 9305.13 GB (2439283712 blocks)
Filesystem Size: 9305.13 GB (2439283710 blocks)
Journals: 16
Resource Groups: 9306
Locking Protocol: "lock_dlm"
Lock Table: "ClusterFO:SANStorage"
UUID: 1b7a065f-d126-a302-0ee9-c9682e5326f0
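As a quick sanity check on those numbers (plain shell arithmetic, nothing GFS2-specific): the 2439283712 four-KiB blocks mkfs.gfs2 reports match the 19514269696 512-byte sectors the kernel reports for each path in the log further down, so the multipath device is seen at its full size:

echo $(( 19514269696 * 512 ))   # SCSI view: 9991306084352 bytes
echo $(( 2439283712 * 4096 ))   # mkfs.gfs2 view: 9991306084352 bytes, identical
# ~9.1 TiB, matching the size shown by multipath -ll above
# (mkfs.gfs2's "9305.13 GB" is the same figure expressed in GiB)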
root@VMFO07:~# fsck.gfs2 -V
GFS2 fsck master (built Mar 15 2013 08:54:10)
Copyright (C) Red Hat, Inc. 2004-2010 All rights reserved.
root@VMFO07:~# fsck.gfs2 -f -y /dev/dm-0
Initializing fsck
Validating Resource Group index.
Level 1 rgrp check: Checking if all rgrp and rindex values are good.
(level 1 passed)
Starting pass1
Pass1 complete
Starting pass1b
Pass1b complete
Starting pass1c
Pass1c complete
Starting pass2
Pass2 complete
Starting pass3
Pass3 complete
Starting pass4
Pass4 complete
Starting pass5
Pass5 complete
gfs2_fsck complete
root@VMFO07:~# mount -t gfs2 /dev/dm-0 /mnt/SANStorage
root@VMFO07:~#
root@VMFO07:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=8238674,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=6593064k,mode=755)
[snip]
/dev/dm-0 on /mnt/SANStorage type gfs2 (rw,relatime,hostdata=jid=0)
root@VMFO07:~# cd /mnt/SANStorage
root@VMFO07:/mnt/SANStorage# touch testfile
root@VMFO07:/mnt/SANStorage# rm testfile
root@VMFO07:/mnt/SANStorage# ls
ls: cannot open directory .: Input/output error
root@VMFO07:/mnt/SANStorage#
At this point, if I try to unmount, the system hangs and I have to manually fence the node to get it running again.
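For completeness, the kernel messages below were pulled with standard tools right after the failed rm (e.g. via dmesg, or /var/log/kern.log under the default rsyslog setup):

dmesg | tail -n 100              # kernel ring buffer right after the failed rm
grep -i gfs2 /var/log/kern.log   # the same messages, as persisted by syslog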
Here are the logs:
Loading iSCSI transport class v2.0-870.
iscsi: registered transport (tcp)
iscsi: registered transport (iser)
scsi7 : iSCSI Initiator over TCP/IP
scsi8 : iSCSI Initiator over TCP/IP
scsi9 : iSCSI Initiator over TCP/IP
scsi10 : iSCSI Initiator over TCP/IP
scsi 7:0:0:0: Direct-Access HP MSA 1040 SAN G105 PQ: 0 ANSI: 5
scsi 9:0:0:0: Direct-Access HP MSA 1040 SAN G105 PQ: 0 ANSI: 5
sd 7:0:0:0: Attached scsi generic sg3 type 0
sd 9:0:0:0: Attached scsi generic sg4 type 0
sd 7:0:0:0: [sdb] 19514269696 512-byte logical blocks: (9.99 TB/9.08 TiB)
scsi 10:0:0:0: Direct-Access HP MSA 1040 SAN G105 PQ: 0 ANSI: 5
sd 9:0:0:0: [sdc] 19514269696 512-byte logical blocks: (9.99 TB/9.08 TiB)
sd 10:0:0:0: Attached scsi generic sg5 type 0
scsi 8:0:0:0: Direct-Access HP MSA 1040 SAN G105 PQ: 0 ANSI: 5
sd 10:0:0:0: [sdd] 19514269696 512-byte logical blocks: (9.99 TB/9.08 TiB)
sd 8:0:0:0: Attached scsi generic sg6 type 0
sd 8:0:0:0: [sde] 19514269696 512-byte logical blocks: (9.99 TB/9.08 TiB)
sd 7:0:0:0: [sdb] Write Protect is off
sd 7:0:0:0: [sdb] Mode Sense: fb 00 00 08
sd 9:0:0:0: [sdc] Write Protect is off
sd 9:0:0:0: [sdc] Mode Sense: fb 00 00 08
sd 7:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 10:0:0:0: [sdd] Write Protect is off
sd 10:0:0:0: [sdd] Mode Sense: fb 00 00 08
sd 9:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 8:0:0:0: [sde] Write Protect is off
sd 8:0:0:0: [sde] Mode Sense: fb 00 00 08
sd 10:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb:
sdc:
sdd: unknown partition table
sde: unknown partition table
unknown partition table
unknown partition table
sd 7:0:0:0: [sdb] Attached SCSI disk
sd 9:0:0:0: [sdc] Attached SCSI disk
sd 10:0:0:0: [sdd] Attached SCSI disk
sd 8:0:0:0: [sde] Attached SCSI disk
GFS2 (built Jun 24 2015 06:26:56) installed
GFS2: fsid=: Trying to join cluster "lock_dlm", "ClusterFO:SANStorage"
GFS2: fsid=ClusterFO:SANStorage.0: Joined cluster. Now mounting FS...
GFS2: fsid=ClusterFO:SANStorage.0: jid=0, already locked for use
GFS2: fsid=ClusterFO:SANStorage.0: jid=0: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=0: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=1: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=1: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=1: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=2: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=2: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=2: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=3: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=3: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=3: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=4: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=4: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=4: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=5: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=5: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=5: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=6: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=6: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=6: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=7: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=7: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=7: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=8: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=8: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=8: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=9: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=9: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=9: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=10: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=10: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=10: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=11: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=11: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=11: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=12: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=12: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=12: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=13: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=13: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=13: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=14: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=14: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=14: Done
GFS2: fsid=ClusterFO:SANStorage.0: jid=15: Trying to acquire journal lock...
GFS2: fsid=ClusterFO:SANStorage.0: jid=15: Looking at journal...
GFS2: fsid=ClusterFO:SANStorage.0: jid=15: Done
GFS2: fsid=ClusterFO:SANStorage.0: fatal: filesystem consistency error
GFS2: fsid=ClusterFO:SANStorage.0: inode = 1 529788
GFS2: fsid=ClusterFO:SANStorage.0: function = gfs2_dinode_dealloc, file = fs/gfs2/super.c, line = 1421
GFS2: fsid=ClusterFO:SANStorage.0: about to withdraw this file system
GFS2: fsid=ClusterFO:SANStorage.0: telling LM to unmount
GFS2: fsid=ClusterFO:SANStorage.0: withdrawn
Pid: 7059, comm: rm veid: 0 Not tainted 2.6.32-39-pve #1
Call Trace:
[<ffffffffa07cb5f8>] ? gfs2_lm_withdraw+0x128/0x160 [gfs2]
[<ffffffffa07a7880>] ? gfs2_glock_holder_wait+0x0/0x20 [gfs2]
[<ffffffffa07cb82d>] ? gfs2_consist_inode_i+0x5d/0x60 [gfs2]
[<ffffffffa07c7210>] ? gfs2_dinode_dealloc+0x60/0x1e0 [gfs2]
[<ffffffffa07ae819>] ? gfs2_glock_nq+0x269/0x400 [gfs2]
[<ffffffffa07c7861>] ? gfs2_delete_inode+0x281/0x530 [gfs2]
[<ffffffffa07c767a>] ? gfs2_delete_inode+0x9a/0x530 [gfs2]
[<ffffffffa07c75e0>] ? gfs2_delete_inode+0x0/0x530 [gfs2]
[<ffffffff811cd9a6>] ? generic_delete_inode+0xa6/0x1c0
[<ffffffff811cdb15>] ? generic_drop_inode+0x55/0x70
[<ffffffffa07c73c7>] ? gfs2_drop_inode+0x37/0x40 [gfs2]
[<ffffffff811cbed6>] ? iput+0xc6/0x100
[<ffffffff811c0546>] ? do_unlinkat+0x1d6/0x240
[<ffffffff811b3f8a>] ? sys_newfstatat+0x2a/0x40
[<ffffffff811c1c4b>] ? sys_unlinkat+0x1b/0x50
[<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
no_formal_ino = 1
no_addr = 529788
i_size = 0
blocks = 2
i_goal = 529788
i_diskflags = 0x00000000
i_height = 0
i_depth = 0
i_entries = 0
i_eattr = 0
GFS2: fsid=ClusterFO:SANStorage.0: gfs2_delete_inode: -5
gdlm_unlock 5,8157c err=-22
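As far as I can read the trace: the rm goes through gfs2_delete_inode into gfs2_dinode_dealloc, which trips the consistency check at fs/gfs2/super.c line 1421; the filesystem then withdraws, and the resulting -5 (EIO) is what surfaces as the "Input/output error" from ls above.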
Could this be caused by an older version of gfs2-utils? With the previous kernel we didn't have any issues.
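In case it's useful for comparison, this is what we can easily check on our side (standard apt/dpkg commands, nothing Proxmox-specific assumed):

apt-cache policy gfs2-utils   # which versions the enterprise repo offers
dpkg -l | grep pve-kernel     # which pve kernels are still installed, to retest on the previous one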
I understand that GFS2 isn't officially supported by Proxmox, but we really need a working clustered filesystem for our Proxmox environment.
Thank you in advance.