FUSE error: looks like a read-only file system

cocconi

Renowned Member
Nov 5, 2009
Nouméa, New Caledonia
Hello,

I have a strange error with the FUSE-mounted filesystem; I can't do anything anymore without getting an error.
When I try to destroy a VM I get this error:
trying to aquire cfs lock 'file-user_cfg' ...TASK ERROR: pool cleanup failed: got lock request timeout


When I try to create a VM:
TASK ERROR: create failed - unable to open file '/etc/pve/nodes/blade3-3/qemu-server/108.conf.tmp.26084' - Device or resource busy

It looks as if the filesystem is full, but in fact it isn't. I've tried to unlock a VM, but I get the same errors; I've also rebooted and upgraded, without success.

If someone could help, please.
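For anyone hitting the same symptoms, a first check is whether /etc/pve is actually mounted read-only. A minimal sketch (the `is_readonly` helper is mine, not a Proxmox tool) that parses the mount options:

```shell
#!/bin/sh
# Report whether a mountpoint is mounted read-only, by parsing a
# /proc/mounts-style file (field 2 = mountpoint, field 4 = options).
# Usage: is_readonly MOUNTPOINT [MOUNTS_FILE]   (default /proc/mounts)
is_readonly() {
    mp=$1
    mounts=${2:-/proc/mounts}
    awk -v mp="$mp" '$2 == mp {
        n = split($4, opts, ",")
        for (i = 1; i <= n; i++)
            if (opts[i] == "ro") { print "ro"; exit }
        print "rw"; exit
    }' "$mounts"
}

# On the affected node:
#   is_readonly /etc/pve    # "ro" when pmxcfs has dropped to read-only
```

When /etc/pve goes read-only it usually means pmxcfs (the pve-cluster FUSE filesystem) has lost quorum, which matches the lock-timeout errors above.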

Below is the config:

I'm using Proxmox 2.1:
pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-39
pve-firmware: 1.0-15
libpve-common-perl: 1.0-27
libpve-access-control: 1.0-21
libpve-storage-perl: 2.0-18
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1

The mounted volumes:
/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
/dev/sdc1 on /boot type ext3 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)
172.16.10.18:/distrib on /mnt/pve/distrib type nfs (rw,vers=3,addr=172.16.10.18)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,default_permissions,allow_other)
none on /sys/kernel/config type configfs (rw)


With a 3-node cluster:
Version: 6.2.0
Config Version: 3
Cluster Name: MYCLUSTER
Cluster Id: 8658
Cluster Member: Yes
Cluster Generation: 140
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: blade3-3
Node ID: 3
Multicast addresses: 239.192.33.243
Node addresses: 172.16.10.13

Node Sts Inc Joined Name
1 M 140 2012-05-07 21:07:01 blade3-1
2 M 140 2012-05-07 21:07:01 blade3-2
3 M 136 2012-05-07 21:07:00 blade3-3


Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/pve-root 99083868 1221188 92829516 2% /
tmpfs 32981856 0 32981856 0% /lib/init/rw
udev 32972140 280 32971860 1% /dev
tmpfs 32981856 32212 32949644 1% /dev/shm
/dev/mapper/pve-data 395678084 34781472 360896612 9% /var/lib/vz
/dev/sdc1 506724 34556 446005 8% /boot
172.16.10.18:/distrib
3196008448 1720167936 1475840512 54% /mnt/pve/distrib
/dev/fuse 30720 24 30696 1% /etc/pve


Thanks
 
As far as I can see: yes, on each node:
/etc/init.d/cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Unfencing self... [ OK ]

I had these errors on every node of the cluster.
 
To solve this problem I had to do the following:
- edit /etc/hosts on each server and add the IP and name of each host
- /etc/init.d/cman stop ; /etc/init.d/pve-cluster stop
- /etc/init.d/pve-cluster start ; /etc/init.d/cman start

After that, everything went back to normal (all nodes green in the web interface), and the FUSE filesystem is no longer read-only. :)
Strange, because everything was working perfectly before.
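For reference, the entries I added amount to something like the sketch below. Only blade3-3's address (172.16.10.13) appears in the cman output above, so the addresses for blade3-1 and blade3-2 here are placeholders:

```
# /etc/hosts on every node -- each cluster node listed by IP and name.
# 172.16.10.11/.12 are placeholders; use the real node addresses.
127.0.0.1    localhost
172.16.10.11 blade3-1
172.16.10.12 blade3-2
172.16.10.13 blade3-3
```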

Thanks, Tom.
 
Thanks for the feedback.

The question is: why was /etc/hosts empty? It should be set correctly during installation.
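One quick way to spot that situation is to check whether the node's own name has an entry in /etc/hosts at all. A small sketch (the `hosts_ip` helper name is mine):

```shell
#!/bin/sh
# Look up the IP recorded for a hostname in an /etc/hosts-style file.
# Prints nothing when the name is missing -- the "empty /etc/hosts" case.
# Usage: hosts_ip NAME [HOSTS_FILE]   (default /etc/hosts)
hosts_ip() {
    awk -v name="$1" '$1 !~ /^#/ {
        for (i = 2; i <= NF; i++)
            if ($i == name) { print $1; exit }
    }' "${2:-/etc/hosts}"
}

# On each node, compare against the address cman reports:
#   hosts_ip "$(hostname)"    # e.g. 172.16.10.13 on blade3-3
```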
 
I just had the very same issue with a two-host mini-cluster: one box saw /etc/pve as read-only, so even vzdump backups of KVM-based virtual machines no longer worked (no locking available because of the read-only filesystem; OpenVZ-based VEs were backed up fine, though).

I had to restart cman and then pve-cluster, which made it work again, though somehow I don't feel as comfortable with this system as before.

pveversion -v
proxmox-ve-2.6.32: 3.4-150 (running kernel: 2.6.32-26-pve)
pve-manager: 3.4-3 (running version: 3.4-3/2fc72fee)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-24-pve: 2.6.32-111
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.4-3
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-32
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 