[SOLVED] "pve configuration filesystem not mounted" after creating a cluster

wdq

New Member
Jul 19, 2013
4
0
1
I tried to create a Proxmox cluster this afternoon following this tutorial. When I ran the pvecm create command I got the following error:

pve configuration filesystem not mounted

All of the virtual machines are still up and running perfectly, but the whole web interface is no longer accessible.

Does anyone have any ideas on how I can fix this? I would prefer to not have to reboot the whole system if that's possible.

Edit: I've been doing a little more research and I now have another question which is whether or not the pvecm create command cleared out the /dev/fuse or /etc/pve directory.

Edit 2: I fixed it by rebooting and then manually adding the config files back.
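For future readers, a note on what "adding the config files back" involves: /etc/pve is only a FUSE view served by pmxcfs, and the real data lives in an SQLite database. The sketch below only reports whether that backing store exists, so it is safe to run anywhere; the database path is the PVE default, and the backup path is just an example.

```shell
# /etc/pve is a FUSE mount served by pmxcfs; the actual data lives in an
# SQLite database at the path below (the PVE default location).
PMXCFS_DB=/var/lib/pve-cluster/config.db

if [ -f "$PMXCFS_DB" ]; then
    echo "backing store present: $PMXCFS_DB"
else
    echo "backing store missing: $PMXCFS_DB"
fi

# On a real node, back the database up before any repair, then remount
# /etc/pve by restarting the service (root required):
# cp "$PMXCFS_DB" /root/config.db.backup
# service pve-cluster restart
```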
 
Last edited:

dietmar

Proxmox Staff Member
Staff member
Apr 28, 2005
17,078
490
103
Austria
www.proxmox.com
What is the output of

# ls -l /etc/pve

Try to remount the filesystem if it is not mounted:

# service pve-cluster restart
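For anyone landing here later, that check can be scripted. This is a minimal sketch assuming a standard Linux /proc/mounts; the helper name is made up, and the restart command is only shown as a comment since it needs root on a real node.

```shell
# is_mounted is a hypothetical helper: pmxcfs shows up in /proc/mounts as a
# fuse filesystem mounted on /etc/pve when everything is healthy.
is_mounted() {
    grep -qs " $1 " /proc/mounts
}

if is_mounted /etc/pve; then
    echo "pmxcfs is mounted"
else
    echo "pmxcfs is not mounted"
    # On a real node, remount it (root required):
    # service pve-cluster restart
fi
```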
 

vadavo

Member
Dec 20, 2015
16
0
21
31
Hi,

We have the same problem after creating a cluster:

pve configuration filesystem not mounted

We can't restart the cluster service and can't reach the Proxmox web interface.
 
Aug 9, 2016
8
2
23
54
For me a reboot does not solve the problem. The network is up, /etc/pve is not available, and in the logs and from some commands I get:
ipcc_send_rec failed: Connection refused
 
Last edited:

patrick3

Member
Oct 4, 2016
6
1
21
68
I saw "SOLVED" in the title hoping to find the solution, but no-go. Reboots don't help.

I just installed a fresh PVE 4.3 setup as an LXC container (minimal Debian 8, then upgraded to PVE as per the wiki). No issues EXCEPT for this one, which seems to be plaguing others too.

I do NOT have any clustering set up, and no filesystems beyond the default LVM setup. Nothing odd. Fresh LVM with default Linux partitions.

root@proxmox:/# pvecm updatecerts --force
pve configuration filesystem not mounted

I can SSH from my PVE container into my Ubuntu host.
I cannot SSH from my Ubuntu host into my PVE container (it just keeps telling me permission denied; I cleared the certs and tried several times).

root@proxmox:/# pve-firewall stop
ipcc_send_rec failed: Connection refused
ipcc_send_rec failed: Connection refused
ipcc_send_rec failed: Connection refused

No errors in any log files that I can find.


My web admin interface is also a no-go. From the Ubuntu host it connects but does not seem to transfer any data.

root@proxmox:/# netstat -ant | grep 8006
tcp 0 0 0.0.0.0:8006 0.0.0.0:* LISTEN
tcp 180 0 10.0.3.111:8006 10.0.3.1:46384 CLOSE_WAIT


I thought these issues were unrelated, but I see that others are experiencing the same ones at the same time. More than just a coincidence.

Hopefully we can get this figured out soon!
 

patrick3

Member
Oct 4, 2016
6
1
21
68
After more playing around:

root@proxmox:/# service pve-cluster restart
Job for pve-cluster.service failed. See 'systemctl status pve-cluster.service' and 'journalctl -xn' for details.

root@proxmox:/# systemctl -l status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled)
Active: failed (Result: exit-code) since Tue 2016-10-04 09:39:43 UTC; 35s ago
Process: 6135 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=255)

Oct 04 09:39:43 proxmox pmxcfs[6135]: fuse: device not found, try 'modprobe fuse' first
Oct 04 09:39:43 proxmox pmxcfs[6135]: [main] crit: fuse_mount error: No such file or directory
Oct 04 09:39:43 proxmox pmxcfs[6135]: [main] notice: exit proxmox configuration filesystem (-1)
Oct 04 09:39:43 proxmox pmxcfs[6135]: [main] crit: fuse_mount error: No such file or directory
Oct 04 09:39:43 proxmox pmxcfs[6135]: [main] notice: exit proxmox configuration filesystem (-1)
Oct 04 09:39:43 proxmox systemd[1]: pve-cluster.service: control process exited, code=exited status=255
Oct 04 09:39:43 proxmox systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Oct 04 09:39:43 proxmox systemd[1]: Unit pve-cluster.service entered failed state.


root@proxmox:/# modprobe fuse
modprobe: ERROR: ../libkmod/libkmod.c:557 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-38-generic/modules.dep.bin'
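For readers hitting the same modprobe error: an LXC container shares the host's kernel and ships no modules of its own, so modprobe inside the container looks for the host kernel's module tree (4.4.0-38-generic here) and finds nothing. The fuse module has to be loaded on the host and /dev/fuse exposed to the container. A hedged sketch follows; the container name "proxmox" comes from this thread, the classic LXC config path is an assumption, and only the harmless device-node check is executable.

```shell
# On the Ubuntu HOST (root required) -- load fuse and expose the device to
# the container. 10:229 is the standard char device number for /dev/fuse.
# modprobe fuse
# cat >> /var/lib/lxc/proxmox/config <<'EOF'
# lxc.cgroup.devices.allow = c 10:229 rwm
# lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0
# EOF

# Inside the container, verify the device node exists:
if [ -e /dev/fuse ]; then echo "/dev/fuse present"; else echo "/dev/fuse missing"; fi
```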


root@proxmox:/# journalctl -xn
-- Logs begin at Tue 2016-10-04 08:14:14 UTC, end at Tue 2016-10-04 10:03:01 UTC. --
Oct 04 10:03:00 proxmox pveproxy[7251]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/HTTPServer.pm line 1626.
Oct 04 10:03:00 proxmox pveproxy[7252]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/HTTPServer.pm line 1626.
Oct 04 10:03:00 proxmox pveproxy[7253]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/HTTPServer.pm line 1626.
Oct 04 10:03:01 proxmox pve-ha-lrm[483]: ipcc_send_rec failed: Connection refused
Oct 04 10:03:01 proxmox pve-ha-lrm[483]: ipcc_send_rec failed: Connection refused
Oct 04 10:03:01 proxmox pve-ha-lrm[483]: ipcc_send_rec failed: Connection refused
Oct 04 10:03:01 proxmox pve-ha-crm[474]: ipcc_send_rec failed: Connection refused
Oct 04 10:03:01 proxmox pve-ha-crm[474]: ipcc_send_rec failed: Connection refused
Oct 04 10:03:01 proxmox pve-ha-crm[474]: ipcc_send_rec failed: Connection refused
Oct 04 10:03:01 proxmox cron[455]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)


root@proxmox:/# ls -alsR /etc/pve
/etc/pve:
total 8
4 drwxr-x--- 2 root root 4096 Oct 2 02:24 .
4 drwxr-xr-x 85 root root 4096 Oct 4 09:46 ..


So, it's looking for certs in a dir that doesn't exist.


I'm new to Proxmox and just getting into this thing. All of the manuals I've found so far list the config files in /etc/pve/, which is empty, so I'm not quite sure where to go from here. Any guidance is greatly appreciated!
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
6,736
1,177
164
modprobe: ERROR: ../libkmod/libkmod.c:557 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-38-generic/modules.dep.bin'

Seems like you are running a strange kernel. Could you post the output of "uname -a"?
 

patrick3

Member
Oct 4, 2016
6
1
21
68
Seems like you are running a strange kernel. Could you post the output of "uname -a"?

root@proxmox:/# uname -a
Linux proxmox 4.4.0-38-generic #57-Ubuntu SMP Tue Sep 6 15:42:33 UTC 2016 x86_64 GNU/Linux

My host is Ubuntu 16.04 LTS. PVE is an LXC container built from the default LXC image for Debian 8: Debian, Jessie, AMD64.

I know it's odd to run PVE like this, so here's my setup:

Home Virtual Environment: PVE 4.3 Hypervisor (working fine)
-Media Server VM
-Web Server VM
-Web Cache/DNS Cache/Print Server VM

Network:
Decent Netgear Home AP Router/Switch
Sonicwall TZ 210 Firewall

Web Server is in DMZ
Cache/Print Server in LAN
Isolated via Firewall settings.

Laptop (Ubuntu 16.04 LTS) (Where PVE exists):
-PVE 4.3 Hypervisor running as a container
Media Server VM, Web Server VM, SAMBA/CACHE server, eventually to be migrated/backed up as VMs within the PVE VM.


The idea is to setup a PVE VM on my laptop and set it up as new node so I can play with HA, Backups, and some of the more advanced and useful features of Proxmox between my laptop and the actual server. My home media server setup is likely going to be expanding to reach out and touch other networks and I want to play around with it locally on my laptop and my firewalled network before venturing down that road.


Not sure if the Ubuntu kernel is causing the issue. I can certainly use a full-blown hypervisor and give PVE its own kernel instead of just an LXC container, if you think that's likely to solve the problem?!

Thanks for the reply!
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
6,736
1,177
164
Not sure if the Ubuntu kernel is causing the issue. I can certainly use a full-blown hypervisor and give PVE it's own kernel instead of just a container with LXC if you think that's likely to solve the problem?!?

Thanks for the reply!

Yes. Running PVE in a container is not supported. Running PVE in a fully virtualized VM is (at least for testing purposes). Make sure to enable "nested virtualization" in whatever hypervisor you are using to run PVE, otherwise you won't really be able to use QEMU in PVE ;)
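A quick way to check the nested-virtualization state, sketched for an Intel host (the module is kvm_amd on AMD; the sysfs path is the standard location). The check only reads sysfs, so it is safe to run anywhere; the enable step is shown as a comment because it changes host configuration.

```shell
# Report whether the kvm_intel module has nested virtualization enabled.
NESTED_PARAM=/sys/module/kvm_intel/parameters/nested
if [ -r "$NESTED_PARAM" ]; then
    echo "kvm_intel nested: $(cat "$NESTED_PARAM")"
else
    echo "kvm_intel not loaded (or no nested parameter)"
fi

# To enable it persistently on the host (takes effect after the module is
# reloaded or the host reboots):
# echo "options kvm_intel nested=Y" > /etc/modprobe.d/kvm-nested.conf
```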
 

zetoniak

New Member
Oct 10, 2016
3
0
1
33
I had the same problem, and I could fix it by SSHing between both hosts using the hostname instead of the IP address.
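To make that fix concrete: PVE clustering expects each node's hostname to resolve to its cluster IP, typically via an /etc/hosts entry on every node. The sketch below only builds and prints an example entry; the IP, FQDN, and short name are placeholders, not values from this thread.

```shell
# Example /etc/hosts entry for a cluster node (all values are placeholders).
hosts_line="192.168.1.10 pve-node1.example.com pve-node1"
echo "$hosts_line"

# Append it on each node (root required), then SSH once by hostname so the
# host keys get exchanged:
# echo "$hosts_line" >> /etc/hosts
# ssh root@pve-node1
```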
 
