cannot access /etc/pve: Transport endpoint not connected

carlosho17

New Member
Dec 6, 2010
Hi,
I'm getting this error when running vzlist or anything else that accesses /etc/pve:

Unable to open /etc/pve/openvz/200.conf: Transport endpoint is not connected
Unable to open /etc/pve/openvz/201.conf: Transport endpoint is not connected
Unable to open /etc/pve/openvz/202.conf: Transport endpoint is not connected
Unable to open /etc/pve/openvz/203.conf: Transport endpoint is not connected


The containers kept running; I'm looking for a command to restore /etc/pve.
mount -o remount /etc/pve yields:

/bin/sh: /dev/fuse: Permission denied
cxxxxxxxxxxxx:~(12:15:43)> l -d /dev/fuse
crw-rw---- 1 root fuse 10, 229 Oct 24 17:50 /dev/fuse
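A remount can't work here: /etc/pve is not an fstab filesystem but a FUSE mount served by the pmxcfs daemon (the pve-cluster service), so the usual recovery is to detach the dead endpoint and restart the daemon. A minimal sketch, assuming the squeeze-era init script name:

```shell
# /etc/pve is a FUSE mount provided by pmxcfs, so "mount -o remount"
# (and writing to /dev/fuse by hand) won't help. Instead, detach the
# stale endpoint and restart the daemon that serves it:
umount -f -l /etc/pve            # force + lazy unmount of the dead mount
/etc/init.d/pve-cluster restart  # restarts pmxcfs

# Verify the mount is back:
mount | grep /etc/pve
```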




It's an installation based on squeeze with RAID-1, LVM, and ext4 for pve-data and pve-root.



pve-manager: 2.0-7 (pve-manager/2.0/de5d8ab1)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 2.0-46
pve-kernel-2.6.32-6-pve: 2.6.32-46
lvm2: 2.02.86-1pve1
clvm: 2.02.86-1pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-1
libqb: 0.5.1-1
redhat-cluster-pve: 3.1.7-1
pve-cluster: 1.0-9
qemu-server: 2.0-2
pve-firmware: 1.0-13
libpve-common-perl: 1.0-6
libpve-access-control: 1.0-1
libpve-storage-perl: 2.0-4
vncterm: 1.0-2
vzctl: 3.0.29-3pve2
vzdump: 1.2.6-1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.1-1

Here's the output of df -h

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve-root 92G 1.2G 86G 2% /
tmpfs 3.9G 0 3.9G 0% /lib/init/rw
udev 10M 252K 9.8M 3% /dev
tmpfs 3.9G 3.1M 3.9G 1% /dev/shm
/dev/mapper/pve-data 367G 12G 337G 4% /var/lib/vz
df: `/etc/pve': Transport endpoint is not connected
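That df error is the telltale sign of a stale FUSE mount: the mount point is still registered with the kernel, but the daemon behind it is gone, so every stat() fails with ENOTCONN. A small generic sketch for checking this state (works on any directory, not just /etc/pve):

```shell
#!/bin/sh
# check_dir: report whether a directory is reachable. A stale FUSE
# mount makes stat fail with "Transport endpoint is not connected"
# (ENOTCONN), so a plain stat distinguishes a live mount from a dead one.
check_dir() {
    if stat "$1" >/dev/null 2>&1; then
        echo "$1: reachable"
    else
        echo "$1: stale or missing"
    fi
}

check_dir /etc/pve   # on the affected node this reports "stale or missing"
```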

And /etc/fstab

proc /proc proc defaults 0 0
/dev/mapper/pve-root / ext4 noatime,relatime,errors=remount-ro 0 1
/dev/mapper/pve-data /var/lib/vz ext4 noatime,relatime 0 2
/dev/mapper/pve-swap none swap sw 0 0
/dev/sdc1 /media/cdrom0 udf,iso9660 user,noauto 0 0

And the output of mount

/dev/mapper/pve-root on / type ext4 (rw,noatime,relatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,noatime,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,default_permissions,allow_other)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)

Thanks for any clue
 

MichaelA

New Member
Jan 4, 2012
Samara, Russia
Same issue on version 2.0-18

And the directory /etc/pve is nearly empty:
root@node1b ~ # ls -l /etc/pve/
total 512
-r--r----- 1 root www-data 285 Jan 4 16:58 cluster.conf
lr-xr-x--- 1 root www-data 0 Jan 1 1970 local -> nodes/node1b
lr-xr-x--- 1 root www-data 0 Jan 1 1970 openvz -> nodes/node1b/openvz
lr-xr-x--- 1 root www-data 0 Jan 1 1970 qemu-server -> nodes/node1b/qemu-server

Where could the rest of the files have gone (they were there before adding this node to the cluster)?
And there is no way to create anything there: permission denied (as root, with no chattr +i, and a rw mount).

Restarting the cluster doesn't help.

Maybe this will help to figure it out:

root@node1b ~ # umount /etc/pve

root@node1b ~ # mount /dev/fuse -t fuse -o rw,nosuid,nodev,default_permissions,allow_other /etc/pve
/bin/sh: /dev/fuse: Permission denied
 

Faye

Member
Jan 2, 2012
/etc/pve is created by a userspace program, pmxcfs, which generates the data from /var/lib/pve-cluster/config.db.
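To see what pmxcfs actually has in that database, you can inspect it directly with the sqlite3 CLI. A read-only sketch; the table name `tree` is an assumption about the schema, so check `.schema` first, and work on a copy (or with pmxcfs stopped) to be safe:

```shell
# Inspect the backing store of /etc/pve. config.db is a SQLite
# database; the "tree" table name is an assumption -- verify the
# actual layout with .schema before relying on it.
sqlite3 /var/lib/pve-cluster/config.db '.schema'
sqlite3 /var/lib/pve-cluster/config.db 'SELECT name FROM tree;'
```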
 

MichaelA

New Member
Jan 4, 2012
Samara, Russia
I'll explain from the beginning.

I want to connect two nodes: node1a (v.2.0-18) and node1b (v.2.0-18)

What I've got at node1a:
root@node1a ~ # pvecm status
Version: 6.2.0
Config Version: 5
Cluster Name: nodesquad
Cluster Id: 55506
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: node1a
Node ID: 1
Multicast addresses: 239.192.216.171
Node addresses: 213.239.205.99


root@node1a ~ # pvecm nodes
Node Sts Inc Joined Name
1 M 12 2012-01-05 10:07:53 node1a


root@node1a ~ # ls -l /etc/pve
total 3.0K
lrwxr-x--- 1 root www-data 0 Jan 1 1970 qemu-server -> nodes/node1a/qemu-server
lrwxr-x--- 1 root www-data 0 Jan 1 1970 openvz -> nodes/node1a/openvz
lrwxr-x--- 1 root www-data 0 Jan 1 1970 local -> nodes/node1a
-rw-r----- 1 root www-data 1.7K Nov 21 10:31 pve-www.key
drwx------ 2 root www-data 0 Nov 21 10:31 priv
drwxr-x--- 2 root www-data 0 Nov 21 10:31 nodes
-rw-r----- 1 root www-data 451 Nov 21 10:31 authkey.pub
-rw-r----- 1 root www-data 119 Nov 21 10:31 vzdump.cron
-rw-r----- 1 root www-data 1.5K Nov 21 10:31 pve-root-ca.pem
-rw-r----- 1 root www-data 116 Nov 29 17:51 storage.cfg
-rw-r----- 1 root www-data 236 Jan 4 16:22 cluster.conf
What I've got at node1b:
root@node1b ~ # pvecm status
cman_tool: Cannot open connection to cman, is it running ?


root@node1b ~ # ls -l /etc/pve
total 2.0K
-rw-r----- 1 root www-data 451 Jan 5 13:22 authkey.pub
lrwxr-x--- 1 root www-data 0 Jan 1 1970 local -> nodes/node1b
drwxr-x--- 2 root www-data 0 Jan 5 13:22 nodes
lrwxr-x--- 1 root www-data 0 Jan 1 1970 openvz -> nodes/node1b/openvz
drwx------ 2 root www-data 0 Jan 5 13:22 priv
-rw-r----- 1 root www-data 1.5K Jan 5 13:22 pve-root-ca.pem
-rw-r----- 1 root www-data 1.7K Jan 5 13:22 pve-www.key
lrwxr-x--- 1 root www-data 0 Jan 1 1970 qemu-server -> nodes/node1b/qemu-server
-rw-r----- 1 root www-data 119 Jan 5 13:22 vzdump.cron
Now I try to add node1b to the cluster:
root@node1b ~ # pvecm add node1a.nodesquad.com
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
0c:46:2f:29:24:46:36:67:b1:ff:dd:ea:8d:f7:18:f2 root@node1b.nodesquad.com
The key's randomart image is:
+--[ RSA 2048]----+
| .* =.. |
| o * o o |
| o = . |
| + + |
| . S |
| . . . |
| . o o |
| *.o |
| .+.E.. |
+-----------------+
The authenticity of host 'node1a.nodesquad.com (213.239.205.99)' can't be established.
RSA key fingerprint is 89:3d:23:83:e3:30:c4:a4:19:0e:73:8a:82:8d:74:19.
Are you sure you want to continue connecting (yes/no)? yes
root@node1a.nodesquad.com's password:
copy corosync auth key
stopping pve-cluster service
Stopping pve cluster filesystem: pve-cluster.
backup old database
Starting pve cluster filesystem : pve-cluster.
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... /usr/share/cluster/cluster.rng:992: element ref: Relax-NG parser error : Reference PVEVM has no matching definition
/usr/share/cluster/cluster.rng:992: element ref: Relax-NG parser error : Internal found no define for ref PVEVM
Relax-NG schema /usr/share/cluster/cluster.rng failed to compile
[ OK ]
Waiting for quorum... Timed-out waiting for cluster
[FAILED]
cluster not ready - no quorum?
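For context: once node1b joins, the cluster expects 2 votes, and each node only sees itself (1 vote), so activity is blocked exactly as shown below. The usual cause is that multicast (here 239.192.216.171) does not pass between the nodes. A sketch of the standard workaround on the blocked node, using the PVE 2.x pvecm syntax:

```shell
# Temporarily lower the number of expected votes so this node regains
# quorum and /etc/pve becomes writable again. This is a workaround,
# not a fix -- the underlying problem is usually broken multicast
# connectivity between the cluster nodes.
pvecm expected 1

# Then re-check the membership state:
pvecm status
pvecm nodes
```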
And what I've got at node1a:
root@node1a ~ # pvecm status
Version: 6.2.0
Config Version: 5
Cluster Name: nodesquad
Cluster Id: 55506
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: node1a
Node ID: 1
Multicast addresses: 239.192.216.171
Node addresses: 213.239.205.99


root@node1a ~ # pvecm nodes
Node Sts Inc Joined Name
1 M 12 2012-01-05 10:07:53 node1a
And on node1b:
root@node1b ~ # pvecm status
Version: 6.2.0
Config Version: 6
Cluster Name: nodesquad
Cluster Id: 55506
Cluster Member: Yes
Cluster Generation: 4
Membership state: Cluster-Member
Nodes: 1
Expected votes: 2
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
Active subsystems: 1
Flags:
Ports Bound: 0
Node name: node1b
Node ID: 2
Multicast addresses: 239.192.216.171
Node addresses: 78.46.84.144


root@node1b ~ # pvecm nodes
Node Sts Inc Joined Name
1 X 0 node1a
2 M 4 2012-01-05 13:35:23 node1b


root@node1b ~ # ls -l /etc/pve
total 512
-r--r----- 1 root www-data 285 Jan 5 13:35 cluster.conf
lr-xr-x--- 1 root www-data 0 Jan 1 1970 local -> nodes/node1b
lr-xr-x--- 1 root www-data 0 Jan 1 1970 openvz -> nodes/node1b/openvz
lr-xr-x--- 1 root www-data 0 Jan 1 1970 qemu-server -> nodes/node1b/qemu-server
 
