Proxmox 8 config files are read-only

mpipz-kk

New Member
Jul 3, 2023
I'm facing the following issue on
pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-8-pve):
I can't edit files in /etc/pve because of permission errors; root only has read-only rights on these config files.
But I have to edit those files as root because of issues with the OTP auth.

Code:
root@pve-cluster-01:/etc/pve# ls -la
total 14
drwxr-xr-x  2 root www-data    0 Jan  1  1970 .
drwxr-xr-x 92 root root      187 Aug 22 18:20 ..
-r--r-----  1 root www-data  541 Jan  1  1970 .clusterlog
-rw-r-----  1 root www-data    2 Jan  1  1970 .debug
-r--r-----  1 root www-data  257 Jan  1  1970 .members
-r--r-----  1 root www-data  357 Jan  1  1970 .rrd
-r--r-----  1 root www-data 1049 Jan  1  1970 .version
-r--r-----  1 root www-data   18 Jan  1  1970 .vmlist
-r--r-----  1 root www-data  451 Aug 22 20:13 authkey.pub
-r--r-----  1 root www-data  451 Aug 22 20:13 authkey.pub.old
-r--r-----  1 root www-data  503 Aug 22 18:23 ceph.conf
-r--r-----  1 root www-data  471 Aug 22 18:27 corosync.conf
-r--r-----  1 root www-data   87 Aug 22 18:37 datacenter.cfg
-r--r-----  1 root www-data  131 Aug 23 00:11 domains.cfg
dr-xr-xr-x  2 root www-data    0 Aug 15 20:12 firewall
dr-xr-xr-x  2 root www-data    0 Aug 15 20:12 ha
lr-xr-xr-x  1 root www-data    0 Jan  1  1970 local -> nodes/pve-cluster-01
lr-xr-xr-x  1 root www-data    0 Jan  1  1970 lxc -> nodes/pve-cluster-01/lxc
dr-xr-xr-x  2 root www-data    0 Aug 15 20:12 mapping
dr-xr-xr-x  2 root www-data    0 Aug 15 20:12 nodes
lr-xr-xr-x  1 root www-data    0 Jan  1  1970 openvz -> nodes/pve-cluster-01/openvz
dr-x------  2 root www-data    0 Aug 15 20:12 priv
-r--r-----  1 root www-data 2074 Aug 15 20:12 pve-root-ca.pem
-r--r-----  1 root www-data 1704 Aug 15 20:12 pve-www.key
lr-xr-xr-x  1 root www-data    0 Jan  1  1970 qemu-server -> nodes/pve-cluster-01/qemu-server
dr-xr-xr-x  2 root www-data    0 Aug 15 20:12 sdn
-r--r-----  1 root www-data  355 Aug 23 00:08 storage.cfg
-r--r-----  1 root www-data  268 Aug 22 19:04 user.cfg
dr-xr-xr-x  2 root www-data    0 Aug 15 20:12 virtual-guest
-r--r-----  1 root www-data  119 Aug 15 20:12 vzdump.cron

Has anyone faced the same issue?
I can't change the rights for root from r to rw to edit these files (see the sketch after the file list below).

Code:
root@pve-cluster-01:/etc/pve# find . -name "*.cfg"
./domains.cfg
./storage.cfg
./user.cfg
./priv/acme/plugins.cfg
./priv/tfa.cfg
./priv/shadow.cfg
./datacenter.cfg
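
For reference, this is roughly the kind of change I tried (just a sketch, the exact file varies):

Code:
# attempt to make one of the config files writable for root
chmod u+w /etc/pve/user.cfg
# /etc/pve is a FUSE mount provided by pmxcfs, which manages these permissions
# itself, so a plain chmod is not expected to have any lasting effect here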
 
Hi,
is this host part of a cluster? If so, do you have quorum? Please post the output of pvecm status and systemctl status pve-cluster.service
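
For reference, the checks being asked for (the journalctl line is an extra, optional suggestion):

Code:
# run on the affected node, as root
pvecm status                                     # cluster membership and quorum state
systemctl status pve-cluster.service             # the service that runs pmxcfs
journalctl -b -u pve-cluster.service --no-pager  # recent log of the cluster filesystem (optional)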
 
Hi,
is this host part of a cluster? If so, do you have quorum? Please post the output of pvecm status and systemctl status pmxcfs.service

Code:
root@pve-cluster-01:~# pvecm status   
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = "de_DE.UTF-8",
    LC_MONETARY = "de_DE.UTF-8",
    LC_PAPER = "de_DE.UTF-8",
    LC_MEASUREMENT = "de_DE.UTF-8",
    LC_TIME = "de_DE.UTF-8",
    LC_NUMERIC = "de_DE.UTF-8",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
Cluster information
-------------------
Name:             pve-cluster
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Aug 23 15:14:28 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.43
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.178.35 (local)

Code:
root@pve-cluster-01:~# systemctl status pmxcfs.service
Unit pmxcfs.service could not be found.
 
You have a 2-node cluster and the other node is not reachable, therefore the remaining node is read-only as it has no quorum. Please bring up the second node and the cluster filesystem should be read-write again.

systemctl status pmxcfs.service
My bad, should have been systemctl status pve-cluster.service, updated the previous post accordingly.
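
Once the second node is reachable again, a quick sanity check could look like this (just a sketch):

Code:
# quorum should report "Quorate: Yes" again
pvecm status | grep -i quorate
# and /etc/pve should accept writes again, e.g. via a throwaway test file
touch /etc/pve/write-test && rm /etc/pve/write-test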
 
Oh really, I didn't think that would have such an impact. It's a test cluster with 2 nodes which will later be an 8-node cluster in production.
Does it always go into read-only when some nodes aren't available, or is there a percentage?

Code:
root@pve-cluster-01:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; preset: enabled)
     Active: active (running) since Wed 2023-08-23 09:45:13 CEST; 6h ago
    Process: 1537 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
   Main PID: 1550 (pmxcfs)
      Tasks: 7 (limit: 9209)
     Memory: 55.7M
        CPU: 6.007s
     CGroup: /system.slice/pve-cluster.service
             └─1550 /usr/bin/pmxcfs

Aug 23 09:45:18 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: members: 1/1550
Aug 23 09:45:18 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: all data is up to date
Aug 23 09:45:18 pve-cluster-01 pmxcfs[1550]: [status] notice: members: 1/1550
Aug 23 09:45:18 pve-cluster-01 pmxcfs[1550]: [status] notice: all data is up to date
Aug 23 10:45:12 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: data verification successful
Aug 23 11:45:12 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: data verification successful
Aug 23 12:45:12 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: data verification successful
Aug 23 13:45:12 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: data verification successful
Aug 23 14:45:12 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: data verification successful
Aug 23 15:45:12 pve-cluster-01 pmxcfs[1550]: [dcdb] notice: data verification successful
 
Yes, a Proxmox VE cluster requires more than 50% of the nodes to be online and able to communicate with each other in order to avoid split-brain situations. Nodes which are not part of the quorate cluster partition are switched into a read-only state, as you experienced. Details can be found in the docs: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_quorum
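
To put a number on it: with the default of one vote per node, a partition needs a strict majority, i.e. floor(n/2) + 1 votes. A tiny sketch of the resulting thresholds:

Code:
# votes required for quorum with n single-vote nodes
for n in 2 3 5 8; do
    echo "$n nodes -> $((n / 2 + 1)) votes required to stay writable"
done
# 2 nodes -> 2, 3 nodes -> 2, 5 nodes -> 3, 8 nodes -> 5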
 
Okay, alright. The second node is available again and the permissions changed back from read-only to read-write.

Almost every question has been answered.
During that process, I noticed a login issue for root@pam when both an active and an inactive 2FA were configured.
Is this normal behavior or a bug?

After getting the second node working again, 2FA with root works again.

One other piece of information: users from the PVE authentication server could log in normally and successfully through the web interface.
 