All CTs turned to read-only file system

Kan

Well-Known Member
Dec 21, 2016
Hello,

I use PVE 6.4, and today I found errors like this in /var/log/messages of a CT:

Code:
vm systemd-sysctl: Failed to write '0' to '/proc/sys/kernel/yama/ptrace_scope': Read-only file system

I then found the same errors on other CTs (only 3 CTs running + 1 VM on this single node, no cluster).

I rebooted the PVE host, but the CTs still fail with read-only errors. Even if I restore a backup, it starts with a read-only file system.

'zpool status' reports that everything is OK (only 2 disks).

I don't know what I can do to repair it. Please help me.
 
Check if you ran out of space:
Commands that might help: lvs, vgs, zfs list -o space, df -h
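Those commands can be condensed into a quick check. A minimal sketch (the 90% threshold is my own choice, not from this thread):

```shell
# Flag any mounted filesystem above 90% usage (a quick out-of-space check).
# df -P prints stable POSIX columns; the 5th column is "Use%".
df -P | awk 'NR > 1 { gsub("%", "", $5); if ($5 + 0 >= 90) print $6 " is " $5 "% full" }'
```

Note that on ZFS a dataset can also refuse writes when the pool itself is full or degraded, which is why zpool status and zfs list -o space are worth checking as well.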
 
Not a space problem.

Code:
# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                          5.9G     0  5.9G   0% /dev
tmpfs                         1.2G  8.9M  1.2G   1% /run
rpool/ROOT/pve-1              404G  192G  212G  48% /
tmpfs                         5.9G   40M  5.9G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                         5.9G     0  5.9G   0% /sys/fs/cgroup
rpool                         212G  128K  212G   1% /rpool
rpool/ROOT                    212G  128K  212G   1% /rpool/ROOT
rpool/data                    212G  128K  212G   1% /rpool/data
rpool/data/subvol-103-disk-0   70G   24G   47G  34% /rpool/data/subvol-103-disk-0
rpool/data/subvol-100-disk-0   20G  1.4G   19G   7% /rpool/data/subvol-100-disk-0
rpool/data/subvol-101-disk-0   20G  1.6G   19G   8% /rpool/data/subvol-101-disk-0
/dev/fuse                      30M   20K   30M   1% /etc/pve
tmpfs                         1.2G     0  1.2G   0% /run/user/0


Code:
# zfs list -o space
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                          211G   239G        0B    104K             0B       239G
rpool/ROOT                     211G   192G        0B     96K             0B       192G
rpool/ROOT/pve-1               211G   192G        0B    192G             0B         0B
rpool/data                     211G  46.0G        0B    112K             0B      46.0G
rpool/data/subvol-100-disk-0  18.7G  1.32G        0B   1.32G             0B         0B
rpool/data/subvol-101-disk-0  18.4G  1.57G        0B   1.57G             0B         0B
rpool/data/subvol-103-disk-0  46.3G  23.7G        0B   23.7G             0B         0B
rpool/data/vm-102-disk-0       211G  19.4G        0B   19.4G             0B         0B

lvs and vgs returned nothing.
 
Even when I back up a CT and restore it on another PVE host, I still have the read-only problem. Nothing works!


Code:
Mar 26 15:10:53 vm systemd-sysctl: Failed to write '0' to '/proc/sys/kernel/yama/ptrace_scope': Read-only file system
Mar 26 15:10:53 vm systemd-sysctl: Failed to write '16' to '/proc/sys/kernel/sysrq': Read-only file system
Mar 26 15:10:53 vm systemd-sysctl: Failed to write '1' to '/proc/sys/kernel/core_uses_pid': Read-only file system
Mar 26 15:10:53 vm systemd-sysctl: Failed to write '1' to '/proc/sys/fs/protected_hardlinks': Read-only file system
Mar 26 15:10:53 vm systemd-sysctl: Failed to write '1' to '/proc/sys/fs/protected_symlinks': Read-only file system

Please help.
 
Talking to myself...

The problem seems to affect only the CentOS CTs. No problem with Debian.

Is there a (new) compatibility problem between PVE 6 and CentOS 7?
 
I definitely do not have high I/O on the PVE host. If I start only one CT (CentOS 7), the problem continues. I suspect an update of PVE 6 is the cause, but I can't prove it.
 
Have you tried going to the PVE GUI, selecting your VM, then Hardware, then the Hard Disk, then clicking Edit at the top of the page? Check whether there is a checkmark next to "read-only" in the bottom left.

(screenshot: readonly.jpg)
 
There is no such parameter for containers. The read-only file system occurs only with the CentOS 7 containers; no problem with the Debian container or the VM.
 
Ah OK, I'm sorry, I thought I read VM.
Hopefully the Proxmox team can help you more.

Best Regards,
Spiro
 
One last thing I can think of:
have you tried a different kernel? Maybe downgrade the kernel and see if that is the issue.
 
it's not the disk being read-only, but certain parts of the kernel interface (intentionally - this is part of how containers are isolated). please post the full container config.
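For context, this isolation is visible from inside a container: in an unprivileged CT, parts of /proc are mounted read-only, which is what produces EROFS on sysctl writes. A minimal sketch (run inside the container; on a bare host /proc/sys is usually not a separate mount, so the fallback message prints):

```shell
# List any /proc/sys submounts and their flags; inside an LXC container
# these typically show "ro" among the mount options.
grep ' /proc/sys' /proc/mounts 2>/dev/null \
  || echo "/proc/sys is not a separate read-only mount here"

# A write to a protected sysctl then fails with EROFS, matching the logged
# errors, e.g.:
#   echo 0 > /proc/sys/kernel/yama/ptrace_scope   # -> Read-only file system
```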
 
Not sure that's what you requested...

Code:
# cat /etc/pve/local/lxc/100.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: vm.xxxxxxx
memory: 4096
net0: name=eth0,bridge=vmbr0,gw=123.123.123.254,hwaddr=8E:E2:49:BF:6B:05,ip=123.123.123.15/24,type=veth
ostype: centos
rootfs: local-zfs:subvol-100-disk-0,size=20G
swap: 5120
unprivileged: 1
 
your CT is unprivileged, but doesn't have the nesting feature enabled. this can cause issues with some distros; I'd recommend turning on the nesting feature.
 
OK, and how can I turn on "nesting", please?
 
pct set CTID -features nesting=1 (CTID = 100 for you) in a shell, or use the GUI: Container -> Options -> Features -> check "Nesting".
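To confirm the feature actually landed, the container config can be checked directly. A sketch using the config path shown earlier in this thread (on a non-PVE machine the fallback message prints):

```shell
CONF=/etc/pve/local/lxc/100.conf   # CT config path used earlier in this thread
# After 'pct set 100 -features nesting=1' the config should contain a line
# like "features: nesting=1".
grep '^features:' "$CONF" 2>/dev/null \
  || echo "no features line in $CONF - nesting not (yet) enabled"
```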
 
Unfortunately this did not fix the problem. Still the same errors in /var/log/messages, and no service works... :(

The strangest part is that the problem seems to have occurred on both CentOS 7 CTs at the same time (90% sure). And why didn't it affect the Debian CT?
 
have you rebooted the container(s) after enabling nesting?

pct reboot CTID
 
I added "nesting" with the CT off, then started and rebooted the CT, but no change.
 
just re-read the messages - those are not nesting related, but sysctls that are not settable in a container because they would affect the whole host. does the container actually not work, or are you just worried about those messages?
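One way to tell this cosmetic EROFS noise apart from real failures is to filter the log. A minimal sketch using the poster's log path (the match patterns come from the messages quoted above):

```shell
# Show read-only errors that are NOT the known-harmless systemd-sysctl ones.
# If nothing remains, the EROFS messages alone do not explain broken services.
grep 'Read-only file system' /var/log/messages 2>/dev/null \
  | grep -v 'systemd-sysctl' \
  || echo "only systemd-sysctl EROFS messages (or none) found"
```

Whether services actually failed is better checked inside the CT, e.g. with systemctl --failed.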
 
