GUI console goes "Error 500: unable to open file '/var/tmp/pve-reserved-ports.tmp.791677' - Read-only file system"

stooovie

PVE 8.0.4, single node, my homelab. After four months of perfect uptime, today I can't open the console via the GUI due to "Error 500: unable to open file '/var/tmp/pve-reserved-ports.tmp.791677' - Read-only file system".

dmesg reveals a bunch of these:

EXT4-fs warning (device dm-3): ext4_dirblock_csum_verify:404: inode #1835765: comm rm: No space for directory leaf checksum. Please run e2fsck -D.
EXT4-fs error (device dm-3): htree_dirblock_to_tree:1080: inode #1835765: comm rm: Directory block failed checksum
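
(For reference, a couple of status checks that don't change anything: "ro" in the mount options would confirm that the kernel remounted the root filesystem read-only after the ext4 error, which is what the Error 500 is complaining about.)

# "ro" here means the kernel has remounted / read-only after the ext4 error
findmnt -no OPTIONS /

# the forced remount and any further ext4 errors are logged here
dmesg | grep -iE 'ext4-fs|remount'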

smartctl -a on /dev/sda and /dev/sdb reveals no errors.
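
(smartctl -a mostly dumps the attribute table; for a bit more confidence in the disks themselves, a long self-test plus a look at the usual failure-related attributes is cheap. Treat the grep pattern as a rough starting point only, since attribute names vary by vendor.)

# kick off the drive's extended self-test (runs in the background, takes a while)
smartctl -t long /dev/sda

# read the result once it has finished
smartctl -l selftest /dev/sda

# attributes that most often flag a failing disk or a bad SATA cable
smartctl -A /dev/sda | grep -iE 'realloc|pending|uncorrect|crc'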

df reveals this (doesn't look like a full disk, does it?):

root@pve:~# df
Filesystem           1K-blocks     Used Available Use% Mounted on
udev                   8107064        0   8107064   0% /dev
tmpfs                  1629024     1912   1627112   1% /run
/dev/mapper/pve-root  40606184 17761160  20750128  47% /
tmpfs                  8145112    45648   8099464   1% /dev/shm
tmpfs                     5120        4      5116   1% /run/lock
/dev/sda2              1046508      348   1046160   1% /boot/efi
/dev/fuse               131072      108    130964   1% /etc/pve
tmpfs                  1629020        0   1629020   0% /run/user/0
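
(One more thing worth ruling out: df only shows block usage, and ext4 can also refuse writes once it runs out of inodes. The dmesg messages above point at directory metadata corruption rather than a genuinely full disk, but the check is free.)

# inode usage for the root filesystem; IUse% near 100 would also cause "no space" errors
df -i /
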
The system is up and running; the VMs and LXCs live on a different NVMe drive and are still writable. There has NOT been a power outage or a reboot in four months; the error just appeared today.

I'm afraid to reboot the system. Anything I can do first? Thanks!
 
e2fsck -n /dev/sda
Warning! /dev/sda is in use.
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sda

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>

Found a gpt partition table in /dev/sda
-----------------
In fact, ALL e2fsck checks of the drives return "ext2fs_open2: Bad magic number in super-block", including the NVMe that holds the VMs.
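
(That result is expected: pointed at a whole disk, or at an LVM physical volume, e2fsck finds a GPT header or LVM metadata instead of an ext superblock, hence the "bad magic number". Something along these lines shows which block devices actually carry an ext4 filesystem; the exact layout of course depends on the install.)

# FSTYPE shows where the ext4 filesystems really live (on logical volumes, not on the raw disks)
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT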
 
You need to run fsck on the filesystem, not on the whole disk; in your case that is the logical volume /dev/mapper/pve-root. You also need to run it from a recovery environment, not on the running system. I hope you have up-to-date backups.
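
(Roughly, from a live/rescue environment where pve-root is not mounted, that would look something like this. It is only a sketch, and the names assume the stock Proxmox LVM layout with a volume group called "pve".)

# make the LVM logical volumes visible inside the rescue environment
# ("pve" is the default volume group name on a stock install)
vgchange -ay pve

# force a full check of the root LV even if it claims to be clean;
# add -y to answer "yes" to every repair prompt
e2fsck -f /dev/mapper/pve-root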
 
I was able to fix it from a Proxmox rescue USB. There were a lot of errors, but a second pass of fsck did not reveal any more. Do you think it's okay? A complete wipe, or even a replacement of the (new! 120 GB Apacer SSD) drive, would be better, right? The data is on a different NVMe (and mostly backed up).
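
(Before fully trusting it again, it is probably worth confirming that the superblock now reports clean, and keeping an eye on dmesg and the SSD's SMART counters for a while; whether to replace a brand-new drive that corrupted metadata once is a judgment call.)

# should now report "clean"
tune2fs -l /dev/mapper/pve-root | grep -i 'filesystem state'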
 