Filesystem EXT4 Repair

Sofa Surfer

Hello, if this is off topic please point me to the right place. I am running PVE 8 on my machine, with a 2TB SATA SSD drive with 2 partitions that I use for media storage, shared on the network with OMV. The other day one of the partitions suddenly disappeared, and in the journal log I found it is no longer mounted.
With a bit of research (I am pretty new to Linux) I found that the EXT4 filesystem is no longer recognised, and I am looking for a way (if there is one) to repair it and avoid a lot of data loss.


Code:
root@pve:~# fsck /dev/sda1
fsck from util-linux 2.38.1
e2fsck 1.47.0 (5-Feb-2023)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sda1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Found a gpt partition table in /dev/sda1

Code:
root@pve:~# e2fsck -b 32768 /dev/sda1
e2fsck 1.47.0 (5-Feb-2023)
e2fsck: Bad magic number in super-block while trying to open /dev/sda1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Found a gpt partition table in /dev/sda1
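
For reference: the list of backup superblock locations can be printed without writing anything to the disk, e.g. with a dry-run mke2fs. This only gives the right locations if the parameters match the ones used when the filesystem was created, so treat it as a sketch:

Code:
# -n = dry run: print what mke2fs *would* do (including superblock backups)
# without creating a filesystem. The filesystem type here is an assumption.
mke2fs -n -t ext4 /dev/sda1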


Is there some other tool I can use to try a repair? If necessary, I can detach the drive and connect it to a Windows 11 system for some attempts.

Thanks in advance for your help!
 
Found a gpt partition table in /dev/sda1
Looks like your partition /dev/sda1 does not contain a filesystem but (nested) partitions. As if you used the partition /dev/sda1 as a virtual disk for a VM, which then partitioned that virtual disk.
I would pass the partition through (like you probably did before) to a (new?) VM and run fsck from inside the VM. Maybe boot that VM with GParted Live, or use testdisk to investigate any problems with the filesystem (on the partition inside the partition).
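If you want to look from the PVE host first, before touching a VM, a read-only way to confirm that nested layout could be something like this (a sketch: loop device names will differ, and don't run it while the partition is attached to a running VM):

Code:
# Show whatever partition table lives inside /dev/sda1
fdisk -l /dev/sda1

# Expose the nested partitions via a loop device (-P scans its partition table)
losetup -fP --show /dev/sda1     # prints e.g. /dev/loop0
lsblk /dev/loop0                 # nested partitions show up as loop0p1, ...

# Read-only check for an ext4 filesystem inside the nested partition
fsck.ext4 -n /dev/loop0p1

# Detach the loop device when finished
losetup -d /dev/loop0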
 
Thanks for the reply. I used sda1 only for storage and didn't install a VM on it; I only passed it to the OMV VM and then shared it with SMB/CIFS. sda2 is mounted via the datacenter as a backup disk and is still OK. This is the structure of my partitions:

Code:
root@pve:~# lsblk
NAME                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                8:0    0   1.8T  0 disk
├─sda1                             8:1    0   1.6T  0 part
└─sda2                             8:2    0 186.3G  0 part /mnt/pve/backups

In the Proxmox UI, under Datacenter, I also see no "mounted" flag for it.
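From the host shell the same thing can be cross-checked with read-only commands; a sketch (actual output not shown here):

Code:
# Low-level probe: which signatures are really on the partition?
blkid -p /dev/sda1
wipefs -n /dev/sda1      # -n: only list signatures, never erase

# Status of the storages defined on the PVE host
pvesm status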
 
The other day one of the partitions suddenly disappeared, and in the journal log I found it is no longer mounted.
only passed it to the OMV VM
How "passed to OVM" if it was mounted in PVE host?
I'm worried there is something of a conflict here.
Either I've misunderstood your terminology or the 2 above mentioned were at different times. Whatever the case that's probably the cause of the FS corruption.
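One way to check whether both sides still claim the disk would be something like this (a sketch: VM ID 100 is an assumption, and grepping for "sda" only catches a passthrough done via the /dev/sdX path):

Code:
# Does any VM config reference the disk?
grep -r sda /etc/pve/qemu-server/
qm config 100

# Is it also referenced or mounted on the PVE host itself?
grep sda /etc/fstab /etc/pve/storage.cfg
findmnt | grep sda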
 
I'm worried there is something of a conflict here.
It's possible, I am not an expert. To pass it to OMV I created the partition with cfdisk, using this guide:

https://www.youtube.com/watch?v=PHmHNzv3a7s&t=1010s

After that I mounted the storage on the PVE side in storage.cfg, and bind mounted it into another container (docker, where I have my services such as Frigate and Plex) via fstab.
I hope this was correct!
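For context, the kind of entries I mean would look roughly like this; this is an assumed sketch with made-up IDs, share names and paths, not my actual files:

Code:
# /etc/pve/storage.cfg on the PVE host: PVE mounts a storage with this ID
# at /mnt/pve/omv (values here are only examples)
cifs: omv
        server 192.168.1.50
        share media
        content backup

# /etc/fstab: bind mount that directory to where the docker container expects it
/mnt/pve/omv   /srv/docker/media   none   bind   0   0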
I think (one of) my mistakes was that my machine ran overheated (about 75°C) for a couple of months because of a configuration error in Frigate; I was using this partition to save clips/video to network storage.
 
It's possible, I am not an expert. To pass it to OMV I created the partition with cfdisk, using this guide:

https://www.youtube.com/watch?v=PHmHNzv3a7s&t=1010s

After that I mounted the storage on the PVE side in storage.cfg, and bind mounted it into another container (docker, where I have my services such as Frigate and Plex) via fstab.
I hope this was correct!
If I understood correctly, you did the following?

1. In the PVE host you ran cfdisk on /dev/sda & created a (GPT) partition /dev/sda1
2. You then passed this disk /dev/sda (as in the video) to the OMV VM (whole disk)
3. You then "mounted the storage on the PVE side in storage.cfg"; here I'm really not sure what you mounted, but I think you mean /dev/sda2?

I won't carry on with the rest. But if what I've written is correct, your partition is doomed; you can't just pass it around like a hot potato. Either OMV has it or your Proxmox host has it. Not both.

Maybe you made an SMB share in OMV & passed that to the Proxmox host? (Probably not, based on your lsblk output above.) That also wouldn't be ideal in storage.cfg, as on Proxmox boot it wouldn't exist until the OMV VM boots up (after) Proxmox. A little/lot of the chicken & the egg.
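If that were the setup, the boot-order problem can be seen directly from the host after a reboot; a sketch (VM ID 100 is an assumption):

Code:
# Right after the host boots, a storage backed by the OMV VM's SMB share
# cannot be online yet, because the VM itself has not started
pvesm status

# Only once the VM is up does the share (and the storage) become reachable
qm start 100
pvesm status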
 
I won't carry on with the rest. But if what I've written is correct, your partition is doomed; you can't just pass it around like a hot potato. Either OMV has it or your Proxmox host has it. Not both.
This is reassuring.

Maybe you made an SMB share in OMV & passed that to the Proxmox host? (Probably not, based on your lsblk output above.) That also wouldn't be ideal in storage.cfg, as on Proxmox boot it wouldn't exist until the OMV VM boots up (after) Proxmox. A little/lot of the chicken & the egg.
I had the same thought about the chicken/egg problem, but I couldn't find a better solution, and it worked well for about 6 months. To confirm what you say: whenever I reboot PVE, I have to redo the mount manually from fstab (umount /mnt/pve/omv, then mount -a).
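Spelled out, that manual sequence after each reboot is (paths as above):

Code:
findmnt /mnt/pve/omv    # no output = the share did not get mounted at boot
umount /mnt/pve/omv     # drop a stale mount if one is present
mount -a                # re-mount everything listed in /etc/fstab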

Basically I've done something like this (I am a bad example, not to be followed):

[attached image 1715248923845.jpeg: diagram of the setup described]

Another thing I should have kept an eye on is the drive temperature; this is when (I think) my partition got damaged, on May 2nd:

[attached image 1715249095197.jpeg: drive temperature around May 2nd]

Many thanks for helping open my eyes. What do you suggest I do now? Re-create the partition, format it, and then? How do I mount my partition directly into my docker without using the SMB share from OMV? (I need this shared folder anyway, to have my data available on my home network.) I hope my mistakes will be useful! :D
 
I don't know all your requirements or setup, but here is what I would do:

1. Pass the required partition (/dev/sda1) through to OMV (see the sketch after this list)
2. From OMV, make any shares of folders within the partition (/dev/sda1) available via SMB
3. Access those SMB shares from your docker/plex/frigate
4. Do NOT add these partitions/shares to Proxmox Storage (storage.cfg).
5. If you want/need some area on this drive for Proxmox Storage - use & create a different partition (/dev/sda2 maybe).
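For step 1, passing the single partition through to the OMV VM could look roughly like this; a sketch where the VM ID and the disk serial are placeholders, and the partition must not be mounted or referenced on the PVE host at the same time:

Code:
# Use the stable by-id name so the passthrough survives /dev/sdX renumbering
ls -l /dev/disk/by-id/ | grep part1

# Attach the partition to the OMV VM as an extra SCSI disk
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL-part1

Inside the VM the passed-through partition then appears as a disk of its own, which OMV can format and share.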

Maybe mark this thread as [SOLVED] (under thread title right-hand side edit).
 
