EXT4-fs (dm-1): Remounting filesystem read only

sarukesh

New Member
Jan 3, 2024
I am new to Proxmox and servers. I installed Proxmox on my IBM System x3550 M4 server and installed TrueNAS in a VM on it. I did PCI passthrough and turned on TrueNAS. Immediately my Proxmox showed a connection error, and the CLI displayed
"EXT4-fs (dm-1): Remounting filesystem read only"
If I try to type something, it says:
"-bash: /usr/bin/ls: Input/output error"

What should I do now?
 

Attachments

  • WhatsApp Image 2024-12-03 at 1.24.38 PM.jpeg
Hi,

I did PCI passthrough and turned on TrueNAS.
I assume you passed through the HBA controller? It's probably in the same IOMMU group as the disk controller Proxmox VE itself runs from, and thus that controller becomes inaccessible once the VM starts.

Can you please post the output of
- pveversion -v
- qm config <vmid>, replacing <vmid> with the respective ID of the VM
- pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""

Put everything in code tags or attach it as separate files; that makes reading it a lot easier :) One way to collect the outputs is sketched below.
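A minimal sketch of collecting those outputs into files for posting (the VM ID 100 and the use of $(hostname) for the node name are assumptions; adjust them to your setup):

Code:
# Run as root on the Proxmox VE host; each output goes into its own file.
pveversion -v > pveversion.txt
qm config 100 > qm-config.txt   # 100 is a placeholder VM ID
pvesh get /nodes/$(hostname)/hardware/pci --pci-class-blacklist "" > pci-devices.txt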
 
Hello, I typed the commands you told me to run. Immediately after turning on TrueNAS, both Proxmox and TrueNAS freeze.

I have attached the list of PCI devices that Proxmox shows.
I am new to this, so I don't know how to ask questions and troubleshoot these issues. Please help me.

WhatsApp Image 2024-12-04 at 4.29.32 PM.jpeg
 

Attachments

  • WhatsApp Image 2024-12-04 at 4.29.39 PM.jpeg
Is your boot disk (i.e. the one Proxmox VE is installed on) connected to the same controller or to another one?

Again, the host loses connection to the disk as soon as you start the VM.

So either
- the boot disk is on the same controller, or
- your HBA controller (which you pass through to TrueNAS) is in the same IOMMU group as the disk controller the boot disk is connected to.

You can check both points with the sketch below.

Also, the (complete) output of pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist "" before you start the VM is still missing.
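A minimal sketch of how both points could be checked from the host, assuming the boot disk shows up as /dev/sda (replace it with your actual boot device):

Code:
# The PCI address of the controller behind the boot disk appears in this path
readlink -f /sys/block/sda
# List every PCI device together with its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=$(basename "$(dirname "$(dirname "$d")")")
    echo "IOMMU group $g: $(lspci -nns "${d##*/}")"
done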
 
Yes, what I found is that my disk is connected to the same controller; that's why I lost the connection. What can I do to solve it? Please give me a solution to sort this out.
 
what I found is that my disk is connected to the same controller
Well, then it's basically a hardware problem. There's nothing that can be done on the software side.
You are passing through the entire disk controller, so of course the host loses its connection to it. This cannot work.

You can try adding another disk controller to your system and connecting your (boot) disks to that - which hopefully also ends up in another IOMMU group.
 
I am planning to add a 256 GB SSD through a SATA-to-USB converter at the rear USB port of my server, then install Proxmox on it and pass through my RAID controller to TrueNAS. Would that be fine?
 
Hello
I have a similar problem. After 5 days the Proxmox GUI was no longer accessible. A ping was possible, and all containers were running and reachable; only the Proxmox GUI was not.

Hardware: Minisforum MS-01, Intel Core i9-12900H
RAM: Kingston FURY Impact (2 x 32 GB, 5600 MHz, DDR5-RAM, SO-DIMM)
Storage: 2× WD Red SN700 (1000 GB, M.2 2280)

Can anyone help me find the error?

Screenshot 2025-02-22 110414.png
 


Did you do PCI passthrough of your RAID card? What exactly did you do before you got this problem?
 
If you can't do anything in the CLI, try entering Proxmox recovery mode. You can enter it by pressing e while the boot is at the GRUB menu and choosing rescue mode. I guess the problem is due to a disk failure.
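A sketch of what that GRUB edit typically looks like on a standard systemd-based install (these are generic kernel parameters, not Proxmox-specific):

Code:
# At the GRUB menu press 'e', find the line starting with "linux",
# append one of the following, then boot with Ctrl-X:
systemd.unit=rescue.target      # single-user rescue shell
systemd.unit=emergency.target   # if the root filesystem cannot be mounted normally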
 
I restarted the server and then had access to it for 5 days. Then the error occurred again. This has happened 2 times.
The hard disks are new, and a memtest shows no errors.
The NVMe temperatures are 46 °C.
 
I suspect it is related to file system corruption. Check the system for any errors:

Code:
journalctl -xe -p 3 --no-pager

Boot into recovery mode and run
Code:
fsck -y /dev/nvme0n1p1
fsck -y /dev/nvme1n1p1
If there are any EXT4-fs errors, you can fix them this way.
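Note that on a default Proxmox VE installation the ext4 root filesystem usually lives on an LVM logical volume (which would match the dm-1 in the error at the top of this thread) rather than directly on the first NVMe partition, so it may be worth confirming the device first. A sketch, assuming the default pve volume group:

Code:
# From the rescue environment, with the root filesystem not mounted:
lsblk -f                            # shows which partition or LV carries the ext4 root
vgchange -ay pve                    # activate the LVM volume group ("pve" is the default name)
fsck.ext4 -fy /dev/mapper/pve-root  # default root LV on an LVM install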
 
Code:
journalctl -xe -p 3 --no-pager
Code:
Feb 22 09:22:50 ms01 kernel: Bluetooth: hci0: No support for _PRR ACPI method
Feb 22 10:10:34 ms01 pveproxy[40766]: got inotify poll request in wrong process - disabling inotify

I have deactivated Bluetooth in the BIOS.
I do not understand the second output.
I will run fsck in a moment.

Thanks for your support.
 