Keeps booting in emergency mode

Nanobug

Member
May 14, 2021
Hello,
My Proxmox server keeps booting into emergency mode.
I tried doing what it said on the screen, but after a while it loops back to "You are in emergency mode".

We had a power outage this morning, and it has been acting up since then.
I can ping it, but not SSH to it.

I tried looking through the journal as well, but I have no idea what I'm looking for, and there are hundreds if not thousands of lines to go through.

Does anyone have an idea of how to proceed from here?

I've attached a picture of the display.
 
Looks like your "lvol" disk got corrupted because of that power outage. You should type in your password to get to the console and then comment out the line for that disk in the fstab. Then your PVE host should boot without that corrupted disk, and you can back it up and try to repair its partition table and filesystems.

You should really consider getting a UPS (just 50€) so you don't lose any more disks to power outages in the future.
 
Do you know if there is a guide for doing it?

I should just stop being lazy and add the one I already have lying around here....
I was just hoping I could wait until I'd gotten everything reorganized.
We also rarely have power outages here, guess this was just unlucky :/
 
Type in your root password as asked. Then edit the fstab with nano /etc/fstab. Find the line that mounts "/mnt/lvol" and put a "#" in front of it to comment it out. Then save your changes with CTRL+X and Y. Type reboot and see if it boots again.
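For illustration, the edit would look something like this - the device path and mount options here are made up, so expect your actual fstab entry to differ:

    # before: the entry that blocks booting when the disk is corrupted
    /dev/sdb1  /mnt/lvol  ext4  defaults  0  2

    # after: the leading "#" makes systemd skip this mount at boot
    #/dev/sdb1  /mnt/lvol  ext4  defaults  0  2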
 
Like I already said, first you should back up the complete disk on block level. That way, in case your disk-rescue attempts make it even worse, you can always return to the old state you backed up. After that, I would open that disk with parted to see if it complains about a damaged partition table. If it finds something, it will ask you whether it should try to repair it. If your partition table is fine, you can use fsck to go through the filesystems and repair them.
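A rough sketch of those two checks - "/dev/sdX" and "/dev/sdX1" are placeholders for your actual disk and partition:

    parted /dev/sdX print    # show the partition table; parted complains (and offers a fix) if the table is damaged
    fsck -f /dev/sdX1        # check and repair the filesystem on a partition (it must not be mounted)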
 
Can I copy all that over to another location somehow?
I haven't bought hardware to set up a backup solution yet.

And could you possibly walk me through that as well?
 
You can boot into something like Clonezilla and use it to back up a physical disk into an image file that you can store on another disk or a network share.
 
There is no file to copy, because you need a block-level backup of the complete block device. So you would first need to create such an image file. You can also create it with the dd command, pipe that to a file, and copy the file somewhere via SSH... but if you aren't experienced in working with dd, I would highly recommend using a tool like Clonezilla, with a GUI, that is specifically built for such backups.
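A minimal sketch of the dd-over-SSH approach, with hypothetical names - "sdX", "user@backuphost", and the target path are all placeholders, and you should triple-check the "if=" device, since dd will happily read the wrong disk:

    dd if=/dev/sdX bs=1M status=progress | ssh user@backuphost 'cat > /backups/pve-disk.img'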
 
I've accepted that I can't get the data out.
It's not super important, so I'm not going to bother with it anymore.

How do I format the lvol disk so I can use it again?
 
@Dunuin Hey. I've got the same issue with my SSD. Is there really a point in booting into Proxmox when I can't mount the SSD anymore? Shouldn't I rather boot straight into Clonezilla, as you advised, and make an image of the corrupted SSD?

Once I get to the point where I have an image backup of the SSD, what should I do next? I'm not sure if I can run parted in emergency mode. Also, could I use GParted (basically a GUI for parted) to make it easier?

If I were to use parted, what commands should I type in order to repair the SSD's partition table and filesystem? You wrote "if it complains about a damaged partition table. If it finds something, it will ask you whether it should try to repair it." - how can I start such a check so that parted verifies everything is in order?
 
@Dunuin Hey. I've got the same issue with my SSD. Is there really a point in booting into Proxmox when I can't mount the SSD anymore? Shouldn't I rather boot straight into Clonezilla, as you advised, and make an image of the corrupted SSD?
You should always make a perfect copy and only work with that copy when trying to fix a corrupted disk. That way you can't make it worse while experimenting (see the loop-device sketch after this exchange).
Once I get to the point where I have an image backup of the SSD, what should I do next? I'm not sure if I can run parted in emergency mode. Also, could I use GParted (basically a GUI for parted) to make it easier?
Not sure if GParted will try to fix partition tables; I only tested it with parted. But when trying to rescue the PVE system disk, you would usually boot into a live Linux with ZFS support (something like a live Ubuntu) so that the PVE system disk isn't in use. Then you could try things like fixing the partition table with parted, running fsck to fix ext4 filesystems, chrooting into the PVE installation to reinstall the bootloader, and so on.
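Roughly, the chroot part could look like this from the live system. This is only a sketch for a legacy-BIOS/GRUB setup (UEFI installs differ), and the names are assumptions - a default PVE install on LVM calls the root LV "pve/root", but verify with lvs first:

    vgchange -ay                            # activate the LVM volume groups
    mount /dev/pve/root /mnt                # mount the PVE root filesystem
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt
    grub-install /dev/sdX && update-grub    # inside the chroot: reinstall GRUB (sdX = the boot disk)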

If I were to use parted, what commands should I type in order to repair the SSD's partition table and filesystem? You wrote "if it complains about a damaged partition table. If it finds something, it will ask you whether it should try to repair it." - how can I start such a check so that parted verifies everything is in order?
It should automatically check the partition table and complain if there is a problem as soon as you start editing a disk with it.
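And to experiment on the backup image instead of the disk itself, one option is to attach a copy of the image as a loop device - the file names here are just examples:

    cp pve-disk.img pve-disk-work.img       # keep the original image untouched
    losetup -fP --show pve-disk-work.img    # prints e.g. /dev/loop0; -P exposes partitions as /dev/loop0p1, ...
    fsck -f /dev/loop0p1                    # repair the copy, not the original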
 

I know it's too late for someone here, perhaps, but the whole point of using a solution like PVE is to have backups of the VMs/CTs, and if something like this happens (it doesn't have to be a power outage; it could simply be an SSD going bad), you reinstall PVE and recover the actually important things from the replicas. Not sure why there's a need to dd-copy drives as images at all.

@Dunuin, I might not have gotten it here myself, but when you advise him to boot into a live Linux with ZFS support, why is that? The OP just had plain vanilla LVM going corrupt. It's not clear what @Pheggas has, but if he needs to go around fsck'ing partitions, then we are talking ext/xfs, basically anything but CoW filesystems. The system partition he could completely sacrifice in favour of a fresh install; he already has a partition table in place. He might need to fix LVM.

Lots of people here talk about ZFS and partition tables (GPT), but they do not seem to grasp what LVM is doing for them on a standard install - I am not criticizing, and I do not know what the OP has in use, but it simply gets completely overlooked. If he has LVM, he needs to look for guides on how to get his VGs together so that they even show up in /dev/mapper, and only then can he fsck the ext4 filesystems.
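For the LVM part, the usual sequence is something like the following - the "/dev/mapper/pve-root" name is just the PVE default, so verify yours with the listing commands first:

    pvscan                          # find physical volumes
    vgscan                          # find volume groups
    vgchange -ay                    # activate them, which creates the /dev/mapper nodes
    lvs                             # list the logical volumes to confirm what exists
    fsck -f /dev/mapper/pve-root    # only then check the ext4 filesystem on the root LV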
 
@Dunuin, I might not have gotten it here myself, but when you advise him to boot into a live Linux with ZFS support, why is that? The OP just had plain vanilla LVM going corrupt. It's not clear what @Pheggas has, but if he needs to go around fsck'ing partitions, then we are talking ext/xfs, basically anything but CoW filesystems. The system partition he could completely sacrifice in favour of a fresh install; he already has a partition table in place. He might need to fix LVM.
Yes, but having the system disk on LVM/ext4 doesn't mean there aren't some additional ZFS pools too. It doesn't hurt to use a ZFS-capable live Linux even if you don't use ZFS. Many people here use ZFS, so I thought it would be better to recommend a Linux that can actually mount all the filesystems PVE uses out of the box.

Lots of people here talk about ZFS and partition tables (GPT), but they do not seem to grasp what LVM is doing for them on a standard install - I am not criticizing, and I do not know what the OP has in use, but it simply gets completely overlooked. If he has LVM, he needs to look for guides on how to get his VGs together so that they even show up in /dev/mapper, and only then can he fsck the ext4 filesystems.
He gave no real information, so my answers are more general. A non-booting PVE could be a failed disk; corrupted LVM/btrfs/ZFS/ext4/... (he could even be running PVE on reiserfs on top of mdadm RAID via PVE on top of Debian, or other exotic setups...); failed hardware like a bad HW RAID card; a damaged bootloader; a damaged partition table; ...

I know it's too late for someone here, perhaps, but the whole point of using a solution like PVE is to have backups of the VMs/CTs, and if something like this happens (it doesn't have to be a power outage; it could simply be an SSD going bad), you reinstall PVE and recover the actually important things from the replicas.
Again, no existing information. But half of the people here don't seem to have recent backups, so I wouldn't wonder if that might be a reason why someone would try to rescue some data ;)
 
@Esiy @Dunuin sorry for not specifying my story. I wrote it up in my own post, which hasn't been approved yet, so here you go:

Post title: [Critical issue] Proxmox won't boot and UEFI boot sequence takes minutes

Description:
Hello everyone. I'd like to ask you for help with my Proxmox setup. I will first describe my story and then the errors.

The story: Today I was about to install Hassio, so I found a nice script that would do it for me (my friend installed it the same way and it has been okay so far). It immediately told me that I have an unsupported version of Proxmox (7.1, while it needed 7.2 or higher). So I googled how to update Proxmox (at first I wanted to upgrade to Proxmox 8, but then I realised the latest version of Proxmox 7 would be fine as well). TO BE SAFE, I started a backup of the two VMs and the one LXC container I had, and I was about to download them locally so that in any case of disaster I could recover everything in a matter of hours - but I WOULD recover. The LXC container backed up perfectly after a few minutes. Then the first VM (TrueNAS) was fine as well, in a matter of seconds.

And now the main VM, which has 200GB allocated and hosted every service I was running. I chose to save the backup to a 250GB SSD that already had about 100GB taken. The reason I chose the SSD was that I noticed the LXC container's backup got compressed to 1GB instead of 5GB, so I figured this backup would be compressed as well and would fit in the space. Well, it got stuck at 47% for like 20 minutes, so I clicked on the PVE node in the WebUI and clicked shutdown (I wanted to reboot, but whatever, I misclicked). So I walked to the server and started it manually. After 5 minutes the WebUI didn't come up, so I walked back to the server and force-shut it down by holding the power button. Turned it on again and still nothing after a couple of minutes. So I force-shut it down again and connected it to a display to see what's going on.

The HP logo showed up (which was perfectly fine; it means it's about to boot what's on the internal SSD). It took about 2 minutes, which stressed me out (because it should start the boot within a few seconds, not minutes). I was waiting for any kind of progress, which came, but it was REALLY slow. It took about a minute and a half to write the few log messages from the boot sequence shown at this link. Then it displayed this.

Do you know what went wrong and maybe how I could fix it? I'm able to log into this emergency mode, but the SSD won't mount somehow. Could it be caused by the SSD being completely out of space? I read somewhere that an SSD needs some breathing room to operate properly, and it once happened to me that Windows wouldn't boot because the SSD was completely out of space.

Thank you for any comment.

PS: I tried attaching the images in case the links won't open properly, but it says they're too large to upload. I downscaled them to a few kB and they still won't upload. Just write me a PM and I will find another way to post them.
 
@fabian Please, please, please. Is there really nothing that can be done for someone when they appeal like this? I just started on this forum a week ago, and everyone, one way or another, is fighting with the forum before being able to reach out.
 

By the time you replied, their post was already approved (in fact, it was approved shortly after the reply here).

@OP it seems that you have an automount service installed that fails to start and thus blocks booting. I'd suggest disabling it (that should be possible from the rescue shell) and then rebooting. Once your system boots, you can try to debug the automount issue further from a properly booted system.
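From the rescue shell that could look something like this - the unit name below is only a guess, so use the first command to find the one that actually failed:

    systemctl --failed                           # list failed units to identify the culprit
    systemctl disable --now mnt-lvol.automount   # disable the offending unit (name is hypothetical)
    reboot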
 
