Proxmox won't boot

hzuiel

New Member
Dec 22, 2021
Backstory: I installed Proxmox and a Windows 10 VM a while back, and it was rock solid for quite a while. I decided I wanted to host something else on the Win10 VM, so I went to increase the amount of storage allocated to it, and while fumbling around with the GUI I managed to give the VM more storage than actually exists in the pool. On top of that, an install that was supposed to be 140 GB ended up adding something like 370 GB to my storage, which filled the partition completely; Windows 10 couldn't function in that condition and the VM stopped booting.

I hadn't made a snapshot, so I figured the best solution was to add storage. I got a 2 TB hard drive to add to my existing 500 GB SSD and, after doing some research, found the right commands to extend the storage pool to include that drive. The Windows 10 VM was able to boot again, and annoyingly the occupied space dropped by over 200 GB once the installation finished. All seemed fine at that point, though the Win10 VM has crashed a couple of times since then for no apparent reason, and I haven't had a chance to look into why yet.

Then yesterday the power company was doing maintenance and cut our power while I was sleeping (apparently they knocked on the door once). When the power came back on, Proxmox would no longer boot. This is what I get:

Found volume group "pve" using metadata type lvm2
4 logical volume(s) in volume group "pve" now active
/dev/mapper/pve-root: recovering journal
/dev/mapper/pve-root: clean, 49658/6291456 files, 2465140/25165824 blocks
[ TIME ] Timed out waiting for device /dev/disk/by-label/Storage.
[DEPEND] Dependency failed for /mnt/data.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for File System Check on /dev/disk/by-label/Storage.
You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.
Give root password for maintenance
(or press Control-D to continue):

Pressing Control-D to continue or booting into default mode both just eventually come back to this same screen.

The method I found online, and used to extend the existing volume, was the following (the same steps are written out as commands after the list):

partition drive and set LVM 8e flag
create new physical volume with pvcreate /dev/sdb1
extend default VG with vgextend pve /dev/sdb1
extend data logical volume with lvextend /dev/mapper/pve-data /dev/sdb1
verify your FS is clean with fsck -nv /dev/mapper/pve-data
resize your FS with resize2fs -F /dev/mapper/pve-data
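
Written out as actual commands, it was roughly this (from memory, and assuming the new 2 TB drive showed up as /dev/sdb, so double-check device names with lsblk before copying any of it):

# partition the new drive with a single partition of type Linux LVM (8e), then:
pvcreate /dev/sdb1                        # make the partition an LVM physical volume
vgextend pve /dev/sdb1                    # add it to the existing "pve" volume group
lvextend /dev/mapper/pve-data /dev/sdb1   # grow the data logical volume onto the new PV
fsck -nv /dev/mapper/pve-data             # read-only check that the filesystem is clean
resize2fs /dev/mapper/pve-data            # grow the filesystem to fill the enlarged LV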

Attached are screenshots of the result of this set of instructions.
 

Attachments

  • image (1).png (17.7 KB)
  • image.png (21.8 KB)
Someone elsewhere suggested running fsck, but it says /dev/mapper/pve-root is busy and so it can't run the check. I have tried to find instructions for getting around this and nothing works. I can't umount /dev/mapper/pve-root either, and in that emergency mode it says I don't have the rights to even touch /etc/fstab, even after putting in the root password.
 
The problem does not seem to be with "/dev/mapper/pve-root", but the other mount point "/dev/disk/by-label/Storage" ("/mnt/data").

That one should be in your "/etc/fstab" and Debian cannot safely find/start it, hence it starts in the emergency mode.
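
If it really is just that mount that is failing, you can usually get back to a normal boot by fixing or neutralizing its line in "/etc/fstab". Roughly something like this from the emergency shell (the root filesystem is normally mounted read-only there, which would also explain why you could not change the file; the label name is taken from your error messages, so treat it as an assumption):

mount -o remount,rw /            # make the root filesystem writable again
ls -l /dev/disk/by-label/        # see whether the "Storage" label is still visible at all
cat /etc/fstab                   # find the line that mounts /mnt/data
nano /etc/fstab                  # comment that line out, or append the "nofail" mount option
systemctl reboot

With that line commented out (or marked nofail), the host should come up normally again and you can look into the Storage disk from a full system instead of the emergency shell.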
 
Any advice on troubleshooting that?
 
Could that be something that was added by default when I used the command line to format the drive and create the LVM partition?
 
That is all difficult to guess, because we do not know your setup or everything else that has happened.

Are you sure the disk (or partition, or LV) behind "/dev/disk/by-label/Storage" is still online?
 
Is there a command that would tell me? There are only two disks, sda1 and sdb1.
Why does it not even let me view /etc/fstab in emergency mode? Is there a way around that?
 
EDIT: Apologies, I replied to the wrong thread - I didn't mean to hijack this.

I'm currently using Boot Repair, and it's finding the two partitions below on the disk in question, but it's marking them both as Primary. Is that right? In its current config it still fails to boot.

1664890597355.png
 
The first one looks like a normal ESP and the second like normal LVM. But it appears you are using a tool that reads the MBR (which can have up to four primary partitions, so two is not strange), while the drive probably also has a more modern GPT. Personally, I always recommend GParted Live, which is much more recent than 2015.
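
If you want to double-check from a live system which partition table the disk actually uses, something along these lines will show it (the device name /dev/sda is an assumption; check with lsblk first):

lsblk -f                 # list disks and partitions with filesystem types and labels
parted /dev/sda print    # the "Partition Table:" line shows gpt or msdos, followed by the layout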
 
