Proxmox booting into emergency mode after Acronis True Image cloning to a bigger SSD

paxmobile

New Member
Mar 3, 2023
I have been using Proxmox for a couple of months and managed to get 3 servers going on a Dell Wyse 5070 with 24GB RAM: MeshCentral on Windows 10 (in hindsight a questionable choice, but it's working gorgeously), Pi-hole on Raspberry Pi OS as an ad filter, and Home Assistant for managing all my home automation. Now I want to add a Nextcloud VM and need more SSD space, so I cloned the 500GB SSD to a 1TB SSD, but since then Proxmox is not booting:

Found volume group "pve" using metadata type lvm2
8 logical volume(s) in volume group "pve" now active
/dev/mapper/pve-root: recovering journal
/dev/mapper/pve-root: clean, 75271/4554752 files, 15299006/18217984 blocks
[ TIME ] Timed out waiting for device /dev/disk/by-uuid/7739-64BE
[DEPEND] Dependency failed for /boot/efi
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for File System Check on /dev/disk/by-uuid/7739-64BE
You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.
Give root password for maintenance
(or press Control-D to continue):

What command(s) could I try to fix this? I'm quite a rookie with Proxmox and Linux, so I could use a little help with the commands; I've been stuck for 2 days.
 
Yes, the old 500GB SSD runs perfectly; I didn't change anything on the machine. I just tried editing /etc/fstab, and after putting a magic # in front of the /dev/disk/by-uuid/7739-64BE line, the issue was solved and Proxmox booted again, so the case is closed.
Now I have a 1TB SSD that, being raw-cloned from a 500GB one, has exactly the same old partition layout, but I guess that's a generic Debian matter. Thanks!
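For anyone hitting the same problem, the whole fix from the emergency shell was roughly this (nano is just my editor of choice; the exact fstab line may look slightly different on your install):

nano /etc/fstab
# put a # in front of the line mounting /dev/disk/by-uuid/7739-64BE (the /boot/efi entry), then save
systemctl reboot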
 
Great, what file system is the root? Maybe you can resize it now.


Possibly the disk UUID is different. Plug in the 1TB SSD and run blkid to see the UUID of your 1TB SSD.
You should see something like this:
/dev/sda1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"

Copy the UUID of your 1TB SSD.

Comment out this line in your /etc/fstab:
/dev/disk/by-uuid/7739-64BE

add a new line:
/dev/disk/by-uuid/uuid-of-1tb-ssd

Save and reboot, then let me know what happens.
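
(Side note: the entry that failed is /boot/efi, which is a FAT partition, so blkid will show it with a short UUID like "7739-64BE" and TYPE="vfat" rather than ext4. A complete fstab line would look roughly like this, with the mount options assumed from a default Proxmox install:

UUID=XXXX-XXXX /boot/efi vfat defaults 0 1

Replace XXXX-XXXX with the UUID that blkid reports for the EFI partition on the new disk.)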
 
I think there is a misunderstanding: my Proxmox now boots correctly, all 3 VMs are working, and it even sees the backup HDD on the USB 3 port. The thing is, I have to enlarge the partition(s) to use the whole SSD: when the disk was cloned, I got the same partitions of the 500GB disk on the 1TB SSD. Today at work I'll create a bootable USB stick with Ventoy and add some partition managers like Acronis Disk Director and Partition Magic, so that I can boot from that and resize the partitions. I've never done this with Linux partitions, but it should be the same as with Windows NTFS ones... at least I hope so.
 
Now I've run GParted from a bootable USB stick and resized the ext4 partition from about 230GB to 920GB, but in Proxmox I cannot see any difference. Any idea why this happened?
 
You may not have even had to do that, but what you can do is:

Example:

growpart /dev/sda 2

resize2fs /dev/sda2

You may only need to do the resize2fs.
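
One thing worth checking first, since on a default Proxmox install the root filesystem sits on LVM rather than directly on a partition (a quick sketch; device names depend on your layout):

lsblk
df -h /

If / shows up as /dev/mapper/pve-root, then growing the partition with GParted only grew the container: the LVM physical volume, the root logical volume and the filesystem each have to be extended as well, which would explain why Proxmox still reports the old size.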
 
The strange thing is that GParted seems to have done its job; I even rebooted a second time with the GParted USB pendrive and the ext4 main partition was at 9xxGB... so it's crazy that Proxmox still reports it as 500GB as if nothing happened.
What do you mean by resize2fs?
Anyway, I'm going to try the growpart command and will let you know.
 
I will!
Well, it seems that I often bite off more than I can chew...
So I'm stuck on this resize2fs command. Should it begin with resize2fs /dev/... or resize2fs /mnt/...? Sorry, but I wasn't able to find a matching tutorial yet.

[Screenshot attachment: PM.png]
 
I'm definitely stuck on the use of the resize2fs command and still have to enlarge my circa-240GB partition, extending it to its full terabyte capacity. Does anyone know of a link to an easy tutorial that fits inexperienced *nix users? I only found guides that require more than basic Linux CLI knowledge.
 
Let's see the output of #lvs

Your /dev/sda4 already looks to be 1TB, so you may not need to use "pvresize /dev/sda4".

#lvextend -l +100%FREE /dev/pve/root
#resize2fs /dev/mapper/pve-root
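
If lvextend complains that there is no free space in the volume group, the physical volume probably hasn't picked up the enlarged partition yet; in that case run pvresize first (a sketch; /dev/sda4 assumed from your screenshot):

pvresize /dev/sda4
vgs    # VFree should now show the newly added space
lvextend -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root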
 
Try these commands and see whether they work or not:

lvextend -l +100%FREE /dev/pve/root

resize2fs /dev/mapper/pve-root

If you get errors, post the output. Also send the output of this command:

lvs
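
Afterwards you can confirm it worked with (sizes will obviously differ on your system):

lvs
df -h /

df should then report the new, larger size for / instead of the old one.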
 
