[SOLVED] Emergency mode after upgrade to 6

Ivan Gersi

Well-Known Member
May 29, 2016
Node1 of my cluster drops into emergency mode after the upgrade from 5 to 6... with the old kernel it boots up correctly. The problem is my LVM.
Code:
root@pve1:~# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0  3.7T  0 disk
├─sda1               8:1    0    1M  0 part
├─sda2               8:2    0  256M  0 part
└─sda3               8:3    0  3.7T  0 part
  ├─pve-swap       253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta 253:2    0  112M  0 lvm
  │ └─pve-data     253:4    0  3.5T  0 lvm
  └─pve-data_tdata 253:3    0  3.5T  0 lvm
    └─pve-data     253:4    0  3.5T  0 lvm
sr0                 11:0    1 1024M  0 rom
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/pve/data /var/lib/vz ext4 defaults 0 2

If I want to boot into version 6 I have to comment out the last line (/dev/pve/data ...):
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
#/dev/pve/data /var/lib/vz ext4 defaults 0 2

Now I can boot correctly... after login I can uncomment the last line again and run mount -a, and /dev/pve/data is mounted correctly.
Where is the problem?
Why can't I mount this LVM volume during boot, but I can do it after boot?
I'm a little confused.
 
From the lsblk output, '/dev/pve/data' is an LVM thin pool and not an ext4 filesystem.
A thin pool cannot be mounted!
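You can tell the volume type from the first character of the Attr column that `lvs` prints (attribute letters per lvm(8)). A minimal sketch - the `lv_kind` helper is my own naming, not an LVM command:

```shell
#!/bin/sh
# Classify an LV by the first character of its lvs "Attr" field.
# Attribute letters per lvm(8); the lv_kind helper itself is hypothetical.
lv_kind() {
  case "$1" in
    t*) echo "thin pool"      ;;  # e.g. twi-aotz-- : not directly mountable
    T*) echo "thin pool data" ;;  # e.g. Twi-ao----
    e*) echo "pool metadata"  ;;  # e.g. ewi-ao----
    V*) echo "thin volume"    ;;
    -*) echo "regular LV"     ;;  # e.g. -wi-ao---- : root, swap
    *)  echo "other"          ;;
  esac
}

lv_kind twi-aotz--   # the "data" LV in this thread -> thin pool
lv_kind -wi-ao----   # "root" and "swap"            -> regular LV
```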

Could you please boot the old system (where it is mounted) and look in the journal for any errors regarding the mount?
What's the output of:
`pvs -a`
`vgs -a`
`lvs -a`
(compare between old and new)

I hope this helps!
 
This is after booting with the line commented out.
Code:
root@pve1:~# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      7.8G     0  7.8G   0% /dev
tmpfs                     1.6G   49M  1.5G   4% /run
/dev/mapper/pve-root       94G   48G   42G  54% /
tmpfs                     7.8G   60M  7.7G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse                  30M   40K   30M   1% /etc/pve
192.168.3.77:/mnt/ssd      94G   38G   52G  43% /mnt/pve/DRBD_SSD
192.168.2.2:/mnt/Backups  7.1T  5.2T  2.0T  73% /mnt/pve/freenas3
tmpfs                     1.6G     0  1.6G   0% /run/user/0

Uncomment the last line in fstab and then...
Code:
root@pve1:/etc# mount -a
root@pve1:/etc# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      7.8G     0  7.8G   0% /dev
tmpfs                     1.6G   49M  1.5G   4% /run
/dev/mapper/pve-root       94G   48G   42G  54% /
tmpfs                     7.8G   60M  7.7G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse                  30M   40K   30M   1% /etc/pve
192.168.3.77:/mnt/ssd      94G   38G   52G  43% /mnt/pve/DRBD_SSD
192.168.2.2:/mnt/Backups  7.1T  5.2T  2.0T  73% /mnt/pve/freenas3
tmpfs                     1.6G     0  1.6G   0% /run/user/0
/dev/mapper/pve-data      3.4T   53G  3.2T   2% /var/lib/vz

My theory: during boot, PVE 6 wants to own /var/lib/vz on /dev/mapper/pve-root, but fstab wants to mount pve-data there, and access is denied.
After a successful boot, /var/lib/vz is "unlocked" and I can mount it via fstab (mount in a shell).

I'm going to check pvs, vgs and lvs on PVE 5, on PVE 6 with the fstab line commented out, and again after the manual mount.
 
please do - the lsblk output really looks like /dev/pve/data (=/dev/mapper/pve-data) is a thinpool, which is not mountable!

Thanks!
 
OK, but why can I mount this thin pool from the running machine?
This is boot to 5
Code:
root@pve1:~# pvs -a
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda2           ---      0       0
  /dev/sda3  pve lvm2 a--  <3.64t <37.03g
root@pve1:~# vgs -a
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   3   0 wz--n- <3.64t <37.03g
root@pve1:~# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz--   3.50t             0.00   10.42
  [data_tdata]    pve Twi-ao----   3.50t
  [data_tmeta]    pve ewi-ao---- 112.00m
  [lvol0_pmspare] pve ewi------- 112.00m
  root            pve -wi-ao----  96.00g
  swap            pve -wi-ao----   8.00g
root@pve1:~# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      7.8G     0  7.8G   0% /dev
tmpfs                     1.6G  9.0M  1.6G   1% /run
/dev/mapper/pve-root       94G   48G   42G  54% /
tmpfs                     7.8G   45M  7.7G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/pve-data      3.4T   53G  3.2T   2% /var/lib/vz
/dev/fuse                  30M   40K   30M   1% /etc/pve
tmpfs                     1.6G     0  1.6G   0% /run/user/0
192.168.2.2:/mnt/Backups  7.1T  5.2T  2.0T  73% /mnt/pve/freenas3


You can see /dev/mapper/pve-data 3.4T 53G 3.2T 2% /var/lib/vz.


Now boot into 6 with the last line in fstab commented out, because we don't want emergency mode.

Code:
root@pve1:~# pvs -a
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda2           ---      0       0
  /dev/sda3  pve lvm2 a--  <3.64t <37.03g
root@pve1:~# vgs -a
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   3   0 wz--n- <3.64t <37.03g
root@pve1:~# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-a-tz--   3.50t             0.00   10.42
  [data_tdata]    pve Twi-ao----   3.50t
  [data_tmeta]    pve ewi-ao---- 112.00m
  [lvol0_pmspare] pve ewi------- 112.00m
  root            pve -wi-ao----  96.00g
  swap            pve -wi-ao----   8.00g
root@pve1:~# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      7.8G     0  7.8G   0% /dev
tmpfs                     1.6G  9.0M  1.6G   1% /run
/dev/mapper/pve-root       94G   48G   42G  54% /
tmpfs                     7.8G   45M  7.7G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse                  30M   40K   30M   1% /etc/pve
192.168.2.2:/mnt/Backups  7.1T  5.2T  2.0T  73% /mnt/pve/freenas3
tmpfs                     1.6G     0  1.6G   0% /run/user/0

Now /var/lib/vz is just a directory on /.

OK, let's mount /dev/pve/data... even though you said there is no filesystem there, only a thin pool.
Code:
root@pve1:/etc# mount -t ext4 /dev/pve/data /var/lib/vz
root@pve1:/etc# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      7.8G     0  7.8G   0% /dev
tmpfs                     1.6G  9.0M  1.6G   1% /run
/dev/mapper/pve-root       94G   48G   42G  54% /
tmpfs                     7.8G   45M  7.7G   1% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse                  30M   40K   30M   1% /etc/pve
192.168.2.2:/mnt/Backups  7.1T  5.2T  2.0T  73% /mnt/pve/freenas3
tmpfs                     1.6G     0  1.6G   0% /run/user/0
/dev/mapper/pve-data      3.4T   53G  3.2T   2% /var/lib/vz


What now, master?
 
Tried to reproduce it here locally and could...
Yes, you can create a filesystem on the block device of a thin pool. However, at least to me, that's confusing and potentially dangerous: if you create a thin volume on the pool, I would expect the data on that filesystem to get corrupted (you currently don't seem to have any thin volume on /dev/mapper/pve-data).
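Before trusting a filesystem created directly on the pool device, it may be worth checking that no thin volume lives on it. A small sketch (the `thin_volumes_on_pool` helper is mine) that counts volumes whose pool column matches, fed by output shaped like `lvs --noheadings -o lv_name,pool_lv`:

```shell
#!/bin/sh
# Count thin volumes sitting on a given pool, parsing lines of the form
#   <lv_name> <pool_lv>
# as printed by: lvs --noheadings -o lv_name,pool_lv
# The helper name is hypothetical; the parsing is plain awk.
thin_volumes_on_pool() {
  awk -v p="$1" '$2 == p { n++ } END { print n + 0 }'
}

# Output shaped like this thread's volumes: no thin volume on "data".
printf '  data\n  root\n  swap\n' | thin_volumes_on_pool data          # prints 0
# A guest disk placed on the pool would be counted:
printf '  data\n  vm-100-disk-0 data\n' | thin_volumes_on_pool data    # prints 1
```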

Maybe the boot-logs of PVE 6.x have a hint as to why the behavior changed.

In any case, I would suggest migrating the data to a regular LV and getting rid of the thin pool (just to rule out that someone misuses it by creating a thin volume on it).
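The migration could look roughly like this. This is a dry-run sketch, not a tested procedure: the volume names come from this thread, /mnt/backup is a placeholder, and every command is only echoed until you replace `run` with real execution. Back up /var/lib/vz first - lvremove destroys the pool.

```shell
#!/bin/sh
# Dry-run sketch of replacing the thin pool pve/data with a regular LV.
# Every command is echoed, not executed; swap run() for "$@" once the
# backup is verified. /mnt/backup is a placeholder path.
run() { echo "+ $*"; }

migrate_thin_to_thick() {
  run rsync -a /var/lib/vz/ /mnt/backup/vz/    # save the data first
  run umount /var/lib/vz
  run lvremove -y pve/data                     # destroys the thin pool!
  run lvcreate -n data -l 100%FREE pve         # regular (thick) LV
  run mkfs.ext4 /dev/pve/data
  run mount /dev/pve/data /var/lib/vz
  run rsync -a /mnt/backup/vz/ /var/lib/vz/    # restore
}

migrate_thin_to_thick
```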

I hope that helps!
 
I think this will be the best way; I only have problems with the thin pool, and a thick LV is the better option. I'll try to change thin to thick.
Edit: I've created a regular LV and the node is working correctly now.
 
