PVE 5 to PVE 6 upgrade and problem with LVM-thin data partition

Foodrinks

We updated Proxmox from 5.x to 6.x (latest upgrade, via pve5to6 and apt-get dist-upgrade).
Before the upgrade, we had removed the LVM-thin data storage and created a plain LVM data storage.
After the upgrade to 6.x, booting fails (data is a directory storage for us, mounted via /etc/fstab) and the boot drops into "maintenance" mode.

Code:
Feb 10 20:09:00 comeshost1 systemd[1]: dev-pve-data.device: Job dev-pve-data.device/start timed out.
Feb 10 20:09:00 comeshost1 systemd[1]: Timed out waiting for device /dev/pve/data.
Feb 10 20:09:00 comeshost1 systemd[1]: Dependency failed for File System Check on /dev/pve/data.
Feb 10 20:09:00 comeshost1 systemd[1]: systemd-fsck@dev-pve-data.service: Job systemd-fsck@dev-pve-data.service/start failed with result 'dependency'.
Feb 10 20:09:00 comeshost1 systemd[1]: dev-pve-data.device: Job dev-pve-data.device/start failed with result 'timeout'.
Feb 10 20:09:00 comeshost1 systemd[1]: dm-event.service: Main process exited, code=killed, status=9/KILL
Feb 10 20:09:00 comeshost1 systemd[1]: dm-event.service: Failed with result 'timeout'.
Feb 10 20:09:00 comeshost1 systemd[1]: Stopped Device-mapper event daemon.
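
From the maintenance shell, the failing unit can be inspected like this (a generic sketch, not output from our host):

Code:
# which unit timed out, and why
systemctl status dev-pve-data.device
# full log of the current boot
journalctl -xb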


The workaround is to boot without mounting data via /etc/fstab, then mount data by hand and restart all VMs that did not start, as sketched below.
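
For reference, a rough sketch of that workaround (the nofail option and the VM ID 100 are only examples, not what we actually ran):

Code:
# in /etc/fstab: let the boot continue even if the LV is not available
/dev/pve/data /data ext4 defaults,nofail 0 2

# after boot: activate and mount the volume by hand
lvchange -ay pve/data
mount /data

# restart each VM that did not come up
qm start 100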

In PVE 5.x this problem did not occur...
 

Attachments

  • proxmox-storage-comeshost1.png
  • proxmox-lvm-thin-status.png
I found some strange behavior... In /etc/lvm/backup there are two files.
Code:
root@comeshost1:/etc/lvm/backup# l
total 16
-rw------- 1 root root 1642 Oct 27  2017 storage1
-rw------- 1 root root 3378 Feb 10 18:32 pve
drwx------ 2 root root 4096 Feb 10 18:32 .
drwxr-xr-x 5 root root 4096 Feb 10 21:19 ..
root@comeshost1:/etc/lvm/backup#
The pve file was modified by the "upgrade" process.

Code:
# Generated by LVM2 version 2.03.02(2) (2018-12-18): Mon Feb 10 18:32:59 2020

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgcfgbackup'"

creation_host = "comeshost1"    # Linux comeshost1 4.15.18-24-pve #1 SMP PVE 4.15.18-52 (Thu, 05 Dec 2019 10:14:17 +0100) x86_64
creation_time = 1581355979    # Mon Feb 10 18:32:59 2020

pve {
    id = "Y0F7Ku-oTFN-I8z3-Yfx8-gsWX-fgqZ-5wapy1"
    seqno = 7
    format = "lvm2"            # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192        # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0

    physical_volumes {

        pv0 {
            id = "v63Muk-CChL-7p3O-ghjb-D0sB-1c1i-5W6Nfw"
            device = "/dev/sda3"    # Hint only

            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 4687791247    # 2.18292 Terabytes
            pe_start = 2048
            pe_count = 572239    # 2.18292 Terabytes
        }
    }

    logical_volumes {

        swap {
            id = "tOtdFq-Z9aW-nk29-ryL6-1AYf-Tbhi-bOoQL0"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1501155015    # 2017-07-27 13:30:15 +0200
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 2048    # 8 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 0
                ]
            }
        }

        root {
            id = "CUi8ES-dH3F-AfW0-k1Tp-KFWV-zzwi-2ACtq2"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1501155015    # 2017-07-27 13:30:15 +0200
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 24576    # 96 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 2048
                ]
            }
        }

        data {
            id = "t6feGI-bxaN-zRVg-4kKM-Z0T2-9wSt-0iHZM2"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1501155016    # 2017-07-27 13:30:16 +0200
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 541521    # 2.06574 Terabytes

                type = "thin-pool"
                metadata = "data_tmeta"
                pool = "data_tdata"
                transaction_id = 0
                chunk_size = 4096    # 2 Megabytes
                discards = "passdown"
                zero_new_blocks = 1
            }
        }

        data_tdata {
            id = "L3WHIc-tIxD-SKBc-6Nq5-H2zp-J524-i1jKPj"
            status = ["READ", "WRITE"]
            flags = []
            creation_time = 1501155015    # 2017-07-27 13:30:15 +0200
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 541521    # 2.06574 Terabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 26624
                ]
            }
        }

        data_tmeta {
            id = "NaxcfG-afh6-PxKg-3M2N-4NmD-uPzj-zg6gkH"
            status = ["READ", "WRITE"]
            flags = []
            creation_time = 1501155016    # 2017-07-27 13:30:16 +0200
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 17    # 68 Megabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 568145
                ]
            }
        }

        lvol0_pmspare {
            id = "zy9VSs-AmKV-sUXz-Dt5f-NZcn-4a6X-YOwT6b"
            status = ["READ", "WRITE"]
            flags = []
            creation_time = 1501155016    # 2017-07-27 13:30:16 +0200
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 17    # 68 Megabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 568162
                ]
            }
        }
    }

}
root@comeshost1:/etc/lvm/backup#

It's as if the upgrade process took OLD information stored somewhere and treats my LVM data volume as if it were a thin volume, but it hasn't been one since the first installation of Proxmox 5.x...
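
To see when the thinpool definition first appeared, the archived metadata under /etc/lvm/archive could be compared (a sketch; the archive file names vary per host):

Code:
# list the archived metadata versions of the pve VG
ls -l /etc/lvm/archive/pve_*.vg
# show which archived versions already contain a thin-pool segment
grep -l 'thin-pool' /etc/lvm/archive/pve_*.vg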
 
please post the output of:
* `pvs -a`
* `vgs -a`
* `lvs -a`
* `cat /etc/pve/storage.cfg`
* `cat /etc/fstab`

That should make it clear where the error is.
 
Code:
root@comeshost1:~# pvs -a
  PV         VG  Fmt  Attr PSize PFree 
  /dev/sda2           ---     0       0
  /dev/sda3  pve lvm2 a--  2.18t <15.86g
root@comeshost1:~# vgs -a
  VG  #PV #LV #SN Attr   VSize VFree 
  pve   1   3   0 wz--n- 2.18t <15.86g
root@comeshost1:~# lvs -a
  LV              VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz-- <2.07t             0.00   10.43                           
  [data_tdata]    pve Twi-ao---- <2.07t                                                   
  [data_tmeta]    pve ewi-ao---- 68.00m                                                   
  [lvol0_pmspare] pve ewi------- 68.00m                                                   
  root            pve -wi-ao---- 96.00g                                                   
  swap            pve -wi-ao----  8.00g                                                   
root@comeshost1:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,backup,iso

dir: data
    path /data
    content iso,vztmpl,backup,rootdir,images
    maxfiles 1
    shared 0

nfs: backup
    export /backup
    path /mnt/pve/backup
    server 192.168.0.6
    content backup,iso
    maxfiles 1
    options vers=3

root@comeshost1:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=ECAB-77AF /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/pve/data /data ext4 defaults 0 1
root@comeshost1:~#
 
As you can see in the 'lvs -a' output,
/dev/pve/data is a thinpool in the pve volume group.
While it's possible to format the device and use it as an ext4 filesystem, this brings quite a bit of risk:
* any manipulation of the thinpool with the LVM tools will corrupt the filesystem - this could happen by accident if someone does not expect it to contain a filesystem

If possible I would suggest moving all data away from the filesystem, then removing the thin LV - and afterwards deciding:
* if you want a filesystem - create a regular LV and format it with ext4 (and add it to your fstab)
* if you want a thinpool - create the thinpool and adapt your /etc/pve/storage.cfg to use an lvm-thin storage

then migrate the data back
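
A rough sketch of both options (everything here destroys the current data LV, so only run it after all data has been copied away; the 2T size is just an example):

Code:
# remove the current thinpool (only after backing up /data!)
umount /data
lvremove pve/data

# option 1: regular LV with an ext4 filesystem, mounted via /etc/fstab
lvcreate -n data -L 2T pve
mkfs.ext4 /dev/pve/data

# option 2: recreate the thinpool instead
lvcreate -L 2T --thinpool data pve

For option 2 the storage.cfg entry would then look roughly like this (replacing the current 'dir: data' entry):

Code:
lvmthin: data
    thinpool data
    vgname pve
    content rootdir,images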

I hope this helps!
 
I'm sure that in version 5.x this was a CLEAN LVM partition with an ext4 filesystem (and mounted in fstab). Something went wrong in the upgrade to 6.x.
What you are suggesting is what I would have done normally, but my question is WHY that happened?
 
