Missing parts of the LVM when trying a file restore

Jan 19, 2024
We are trying to restore a single file. When going through the file browser, we only see the root partition, but none of the other partitions from the backup.

When doing a full restore, however, the machine is complete.

What can be the reason for this?

We are using pve-kernel 5.15.131-3.
 
Can you post the output of the 'lsblk', 'vgs', 'pvs' and 'lvs' commands while the VM is running normally? (And maybe a screenshot of what the file restore browser view looks like?)
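For reference, that is just these four commands run as root inside the guest while it is up (a minimal sketch; 'lvs -a' is optional but also shows the thin pool internals):

# run inside the guest VM, as root
lsblk        # block device / LVM topology and mountpoints
vgs          # volume groups
pvs          # physical volumes backing the VG
lvs -a       # logical volumes; -a also lists the thin pool's tmeta/tdata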
 
Hi, of course I can.

lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10.1G 0 disk
├─sda1 8:1 0 500M 0 part /boot
├─sda2 8:2 0 7.6G 0 part
│ ├─main-root 253:0 0 3.9G 0 lvm /
│ ├─main-swap 253:1 0 1G 0 lvm [SWAP]
│ ├─main-pool00_tmeta 253:2 0 8M 0 lvm
│ │ └─main-pool00-tpool 253:4 0 4.6G 0 lvm
│ │ ├─main-pool00 253:5 0 4.6G 1 lvm
│ │ ├─main-var 253:6 0 500M 0 lvm /var
│ │ ├─main-var_named 253:7 0 132M 0 lvm /var/named
│ │ ├─main-var_cache_yum 253:8 0 460M 0 lvm /var/cache/yum
│ │ ├─main-var_spool_postfix 253:9 0 128M 0 lvm /var/spool/postfix
│ │ ├─main-var_log 253:10 0 1012M 0 lvm /var/log
│ │ ├─main-tmp 253:11 0 460M 0 lvm /tmp
│ │ └─main-var_tmp 253:12 0 460M 0 lvm /var/tmp
│ └─main-pool00_tdata 253:3 0 4.6G 0 lvm
│ └─main-pool00-tpool 253:4 0 4.6G 0 lvm
│ ├─main-pool00 253:5 0 4.6G 1 lvm
│ ├─main-var 253:6 0 500M 0 lvm /var
│ ├─main-var_named 253:7 0 132M 0 lvm /var/named
│ ├─main-var_cache_yum 253:8 0 460M 0 lvm /var/cache/yum
│ ├─main-var_spool_postfix 253:9 0 128M 0 lvm /var/spool/postfix
│ ├─main-var_log 253:10 0 1012M 0 lvm /var/log
│ ├─main-tmp 253:11 0 460M 0 lvm /tmp
│ └─main-var_tmp 253:12 0 460M 0 lvm /var/tmp
└─sda3 8:3 0 1.9G 0 part
├─main-pool00_tmeta 253:2 0 8M 0 lvm
│ └─main-pool00-tpool 253:4 0 4.6G 0 lvm
│ ├─main-pool00 253:5 0 4.6G 1 lvm
│ ├─main-var 253:6 0 500M 0 lvm /var
│ ├─main-var_named 253:7 0 132M 0 lvm /var/named
│ ├─main-var_cache_yum 253:8 0 460M 0 lvm /var/cache/yum
│ ├─main-var_spool_postfix 253:9 0 128M 0 lvm /var/spool/postfix
│ ├─main-var_log 253:10 0 1012M 0 lvm /var/log
│ ├─main-tmp 253:11 0 460M 0 lvm /tmp
│ └─main-var_tmp 253:12 0 460M 0 lvm /var/tmp
└─main-pool00_tdata 253:3 0 4.6G 0 lvm
└─main-pool00-tpool 253:4 0 4.6G 0 lvm
├─main-pool00 253:5 0 4.6G 1 lvm
├─main-var 253:6 0 500M 0 lvm /var
├─main-var_named 253:7 0 132M 0 lvm /var/named
├─main-var_cache_yum 253:8 0 460M 0 lvm /var/cache/yum
├─main-var_spool_postfix 253:9 0 128M 0 lvm /var/spool/postfix
├─main-var_log 253:10 0 1012M 0 lvm /var/log
├─main-tmp 253:11 0 460M 0 lvm /tmp
└─main-var_tmp 253:12 0 460M 0 lvm /var/tmp

vgs
VG #PV #LV #SN Attr VSize VFree
main 2 10 0 wz--n- 9.50g 0

pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 main lvm2 a-- 7.61g 0
/dev/sda3 main lvm2 a-- 1.89g 0

lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
pool00 main twi-aotz-- <4.58g 57.86 25.44
root main -wi-ao---- 3.91g
swap main -wi-ao---- 1.00g
tmp main Vwi-aotz-- 460.00m pool00 79.58
var main Vwi-aotz-- 500.00m pool00 100.00
var_cache_yum main Vwi-aotz-- 460.00m pool00 96.59
var_log main Vwi-aotz-- 1012.00m pool00 97.79
var_named main Vwi-aotz-- 132.00m pool00 5.87
var_spool_postfix main Vwi-aotz-- 128.00m pool00 11.96
var_tmp main Vwi-aotz-- 460.00m pool00 84.65

In the /var partition, this is the output of 'ls -al':
total 32
drwxr-xr-x. 21 root root 4096 May 16 2018 .
dr-xr-xr-x. 19 root root 4096 Jan 22 00:01 ..
drwxr-xr-x. 2 root root 6 Apr 11 2018 adm
drwxr-xr-x. 7 root root 73 Sep 13 2021 cache
drwxr-xr-x. 2 root root 6 Jun 9 2021 crash
drwxr-xr-x. 4 root root 43 May 25 2022 db
drwxr-xr-x. 3 root root 17 Apr 11 2018 empty
drwxr-xr-x. 2 root root 6 Apr 11 2018 games
drwxr-xr-x. 2 root root 6 Apr 11 2018 gopher
drwxr-xr-x. 3 root root 17 Nov 30 2022 kerberos
drwxr-xr-x. 40 root root 4096 Sep 13 2021 lib
drwxr-xr-x. 2 root root 6 Apr 11 2018 local
lrwxrwxrwx. 1 root root 11 Jul 25 2014 lock -> ../run/lock
drwxr-xr-x. 12 root root 4096 Jan 22 00:06 log
lrwxrwxrwx 1 root root 10 May 16 2018 mail -> spool/mail
drwxrwx--T. 6 root named 4096 Jan 22 09:53 named
drwxr-xr-x. 2 root root 6 Apr 11 2018 nis
drwxr-xr-x. 2 root root 6 Apr 11 2018 opt
drwxr-xr-x. 2 root root 6 Apr 11 2018 preserve
lrwxrwxrwx. 1 root root 6 Jul 25 2014 run -> ../run
drwxr-xr-x. 9 root root 94 Apr 11 2018 spool
drwxrwxrwt. 4 root root 4096 Jan 17 23:40 tmp
-rw-r--r-- 1 root root 163 May 16 2018 .updated
drwxr-xr-x. 3 root root 16 Jul 25 2014 var
drwxr-xr-x. 2 root root 6 Apr 11 2018 yp

As you can see, there are many directories; in the file restore, however, we only see 'lib'.

I have included two screenshots.
 

Attachments

  • screen2.png (57.4 KB)
  • screen1.png (58.6 KB)
Sorry for the late answer. This should work normally. What filesystems did you put on the LVs? Can you also post the content of the file
/var/log/proxmox-backup/file-restore/qemu.log after you started the file restore?
(The file is on the PVE server, not on PBS.)

Normally you should be able to select each thin volume separately (like the 'root' volume),
but somehow it's not being detected as a filesystem?
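To grab the log, something like this on the PVE node you use for the web UI should be enough (just a sketch; the path is the one mentioned above):

# on the PVE node, right after opening the file restore browser for this backup
cat /var/log/proxmox-backup/file-restore/qemu.log
# or watch it live while clicking through the restore browser
tail -f /var/log/proxmox-backup/file-restore/qemu.log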
 
The filesystem is XFS. When I click on file restore and the browser comes up, I don't see any changes to /var/log/proxmox-backup/file-restore/qemu.log.
 
The filesystem is XFS. When I click on file restore and the browser comes up, I don't see any changes to /var/log/proxmox-backup/file-restore/qemu.log.
Can you post the contents of the log?
You have to look on the PVE server where you connect to the web UI, not the one where the VM is running.
 
[2024-02-07T12:13:05+01:00] PBS file restore VM log
[init-shim] beginning user space setup
[init-shim] debug: agetty start failed: /sbin/agetty not found, probably not running debug mode and safe to ignore
[init-shim] reached daemon start after 0.50s
[2024-02-07T11:13:06.493Z INFO proxmox_restore_daemon] setup basic system environment...
[2024-02-07T11:13:06.494Z INFO proxmox_restore_daemon] scanning all disks...
[2024-02-07T11:13:06.495Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] Supported FS: reiserfs, ext3, ext4, ext2, vfat, msdos, iso9660, hfsplus, hfs, sysv, v7, ntfs, ufs, jfs, xfs, befs, f2fs, btrfs
EXT4-fs (vda): VFS: Can't find ext4 filesystem
EXT4-fs (vda): VFS: Can't find ext4 filesystem
EXT2-fs (vda): error: can't find an ext2 filesystem on dev vda.
FAT-fs (vda): invalid media value (0x00)
FAT-fs (vda): invalid media value (0x00)
VFS: could not find a valid V7 on vda.
ntfs: (device vda): read_ntfs_boot_sector(): Primary boot sector is invalid.
ntfs: (device vda): read_ntfs_boot_sector(): Mount option errors=recover not used. Aborting without trying to recover.
ntfs: (device vda): ntfs_fill_super(): Not an NTFS volume.
ufs: ufs_fill_super(): bad magic number
befs: (vda): invalid magic header
F2FS-fs (vda): Can't find valid F2FS filesystem in 1th superblock
F2FS-fs (vda): Can't find valid F2FS filesystem in 2th superblock
[2024-02-07T11:13:06.511Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vda' ('drive-scsi0'): found partition '/dev/vda2' (2, 8177844224B)
[2024-02-07T11:13:06.512Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vda' ('drive-scsi0'): found partition '/dev/vda3' (3, 2034237440B)
[2024-02-07T11:13:06.514Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] drive 'vda' ('drive-scsi0'): found partition '/dev/vda1' (1, 524288000B)
[2024-02-07T11:13:06.683Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: found VG 'main' on 'vda2' (drive-scsi0)
[2024-02-07T11:13:06.684Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: found VG 'main' on 'vda3' (drive-scsi0)
[2024-02-07T11:13:06.711Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: attempting to activate thinpool 'pool00_tmeta'
[2024-02-07T11:13:06.831Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: found LV 'root' on 'main' (4198498304B)
[2024-02-07T11:13:06.859Z INFO proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: found LV 'swap' on 'main' (1073741824B)
[2024-02-07T11:13:06.920Z WARN proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: LV 'tmp' on 'main' (482344960B) failed to activate: command "/sbin/lvchange" "-ay" "main/tmp" failed - status code: 5 - File descriptor 13 (socket:[809]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
File descriptor 16 (socket:[815]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
Kernel not configured for semaphores (System V IPC). Not using udev synchronisation code.
Check of pool main/pool00 failed (status:1). Manual repair required!

[2024-02-07T11:13:06.984Z WARN proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: LV 'var' on 'main' (524288000B) failed to activate: command "/sbin/lvchange" "-ay" "main/var" failed - status code: 5 - File descriptor 13 (socket:[809]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
File descriptor 16 (socket:[815]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
Kernel not configured for semaphores (System V IPC). Not using udev synchronisation code.
Check of pool main/pool00 failed (status:1). Manual repair required!

[2024-02-07T11:13:07.052Z WARN proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: LV 'var_cache_yum' on 'main' (482344960B) failed to activate: command "/sbin/lvchange" "-ay" "main/var_cache_yum" failed - status code: 5 - File descriptor 13 (socket:[809]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
File descriptor 16 (socket:[815]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
Kernel not configured for semaphores (System V IPC). Not using udev synchronisation code.
Check of pool main/pool00 failed (status:1). Manual repair required!

[2024-02-07T11:13:07.120Z WARN proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: LV 'var_log' on 'main' (1061158912B) failed to activate: command "/sbin/lvchange" "-ay" "main/var_log" failed - status code: 5 - File descriptor 13 (socket:[809]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
File descriptor 16 (socket:[815]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
Kernel not configured for semaphores (System V IPC). Not using udev synchronisation code.
Check of pool main/pool00 failed (status:1). Manual repair required!

[2024-02-07T11:13:07.188Z WARN proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: LV 'var_named' on 'main' (138412032B) failed to activate: command "/sbin/lvchange" "-ay" "main/var_named" failed - status code: 5 - File descriptor 13 (socket:[809]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
File descriptor 16 (socket:[815]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
Kernel not configured for semaphores (System V IPC). Not using udev synchronisation code.
Check of pool main/pool00 failed (status:1). Manual repair required!

[2024-02-07T11:13:07.256Z WARN proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: LV 'var_spool_postfix' on 'main' (134217728B) failed to activate: command "/sbin/lvchange" "-ay" "main/var_spool_postfix" failed - status code: 5 - File descriptor 13 (socket:[809]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
File descriptor 16 (socket:[815]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
Kernel not configured for semaphores (System V IPC). Not using udev synchronisation code.
Check of pool main/pool00 failed (status:1). Manual repair required!

[2024-02-07T11:13:07.324Z WARN proxmox_restore_daemon::proxmox_restore_daemon::disk] LVM: LV 'var_tmp' on 'main' (482344960B) failed to activate: command "/sbin/lvchange" "-ay" "main/var_tmp" failed - status code: 5 - File descriptor 13 (socket:[809]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
File descriptor 16 (socket:[815]) leaked on lvchange invocation. Parent PID 53: /proxmox-restore-daemon
Kernel not configured for semaphores (System V IPC). Not using udev synchronisation code.
Check of pool main/pool00 failed (status:1). Manual repair required!

[2024-02-07T11:13:07.355Z INFO proxmox_restore_daemon] disk scan complete.


The filesystem is XFS.
 
LVM: LV 'tmp' on 'main' (482344960B) failed to activate: command "/sbin/lvchange" "-ay" "main/tmp" failed - status code: 5
OK, it seems the LV activation fails here.

Do you have anything special in your LVM setup (e.g. some adaptations in lvm.conf)?
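If you are not sure, something like the following inside the guest should list only the settings that deviate from the compiled-in defaults (a sketch, assuming your lvm2 build ships the lvmconfig tool):

# inside the guest: show lvm.conf settings that differ from the built-in defaults
lvmconfig --typeconfig diff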
 
servername ~ # rpm -qf /etc/lvm/lvm.conf
lvm2-2.02.187-6.el7_9.5.x86_64
servername ~ # rpm -V lvm2-2.02.187-6.el7_9.5.x86_64
.M....... g /etc/lvm/cache/.cache


lvm.conf seems to be original; rpm -V only flags the mode of /etc/lvm/cache/.cache.
 
Hmm, I tried to reproduce this, but a similar setup works here.

Is it only broken on this backup, or on every one? Can you try making a new backup and testing the file restore there?
What happens when you restore that backup (into a new VMID, to not overwrite your existing one)? Does that work?
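For the full-restore test, something along these lines on the PVE node should do (a sketch; the storage names, VMID and snapshot timestamp are placeholders, and the new VMID must be unused):

# list the backups on the PBS storage, then restore one into a new, unused VMID
pvesm list <pbs-storage>
qmrestore <pbs-storage>:backup/vm/<vmid>/<timestamp> <new-vmid> --storage <target-storage>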
 
I don't really know; I looked into the LVM source code, but there are too many error conditions that return error code 5 to make any good guess...
If you back up the VM again, does the file restore work then? If yes, I'd assume it is some kind of live state that isn't persisted to disk when we back up.
(Can you maybe post the corresponding backup task log from the PVE side?)
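If the task log window is already gone, you can pull it up again on the PVE node, e.g. like this (a sketch; the UPID is the one printed for the vzdump job in the task list):

# on the PVE node: find the vzdump task, then print its full log
pvenode task list
pvenode task log <UPID>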
 
