Missing VM disk images after PVE reinstall

spockz

New Member
Jul 18, 2022
Hi all,

I wanted to back up my PVE install (v6.x) with `dd` and somehow managed to nuke it instead. Now I'm trying to recreate the install (using PVE 7.2) and am failing at adding the iSCSI storage and finding the existing VM disks.

I know that in the past I followed a guide to add an iSCSI target, and then I did *something* on top of the iSCSI block storage, I think creating an LVM with a base volume. However, when I now add the iSCSI target and tell it to use the LUN directly, it does not see any images at all. When I use `content none` I also cannot add an LVM on top of it, because "base volume" and "base group" do not autocomplete and I have no idea what used to be there.

The iSCSI target/portal is running on FreeNAS and I do have access to the zvol.

How can I retrieve my disks on the iSCSI target?
 
Last edited:
Do NOT pass the iSCSI LUN through directly if it had LVM storage on it.
You are going to need to figure out how your storage was organized, and do it very carefully, or it will follow the fate of your PVE installation.

If adding iSCSI works, next you need to find out if those disks contain LVM volumes. Run:
pvs
vgs
lvs
lsblk

Do you see LVM information there? If you do, then you can create an LVM storage entry in PVE that points to that volume group.
https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)
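For reference, the two entries in /etc/pve/storage.cfg usually end up looking roughly like this (the IDs, portal, target, VG name and LUN id below are placeholders, not your actual values):

Code:
iscsi: my-iscsi
    portal <ip-of-your-iscsi-server>
    target <iqn-of-your-target>
    content none

lvm: my-iscsi-lvm
    vgname <vg-name-reported-by-vgs>
    base my-iscsi:<lun-volid-reported-by-pvesm-list>
    content images
    shared 1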

Rather than trying to list every possibility, why don't you start with the above and come back with more information if you need further advice.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Some additional info: I recall having added two entries to /etc/pve/storage.cfg, one for iSCSI and then another, which I presume was an LVM mount. In the web UI I saw one "empty" storage tied to iSCSI and then another containing the disk images.

Before creating this post I had already tried adding the iSCSI targets. Does that mean my data is nuked as well now? I presumed adding a target would be non-destructive/read-only.

Output of the commands:

Bash:
root@pve:~# pvs
  PV         VG     Fmt  Attr PSize    PFree
  /dev/sda   group1 lvm2 a--  <100.00g <19.00g
      # this one I don't recognise; the iSCSI target I had was ±110GiB, with an 80GiB VM disk on it
  /dev/sde3  pve    lvm2 a--  <465.26g <16.00g   # this is the new install disk (SSD)

Bash:
root@pve:~# vgs
  VG     #PV #LV #SN Attr   VSize    VFree
  group1   1   3   0 wz--n- <100.00g <19.00g
  pve      1   3   0 wz--n- <465.26g <16.00g
# Same classification

Bash:
root@pve:~# lvs
  LV            VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-101-disk-0 group1 -wi-a-----   80.00g # This is hopeful, this is the VM id of my old vm and the right size
  vm-102-disk-0 group1 -wi-a-----  512.00m # I don't recognise these.
  vm-102-disk-1 group1 -wi-a-----  512.00m
  data          pve    twi-aotz-- <338.36g             0.00   0.50
  root          pve    -wi-ao----   96.00g
  swap          pve    -wi-ao----    8.00g

Bash:
root@pve:~# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   100G  0 disk
|-group1-vm--101--disk--0 253:6    0    80G  0 lvm
|-group1-vm--102--disk--0 253:7    0   512M  0 lvm
`-group1-vm--102--disk--1 253:8    0   512M  0 lvm
sde                         8:64   0 465.8G  0 disk
|-sde1                      8:65   0  1007K  0 part
|-sde2                      8:66   0   512M  0 part /boot/efi
`-sde3                      8:67   0 465.3G  0 part
  |-pve-swap              253:0    0     8G  0 lvm  [SWAP]
  |-pve-root              253:1    0    96G  0 lvm  /
  |-pve-data_tmeta        253:2    0   3.5G  0 lvm
  | `-pve-data-tpool      253:4    0 338.4G  0 lvm
  |   `-pve-data          253:5    0 338.4G  1 lvm
  `-pve-data_tdata        253:3    0 338.4G  0 lvm
    `-pve-data-tpool      253:4    0 338.4G  0 lvm
      `-pve-data          253:5    0 338.4G  1 lvm
sdf                         8:80   1  14.7G  0 disk
|-sdf1                      8:81   1   512K  0 part
`-sdf2                      8:82   1  14.7G  0 part

This looks hopeful. What is the next step?
 
Run 'lvdisplay'; it will give you details, including the LV Path. You can then mount that path somewhere on your local filesystem (e.g. under /mnt/) with the correct filesystem type.
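A rough read-only sketch of that (the VG/LV names and the /mnt/recovery mountpoint are placeholders; check the filesystem type first, e.g. with blkid):

Code:
lvdisplay | grep 'LV Path'
blkid /dev/<vg>/<lv>          # shows the filesystem type, if there is one directly on the LV
mkdir -p /mnt/recovery
mount -o ro -t <fstype> /dev/<vg>/<lv> /mnt/recovery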
 
Thank you, jameswang. So what if I want to take that virtual disk and store it on the LVM-thin of my PVE? Before the reinstall I would have used the "Move disk" option in the VM's web UI to move the image to a new storage.

I think the way to get the storage back into Proxmox is by adding something like this to /etc/pve/storage.cfg:

Code:
lvm: iscsi-lvm
    vgname group1
    base <ID SCSI> #?
    content rootdir,images
    shared 1

But I don't know what to put as the base. Maybe /dev/disk/by-id/scsi-36589cfc0000004b9730ac47837dfdeb9, which points to /dev/sda, which hosts the disk images. The `lvm-pv-uuid-zVWfyJ-v9VY-izXQ-x7i6-rgYN-sVme-GkDI2D` link seems more stable to me. Should I use that?
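Maybe I can first ask PVE what it can enumerate itself, something like this (if I understand `pvesm scan` correctly):

Code:
pvesm scan iscsi 192.168.1.240     # discover the iSCSI targets the portal announces
pvesm scan lvm                     # list the volume groups the host can see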


I also found https://serverfault.com/questions/1...-top-of-iscsi-how-to-find-base-value-for-pves which suggests using `pvesm list <storage id>`. When I run that on the storages I have defined, both the one with content none and the one with content images (the latter created through the web UI), I get back an empty list.

Code:
root@pve:~# pvesm list dockerhost-flat
Volid Format  Type      Size VMID


When running lvdisplay it gives me:

Code:
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                8uLch7-ZOMK-b7ZQ-Gi7S-FOfY-eiXf-8vGi90
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-07-18 22:01:05 +0200
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                YIrmwk-fZpJ-aKkU-fpxj-Mt3k-XCAD-eKHVHM
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-07-18 22:01:05 +0200
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                edHQpG-LGnu-lnwh-KZno-rKCd-GQne-f9bW2R
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2022-07-18 22:01:14 +0200
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <338.36 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.50%
  Current LE             86619
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

  --- Logical volume ---
  LV Path                /dev/group1/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                group1
  LV UUID                wML3n6-syBg-g8a3-GJfl-Ye1T-vjc1-fGwKQS
  LV Write Access        read/write
  LV Creation host, time pve, 2021-02-23 21:12:29 +0100
  LV Status              available
  # open                 0
  LV Size                80.00 GiB
  Current LE             20480
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:6

But when I try to mount it I get:

Code:
root@pve:~# mount -r /dev/group1/vm-101-disk-0 /mnt/fedora
mount: /mnt/fedora: wrong fs type, bad option, bad superblock on /dev/mapper/group1-vm--101--disk--0, missing codepage or helper program, or other error.

Running fsck gives

Code:
root@pve:~# fsck.ext4 /dev/group1/vm-101-disk-0
e2fsck 1.46.2 (28-Feb-2021)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/group1/vm-101-disk-0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Found a dos partition table in /dev/group1/vm-101-disk-0

Edit: this makes sense, because when I installed Fedora on that disk the installer used LVM again, so the LV contains a partition table (with another LVM inside it) rather than a bare ext4 filesystem.

But now how to proceed?
 
Last edited:
run:
iscsiadm --mode discovery --type sendtargets --portal <ip-of-your-iscsi-server>

check for matching entry: ls -l /dev/disk/by-id/

create storage
pvesm add lvm iSCSI-LVM --vgname group1 --base iSCSI:0.0.1.dm-name-iSCSI --shared yes --content images

restart services:
systemctl try-reload-or-restart pvedaemon pveproxy pvestatd

create a VM 100 with no disks: qm create 100
run: qm rescan

There may be more steps needed to make the VM boot.
Don't mount anything on the hypervisor...
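For example, if the LVM storage ends up with ID iSCSI-LVM and you reuse the old VMID 101 (qm rescan matches volumes to VMIDs by name, so reusing the old VMID is easiest), the rough sequence could look like this; names and settings are assumptions, untested:

Code:
qm create 101 --name recovered --memory 2048 --net0 virtio,bridge=vmbr0
qm rescan --vmid 101                          # vm-101-disk-0 should show up as an unused disk
qm set 101 --scsi0 iSCSI-LVM:vm-101-disk-0    # attach the existing volume
qm set 101 --boot order=scsi0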



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Last edited:
@bbgeek17, I already have a storage with ID local-lvm, created by the installer. Should I just use another ID, or is there a specific reason for choosing this ID?


I don't really see what should be matching here?

Code:
root@pve:~# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.240
192.168.1.240:3260,-1 iqn.2021-02.nl.spockz.istgt:pve-docker
root@pve:~# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root  9 Jul 18 23:12 ata-CT500MX500SSD1_2222E637B491 -> ../../sde
lrwxrwxrwx 1 root root 10 Jul 18 23:12 ata-CT500MX500SSD1_2222E637B491-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jul 18 23:12 ata-CT500MX500SSD1_2222E637B491-part2 -> ../../sde2
lrwxrwxrwx 1 root root 10 Jul 18 23:12 ata-CT500MX500SSD1_2222E637B491-part3 -> ../../sde3
lrwxrwxrwx 1 root root 10 Jul 19 14:17 dm-name-group1-vm--101--disk--0 -> ../../dm-6
lrwxrwxrwx 1 root root 10 Jul 18 23:18 dm-name-group1-vm--102--disk--0 -> ../../dm-7
lrwxrwxrwx 1 root root 10 Jul 18 23:18 dm-name-group1-vm--102--disk--1 -> ../../dm-8
lrwxrwxrwx 1 root root 10 Jul 18 23:12 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jul 18 23:12 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jul 18 23:18 dm-uuid-LVM-da4LHgcJbJpjsPdSzci91x0RLSL56Q4j37Pwd3ot23goGRox8z9QfwE5Ig1mbdig -> ../../dm-8
lrwxrwxrwx 1 root root 10 Jul 18 23:18 dm-uuid-LVM-da4LHgcJbJpjsPdSzci91x0RLSL56Q4jr5mvlpRg6wAZ0sHEGkaLcqiBgKIa1LgN -> ../../dm-7
lrwxrwxrwx 1 root root 10 Jul 19 14:17 dm-uuid-LVM-da4LHgcJbJpjsPdSzci91x0RLSL56Q4jwML3n6syBgg8a3GJflYe1Tvjc1fGwKQS -> ../../dm-6
lrwxrwxrwx 1 root root 10 Jul 18 23:12 dm-uuid-LVM-e25CbTUiyGmQR3x6PV2uzzt6MBUkX3e08uLch7ZOMKb7ZQGi7SFOfYeiXf8vGi90 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jul 18 23:12 dm-uuid-LVM-e25CbTUiyGmQR3x6PV2uzzt6MBUkX3e0YIrmwkfZpJaKkUfpxjMt3kXCADeKHVHM -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jul 18 23:12 lvm-pv-uuid-txXCtK-pU6C-LEda-2Zhq-Xmug-47ep-sjc1BN -> ../../sde3
lrwxrwxrwx 1 root root  9 Jul 18 23:18 lvm-pv-uuid-zVWfyJ-v9VY-izXQ-x7i6-rgYN-sVme-GkDI2D -> ../../sda
lrwxrwxrwx 1 root root  9 Jul 18 23:18 scsi-36589cfc0000004b9730ac47837dfdeb9 -> ../../sda
lrwxrwxrwx 1 root root  9 Jul 18 23:12 usb-JetFlash_Transcend_16GB_69BAJBGP2FU7MNUZ-0:0 -> ../../sdf
lrwxrwxrwx 1 root root 10 Jul 18 23:12 usb-JetFlash_Transcend_16GB_69BAJBGP2FU7MNUZ-0:0-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Jul 18 23:12 usb-JetFlash_Transcend_16GB_69BAJBGP2FU7MNUZ-0:0-part2 -> ../../sdf2
lrwxrwxrwx 1 root root  9 Jul 18 23:12 wwn-0x500a0751e637b491 -> ../../sde
lrwxrwxrwx 1 root root 10 Jul 18 23:12 wwn-0x500a0751e637b491-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jul 18 23:12 wwn-0x500a0751e637b491-part2 -> ../../sde2
lrwxrwxrwx 1 root root 10 Jul 18 23:12 wwn-0x500a0751e637b491-part3 -> ../../sde3
lrwxrwxrwx 1 root root  9 Jul 18 23:18 wwn-0x6589cfc0000004b9730ac47837dfdeb9 -> ../../sda

Should it be like this, i.e. with `-iSCSI` appended?

pvesm add lvm iSCSI-LVM --vgname group1 --base iSCSI:0.0.1.dm-name-group1-vm--101--disk--0-iSCSI --shared yes --content images


So apparently not
Code:
root@pve:~# pvesm add lvm iSCSI-LVM --vgname group1 --base dockerhost-flat:0.0.1.dm-name-group1-vm--101--disk--0 --shared yes --content images
create storage failed: pvcreate '/dev/disk/by-id/dm-name-group1-vm--101--disk--0' error:   Cannot use /dev/disk/by-id/dm-name-group1-vm--101--disk--0: device is not in a usable state
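Presumably the base has to be a volid that the iSCSI storage itself reports (i.e. the LUN, not a dm-name of one of the LVs on it), so something along these lines once the LUN actually shows up in pvesm list:

Code:
pvesm list dockerhost-flat
# hoping for a volid of the form:
#   dockerhost-flat:0.0.<lun>.scsi-36589cfc0000004b9730ac47837dfdeb9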
 
Last edited:
spockz said:
[quote of the lvdisplay / mount / fsck post above, snipped]

Find out the filesystem type:
file -s /dev/group1/vm-101-disk-0
or if it's symlinked, then
file -L -s /dev/group1/vm-101-disk-0

Then mount it with 'mount -t <the-filesystem-type> /dev/group1/vm-101-disk-0 /mnt/fedora', where <the-filesystem-type> is whatever the 'file' command reported.

Edit: don't use the mount command for this; it isn't meant for mounting your VM disks directly.
 
Last edited:
@jameswang at this moment I'm primarily interested in getting access to the data on the disk. Preferably by just mounting it in a VM, but alternatively just getting the important data off it by copying stuff is also more than fine by me.

When I run the `file` command it just says `data`. I presume I need to set the scan level higher so that Proxmox scans deeper and finds the PV; then I should be able to mount those.
 
/dev/group1/vm-101-disk-0 is the whole disk. On this disk you very likely have various partitions that were created by your VM's OS. Don't mount the device at the top level, and definitely do not try to fsck it...
What does "lsblk" show? Also run: fdisk -l /dev/group1/vm-101-disk-0

There is probably something like:
/dev/mapper/pve-vm--2000--disk--0-part1 18432 229342 210911 103M Linux filesystem
/dev/mapper/pve-vm--2000--disk--0-part15 2048 18431 16384 8M EFI System

That's what you would mount for recovery of data, i.e. the part1 path.
If you want to start a VM with this disk - don't mount it.
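If the goal is just to copy files out, a rough read-only sequence could look like this (assuming the kpartx package is installed and that the guest really did use LVM inside its partitions; the guest VG name and the /mnt/recovery mountpoint are placeholders):

Code:
kpartx -av /dev/group1/vm-101-disk-0       # create /dev/mapper entries for the partitions inside the LV
lsblk                                      # find the new partition mappings
vgscan && pvscan                           # the guest's own VG should now be visible
vgchange -ay <guest-vg>
mount -o ro /dev/<guest-vg>/root /mnt/recovery
# when finished:
umount /mnt/recovery
vgchange -an <guest-vg>
kpartx -dv /dev/group1/vm-101-disk-0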


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I'm further along now.

After a restart of PVE and the FreeNAS host I started seeing the disks with `pvesm list dockerhost-flat`:


Code:
root@pve:~# pvesm list dockerhost-flat
Volid                                                        Format  Type              Size VMID
dockerhost-flat:0.0.0.scsi-36589cfc0000004b9730ac47837dfdeb9 raw     images    107374198784

I used this to create a new LVM storage:

Code:
pvesm add lvm freenas-diskimage --vgname group1 --base dockerhost-flat:0.0.0.scsi-36589cfc0000004b9730ac47837dfdeb9

And now I *do* see the disks in the web UI. However, any action on the disks complains that they are read-only, and indeed lvscan shows them as inactive.

Code:
root@pve:~# lvscan
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit
  ACTIVE            '/dev/pve/data' [<338.36 GiB] inherit
  ACTIVE            '/dev/pve/vm-100-disk-0' [14.71 GiB] inherit
  inactive          '/dev/group1/vm-101-disk-0' [80.00 GiB] inherit
  inactive          '/dev/group1/vm-102-disk-0' [512.00 MiB] inherit
  inactive          '/dev/group1/vm-102-disk-1' [512.00 MiB] inherit

I thought this was because I had marked the backing zvol in FreeNAS as read-only, but changing that, restarting the iSCSI service, re-establishing the iSCSI connections, and trying vgchange -ay didn't work.

The error remains:

Code:
  device-mapper: reload ioctl on  (253:7) failed: Read-only file system

I get the same error when I try to "move" the image. Is there a way I can just do this read-only and copy the disk over to my local LVM storage now?
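Something like this might work for a one-off copy onto the local thin pool (the storage name local-lvm and the target volume name are assumed, untested):

Code:
vgchange -ay group1                                # make the source LVs active
pvesm alloc local-lvm 101 vm-101-disk-0 80G        # allocate a matching target volume on the thin pool
dd if=/dev/group1/vm-101-disk-0 of=/dev/pve/vm-101-disk-0 bs=4M conv=sparse status=progress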


EDIT: After restarting the whole FreeNAS server, iSCSI did reconnect but the LVs were still inactive. Then vgchange -ay made them active. I am now copying the disk image I wanted. Let's see whether it boots.

EDIT2: Alas, after attaching the disk and trying to boot from it, the VM says that the disk is not bootable and tries to netboot. This is unexpected...

Code:
fdisk -l /dev/group1/vm-101-disk-0
Disk /dev/group1/vm-101-disk-0: 80 GiB, 85899345920 bytes, 20971520 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
 
Last edited:
