Thank you jameswang. So what if I want to take that virtual disk and store it in the LVM-thin storage of my PVE? Before the reinstall I would have used the `move` option in the VM's web UI to move the image to a new storage.
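On the CLI the equivalent would presumably have been something like this (assuming the disk was attached as scsi0 and the thin pool shows up as the default `local-lvm` storage):
Code:
# hypothetical CLI equivalent of the web UI's move: disk scsi0 of VM 101 to local-lvm
qm move_disk 101 scsi0 local-lvm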
I think the way to get the storage back in Proxmox is by adding this to /etc/pve/storage.cfg:
Code:
lvm: iscsi-lvm
        vgname group1
        base <SCSI ID> #?
        content rootdir,images
        shared 1
But I don't know what to put as the base. Maybe `/dev/disk/by-id/scsi-36589cfc0000004b9730ac47837dfdeb9`, which points to /dev/sda, the disk that hosts the images. The `lvm-pv-uuid-zVWfyJ-v9VY-izXQ-x7i6-rgYN-sVme-GkDI2D` link seems more stable to me. Should I use that?
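From what I can tell, `base` is supposed to reference a volume on an iSCSI storage entry rather than a raw /dev path, so the full picture would look something like this (the `mynas` storage name, portal and target are placeholders, and the `0.0.0.` LUN prefix is a guess that `pvesm list mynas` should confirm):
Code:
iscsi: mynas
        portal <portal-ip>
        target <target-iqn>
        content none

lvm: iscsi-lvm
        vgname group1
        base mynas:0.0.0.scsi-36589cfc0000004b9730ac47837dfdeb9
        content rootdir,images
        shared 1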
I also found https://serverfault.com/questions/1...-top-of-iscsi-how-to-find-base-value-for-pves, which suggests using `pvesm list <storage id>`. But when I run that on the storages I have defined, both the one with content none and the one with content images (the latter created through the web UI), I get back an empty list.
Code:
root@pve:~# pvesm list dockerhost-flat
Volid Format Type Size VMID
Running `lvdisplay` gives me:
Code:
root@pve:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID 8uLch7-ZOMK-b7ZQ-Gi7S-FOfY-eiXf-8vGi90
LV Write Access read/write
LV Creation host, time proxmox, 2022-07-18 22:01:05 +0200
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID YIrmwk-fZpJ-aKkU-fpxj-Mt3k-XCAD-eKHVHM
LV Write Access read/write
LV Creation host, time proxmox, 2022-07-18 22:01:05 +0200
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Name data
VG Name pve
LV UUID edHQpG-LGnu-lnwh-KZno-rKCd-GQne-f9bW2R
LV Write Access read/write (activated read only)
LV Creation host, time proxmox, 2022-07-18 22:01:14 +0200
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size <338.36 GiB
Allocated pool data 0.00%
Allocated metadata 0.50%
Current LE 86619
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Logical volume ---
LV Path /dev/group1/vm-101-disk-0
LV Name vm-101-disk-0
VG Name group1
LV UUID wML3n6-syBg-g8a3-GJfl-Ye1T-vjc1-fGwKQS
LV Write Access read/write
LV Creation host, time pve, 2021-02-23 21:12:29 +0100
LV Status available
# open 0
LV Size 80.00 GiB
Current LE 20480
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:6
But when I try to mount it I get:
Code:
root@pve:~# mount -r /dev/group1/vm-101-disk-0 /mnt/fedora
mount: /mnt/fedora: wrong fs type, bad option, bad superblock on /dev/mapper/group1-vm--101--disk--0, missing codepage or helper program, or other error.
Running `fsck.ext4` gives:
Code:
root@pve:~# fsck.ext4 /dev/group1/vm-101-disk-0
e2fsck 1.46.2 (28-Feb-2021)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/group1/vm-101-disk-0
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
Found a dos partition table in /dev/group1/vm-101-disk-0
Edit: this makes sense, because when I installed Fedora on that disk the installer used LVM again, so the LV contains a partition table with a nested LVM setup rather than a bare ext4 filesystem.
But how do I proceed now?
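My guess is that I first need to expose the nested partitions and LVM inside the guest disk, roughly like this (untested; the `fedora` VG and `root` LV names are just the Fedora defaults, to be confirmed with `lvs`):
Code:
# map the partitions inside the VM disk (kpartx comes with multipath-tools)
kpartx -av /dev/group1/vm-101-disk-0
# scan for and activate the guest's volume group
vgscan
vgchange -ay fedora   # assumption: Fedora's default VG name
lvs                   # confirm the root LV, e.g. fedora/root
mount -r /dev/fedora/root /mnt/fedora
# afterwards: unmount, deactivate and unmap again
umount /mnt/fedora
vgchange -an fedora
kpartx -dv /dev/group1/vm-101-disk-0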