Adding existing LVM on iSCSI

samuelellis

New Member
Nov 1, 2024
I have had an existing Proxmox host on a NUC for a few months and have been happy with it, but my storage was all via iSCSI on a QNAP. As it was a NUC, a single 1G network link handled all my VMs and storage, so as you can guess it could be a bit slow sometimes.

I just managed to get a different box that has a dedicated 10G NIC that can connect to the QNAP for storage.

I have built the new box as a standalone host and it can see the iSCSI share over the 10G connection.

If I shut down the old host, power on the new one, and then try to add the existing LVM storage, I get the following error:

Code:
create storage failed: command '/sbin/pvs --separator : --noheadings --units k --unbuffered --nosuffix --options pv_name,pv_size,vg_name,pv_uuid /dev/disk/by-id/scsi-36e843b6e3d88aa9daa48d460cd82f9df' failed: exit code 5 (500)
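
For reference, the command in that error is just the LVM call that fails behind the scenes; re-running it by hand (together with a rescan) shows the underlying LVM error rather than only exit code 5. Something along these lines:

Code:
# re-run the exact command from the error message
/sbin/pvs --separator : --noheadings --units k --unbuffered --nosuffix --options pv_name,pv_size,vg_name,pv_uuid /dev/disk/by-id/scsi-36e843b6e3d88aa9daa48d460cd82f9df

# then refresh LVM's device scan and list what it sees
pvscan --cache
pvs
vgs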

As you can see from the screenshot, it can see the LUN but won't connect to the existing LVM. Anyone got any thoughts on how I can do this, or is it a case of back up, destroy the existing storage, and create a new iSCSI target?
 

Attachments

  • Screenshot from 2024-11-01 20-25-43.png
So to add a bit more info, this is the storage.cfg file on my old host

Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

iscsi: QNAP
        portal 192.168.1.13
        target iqn.2004-04.com.qnap:ts-1279u:iscsi.vm.e46699
        content none

lvm: QNAP-ISCSI
        vgname Vol1
        base QNAP:0.0.0.scsi-36e843b6e3d88aa9daa48d460cd82f9df
        content images,rootdir
        saferemove 0
        shared 1
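
The base line here points the LVM storage at the iSCSI LUN via its stable /dev/disk/by-id name, so a quick sanity check on the new host is whether that same ID shows up there at all, e.g.:

Code:
ls -l /dev/disk/by-id/ | grep 36e843b6e3d88aa9daa48d460cd82f9df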

And this is the storage.cfg on my new host:

Code:
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

iscsi: QNAP
        portal 10.10.10.1
        target iqn.2004-04.com.qnap:ts-1279u:iscsi.vm.e46699
        content none
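
For comparison, the piece missing from the new host's config is the lvm: section. Carrying it over from the old host (same VG and base device, since the LUN itself is unchanged) would look roughly like this:

Code:
lvm: QNAP-ISCSI
        vgname Vol1
        base QNAP:0.0.0.scsi-36e843b6e3d88aa9daa48d460cd82f9df
        content images,rootdir
        saferemove 0
        shared 1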

The different IPs are due to the old host having the 1G network, while the new host has a 10G NIC dedicated just to iSCSI.


Don't worry, I didn't do this with both of them on at the same time.
 
Hi, thanks for the welcome. I'm 99% sure this is something I've missed in the setup, but I can't quite place what it is. Unfortunately my Linux skills are a bit weak, but here's a chance to learn.

So on my old host I ran pvs -vv and it came back with the following:

Code:
root@proxmox:~# pvs -vv
  global/use_lvmpolld not found in config: defaulting to 1
  devices/sysfs_scan not found in config: defaulting to 1
  devices/scan_lvs not found in config: defaulting to 0
  devices/multipath_component_detection not found in config: defaulting to 1
  devices/md_component_detection not found in config: defaulting to 1
  devices/fw_raid_component_detection not found in config: defaulting to 0
  devices/ignore_suspended_devices not found in config: defaulting to 0
  devices/ignore_lvm_mirrors not found in config: defaulting to 1
  devices/scan_lvs not found in config: defaulting to 0
  devices/allow_mixed_block_sizes not found in config: defaulting to 0
  devices/hints not found in config: defaulting to "all"
  activation/activation_mode not found in config: defaulting to "degraded"
  metadata/record_lvs_history not found in config: defaulting to 0
  devices/search_for_devnames not found in config: defaulting to "auto"
  activation/monitoring not found in config: defaulting to 1
  global/locking_type not found in config: defaulting to 1
  global/wait_for_locks not found in config: defaulting to 1
  global/prioritise_write_locks not found in config: defaulting to 1
  global/locking_dir not found in config: defaulting to "/run/lock/lvm"
  devices/md_component_detection not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to "auto"
  devices/multipath_wwids_file not found in config: defaulting to "/etc/multipath/wwids"
  global/use_lvmlockd not found in config: defaulting to 0
  report/output_format not found in config: defaulting to "basic"
  log/report_command_log not found in config: defaulting to 0
  report/aligned not found in config: defaulting to 1
  report/buffered not found in config: defaulting to 1
  report/headings not found in config: defaulting to 1
  report/separator not found in config: defaulting to " "
  report/prefixes not found in config: defaulting to 0
  report/quoted not found in config: defaulting to 1
  report/columns_as_rows not found in config: defaulting to 0
  report/pvs_sort not found in config: defaulting to "pv_name"
  report/pvs_cols_verbose not found in config: defaulting to "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
  report/compact_output_cols not found in config: defaulting to ""
  Locking /run/lock/lvm/P_global RB
  devices/use_devicesfile not found in config: defaulting to 0
  /dev/loop0: size is 0 sectors
  /dev/sda: size is 468862128 sectors
  /dev/loop1: size is 0 sectors
  /dev/sda1: size is 2014 sectors
  /dev/loop2: size is 0 sectors
  /dev/sda2: size is 2097152 sectors
  /dev/loop3: size is 0 sectors
  /dev/sda3: size is 466762895 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 7814037167 sectors
  /dev/sdb1: size is 262144 sectors
  /dev/sdb2: size is 7813771264 sectors
  /dev/sdc: size is 57482936320 sectors
  Setting devices/global_filter to global_filter = [ "r|/dev/zd.*|", "r|/dev/rbd.*|" ]
  devices/filter not found in config: defaulting to filter = [ "a|.*|" ]
  devices/devicesfile not found in config: defaulting to "system.devices"
  /dev/loop0: using cached size 0 sectors
  /dev/sda: using cached size 468862128 sectors
  /dev/loop1: using cached size 0 sectors
  /dev/sda1: using cached size 2014 sectors
  /dev/loop2: using cached size 0 sectors
  /dev/sda2: using cached size 2097152 sectors
  /dev/loop3: using cached size 0 sectors
  /dev/sda3: using cached size 466762895 sectors
  /dev/loop4: using cached size 0 sectors
  /dev/loop5: using cached size 0 sectors
  /dev/loop6: using cached size 0 sectors
  /dev/loop7: using cached size 0 sectors
  /dev/sdb: using cached size 7814037167 sectors
  /dev/sdb1: using cached size 262144 sectors
  /dev/sdb2: using cached size 7813771264 sectors
  /dev/sdc: using cached size 57482936320 sectors
  /dev/sda3: using cached size 466762895 sectors
  /dev/sda3: using cached size 466762895 sectors
  /dev/sdc: using cached size 57482936320 sectors
  /dev/sdc: using cached size 57482936320 sectors
  /dev/sdc: using cached size 57482936320 sectors
  Locking /run/lock/lvm/V_Vol1 RB
  metadata/lvs_history_retention_time not found in config: defaulting to 0
  /dev/sdc: using cached size 57482936320 sectors
  Processing PV /dev/sdc in VG Vol1.
  /dev/sdc: using cached size 57482936320 sectors
  Unlocking /run/lock/lvm/V_Vol1
  Locking /run/lock/lvm/V_pve RB
  Stack pve/data:0[0] on LV pve/data_tdata:0.
  Adding pve/data:0 as an user of pve/data_tdata.
  Adding pve/data:0 as an user of pve/data_tmeta.
  Adding pve/vm-106-disk-0:0 as an user of pve/data.
  metadata/lvs_history_retention_time not found in config: defaulting to 0
  /dev/sda3: size is 466762895 sectors
  Processing PV /dev/sda3 in VG pve.
  /dev/sda3: using cached size 466762895 sectors
  Unlocking /run/lock/lvm/V_pve
  Reading orphan VG #orphans_lvm2.
  report/compact_output not found in config: defaulting to 0
  PV         VG   Fmt  Attr PSize    PFree   DevSize  PV UUID                               
  /dev/sda3  pve  lvm2 a--  <222.57g  16.00g <222.57g zC3XWN-K1bW-8uVj-HV29-6kbf-yFe0-WSN5bW
  /dev/sdc   Vol1 lvm2 a--   <26.77t <15.64t  <26.77t 6tEYr9-iAXl-ulks-Icbx-gAFJ-d8sY-v24XwE
  Unlocking /run/lock/lvm/P_global
  global/notify_dbus not found in config: defaulting to 1

If I run it on the new host I get the following:

Code:
root@lenovo1:~# pvs -v
  PV             VG  Fmt  Attr PSize   PFree  DevSize PV UUID                               
  /dev/nvme0n1p3 pve lvm2 a--  237.47g 16.00g 237.47g iaSCAA-CcPV-mc08-yxrO-ESoN-jecB-QrftUL
root@lenovo1:~# pvs -vv
  global/use_lvmpolld not found in config: defaulting to 1
  devices/sysfs_scan not found in config: defaulting to 1
  devices/scan_lvs not found in config: defaulting to 0
  devices/multipath_component_detection not found in config: defaulting to 1
  devices/md_component_detection not found in config: defaulting to 1
  devices/fw_raid_component_detection not found in config: defaulting to 0
  devices/ignore_suspended_devices not found in config: defaulting to 0
  devices/ignore_lvm_mirrors not found in config: defaulting to 1
  devices/scan_lvs not found in config: defaulting to 0
  devices/allow_mixed_block_sizes not found in config: defaulting to 0
  devices/hints not found in config: defaulting to "all"
  activation/activation_mode not found in config: defaulting to "degraded"
  metadata/record_lvs_history not found in config: defaulting to 0
  devices/search_for_devnames not found in config: defaulting to "auto"
  activation/monitoring not found in config: defaulting to 1
  global/locking_type not found in config: defaulting to 1
  global/wait_for_locks not found in config: defaulting to 1
  global/prioritise_write_locks not found in config: defaulting to 1
  global/locking_dir not found in config: defaulting to "/run/lock/lvm"
  devices/md_component_detection not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to "auto"
  devices/multipath_wwids_file not found in config: defaulting to "/etc/multipath/wwids"
  global/use_lvmlockd not found in config: defaulting to 0
  report/output_format not found in config: defaulting to "basic"
  log/report_command_log not found in config: defaulting to 0
  report/aligned not found in config: defaulting to 1
  report/buffered not found in config: defaulting to 1
  report/headings not found in config: defaulting to 1
  report/separator not found in config: defaulting to " "
  report/prefixes not found in config: defaulting to 0
  report/quoted not found in config: defaulting to 1
  report/columns_as_rows not found in config: defaulting to 0
  report/pvs_sort not found in config: defaulting to "pv_name"
  report/pvs_cols_verbose not found in config: defaulting to "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
  report/compact_output_cols not found in config: defaulting to ""
  Locking /run/lock/lvm/P_global RB
  devices/use_devicesfile not found in config: defaulting to 0
  /dev/nvme0n1: size is 500118192 sectors
  /dev/loop0: size is 0 sectors
  /dev/sda: size is 57482936320 sectors
  /dev/nvme0n1p1: size is 2014 sectors
  /dev/loop1: size is 0 sectors
  /dev/nvme0n1p2: size is 2097152 sectors
  /dev/loop2: size is 0 sectors
  /dev/nvme0n1p3: size is 498018959 sectors
  /dev/loop3: size is 0 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 57482936320 sectors
  /dev/nvme0n1: using cached size 500118192 sectors
  /dev/sda: using cached size 57482936320 sectors
  /dev/sda: using cached size 57482936320 sectors
  /dev/nvme0n1p2: using cached size 2097152 sectors
  /dev/nvme0n1p2: using cached size 2097152 sectors
  /dev/nvme0n1p2: No lvm label detected
  /dev/nvme0n1p3: using cached size 498018959 sectors
  /dev/nvme0n1p3: using cached size 498018959 sectors
  /dev/sdb: using cached size 57482936320 sectors
  /dev/sdb: using cached size 57482936320 sectors
  /dev/sda: using cached size 57482936320 sectors
  Locking /run/lock/lvm/V_pve RB
  Stack pve/data:0[0] on LV pve/data_tdata:0.
  Adding pve/data:0 as an user of pve/data_tdata.
  Adding pve/data:0 as an user of pve/data_tmeta.
  Adding pve/vm-100-disk-0:0 as an user of pve/data.
  Adding pve/vm-101-disk-0:0 as an user of pve/data.
  Adding pve/vm-101-disk-1:0 as an user of pve/data.
  Adding pve/vm-103-disk-0:0 as an user of pve/data.
  metadata/lvs_history_retention_time not found in config: defaulting to 0
  /dev/nvme0n1p3: using cached size 498018959 sectors
  Processing PV /dev/nvme0n1p3 in VG pve.
  /dev/nvme0n1p3: using cached size 498018959 sectors
  Unlocking /run/lock/lvm/V_pve
  Reading orphan VG #orphans_lvm2.
  report/compact_output not found in config: defaulting to 0
  PV             VG  Fmt  Attr PSize   PFree  DevSize PV UUID                               
  /dev/nvme0n1p3 pve lvm2 a--  237.47g 16.00g 237.47g iaSCAA-CcPV-mc08-yxrO-ESoN-jecB-QrftUL
  Unlocking /run/lock/lvm/P_global
  global/notify_dbus not found in config: defaulting to 1
root@lenovo1:~#
 
Do you have multiple paths to storage, even if inadvertently? https://forum.proxmox.com/threads/cannot-create-lvm-on-iscsi-storage.82145/

What happens when you run:
/sbin/pvs --separator : --noheadings --units k --unbuffered --nosuffix --options pv_name,pv_size,vg_name,pv_uuid /dev/disk/by-id/scsi-36e843b6e3d88aa9daa48d460cd82f9df

What happens if you add the LVM portion of the config manually to /etc/pve/storage.cfg?
What is the output of:
lsblk
lsscsi
pvs
vgs
lvs
ls -al /dev/disk/by-id/
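
Also worth checking is how many iSCSI sessions are actually logged in and which block devices they map to; for example:

Code:
iscsiadm -m session
iscsiadm -m session -P 3 | grep -E "Target:|Attached scsi disk"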


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I shouldn't have multiple paths; my original host has just one NIC that handles everything.
My new host has two NICs, but only one is set up for iSCSI.

So I copied the LVM portions out of storage.cfg on my old host into the storage.cfg on my new host.

In the GUI it can see the LVM, but it has a question mark next to it and shows nothing if I browse to it. If, however, I go to the command line, it appears visible:
Code:
root@lenovo1:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0  26.8T  0 disk
├─Vol1-vm--103--disk--0      252:10   0   100G  0 lvm
├─Vol1-vm--102--disk--0      252:11   0    30G  0 lvm
├─Vol1-vm--107--disk--0      252:12   0     8G  0 lvm
├─Vol1-vm--108--disk--0      252:13   0    25G  0 lvm
├─Vol1-vm--104--disk--0      252:14   0   120G  0 lvm
├─Vol1-vm--104--disk--1      252:15   0  10.7T  0 lvm
├─Vol1-vm--101--disk--0      252:16   0     4M  0 lvm
├─Vol1-vm--101--disk--1      252:17   0   100G  0 lvm
├─Vol1-vm--101--disk--2      252:18   0     4M  0 lvm
└─Vol1-vm--105--disk--0      252:19   0    16G  0 lvm
sdb                            8:16   0  26.8T  0 disk
nvme0n1                      259:0    0 238.5G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 237.5G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  69.4G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   1.4G  0 lvm
  │ └─pve-data-tpool         252:4    0 141.2G  0 lvm
  │   ├─pve-data             252:5    0 141.2G  1 lvm
  │   ├─pve-vm--100--disk--0 252:6    0     8G  0 lvm
  │   ├─pve-vm--101--disk--0 252:7    0     4M  0 lvm
  │   ├─pve-vm--101--disk--1 252:8    0    32G  0 lvm
  │   └─pve-vm--103--disk--0 252:9    0    16G  0 lvm
  └─pve-data_tdata           252:3    0 141.2G  0 lvm
    └─pve-data-tpool         252:4    0 141.2G  0 lvm
      ├─pve-data             252:5    0 141.2G  1 lvm
      ├─pve-vm--100--disk--0 252:6    0     8G  0 lvm
      ├─pve-vm--101--disk--0 252:7    0     4M  0 lvm
      ├─pve-vm--101--disk--1 252:8    0    32G  0 lvm
      └─pve-vm--103--disk--0 252:9    0    16G  0 lvm

PVS
Code:
root@lenovo1:~# pvs
  PV             VG  Fmt  Attr PSize   PFree
  /dev/nvme0n1p3 pve lvm2 a--  237.47g 16.00g

VGS
 
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS

sda 8:0 0 26.8T 0 disk
sdb 8:16 0 26.8T 0 disk
This ^ looks like two paths to the same disk, most likely via the old 1G and the new 10G network.
The different IP's are due to the old host having the 1G network but the new host having a 10G NIC dedicated just to iSCSI
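
A quick way to confirm that sda and sdb are indeed the same LUN seen twice is to compare their SCSI IDs, for example:

Code:
/lib/udev/scsi_id -g -u -d /dev/sda
/lib/udev/scsi_id -g -u -d /dev/sdb
# identical output (here it should be 36e843b6e3d88aa9daa48d460cd82f9df) means one LUN, two paths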

I suggest you configure the multipath package; you can use the new and updated wiki page:

https://pve.proxmox.com/wiki/Multipath
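
A rough outline of what that involves, assuming the wiki's approach (the wiki page has the authoritative steps and the /etc/multipath.conf details):

Code:
apt install multipath-tools

# find the WWID of the LUN (should match the by-id name without the "scsi-" prefix)
/lib/udev/scsi_id -g -u -d /dev/sda

# add it to the multipath wwids file, then check that a single multipath device appears
multipath -a 36e843b6e3d88aa9daa48d460cd82f9df
systemctl restart multipathd
multipath -ll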

good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
