[SOLVED] local-lvm visible but not accessible

xword9000

New Member
Jun 1, 2022
Hi,

I'll preface this with 'this is very likely self-inflicted'.

I've been using Proxmox 6 on a couple of nodes for the last two years or so with great success, and I've (mostly) enjoyed the journey. Now I'm a bit stuck and hoping someone here can help me figure out a fix.

My problem is that, even though I can see local-lvm on each machine in the console, I can't add a new VM or migrate an existing one. Space is not an issue, and existing VMs are running on local-lvm, apparently without a problem (yet).

I've been looking at upgrading to v7, and as part of exploring backup options I installed a Proxmox Backup Server (PBS), hoping that might give me a clear route to reinstalling VMs after the upgrade. This problem seems to coincide with the PBS going in, but I'm not certain of that, since the issue only came to light when I went to create a new VM.

My questions are:
  1. Is this solvable?
  2. Am I better off cutting my losses and running a clean install?
Assuming option 2, what is the best way to upgrade to ensure my existing VMs can be reinstalled?

Below are my storage config and some of the results from the tests my previous googling suggested might help.

Bash:
root@proxmox1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content snippets,rootdir,images,vztmpl
        prune-backups keep-last=1
        shared 1

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes proxmox1,proxmox0

dir: pm1-backups
        path /media/pm1-backups
        content iso,backup
        nodes proxmox1
        prune-backups keep-all=1
        shared 0

dir: pm0-backups
        path /media/pm0-backups
        content iso,backup
        nodes proxmox0
        prune-backups keep-all=1
        shared 0

pbs: proxmoxBU
        datastore backup_d_1
        server 192.168.2.37
        content backup
        fingerprint 85:d3:ca:43:1d:1c:e9:c8:e9:36:bb:7e:83:00:55:a3:99:89:db:d9:b5:e1:f4:72:bf:f3:9c:1e:e4:14:73:79
        prune-backups keep-all=1
        username backups@pbs

Bash:
root@proxmox1:~# lvs
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-999-disk-0 pve Vri---tz-k  15.00g data
  data            pve twi-aotz-- <10.77t             7.81   3.10
  root            pve -wi-ao----  96.00g
  swap            pve -wi-ao----   8.00g
  vm-101-disk-0   pve Vwi-aotz-- 550.00g data        100.00
  vm-103-disk-0   pve Vwi-aotz--  32.00g data        100.00
  vm-104-disk-0   pve Vwi-aotz-- 100.00g data        100.00
  vm-108-disk-0   pve Vwi-aotz-- 100.00g data        100.00
  vm-109-disk-0   pve Vwi-aotz--  75.00g data        11.72
  vm-109-state-a1 pve Vwi-a-tz-- <16.49g data        0.00
  vm-120-disk-0   pve Vwi-aotz--  32.00g data        100.00
  vm-121-disk-0   pve Vwi-aotz--  32.00g data        100.00

Bash:
root@proxmox1:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2138
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                12
  Open LV               9
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.91 TiB
  PE Size               4.00 MiB
  Total PE              2861055
  Alloc PE / Size       2856863 / <10.90 TiB
  Free  PE / Size       4192 / <16.38 GiB
  VG UUID               XSP8oM-Wrqz-zFvl-NM5R-tEa8-Zypq-w5JY6L

Bash:
root@proxmox1:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1  12   0 wz--n- 10.91t <16.38g
root@proxmox1:~#
 

Attachments

  • Screenshot 2022-06-14 190757.png
  • Screenshot 2022-06-14 190732.png
  • Screenshot 2022-06-14 190646.png
  • Screenshot 2022-06-14 190613.png
  • Screenshot 2022-06-14 190540.png
  • Screenshot 2022-06-14 190441.png

Hi,

is there some kind of error message or something similar, or are you just not seeing local-lvm in the selection box?
 
Hi,

None that I've found so far. I decided yesterday to just bite the bullet and go with a fresh V7 install, which went in perfectly, except I now have exactly the same issue under V7.
 
Can you see any errors in the output of these commands?
Code:
journalctl -b0

less /var/log/syslog
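If the full logs are too long to scan, filtering for errors first may help, for example (just a suggestion, the exact grep pattern is up to you):
Code:
journalctl -b0 -p err

grep -i error /var/log/syslog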
 
Nothing that stands out to me

Bash:
root@pve1:/etc/lvm/archive# grep error /var/log/syslog
Jun 16 11:50:48 pve1 kernel: [    1.264259] HEST: Enabling Firmware First mode for corrected errors.
Jun 16 11:50:48 pve1 kernel: [    5.156620] EXT4-fs (dm-1): re-mounted. Opts: errors=remount-ro. Quota mode: none.
Jun 16 11:50:51 pve1 pvescheduler[1333]: jobs: cfs-lock 'file-jobs_cfg' error: pve cluster filesystem not online.
Jun 16 11:50:51 pve1 pvescheduler[1332]: replication: cfs-lock 'file-replication_cfg' error: pve cluster filesystem not online.
Jun 16 11:52:25 pve1 systemd[1601]: gpgconf: error running '/usr/lib/gnupg/scdaemon': probably not installed
Jun 16 12:02:10 pve1 pvedaemon[2914]: An error occured on the cluster node: cluster not ready - no quorum?
Jun 16 12:39:35 pve1 kernel: [    1.260316] HEST: Enabling Firmware First mode for corrected errors.
Jun 16 12:39:35 pve1 kernel: [    5.120591] EXT4-fs (dm-1): re-mounted. Opts: errors=remount-ro. Quota mode: none.
Jun 16 14:31:34 pve1 pvedaemon[63161]: error before or during data restore, some or all disks were not completely restored. VM 102 state is NOT cleaned up.
Jun 16 14:35:36 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (Connection timed out)
Jun 16 14:35:47 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (Connection timed out)
Jun 16 14:35:56 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (Connection timed out)
Jun 16 14:36:02 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:12 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:22 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:32 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:42 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 15:01:28 pve1 pvedaemon[79990]: error before or during data restore, some or all disks were not completely restored. VM 102 state is NOT cleaned up.
Jun 16 16:28:00 pve1 pvestatd[1353]: qemu status update error: unable to find configuration file for VM 104 on node 'pve1'
Jun 16 16:28:01 pve1 pvescheduler[130930]: jobs: cfs-lock 'file-jobs_cfg' error: no quorum!
Jun 16 16:28:03 pve1 pvestatd[1353]: pbu: error fetching datastores - 401 Unauthorized
Jun 16 17:48:06 pve1 pvedaemon[83578]: error reading cached package status in /var/lib/pve-manager/pkgupdates
Jun 17 02:50:30 pve1 pvescheduler[466104]: INFO: Backup job finished with errors
Jun 17 02:50:30 pve1 pvescheduler[466104]: job errors

Bash:
root@pve1:/etc/lvm/archive# journalctl -b0 | grep "error"
Jun 16 12:39:34 pve1 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jun 16 12:39:34 pve1 kernel: EXT4-fs (dm-1): re-mounted. Opts: errors=remount-ro. Quota mode: none.
Jun 16 14:31:34 pve1 pvedaemon[63161]: error before or during data restore, some or all disks were not completely restored. VM 102 state is NOT cleaned up.
Jun 16 14:35:36 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (Connection timed out)
Jun 16 14:35:47 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (Connection timed out)
Jun 16 14:35:56 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (Connection timed out)
Jun 16 14:36:02 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:12 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:22 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:32 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 14:36:42 pve1 pvestatd[1353]: pbu: error fetching datastores - 500 Can't connect to 192.168.2.37:8007 (No route to host)
Jun 16 15:01:28 pve1 pvedaemon[79990]: error before or during data restore, some or all disks were not completely restored. VM 102 state is NOT cleaned up.
Jun 16 16:28:00 pve1 pvestatd[1353]: qemu status update error: unable to find configuration file for VM 104 on node 'pve1'
Jun 16 16:28:01 pve1 pvescheduler[130930]: jobs: cfs-lock 'file-jobs_cfg' error: no quorum!
Jun 16 16:28:03 pve1 pvestatd[1353]: pbu: error fetching datastores - 401 Unauthorized
Jun 16 17:48:06 pve1 pvedaemon[83578]: error reading cached package status in /var/lib/pve-manager/pkgupdates
Jun 17 02:50:30 pve1 pvescheduler[466104]: INFO: Backup job finished with errors
Jun 17 02:50:30 pve1 pvescheduler[466104]: job errors
Jun 17 06:30:51 pve1 sshd[599455]: error: kex_exchange_identification: Connection closed by remote host
Jun 17 06:31:04 pve1 sshd[599559]: error: kex_exchange_identification: Connection closed by remote host
 
V7 Storage

Bash:
root@pve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes pve1

pbs: pbu
        datastore backup_d_1
        server 192.168.2.37
        content backup
        fingerprint 85:d3:ca:43:1d:1c:e9:c8:e9:36:bb:7e:83:00:55:a3:99:89:db:d9:b5:e1:f4:72:bf:f3:9c:1e:e4:14:73:79
        prune-backups keep-all=1
        username backups@pbs

dir: Backups1
        path /media/usb-drive1
        content backup,iso
        nodes pve1
        prune-backups keep-all=1
        shared 0

Bash:
root@pve1:/run/pve# lvdisplay pve/data
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                8tXVTB-31gH-aQpe-m31J-rsEr-WuRP-1rlRGy
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2022-06-16 11:25:08 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <1.17 TiB
  Allocated pool data    7.54%
  Allocated metadata     0.54%
  Current LE             305916
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

Oh, I see something. Can you call pvecm status?

Bash:
root@pve1:/run/pve# pvecm status
Cluster information
-------------------
Name:             pve7
Config Version:   1
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Jun 17 08:07:02 2022
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.5
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.2.34 (local)
 
I saw some "cluster not quorate" errors in your log, but your pvecm status says you only have one node in your cluster :D.

Where exactly aren't you seeing the storage? From the screenshots it looks ok.
 
I can't add new VMs. I'm hoping I'm just doing something stupid; from the command line and in the console it all *looks* right... :(
 

Attachments

  • pve7.png

Hi,

you are in the OS tab, where you can only select ISO files for the CD/DVD drive.
Just select "Do not use any media" there and configure your VM disk in the Disks tab.

Backups1 is the only selectable storage in that box because the iso content type is only enabled on that storage.
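If you also want to pick ISOs from local (or any other dir storage), you could enable the iso content type there as well, for example via pvesm (just a sketch; keep whichever content types you already use on that storage):
Code:
pvesm set local --content vztmpl,iso

The same should also be possible in the GUI under Datacenter -> Storage -> Edit.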

Greetz
 
