[SOLVED] SSDs could not be found after upgrade

Jospeh Huber

Renowned Member
Apr 18, 2016
Hi,

today I wanted to upgrade my cluster to PVE 7, and beforehand I wanted to upgrade to the latest version of PVE 6.
But on two hosts of my cluster, LVM thin volumes on SSDs can no longer be detected.
Other SSDs with LVM thin are still there.

Code:
journalctl -b   (parts)
-----
Jul 28 17:07:38 vmhost kernel: Linux version 5.4.195-1-pve (build@proxmox) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP PVE 5.4.
Jul 28 17:07:38 vmhost kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-5.4.195-1-pve root=/dev/mapper/pve-root ro quiet
...
Jul 28 17:07:38 vmhost kernel: sd 9:0:0:0: [sdd] Attached SCSI removable disk
Jul 28 17:07:38 vmhost kernel: device-mapper: thin: Data device (dm-4) discard unsupported: Disabling discard passdown.
...
Jul 28 17:07:38 vmhost lvm[554]:   /dev/sdd: open failed: No medium found
...

As one can see, the disk is detected ("[sdd] Attached SCSI removable disk") but then gets lost somewhere.
For the OS, "/dev/sdd" is not usable (not in fdisk, lsblk, ...), although the device node itself exists:
Code:
 ls -l /dev/sdd
brw-rw---- 1 root disk 8, 48 Jul 28 17:07 /dev/sdd

 fdisk -l /dev/sdd
fdisk: cannot open /dev/sdd: No medium found
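For what it's worth, "No medium found" usually means the node exists but the kernel sees no media behind it, as with an empty card reader or optical drive. A quick way to see what the kernel itself reports per block device (a generic sysfs walk, not Proxmox-specific; a sketch, not a definitive diagnostic):

```shell
#!/bin/sh
# Walk /sys/block and print each device's removable flag and model.
# A device with removable=1 and no medium inserted is exactly what
# produces "open failed: No medium found" from fdisk and LVM.
for d in /sys/block/*; do
  dev=${d##*/}                       # strip the /sys/block/ prefix
  [ -r "$d/removable" ] || continue  # skip anything without the flag
  model=$(cat "$d/device/model" 2>/dev/null || echo '-')
  printf '%s removable=%s model=%s\n' "$dev" "$(cat "$d/removable")" "$model"
done
```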

First I tried going back to the old kernel "5.4.189-2-pve", but no luck there... exactly the same problem.
Any ideas?

Greetings
jh
 
what ssds do you use and can you post the output of 'lsblk' and 'dmesg' ?
 
Hi,

we use mixed Samsung SATA and M.2 SSDs:
SAMSUNG MZ7KH960HAJR
SAMSUNG 860 PRO, 970 PRO
The missing SSD is a SATA 860 Pro (I guess, because at the moment I have no physical access to check).

lsblk does not show the SSD (/dev/sdd):
Code:
:~# lsblk
NAME                                                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                           8:0    0   3.7T  0 disk
├─sda1                                                        8:1    0  1007K  0 part
├─sda2                                                        8:2    0   512M  0 part /boot/efi
└─sda3                                                        8:3    0   3.7T  0 part
  ├─pve-swap                                                253:1    0     8G  0 lvm  [SWAP]
  ├─pve-root                                                253:2    0    96G  0 lvm  /
  ├─pve-data_tmeta                                          253:3    0  15.8G  0 lvm
  │ └─pve-data-tpool                                        253:5    0   3.5T  0 lvm
  │   ├─pve-data                                            253:6    0   3.5T  0 lvm
  │   ├─pve-vm--131--disk--1                                253:7    0     4G  0 lvm
  │   ├─pve-vm--137--disk--1                                253:8    0     4G  0 lvm
  │   ├─pve-vm--130--disk--0                                253:9    0     5G  0 lvm
  │   └─pve-vm--134--disk--1                                253:10   0    11G  0 lvm
  └─pve-data_tdata                                          253:4    0   3.5T  0 lvm
    └─pve-data-tpool                                        253:5    0   3.5T  0 lvm
      ├─pve-data                                            253:6    0   3.5T  0 lvm
      ├─pve-vm--131--disk--1                                253:7    0     4G  0 lvm
      ├─pve-vm--137--disk--1                                253:8    0     4G  0 lvm
      ├─pve-vm--130--disk--0                                253:9    0     5G  0 lvm
      └─pve-vm--134--disk--1                                253:10   0    11G  0 lvm
sdb                                                           8:16   0 894.3G  0 disk
└─ceph--2de4f3c5--7f5f--442d--b1d0--3c78790aa08b-osd--block--91984b8b--6278--4169--8266--3331054bc682
                                                            253:0    0 894.3G  0 lvm
sdc                                                           8:32   0   477G  0 disk
└─sdc1                                                        8:33   0   477G  0 part
sr0                                                          11:0    1  1024M  0 rom
rbd0                                                        252:0    0     8G  0 disk
rbd1                                                        252:16   0    24G  0 disk
nvme0n1                                                     259:0    0   477G  0 disk
└─nvme0n1p1                                                 259:1    0   477G  0 part
  ├─ssd-ssd1_tmeta                                          253:11   0   120M  0 lvm
  │ └─ssd-ssd1                                              253:13   0 476.7G  0 lvm
  └─ssd-ssd1_tdata                                          253:12   0 476.7G  0 lvm
    └─ssd-ssd1                                              253:13   0 476.7G  0 lvm

 ~# pvdisplay
  /dev/sdd: open failed: No medium found
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               ceph-2de4f3c5-7f5f-442d-b1d0-3c78790aa08b
  PV Size               894.25 GiB / not usable <3.34 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              228928
  Free PE               0
  Allocated PE          228928
  PV UUID               2mRYMG-aMnI-w0D3-4u03-oHbH-GQSN-eRreR7

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               <3.64 TiB / not usable <2.82 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              953733
  Free PE               4192
  Allocated PE          949541
  PV UUID               e6rvxs-SZY2-uf4P-4XdL-vIEm-j8A0-MePlIT

  --- Physical volume ---
  PV Name               /dev/nvme0n1p1
  VG Name               ssd
  PV Size               <476.94 GiB / not usable <1.32 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              122096
  Free PE               0
  Allocated PE          122096
  PV UUID               Lp2A2s-PGoM-4SQS-793k-luct-xKQk-ai8TkR

The whole dmesg is attached.
 
afaics from the dmesg the disks are:

sda: WDC WD40EFRX-68N
sdb: SAMSUNG MZ7KH960
sdc: Samsung SSD 860 PRO
nvme: ?? (I guess the 970 Pro?)

so you said the 860 pro is missing, but it's there as sdc ?

Code:
[    3.507170] scsi 3:0:0:0: Direct-Access     ATA      Samsung SSD 860  1B6Q PQ: 0 ANSI: 5          
[    3.507334] ata4.00: Enabling discard_zeroes_data                                                 
[    3.507352] sd 3:0:0:0: Attached scsi generic sg2 type 0                                          
[    3.507394] sd 3:0:0:0: [sdc] 1000215216 512-byte logical blocks: (512 GB/477 GiB)                
[    3.507404] sd 3:0:0:0: [sdc] Write Protect is off                                                
[    3.507405] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00                                             
[    3.507426] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    3.522030] ata4.00: Enabling discard_zeroes_data                                                 
[    3.522354]  sdc: sdc1
 
Hi,

yes, it looks like the device name has changed from sdd to sdc...
On the second host it is exactly the same.
Never had this before.
Code:
:~# mount /dev/sdc1 /mnt/
mount: /mnt: unknown filesystem type 'LVM2_member'.


:~# pvscan
  /dev/sdd: open failed: No medium found
  PV /dev/sdb         VG ceph-2de4f3c5-7f5f-442d-b1d0-3c78790aa08b   lvm2 [894.25 GiB / 0    free]
  PV /dev/sda3        VG pve                                         lvm2 [<3.64 TiB / <16.38 GiB free]
  PV /dev/nvme0n1p1   VG ssd                                         lvm2 [<476.94 GiB / 0    free]
  Total: 3 [<4.98 TiB] / in use: 3 [<4.98 TiB] / in no VG: 0 [0   ]

pvscan did not help...
What is the way to change the devicename in lvm?
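As an aside, the failed mount above is expected: an "LVM2_member" partition is a physical volume, not a filesystem, so the volume group on it has to be activated and a logical volume mounted instead. A sketch of the usual sequence (the VG/LV names are placeholders; use whatever `pvs`/`lvs` report), plus a small helper showing how device-mapper derives its node names:

```shell
#!/bin/sh
# An LVM2_member partition cannot be mounted directly. Instead:
#
#   pvs /dev/sdc1              # confirm it is a PV and see its VG
#   vgchange -ay myvg          # activate every LV in VG "myvg"
#   lvs myvg                   # list the LVs now available
#   mount /dev/myvg/data /mnt  # mount a filesystem-bearing LV
#
# device-mapper exposes each LV as /dev/mapper/<vg>-<lv>, doubling any
# "-" inside a VG or LV name. This helper builds that path:
mapper_path() {
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

mapper_path pve vm-131-disk-1   # -> /dev/mapper/pve-vm--131--disk--1
```

The doubled dashes match the `pve-vm--131--disk--1` entries visible in the lsblk output earlier in the thread.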
 
Hi,

I found it...
There was a filter defined on /dev/sdc in /etc/lvm/lvm.conf.
Code:
        global_filter = [  "r|/dev/sdc|", ....

After adapting the filter, the thin volume is back again... oh man.
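That explains the symptom: LVM rejects a device when any of its paths matches an `r|...|` pattern, so after the name shuffle the rule written for the DVD drive swallowed the SSD. A small illustration of how the unanchored pattern matches (plain grep stands in here for LVM's regex matching; a sketch, not LVM's actual code path):

```shell
#!/bin/sh
# Simulate the reject rule r|/dev/sdc| against the post-upgrade names.
# The pattern body is an unanchored regex, so it matches /dev/sdc
# itself and, notably, the partition /dev/sdc1 as well.
pattern='/dev/sdc'   # regex body from r|/dev/sdc|

for dev in /dev/sdb /dev/sdc /dev/sdc1 /dev/sdd; do
  if printf '%s\n' "$dev" | grep -q "$pattern"; then
    echo "$dev: rejected by filter"
  else
    echo "$dev: accepted"
  fi
done
```

With the names shifted, this prints "rejected by filter" for both /dev/sdc and /dev/sdc1, which is exactly why the PV on sdc1 vanished from pvscan.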
Code:
:~# pvscan
  PV /dev/sdc1        VG ssd2                                        lvm2 [<476.94 GiB / 0    free]
  PV /dev/sdb         VG ceph-2de4f3c5-7f5f-442d-b1d0-3c78790aa08b   lvm2 [894.25 GiB / 0    free]
  PV /dev/sda3        VG pve                                         lvm2 [<3.64 TiB / <16.38 GiB free]
  PV /dev/nvme0n1p1   VG ssd                                         lvm2 [<476.94 GiB / 0    free]
  Total: 4 [5.44 TiB] / in use: 4 [5.44 TiB] / in no VG: 0 [0   ]

Before the upgrade, /dev/sdc was the CD/DVD drive, and a filter was defined long ago to suppress the LVM messages "/dev/xxx: open failed: No medium found".
But during the last upgrade the device names swapped (I do not know why...) and the LVM filter did its best.
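One way to avoid a repeat (a sketch with a placeholder device id, assuming the lvm.conf behavior that filter regexes are matched against all aliases of a device, including udev symlinks) is to pin the reject rule to a stable /dev/disk/by-id link instead of a kernel-assigned sdX name, since by-id links do not move when enumeration order changes:

```
# /etc/lvm/lvm.conf -- reject the optical drive via its stable by-id link
# (the ata-SOME_DVD_DRIVE name is a placeholder; find the real link name
#  with: ls -l /dev/disk/by-id/)
global_filter = [ "r|/dev/disk/by-id/ata-SOME_DVD_DRIVE.*|" ]
```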

The problem was "semi-home-made"...

thx for your help!
 