Disk suddenly showing as empty but cannot be used

4leftpaws

Recently the primary non-OS disk in my Proxmox install started showing as 100% free, despite also reporting that it contains several hundred GB of data. Any VM that lives on that disk fails to boot, and attempting to create a new VM on that disk also fails. I suspect a dead drive, but before replacing it I'd like to see whether it can be recovered so I can grab some data off it first.

The disk in question is /dev/sda2. It is configured as LVM-thin with no RAID and contains only VM/LXC disks; the OS is on /dev/sdb.
[screenshot]

Looking at the LVM-Thin menu, it shows the pool as 0% used:
[screenshot]

However, when looking at the disk under the storage view I see:
[screenshot]

I'm not sure where to go from here: is this a dead drive with no way to recover, or is there a chance of getting some of my data off it?

If it helps, here is the output of pveversion -v:
Code:
root@proxmox:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-4-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-7
pve-kernel-helper: 7.0-7
pve-kernel-5.11.22-4-pve: 5.11.22-8
ceph-fuse: 15.2.14-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.3.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-6
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.9-2
proxmox-backup-file-restore: 2.0.9-2
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.3-1
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
 
I would guess the outputs of lvdisplay, vgdisplay, pvesm status and smartctl -a /dev/sda could be helpful. You could also start a long SMART self-test with smartctl -t long /dev/sda.
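All in one go, that would be roughly the following (assuming the affected disk really is /dev/sda; adjust the device name if not):
Code:
# LVM and Proxmox storage state
lvdisplay
vgdisplay
pvesm status

# SMART health data, then start an extended (long) self-test
smartctl -a /dev/sda
smartctl -t long /dev/sda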
 
Thanks for the commands to run, I've got the output of each listed below. I've also started the long SMART self-test, which should complete in approximately 6 hours. The SMART health check seems to have passed and the volumes look okay, but disk management has always been my weakest point. The output does complain about one of the storages being offline, but that's because the backup server VM is hosted on the disk that is currently having problems. Yes, running backups to the same disk makes zero sense in the event of a hardware/disk failure, but it was better than nothing.
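(To keep an eye on the running self-test, something like this should work, again assuming /dev/sda:)
Code:
# "Self-test execution status" shows the remaining percentage while the test runs
smartctl -c /dev/sda

# once it finishes, the result shows up in the self-test log
smartctl -l selftest /dev/sda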

lvdisplay
Code:
--- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                ep2vjc-Odlf-q2SE-BNU3-k8cD-lNhk-RgiFPC
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-13 21:56:45 -0400
  LV Status              available
  # open                 2
  LV Size                7.00 GiB
  Current LE             1792
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                JSA2zh-vVyy-Xtdl-e4PF-c5KR-IunR-gJJr4d
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-13 21:56:45 -0400
  LV Status              available
  # open                 1
  LV Size                29.50 GiB
  Current LE             7552
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
  
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                SWuVGr-H6xH-dRiY-nKJI-OWYL-c1Ro-AGynbX
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2021-10-13 21:56:53 -0400
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <65.49 GiB
  Allocated pool data    17.74%
  Allocated metadata     2.13%
  Current LE             16765
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-121-disk-0
  LV Name                vm-121-disk-0
  VG Name                pve
  LV UUID                uBAEeY-yb59-izRz-XOOl-29uM-deaV-weoiCS
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-23 19:09:56 -0400
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Mapped size            28.02%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
 
    --- Logical volume ---
  LV Path                /dev/pve/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                pve
  LV UUID                MdCdRn-xtq8-l3N2-oq0H-5vqp-vEOO-QExJb1
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-01-03 11:05:53 -0500


vgdisplay
Code:
  --- Volume group ---
  VG Name               pve
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1459
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <118.74 GiB
  PE Size               4.00 MiB
  Total PE              30397
  Alloc PE / Size       26621 / <103.99 GiB
  Free  PE / Size       3776 / 14.75 GiB
  VG UUID               vxVMR9-HE9A-xdU5-Z7GD-0uUk-DJ3k-q9BRCr
  
  --- Volume group ---
  VG Name               BulkPool
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  523
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                13
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476932
  Alloc PE / Size       476804 / <1.82 TiB
  Free  PE / Size       128 / 512.00 MiB
  VG UUID               OOyboQ-GOYJ-3zdH-TplU-rfed-ZeDm-65ohZq


pvesm status
Note: 192.168.0.205 is the backup server VM, which is also hosted on the affected disk. It is, of course, currently powered off.
Code:
root@proxmox:~# pvesm status
VM_Backup: error fetching datastores - 500 Can't connect to 192.168.0.205:8007 (No route to host)
Name               Type     Status           Total            Used       Available        %
BulkPool        lvmthin     active      1919827968               0      1919827968    0.00%
BulkierPool     lvmthin   disabled               0               0               0      N/A
VM_Backup           pbs   inactive               0               0               0    0.00%
local               dir     active        30271428         9342252        19366144   30.86%
local-lvm       lvmthin     active        68669440        12181958        56487481   17.74%


smartctl -a /dev/sda
Code:
root@proxmox:~# smartctl -a /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.11.22-4-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Green
Device Model:     WDC WD20EARX-22PASB0
Serial Number:    WD-WCAZA9766289
LU WWN Device Id: 5 0014ee 25b912c2b
Firmware Version: 51.0AB51
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jul 16 18:14:13 2022 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84) Offline data collection activity
                                        was suspended by an interrupting command from host.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (38760) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 374) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x3035) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   187   167   021    Pre-fail  Always       -       5650
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       999
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   031   031   000    Old_age   Always       -       50666
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       781
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       178
193 Load_Cycle_Count        0x0032   108   108   000    Old_age   Always       -       276334
194 Temperature_Celsius     0x0022   116   100   000    Old_age   Always       -       34
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       1

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%         0         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
 
I don't think it is better than nothing if you now have nothing, because you stored your only backups on the same disk as the stuff you backed up. :(
You can argue whether snapshots, RAID or a dedicated backup server are needed in a home setup, but not about keeping at least a second copy of everything important on another disk, and for very important data a third copy somewhere offsite. That's really saving money in the wrong place when a $50 USB HDD would have done the job.
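For example, an external disk mounted on the host could be registered as a backup target roughly like this (device, mount point and storage name here are only placeholders):
Code:
# mount the USB disk somewhere permanent (fstab entry not shown)
mkdir -p /mnt/usb-backup
mount /dev/sdc1 /mnt/usb-backup

# add it to Proxmox VE as a directory storage that only holds backups
pvesm add dir usb-backup --path /mnt/usb-backup --content backup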

But the SMART attributes of the disk look OK, even if it's quite an old disk: no pending or uncorrectable sectors. I also have no idea why your LVM-thin pool is now showing as empty. Maybe the staff will have an idea on Monday.
 
It was more to protect against me breaking settings than against hardware failures. It's saved my butt once before when I changed a disk config on a VM. But for hardware failures, you're right, it's no help.

I apologize, I'm pretty new to the forums. Does staff browse for issues like these, or do I need to escalate it in some way? I don't have a subscription, so I figured I wouldn't be getting any official support.
 
Sooner or later someone from the staff will have a look at (nearly) all threads, whether you have a license or not. It's just not that fast, and not on the weekend. Most simple stuff can be fixed by the community with crowd knowledge, so the staff can focus on the harder questions nobody has a good answer for.
 
Can you post the output of
Code:
pvs
vgs
lvs
And please post the complete output, since in your first post at least the lvdisplay was cut off.

The content of the storage config would also be interesting (/etc/pve/storage.cfg).
 
pvs
Code:
root@proxmox:~# pvs
  PV         VG       Fmt  Attr PSize    PFree
  /dev/sda   BulkPool lvm2 a--    <1.82t 512.00m
  /dev/sdb3  pve      lvm2 a--  <118.74g  14.75g

vgs
Code:
root@proxmox:~# vgs
  VG       #PV #LV #SN Attr   VSize    VFree
  BulkPool   1  13   0 wz--n-   <1.82t 512.00m
  pve        1   5   0 wz--n- <118.74g  14.75g


lvs
Code:
root@proxmox:~# lvs
  LV            VG       Attr       LSize    Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  BulkPool      BulkPool twi---tz--   <1.79t                                                       
  vm-100-disk-0 BulkPool Vwi---tz-- 1000.00g BulkPool                                              
  vm-102-disk-0 BulkPool Vwi---tz--   16.00g BulkPool                                              
  vm-103-disk-0 BulkPool Vwi---tz--    4.00m BulkPool                                              
  vm-103-disk-1 BulkPool Vwi---tz--   32.00g BulkPool                                              
  vm-104-disk-0 BulkPool Vwi---tz--   16.00g BulkPool                                              
  vm-109-disk-0 BulkPool Vwi---tz--   32.00g BulkPool                                              
  vm-109-disk-1 BulkPool Vwi---tz--  500.00g BulkPool                                              
  vm-111-disk-0 BulkPool Vwi---tz--   16.00g BulkPool                                              
  vm-111-disk-1 BulkPool Vwi---tz--  100.00g BulkPool                                              
  vm-114-disk-0 BulkPool Vwi---tz--   32.00g BulkPool                                              
  vm-115-disk-0 BulkPool Vwi---tz--   32.00g BulkPool                                              
  vm-116-disk-0 BulkPool Vwi---tz--   60.00g BulkPool                                              
  data          pve      twi-aotz--  <65.49g                 17.77  2.13                           
  root          pve      -wi-ao----   29.50g                                                       
  swap          pve      -wi-ao----    7.00g                                                       
  vm-102-disk-0 pve      Vwi-aotz--   10.00g data            93.78                                 
  vm-121-disk-0 pve      Vwi-aotz--    8.00g data            28.26




/etc/pve/storage.cfg
Code:
root@proxmox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvmthin: BulkPool
        thinpool BulkPool
        vgname BulkPool
        content images,rootdir
        nodes proxmox

lvmthin: BulkierPool
        thinpool BulkierPool
        vgname BulkierPool
        content images,rootdir
        nodes gateway

pbs: VM_Backup
        datastore main
        server 192.168.0.205
        content backup
        fingerprint 33:7c:2c:86:76:1c:68:ad:b1:74:78:8c:3e:ed:ec:50:8e:7d:2b:3c:74:09:16:c8:54:dc:d4:96:72:01:26:5a
        prune-backups keep-last=3
        username root@pam

And the complete output of lvdisplay; sorry, I didn't realize it was cut off:
Code:
root@proxmox:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                ep2vjc-Odlf-q2SE-BNU3-k8cD-lNhk-RgiFPC
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-13 21:56:45 -0400
  LV Status              available
  # open                 2
  LV Size                7.00 GiB
  Current LE             1792
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                JSA2zh-vVyy-Xtdl-e4PF-c5KR-IunR-gJJr4d
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-13 21:56:45 -0400
  LV Status              available
  # open                 1
  LV Size                29.50 GiB
  Current LE             7552
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
 
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                SWuVGr-H6xH-dRiY-nKJI-OWYL-c1Ro-AGynbX
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2021-10-13 21:56:53 -0400
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <65.49 GiB
  Allocated pool data    17.77%
  Allocated metadata     2.13%
  Current LE             16765
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
 
  --- Logical volume ---
  LV Path                /dev/pve/vm-121-disk-0
  LV Name                vm-121-disk-0
  VG Name                pve
  LV UUID                uBAEeY-yb59-izRz-XOOl-29uM-deaV-weoiCS
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-23 19:09:56 -0400
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Mapped size            28.26%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
 
  --- Logical volume ---
  LV Path                /dev/pve/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                pve
  LV UUID                MdCdRn-xtq8-l3N2-oq0H-5vqp-vEOO-QExJb1
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-01-03 11:05:53 -0500
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Mapped size            93.78%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9
 
  --- Logical volume ---
  LV Name                BulkPool
  VG Name                BulkPool
  LV UUID                yUs99b-JcUf-r2VI-zffi-Q6WZ-QuO5-ZB43pw
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-16 17:22:45 -0400
  LV Pool metadata       BulkPool_tmeta
  LV Pool data           BulkPool_tdata
  LV Status              NOT available
  LV Size                <1.79 TiB
  Current LE             468708
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                BulkPool
  LV UUID                WYDfHN-ursJ-xbcq-dhmR-ebx7-1CSQ-ZZrWYC
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-16 23:39:51 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                16.00 GiB
  Current LE             4096
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                BulkPool
  LV UUID                E7JkoI-5LL7-YyoK-eZtF-QOJk-O21a-gYZNoA
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-17 10:49:17 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                1000.00 GiB
  Current LE             256000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-103-disk-0
  LV Name                vm-103-disk-0
  VG Name                BulkPool
  LV UUID                W5S7k3-G0Ip-SKJN-FMxb-Ku9J-G2Ao-qhxmuM
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-31 23:36:39 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                4.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-103-disk-1
  LV Name                vm-103-disk-1
  VG Name                BulkPool
  LV UUID                x7FyM7-RAM0-v3UK-DDjN-Z2sY-dzWf-1FRJYs
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-10-31 23:36:40 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-104-disk-0
  LV Name                vm-104-disk-0
  VG Name                BulkPool
  LV UUID                st7YKk-t9Gp-VQzz-l8PY-0iNI-qJeD-ghhbeL
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-11-03 12:59:52 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                16.00 GiB
  Current LE             4096
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-109-disk-0
  LV Name                vm-109-disk-0
  VG Name                BulkPool
  LV UUID                NNwQE9-D6P0-Dqto-7Yv8-H24X-vTBb-LvoRv2
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-01-03 22:15:21 -0500
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-109-disk-1
  LV Name                vm-109-disk-1
  VG Name                BulkPool
  LV UUID                pNYNed-sJ1U-wtnO-xrgE-4dft-thr2-Xz6Sxk
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-01-03 22:23:27 -0500
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-111-disk-0
  LV Name                vm-111-disk-0
  VG Name                BulkPool
  LV UUID                zIGrPH-fTCJ-NaEr-fQrh-PiLv-sonN-17Q1iw
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-01-04 21:05:00 -0500
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                16.00 GiB
  Current LE             4096
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-111-disk-1
  LV Name                vm-111-disk-1
  VG Name                BulkPool
  LV UUID                kpEllg-5SXt-kuVV-ZRC7-exT2-cK8f-d94l9p
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-01-04 21:27:28 -0500
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-114-disk-0
  LV Name                vm-114-disk-0
  VG Name                BulkPool
  LV UUID                TWBJIC-JXDW-q2U0-CyxW-A13X-z60a-njp3ev
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-04-04 23:11:13 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-115-disk-0
  LV Name                vm-115-disk-0
  VG Name                BulkPool
  LV UUID                HExXIi-2LMT-aahO-k6kI-Y11s-HduS-qMoiiC
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-04-09 17:34:00 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/BulkPool/vm-116-disk-0
  LV Name                vm-116-disk-0
  VG Name                BulkPool
  LV UUID                0KNQYG-CzeN-hGzj-ULDe-VhTo-yaZP-slJ5Ml
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-04-26 23:45:06 -0400
  LV Pool name           BulkPool
  LV Status              NOT available
  LV Size                60.00 GiB
  Current LE             15360
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
OK, for some reason your thin pool is not active/online.
What happens when you run:

Code:
lvchange -ay BulkPool/BulkPool
?
 
It appears part of the pool is already active:

Code:
root@proxmox:~# lvchange -ay /dev/BulkPool/BulkPool
  Activation of logical volume BulkPool/BulkPool is prohibited while logical volume BulkPool/BulkPool_tmeta is active.
 
So I got that deactivated and ran the suggested command. The disk now shows about 800 GB in use, which is a step forward for sure. However, I'm still not able to start any of the virtual machines or containers. I attempted a reboot thinking that would fix it, but it put me back where I was before, so I repeated the process after boot. I'll paste the output of the previous commands that has changed and may be useful. For completeness' sake, the commands I ran were:

Code:
root@proxmox:~# lvchange -an /dev/BulkPool/BulkPool_tmeta
root@proxmox:~# lvchange -an /dev/BulkPool/BulkPool_tdata
root@proxmox:~# lvchange -an /dev/BulkPool/BulkPool
root@proxmox:~# lvchange -ay /dev/BulkPool/BulkPool

Interestingly, it looks like the pool LV was activated read-only.
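(For reference, the activation state and permissions of the pool and its hidden sub-volumes can be checked with something like this; the second character of lv_attr is w for writeable and r for read-only:)
Code:
# -a also lists the hidden _tmeta/_tdata sub-LVs
lvs -a -o lv_name,lv_attr,lv_active BulkPool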
lvdisplay
Code:
[...]
  --- Logical volume ---
  LV Name                BulkPool
  VG Name                BulkPool
  LV UUID                yUs99b-JcUf-r2VI-zffi-Q6WZ-QuO5-ZB43pw
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2021-10-16 17:22:45 -0400
  LV Pool metadata       BulkPool_tmeta
  LV Pool data           BulkPool_tdata
  LV Status              available
  # open                 0
  LV Size                <1.79 TiB
  Allocated pool data    40.91%
  Allocated metadata     2.46%
  Current LE             468708
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:11
[...]

lvs now shows the Data% and Meta% used.
lvs
Code:
root@proxmox:~# lvs
  LV            VG       Attr       LSize    Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  BulkPool      BulkPool twi-aotz--   <1.79t                 40.91  2.46                           
  vm-100-disk-0 BulkPool Vwi---tz-- 1000.00g BulkPool                                               
  vm-102-disk-0 BulkPool Vwi---tz--   16.00g BulkPool                                               
  vm-103-disk-0 BulkPool Vwi---tz--    4.00m BulkPool                                               
  vm-103-disk-1 BulkPool Vwi---tz--   32.00g BulkPool                                               
  vm-104-disk-0 BulkPool Vwi---tz--   16.00g BulkPool                                               
  vm-109-disk-0 BulkPool Vwi---tz--   32.00g BulkPool                                               
  vm-109-disk-1 BulkPool Vwi---tz--  500.00g BulkPool                                               
  vm-111-disk-0 BulkPool Vwi---tz--   16.00g BulkPool                                               
  vm-111-disk-1 BulkPool Vwi---tz--  100.00g BulkPool                                               
  vm-114-disk-0 BulkPool Vwi---tz--   32.00g BulkPool                                               
  vm-115-disk-0 BulkPool Vwi---tz--   32.00g BulkPool                                               
  vm-116-disk-0 BulkPool Vwi---tz--   60.00g BulkPool                                               
  data          pve      twi-aotz--  <65.49g                 17.77  2.13                           
  root          pve      -wi-ao----   29.50g                                                       
  swap          pve      -wi-ao----    7.00g                                                       
  vm-102-disk-0 pve      Vwi-aotz--   10.00g data            93.78                                 
  vm-121-disk-0 pve      Vwi-aotz--    8.00g data            28.26

And pvesm status tells the same story:
pvesm status
Code:
root@proxmox:~# pvesm status
VM_Backup: error fetching datastores - 500 Can't connect to 192.168.0.205:8007 (No route to host)
Name               Type     Status           Total            Used       Available        %
BulkPool        lvmthin     active      1919827968       785401621      1134426346   40.91%
BulkierPool     lvmthin   disabled               0               0               0      N/A
VM_Backup           pbs   inactive               0               0               0    0.00%
local               dir     active        30271428         9365512        19342884   30.94%
local-lvm       lvmthin     active        68669440        12202559        56466880   17.77%
 
Hmm... can you post the complete journal from a fresh boot?
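For example, the journal of the current boot can be dumped to a file like this (the output path is just a suggestion):
Code:
# write the full journal of the current boot to a text file
journalctl -b > /tmp/boot-journal.txt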
 
Hi,
can you activate an individual LV in the thin pool with e.g. lvchange -ay BulkPool/vm-100-disk-0? (Newer versions of Proxmox VE will attempt this automatically, but it requires libpve-storage-perl >= 7.0-14). What if you run lvchange -ay BulkPool, which should activate all LVs in the pool?
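Put together, that would be something along these lines (using VM 100 as the example from above):
Code:
# activate a single thin LV first, then everything in the pool
lvchange -ay BulkPool/vm-100-disk-0
lvchange -ay BulkPool

# confirm the thin LVs are active, then try starting a guest
lvs -o lv_name,lv_active BulkPool
qm start 100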
 
This has fixed it! I tested vm-100 and it was able to power on, and then running lvchange -ay BulkPool fixed all of them.

Thank you so much!
 
