Issues with 2 hard drives after upgrading to Proxmox 7

darkzorrow

Member
Jul 12, 2021
Hi everyone,

I have just upgraded to Proxmox 7 from 6.4-13 and have run into a problem with one of my VMs: the VM disk files on the two physical hard drives won't open. See the image for one of the drives below:
[Screenshot: 1626121675846.png]

When I look at the size of the file on the drive, it seems to be the right size, but as you can see from the picture below, something is not right. The issue is the same for both hgst_12tb and seagate_10tb:

[Screenshot: 1626121911446.png]

The drives nvme-thin01 and samsung_ssd_1tb work just fine.

I would very much like to get some help with this. Thanks.
 

Hi, as a first step, could you please copy and paste the output of the following commands (for example from the shell in the GUI of your PVE host)?
Code:
cat /etc/pve/storage.cfg
fdisk -l
df -h
 
Hi Dominic,

Sure. See the output below.

------------------------------------------------------------------

dir: local
    path /var/lib/vz
    content iso,backup,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

lvmthin: nvme-thin01
    thinpool nvme-thin01
    vgname nvme-thin01
    content rootdir,images
    nodes proxmox02

lvmthin: hgst_12tb
    thinpool hgst_12tb
    vgname hgst_12tb
    content images,rootdir
    nodes proxmox02

lvmthin: samsung_ssd_1tb
    thinpool samsung_ssd_1tb
    vgname samsung_ssd_1tb
    content rootdir,images
    nodes proxmox02

lvmthin: seagate_10tb
    thinpool seagate_10tb
    vgname seagate_10tb
    content rootdir,images

Disk /dev/nvme1n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SAMSUNG MZVKV512HAJH-000L1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/nvme0n1: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: SKHynix_HFS256GD9TNG-L5B0B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B0817368-FC1C-4B2F-8ACC-0E379BE4214C

Device           Start       End   Sectors  Size Type
/dev/nvme0n1p1      34      2047      2014 1007K BIOS boot
/dev/nvme0n1p2    2048   1050623   1048576  512M EFI System
/dev/nvme0n1p3 1050624 500118158 499067535  238G Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 59.25 GiB, 63619203072 bytes, 124256256 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-vm--100--disk--0: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: 25CCA9DF-705F-4847-A01D-F0CD88B9588C

Device                                    Start      End  Sectors  Size Type
/dev/mapper/pve-vm--100--disk--0-part1     2048  1050623  1048576  512M EFI System
/dev/mapper/pve-vm--100--disk--0-part2  1050624 65107967 64057344 30.5G Linux filesystem
/dev/mapper/pve-vm--100--disk--0-part3 65107968 67106815  1998848  976M Linux swap


Disk /dev/mapper/pve-vm--100--disk--1: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/sdc: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: Samsung SSD 860
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: ST10000DM0004
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 10.91 TiB, 12000138625024 bytes, 23437770752 sectors
Disk model: HGST HUH721212AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/nvme--thin01-vm--100--disk--0: 480 GiB, 515396075520 bytes, 1006632960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: 152F695B-DA8C-4CDB-9649-777966651FF0

Device                                          Start       End   Sectors   Size Type
/dev/mapper/nvme--thin01-vm--100--disk--0-part1  2048 976566271 976564224 465.7G Linux filesystem


Disk /dev/mapper/samsung_ssd_1tb-vm--100--disk--0: 960 GiB, 1030792151040 bytes, 2013265920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: FEADA6E0-0763-4DF3-9624-407E2E11BC3E

Device                                              Start        End    Sectors   Size Type
/dev/mapper/samsung_ssd_1tb-vm--100--disk--0-part1   2048 1953128447 1953126400 931.3G Linux filesystem


Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G  1.6M  3.2G   1% /run
/dev/mapper/pve-root   59G  5.4G   50G  10% /
tmpfs                  16G   43M   16G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p2        511M  312K  511M   1% /boot/efi
/dev/fuse             128M   16K  128M   1% /etc/pve
tmpfs                 3.2G     0  3.2G   0% /run/user/0

------------------------------------------------------------------

I hope it helps

Best regards.
Anders
 
The fdisk output contains entries for the working nvme-thin01 and samsung_ssd_1tb but not for the others. Do you have physical volumes and volume groups as defined in the storage.cfg?

Code:
pvs
vgs
 
Here is the output:

--------------------------------------------------

PV             VG              Fmt  Attr PSize    PFree
/dev/nvme0n1p3 pve             lvm2 a--   237.97g  15.99g
/dev/nvme1n1   nvme-thin01     lvm2 a--  <476.94g 124.00m
/dev/sda       hgst_12tb       lvm2 a--    10.91t 512.00m
/dev/sdb       seagate_10tb    lvm2 a--    <9.10t 512.00m
/dev/sdc       samsung_ssd_1tb lvm2 a--  <953.87g 124.00m

VG              #PV #LV #SN Attr   VSize    VFree
hgst_12tb         1   2   0 wz--n-   10.91t 512.00m
nvme-thin01       1   2   0 wz--n- <476.94g 124.00m
pve               1   5   0 wz--n-  237.97g  15.99g
samsung_ssd_1tb   1   2   0 wz--n- <953.87g 124.00m
seagate_10tb      1   2   0 wz--n-   <9.10t 512.00m

-------------------------------------------------

Does that help?

Best regards,
Anders
 
And do all your thin pools actually exist as logical volumes? To check this, could you please also post
Code:
lvs
?

I have just upgraded to Proxmox 7 from 6.4-13 and have run into a problem with one of my VMs: the VM disk files on the two physical hard drives won't open.
So before the upgrade everything was OK? Are there other VMs that
  • have a similar problem, or
  • work even though they have a disk on a "bad" storage?
 
lvs output:

--------------------------------------------

LV              VG              Attr       LSize    Pool            Origin Data%  Meta%  Move Log Cpy%Sync Convert
hgst_12tb       hgst_12tb       twi---tz--   10.88t
vm-100-disk-0   hgst_12tb       Vwi---tz--  <11.72t hgst_12tb
nvme-thin01     nvme-thin01     twi-aotz-- <467.28g                        98.94  4.32
vm-100-disk-0   nvme-thin01     Vwi-a-tz--  480.00g nvme-thin01            96.31
data            pve             twi-aotz-- <151.63g                        17.17  1.79
root            pve             -wi-ao----   59.25g
swap            pve             -wi-ao----    8.00g
vm-100-disk-0   pve             Vwi-a-tz--   32.00g data                   81.38
vm-100-disk-1   pve             Vwi-a-tz--    4.00m data                    3.12
samsung_ssd_1tb samsung_ssd_1tb twi-aotz-- <934.67g                        97.25  4.15
vm-100-disk-0   samsung_ssd_1tb Vwi-a-tz--  960.00g samsung_ssd_1tb        94.68
seagate_10tb    seagate_10tb    twi---tz--    9.06t
vm-100-disk-0   seagate_10tb    Vwi---tz--   <9.77t seagate_10tb

--------------------------------------------

So before the upgrade everything was OK? Are there other VMs that
  • have a similar problem, or
  • work even though they have a disk on a "bad" storage?

Before the upgrade the server had been updated within the 6.3 and 6.4 releases without any issues. Both the Proxmox server and the VM that has the disks attached had been restarted and auto-booted without problems before the upgrade, and I do not recall any notifications or warnings about "bad" storage.
 
When you compare the attributes of your thin volumes, there is an additional letter a for the working pools:
Code:
twi-aotz--
vs
Code:
twi---tz--
for the bad hgst_12tb and seagate_10tb.
The a stands for active (see the lvs man page). I think it should work once you activate the volumes:

Code:
lvchange --activate y hgst_12tb
lvchange --activate y seagate_10tb
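
Afterwards you can verify with lvs that the activation worked; the fifth character of the lv_attr field should change to a. For example:

Code:
# standard lvs output fields; the 5th lv_attr character is the activation state
lvs -o lv_name,vg_name,lv_attr hgst_12tb seagate_10tb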
 
Good call. I did not see that.
I ran the commands but get these messages:

root@proxmox02:~# lvchange --activate y hgst_12tb
Activation of logical volume hgst_12tb/hgst_12tb is prohibited while logical volume hgst_12tb/hgst_12tb_tmeta is active.
Activation of logical volume hgst_12tb/vm-100-disk-0 is prohibited while logical volume hgst_12tb/hgst_12tb_tmeta is active.
root@proxmox02:~# lvchange --activate y seagate_10tb
Activation of logical volume seagate_10tb/seagate_10tb is prohibited while logical volume seagate_10tb/seagate_10tb_tmeta is active.
Activation of logical volume seagate_10tb/vm-100-disk-0 is prohibited while logical volume seagate_10tb/seagate_10tb_tmeta is active.
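
For reference, the hidden _tmeta/_tdata volumes that the errors mention can be listed with the standard -a flag of lvs (they show up in square brackets):

Code:
# -a also lists internal/hidden LVs such as [hgst_12tb_tmeta]
lvs -a hgst_12tb seagate_10tb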

I also tried while the hard disks were detached from the VM, but I get the same error.
Do you know what to do about this?

Best regards,
Anders
 
Hi Dominic,

It's been over a week since I posted the reply. Is there any progress or update on this topic?
My server has been offline for over two weeks since the upgrade, and it is starting to make me anxious.

Thanks.

Best regards,
Anders
 
I have the same issue. As for the workaround, just do:

Code:
lvchange -an hgst_12tb

and then:

Code:
lvchange -ay hgst_12tb

For me, this works. The lvchange -an command takes about 5 minutes, but it does eventually finish, and then my VMs are functional again. Really weird. The downside is that I need to do this every time I reboot the server, which fortunately is pretty rare. I am hoping to find a permanent fix, though.
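
Since this has to be repeated after every reboot, a small script can save some typing. A minimal sketch, assuming the affected volume groups are hgst_12tb and seagate_10tb as in this thread:

Code:
#!/bin/sh
# Re-activate thin pools that come up inactive after boot.
for vg in hgst_12tb seagate_10tb; do
    lvchange -an "$vg"   # deactivate everything in the VG first; can take minutes
    lvchange -ay "$vg"   # then activate the pool and its thin volumes again
done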
 
Hi Krypty,

Thanks for the tip, but unfortunately it does not seem to work for me.

When I run "lvchange -an hgst_12tb" I get this error:
device-mapper: remove ioctl on (253:12) failed: Device or resource busy

And when I run "lvchange -ay hgst_12tb" I get this error:
Activation of logical volume hgst_12tb/hgst_12tb is prohibited while logical volume hgst_12tb/hgst_12tb_tmeta is active.
Activation of logical volume hgst_12tb/vm-100-disk-0 is prohibited while logical volume hgst_12tb/hgst_12tb_tmeta is active.

Do you know why or how to solve this?

I have installed the newest updates today, but the error remains.

@Dominic - Any updates on this issue?
 

Apologies for not responding sooner. For me, I often need to wait a good 5, maybe even 10 minutes after Proxmox has fully booted before I can run the lvchange -an command; until then I get the same error. Eventually it lets me run it, which then allows me to activate the volume again, and I'm rolling again.

I do not have a permanent fix for this, though. I'm tempted to copy all the data off, reformat the disk, put the data back on, and see if that helps.
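
One more thing that may be worth trying when lvchange -an keeps failing with "Device or resource busy" (a sketch, not verified here; dmsetup ls and dmsetup remove are standard device-mapper commands, and the exact entry names should be taken from the ls output):

Code:
# list the device-mapper entries belonging to the pool;
# hidden LVs appear as <vg>-<lv>_tmeta / <vg>-<lv>_tdata
dmsetup ls | grep hgst_12tb
# remove the stale entries by hand (names from the output above)
dmsetup remove hgst_12tb-hgst_12tb_tmeta
dmsetup remove hgst_12tb-hgst_12tb_tdata
# activation should no longer be blocked
lvchange -ay hgst_12tb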
 
