lvm-thin on SSD stalls/fails clone or create

sholinaty

New Member
Apr 11, 2022
Steps:
1. Installed a new 1 TB SSD into the server
2. Configured a new thin pool in the web UI
3. Hit Create VM... set the target to my new thin pool
- I see the "Logical volume "vm-106-disk-0" created" message
- the create never actually finishes (left it running for 24 hrs once, just in case)

Or 3. Right-click a template... Clone... Full Clone, target = my new thin pool
- disk is created, transfer starts
- transfer stalls out every time at exactly 7.4 GB or 20 GB
Also tested cloning a different, non-template VM; it dies at exactly the same point.

Any suggestions on where to start?
 
from syslog:
Apr 14 15:18:33 proxmox1 kernel: [1055476.174774] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5

dmesg is the same:
[1055535.600523] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055535.601022] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055535.601529] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055535.832456] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055535.834789] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055535.835576] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055535.836356] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.225200] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.226326] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.226804] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.227311] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.473502] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.474825] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.475488] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
[1055546.476199] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5
 
A Google search for that message, albeit with few results, points to LVM thin-pool metadata corruption (-5 is the kernel's -EIO, i.e. an I/O error while accessing the pool's metadata). I would also treat your new disk as suspect: try reading and writing the entire disk with "dd" and check its SMART stats.
If the hardware checks out OK, I would erase and rebuild the LVM on it.
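The disk test and LVM rebuild suggested above might look roughly like this; the device path and VG/pool names are placeholders for your actual setup, and the write pass destroys everything on the disk:

```shell
# Device and VG names below are examples -- substitute your own.
DISK=/dev/sdb

# SMART health and error counters (smartmontools package)
smartctl -a "$DISK"

# Full read pass; a kernel I/O error here implicates the hardware
dd if="$DISK" of=/dev/null bs=1M status=progress

# Full write pass -- DESTROYS all data on the disk!
dd if=/dev/zero of="$DISK" bs=1M status=progress

# If the disk is fine, tear down the old LVM and start clean
lvremove vg_ssd/thinpool     # placeholder VG/pool names
vgremove vg_ssd
pvremove "$DISK"
pvcreate "$DISK"
```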



Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
That would likely make sense - as shown by Screen Shot 2022-04-14 at 4.44.48 PM.png

The disk labelled "thinpool" is my SSD; "data" was an old 1 TB spinning disk.

I'm wondering if I just need to manually specify the metadata size when provisioning the thin pool.

I have tried wiping and reinitializing the disk a few times now.
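For reference, specifying the metadata size by hand when creating the pool from the CLI would look roughly like this (the VG name and sizes here are assumptions, not from the thread; the web UI normally computes the metadata size for you):

```shell
# Build the pool manually with an explicit 1 GiB metadata LV
# instead of the computed default. "ssdvg" is a placeholder VG name.
vgcreate ssdvg /dev/sdb
lvcreate --type thin-pool -L 900G --poolmetadatasize 1G \
         -n thinpool ssdvg

# Confirm the metadata LV size and how full it is
lvs -a -o lv_name,lv_size,metadata_percent ssdvg
```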
 
Still stuck on this.
I was able to extend the metadata, and everything "seems" right, but I am still getting the syslog errors:

proxmox1 kernel: [1396195.867975] device-mapper: thin: dm_thin_get_highest_mapped_block returned -5

Wiped the disk; same errors.
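Since the errors persist even after extending the metadata, it may be worth trying LVM's offline metadata repair, which drives thin_check/thin_repair from thin-provisioning-tools under the hood. A sketch, with placeholder VG/pool names:

```shell
# The pool must be inactive for the repair
lvchange -an ssdvg/thinpool

# Rebuild the thin metadata; the old (possibly broken) metadata is
# kept as ssdvg/thinpool_meta0 so you can inspect or remove it later
lvconvert --repair ssdvg/thinpool

lvchange -ay ssdvg/thinpool
lvs -a ssdvg        # check the pool's health attributes
```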
 
I always have problems when adding disks. The easy fix for me is to boot into a partition-manager live image and create the LVM there, then reboot and hopefully everything gets detected in Proxmox.
 
Problem is, having to boot into a partition manager causes downtime; the whole point of adding extra drives is to help keep critical services up.
 