Hi,
I have a node running the latest, up-to-date Proxmox version.
I configured LVM thin provisioning for users on a RAID0 array made of two SSDs, about 1 TB in total.
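For context, the storage was set up roughly like this (from memory; /dev/md0 stands for the RAID0 device, exact sizes may differ):
Code:
# physical volume on the RAID0 device built from the two SSDs
root@hypervisor04:~# pvcreate /dev/md0
root@hypervisor04:~# vgcreate VGRAID0 /dev/md0
# one thin pool taking nearly the whole VG
root@hypervisor04:~# lvcreate --type thin-pool -L 930G -n lv_thin_build VGRAID0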
Maybe a misconfiguration or bad usage made it crash. A user provisioned a template with a 500 GB disk.
Everything was running fine for our tests with thin provisioning.
Then a change was introduced in the kickstart file contained in the template: a filesystem was given the "--grow" option (illustrated below).
A user tried to start two VMs based on this template and everything crashed.
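The kickstart change was along these lines (illustrative; not the exact lines from our template):
Code:
# before: fixed-size root filesystem
part / --fstype=ext4 --size=20480
# after: root filesystem grows to fill the whole 500 GB virtual disk
part / --fstype=ext4 --size=1024 --grow
My guess is that each clone's filesystem then spanned the whole 500 GB virtual disk, so the two VMs together overcommitted the pool.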
For the moment, I have two questions about this:
- I can't delete any volume contained in my volume group, because my thin pool needs a repair:
Code:
root@hypervisor04:~# lvremove /dev/VGRAID0/vm-103-disk-0
Do you really want to remove and DISCARD logical volume VGRAID0/vm-103-disk-0? [y/n]: y
Check of pool VGRAID0/lv_thin_build failed (status:1). Manual repair required!
Failed to update pool VGRAID0/lv_thin_build.
When I try to repair it, there is not enough free space in the VG:
Code:
root@hypervisor04:~# lvconvert --repair -v VGRAID0/lv_thin_build
Preparing pool metadata spare volume for Volume group VGRAID0.
Volume group "VGRAID0" has insufficient free space (0 extents): 30 required.
And when I try to extend the thin pool, the VG has no free space left to give it:
Code:
root@hypervisor04:~# lvextend -l +100%FREE VGRAID0/lv_thin_build
New size (238345 extents) matches existing size (238345 extents).
So: how can I deal with this stuck situation? Is everything on these devices lost?
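One idea I had, but have not dared to run yet: temporarily extend the VG with a small spare device so the repair gets the 30 free extents it needs, then remove it afterwards. Something like this (untested sketch; /dev/sdX stands for any spare disk or partition):
Code:
# add a small temporary PV so the VG has free extents for the metadata repair
root@hypervisor04:~# pvcreate /dev/sdX
root@hypervisor04:~# vgextend VGRAID0 /dev/sdX
# retry the repair, then reactivate the pool
root@hypervisor04:~# lvconvert --repair VGRAID0/lv_thin_build
root@hypervisor04:~# lvchange -ay VGRAID0/lv_thin_build
# once volumes are cleaned up, move any extents off the temporary PV and drop it
root@hypervisor04:~# pvmove /dev/sdX
root@hypervisor04:~# vgreduce VGRAID0 /dev/sdX
root@hypervisor04:~# pvremove /dev/sdX
Would that be a sane way out, or is there a better procedure?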
- Is it possible to create two or more thin pools to avoid this kind of error?
I mean, if I create three thin pools of about 200 GB each on the same VG (see the sketch below), and one VM grows too fat for its thin pool, the other pools will not be impacted? Am I wrong?
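To make that concrete, I mean something like this (names and sizes are just examples):
Code:
# three independent thin pools inside the same VG
root@hypervisor04:~# lvcreate --type thin-pool -L 200G -n pool_a VGRAID0
root@hypervisor04:~# lvcreate --type thin-pool -L 200G -n pool_b VGRAID0
root@hypervisor04:~# lvcreate --type thin-pool -L 200G -n pool_c VGRAID0
# hoped-for behaviour: if pool_a fills up, volumes in pool_b and pool_c keep working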
Thanks for your advice and replies.
Regards,