VM Migration Issue

CameronMG
Nov 15, 2022
Hi Everyone,

Not sure if this is the correct place to post this, so apologies in advance if it isn't.

A couple of days ago my main Proxmox host died on me, which then required a full reinstall. My VMs were backed up via PBS, so I wasn't worried about the recovery.

I've since rebuilt PBS (the original one was lost when the host died) and managed to import the datastore, which is stored on an NFS share.

I restored a couple of the backups to my secondary PVE host and then migrated them across to the primary (I needed to do it this way as I needed my Plex and Docker boxes back ASAP).

Once I'd sorted the PVE hosts out, I migrated the VMs back to the main host. However, since doing this their disks seem to have been allocated at the full provisioned size, taking up a lot of space on "local-lvm" (which is where they live, and which is set to LVM-Thin on both machines). I did confirm that when I initially restored the VMs from PBS they were using nowhere near as much space, and I know the disks haven't had much written to them since: my Plex server pulls its data from CIFS shares, and my Docker host is cleaned fairly regularly and only reports around 20% utilisation.
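
For reference, the per-disk allocation on the thin pool can be checked from the host shell with something like the following (assuming the default `pve` volume group):

```
# Show how much of each thin LV, and of the thin pool itself, is actually allocated
lvs -o lv_name,lv_size,data_percent,pool_lv pve
```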

Any assistance with getting the disks back to a thin-provisioned state would be appreciated.

Regards,
Cameron
 
Did you live migrate those VMs?
In that case the whole disk has to be allocated, since QEMU doesn't know beforehand which blocks are zero and which contain data.
You should be able to enable `discard` on the disk and run a trim from inside the guest.

Restoring directly to the target host should make it thin-provisioned from the get-go, since no live migration is involved.
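
Something along these lines should do it; `scsi0`, VMID 101 and the volume name are only examples, so match them to your own config:

```
# On the PVE host: turn on discard for the disk
# (repeat any other options the disk already has, e.g. cache or iothread,
#  since qm set replaces the whole option string for that disk)
qm set 101 --scsi0 local-lvm:vm-101-disk-0,discard=on

# Inside the guest (Linux): hand the unused blocks back to the thin pool
fstrim -av
```

After the trim, the Data% of the VM's thin volume in `lvs` should drop back down to roughly what is actually in use.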
 
Hi Mira,

No, I didn't. I did stop at the time and have a think about it, but I didn't want to risk data corruption, so I did a non-live migration from PBS to PVE.

I've got another few VMs which I still need to pull back, so I'm going to try and restore these and see what happens.
 
Hi Mira,

Further to my last comment, I've also noticed that the VMs (on their status page) are showing the "Boot Disk" as '0B'. Not sure if this is indicative of the issue or not?

Regards,
Cameron
 
Hey, is there any solution to this?
I'm having the same issue.
Can you provide the complete VM config (`qm config <VMID>`) as well as the storage config (`cat /etc/pve/storage.cfg`)?
Feel free to mask any IPs, domains or comments you don't want public.
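
In case it helps, both can be run from the PVE host shell, e.g. (with 101 standing in for the actual VMID):

```
qm config 101
cat /etc/pve/storage.cfg
```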
 
