I wanted to show the steps that I took to accomplish this task.
I am a real beginner to Proxmox, and I struggled with the idea of changing the disk partitions using a live disk and GParted, etc.
So, I looked for an easier method for me to do this, and I hope this helps someone.
I had installed Proxmox about 10 months ago using all the defaults, because I had never used it before.
I got 5 VMs running and all was dandy.
As time went by, I began relying on my VMs and realized I should back them up.
That's where the trouble was.
My SSD is 500 GB. The default installation allocated 65 GB to local and the rest to local-lvm. My VMs only used 130 GB of local-lvm, but a single Windows backup consumed 25 GB of the local volume.
I watched NetworkChuck's YouTube video, where he explains the process of removing local-lvm and increasing local.
HOWEVER, that was on a clean install, not a situation where VMs had already been created. What would happen on my server?
Here is what I did to accomplish the same result. (I do think there are errors in the order I did things, see step 2, but these steps work with no issues.)
1. First, make backups of all VMs and then copy those backups to a USB drive or another machine.
In my case, I didn't have enough room on the local volume to back up every machine, so I made as many as I could, then moved them off to another server I have running using the scp command (works great). Backups are located in /var/lib/vz/dump.
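For example, moving the dump files off to another box with scp might look like this (the remote host and destination path are placeholders; adjust for your setup):

```shell
# Copy all backup archives to another machine (host/path are examples)
scp /var/lib/vz/dump/vzdump-* root@192.168.1.50:/mnt/backups/

# Once space is freed locally, back up the remaining VMs, e.g. VM 101,
# into the local dump directory
vzdump 101 --dumpdir /var/lib/vz/dump --mode snapshot
```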
2. After backups are safely made, I suggest using the GUI to remove all the VMs and any CT containers. (I did not do this until later, and I think it would be easier to do it at this point.)
3. Using the Proxmox GUI, go to the node, click Storage, and select local-lvm. I used the 'Remove' button to remove the volume.
4. Then I clicked on the node and opened the Shell. Once in the shell, I entered this command (as shown in the video):
#lvresize -l +100%FREE /dev/pve/root
5. After that, I entered the next command from the video:
#resize2fs /dev/mapper/pve-root
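A couple of read-only commands are handy for sanity-checking this step (the lvremove line is only a hedged note from the video, not something I ran separately):

```shell
# Before resizing: list logical volumes. After local-lvm is gone, the
# 'data' thin pool should no longer appear. If 'data' is still listed,
# its space is not yet free; the video destroys it first with
#   lvremove /dev/pve/data
# (WARNING: that deletes every VM disk stored on local-lvm, so only
# ever do it after your backups are safely copied off the machine)
lvs

# After lvresize and resize2fs: confirm the root filesystem grew
df -h /
```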
6. Reboot Proxmox.
**Now, when Proxmox came back up, under storage I had only the single volume 'local', and it was over 450 GB!
The problem was that I still had 5 VMs configured for local-lvm, and Proxmox would not let me delete them from the GUI. That is why I suggest removing the VMs in step 2 above.
7. Since I did not do step 2 above, I had to remove the VMs using the console. So open the console and remove the VM config files with this command:
#rm /etc/pve/nodes/pve/qemu-server/*.conf
Remove CT container configs with this command:
#rm /etc/pve/nodes/pve/lxc/*.conf
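One caution: the 'pve' in those paths is the node name, which may differ on your install. It's worth listing things first so you delete only what you expect:

```shell
# Show your actual node name(s); substitute yours into the rm paths
ls /etc/pve/nodes/

# Preview which VM config files would be removed (node name assumed 'pve')
ls /etc/pve/nodes/pve/qemu-server/
```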
8. Copy all of the backup files back to /var/lib/vz/dump.
9. Back in the GUI, go to Datacenter > Storage, select local, and click the Edit button. Under Content, I added 'Disk image' and 'Container'.
10. Now I was ready to restore my VMs from the backups. In the node, select the local volume, then click on Backups. Select one of your backups and click Restore. Select the storage location (local), enter the VM ID, and click the Restore button.
11. Do this for each backup.
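The same restore can also be done from the shell; a sketch (the VM IDs and archive names below are examples, run ls /var/lib/vz/dump to see your own):

```shell
# Restore a QEMU VM backup into the 'local' storage as VM 100
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 100 --storage local

# For an LXC container backup, use pct restore instead
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2024_01_01-00_00_00.tar.zst --storage local
```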
All DONE!
Hope this helps.