Shrink disk size

Hi,
It depends on what storage it is on.
 
Ok, I've run the command below.

root@ns385061:~# qemu-img resize -f raw /var/lib/vz/images/103/vm-103-disk-1.raw 500G
Image resized.

Then I edited the "/etc/pve/nodes/ns385061/lxc/100.conf" file and changed the size of the disk to 500G.

The server doesn't boot. BTW, this VPS is just for testing.
 
You've just broken your VM disk. You need to shrink the disk inside the VM first and then shrink the raw device.

What you did was blindly copy a 100GB disk to a 30GB disk and then wonder why it is broken.
 
Since this is a container it should have an ext4 filesystem on the raw image. The usual way to shrink this is to first shutdown the container, then use resize2fs to shrink the filesystem, then use truncate to shrink the file. Note that if you truncate the file too much (more than what resize2fs did, which may even happen with the same parameters being interpreted once as *1000 and once as *1024 per order of magnitude...) the filesystem will be broken.
Also it only works when there's enough free space on the file system.
So here's my recommended procedure - a consolidated command sketch follows the list. (No promises, always expect the worst.)
1) Stop the container.
2) Make a backup.
3) Make sure you didn't forget step 2
4) Use resize2fs -M $rawfile. This will first wait a couple of seconds due to the multiple-mount protection. It might tell you to run e2fsck first - do it. This might even tell you to use tune2fs to clear the MMP data. Do this only if you're sure the container is stopped and the file not in use.
Finally this will resize the filesystem to the minimal possible size it can have with the data contained, and will print a line telling you how much that is, which looks like this:
Code:
The filesystem on vm-406-disk-1.raw is now 223982 (4k) blocks long.
This tells me it's almost 900MiB in size. If the size is bigger than what you want to shrink it to - you can't shrink it before freeing up space from inside the container. And you should always leave a margin of error so you don't accidentally cut off part of the filesystem's metadata, so in this case we'll use 1GiB:
5) truncate -s 1G $rawfile
6) Since there's now likely some extra space on the file, let resize2fs extend the filesystem to the now maximum possible size by running it without extra parameters: resize2fs $rawfile. This will again take a couple of seconds due to the MMP, except no e2fsck or tune2fs will be required now.
7) Update the lxc/$vmid.conf file.
8) Expect breakage.
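For reference, here is the above procedure as a rough command sketch. Nothing below is new - it just strings the steps together; the VMID 406, the file path and the 1G target are only examples taken from this thread, so adjust them to your own container (and keep the backup from step 2):
Code:
pct shutdown 406                                     # 1) stop the container
rawfile=/var/lib/vz/images/406/vm-406-disk-1.raw     # example path

e2fsck -f $rawfile        # only needed if resize2fs asks for it
resize2fs -M $rawfile     # 4) shrink the filesystem to its minimal size
# example output: "The filesystem on ... is now 223982 (4k) blocks long."
# 223982 * 4 KiB is roughly 875 MiB, so 1G leaves a margin of error
truncate -s 1G $rawfile   # 5) never smaller than what resize2fs reported
resize2fs $rawfile        # 6) grow the filesystem back to fill the file
# 7) then update the size in the lxc/406.conf file before starting the container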
 
Thank you all for your replies.

@sigxcpu There's no option to shrink the disk for LXC.
@wbumiller Ok, I will try to do this.
 
I followed the steps from @wbumiller and they worked for me. I had a raw image of 32GB and ended up with 10GB. The disk is only using 2.8GB, but going down from 32 to 10 is pretty good :)

Thanks.

Before:

# ls -lh
total 27G
-rw-r----- 1 root root 32G Aug 7 10:22 vm-144-disk-1.raw

After:

# ls -lh
total 8.9G
-rw-r----- 1 root root 10G Aug 7 16:46 vm-144-disk-1.raw

# du -sh vm-144-disk-1.raw
8.9G vm-144-disk-1.raw

/dev/loop3 9.8G 2.8G 6.5G 31% /
 
resize2fs -M vm-2659-disk-0.raw
resize2fs 1.43.4 (31-Jan-2017)
resize2fs: Bad magic number in super-block while trying to open vm-2659-disk-0.raw
Couldn't find valid filesystem superblock.

What am I doing wrong here?
 
@wbumiller, is your procedure still supposed to work in 2019 ?

I just followed your steps to shrink a 256GB raw file to 32GB (with less than 10GB of real data), it seemed to work fine, the container starts OK, but no login prompt on console (so I guess the OS doesn't boot).

I manually ran #pct mount vm_id; the filesystem was OK, everything was there.

What could be wrong ?

I'm on Proxmox 5.4
 
@wbumiller, is your procedure still supposed to work in 2019 ?

Sure.

I manually ran #pct mount vm_id; the filesystem was OK, everything was there.

chroot into the container and see if all packages are still correctly installed (how to check this is heavily distribution dependent; please refer to the OS manual of your guest).

What could be wrong ?

Hard to say. If you don't get a login, it generally does not mean that the guest is not starting up. Try to start the container and enter it. If that works, the system works. You can then inspect what is not working (please also refer to the troubleshooting guide of your guest OS).
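A quick sketch of that inspection workflow, in case it helps (the CTID 123 is just a placeholder):
Code:
pct start 123
pct enter 123       # get a shell inside the running container
# check services, logs and getty according to your guest distribution
exit

# or, with the container stopped, mount and chroot into its root filesystem:
pct mount 123       # prints the mount point, typically /var/lib/lxc/123/rootfs
chroot /var/lib/lxc/123/rootfs /bin/sh
exit
pct unmount 123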
 
You are trying to shrink a whole disk image rather than a partition. You need to map the raw file, e.g. via kpartx, and then resize your partition and then the filesystem on the partition (assuming a setup without LVM, of course).
Do you not mean to resize the filesystem first and then the partition?
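For what it's worth, a rough sketch of the partitioned-image case, shrinking the filesystem before the partition as suggested above (the filename, partition mapping name and 30G target are only examples, and the container/VM must be stopped first):
Code:
kpartx -av vm-2659-disk-0.raw        # maps the partitions, e.g. /dev/mapper/loop0p1
e2fsck -f /dev/mapper/loop0p1
resize2fs /dev/mapper/loop0p1 30G    # shrink the filesystem first
# then shrink the partition itself (fdisk/parted) so it is still >= 30G,
# remove the mappings and only afterwards truncate the raw file:
kpartx -d vm-2659-disk-0.raw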
 

Hi @wbumiller,

Thanks for this, hope you can help with my problem :)

My system has a 1TB SSD with just one virtual disk in .raw format on it. A few weeks ago I added extra storage to that virtual disk (by mistake making it bigger than the actual physical drive).

I decided to shrink the virtual drive. The actual data stored in this virtual disk is around 820GB (88% of the physical drive). When I run resize2fs the process starts, but it also starts filling up the drive; within 15-20 minutes the physical drive is 100% full and resize2fs fails.

How does this process work? How much space do I need to free up for it to complete? Is there any workaround?

Thanks a lot

Lolek
 
How does this process work? How much space do I need to free up for it to complete? Is there any workaround?
I do not know the details, as this depends heavily on the implementation of resize2fs. It has to copy all the data beyond the targeted size towards the front, and in doing so will of course start using more real disk space. In theory it could also deallocate the data it has already moved forward, keeping the overall space required by the process from growing too much, but I do not see any option for this in the resize2fs manpage, and I'm not sure that was a considered use case - it is possible they treat all the "extra space" as usable scratch space. I'm afraid you've put yourself in a rather uncommon situation there, and the only recommendation I can give you is to get an extra drive, at least temporarily, to move the data to.
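If it helps to gauge a realistic target before trying again: resize2fs can print its estimated minimum size without changing anything (the filename below is just a placeholder):
Code:
e2fsck -f vm-100-disk-0.raw
resize2fs -P vm-100-disk-0.raw
# prints e.g. "Estimated minimum size of the filesystem: <blocks>"
# the value is in filesystem blocks (usually 4 KiB each), so multiply by the
# block size and pick a target comfortably above it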
 

Thanks a lot for your response.
I wonder if specifying a certain size to shrink to (almost the size of the actual physical drive, rather than shrinking as much as possible) would make any difference? I'll check the resize2fs options tomorrow.
Thanks
 
It worked @wbumiller!

It resized to 870GB fine. But... because I was operating in KB, I then truncated it to... 872000000 (but forgot to put K on the end). I ended up with an 872MB drive. I truncated it again to the correct size (872000000K). That shows the drive size fine, but unfortunately all the data is gone (I can see some folders while the container is running, but no files there). e2fsck did some fixing but still no luck... Any thoughts?

Thanks
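As a side note on where the unit mix-up comes from: truncate(1) treats a size without a suffix as plain bytes and K as 1024 bytes, so the two invocations differ by three orders of magnitude (disk.raw below is just a placeholder):
Code:
truncate -s 872000000  disk.raw   # 872,000,000 bytes ~ 872MB (~831MiB)
truncate -s 872000000K disk.raw   # 872,000,000 KiB   ~ 892GB (~831GiB)
truncate -s 872G       disk.raw   # 872GiB - an explicit suffix avoids the ambiguity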
 
