Code:
The filesystem on vm-406-disk-1.raw is now 223982 (4k) blocks long.

What am I doing wrong here?
@wbumiller, is your procedure still supposed to work in 2019?
I manually ran pct mount vm_id; the filesystem was OK, everything was there.
What could be wrong?
Do you not mean to resize the filesystem first and then the partition?

You want to shrink a disk rather than a partition. You need to map the raw file, e.g. via kpartx, then resize your partition, and then the filesystem on the partition (in a setup without LVM, of course).
Since this is a container, it should have an ext4 filesystem directly on the raw image. The usual way to shrink this is to first shut down the container, then use resize2fs to shrink the filesystem, then use truncate to shrink the file. Note that if you truncate the file too much (to less than what resize2fs shrank it to, which can even happen with the same size argument being interpreted once as ×1000 and once as ×1024 per order of magnitude...), the filesystem will be broken.
Also, this only works when there's enough free space on the filesystem.
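That ×1000 vs ×1024 trap is easy to see with GNU truncate itself, where the suffix decides which multiplier you get (this is documented coreutils behavior, shown here on throwaway files):

```shell
# GNU truncate: 'M' means 1024*1024 bytes, 'MB' means 1000*1000 bytes.
truncate -s 1M  a.img
truncate -s 1MB b.img
stat -c '%n %s' a.img b.img   # a.img 1048576, b.img 1000000
rm a.img b.img
```

So "1M" and "1MB" differ by almost 5%; mixing them up between resize2fs and truncate is exactly how you cut off the end of the filesystem.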
So here's my recommended procedure: (No promises, always expect the worst.)
1) Stop the container.
2) Make a backup.
3) Make sure you didn't forget step 2.
4) Use resize2fs -M $rawfile. This will first wait a couple of seconds due to the multiple-mount protection (MMP). It might tell you to run e2fsck first; do it. It might even tell you to use tune2fs to clear the MMP data; do this only if you're sure the container is stopped and the file is not in use.
Finally, this will resize the filesystem to the minimal possible size it can have with the data it contains, and will print a line telling you how big that is, which looks like this:

Code:
The filesystem on vm-406-disk-1.raw is now 223982 (4k) blocks long.

This tells me it's almost 900MiB in size (223982 blocks × 4096 bytes ≈ 875MiB). If that is bigger than what you want to shrink it to, you can't shrink it before freeing up space from inside the container. And you should always leave a margin of error so you don't accidentally cut off part of the filesystem's metadata, so in this case we'll use 1GiB:
5) truncate -s 1G $rawfile
6) Since there's now likely some extra space in the file, let resize2fs extend the filesystem to the new maximum possible size by running it without extra parameters: resize2fs $rawfile. This will again take a couple of seconds due to MMP, but no e2fsck or tune2fs should be required this time.
7) Update the lxc/$vmid.conf file.
8) Expect breakage.
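The steps above can be rehearsed on a throwaway scratch image before touching a real container disk. The sketch below does exactly that (the filename and sizes are made up, and it assumes the e2fsprogs tools are installed); it illustrates steps 4 to 6, it is not a guarantee:

```shell
#!/bin/sh
set -e
# Rehearse the shrink procedure on a scratch image, not a real container disk.
rawfile=scratch-disk.raw

truncate -s 64M "$rawfile"     # stand-in for the container's raw image
mkfs.ext4 -q -F "$rawfile"     # fresh ext4, as a container rootfs would have

e2fsck -f -p "$rawfile"        # step 4: resize2fs may insist on a clean fsck
resize2fs -M "$rawfile"        # step 4: shrink the fs to its minimal size
truncate -s 32M "$rawfile"     # step 5: cut the file, with a margin above minimal
resize2fs "$rawfile"           # step 6: grow the fs back to fill the new file size

e2fsck -f -n "$rawfile"        # sanity check: filesystem is still consistent
rm "$rawfile"
```

On the real disk you would point rawfile at the container's image, stop the container first, and take the backup from steps 1 to 3 before anything else.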
How does this process work? How much space do I need to make for it to complete? Is there any workaround?
I do not know the details, as this depends heavily on the implementation of resize2fs. It has to copy all the data beyond the targeted size to the front, and in doing so will of course start using more real disk space. In theory it could also deallocate the data it already moved forward, keeping the overall size required for this process from growing too much, but I do not see any options for this in the resize2fs manpage, and I'm not sure that was a considered use case, and it is possible they consider all the "extra space" to be usable scratch space. I'm afraid you've put yourself in a rather uncommon situation there, and the only recommendation I can give you is to get an extra drive, at least temporarily, to move the data to.
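One thing worth checking before attempting this: a raw container image is usually a sparse file, so its apparent size and the space it actually occupies on the host can differ a lot. A quick way to compare them (the filename here is just an example):

```shell
# Apparent size vs. space actually allocated on the host for a sparse file.
rawfile=vm-406-disk-1.raw

stat -c 'apparent: %s bytes, allocated: %b blocks of %B bytes' "$rawfile"
du -h "$rawfile"                  # allocated space, human readable
df -h "$(dirname "$rawfile")"     # free space left on the host storage
```

If the allocated figure plus the data resize2fs still has to copy forward exceeds the free space on the host storage, the move-to-an-extra-drive suggestion above is the safer path.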
It worked, @wbumiller! Thanks a lot for your response.
I wonder if telling it to shrink to a certain size (close to the size of the actual physical drive rather than to all available space) would make any difference. I'll perhaps check the resize2fs options tomorrow.
Thanks