Guidance on shrinking boot-pool from 500GB

cmrho

New Member
Nov 4, 2024
Since I was used to TrueNAS and other installations having a pretty robust host backup mechanism that doesn't require routine full HD images, I installed PVE on a 500GB mirrored boot-pool. I've since learned that restoring a PVE host configuration is somewhat cumbersome and rough, so I have a cron job tar-balling the entire /etc + /var/lib folders for backup purposes.
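For what it's worth, the cron entry is roughly along these lines (destination path, schedule and filename are just my own placeholders):

# /etc/cron.d/pve-config-backup -- nightly tarball of the host config (paths are placeholders)
0 3 * * * root tar -czf /mnt/backup/pve-host-$(date +\%F).tar.gz /etc /var/lib 2>/dev/null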

I've had a gander at shrinking the boot partition(s) in GParted, but it won't let me shrink the massive (and largely empty) non-boot partition on the boot drive.

Would anyone be able to point me in the direction of the 'best' way to move this PVE installation to a dual 16GB SSD pool? If I perform a reinstallation of PVE, I am not at all confident that restoring the folders I've backed up will get me up and running.

BTW, the reason for wanting to shrink the boot pool is that full images of a 500GB drive take a loooooooong time. If it just took a few minutes, I wouldn't bother. Thanks all!
 
16GB is a bit small for an ext4 Proxmox root, not to mention ZFS (and that's with tune2fs -m 2 applied). It might be sufficient for a temporary 1-2 week install with no updates, until you can get a bigger drive.
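For context, trimming the reserved-block percentage is a one-liner; the device path below is just an example (a default Proxmox ext4 install puts root on /dev/mapper/pve-root, check yours with findmnt -n -o SOURCE /):

# drop ext4 reserved blocks from the default 5% to 2% to claw back a little space
tune2fs -m 2 /dev/mapper/pve-root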

I usually advise a minimum of 30-40GB for an ext4 root, to allow for housekeeping, ISOs and updates. At 50GB you probably never have to worry about running out of free space unless logs fill up or somebody starts overusing /home.

64GB is the minimum I would recommend for ZFS boot/root; don't forget you need room for metadata and snapshots. And if you don't mitigate writes** or leave unpartitioned room for wear leveling, your SSD media is going to wear out pretty quickly.

** Turn off cluster services, turn off atime everywhere (including in-guest), install log2ram and zram, and set:

echo 20 >/sys/module/zfs/parameters/zfs_txg_timeout

...in /etc/rc.local (so it survives a reboot) to batch ZFS writes every 20 seconds instead of the default 5. And make sure everything is on a UPS.
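Put together, a minimal sketch of those settings might look like this (assuming the default pool name rpool and a standalone node with no HA in use):

# one-time, persistent settings -- these survive reboots on their own:
zfs set atime=off rpool                          # disable atime pool-wide (set it in guests too)
systemctl disable --now pve-ha-lrm pve-ha-crm    # HA services; only on a standalone, non-clustered node

# /etc/rc.local (make it executable with chmod +x so it runs at boot):
#!/bin/bash
# widen the ZFS transaction-group window from the default 5 seconds to 20
echo 20 > /sys/module/zfs/parameters/zfs_txg_timeout
exit 0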

Even a 256GB NVMe drive is quite small for a Proxmox ZFS boot/root, and really only suitable for a homelab. 512GB is "reasonable" (with a high TBW rating, probably over ~600 TBW) if you want it to last more or less the life of the server. 1TB (again, enterprise-level or with a high TBW rating) might give you 8-10 years. And BTW, that assumes two different makes/models of SSD making up the mirror so they don't both wear out around the same time.

https://github.com/kneutron/ansites...replace-zfs-mirror-boot-disks-with-smaller.sh
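(That's not a substitute for the linked script, just a rough outline of the usual manual approach so you know what it's doing: a ZFS pool can't be shrunk onto smaller disks in place, so you create a fresh pool on the new drives, replicate to it, then swap names and redo the bootloader. Device paths below are placeholders, and several steps such as partitioning the new disks and setting bootfs are omitted.)

# create a new mirrored pool on the (already partitioned) smaller disks
zpool create -o ashift=12 newrpool mirror /dev/disk/by-id/NEW_SSD_A-part3 /dev/disk/by-id/NEW_SSD_B-part3
# snapshot everything on the old pool and replicate it wholesale
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs recv -F newrpool
# re-create the EFI/boot partitions on the new disks and register them
proxmox-boot-tool format /dev/disk/by-id/NEW_SSD_A-part2
proxmox-boot-tool init /dev/disk/by-id/NEW_SSD_A-part2
# ...then export/import the new pool under the old name (rpool) before booting from it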

PRACTICE IN A VM FIRST, to familiarize yourself with the process. Use lvm-thin or e.g. XFS for the vdisk backing storage so you don't get horrific write amplification. Mistakes are a LOT more forgiving when you have a VM snapshot.
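A throwaway test VM for that practice run takes only a minute or two to spin up; something like the following (VMID, storage names, ISO filename and disk sizes are all placeholders, and two small virtual disks let you mirror them just like the real boot pool):

# 9000 = spare VMID, local-lvm = an lvm-thin storage, local = where the PVE ISO lives
qm create 9000 --name pve-shrink-test --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 --scsi1 local-lvm:32 \
  --cdrom local:iso/proxmox-ve_8.2-1.iso --boot order=ide2
qm start 9000
# once the nested PVE install is done, snapshot it so mistakes cost nothing:
qm snapshot 9000 clean-install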

And HAVE BACKUPS.

Of course, this advice assumes you are planning to run Proxmox 24/7 as a hypervisor / server and would prefer a minimum of maintenance. You can get away with cutting a lot more corners if you only run it on weekends / infrequently.
 
Thanks very much for your detailed response. So if I'm understanding you correctly, you would recommend I stick with these dual Samsung 500GB EVO SSDs for my boot-pool?