Reduce Proxmox boot partition

carsten2

Member
Mar 25, 2017
I have two 512 GB mirrored SSDs as the Proxmox boot disk with ZFS, and I want to shrink the root file system partition so I can add a ZIL/SLOG for a separate ZFS data pool. This gives a great improvement, as the system SSDs are idle most of the time anyway, so there is no conflict between the Proxmox system and a SLOG/ZIL for another ZFS pool on the same drive. I have already done this by installing to a smaller disk and then adding a mirror with a larger disk, leaving free space at the end.

How is it possible to shrink the rpool root partition online (it's a remote server)? E.g. remove the second mirror disk, repartition it, send the ZFS file systems to the second disk, and somehow make Proxmox boot from the second disk?

Any step by step instructions?
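Roughly, the sequence I have in mind looks like this (untested sketch; all device names, partition sizes, and the pool name rpool2 are placeholders, and the bootloader step is only hinted at):

```shell
# UNTESTED sketch -- /dev/sdb and partition sizes are placeholders.
# 1) Detach the second disk from the rpool mirror
zpool detach rpool /dev/sdb2
# 2) Repartition the freed disk with a smaller root partition,
#    leaving space at the end for the SLOG of the data pool
sgdisk --zap-all /dev/sdb
sgdisk -n1:1M:+512M -t1:EF00 /dev/sdb   # EFI system partition
sgdisk -n2:0:+100G  -t2:BF01 /dev/sdb   # smaller ZFS root
sgdisk -n3:0:+16G   -t3:BF01 /dev/sdb   # future SLOG partition
# 3) Create a new pool on the smaller partition and copy all datasets
zpool create -o ashift=12 rpool2 /dev/sdb2
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs receive -F rpool2
# 4) Install the bootloader on the new disk (grub-install etc.,
#    omitted here), boot from it, then repartition the first disk
#    and re-attach it as a mirror of the new pool.
```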
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi,

ZIL/SLOG should have their own dedicated disk. Putting them on a disk that is already in use can reduce performance drastically.
Anyway, you can't shrink a ZFS disk without a new installation.
 

carsten2

Member
Mar 25, 2017
Hi,

ZIL/SLOG should have their own dedicated disk. Putting them on a disk that is already in use can reduce performance drastically.
Anyway, you can't shrink a ZFS disk without a new installation.
This is not an answer to my question:

1) The performance reduction is only theoretical and in practice insignificant or even absent, because the Proxmox system disk is 99% idle. On the contrary: this change in system configuration is the cheapest and easiest major performance improvement that can be made without adding any hardware. I have already done this in installations (by using a smaller disk in the initial install), and the result is that the systems are MUCH faster afterwards.

2) It is definitely possible, and I already pointed out which way to go: break the ZFS mirror, create a new smaller pool, send the original ZFS file systems to it, and then somehow switch the boot order. The question was whether someone has already done this and has remarks about caveats or omissions, or, in the best case, step-by-step instructions.
 

LnxBil

Well-Known Member
Feb 21, 2015
I am looking for a solution where this can be done live.
There is a lot of manual tweaking involved. I'd suggest trying it out on a local copy of your online server and doing it multiple times while recording all the commands, etc.
In general, everything is possible; the question is how much work you have to put in. You have to create a new pool with a new name and change all references to rpool inside your root FS to the new pool. Best would be to boot a ZFS-enabled live Linux (or install ZFS afterwards on your live Linux) on the server and do the migration offline. You can skip the whole renaming step, because you can simply import the new pool under the old name after removing the old rpool.

The cheapest option to build a fast system is to have spinners plus only one (maximum two) small enterprise-grade ZIL/SLOG devices (32 GB is totally enough) and a single pool for everything with as many vdevs as possible. It does not make sense to have two ZFS pools sharing the available ARC if it is not necessary.
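Once a device or partition is free, attaching the SLOG to the data pool is a one-liner (pool and partition names below are examples):

```shell
# Add a mirrored SLOG to the data pool; names are placeholders.
zpool add datapool log mirror /dev/sda3 /dev/sdb3
zpool status datapool   # the devices should appear under "logs"
```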
 

alexskysilk

Active Member
Oct 16, 2015
I have two 512 GB mirrored SSDs as the Proxmox boot disk with ZFS, and I want to shrink the root file system partition so I can add a ZIL/SLOG for a separate ZFS data pool.
There is no easy way to do this after the fact. For a fresh install you can do it by installing Debian first with manually sized partitions, leaving free space for the others (log/data/etc.).

This gives a great improvement, as the system SSDs are idle most of the time anyway
I'm really curious: have you done this and realized ANY improvement? I'd really be interested in a system config and before/after use-case benchmarks.
 

carsten2

Member
Mar 25, 2017
I'm really curious: have you done this and realized ANY improvement? I'd really be interested in a system config and before/after use-case benchmarks.
Yes, the improvement with a log device is considerable.

With log: pveperf FSYNCS/SECOND: 1404.95
Without log: pveperf FSYNCS/SECOND: 96.31
That's a factor of ~14.

Copying large amounts of files to a Windows VM: transfer speed +40%.
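For reference, those numbers come from Proxmox's built-in benchmark, run against a path on the pool in question (the path below is an example):

```shell
pveperf /rpool/data   # reports CPU, buffered reads and FSYNCS/SECOND
```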
 
